Add Batch e0e7c120-d3f1-4e0a-8d27-7bfd8e992684 data
This view is limited to 50 files because it contains too many changes.
- .gitattributes +64 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_content_list.json +1914 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_model.json +0 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_origin.pdf +3 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/full.md +391 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/images.zip +3 -0
- 2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/layout.json +0 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_content_list.json +1576 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_model.json +0 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_origin.pdf +3 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/full.md +337 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/images.zip +3 -0
- 2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/layout.json +0 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_content_list.json +1446 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_model.json +2147 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_origin.pdf +3 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/full.md +283 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/images.zip +3 -0
- 2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/layout.json +0 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_content_list.json +2022 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_model.json +0 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_origin.pdf +3 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/full.md +419 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/images.zip +3 -0
- 2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/layout.json +0 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_content_list.json +1949 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_model.json +0 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_origin.pdf +3 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/full.md +405 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/images.zip +3 -0
- 2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/layout.json +0 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_content_list.json +1627 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_model.json +0 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_origin.pdf +3 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/full.md +344 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/images.zip +3 -0
- 2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/layout.json +0 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_content_list.json +1520 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_model.json +0 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_origin.pdf +3 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/full.md +329 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/images.zip +3 -0
- 2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/layout.json +0 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_content_list.json +1447 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_model.json +0 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_origin.pdf +3 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/full.md +288 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/images.zip +3 -0
- 2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/layout.json +0 -0
- 2025/Semantic versus Identity_ A Divide-and-Conquer Approach towards Adjustable Medical Image De-Identification/2b39ad1e-d18f-4579-85b8-6d56df6a82db_content_list.json +0 -0
.gitattributes
CHANGED
@@ -2204,3 +2204,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2025/Self-Reinforcing[[:space:]]Prototype[[:space:]]Evolution[[:space:]]with[[:space:]]Dual-Knowledge[[:space:]]Cooperation[[:space:]]for[[:space:]]Semi-Supervised[[:space:]]Lifelong[[:space:]]Person[[:space:]]Re-Identification/0d8ec9cc-91fd-4f6f-adc9-3bba1cad24c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Self-Supervised[[:space:]]Monocular[[:space:]]4D[[:space:]]Scene[[:space:]]Reconstruction[[:space:]]for[[:space:]]Egocentric[[:space:]]Videos/0f34d99d-368d-4ba9-b31d-032659745be1_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Self-Supervised[[:space:]]Sparse[[:space:]]Sensor[[:space:]]Fusion[[:space:]]for[[:space:]]Long[[:space:]]Range[[:space:]]Perception/6d4fd112-3fac-483b-b978-3c134ff1cdd1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Self-supervised[[:space:]]Learning[[:space:]]of[[:space:]]Hybrid[[:space:]]Part-aware[[:space:]]3D[[:space:]]Representations[[:space:]]of[[:space:]]2D[[:space:]]Gaussians[[:space:]]and[[:space:]]Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SemGes_[[:space:]]Semantics-aware[[:space:]]Co-Speech[[:space:]]Gesture[[:space:]]Generation[[:space:]]using[[:space:]]Semantic[[:space:]]Coherence[[:space:]]and[[:space:]]Relevance[[:space:]]Learning/d193eee8-ef57-4a70-9609-6d91a9953312_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SemTalk_[[:space:]]Holistic[[:space:]]Co-speech[[:space:]]Motion[[:space:]]Generation[[:space:]]with[[:space:]]Frame-level[[:space:]]Semantic[[:space:]]Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]Alignment[[:space:]]and[[:space:]]Reinforcement[[:space:]]for[[:space:]]Data-Free[[:space:]]Quantization[[:space:]]of[[:space:]]Vision[[:space:]]Transformers/33638821-3dce-4a55-a900-82748a75aeee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]Causality-Aware[[:space:]]Vision-Based[[:space:]]3D[[:space:]]Occupancy[[:space:]]Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]Discrepancy-aware[[:space:]]Detector[[:space:]]for[[:space:]]Image[[:space:]]Forgery[[:space:]]Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]Equitable[[:space:]]Clustering_[[:space:]]A[[:space:]]Simple[[:space:]]and[[:space:]]Effective[[:space:]]Strategy[[:space:]]for[[:space:]]Clustering[[:space:]]Vision[[:space:]]Tokens/755b1314-9e24-437c-8f51-19b61bb62095_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]Watermarking[[:space:]]Reinvented_[[:space:]]Enhancing[[:space:]]Robustness[[:space:]]and[[:space:]]Generation[[:space:]]Quality[[:space:]]with[[:space:]]Fourier[[:space:]]Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic[[:space:]]versus[[:space:]]Identity_[[:space:]]A[[:space:]]Divide-and-Conquer[[:space:]]Approach[[:space:]]towards[[:space:]]Adjustable[[:space:]]Medical[[:space:]]Image[[:space:]]De-Identification/2b39ad1e-d18f-4579-85b8-6d56df6a82db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semantic-guided[[:space:]]Camera[[:space:]]Ray[[:space:]]Regression[[:space:]]for[[:space:]]Visual[[:space:]]Localization/d007a0e2-f228-4563-89f9-6176e65aa729_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semi-ViM_[[:space:]]Bidirectional[[:space:]]State[[:space:]]Space[[:space:]]Model[[:space:]]for[[:space:]]Mitigating[[:space:]]Label[[:space:]]Imbalance[[:space:]]in[[:space:]]Semi-Supervised[[:space:]]Learning/694bc6b8-9d54-4920-9947-577e19b17917_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semi-supervised[[:space:]]Concept[[:space:]]Bottleneck[[:space:]]Models/790be7e9-6d82-433e-9aab-92f2c72b545a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Semi-supervised[[:space:]]Deep[[:space:]]Transfer[[:space:]]for[[:space:]]Regression[[:space:]]without[[:space:]]Domain[[:space:]]Alignment/60f16256-f509-4463-b38b-322283424ecd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SemiVisBooster_[[:space:]]Boosting[[:space:]]Semi-Supervised[[:space:]]Learning[[:space:]]for[[:space:]]Fine-Grained[[:space:]]Classification[[:space:]]through[[:space:]]Pseudo-Label[[:space:]]Semantic[[:space:]]Guidance/2707eef4-2e69-485f-a5ee-896dce310667_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Separation[[:space:]]for[[:space:]]Better[[:space:]]Integration_[[:space:]]Disentangling[[:space:]]Edge[[:space:]]and[[:space:]]Motion[[:space:]]in[[:space:]]Event-based[[:space:]]Deblurring/9f65b86f-9c52-4842-ae57-85222dbb0f4e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SeqGrowGraph_[[:space:]]Learning[[:space:]]Lane[[:space:]]Topology[[:space:]]as[[:space:]]a[[:space:]]Chain[[:space:]]of[[:space:]]Graph[[:space:]]Expansions/f66ab5d4-0d27-477f-96dd-6cf261bca1fa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sequential[[:space:]]Gaussian[[:space:]]Avatars[[:space:]]with[[:space:]]Hierarchical[[:space:]]Motion[[:space:]]Context/ce271790-0752-41a2-ac20-e12f48492a27_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sequential[[:space:]]keypoint[[:space:]]density[[:space:]]estimator_[[:space:]]an[[:space:]]overlooked[[:space:]]baseline[[:space:]]of[[:space:]]skeleton-based[[:space:]]video[[:space:]]anomaly[[:space:]]detection/a7b634d4-2fd6-469f-a18b-58cbc0beefcb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Serialization[[:space:]]based[[:space:]]Point[[:space:]]Cloud[[:space:]]Oversegmentation/02d08649-2762-474d-94e0-d2ae2cc700de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ShadowHack_[[:space:]]Hacking[[:space:]]Shadows[[:space:]]via[[:space:]]Luminance-Color[[:space:]]Divide[[:space:]]and[[:space:]]Conquer/d86f4be3-9e03-4b38-b628-22e777dbb589_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Shape[[:space:]]of[[:space:]]Motion_[[:space:]]4D[[:space:]]Reconstruction[[:space:]]from[[:space:]]a[[:space:]]Single[[:space:]]Video/a825c689-ac4d-430b-9fb3-d2a95e2ffbee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ShortFT_[[:space:]]Diffusion[[:space:]]Model[[:space:]]Alignment[[:space:]]via[[:space:]]Shortcut-based[[:space:]]Fine-Tuning/447f852c-3aba-4c21-a78d-15886a337f7c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ShortV_[[:space:]]Efficient[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]by[[:space:]]Freezing[[:space:]]Visual[[:space:]]Tokens[[:space:]]in[[:space:]]Ineffective[[:space:]]Layers/213fbd78-7b79-438a-a9fe-b3e13e7f59de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Shot-by-Shot_[[:space:]]Film-Grammar-Aware[[:space:]]Training-Free[[:space:]]Audio[[:space:]]Description[[:space:]]Generation/b2b3829b-f130-4214-bc1a-c2339869b439_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SiM3D_[[:space:]]Single-instance[[:space:]]Multiview[[:space:]]Multimodal[[:space:]]and[[:space:]]Multisetup[[:space:]]3D[[:space:]]Anomaly[[:space:]]Detection[[:space:]]Benchmark/fc999caf-1202-46bd-84ea-94a9faab06d6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sibai_[[:space:]]A[[:space:]]Few-Shot[[:space:]]Meta-Classifier[[:space:]]for[[:space:]]Poisoning[[:space:]]Detection[[:space:]]in[[:space:]]Federated[[:space:]]Learning/c815ddc4-d777-4075-b792-39640cd474e7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SignRep_[[:space:]]Enhancing[[:space:]]Self-Supervised[[:space:]]Sign[[:space:]]Representations/b0873b30-3769-4f25-84b2-07088f813479_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Signs[[:space:]]as[[:space:]]Tokens_[[:space:]]A[[:space:]]Retrieval-Enhanced[[:space:]]Multilingual[[:space:]]Sign[[:space:]]Language[[:space:]]Generator/1cb6aa16-71d9-4a9a-a1f6-c59910d7b13b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sim-DETR_[[:space:]]Unlock[[:space:]]DETR[[:space:]]for[[:space:]]Temporal[[:space:]]Sentence[[:space:]]Grounding/461e9907-9af3-47f5-b610-f09041cded55_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SimMLM_[[:space:]]A[[:space:]]Simple[[:space:]]Framework[[:space:]]for[[:space:]]Multi-modal[[:space:]]Learning[[:space:]]with[[:space:]]Missing[[:space:]]Modality/398740ea-2adc-472d-af59-846c4c676c5a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Similarity[[:space:]]Memory[[:space:]]Prior[[:space:]]is[[:space:]]All[[:space:]]You[[:space:]]Need[[:space:]]for[[:space:]]Medical[[:space:]]Image[[:space:]]Segmentation/7124b243-3435-4191-a8a2-f64fe05a8d96_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SimpleVQA_[[:space:]]Multimodal[[:space:]]Factuality[[:space:]]Evaluation[[:space:]]for[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models/d1c51684-4f7b-42eb-97c5-436f06ea7c3e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Simulating[[:space:]]Dual-Pixel[[:space:]]Images[[:space:]]From[[:space:]]Ray[[:space:]]Tracing[[:space:]]For[[:space:]]Depth[[:space:]]Estimation/daa8d996-9820-46cb-8540-26b2ea341407_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Simultaneous[[:space:]]Motion[[:space:]]And[[:space:]]Noise[[:space:]]Estimation[[:space:]]with[[:space:]]Event[[:space:]]Cameras/6f9dcd49-a085-404d-b287-141b9737e694_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Single-Scanline[[:space:]]Relative[[:space:]]Pose[[:space:]]Estimation[[:space:]]for[[:space:]]Rolling[[:space:]]Shutter[[:space:]]Cameras/1be14823-e24f-4aa3-8a5b-8d0994a9e44f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Skeleton[[:space:]]Motion[[:space:]]Words[[:space:]]for[[:space:]]Unsupervised[[:space:]]Skeleton-Based[[:space:]]Temporal[[:space:]]Action[[:space:]]Segmentation/d38bb523-583d-4f38-b51f-72d4fb904420_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SketchSplat_[[:space:]]3D[[:space:]]Edge[[:space:]]Reconstruction[[:space:]]via[[:space:]]Differentiable[[:space:]]Multi-view[[:space:]]Sketch[[:space:]]Splatting/03d6f604-4658-4429-92bd-44a5220bf2b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Skip-Vision_[[:space:]]Efficient[[:space:]]and[[:space:]]Scalable[[:space:]]Acceleration[[:space:]]of[[:space:]]Vision-Language[[:space:]]Models[[:space:]]via[[:space:]]Adaptive[[:space:]]Token[[:space:]]Skipping/1d1ce44b-d4d0-40e2-b544-e98ad6e37776_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SkySense[[:space:]]V2_[[:space:]]A[[:space:]]Unified[[:space:]]Foundation[[:space:]]Model[[:space:]]for[[:space:]]Multi-modal[[:space:]]Remote[[:space:]]Sensing/5fb33625-ede3-48c2-9ec0-3741fd1fba6c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sliced[[:space:]]Wasserstein[[:space:]]Bridge[[:space:]]for[[:space:]]Open-Vocabulary[[:space:]]Video[[:space:]]Instance[[:space:]]Segmentation/320731ee-724e-4243-b22e-a9cb1f48c443_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SliderSpace_[[:space:]]Decomposing[[:space:]]the[[:space:]]Visual[[:space:]]Capabilities[[:space:]]of[[:space:]]Diffusion[[:space:]]Models/50d20dec-7e48-4238-88fb-f013284e5706_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SmolDocling_[[:space:]]An[[:space:]]ultra-compact[[:space:]]vision-language[[:space:]]model[[:space:]]for[[:space:]]end-to-end[[:space:]]multi-modal[[:space:]]document[[:space:]]conversion/52fdd715-223a-408c-b30d-b6b134926ef6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Snakes[[:space:]]and[[:space:]]Ladders_[[:space:]]Two[[:space:]]Steps[[:space:]]Up[[:space:]]for[[:space:]]VideoMamba/b11c3116-c63e-4aac-8755-cf4eabafe782_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Social[[:space:]]Debiasing[[:space:]]for[[:space:]]Fair[[:space:]]Multi-modal[[:space:]]LLMs/c46b809c-5d37-4c83-a43f-cbf8d15658b3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Soft[[:space:]]Local[[:space:]]Completeness_[[:space:]]Rethinking[[:space:]]Completeness[[:space:]]in[[:space:]]XAI/4e70f0b0-09f1-4a52-a8df-083516f56711_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Soft[[:space:]]Separation[[:space:]]and[[:space:]]Distillation_[[:space:]]Toward[[:space:]]Global[[:space:]]Uniformity[[:space:]]in[[:space:]]Federated[[:space:]]Unsupervised[[:space:]]Learning/a7c7f9f2-5e8e-4c1b-b520-1318204d80e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sparfels_[[:space:]]Fast[[:space:]]Reconstruction[[:space:]]from[[:space:]]Sparse[[:space:]]Unposed[[:space:]]Imagery/e0c26abe-dde6-464c-bb5e-932b397213e7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sparse[[:space:]]Fine-Tuning[[:space:]]of[[:space:]]Transformers[[:space:]]for[[:space:]]Generative[[:space:]]Tasks/93b9d533-60f0-4a0a-a330-dae67ada8155_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sparse-Dense[[:space:]]Side-Tuner[[:space:]]for[[:space:]]efficient[[:space:]]Video[[:space:]]Temporal[[:space:]]Grounding/87762bca-719d-433c-abf0-05e54d888e6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SparseFlex_[[:space:]]High-Resolution[[:space:]]and[[:space:]]Arbitrary-Topology[[:space:]]3D[[:space:]]Shape[[:space:]]Modeling/fcf007d6-72eb-465a-b05f-460f5649b8d5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SparseLaneSTP_[[:space:]]Leveraging[[:space:]]Spatio-Temporal[[:space:]]Priors[[:space:]]with[[:space:]]Sparse[[:space:]]Transformers[[:space:]]for[[:space:]]3D[[:space:]]Lane[[:space:]]Detection/1eda17b7-8ae2-4be3-89f2-07bc44a3e4eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SparseMM_[[:space:]]Head[[:space:]]Sparsity[[:space:]]Emerges[[:space:]]from[[:space:]]Visual[[:space:]]Concept[[:space:]]Responses[[:space:]]in[[:space:]]MLLMs/9ba0adfc-c24a-4716-85c4-644efaf54a4e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SparseRecon_[[:space:]]Neural[[:space:]]Implicit[[:space:]]Surface[[:space:]]Reconstruction[[:space:]]from[[:space:]]Sparse[[:space:]]Views[[:space:]]with[[:space:]]Feature[[:space:]]and[[:space:]]Depth[[:space:]]Consistencies/0728af14-5300-46a3-bf64-7d187d835efb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SparseVILA_[[:space:]]Decoupling[[:space:]]Visual[[:space:]]Sparsity[[:space:]]for[[:space:]]Efficient[[:space:]]VLM[[:space:]]Inference/9b15689c-1721-44a2-b85e-1b1667a45deb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Sparsity[[:space:]]Outperforms[[:space:]]Low-Rank[[:space:]]Projections[[:space:]]in[[:space:]]Few-Shot[[:space:]]Adaptation/6c4b6844-e0ee-4952-860e-16ecf63a3ae1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatial[[:space:]]Alignment[[:space:]]and[[:space:]]Temporal[[:space:]]Matching[[:space:]]Adapter[[:space:]]for[[:space:]]Video-Radar[[:space:]]Remote[[:space:]]Physiological[[:space:]]Measurement/2a28716b-580e-47e9-adc9-9d410d63b302_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatial[[:space:]]Preference[[:space:]]Rewarding[[:space:]]for[[:space:]]MLLMs[[:space:]]Spatial[[:space:]]Understanding/7f1d9c24-f4e8-482b-91ef-4b03709c5ebb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatial-Temporal[[:space:]]Aware[[:space:]]Visuomotor[[:space:]]Diffusion[[:space:]]Policy[[:space:]]Learning/635ea516-8718-4cd2-bbf5-79fa32431478_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatial-Temporal[[:space:]]Forgery[[:space:]]Trace[[:space:]]based[[:space:]]Forgery[[:space:]]Image[[:space:]]Identification/b7964ec0-870b-444a-822c-785edaff983a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SpatialCrafter_[[:space:]]Unleashing[[:space:]]the[[:space:]]Imagination[[:space:]]of[[:space:]]Video[[:space:]]Diffusion[[:space:]]Models[[:space:]]for[[:space:]]Scene[[:space:]]Reconstruction[[:space:]]from[[:space:]]Limited[[:space:]]Observations/359a0256-c24c-45c3-9f07-0205db934b25_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SpatialSplat_[[:space:]]Efficient[[:space:]]Semantic[[:space:]]3D[[:space:]]from[[:space:]]Sparse[[:space:]]Unposed[[:space:]]Images/b5246924-8fb7-4786-baa2-29a57614062d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/SpatialTrackerV2_[[:space:]]Advancing[[:space:]]3D[[:space:]]Point[[:space:]]Tracking[[:space:]]with[[:space:]]Explicit[[:space:]]Camera[[:space:]]Motion/925f4fd6-2e60-44c7-a108-57075a837941_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatially-Varying[[:space:]]Autofocus/812fa83e-886a-4a5e-b27e-3f560550ef19_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Spatio-Spectral[[:space:]]Pattern[[:space:]]Illumination[[:space:]]for[[:space:]]Direct[[:space:]]and[[:space:]]Indirect[[:space:]]Separation[[:space:]]from[[:space:]]a[[:space:]]Single[[:space:]]Hyperspectral[[:space:]]Image/c416257e-42cc-4603-837a-a5cb6ea0c724_origin.pdf filter=lfs diff=lfs merge=lfs -text
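Each added pattern tracks a PDF under Git LFS, with the spaces in the paper-title directories escaped as the POSIX character class `[[:space:]]` so the `.gitattributes` pattern parses despite the whitespace; this is the escaping `git lfs track` applies to paths containing spaces. A minimal sketch of that escaping convention (the helper name is hypothetical, not part of this repo):

```python
def lfs_escape(path: str) -> str:
    """Escape spaces the way git-lfs writes .gitattributes patterns,
    replacing each space with the POSIX class [[:space:]]."""
    return path.replace(" ", "[[:space:]]")


# Build a .gitattributes line for one of the tracked PDFs.
path = "2025/Spatially-Varying Autofocus/812fa83e-886a-4a5e-b27e-3f560550ef19_origin.pdf"
line = lfs_escape(path) + " filter=lfs diff=lfs merge=lfs -text"
print(line)
```

Running this reproduces the same pattern shape as the entries in the hunk above.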
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_content_list.json
ADDED
@@ -0,0 +1,1914 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
158,
|
| 8 |
+
130,
|
| 9 |
+
839,
|
| 10 |
+
176
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zhirui Gao Renjiao Yi† Yuhang Huang Wei Chen Chenyang Zhu Kai Xu† National University of Defense Technology zhirui-gao.github.io/PartGS",
|
| 17 |
+
"bbox": [
|
| 18 |
+
158,
|
| 19 |
+
202,
|
| 20 |
+
836,
|
| 21 |
+
257
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Abstract",
|
| 28 |
+
"text_level": 1,
|
| 29 |
+
"bbox": [
|
| 30 |
+
246,
|
| 31 |
+
291,
|
| 32 |
+
326,
|
| 33 |
+
306
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "Low-level 3D representations, such as point clouds, meshes, NeRFs and 3D Gaussians, are commonly used for modeling 3D objects and scenes. However, cognitive studies indicate that human perception operates at higher levels and interprets 3D environments by decomposing them into meaningful structural parts, rather than low-level elements like points or voxels. Structured geometric decomposition enhances scene interpretability and facilitates downstream tasks requiring component-level manipulation. In this work, we introduce PartGS, a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition, leveraging multi-view image inputs to uncover 3D structural information. Our method jointly optimizes superquadric meshes and Gaussians by coupling their parameters within a hybrid representation. On one hand, superquadrics enable the representation of a wide range of shape primitives, facilitating flexible and meaningful decompositions. On the other hand, 2D Gaussians capture detailed texture and geometric details, ensuring high-fidelity appearance and geometry reconstruction. Operating in a self-supervised manner, our approach demonstrates superior performance compared to state-of-the-art methods across extensive experiments on the DTU, ShapeNet, and real-world datasets.",
|
| 40 |
+
"bbox": [
|
| 41 |
+
89,
|
| 42 |
+
323,
|
| 43 |
+
485,
|
| 44 |
+
686
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "1. Introduction",
|
| 51 |
+
"text_level": 1,
|
| 52 |
+
"bbox": [
|
| 53 |
+
91,
|
| 54 |
+
715,
|
| 55 |
+
220,
|
| 56 |
+
729
|
| 57 |
+
],
|
| 58 |
+
"page_idx": 0
|
| 59 |
+
},
|
| 60 |
+
{
|
| 61 |
+
"type": "text",
|
| 62 |
+
"text": "3D reconstruction from multi-view images is a long-standing challenge in 3D vision and graphics [12, 17, 54]. Most reconstructed scenes are in low-level representations such as point clouds, voxels, or meshes. However, humans tend to understand 3D scenes as reasonable parts [35]. For instance, when observing a scene, we naturally construct high-level structural information, such as scene graphs, instead of focusing on low-level details like point clouds or voxels. Motivated by it, we propose a part-aware reconstruction framework",
|
| 63 |
+
"bbox": [
|
| 64 |
+
89,
|
| 65 |
+
739,
|
| 66 |
+
483,
|
| 67 |
+
876
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "image",
|
| 73 |
+
"img_path": "images/e9b5112f0f0000698fc8e41981e9bcd34a1d0411e232cd65435bab3a447439ce.jpg",
|
| 74 |
+
"image_caption": [],
|
| 75 |
+
"image_footnote": [],
|
| 76 |
+
"bbox": [
|
| 77 |
+
519,
|
| 78 |
+
295,
|
| 79 |
+
602,
|
| 80 |
+
367
|
| 81 |
+
],
|
| 82 |
+
"page_idx": 0
|
| 83 |
+
},
|
| 84 |
+
{
|
| 85 |
+
"type": "image",
|
| 86 |
+
"img_path": "images/7960d6e57f33b96c3164e2a169b996caa297f6960aa0b28d639943a539939304.jpg",
|
| 87 |
+
"image_caption": [],
|
| 88 |
+
"image_footnote": [],
|
| 89 |
+
"bbox": [
|
| 90 |
+
611,
|
| 91 |
+
296,
|
| 92 |
+
710,
|
| 93 |
+
364
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "image",
|
| 99 |
+
"img_path": "images/c70eff9910d387520dac0b67d636cab4ff433cabe32591d4789c5b760d389a60.jpg",
|
| 100 |
+
"image_caption": [],
|
| 101 |
+
"image_footnote": [],
|
| 102 |
+
"bbox": [
|
| 103 |
+
723,
|
| 104 |
+
296,
|
| 105 |
+
895,
|
| 106 |
+
367
|
| 107 |
+
],
|
| 108 |
+
"page_idx": 0
|
| 109 |
+
},
|
| 110 |
+
{
|
| 111 |
+
"type": "image",
|
| 112 |
+
"img_path": "images/fe9a196fd51c54769e7333f9a65c7064bf3e2dae3d63580702f399f6004a1b43.jpg",
|
| 113 |
+
"image_caption": [
|
| 114 |
+
"3D decomposition",
|
| 115 |
+
"Prior work (EMS)"
|
| 116 |
+
],
|
| 117 |
+
"image_footnote": [],
|
| 118 |
+
"bbox": [
|
| 119 |
+
517,
|
| 120 |
+
383,
|
| 121 |
+
604,
|
| 122 |
+
440
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "image",
|
| 128 |
+
"img_path": "images/66b22a31d89c1e7f0e5b6e079ec97e74c5ccfba2e479122679f26b944fa75d42.jpg",
|
| 129 |
+
"image_caption": [
|
| 130 |
+
"Block-level"
|
| 131 |
+
],
|
| 132 |
+
"image_footnote": [],
|
| 133 |
+
"bbox": [
|
| 134 |
+
622,
|
| 135 |
+
383,
|
| 136 |
+
705,
|
| 137 |
+
444
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "image",
|
| 143 |
+
"img_path": "images/3c5961e98e585a5e6c1936f5358c07eb210b0bf051b91053e8f3f57be874c6ec.jpg",
|
| 144 |
+
"image_caption": [
|
| 145 |
+
"Point-level",
|
| 146 |
+
"Ours",
|
| 147 |
+
"Figure 1. Prior works [32, 36] model the scene using primitives, while the proposed method can further model precise geometry details and textures. Here, EMS [32] takes point cloud inputs and reconstructs non-textured primitives."
|
| 148 |
+
],
|
| 149 |
+
"image_footnote": [],
|
| 150 |
+
"bbox": [
|
| 151 |
+
714,
|
| 152 |
+
378,
|
| 153 |
+
803,
|
| 154 |
+
446
|
| 155 |
+
],
|
| 156 |
+
"page_idx": 0
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"type": "image",
|
| 160 |
+
"img_path": "images/d654b2ecedca9d5196535791b9dbc553c6b5ddfd3bad5fe8253689a9104a1689.jpg",
|
| 161 |
+
"image_caption": [],
|
| 162 |
+
"image_footnote": [],
|
| 163 |
+
"bbox": [
|
| 164 |
+
803,
|
| 165 |
+
388,
|
| 166 |
+
895,
|
| 167 |
+
446
|
| 168 |
+
],
|
| 169 |
+
"page_idx": 0
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"type": "text",
|
| 173 |
+
"text": "that decomposes objects or scenes into meaningful shapes or parts, facilitating tasks such as physical simulation, editing, content generation and understanding [18, 28, 52, 53, 56].",
|
| 174 |
+
"bbox": [
|
| 175 |
+
511,
|
| 176 |
+
550,
|
| 177 |
+
906,
|
| 178 |
+
595
|
| 179 |
+
],
|
| 180 |
+
"page_idx": 0
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"type": "text",
|
| 184 |
+
"text": "Several prior works [29, 32, 33, 37, 55, 68] have explored part-aware reconstruction or 3D decomposition. However, most of them rely heavily on 3D supervision and often struggle to retain fine-grained geometric details, limiting their practicality in real-world scenarios. Recent advances [47, 65] have focused on extending the neural radiance field framework for part-aware reconstruction, building upon its remarkable success in 3D reconstruction from multi-view images. For example, PartNeRF [47] models objects using multiple neural radiance fields. Yet, the intricate composition of implicit fields complicates the learning process, leading to suboptimal rendering quality and inefficient decomposition. Recently, DBW [36] proposes a novel approach that decomposes scenes into block-based representations using superquadric primitives [2], optimizing both their parameters and UV texture maps through rendering loss minimization. While this method demonstrates effective scene decomposition into coherent components, it faces challenges in achieving accurate part-aware reconstruction of both geometry and appearance, as illustrated in Fig. 1.",
|
| 185 |
+
"bbox": [
|
| 186 |
+
511,
|
| 187 |
+
598,
|
| 188 |
+
908,
|
| 189 |
+
901
|
| 190 |
+
],
|
| 191 |
+
"page_idx": 0
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"type": "header",
|
| 195 |
+
"text": "CVF",
|
| 196 |
+
"bbox": [
|
| 197 |
+
106,
|
| 198 |
+
2,
|
| 199 |
+
181,
|
| 200 |
+
42
|
| 201 |
+
],
|
| 202 |
+
"page_idx": 0
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"type": "header",
|
| 206 |
+
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
|
| 207 |
+
"bbox": [
|
| 208 |
+
238,
|
| 209 |
+
0,
|
| 210 |
+
807,
|
| 211 |
+
46
|
| 212 |
+
],
|
| 213 |
+
"page_idx": 0
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"type": "page_footnote",
|
| 217 |
+
"text": "† Corresponding authors",
|
| 218 |
+
"bbox": [
|
| 219 |
+
114,
|
| 220 |
+
886,
|
| 221 |
+
246,
|
| 222 |
+
898
|
| 223 |
+
],
|
| 224 |
+
"page_idx": 0
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"type": "page_number",
|
| 228 |
+
"text": "9649",
|
| 229 |
+
"bbox": [
|
| 230 |
+
482,
|
| 231 |
+
944,
|
| 232 |
+
514,
|
| 233 |
+
955
|
| 234 |
+
],
|
| 235 |
+
"page_idx": 0
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"type": "text",
|
| 239 |
+
"text": "To address these limitations, we propose PartGS, a self-supervised part-aware hybrid representation that integrates 2D Gaussians [21] and superquadrics [2], to achieve both high-quality texture reconstruction and geometrically accurate part decomposition. There are previous approaches [14, 20, 49] combining mesh reconstruction and Gaussian splatting, where they first reconstruct the mesh and then attach Gaussians to it. In contrast to their primary focus on acquiring the representations, the proposed approach is to achieve part-aware decomposition reconstruction. Our method establishes a coupled optimization framework that simultaneously learns superquadric meshes and Gaussians through parameter sharing, with each part constrained to a single superquadric. The differentiable rendering of Gaussians drives this hybrid representation, leveraging the inherent convexity of superquadrics for qualitative 3D decomposition. The self-supervised training of part-aware reconstruction is built upon the assumption that each part of most objects should be a basic shape and can be represented by a superquadric. Meanwhile, Gaussian splatting, renowned for its superior rendering quality and efficient training, is incorporated to capture intricate texture details, speeding up reconstruction and improving rendering quality.",
|
| 240 |
+
"bbox": [
|
| 241 |
+
89,
|
| 242 |
+
90,
|
| 243 |
+
485,
|
| 244 |
+
438
|
| 245 |
+
],
|
| 246 |
+
"page_idx": 1
|
| 247 |
+
},
|
| 248 |
+
{
|
| 249 |
+
"type": "text",
|
| 250 |
+
"text": "In the hybrid representation, 2D Gaussians are distributed on superquadric surfaces to form structured blocks. The pose and shape of Gaussians within each block are determined by their corresponding superquadrics rather than being optimized independently. The parameter space encompasses global controls for superquadric properties (shape, pose, and opacity) and local spherical harmonic coefficients for individual Gaussians. Compared to standard Gaussian splating [21, 25], which populates the occupancy volume with independently parameterized Gaussians, the coupled representation is more compact and efficient.",
|
| 251 |
+
"bbox": [
|
| 252 |
+
89,
|
| 253 |
+
441,
|
| 254 |
+
483,
|
| 255 |
+
607
|
| 256 |
+
],
|
| 257 |
+
"page_idx": 1
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"type": "text",
|
| 261 |
+
"text": "During training, the parameters are optimized through rendering loss without additional supervision. However, we notice that image rendering loss often leads to local minima in superquadric shape optimization. To tackle this, we introduce several regularizers to maintain global consistency between the 3D representation and the input 2D information. This strategic implementation allows us to segment parts fully self-supervised, achieving block-level reconstruction. Finally, to better capture irregular shapes, we implement a point-level refinement step that frees 2D Gaussians to deviate from the surface, thereby enhancing geometric fidelity. Extensive experiments show that PartGS, at block-level and point-level, achieves $33.3\\%$ and $75.9\\%$ improvements in reconstruction accuracy, 3.18 and 16.13 increases in PSNR, and 4X and 3X speedups compared to the state-of-the-art baseline. Our contributions are summarized as follows:",
|
| 262 |
+
"bbox": [
|
| 263 |
+
89,
|
| 264 |
+
609,
|
| 265 |
+
485,
|
| 266 |
+
851
|
| 267 |
+
],
|
| 268 |
+
"page_idx": 1
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"type": "text",
|
| 272 |
+
"text": "- We introduce a novel hybrid representation for part-aware 3D reconstruction, combining the strengths of superquadrics and Gaussian splatting to achieve reasonable",
|
| 273 |
+
"bbox": [
|
| 274 |
+
94,
|
| 275 |
+
854,
|
| 276 |
+
485,
|
| 277 |
+
901
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 1
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "text",
|
| 283 |
+
"text": "decomposition and high-quality rendering.",
|
| 284 |
+
"bbox": [
|
| 285 |
+
529,
|
| 286 |
+
90,
|
| 287 |
+
813,
|
| 288 |
+
106
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 1
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "list",
|
| 294 |
+
"sub_type": "text",
|
| 295 |
+
"list_items": [
|
| 296 |
+
"- We propose several novel regularizers to enforce consistency between 3D decomposition and 2D observations, enabling self-supervised part decomposition.",
|
| 297 |
+
"- Compared to prior works, the method takes one step forward, by reconstructing both the block-level and point-level part-aware reconstructions, preserving both part segmentation and reconstruction precision."
|
| 298 |
+
],
|
| 299 |
+
"bbox": [
|
| 300 |
+
517,
|
| 301 |
+
107,
|
| 302 |
+
906,
|
| 303 |
+
212
|
| 304 |
+
],
|
| 305 |
+
"page_idx": 1
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"type": "text",
|
| 309 |
+
"text": "2. Related Work",
|
| 310 |
+
"text_level": 1,
|
| 311 |
+
"bbox": [
|
| 312 |
+
513,
|
| 313 |
+
228,
|
| 314 |
+
655,
|
| 315 |
+
243
|
| 316 |
+
],
|
| 317 |
+
"page_idx": 1
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"type": "text",
|
| 321 |
+
"text": "2.1. Shape Decomposition and Abstraction",
|
| 322 |
+
"text_level": 1,
|
| 323 |
+
"bbox": [
|
| 324 |
+
511,
|
| 325 |
+
253,
|
| 326 |
+
844,
|
| 327 |
+
268
|
| 328 |
+
],
|
| 329 |
+
"page_idx": 1
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"type": "text",
|
| 333 |
+
"text": "Structured shape representation learning decomposes objects or scenes into coherent geometric primitives, facilitating shape understanding and generation [9, 15, 19, 46, 57]. Early approaches like Blocks World [42] and Generalized Cylinders [3] emphasized compact representations. Modern methods typically process 3D inputs (point clouds, meshes, voxels) by decomposing them into primitive ensembles, including cuboids [41, 48], superquadrics [32, 38, 39, 58], and convex shapes [8, 10]. For instance, MonteBoxFinder [41] integrates clustering [11, 50, 60], cuboid representations, and Monte Carlo Tree Search, while EMS [32] adopts a probabilistic approach for superquadric recovery. However, these methods are limited to coarse shape representations. Recent advances in shape abstraction [13, 22, 31, 40, 45] enable more detailed shape representations through flexible primitive deformation.",
|
| 334 |
+
"bbox": [
|
| 335 |
+
511,
|
| 336 |
+
276,
|
| 337 |
+
906,
|
| 338 |
+
517
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 1
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "text",
|
| 344 |
+
"text": "Some studies [1, 16, 36, 47, 65, 66] attempt to create structure-aware 3D representations directly from images. PartNeRF [47] introduces ellipsoid representations within NeRFs, its reliance on multiple implicit fields results in inefficient 3D decomposition. ISCO [1] and DBW [36] use 3D superquadrics for shape decomposition, which enables more meaningful structural separation. However, their simple shape parameters struggle to capture complex geometries, leading to poor geometry and appearance reconstruction. DPA-Net [65] has advanced 3D shape abstraction from sparse views but generates redundant parts and struggles with realistic texture rendering. A concurrent work, GaussianBlock [24], employs SAM [27] to guide superquadric splitting and fusion for 3D decomposition, yet its computational efficiency remains limited, typically requiring several hours per scene. In contrast, our approach accomplishes self-supervised [51, 59] part-aware scene reconstruction through an efficient hybrid representation that simultaneously maintains geometric fidelity and photorealistic rendering quality.",
|
| 345 |
+
"bbox": [
|
| 346 |
+
511,
|
| 347 |
+
518,
|
| 348 |
+
908,
|
| 349 |
+
806
|
| 350 |
+
],
|
| 351 |
+
"page_idx": 1
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"type": "text",
|
| 355 |
+
"text": "2.2. Mesh-based Gaussian Splatting",
|
| 356 |
+
"text_level": 1,
|
| 357 |
+
"bbox": [
|
| 358 |
+
511,
|
| 359 |
+
816,
|
| 360 |
+
792,
|
| 361 |
+
834
|
| 362 |
+
],
|
| 363 |
+
"page_idx": 1
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"type": "text",
|
| 367 |
+
"text": "Gaussian splitting (GS) [25] has been rapidly adopted in multiple fields due to its remarkable rendering capability. Several studies [7, 14, 20, 49] attempt to align Gaussians with mesh surfaces for easier editing and animation. Among",
|
| 368 |
+
"bbox": [
|
| 369 |
+
511,
|
| 370 |
+
839,
|
| 371 |
+
906,
|
| 372 |
+
902
|
| 373 |
+
],
|
| 374 |
+
"page_idx": 1
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"type": "page_number",
|
| 378 |
+
"text": "9650",
|
| 379 |
+
"bbox": [
|
| 380 |
+
482,
|
| 381 |
+
944,
|
| 382 |
+
514,
|
| 383 |
+
955
|
| 384 |
+
],
|
| 385 |
+
"page_idx": 1
|
| 386 |
+
},
|
| 387 |
+
{
|
| 388 |
+
"type": "image",
|
| 389 |
+
"img_path": "images/1725736d806238fdb08ae4afbf2598d85df1ee5bf8813240447110681c446bb3.jpg",
|
| 390 |
+
"image_caption": [
|
| 391 |
+
"Figure 2. Overview of our pipeline. PartGS takes multi-view images to learn a parametric hybrid representation of superquadrics and 2D Gaussians. It initializes from random superquadrics and is gradually optimized during training to obtain a block-level reconstruction. Then, we free the constraints of Gaussians to model detailed geometry, achieving point-level reconstruction."
|
| 392 |
+
],
|
| 393 |
+
"image_footnote": [],
|
| 394 |
+
"bbox": [
|
| 395 |
+
112,
|
| 396 |
+
85,
|
| 397 |
+
883,
|
| 398 |
+
340
|
| 399 |
+
],
|
| 400 |
+
"page_idx": 2
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"type": "text",
|
| 404 |
+
"text": "them, SuGaR [20] uses flat 3D Gaussians to enforce the alignment with the scene surface during optimization, minimizing the difference between the signed distance functions (SDF) of the desired Gaussians and actual Gaussians. GaMeS [49] introduces a hybrid representation of Gaussians and mesh, where Gaussians are attached to triangular facets of the mesh. Similarly, Gao et al. [14] proposes a mesh-based GS to achieve large-scale deformation effects on objects. Recently, 2DGS [21] proposes 2D Gaussians for surface modeling and significantly enhances the geometric quality. Inspired by them, we propose a representation that combines 2D Gaussians with superquadrics. A key distinction between our method and previous mesh-based GS approaches is that these methods require first reconstructing the scene's mesh and then binding Gaussians to the mesh surface, which results in a non-continuous mesh. In contrast, our method directly optimizes the mesh through a rendering loss, enabling part-aware mesh reconstruction.",
|
| 405 |
+
"bbox": [
|
| 406 |
+
91,
|
| 407 |
+
412,
|
| 408 |
+
483,
|
| 409 |
+
684
|
| 410 |
+
],
|
| 411 |
+
"page_idx": 2
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"type": "text",
|
| 415 |
+
"text": "3. Method",
|
| 416 |
+
"text_level": 1,
|
| 417 |
+
"bbox": [
|
| 418 |
+
89,
|
| 419 |
+
705,
|
| 420 |
+
181,
|
| 421 |
+
720
|
| 422 |
+
],
|
| 423 |
+
"page_idx": 2
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"text": "Given a set of calibrated multi-view images and foreground masks, we aim to learn part-aware 3D representations, a meaningful decomposition of both geometry and appearance. Our approach adopts a two-stage optimization strategy: first, decomposing the object into basic shapes using a mixture of Gaussians and superquadrics at the block-level, followed by refining the decomposition at the point-level to achieve precise geometry. In Sec 3.1, we parameterize the hybrid representation. Sec. 3.2 then elaborates on leveraging this representation, enhanced by novel regularizers and an adaptive control strategy, to achieve self-supervised block-level",
|
| 428 |
+
"bbox": [
|
| 429 |
+
89,
|
| 430 |
+
734,
|
| 431 |
+
485,
|
| 432 |
+
900
|
| 433 |
+
],
|
| 434 |
+
"page_idx": 2
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "text",
|
| 438 |
+
"text": "decomposition. Finally, Sec. 3.3 presents the process for obtaining detailed part-aware results.",
|
| 439 |
+
"bbox": [
|
| 440 |
+
511,
|
| 441 |
+
412,
|
| 442 |
+
906,
|
| 443 |
+
443
|
| 444 |
+
],
|
| 445 |
+
"page_idx": 2
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "text",
|
| 449 |
+
"text": "3.1. Parametrizing the Hybrid Representation",
|
| 450 |
+
"text_level": 1,
|
| 451 |
+
"bbox": [
|
| 452 |
+
511,
|
| 453 |
+
450,
|
| 454 |
+
870,
|
| 455 |
+
465
|
| 456 |
+
],
|
| 457 |
+
"page_idx": 2
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"type": "text",
|
| 461 |
+
"text": "As shown in Fig. 2, to leverage the strengths of both, we attach Gaussians to the surface of superquadric meshes. This representation retains the superquadric's ability to parse a 3D scene into distinct parts. Meanwhile, spherical harmonics of Gaussians enable complex texture rendering via Gaussian splatting, addressing the texture learning limitations of prior work [32, 36, 41, 64]. Sharing pose parameters between superquadrics and 2D Gaussians further improves the representation's efficiency.",
|
| 462 |
+
"bbox": [
|
| 463 |
+
509,
|
| 464 |
+
470,
|
| 465 |
+
906,
|
| 466 |
+
607
|
| 467 |
+
],
|
| 468 |
+
"page_idx": 2
|
| 469 |
+
},
|
| 470 |
+
{
|
| 471 |
+
"type": "text",
|
| 472 |
+
"text": "The parametric representation is controlled by both primitive and Gaussian parameters, which are optimized simultaneously. Given a 3D scene $S$ , the proposed method decomposes it into multiple hybrid blocks, each consisting of a superquadric with associated Gaussians. Each scene is denoted by the hybrid representation: $S = \\mathcal{B}_1 \\cup \\ldots \\cup \\mathcal{B}_i \\cup \\mathcal{B}_M$ , where $\\mathcal{B}_i$ is the $i$ -th hybrid block, and $M$ is the total number of blocks. Blocks are defined using manually designed parameters that control pose, opacity, scale, shape, and texture. These parameters are optimized via differentiable rendering to parse the target scene.",
|
| 473 |
+
"bbox": [
|
| 474 |
+
511,
|
| 475 |
+
608,
|
| 476 |
+
908,
|
| 477 |
+
773
|
| 478 |
+
],
|
| 479 |
+
"page_idx": 2
|
| 480 |
+
},
|
| 481 |
+
{
|
| 482 |
+
"type": "text",
|
| 483 |
+
"text": "Shape and Scale Parameters. For each hybrid block $\\mathcal{B}_i$ , its geometry is controlled by superquadric parameters [2]. Specifically, there are two shape parameters $\\epsilon_1, \\epsilon_2$ that define its shape, along with three scale parameters $s_1, s_2, s_3$ , which scale the 3D axes. Analogous to icosphere vertex positioning, superquadric vertex coordinates are computed by:",
|
| 484 |
+
"bbox": [
|
| 485 |
+
511,
|
| 486 |
+
773,
|
| 487 |
+
908,
|
| 488 |
+
864
|
| 489 |
+
],
|
| 490 |
+
"page_idx": 2
|
| 491 |
+
},
|
| 492 |
+
{
|
| 493 |
+
"type": "equation",
|
| 494 |
+
"text": "\n$$\n\\mathbf {v} = \\left[ s _ {1} \\cos^ {\\epsilon 1} (\\theta) \\cos^ {\\epsilon 2} (\\varphi); s _ {2} \\sin^ {\\epsilon 1} (\\theta); s _ {3} \\cos^ {\\epsilon 1} (\\theta) \\sin^ {\\epsilon 2} (\\varphi) \\right], \\quad (1)\n$$\n",
|
| 495 |
+
"text_format": "latex",
|
| 496 |
+
"bbox": [
|
| 497 |
+
526,
|
| 498 |
+
867,
|
| 499 |
+
906,
|
| 500 |
+
882
|
| 501 |
+
],
|
| 502 |
+
"page_idx": 2
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "text",
|
| 506 |
+
"text": "where $\\theta$ and $\\varphi$ represent the azimuth and elevation defined",
|
| 507 |
+
"bbox": [
|
| 508 |
+
511,
|
| 509 |
+
885,
|
| 510 |
+
906,
|
| 511 |
+
900
|
| 512 |
+
],
|
| 513 |
+
"page_idx": 2
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "page_number",
|
| 517 |
+
"text": "9651",
|
| 518 |
+
"bbox": [
|
| 519 |
+
482,
|
| 520 |
+
944,
|
| 521 |
+
513,
|
| 522 |
+
955
|
| 523 |
+
],
|
| 524 |
+
"page_idx": 2
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "text",
|
| 528 |
+
"text": "in the spherical coordinate. The shape and scale parameters govern block deformation to learn part-aware geometry.",
|
| 529 |
+
"bbox": [
|
| 530 |
+
89,
|
| 531 |
+
90,
|
| 532 |
+
480,
|
| 533 |
+
119
|
| 534 |
+
],
|
| 535 |
+
"page_idx": 3
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"text": "Pose Parameters. The pose of the $i$ -th hybrid block is defined by its rotation $\\mathbf{R}_i$ and translation $\\mathbf{t}_i$ . The vertex position from Eq. 1 is transformed from the local coordinate to world space as: $\\hat{\\mathbf{v}}_i^j = \\mathbf{R}_i\\mathbf{v}_i^j + \\mathbf{t}_i$ , where $j$ indexes the vertices. Previous approaches, including SuGaR [20] and GaMeS [49], position Gaussians on reconstructed meshes with independent pose and shape parameters to enhance appearance modeling. In contrast, our method employs Gaussians to construct a differentiable rendering that bridges images and superquadrics. To achieve this, we couple the parameters of Gaussians with superquadrics.",
"bbox": [
88,
121,
482,
286
],
"page_idx": 3
},
{
"type": "text",
"text": "For a posed superquadric, Gaussian centers are uniformly sampled on triangular faces, with their poses determined by face vertices. Following GaMeS [49], each Gaussian's rotation matrix $\\mathrm{R}_v = [r_1,r_2,r_3]$ and scaling $\\mathrm{S}_v$ are computed from vertex positions. Given a triangular face $V$ with vertices $v_{1},v_{2},v_{3}\\in \\mathbb{R}^{3}$ , orthonormal vectors are constructed such that $r_1$ aligns with the face normal and $r_2$ points from the centroid $m = \\mathrm{mean}(v_1,v_2,v_3)$ to $v_{1}$ . $r_3$ is obtained by orthonormalizing the vector from the centroid to the second vertex with respect to the existing $r_1$ and $r_2$ :",
"bbox": [
89,
287,
483,
438
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nr _ {3} = \\frac {\\operatorname {ort} \\left(v _ {2} - m ; r _ {1} , r _ {2}\\right)}{\\left\\| \\operatorname {ort} \\left(v _ {2} - m ; r _ {1} , r _ {2} \\right) \\right\\|}, \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
189,
449,
483,
481
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\text{ort}$ denotes Gram-Schmidt orthonormalization [4]. For the scaling of 2D Gaussians, we use:",
"bbox": [
89,
486,
483,
516
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathrm {S} _ {v} = \\operatorname {diag} \\left(s _ {v} ^ {2}, s _ {v} ^ {3}\\right), \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
220,
523,
482,
542
],
"page_idx": 3
},
{
"type": "text",
"text": "where $s_v^2 = c||m - v_1||$ , $s_v^3 = c\\langle v_2, r_3\\rangle$ , and $c$ is a size-control hyperparameter. We place a fixed number of Gaussians on each triangular face. These designs eliminate the need to learn the geometry of Gaussians, thereby enhancing the efficiency of the representation.",
"bbox": [
89,
551,
483,
627
],
"page_idx": 3
},
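The per-face Gaussian frame of Eqs. 2-3 can be sketched numerically. This is an illustrative NumPy version under our own naming (`face_gaussian_frame`, `ort`, and the default `c` are not from the paper's code), and the inner product for $s_v^3$ is taken against $v_2 - m$, which we read as the intended quantity:

```python
import numpy as np

def ort(v, r1, r2):
    """Gram-Schmidt step: remove the components of v along the
    orthonormal vectors r1 and r2."""
    return v - np.dot(v, r1) * r1 - np.dot(v, r2) * r2

def face_gaussian_frame(v1, v2, v3, c=0.5):
    """Rotation [r1, r2, r3] and scaling diag(s2, s3) for one face.
    c is the size-control hyperparameter (0.5 is an arbitrary choice)."""
    m = (v1 + v2 + v3) / 3.0                    # face centroid
    n = np.cross(v2 - v1, v3 - v1)
    r1 = n / np.linalg.norm(n)                  # face normal
    r2 = (v1 - m) / np.linalg.norm(v1 - m)      # centroid -> first vertex
    o = ort(v2 - m, r1, r2)
    r3 = o / np.linalg.norm(o)                  # Eq. 2
    s2 = c * np.linalg.norm(m - v1)             # Eq. 3 scales
    s3 = c * abs(np.dot(v2 - m, r3))            # assumes v2 - m in the inner product
    return np.stack([r1, r2, r3], axis=1), np.diag([s2, s3])
```

Because the frame is fully determined by the vertices, no per-Gaussian rotation or scale needs to be learned, which is the efficiency argument made above.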
{
"type": "text",
"text": "Opacity Parameters. We define the total number of hybrid blocks as $M$ . However, a typical scene does not contain exactly $M$ blocks. Therefore, only the meaningful blocks are retained. To achieve this, we introduce a learnable parameter $\\tau_{i}$ to represent each block's opacity. During the optimization, only blocks with $\\tau_{i}$ greater than a certain threshold are retained. Note that Gaussians within the same block share the same $\\tau$ for opacity in rasterization.",
"bbox": [
89,
628,
483,
748
],
"page_idx": 3
},
{
"type": "text",
"text": "Texture Parameters. The texture is modeled using 2D Gaussians positioned on the surface of the superquadrics. Spherical harmonics of each Gaussian control the texture and are optimized for rendering view-dependent images [25].",
"bbox": [
89,
750,
483,
810
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2. Block-level Decomposition",
"text_level": 1,
"bbox": [
89,
816,
328,
834
],
"page_idx": 3
},
{
"type": "text",
"text": "This section describes the optimization of the hybrid representation. We observed that minimizing the rendering loss across multi-view images alone led to instability in positioning hybrid blocks. Therefore, several regularization terms",
"bbox": [
89,
839,
483,
900
],
"page_idx": 3
},
{
"type": "text",
"text": "are introduced to optimize the composition for maximal image formation consistency.",
"bbox": [
511,
90,
903,
121
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2.1. Optimization",
"text_level": 1,
"bbox": [
511,
128,
651,
143
],
"page_idx": 3
},
{
"type": "text",
"text": "In addition to the image rendering loss, we employ regularizers consisting of coverage, overlap, parsimony, and opacity entropy. The coverage loss encourages hybrid blocks to cover only meaningful regions; the overlap loss prevents blocks from overlapping; the parsimony loss regularizes the number of existing blocks; and the opacity entropy encourages block opacities to be close to binary.",
"bbox": [
511,
147,
906,
252
],
"page_idx": 3
},
{
"type": "text",
"text": "Rendering Loss. The rendering loss is based on 3DGS [25], combining an $L_{1}$ term with a D-SSIM term:",
"bbox": [
511,
253,
906,
282
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {ren}} = (1 - \\lambda) L _ {1} + \\lambda L _ {\\mathrm {D-SSIM}}. \\tag {4}\n$$\n",
"text_format": "latex",
"bbox": [
602,
297,
906,
314
],
"page_idx": 3
},
{
"type": "text",
"text": "Coverage Loss. The coverage loss ensures that the block set $\\{\\mathcal{B}_1 \\ldots \\mathcal{B}_M\\}$ covers the object while preventing it from extending beyond its boundaries. To determine the 3D occupancy field based on the blocks, we first define the approximate signed distance of a 3D point $x$ to the $i$ -th block, $D_i(x) = \\Psi_i(x) - 1$ . Here, $\\Psi_i$ denotes the superquadric inside-outside function [2] associated with the block $i$ . Consequently, $D_i(x) \\leq 0$ if the point $x$ lies inside the $i$ -th block, and $D_i(x) > 0$ if $x$ is outside the block. Inspired by NeRF [34], we sample a ray set $\\mathcal{R}$ using ray-casting based on camera poses. Given the object mask, we associate each ray $r \\in \\mathcal{R}$ with a binary label: $l_r = 1$ if the ray $r$ is inside the mask, otherwise $l_r = 0$ . The coverage loss is defined as:",
"bbox": [
511,
320,
906,
517
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\mathrm {cov}} (\\mathcal {R}) = \\sum_ {r \\in \\mathcal {R}} l _ {r} L _ {\\text {cross}} (r) + \\left(1 - l _ {r}\\right) L _ {\\text {non-cross}} (r). \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [
532,
526,
906,
559
],
"page_idx": 3
},
{
"type": "text",
"text": "Here, $L_{\\mathrm{cross}}$ encourages rays to intersect with blocks, while $L_{\\mathrm{non-cross}}$ discourages rays from intersecting with blocks:",
"bbox": [
511,
568,
905,
599
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nL _ {\\text {cross}} (r) = \\operatorname {ReLU} \\left(\\min _ {i \\in \\mathcal {M}} \\min _ {x _ {j} ^ {r} \\in \\mathcal {X} _ {r}} D _ {i} \\left(x _ {j} ^ {r}\\right)\\right), \\tag {6}\n$$\n",
"text_format": "latex",
"bbox": [
573,
609,
906,
638
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nL _ {\\text {non-cross}} (r) = \\operatorname {ReLU} \\left(\\max _ {i \\in \\mathcal {M}} \\max _ {x _ {j} ^ {r} \\in \\mathcal {X} _ {r}} - D _ {i} \\left(x _ {j} ^ {r}\\right)\\right), \\tag {7}\n$$\n",
"text_format": "latex",
"bbox": [
557,
645,
906,
675
],
"page_idx": 3
},
{
"type": "text",
"text": "where $x_{j}^{r}$ denotes the $j$ -th sampled point along the ray $r$ and $\\mathcal{M} = \\{1, \\dots, M\\}$ . Intuitively, this implies that at least one sampled point in $\\mathcal{X}_{r}$ along a ray $r$ inside the mask must lie within some block, while all points along a ray outside the mask should not belong to any block.",
"bbox": [
511,
681,
905,
756
],
"page_idx": 3
},
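A minimal sketch of the coverage terms in Eqs. 5-7, assuming the signed distances $D_i(x_j^r)$ have already been evaluated for every block and sample along each ray (all names here are illustrative, not the paper's code):

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

def coverage_loss(D_per_ray, labels):
    """D_per_ray: list of (M, J) arrays of signed distances D_i(x_j^r);
    labels: per-ray mask labels l_r (1 inside the mask, 0 outside)."""
    total = 0.0
    for D, l in zip(D_per_ray, labels):
        if l == 1:
            # Eq. 6: penalize unless some sample lies inside some block
            total += relu(D.min())
        else:
            # Eq. 7: penalize if any sample lies inside any block
            total += relu((-D).max())
    return total
```

A ray labeled inside the mask contributes zero as soon as its smallest signed distance is non-positive, i.e. it pierces at least one block.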
{
"type": "text",
"text": "Overlap Loss. We introduce a regularization term to prevent overlap between individual blocks. Given the difficulty of directly calculating block overlap, we adopt a Monte Carlo method similar to [36]. Specifically, we sample multiple 3D points in space and penalize those that lie inside more than $k$ blocks. Based on the superquadric inside-outside function, a soft occupancy function is defined as:",
"bbox": [
511,
757,
906,
862
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {O} _ {i} ^ {x} = \\tau_ {i} \\operatorname {sigmoid} \\left(\\frac {- D _ {i} (x)}{\\gamma}\\right), \\tag {8}\n$$\n",
"text_format": "latex",
"bbox": [
612,
872,
906,
905
],
"page_idx": 3
},
{
"type": "page_number",
"text": "9652",
"bbox": [
480,
944,
514,
955
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\gamma$ is a temperature parameter. Thus, for a set of $N$ sampled 3D points $\\Omega$ , the overlap loss is expressed as:",
"bbox": [
89,
90,
482,
122
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {over}} = \\frac {1}{N} \\sum_ {x \\in \\Omega} \\operatorname {ReLU} \\left(\\sum_ {i \\in \\mathcal {M}} \\mathcal {O} _ {i} ^ {x} - k\\right). \\tag {9}\n$$\n",
"text_format": "latex",
"bbox": [
153,
131,
483,
172
],
"page_idx": 4
},
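The soft occupancy of Eq. 8 and the Monte Carlo penalty of Eq. 9 can be sketched as follows. The sigmoid is computed via tanh for numerical stability at the small temperature, and `gamma` and `k` take the values quoted later in Sec. 4.2; this is an illustration, not the paper's implementation:

```python
import numpy as np

def overlap_loss(D, tau, gamma=0.005, k=1.95):
    """D: (M, N) signed distances D_i(x) of N sampled points to M blocks;
    tau: (M,) block opacities in [0, 1]."""
    # Eq. 8: tau_i * sigmoid(-D_i(x) / gamma), via the stable tanh identity
    occ = tau[:, None] * 0.5 * (1.0 + np.tanh(-D / (2.0 * gamma)))
    # Eq. 9: penalize total occupancy exceeding k at each sampled point
    excess = occ.sum(axis=0) - k
    return np.maximum(excess, 0.0).mean()
```

A point deep inside three fully opaque blocks has summed occupancy near 3, so it pays roughly 3 - k, while a point covered by a single block pays nothing.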
{
"type": "text",
"text": "Parsimony Loss. To promote the use of the minimal number of blocks and achieve parsimony in decomposition, we introduce a regularization term that penalizes block opacity $(\\tau)$ . This loss is defined as:",
"bbox": [
89,
184,
483,
244
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {par}} = \\frac {1}{M} \\sum_ {i \\in \\mathcal {M}} \\sqrt {\\tau_ {i}}. \\tag {10}\n$$\n",
"text_format": "latex",
"bbox": [
215,
253,
483,
290
],
"page_idx": 4
},
{
"type": "text",
"text": "Opacity Entropy Loss. During optimization, the opacity of a block inside the object region tends to approach 1, while the opacity of a block outside the object region tends to approach 0. To facilitate this, we associate the opacities of blocks with the labeled rays and apply a cross-entropy loss $L_{ce}$ between the block opacity and the mask labels, defined as follows:",
"bbox": [
89,
301,
483,
405
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\mathrm {opa}} (\\mathcal {R}) = \\frac {1}{| \\mathcal {R} |} \\sum_ {r \\in \\mathcal {R}} L _ {ce} \\left(\\max _ {i \\in \\mathcal {M}} \\tau_ {i} \\left(x ^ {r}\\right), l _ {r}\\right). \\tag {11}\n$$\n",
"text_format": "latex",
"bbox": [
125,
415,
483,
452
],
"page_idx": 4
},
{
"type": "text",
"text": "Here, only points $x^r$ inside the blocks are sampled, i.e., $\\{x^r \\in \\mathcal{X}_r \\mid D_i(x^r) \\leq 0\\}$ .",
"bbox": [
89,
463,
483,
493
],
"page_idx": 4
},
{
"type": "text",
"text": "The total loss is the weighted sum of the loss terms described above:",
"bbox": [
89,
493,
483,
522
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} = \\mathcal {L} _ {\\text {ren}} + \\lambda_ {\\text {cov}} \\mathcal {L} _ {\\text {cov}} + \\lambda_ {\\text {over}} \\mathcal {L} _ {\\text {over}} + \\lambda_ {\\text {par}} \\mathcal {L} _ {\\text {par}} + \\lambda_ {\\text {opa}} \\mathcal {L} _ {\\text {opa}}. \\tag {12}\n$$\n",
"text_format": "latex",
"bbox": [
99,
535,
482,
551
],
"page_idx": 4
},
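A sketch of how the parsimony term (Eq. 10) and the weighted sum of Eq. 12 combine, using the loss weights reported in Sec. 4.2 (function names are ours, not the paper's):

```python
import numpy as np

def parsimony_loss(tau):
    """Eq. 10: mean sqrt-opacity over the M blocks; sqrt keeps the
    gradient pushing small opacities toward zero."""
    return np.sqrt(tau).mean()

def total_loss(l_ren, l_cov, l_over, l_par, l_opa,
               w_cov=10.0, w_over=1.0, w_par=0.002, w_opa=0.01):
    """Eq. 12 with the weights quoted in Sec. 4.2."""
    return l_ren + w_cov * l_cov + w_over * l_over + w_par * l_par + w_opa * l_opa
```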
{
"type": "text",
"text": "3.2.2. Adaptive Number of Blocks.",
"text_level": 1,
"bbox": [
89,
563,
333,
578
],
"page_idx": 4
},
{
"type": "text",
"text": "Given that the number of parts may vary across different scenes, we allow the number of blocks to adjust adaptively during optimization. Specifically, when the opacity of an existing block falls below a threshold $t$ , that block is removed immediately. Furthermore, we explore a block-adding mechanism to dynamically incorporate new components, ensuring comprehensive coverage of target objects. DBSCAN [44] is employed to cluster initial point clouds that are not covered by any blocks, where the point clouds are either randomly initialized or derived as by-products from COLMAP [43]. For each point cloud cluster, a new block is introduced at its center, with the remaining parameters of the block initialized randomly.",
"bbox": [
89,
582,
483,
777
],
"page_idx": 4
},
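The block-adding step can be sketched as follows. The paper uses DBSCAN [44]; here a simple radius-based connectivity clustering stands in for it, so this is only an illustration of the spawn-a-block-at-each-cluster-center idea (all names and thresholds are ours):

```python
import numpy as np

def cluster_centers(points, eps=0.1, min_pts=3):
    """Group uncovered points whose distance chains stay below eps
    (union-find), then return one candidate block center per cluster
    that has at least min_pts members."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < eps:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(points[i])
    return [np.mean(c, axis=0) for c in clusters.values() if len(c) >= min_pts]
```

Each returned center would seed a new block whose remaining parameters are initialized randomly, as described above.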
{
"type": "text",
"text": "The final block-level reconstruction generates a textured 3D scene represented by multiple superquadrics, with surface details rendered through Gaussians, as illustrated in Fig. 2.",
"bbox": [
89,
779,
483,
839
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3. Point-level Decomposition",
"text_level": 1,
"bbox": [
89,
848,
326,
864
],
"page_idx": 4
},
{
"type": "text",
"text": "The primitive-based hybrid representation effectively decomposes the shape into parts but performs suboptimally for",
"bbox": [
89,
869,
483,
901
],
"page_idx": 4
},
{
"type": "text",
"text": "complex objects. To address this limitation, we perform a refinement stage that enhances geometric fitting. In this stage, the constraint between Gaussians and superquadrics is decoupled, enabling independent Gaussian optimization. Furthermore, to prevent Gaussian components from one block passing through adjacent blocks and disrupting the continuity and plausibility of the part decomposition, we impose a new constraint to minimize the negative signed distance of each Gaussian entering other blocks:",
"bbox": [
511,
90,
906,
227
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {enter}} = \\frac {1}{N} \\sum_ {x \\in \\Omega} \\sum_ {m \\in \\mathcal {M} \\backslash \\{\\delta \\}} \\operatorname {ReLU} (- D _ {m} (x)), \\tag {13}\n$$\n",
"text_format": "latex",
"bbox": [
549,
232,
906,
270
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\Omega$ is the sampled Gaussian set, and $\\delta$ represents the identifier of the block to which the Gaussian $x$ belongs. As seen in Fig. 2, the reconstruction becomes more accurate and aligns better with the target shape.",
"bbox": [
511,
276,
905,
337
],
"page_idx": 4
},
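Eq. 13 can be sketched as below, assuming per-Gaussian signed distances to all blocks and a known block assignment $\delta$ are available (names are illustrative):

```python
import numpy as np

def enter_loss(D, block_id):
    """D: (N, M) signed distances of N sampled Gaussians to M blocks;
    block_id: (N,) index delta of the block each Gaussian belongs to."""
    pen = np.maximum(-D, 0.0)                 # ReLU(-D_m(x)): depth of intrusion
    pen[np.arange(len(D)), block_id] = 0.0    # exclude the Gaussian's own block delta
    return pen.sum(axis=1).mean()             # average over the N sampled Gaussians
```

A Gaussian that sits inside its own block only incurs no penalty; one that pokes into a neighboring block is penalized by how far it has entered.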
{
"type": "text",
"text": "4. Experiments",
"text_level": 1,
"bbox": [
511,
349,
645,
364
],
"page_idx": 4
},
{
"type": "text",
"text": "We comprehensively evaluate our approach across four key aspects: 3D reconstruction quality, view synthesis performance, shape parsimony, and computational efficiency, comparing it with state-of-the-art methods in the field.",
"bbox": [
511,
373,
906,
434
],
"page_idx": 4
},
{
"type": "text",
"text": "4.1. Datasets",
"text_level": 1,
"bbox": [
511,
441,
614,
455
],
"page_idx": 4
},
{
"type": "text",
"text": "Extensive experiments are conducted on two widely-used public datasets: DTU [23] and ShapeNet [6]. Specifically, 15 standard scenes from the DTU dataset that are widely adopted for reconstruction quality assessment are used for evaluation. Each scene contains either 49 or 64 images. Additionally, we construct a ShapeNet subset consisting of four categories: Chair, Table, Gun, and Airplane, with 15 objects selected per category. For each object, 50 rendered images are generated for training. We further validate real-world applicability on both the BlendedMVS dataset [61] and self-captured scenes, demonstrating robust performance across synthetic and natural environments.",
"bbox": [
511,
463,
908,
645
],
"page_idx": 4
},
{
"type": "text",
"text": "4.2. Implementation Details",
"text_level": 1,
"bbox": [
511,
652,
730,
667
],
"page_idx": 4
},
{
"type": "text",
"text": "We build upon the 2DGS [21] architecture and add a non-learnable part attribute to each Gaussian component to enforce the constraint in Eq. 13. In the ShapeNet benchmark, we observe a common degradation in mesh reconstruction as shown in Fig. 7. To address this, we propose Gaussian scale regularization, as validated in Sec. 4.4. We set the initial number of primitives $M$ to 8. The temperature parameter $\\gamma$ in Eq. 8 is 0.005 and the overlapping number $k$ in Eq. 9 is 1.95. We train the hybrid representation for 30k iterations, followed by refinement optimization for another 30k iterations. During block-level optimization, the block-adding operation is carried out at the 5k-th and 10k-th iterations. The loss weights $\\lambda_{\\mathrm{cov}}$ , $\\lambda_{\\mathrm{over}}$ , $\\lambda_{\\mathrm{par}}$ , and $\\lambda_{\\mathrm{opa}}$ are set to 10, 1, 0.002, and 0.01, respectively. For more details about datasets, metrics, and baselines, please refer to the Supplementary.",
"bbox": [
511,
674,
908,
901
],
"page_idx": 4
},
{
"type": "page_number",
"text": "9653",
"bbox": [
482,
944,
514,
955
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/9a2893f0459bd4436057f44847546ed02f6e3881a4ec7cc693ace7254912d196.jpg",
"image_caption": [
"Figure 3. Qualitative comparison on DTU [23] and ShapeNet [6]. The first two rows are DTU examples, and the last two are ShapeNet examples, respectively. Our method is the only one that provides reasonable 3D part decomposition while capturing detailed geometry."
],
"image_footnote": [],
"bbox": [
91,
85,
911,
397
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/8e5f3de70384992d0ad68fdd1fc6188968ad3b4bf69304e88bbf5d51915da125.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Input</td><td rowspan=\"2\">Renderable</td><td colspan=\"15\">Chamfer distance per scene</td><td>Mean</td><td>Mean</td></tr><tr><td>S24</td><td>S37</td><td>S40</td><td>S55</td><td>S63</td><td>S65</td><td>S69</td><td>S83</td><td>S97</td><td>S105</td><td>S106</td><td>S110</td><td>S114</td><td>S118</td><td>S122</td><td>CD</td><td>#P</td></tr><tr><td>EMS [32]</td><td>3D GT</td><td>X</td><td>6.32</td><td>3.54</td><td>2.99</td><td>4.30</td><td>4.16</td><td>4.01</td><td>3.75</td><td>3.24</td><td>4.97</td><td>4.34</td><td>4.16</td><td>7.62</td><td>7.58</td><td>4.46</td><td>4.03</td><td>4.65</td><td>7.7</td></tr><tr><td>MBF [41]</td><td>3D GT</td><td>X</td><td>3.12</td><td>2.66</td><td>3.84</td><td>2.54</td><td>1.59</td><td>2.11</td><td>2.19</td><td>2.01</td><td>2.32</td><td>2.45</td><td>2.17</td><td>2.12</td><td>3.83</td><td>2.02</td><td>2.55</td><td>2.50</td><td>34.1</td></tr><tr><td>EMS [32] + Neus [54]</td><td>Image</td><td>X</td><td>5.99</td><td>5.56</td><td>4.43</td><td>4.32</td><td>5.42</td><td>6.14</td><td>3.75</td><td>3.96</td><td>4.63</td><td>4.34</td><td>5.88</td><td>5.11</td><td>4.29</td><td>4.83</td><td>3.53</td><td>4.97</td><td>8.87</td></tr><tr><td>MBF [41] + Neus [54]</td><td>Image</td><td>X</td><td>2.69</td><td>3.37</td><td>3.22</td><td>2.69</td><td>3.63</td><td>2.60</td><td>2.59</td><td>3.13</td><td>2.85</td><td>2.51</td><td>2.45</td><td>3.72</td><td>2.24</td><td>2.49</td><td>2.52</td><td>2.85</td><td>46.7</td></tr><tr><td>PartNeRF [47]</td><td>Image</td><td>✓</td><td>9.38</td><td>10.46</td><td>9.08</td><td>8.63</td><td>6.04</td><td>7.25</td><td>7.22</td><td>9.15</td><td>8.72</td><td>10.01</td><td>6.72</td><td>9.85</td><td>7.85</td><td>8.68</td><td>9.21</td><td>8.54</td><td>8.0</td></tr><tr><td>DBW [36]</td><td>Image</td><td>✓</td><td>5.41</td><td>8.35</td><td>1.57</td><td>3.08</td><td>3.40</td><td>4.15</td><td>7.46</td><td>3.94</td><td>6.63</td><td>4.85</td><td>4.38</td><td>4.65</td><td>6.29</td><td>4.34</td><td>3.04</td><td>4.76</td><td>4.8</td></tr><tr><td>Ours (Block-level)</td><td>Image</td><td>✓</td><td>5.68</td><td>4.91</td><td>1.85</td><td>2.61</td><td>3.75</td><td>4.66</td><td>3.75</td><td>7.57</td><td>4.27</td><td>4.38</td><td>3.49</td><td>4.48</td><td>3.61</td><td>4.21</td><td>3.70</td><td>4.19</td><td>5.9</td></tr><tr><td>Ours (Point-level)</td><td>Image</td><td>✓</td><td>0.70</td><td>1.17</td><td>0.55</td><td>0.65</td><td>1.06</td><td>1.23</td><td>1.10</td><td>1.36</td><td>1.37</td><td>0.78</td><td>0.92</td><td>1.41</td><td>0.69</td><td>1.05</td><td>0.71</td><td>0.98</td><td>5.9</td></tr></table>",
"bbox": [
91,
455,
903,
578
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/0a458dcd15aa25583c5f3d0ea2026d6871595758658bcfb85d33ef84a9c85d01.jpg",
"table_caption": [
"Table 1. Quantitative comparison on DTU [23]. The Chamfer distance between the 3D reconstruction and the ground-truth is reported in 15 scenes. The best results are bolded, and the average numbers of primitives found (#P) that are greater than 10 are underlined."
],
"table_footnote": [
"Table 2. Quantitative results on DTU [23]. Our method outperforms all part-aware approaches in image synthesis quality, reconstruction accuracy, and efficiency. Neuralangelo's results are from the original paper, with all times measured on an RTX 3090 GPU."
],
"table_body": "<table><tr><td>Method</td><td>Part-aware</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Time ↓</td></tr><tr><td>Neuralangelo [30]</td><td>X</td><td>0.61</td><td>33.84</td><td>-</td><td>-</td><td>> 10 h</td></tr><tr><td>2DGS [21]</td><td>X</td><td>0.81</td><td>34.07</td><td>0.99</td><td>0.019</td><td>~ 10 m</td></tr><tr><td>PartNeRF [47]</td><td>✓</td><td>9.59</td><td>17.97</td><td>0.77</td><td>0.246</td><td>~ 8 h</td></tr><tr><td>DBW [36]</td><td>✓</td><td>4.73</td><td>16.44</td><td>0.75</td><td>0.201</td><td>~ 2 h</td></tr><tr><td>Ours (Block-level)</td><td>✓</td><td>4.19</td><td>19.84</td><td>0.82</td><td>0.189</td><td>~ 30 m</td></tr><tr><td>Ours (Point-level)</td><td>✓</td><td>0.98</td><td>35.04</td><td>0.99</td><td>0.015</td><td>~ 40 m</td></tr></table>",
"bbox": [
93,
632,
480,
710
],
"page_idx": 5
},
{
"type": "text",
"text": "4.3. Evaluations",
"text_level": 1,
"bbox": [
89,
794,
218,
809
],
"page_idx": 5
},
{
"type": "text",
"text": "4.3.1. DTU and ShapeNet Benchmark",
"text_level": 1,
"bbox": [
89,
819,
356,
834
],
"page_idx": 5
},
{
"type": "text",
"text": "Geometry Reconstruction. In Tab. 1 and Tab. 3, we compare our geometry reconstruction to SOTA shape decomposition methods on Chamfer distance and training time using the DTU and ShapeNet datasets. The Chamfer distance",
"bbox": [
89,
839,
485,
900
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/a74c8f1cb64e244e76ba97da6521ae10573bb16f939493226b77961fd2ab850d.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Input</td><td colspan=\"4\">Chamfer Distance ↓</td><td colspan=\"4\">Primitives (#P)</td><td>Mean</td><td>Mean</td></tr><tr><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>CD</td><td>#P</td></tr><tr><td>EMS [32]</td><td>3D GT</td><td>3.40</td><td>6.92</td><td>19.0</td><td>2.02</td><td>9.4</td><td>7.88</td><td>10.3</td><td>8.4</td><td>7.84</td><td>9.0</td></tr><tr><td>MBF [41]</td><td>3D GT</td><td>2.83</td><td>2.18</td><td>1.59</td><td>2.32</td><td>10.85</td><td>13.9</td><td>13.4</td><td>14.3</td><td>2.21</td><td>13.1</td></tr><tr><td>PartNeRF [47]</td><td>Image</td><td>2.29</td><td>2.77</td><td>2.30</td><td>2.46</td><td>8.0</td><td>8.0</td><td>8.0</td><td>8.0</td><td>2.46</td><td>8.0</td></tr><tr><td>DBW [36]</td><td>Image</td><td>3.61</td><td>7.33</td><td>6.19</td><td>2.09</td><td>2.7</td><td>5.2</td><td>3.6</td><td>3.3</td><td>4.81</td><td>3.7</td></tr><tr><td>Ours (Block-level)</td><td>Image</td><td>2.47</td><td>2.15</td><td>2.32</td><td>1.78</td><td>3.9</td><td>6.6</td><td>7.6</td><td>5.0</td><td>2.18</td><td>5.8</td></tr><tr><td>Ours (Point-level)</td><td>Image</td><td>1.29</td><td>1.72</td><td>0.94</td><td>1.07</td><td>3.9</td><td>6.6</td><td>7.6</td><td>5.0</td><td>1.25</td><td>5.8</td></tr></table>",
"bbox": [
514,
632,
905,
720
],
"page_idx": 5
},
{
"type": "text",
"text": "Table 3. Quantitative comparison on ShapeNet [6]. We report Chamfer distance and the number of parts. The best results are bolded, and the second-best results are underlined.",
"bbox": [
511,
729,
906,
771
],
"page_idx": 5
},
{
"type": "text",
"text": "metrics reported for ShapeNet are scaled by a factor of 100 for readability. Our method consistently outperforms prior works in all scenes. As shown in Fig. 3, our approach produces interpretable 3D decompositions, with further refinement achieving more detailed geometry (the last two columns). MBF [41] achieves a low CD error but at the cost of using significantly more prim",
"bbox": [
511,
794,
908,
902
],
"page_idx": 5
},
{
"type": "page_number",
"text": "9654",
"bbox": [
482,
944,
514,
955
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/c0ed874782a95993fdfe8b944acb6d17edbc29d67c181e5cee8edef5ddc27757.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Part-aware</td><td colspan=\"4\">Chamfer Distance ↓</td><td colspan=\"4\">PSNR ↑</td><td colspan=\"4\">SSIM ↑</td><td colspan=\"4\">LPIPS ↓</td></tr><tr><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td></tr><tr><td>2DGS [21]</td><td>✗</td><td>1.47</td><td>2.37</td><td>0.50</td><td>1.03</td><td>40.89</td><td>39.80</td><td>39.05</td><td>41.74</td><td>0.994</td><td>0.990</td><td>0.990</td><td>0.994</td><td>0.009</td><td>0.027</td><td>0.017</td><td>0.009</td></tr><tr><td>PartNeRF [47]</td><td>✓</td><td>2.29</td><td>2.77</td><td>2.30</td><td>2.46</td><td>19.63</td><td>20.66</td><td>19.08</td><td>21.97</td><td>0.898</td><td>0.855</td><td>0.875</td><td>0.916</td><td>0.086</td><td>0.161</td><td>0.136</td><td>0.083</td></tr><tr><td>DBW [36]</td><td>✓</td><td>3.61</td><td>7.33</td><td>6.19</td><td>2.09</td><td>26.11</td><td>23.84</td><td>20.25</td><td>28.72</td><td>0.950</td><td>0.915</td><td>0.892</td><td>0.960</td><td>0.074</td><td>0.136</td><td>0.132</td><td>0.042</td></tr><tr><td>Ours (Block-level)</td><td>✓</td><td>2.47</td><td>2.15</td><td>2.32</td><td>1.78</td><td>27.94</td><td>27.98</td><td>24.92</td><td>29.95</td><td>0.959</td><td>0.925</td><td>0.906</td><td>0.963</td><td>0.072</td><td>0.129</td><td>0.122</td><td>0.047</td></tr><tr><td>Ours (Point-level)</td><td>✓</td><td>1.29</td><td>1.72</td><td>0.94</td><td>1.07</td><td>41.18</td><td>36.80</td><td>36.07</td><td>39.51</td><td>0.992</td><td>0.973</td><td>0.977</td><td>0.989</td><td>0.014</td><td>0.070</td><td>0.038</td><td>0.021</td></tr></table>",
"bbox": [
91,
93,
903,
176
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/24ae7e5a67815b404bd07fee031db8fba68f96091208d7f64af3ec24f97d31af.jpg",
"image_caption": [
"Figure 4. Qualitative results on BlendedMVS [61] and self-captured data. We present RGB renderings and decomposed parts from novel views. The top examples are from the BlendedMVS dataset, and the last example is from our captured scenes."
],
"image_footnote": [],
"bbox": [
91,
218,
230,
398
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/c926f3ba22a56d86a32c5e06a2a597aa03f4573cc5f5d110f7785cc401a7d7e3.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
245,
218,
310,
398
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/f27da3159df474a17048759326d7e7f2cf043bbb41b0910ae41fe2687b249422.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
325,
218,
387,
388
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/0a9fecf992c3b20d510816d220905d3c02c3f955ff86d2a7b933ec78efbd4807.jpg",
"image_caption": [
"Figure 6. Qualitative comparison with SAM-based methods. Visual results demonstrate that our method produces more structurally coherent decompositions, whereas SAM-based approaches frequently exhibit spatial discontinuities."
],
"image_footnote": [],
|
| 1275 |
+
"bbox": [
|
| 1276 |
+
405,
|
| 1277 |
+
218,
|
| 1278 |
+
470,
|
| 1279 |
+
388
|
| 1280 |
+
],
|
| 1281 |
+
"page_idx": 6
|
| 1282 |
+
},
|
| 1283 |
+
{
|
| 1284 |
+
"type": "image",
|
| 1285 |
+
"img_path": "images/0cf1c6b0b7ee14a1103a679235523a6b7e3143fe7142ee49d274421f4a041d4a.jpg",
|
| 1286 |
+
"image_caption": [
|
| 1287 |
+
"Figure 5. Ablation studies on key strategies. The block-level visual comparisons illustrate the impact of adopting our proposed strategy. The first row shows results without the strategy, and the second with the strategy implemented."
|
| 1288 |
+
],
|
| 1289 |
+
"image_footnote": [],
|
| 1290 |
+
"bbox": [
|
| 1291 |
+
91,
|
| 1292 |
+
484,
|
| 1293 |
+
478,
|
| 1294 |
+
647
|
| 1295 |
+
],
|
| 1296 |
+
"page_idx": 6
|
| 1297 |
+
},
|
| 1298 |
+
{
|
| 1299 |
+
"type": "text",
|
| 1300 |
+
"text": "ities, leading to over-decomposition and the inclusion of meaningless parts. Moreover, as shown in Tab. 2 and Tab. 4, our approach achieves competitive results with the advanced Gaussian-based 2DGS [21] and surface reconstruction Neuralangelo [30], which are limited to producing unstructured meshes. Notably, our model demonstrates excellent efficiency, with a reconstruction speed approximately $10\\mathrm{X}$ faster than PartNeRF [47] and over $3\\mathrm{X}$ faster than DBW [36].",
|
| 1301 |
+
"bbox": [
|
| 1302 |
+
89,
|
| 1303 |
+
732,
|
| 1304 |
+
483,
|
| 1305 |
+
853
|
| 1306 |
+
],
|
| 1307 |
+
"page_idx": 6
|
| 1308 |
+
},
|
| 1309 |
+
{
|
| 1310 |
+
"type": "text",
|
| 1311 |
+
"text": "Appearance Reconstruction. In addition to part decomposition, our method enables high-fidelity image synthesis, attributed to integrating Gaussian splatting within hybrid",
|
| 1312 |
+
"bbox": [
|
| 1313 |
+
89,
|
| 1314 |
+
854,
|
| 1315 |
+
483,
|
| 1316 |
+
901
|
| 1317 |
+
],
|
| 1318 |
+
"page_idx": 6
|
| 1319 |
+
},
|
| 1320 |
+
{
|
| 1321 |
+
"type": "image",
|
| 1322 |
+
"img_path": "images/307d665fea83eea93e22db493cd3d7d1a8350fa763b31b558dd767118b793551.jpg",
|
| 1323 |
+
"image_caption": [],
|
| 1324 |
+
"image_footnote": [],
|
| 1325 |
+
"bbox": [
|
| 1326 |
+
514,
|
| 1327 |
+
218,
|
| 1328 |
+
589,
|
| 1329 |
+
369
|
| 1330 |
+
],
|
| 1331 |
+
"page_idx": 6
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "image",
|
| 1335 |
+
"img_path": "images/70407c2d856a6e1c769162ddf9593c72e1736e682f1fbac84e02effa1cfda752.jpg",
|
| 1336 |
+
"image_caption": [],
|
| 1337 |
+
"image_footnote": [],
|
| 1338 |
+
"bbox": [
|
| 1339 |
+
591,
|
| 1340 |
+
218,
|
| 1341 |
+
709,
|
| 1342 |
+
371
|
| 1343 |
+
],
|
| 1344 |
+
"page_idx": 6
|
| 1345 |
+
},
|
| 1346 |
+
{
|
| 1347 |
+
"type": "image",
|
| 1348 |
+
"img_path": "images/10cdb9e196463b04c8405d36179586a3b92d152ed2795781a4014c35037727cf.jpg",
|
| 1349 |
+
"image_caption": [],
|
| 1350 |
+
"image_footnote": [],
|
| 1351 |
+
"bbox": [
|
| 1352 |
+
709,
|
| 1353 |
+
218,
|
| 1354 |
+
823,
|
| 1355 |
+
371
|
| 1356 |
+
],
|
| 1357 |
+
"page_idx": 6
|
| 1358 |
+
},
|
| 1359 |
+
{
|
| 1360 |
+
"type": "image",
|
| 1361 |
+
"img_path": "images/4111f7b72c108581367c01f4837d509695d917e6f151090dc0b0072f30e0d65e.jpg",
|
| 1362 |
+
"image_caption": [],
|
| 1363 |
+
"image_footnote": [],
|
| 1364 |
+
"bbox": [
|
| 1365 |
+
823,
|
| 1366 |
+
218,
|
| 1367 |
+
903,
|
| 1368 |
+
371
|
| 1369 |
+
],
|
| 1370 |
+
"page_idx": 6
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "table",
|
| 1374 |
+
"img_path": "images/ec3d9a1a29ac736dfa20c8af29ad64edcb2050e51c874a82c80940963d1f7e11.jpg",
|
| 1375 |
+
"table_caption": [
|
| 1376 |
+
"Table 4. Quantitative results on ShapeNet [6]. We report the Chamfer distance and novel view synthesis results across four categories."
|
| 1377 |
+
],
|
| 1378 |
+
"table_footnote": [],
|
| 1379 |
+
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Block-level</td><td colspan=\"4\">Point-level</td><td rowspan=\"2\">#P</td></tr><tr><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>Complete model</td><td>2.32</td><td>24.92</td><td>0.906</td><td>0.122</td><td>0.94</td><td>36.07</td><td>0.977</td><td>0.038</td><td>6.6</td></tr><tr><td>w/o Lover</td><td>2.11</td><td>25.17</td><td>0.914</td><td>0.118</td><td>1.34</td><td>36.04</td><td>0.977</td><td>0.037</td><td>7.6</td></tr><tr><td>w/o Lopa</td><td>5.72</td><td>21.12</td><td>0.853</td><td>0.174</td><td>1.06</td><td>35.60</td><td>0.975</td><td>0.040</td><td>3.8</td></tr><tr><td>w/o Lcov</td><td>3.21</td><td>22.96</td><td>0.896</td><td>0.134</td><td>1.34</td><td>36.00</td><td>0.976</td><td>0.038</td><td>8.53</td></tr><tr><td>w/o Adaptive</td><td>3.16</td><td>22.74</td><td>0.880</td><td>0.141</td><td>1.05</td><td>35.74</td><td>0.974</td><td>0.040</td><td>6.7</td></tr><tr><td>w/o Lpar</td><td>1.91</td><td>25.48</td><td>0.918</td><td>0.115</td><td>0.93</td><td>36.18</td><td>0.977</td><td>0.037</td><td>10.1</td></tr></table>",
|
| 1380 |
+
"bbox": [
|
| 1381 |
+
516,
|
| 1382 |
+
454,
|
| 1383 |
+
903,
|
| 1384 |
+
534
|
| 1385 |
+
],
|
| 1386 |
+
"page_idx": 6
|
| 1387 |
+
},
|
| 1388 |
+
{
|
| 1389 |
+
"type": "text",
|
| 1390 |
+
"text": "Table 5. Ablation studies on ShapeNet [6]. We report Chamfer distance, rendering metrics, and the number of primitives (#P).",
|
| 1391 |
+
"bbox": [
|
| 1392 |
+
511,
|
| 1393 |
+
544,
|
| 1394 |
+
906,
|
| 1395 |
+
573
|
| 1396 |
+
],
|
| 1397 |
+
"page_idx": 6
|
| 1398 |
+
},
|
| 1399 |
+
{
|
| 1400 |
+
"type": "text",
|
| 1401 |
+
"text": "representations. EMS [32] and MBF [41] operate directly on point clouds and consequently lack image rendering capabilities, while PartNeRF [47] and DBW [36] yield low-quality view synthesis. In contrast, our method achieves high-quality appearance rendering results, as demonstrated in Tab. 2 and Tab. 4.",
|
| 1402 |
+
"bbox": [
|
| 1403 |
+
511,
|
| 1404 |
+
601,
|
| 1405 |
+
906,
|
| 1406 |
+
690
|
| 1407 |
+
],
|
| 1408 |
+
"page_idx": 6
|
| 1409 |
+
},
|
| 1410 |
+
{
|
| 1411 |
+
"type": "text",
|
| 1412 |
+
"text": "4.3.2. Real-life Data",
|
| 1413 |
+
"text_level": 1,
|
| 1414 |
+
"bbox": [
|
| 1415 |
+
511,
|
| 1416 |
+
700,
|
| 1417 |
+
656,
|
| 1418 |
+
714
|
| 1419 |
+
],
|
| 1420 |
+
"page_idx": 6
|
| 1421 |
+
},
|
| 1422 |
+
{
|
| 1423 |
+
"type": "text",
|
| 1424 |
+
"text": "To further demonstrate the applicability of our method for learning shape decomposition, we test our model on real-life images from the BlendedMVS dataset [61] and a self-captured dataset. As shown in Fig. 4, our approach robustly produces both realistic appearances and reasonable 3D decompositions across a variety of data types. More results are provided in the supplementary material.",
|
| 1425 |
+
"bbox": [
|
| 1426 |
+
511,
|
| 1427 |
+
720,
|
| 1428 |
+
905,
|
| 1429 |
+
827
|
| 1430 |
+
],
|
| 1431 |
+
"page_idx": 6
|
| 1432 |
+
},
|
| 1433 |
+
{
|
| 1434 |
+
"type": "text",
|
| 1435 |
+
"text": "4.3.3. Compared to SAM-based Methods",
|
| 1436 |
+
"text_level": 1,
|
| 1437 |
+
"bbox": [
|
| 1438 |
+
511,
|
| 1439 |
+
835,
|
| 1440 |
+
802,
|
| 1441 |
+
849
|
| 1442 |
+
],
|
| 1443 |
+
"page_idx": 6
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "text",
|
| 1447 |
+
"text": "We also conduct comparisons with SAM-based [27] methods on this task, as shown in Fig. 6. Despite impressive advances in 3D segmentation and editing [5, 26, 62, 63] by",
|
| 1448 |
+
"bbox": [
|
| 1449 |
+
511,
|
| 1450 |
+
854,
|
| 1451 |
+
906,
|
| 1452 |
+
900
|
| 1453 |
+
],
|
| 1454 |
+
"page_idx": 6
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "page_number",
|
| 1458 |
+
"text": "9655",
|
| 1459 |
+
"bbox": [
|
| 1460 |
+
482,
|
| 1461 |
+
944,
|
| 1462 |
+
514,
|
| 1463 |
+
955
|
| 1464 |
+
],
|
| 1465 |
+
"page_idx": 6
|
| 1466 |
+
},
|
| 1467 |
+
{
|
| 1468 |
+
"type": "image",
|
| 1469 |
+
"img_path": "images/c82033cd52c3d624329aede2b26145e5a5fa59c98af17cea26b89603aa1ec4e6.jpg",
|
| 1470 |
+
"image_caption": [
|
| 1471 |
+
"Figure 7. Impact of Gaussian scale regularization (SR). The degraded mesh produced by 2DGS [21] is effectively improved."
|
| 1472 |
+
],
|
| 1473 |
+
"image_footnote": [],
|
| 1474 |
+
"bbox": [
|
| 1475 |
+
93,
|
| 1476 |
+
85,
|
| 1477 |
+
480,
|
| 1478 |
+
159
|
| 1479 |
+
],
|
| 1480 |
+
"page_idx": 7
|
| 1481 |
+
},
|
| 1482 |
+
{
|
| 1483 |
+
"type": "image",
|
| 1484 |
+
"img_path": "images/143f6800a56837a75210e2b7e2342ddd66ce215d4f93d1ed9af327ee18b64fad.jpg",
|
| 1485 |
+
"image_caption": [
|
| 1486 |
+
"Figure 8. Impact of the initial number of primitives $(M)$. Increasing $M$ yields a finer-grained decomposition, while decreasing it produces a coarser decomposition."
|
| 1487 |
+
],
|
| 1488 |
+
"image_footnote": [],
|
| 1489 |
+
"bbox": [
|
| 1490 |
+
91,
|
| 1491 |
+
210,
|
| 1492 |
+
202,
|
| 1493 |
+
286
|
| 1494 |
+
],
|
| 1495 |
+
"page_idx": 7
|
| 1496 |
+
},
|
| 1497 |
+
{
|
| 1498 |
+
"type": "image",
|
| 1499 |
+
"img_path": "images/4fe97c9d7eee1036ca983cb6d1ff5a65d638c7a2d84c52711fd4a9c3ebf044ab.jpg",
|
| 1500 |
+
"image_caption": [],
|
| 1501 |
+
"image_footnote": [],
|
| 1502 |
+
"bbox": [
|
| 1503 |
+
205,
|
| 1504 |
+
210,
|
| 1505 |
+
290,
|
| 1506 |
+
281
|
| 1507 |
+
],
|
| 1508 |
+
"page_idx": 7
|
| 1509 |
+
},
|
| 1510 |
+
{
|
| 1511 |
+
"type": "image",
|
| 1512 |
+
"img_path": "images/b9e5052eafdcbb60ada0535cf39aab79f0c65599843c8f28937ab275a24121cd.jpg",
|
| 1513 |
+
"image_caption": [],
|
| 1514 |
+
"image_footnote": [],
|
| 1515 |
+
"bbox": [
|
| 1516 |
+
292,
|
| 1517 |
+
210,
|
| 1518 |
+
380,
|
| 1519 |
+
281
|
| 1520 |
+
],
|
| 1521 |
+
"page_idx": 7
|
| 1522 |
+
},
|
| 1523 |
+
{
|
| 1524 |
+
"type": "image",
|
| 1525 |
+
"img_path": "images/37e420abf3cb308307587cda0307fd45dcd6cef15b6656db7d4300f0038b3a18.jpg",
|
| 1526 |
+
"image_caption": [],
|
| 1527 |
+
"image_footnote": [],
|
| 1528 |
+
"bbox": [
|
| 1529 |
+
390,
|
| 1530 |
+
210,
|
| 1531 |
+
473,
|
| 1532 |
+
284
|
| 1533 |
+
],
|
| 1534 |
+
"page_idx": 7
|
| 1535 |
+
},
|
| 1536 |
+
{
|
| 1537 |
+
"type": "image",
|
| 1538 |
+
"img_path": "images/2fcb7e24646fc7aec39c0b4580b93c9fd57fc7ce80c30d2386654825a3e18cd5.jpg",
|
| 1539 |
+
"image_caption": [
|
| 1540 |
+
"Figure 9. Applications. Part-Aware Editing (top): After optimization, we can easily edit the scene by adding, scaling, or moving specific parts. 3D Content Generation (bottom): By combining parts of different objects, we can create new 3D content."
|
| 1541 |
+
],
|
| 1542 |
+
"image_footnote": [],
|
| 1543 |
+
"bbox": [
|
| 1544 |
+
93,
|
| 1545 |
+
345,
|
| 1546 |
+
483,
|
| 1547 |
+
498
|
| 1548 |
+
],
|
| 1549 |
+
"page_idx": 7
|
| 1550 |
+
},
|
| 1551 |
+
{
|
| 1552 |
+
"type": "table",
|
| 1553 |
+
"img_path": "images/39608ccb0875c5ec68a808e27c2807976861aa0c9c971369efe22f9aae7c3dba.jpg",
|
| 1554 |
+
"table_caption": [],
|
| 1555 |
+
"table_footnote": [],
|
| 1556 |
+
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">Block-level</td><td colspan=\"4\">Point-level</td><td rowspan=\"2\">#P</td></tr><tr><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>λpar=0.1</td><td>6.46</td><td>16.10</td><td>0.721</td><td>0.255</td><td>1.08</td><td>34.30</td><td>0.987</td><td>0.018</td><td>2.7</td></tr><tr><td>λpar=0.01 (*)</td><td>4.19</td><td>19.84</td><td>0.820</td><td>0.189</td><td>1.05</td><td>35.04</td><td>0.988</td><td>0.015</td><td>5.9</td></tr><tr><td>λpar=0.001</td><td>4.01</td><td>20.18</td><td>0.835</td><td>0.180</td><td>1.03</td><td>35.04</td><td>0.988</td><td>0.015</td><td>7.1</td></tr><tr><td>M=4</td><td>4.97</td><td>19.06</td><td>0.794</td><td>0.213</td><td>1.06</td><td>34.93</td><td>0.987</td><td>0.016</td><td>5.0</td></tr><tr><td>M=8 (*)</td><td>4.19</td><td>19.84</td><td>0.820</td><td>0.189</td><td>1.05</td><td>35.04</td><td>0.988</td><td>0.015</td><td>5.9</td></tr><tr><td>M=16</td><td>3.99</td><td>20.87</td><td>0.83</td><td>0.176</td><td>1.07</td><td>35.05</td><td>0.988</td><td>0.015</td><td>8.3</td></tr></table>",
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
93,
|
| 1559 |
+
579,
|
| 1560 |
+
480,
|
| 1561 |
+
667
|
| 1562 |
+
],
|
| 1563 |
+
"page_idx": 7
|
| 1564 |
+
},
|
| 1565 |
+
{
|
| 1566 |
+
"type": "text",
|
| 1567 |
+
"text": "Table 6. Effect of the parsimony weight $(\\lambda_{\\mathrm{par}})$ and initial primitive count $(M)$ on DTU [23]. We report Chamfer distance, rendering metrics, and the primitive count (#P). * denotes the default setting.",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
89,
|
| 1570 |
+
678,
|
| 1571 |
+
482,
|
| 1572 |
+
720
|
| 1573 |
+
],
|
| 1574 |
+
"page_idx": 7
|
| 1575 |
+
},
|
| 1576 |
+
{
|
| 1577 |
+
"type": "text",
|
| 1578 |
+
"text": "distilling 2D information, achieving semantic disentanglement remains challenging due to indistinct textures and severe cross-view 2D inconsistencies. Lifting 2D segmentation features into 3D also introduces feature misalignment, leading to visual artifacts. In contrast, our method performs segmentation directly in 3D space, yielding more reasonable and structurally coherent decompositions.",
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
89,
|
| 1581 |
+
729,
|
| 1582 |
+
483,
|
| 1583 |
+
835
|
| 1584 |
+
],
|
| 1585 |
+
"page_idx": 7
|
| 1586 |
+
},
|
| 1587 |
+
{
|
| 1588 |
+
"type": "text",
|
| 1589 |
+
"text": "4.4. Ablations",
|
| 1590 |
+
"text_level": 1,
|
| 1591 |
+
"bbox": [
|
| 1592 |
+
89,
|
| 1593 |
+
847,
|
| 1594 |
+
202,
|
| 1595 |
+
862
|
| 1596 |
+
],
|
| 1597 |
+
"page_idx": 7
|
| 1598 |
+
},
|
| 1599 |
+
{
|
| 1600 |
+
"type": "text",
|
| 1601 |
+
"text": "In this section, we first analyze the design choices by isolating each strategy to assess its impact. We report the averaged",
|
| 1602 |
+
"bbox": [
|
| 1603 |
+
89,
|
| 1604 |
+
869,
|
| 1605 |
+
483,
|
| 1606 |
+
902
|
| 1607 |
+
],
|
| 1608 |
+
"page_idx": 7
|
| 1609 |
+
},
|
| 1610 |
+
{
|
| 1611 |
+
"type": "text",
|
| 1612 |
+
"text": "quantitative performance on 15 instances of the chair category in Tab. 5 and provide visual comparisons in Fig. 5. Removing the overlap loss yields better reconstruction accuracy but more overlapping parts. The opacity loss drastically affects the number of primitives and doubles the CD, highlighting its role in controlling primitive presence and reconstruction quality. The coverage loss is crucial for reconstruction accuracy, ensuring primitives align correctly with the target. The adaptive primitive strategy fills in missing parts and improves the reconstruction metric. Without the parsimony loss, CD and rendering metrics are optimal, but the result is over-decomposition. Note that the full model does not yield the highest accuracy because part-aware reconstruction trades off accuracy against the rationality of the part decomposition. The objective is to achieve high accuracy while preserving reasonable parts.",
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
511,
|
| 1615 |
+
90,
|
| 1616 |
+
906,
|
| 1617 |
+
332
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 7
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "text",
|
| 1623 |
+
"text": "We also validate the effectiveness of Gaussian scale regularization in Fig. 7. 2DGS [21] tends to use large-scale Gaussians in texture-less areas, creating holes in the mesh constructed by TSDF integration [67]. These holes stem from the inability of large-scale Gaussians to maintain view-consistent depth. Scale regularization effectively addresses this issue by suppressing large-scale Gaussians.",
|
| 1624 |
+
"bbox": [
|
| 1625 |
+
511,
|
| 1626 |
+
332,
|
| 1627 |
+
908,
|
| 1628 |
+
436
|
| 1629 |
+
],
|
| 1630 |
+
"page_idx": 7
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "text",
|
| 1634 |
+
"text": "Lastly, in Tab. 6, we analyze the influence of two key hyperparameters on decomposition granularity: the parsimony loss weight $\\lambda_{\\mathrm{par}}$ and the initial primitive count $M$ . Stronger parsimony regularization reduces the number of primitives, while weaker regularization increases it. As $M$ rises, both reconstruction and view synthesis performance slightly improve. This demonstrates that adjusting $M$ and $\\lambda_{\\mathrm{par}}$ enables our method to effectively control the granularity of object or scene decomposition. Fig. 8 visually illustrates this impact.",
|
| 1635 |
+
"bbox": [
|
| 1636 |
+
511,
|
| 1637 |
+
438,
|
| 1638 |
+
908,
|
| 1639 |
+
575
|
| 1640 |
+
],
|
| 1641 |
+
"page_idx": 7
|
| 1642 |
+
},
|
| 1643 |
+
{
|
| 1644 |
+
"type": "text",
|
| 1645 |
+
"text": "4.5. Applications",
|
| 1646 |
+
"text_level": 1,
|
| 1647 |
+
"bbox": [
|
| 1648 |
+
511,
|
| 1649 |
+
583,
|
| 1650 |
+
648,
|
| 1651 |
+
599
|
| 1652 |
+
],
|
| 1653 |
+
"page_idx": 7
|
| 1654 |
+
},
|
| 1655 |
+
{
|
| 1656 |
+
"type": "text",
|
| 1657 |
+
"text": "Fig. 9 illustrates two applications of our method that the original 3DGS and NeRF-based approaches do not support. Firstly, after optimization, we obtain the part decomposition, facilitating easy editing of specific objects or scene components, e.g., adding, moving, removing, or scaling. Secondly, by combining parts of different objects, our method enables the creation of new high-quality 3D content.",
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
511,
|
| 1660 |
+
604,
|
| 1661 |
+
908,
|
| 1662 |
+
712
|
| 1663 |
+
],
|
| 1664 |
+
"page_idx": 7
|
| 1665 |
+
},
|
| 1666 |
+
{
|
| 1667 |
+
"type": "text",
|
| 1668 |
+
"text": "5. Conclusions",
|
| 1669 |
+
"text_level": 1,
|
| 1670 |
+
"bbox": [
|
| 1671 |
+
511,
|
| 1672 |
+
724,
|
| 1673 |
+
640,
|
| 1674 |
+
739
|
| 1675 |
+
],
|
| 1676 |
+
"page_idx": 7
|
| 1677 |
+
},
|
| 1678 |
+
{
|
| 1679 |
+
"type": "text",
|
| 1680 |
+
"text": "We introduce PartGS, a hybrid representation of superquadrics and 2D Gaussians, to learn part-aware representations of 3D scenes. Compared to prior works, the proposed method retains geometric details and supports high-quality image rendering, achieving state-of-the-art performance in comprehensive evaluations. One limitation is that it requires background-free scene images as input, obtained with segmentation tools such as SAM [27]. In the future, we aim to explore how to model backgrounds and extend to larger and more complex scenes.",
|
| 1681 |
+
"bbox": [
|
| 1682 |
+
511,
|
| 1683 |
+
750,
|
| 1684 |
+
908,
|
| 1685 |
+
900
|
| 1686 |
+
],
|
| 1687 |
+
"page_idx": 7
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "page_number",
|
| 1691 |
+
"text": "9656",
|
| 1692 |
+
"bbox": [
|
| 1693 |
+
482,
|
| 1694 |
+
944,
|
| 1695 |
+
514,
|
| 1696 |
+
955
|
| 1697 |
+
],
|
| 1698 |
+
"page_idx": 7
|
| 1699 |
+
},
|
| 1700 |
+
{
|
| 1701 |
+
"type": "text",
|
| 1702 |
+
"text": "Acknowledgements",
|
| 1703 |
+
"text_level": 1,
|
| 1704 |
+
"bbox": [
|
| 1705 |
+
91,
|
| 1706 |
+
90,
|
| 1707 |
+
258,
|
| 1708 |
+
107
|
| 1709 |
+
],
|
| 1710 |
+
"page_idx": 8
|
| 1711 |
+
},
|
| 1712 |
+
{
|
| 1713 |
+
"type": "text",
|
| 1714 |
+
"text": "This work is supported in part by the NSFC (62325211, 62132021, 62372457), the Major Program of Xiangjiang Laboratory (23XJ01009), Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), the Natural Science Foundation of Hunan Province of China (2022RC1104).",
|
| 1715 |
+
"bbox": [
|
| 1716 |
+
89,
|
| 1717 |
+
113,
|
| 1718 |
+
485,
|
| 1719 |
+
183
|
| 1720 |
+
],
|
| 1721 |
+
"page_idx": 8
|
| 1722 |
+
},
|
| 1723 |
+
{
|
| 1724 |
+
"type": "text",
|
| 1725 |
+
"text": "References",
|
| 1726 |
+
"text_level": 1,
|
| 1727 |
+
"bbox": [
|
| 1728 |
+
91,
|
| 1729 |
+
210,
|
| 1730 |
+
187,
|
| 1731 |
+
226
|
| 1732 |
+
],
|
| 1733 |
+
"page_idx": 8
|
| 1734 |
+
},
|
| 1735 |
+
{
|
| 1736 |
+
"type": "list",
|
| 1737 |
+
"sub_type": "ref_text",
|
| 1738 |
+
"list_items": [
|
| 1739 |
+
"[1] Stephan Alaniz, Massimiliano Mancini, and Zeynep Akata. Iterative superquadric decomposition of 3d objects from multiple views. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 18013-18023, 2023. 2",
|
| 1740 |
+
"[2] Alan H. Barr. Superquadrics and Angle-Preserving Transformations. IEEE Computer Graphics and Applications, 1981. 1, 2, 3, 4",
|
| 1741 |
+
"[3] Thomas Binford. Visual Perception by Computer. In IEEE Conference on Systems and Control, 1971. 2",
|
| 1742 |
+
"[4] Åke Björck. Numerics of gram-schmidt orthogonalization. Linear Algebra and Its Applications, 197:297-316, 1994. 4",
|
| 1743 |
+
"[5] Jiazhong Cen, Jiemin Fang, Chen Yang, Lingxi Xie, Xiaopeng Zhang, Wei Shen, and Qi Tian. Segment any 3d gaussians. arXiv preprint arXiv:2312.00860, 2023. 7",
|
| 1744 |
+
"[6] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. arXiv:1512.03012 [cs.CV], 2015. 5, 6, 7",
|
| 1745 |
+
"[7] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023. 2",
|
| 1746 |
+
"[8] Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. Bsp-net: Generating compact meshes via binary space partitioning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2",
|
| 1747 |
+
"[9] Zhiqin Chen, Qimin Chen, Hang Zhou, and Hao Zhang. Dae-net: Deforming auto-encoder for fine-grained shape co-segmentation. arXiv preprint arXiv:2311.13125, 2023. 2",
|
| 1748 |
+
"[10] Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi. Cvxnet: Learnable convex decomposition. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2",
|
| 1749 |
+
"[11] Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 1981. 2",
|
| 1750 |
+
"[12] Huachen Gao, Shihe Shen, Zhe Zhang, Kaiqiang Xiong, Rui Peng, Zhirui Gao, Qi Wang, Yugui Xie, and Ronggang Wang. Fdc-nerf: learning pose-free neural radiance fields with flow-depth consistency. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3615-3619. IEEE, 2024. 1",
|
| 1751 |
+
"[13] Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, and Hao Zhang. Sdm-net: Deep generative network"
|
| 1752 |
+
],
|
| 1753 |
+
"bbox": [
|
| 1754 |
+
93,
|
| 1755 |
+
234,
|
| 1756 |
+
485,
|
| 1757 |
+
900
|
| 1758 |
+
],
|
| 1759 |
+
"page_idx": 8
|
| 1760 |
+
},
|
| 1761 |
+
{
|
| 1762 |
+
"type": "list",
|
| 1763 |
+
"sub_type": "ref_text",
|
| 1764 |
+
"list_items": [
|
| 1765 |
+
"for structured deformable mesh. ACM Transactions on Graphics (TOG), 38(6):1-15, 2019. 2",
|
| 1766 |
+
"[14] Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, and Yu-Kun Lai. Mesh-based gaussian splatting for real-time large-scale deformation. arXiv preprint arXiv:2402.04796, 2024. 2, 3",
|
| 1767 |
+
"[15] Zhirui Gao, Renjiao Yi, Zheng Qin, Yunfan Ye, Chenyang Zhu, and Kai Xu. Learning accurate template matching with differentiable coarse-to-fine correspondence refinement. Computational Visual Media, 10(2):309-330, 2024. 2",
|
| 1768 |
+
"[16] Zhirui Gao, Renjiao Yi, Yaqiao Dai, Xuening Zhu, Wei Chen, Chenyang Zhu, and Kai Xu. Curve-aware gaussian splatting for 3d parametric curve reconstruction, 2025. 2",
|
| 1769 |
+
"[17] Zhirui Gao, Renjiao Yi, Chenyang Zhu, Ke Zhuang, Wei Chen, and Kai Xu. Generic objects as pose probes for few-shot view synthesis. IEEE Transactions on Circuits and Systems for Video Technology, 2025. 1",
|
| 1770 |
+
"[18] Zeqi Gu, Yin Cui, Zhaoshuo Li, Fangyin Wei, Yunhao Ge, Jinwei Gu, Ming-Yu Liu, Abe Davis, and Yifan Ding. Artiscene: Language-driven artistic 3d scene generation through image intermediary. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 2891-2901, 2025. 1",
|
| 1771 |
+
"[19] Yanran Guan, Han Liu, Kun Liu, Kangxue Yin, Ruizhen Hu, Oliver van Kaick, Yan Zhang, Ersin Yumer, Nathan Carr, Radomir Mech, et al. Fame: 3d shape generation via functionality-aware model evolution. IEEE Transactions on Visualization and Computer Graphics, 28(4):1758-1772, 2020. 2",
|
| 1772 |
+
"[20] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. arXiv preprint arXiv:2311.12775, 2023. 2, 3, 4",
|
| 1773 |
+
"[21] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. SIGGRAPH, 2024. 2, 3, 5, 6, 7, 8",
|
| 1774 |
+
"[22] Ka-Hei Hui, Ruihui Li, Jingyu Hu, and Chi-Wing Fu. Neural template: Topology-aware reconstruction and disentangled generation of 3d meshes. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 18572-18582, 2022. 2",
|
| 1775 |
+
"[23] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engin Tola, and Henrik Aanæs. Large Scale Multi-view Stereopsis Evaluation. In CVPR, 2014. 5, 6, 8",
|
| 1776 |
+
"[24] Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, and Na Zhao. Gaussianblock: Building part-aware compositional and editable 3d scene by primitives and gaussians. arXiv preprint arXiv:2410.01535, 2024. 2",
|
| 1777 |
+
"[25] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):1-14, 2023. 2, 4",
|
| 1778 |
+
"[26] Chung Min* Kim, Mingxuan* Wu, Justin* Kerr, Matthew Tancik, Ken Goldberg, and Angjoo Kanazawa. Garfield: Group anything with radiance fields. In Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 7"
|
| 1779 |
+
],
|
| 1780 |
+
"bbox": [
|
| 1781 |
+
516,
|
| 1782 |
+
92,
|
| 1783 |
+
906,
|
| 1784 |
+
900
|
| 1785 |
+
],
|
| 1786 |
+
"page_idx": 8
|
| 1787 |
+
},
|
| 1788 |
+
{
|
| 1789 |
+
"type": "page_number",
|
| 1790 |
+
"text": "9657",
|
| 1791 |
+
"bbox": [
|
| 1792 |
+
482,
|
| 1793 |
+
944,
|
| 1794 |
+
514,
|
| 1795 |
+
955
|
| 1796 |
+
],
|
| 1797 |
+
"page_idx": 8
|
| 1798 |
+
},
|
| 1799 |
+
{
|
| 1800 |
+
"type": "list",
|
| 1801 |
+
"sub_type": "ref_text",
|
| 1802 |
+
"list_items": [
|
| 1803 |
+
"[27] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023. 2, 7, 8",
|
| 1804 |
+
"[28] Jun Li, Kai Xu, Siddhartha Chaudhuri, Ersin Yumer, Hao Zhang, and Leonidas Guibas. Grass: Generative recursive autoencoders for shape structures. ACM Transactions on Graphics (TOG), 36(4):1-14, 2017. 1",
|
| 1805 |
+
"[29] Lingxiao Li, Minhyuk Sung, Anastasia Dubrovina, Li Yi, and Leonidas J Guibas. Supervised Fitting of Geometric Primitives to 3D Point Clouds. In CVPR, 2019. 1",
|
| 1806 |
+
"[30] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8456-8465, 2023. 6, 7",
|
| 1807 |
+
"[31] Di Liu, Xiang Yu, Meng Ye, Qilong Zhangli, Zhuowei Li, Zhixing Zhang, and Dimitris N Metaxas. Deformer: Integrating transformers with deformable models for 3d shape abstraction from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14236-14246, 2023. 2",
|
| 1808 |
+
"[32] Weixiao Liu, Yuwei Wu, Sipu Ruan, and Gregory S Chirikjian. Robust and Accurate Superquadric Recovery: a Probabilistic Approach. In CVPR, 2022. 1, 2, 3, 6, 7",
|
| 1809 |
+
"[33] Romain Loiseau, Elliot Vincent, Mathieu Aubry, and Loic Landrieu. Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans. arXiv:2304.09704 [cs.CV], 2023. 1",
|
| 1810 |
+
"[34] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV, 2020. 4",
|
| 1811 |
+
"[35] Niloy Mitra, Michael Wand, Hao (Richard) Zhang, Daniel Cohen-Or, Vladimir Kim, and Qi-Xing Huang. Structure-aware shape processing. In SIGGRAPH Asia 2013 Courses, 2013. 1",
|
| 1812 |
+
"[36] Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. Efros, and Mathieu Aubry. Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives. In NeurIPS, 2023. 1, 2, 3, 4, 6, 7",
|
| 1813 |
+
"[37] Chengjie Niu, Jun Li, and Kai Xu. Im2struct: Recovering 3d shape structure from a single rgb image. arXiv preprint, 2018. 1",
|
| 1814 |
+
"[38] Despoina Paschalidou, Ali Osman Ulusoy, and Andreas Geiger. Superquadrics Revisited: Learning 3D Shape Parsing Beyond Cuboids. In CVPR, 2019. 2",
|
| 1815 |
+
"[39] Despoina Paschalidou, Luc Van Gool, and Andreas Geiger. Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image. In CVPR, 2020. 2",
|
| 1816 |
+
"[40] Despoina Paschalidou, Angelos Katharopoulos, Andreas Geiger, and Sanja Fidler. Neural parts: Learning expressive 3d shape abstractions with invertible neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3204-3215, 2021. 2"
|
| 1817 |
+
],
|
| 1818 |
+
"bbox": [
|
| 1819 |
+
91,
|
| 1820 |
+
90,
|
| 1821 |
+
485,
|
| 1822 |
+
900
|
| 1823 |
+
],
|
| 1824 |
+
"page_idx": 9
|
| 1825 |
+
},
|
| 1826 |
+
{
|
| 1827 |
+
"type": "list",
|
| 1828 |
+
"sub_type": "ref_text",
|
| 1829 |
+
"list_items": [
|
| 1830 |
+
"[41] Michael Ramamonjisoa, Sinisa Stekovic, and Vincent Lepetit. MonteBoxFinder: Detecting and Filtering Primitives to Fit a Noisy Point Cloud. In ECCV, 2022. 2, 3, 6, 7",
|
| 1831 |
+
"[42] Lawrence G. Roberts. Machine perception of three-dimensional solids. PhD thesis, Massachusetts Institute of Technology, 1963. 2",
|
| 1832 |
+
"[43] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 5",
|
| 1833 |
+
"[44] Erich Schubert, Jörg Sander, Martin Ester, Hans Peter Kriegel, and Xiaowei Xu. Dbscan revisited, revisited: why and how you should (still) use dbscan. ACM Transactions on Database Systems (TODS), 42(3):1-21, 2017. 5",
|
| 1834 |
+
"[45] Qingyao Shuai, Chi Zhang, Kaizhi Yang, and Xuejin Chen. Dpf-net: combining explicit shape priors in deformable primitive field for unsupervised structural reconstruction of 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14321-14329, 2023. 2",
|
| 1835 |
+
"[46] Chunyu Sun, Yuqi Yang, Haoxiang Guo, Pengshuai Wang, Xin Tong, Yang Liu, and Heung-Yeung Shum. Semi-supervised 3d shape segmentation with multilevel consistency and part substitution. Computational Visual Media, 2022. 2",
|
| 1836 |
+
"[47] Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, and Leonidas Guibas. PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision. In CVPR, 2023. 1, 2, 6, 7",
|
| 1837 |
+
"[48] Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, and Jitendra Malik. Learning Shape Abstractions by Assembling Volumetric Primitives. In CVPR, 2017. 2",
|
| 1838 |
+
"[49] Joanna Waczyńska, Piotr Borycki, Sławomir Tadeja, Jacek Tabor, and Przemysław Spurek. Games: Mesh-based adapting and modification of gaussian splatting. arXiv preprint arXiv:2402.01459, 2024. 2, 3, 4",
|
| 1839 |
+
"[50] Xinhang Wan, Jiyuan Liu, Xinbiao Gan, Xinwang Liu, Siwei Wang, Yi Wen, Tianjiao Wan, and En Zhu. One-step multiview clustering with diverse representation. IEEE Transactions on Neural Networks and Learning Systems, pages 1-13, 2024. 2",
|
| 1840 |
+
"[51] Xinhang Wan, Jiyuan Liu, Hao Yu, Qian Qu, Ao Li, Xinwang Liu, Ke Liang, Zhibin Dong, and En Zhu. Contrastive continual multiview clustering with filtered structural fusion. IEEE Transactions on Neural Networks and Learning Systems, 2024. 2",
|
| 1841 |
+
"[52] Fengxiang Wang, Mingshuo Chen, Yueying Li, Di Wang, Haotian Wang, Zonghao Guo, Zefan Wang, Boqi Shan, Long Lan, Yulin Wang, et al. Geollava-8k: Scaling remote-sensing multimodal large language models to 8k resolution. arXiv preprint arXiv:2505.21375, 2025. 1",
|
| 1842 |
+
"[53] Fengxiang Wang, Hongzhen Wang, Zonghao Guo, Di Wang, Yulin Wang, Mingshuo Chen, Qiang Ma, Long Lan, Wenjing Yang, Jing Zhang, et al. Xlrs-bench: Could your multimodal llms understand extremely large ultra-high-resolution remote sensing imagery? In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 14325-14336, 2025. 1",
|
| 1843 |
+
"[54] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning Neural Implicit"
|
| 1844 |
+
],
|
| 1845 |
+
"bbox": [
|
| 1846 |
+
516,
|
| 1847 |
+
92,
|
| 1848 |
+
906,
|
| 1849 |
+
900
|
| 1850 |
+
],
|
| 1851 |
+
"page_idx": 9
|
| 1852 |
+
},
|
| 1853 |
+
{
|
| 1854 |
+
"type": "page_number",
|
| 1855 |
+
"text": "9658",
|
| 1856 |
+
"bbox": [
|
| 1857 |
+
482,
|
| 1858 |
+
944,
|
| 1859 |
+
514,
|
| 1860 |
+
955
|
| 1861 |
+
],
|
| 1862 |
+
"page_idx": 9
|
| 1863 |
+
},
|
| 1864 |
+
{
|
| 1865 |
+
"type": "list",
|
| 1866 |
+
"sub_type": "ref_text",
|
| 1867 |
+
"list_items": [
|
| 1868 |
+
"Surfaces by Volume Rendering for Multi-view Reconstruction. In NeurIPS, 2021. 1, 6",
|
| 1869 |
+
"[55] Xiaogang Wang, Yuelang Xu, Kai Xu, Andrea Tagliafaschi, Bin Zhou, Ali Mahdavi-Amiri, and Hao Zhang. Pie-net: Parametric inference of point cloud edges. Advances in neural information processing systems, 33:20167-20178, 2020. 1",
|
| 1870 |
+
"[56] Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, and Kai Xu. Vastsd: Learning 3d vascular tree-state space diffusion model for angiography synthesis. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 15693-15702, 2025. 1",
|
| 1871 |
+
"[57] Tong Wu, Lin Gao, Ling-Xiao Zhang, Yu-Kun Lai, and Hao Zhang. Star-tm: Structure aware reconstruction of textured mesh from single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-14, 2021. 2",
|
| 1872 |
+
"[58] Yuwei Wu, Weixiao Liu, Sipu Ruan, and Gregory S. Chirikjian. Primitive-based Shape Abstraction via Nonparametric Bayesian Inference. In ECCV, 2022. 2",
|
| 1873 |
+
"[59] Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, and En Zhu. Interpolation-based contrastive learning for few-label semi-supervised learning. IEEE Transactions on Neural Networks and Learning Systems, 35(2):2054-2065, 2022. 2",
|
| 1874 |
+
"[60] Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Wenxuan Tu, Qun Zheng, Xinwang Liu, Liming Fang, and En Zhu. Cluster-guided contrastive graph clustering network. In Proceedings of the AAAI conference on artificial intelligence, pages 10834-10842, 2023. 2",
|
| 1875 |
+
"[61] Yao Yao, Zixin Luo, Shiwei Li, Jingyang Zhang, Yufan Ren, Lei Zhou, Tian Fang, and Long Quan. BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks. In CVPR, 2020. 5, 7",
|
| 1876 |
+
"[62] Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. In ECCV, 2024. 7",
|
| 1877 |
+
"[63] Haiyang Ying, Yixuan Yin, Jinzhi Zhang, Fan Wang, Tao Yu, Ruqi Huang, and Lu Fang. Omniseg3d: Omniversal 3d segmentation via hierarchical contrastive learning. arXiv preprint arXiv:2311.11666, 2023. 7",
|
| 1878 |
+
"[64] Fenggen Yu, Kun Liu, Yan Zhang, Chenyang Zhu, and Kai Xu. Partnet: A recursive part decomposition network for fine-grained and hierarchical shape segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9491-9500, 2019. 3",
|
| 1879 |
+
"[65] Fenggen Yu, Yimin Qian, Xu Zhang, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, and Hao Zhang. Dpa-net: Structured 3d abstraction from sparse views via differentiable primitive assembly. arXiv preprint arXiv:2404.00875, 2024. 1, 2",
|
| 1880 |
+
"[66] Hongyi Zhou, Xiaogang Wang, Yulan Guo, and Kai Xu. Monomobility: Zero-shot 3d mobility analysis from monocular videos, 2025. 2",
|
| 1881 |
+
"[67] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3D: A modern library for 3D data processing. arXiv:1801.09847, 2018.8",
|
| 1882 |
+
"[68] Chenyang Zhu, Kai Xu, Siddhartha Chaudhuri, Li Yi, Leonidas J Guibas, and Hao Zhang. Adacoseg: Adaptive"
|
| 1883 |
+
],
|
| 1884 |
+
"bbox": [
|
| 1885 |
+
91,
|
| 1886 |
+
90,
|
| 1887 |
+
483,
|
| 1888 |
+
901
|
| 1889 |
+
],
|
| 1890 |
+
"page_idx": 10
|
| 1891 |
+
},
|
| 1892 |
+
{
|
| 1893 |
+
"type": "text",
|
| 1894 |
+
"text": "shape co-segmentation with group consistency loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2020. 1",
|
| 1895 |
+
"bbox": [
|
| 1896 |
+
545,
|
| 1897 |
+
90,
|
| 1898 |
+
906,
|
| 1899 |
+
133
|
| 1900 |
+
],
|
| 1901 |
+
"page_idx": 10
|
| 1902 |
+
},
|
| 1903 |
+
{
|
| 1904 |
+
"type": "page_number",
|
| 1905 |
+
"text": "9659",
|
| 1906 |
+
"bbox": [
|
| 1907 |
+
482,
|
| 1908 |
+
945,
|
| 1909 |
+
514,
|
| 1910 |
+
955
|
| 1911 |
+
],
|
| 1912 |
+
"page_idx": 10
|
| 1913 |
+
}
|
| 1914 |
+
]
|
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_model.json
ADDED
The diff for this file is too large to render. See raw diff
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/b5799a7c-6527-47cd-81fd-ab5ff79ff559_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03362f2a5c1ad24ecfc0c58aeed241dee00bee1f7c46e007798c3c8e88e7240c
size 8252259
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/full.md
ADDED
@@ -0,0 +1,391 @@
# Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics

Zhirui Gao, Renjiao Yi†, Yuhang Huang, Wei Chen, Chenyang Zhu, Kai Xu†
National University of Defense Technology
zhirui-gao.github.io/PartGS
# Abstract

Low-level 3D representations, such as point clouds, meshes, NeRFs, and 3D Gaussians, are commonly used for modeling 3D objects and scenes. However, cognitive studies indicate that human perception operates at higher levels and interprets 3D environments by decomposing them into meaningful structural parts, rather than low-level elements like points or voxels. Structured geometric decomposition enhances scene interpretability and facilitates downstream tasks requiring component-level manipulation. In this work, we introduce PartGS, a self-supervised part-aware reconstruction framework that integrates 2D Gaussians and superquadrics to parse objects and scenes into an interpretable decomposition, leveraging multi-view image inputs to uncover 3D structural information. Our method jointly optimizes superquadric meshes and Gaussians by coupling their parameters within a hybrid representation. On one hand, superquadrics enable the representation of a wide range of shape primitives, facilitating flexible and meaningful decompositions. On the other hand, 2D Gaussians capture fine texture and geometric details, ensuring high-fidelity appearance and geometry reconstruction. Operating in a self-supervised manner, our approach demonstrates superior performance compared to state-of-the-art methods across extensive experiments on the DTU, ShapeNet, and real-world datasets.

# 1. Introduction

3D reconstruction from multi-view images is a long-standing challenge in 3D vision and graphics [12, 17, 54]. Most reconstructed scenes are in low-level representations such as point clouds, voxels, or meshes. However, humans tend to understand 3D scenes as reasonable parts [35]. For instance, when observing a scene, we naturally construct high-level structural information, such as scene graphs, instead of focusing on low-level details like point clouds or voxels. Motivated by this, we propose a part-aware reconstruction framework
*(Figure 1 image: 3D decomposition — prior work (EMS) vs. our block-level and point-level results.)*

Figure 1. Prior works [32, 36] model the scene using primitives, while the proposed method can further model precise geometry details and textures. Here, EMS [32] takes point cloud inputs and reconstructs non-textured primitives.

that decomposes objects or scenes into meaningful shapes or parts, facilitating tasks such as physical simulation, editing, content generation, and understanding [18, 28, 52, 53, 56].

Several prior works [29, 32, 33, 37, 55, 68] have explored part-aware reconstruction or 3D decomposition. However, most of them rely heavily on 3D supervision and often struggle to retain fine-grained geometric details, limiting their practicality in real-world scenarios. Recent advances [47, 65] have focused on extending the neural radiance field framework for part-aware reconstruction, building upon its remarkable success in 3D reconstruction from multi-view images. For example, PartNeRF [47] models objects using multiple neural radiance fields. Yet, the intricate composition of implicit fields complicates the learning process, leading to suboptimal rendering quality and inefficient decomposition. Recently, DBW [36] proposed a novel approach that decomposes scenes into block-based representations using superquadric primitives [2], optimizing both their parameters and UV texture maps through rendering loss minimization. While this method demonstrates effective scene decomposition into coherent components, it struggles to achieve accurate part-aware reconstruction of both geometry and appearance, as illustrated in Fig. 1.

To address these limitations, we propose PartGS, a self-supervised part-aware hybrid representation that integrates 2D Gaussians [21] and superquadrics [2] to achieve both high-quality texture reconstruction and geometrically accurate part decomposition. Previous approaches [14, 20, 49] combine mesh reconstruction and Gaussian splatting by first reconstructing the mesh and then attaching Gaussians to it. In contrast to their primary focus on acquiring the representation itself, our goal is part-aware decomposition and reconstruction. Our method establishes a coupled optimization framework that simultaneously learns superquadric meshes and Gaussians through parameter sharing, with each part constrained to a single superquadric. The differentiable rendering of Gaussians drives this hybrid representation, leveraging the inherent convexity of superquadrics for qualitative 3D decomposition. The self-supervised training of part-aware reconstruction is built upon the assumption that each part of most objects should be a basic shape and can be represented by a superquadric. Meanwhile, Gaussian splatting, renowned for its superior rendering quality and efficient training, is incorporated to capture intricate texture details, speeding up reconstruction and improving rendering quality.
In the hybrid representation, 2D Gaussians are distributed on superquadric surfaces to form structured blocks. The pose and shape of the Gaussians within each block are determined by their corresponding superquadrics rather than being optimized independently. The parameter space encompasses global controls for superquadric properties (shape, pose, and opacity) and local spherical harmonic coefficients for individual Gaussians. Compared to standard Gaussian splatting [21, 25], which populates the occupied volume with independently parameterized Gaussians, the coupled representation is more compact and efficient.

During training, the parameters are optimized through the rendering loss without additional supervision. However, we notice that the image rendering loss often leads to local minima in superquadric shape optimization. To tackle this, we introduce several regularizers to maintain global consistency between the 3D representation and the input 2D information. This strategy allows us to segment parts in a fully self-supervised manner, achieving block-level reconstruction. Finally, to better capture irregular shapes, we implement a point-level refinement step that frees the 2D Gaussians to deviate from the surface, thereby enhancing geometric fidelity. Extensive experiments show that PartGS, at the block level and point level, achieves $33.3\%$ and $75.9\%$ improvements in reconstruction accuracy, 3.18 and 16.13 increases in PSNR, and 4X and 3X speedups compared to the state-of-the-art baseline. Our contributions are summarized as follows:
- We introduce a novel hybrid representation for part-aware 3D reconstruction, combining the strengths of superquadrics and Gaussian splatting to achieve reasonable decomposition and high-quality rendering.
- We propose several novel regularizers to enforce consistency between 3D decomposition and 2D observations, enabling self-supervised part decomposition.
- Compared to prior works, our method goes one step further by producing both block-level and point-level part-aware reconstructions, preserving part segmentation as well as reconstruction precision.

# 2. Related Work
# 2.1. Shape Decomposition and Abstraction

Structured shape representation learning decomposes objects or scenes into coherent geometric primitives, facilitating shape understanding and generation [9, 15, 19, 46, 57]. Early approaches like Blocks World [42] and Generalized Cylinders [3] emphasized compact representations. Modern methods typically process 3D inputs (point clouds, meshes, voxels) by decomposing them into primitive ensembles, including cuboids [41, 48], superquadrics [32, 38, 39, 58], and convex shapes [8, 10]. For instance, MonteBoxFinder [41] integrates clustering [11, 50, 60], cuboid representations, and Monte Carlo Tree Search, while EMS [32] adopts a probabilistic approach for superquadric recovery. However, these methods are limited to coarse shape representations. Recent advances in shape abstraction [13, 22, 31, 40, 45] enable more detailed shape representations through flexible primitive deformation.

Some studies [1, 16, 36, 47, 65, 66] attempt to create structure-aware 3D representations directly from images. PartNeRF [47] introduces ellipsoid representations within NeRFs, but its reliance on multiple implicit fields results in inefficient 3D decomposition. ISCO [1] and DBW [36] use 3D superquadrics for shape decomposition, which enables more meaningful structural separation. However, their simple shape parameters struggle to capture complex geometries, leading to poor geometry and appearance reconstruction. DPA-Net [65] has advanced 3D shape abstraction from sparse views but generates redundant parts and struggles with realistic texture rendering. A concurrent work, GaussianBlock [24], employs SAM [27] to guide superquadric splitting and fusion for 3D decomposition, yet its computational efficiency remains limited, typically requiring several hours per scene. In contrast, our approach accomplishes self-supervised [51, 59] part-aware scene reconstruction through an efficient hybrid representation that simultaneously maintains geometric fidelity and photorealistic rendering quality.

# 2.2. Mesh-based Gaussian Splatting

Gaussian splatting (GS) [25] has been rapidly adopted in multiple fields due to its remarkable rendering capability. Several studies [7, 14, 20, 49] attempt to align Gaussians with mesh surfaces for easier editing and animation. Among
*(Figure 2 image.)*

Figure 2. Overview of our pipeline. PartGS takes multi-view images to learn a parametric hybrid representation of superquadrics and 2D Gaussians. It initializes from random superquadrics and is gradually optimized during training to obtain a block-level reconstruction. Then, we free the constraints of Gaussians to model detailed geometry, achieving point-level reconstruction.

them, SuGaR [20] uses flat 3D Gaussians to enforce alignment with the scene surface during optimization, minimizing the difference between the signed distance functions (SDFs) of the desired Gaussians and the actual Gaussians. GaMeS [49] introduces a hybrid representation of Gaussians and mesh, where Gaussians are attached to triangular facets of the mesh. Similarly, Gao et al. [14] propose a mesh-based GS to achieve large-scale deformation effects on objects. Recently, 2DGS [21] proposed 2D Gaussians for surface modeling, significantly enhancing geometric quality. Inspired by them, we propose a representation that combines 2D Gaussians with superquadrics. A key distinction from previous mesh-based GS approaches is that those methods must first reconstruct the scene's mesh and then bind Gaussians to its surface, which results in a non-continuous mesh. In contrast, our method directly optimizes the mesh through a rendering loss, enabling part-aware mesh reconstruction.

# 3. Method

Given a set of calibrated multi-view images and foreground masks, we aim to learn part-aware 3D representations: a meaningful decomposition of both geometry and appearance. Our approach adopts a two-stage optimization strategy: first, decomposing the object into basic shapes using a mixture of Gaussians and superquadrics at the block level, then refining the decomposition at the point level to achieve precise geometry. In Sec. 3.1, we parameterize the hybrid representation. Sec. 3.2 then elaborates on leveraging this representation, enhanced by novel regularizers and an adaptive control strategy, to achieve self-supervised block-level decomposition. Finally, Sec. 3.3 presents the process for obtaining detailed part-aware results.
# 3.1. Parametrizing the Hybrid Representation

As shown in Fig. 2, to leverage the strengths of both, we attach Gaussians to the surface of superquadric meshes. This representation retains the superquadric's ability to parse a 3D scene into distinct parts. Meanwhile, the spherical harmonics of the Gaussians enable complex texture rendering via Gaussian splatting, addressing the texture learning limitations of prior work [32, 36, 41, 64]. Sharing pose parameters between superquadrics and 2D Gaussians further improves the representation's efficiency.

The parametric representation is controlled by both primitive and Gaussian parameters, which are optimized simultaneously. Given a 3D scene $S$, the proposed method decomposes it into multiple hybrid blocks, each consisting of a superquadric with associated Gaussians. Each scene is denoted by the hybrid representation $S = \mathcal{B}_1 \cup \ldots \cup \mathcal{B}_i \cup \ldots \cup \mathcal{B}_M$, where $\mathcal{B}_i$ is the $i$-th hybrid block and $M$ is the total number of blocks. Blocks are defined using manually designed parameters that control pose, opacity, scale, shape, and texture. These parameters are optimized via differentiable rendering to parse the target scene.

Shape and Scale Parameters. For each hybrid block $\mathcal{B}_i$, its geometry is controlled by superquadric parameters [2]. Specifically, two shape parameters $\epsilon_1, \epsilon_2$ define its shape, and three scale parameters $s_1, s_2, s_3$ scale the three axes. Analogous to icosphere vertex positioning, superquadric vertex coordinates are computed by:
$$
\mathbf{v} = \left[ s_1 \cos^{\epsilon_1}(\theta) \cos^{\epsilon_2}(\varphi);\; s_2 \sin^{\epsilon_1}(\theta);\; s_3 \cos^{\epsilon_1}(\theta) \sin^{\epsilon_2}(\varphi) \right], \tag{1}
$$

where $\theta$ and $\varphi$ denote the elevation and azimuth angles in spherical coordinates. The shape and scale parameters govern block deformation to learn part-aware geometry.
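As a concrete illustration, Eq. 1 can be evaluated on a regular angle grid. The following is a minimal NumPy sketch, not the paper's implementation; the signed-power helper `fexp` (needed to keep fractional powers of negative cosines well-defined) and the grid resolution are our own assumptions:

```python
import numpy as np

def fexp(base, e):
    """Signed power |base|^e * sign(base), keeping superquadrics well-defined."""
    return np.sign(base) * np.abs(base) ** e

def superquadric_vertices(eps1, eps2, s, n_theta=32, n_phi=64):
    """Sample surface vertices of a superquadric (Eq. 1).

    eps1, eps2: shape exponents; s: per-axis scales (s1, s2, s3).
    theta in [-pi/2, pi/2] (elevation), phi in [-pi, pi) (azimuth).
    """
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    phi = np.linspace(-np.pi, np.pi, n_phi, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    x = s[0] * fexp(np.cos(th), eps1) * fexp(np.cos(ph), eps2)
    y = s[1] * fexp(np.sin(th), eps1)
    z = s[2] * fexp(np.cos(th), eps1) * fexp(np.sin(ph), eps2)
    return np.stack([x, y, z], axis=-1)  # (n_theta, n_phi, 3)
```

With $\epsilon_1 = \epsilon_2 = 1$ and unit scales, the sampled surface reduces to a unit sphere; pushing the exponents toward 0 deforms it toward a box.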
Pose Parameters. The pose of the $i$-th hybrid block is defined by its rotation $\mathbf{R}_i$ and translation $\mathbf{t}_i$. The vertex position from Eq. 1 is transformed from the local coordinate frame to world space as $\hat{\mathbf{v}}_i^j = \mathbf{R}_i\mathbf{v}_i^j + \mathbf{t}_i$, where $j$ indexes the vertices. Previous approaches, including SuGaR [20] and GaMeS [49], position Gaussians on reconstructed meshes with independent pose and shape parameters to enhance appearance modeling. In contrast, our method employs Gaussians to construct a differentiable renderer that bridges images and superquadrics. To achieve this, we couple the parameters of the Gaussians with the superquadrics.

For a posed superquadric, Gaussian centers are uniformly sampled on triangular faces, with their poses determined by the face vertices. Following GaMeS [49], each Gaussian's rotation matrix $\mathrm{R}_v = [r_1, r_2, r_3]$ and scaling $\mathrm{S}_v$ are computed from vertex positions. Given a triangular face $V$ with vertices $v_1, v_2, v_3 \in \mathbb{R}^3$, orthonormal vectors are constructed such that $r_1$ aligns with the face normal and $r_2$ points from the centroid $m = \mathrm{mean}(v_1, v_2, v_3)$ to $v_1$. $r_3$ is obtained by orthonormalizing the vector from the centroid to the second vertex with respect to the existing $r_1$ and $r_2$:
$$
r_3 = \frac{\operatorname{ort}\left(v_2 - m;\, r_1, r_2\right)}{\left\| \operatorname{ort}\left(v_2 - m;\, r_1, r_2\right) \right\|}, \tag{2}
$$

where $\operatorname{ort}$ denotes Gram-Schmidt orthonormalization [4]. For the scaling of 2D Gaussians, we use:

$$
\mathrm{S}_v = \operatorname{diag}\left(s_v^2, s_v^3\right), \tag{3}
$$

where $s_v^2 = c\,\|m - v_1\|$, $s_v^3 = c\,\langle v_2, r_3 \rangle$, and $c$ is a size control hyperparameter. We place a fixed number of Gaussians on each triangular face. These designs eliminate the need to learn the geometry of the Gaussians, thereby enhancing the efficiency of the representation.
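The per-face construction of Eqs. 2-3 can be sketched as follows. This is a NumPy illustration under our own naming; taking the absolute value on the second scale is our assumption to keep scales positive:

```python
import numpy as np

def gaussian_frame(v1, v2, v3, c=1.0):
    """Rotation R_v = [r1, r2, r3] and 2D scaling for a Gaussian on a triangle face.

    r1: face normal; r2: unit centroid->v1 direction; r3: Gram-Schmidt
    orthonormalization of centroid->v2 against r1 and r2 (Eq. 2).
    """
    m = (v1 + v2 + v3) / 3.0                    # face centroid
    n = np.cross(v2 - v1, v3 - v1)
    r1 = n / np.linalg.norm(n)                  # face normal
    d1 = v1 - m
    r2 = d1 / np.linalg.norm(d1)
    d2 = v2 - m
    ort = d2 - (d2 @ r1) * r1 - (d2 @ r2) * r2  # Gram-Schmidt residual
    r3 = ort / np.linalg.norm(ort)
    s2 = c * np.linalg.norm(m - v1)             # s_v^2 in Eq. 3
    s3 = c * abs(d2 @ r3)                       # s_v^3; abs() is our assumption
    return np.stack([r1, r2, r3], axis=1), np.diag([s2, s3])
```

Because the frame is a deterministic function of the face vertices, gradients from the rendering loss flow directly back to the superquadric parameters.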
Opacity Parameters. We define the total number of hybrid blocks as $M$. However, a typical scene does not contain exactly $M$ blocks, so only the meaningful blocks are retained. To achieve this, we introduce a learnable parameter $\tau_i$ representing each block's opacity. During optimization, only blocks with $\tau_i$ greater than a certain threshold are retained. Note that Gaussians within the same block share the same $\tau$ as their opacity in rasterization.

Texture Parameters. The texture is modeled using the 2D Gaussians positioned on the surface of the superquadrics. The spherical harmonics of each Gaussian control the texture and are optimized to render view-dependent images [25].

# 3.2. Block-level Decomposition

This section describes the optimization of the hybrid representation. We observed that minimizing the rendering loss across multi-view images alone led to instability in positioning the hybrid blocks. Therefore, several regularization terms are introduced to optimize the composition for maximal image formation consistency.
# 3.2.1. Optimization

In addition to the image rendering loss, we employ regularizers consisting of coverage, overlap, parsimony, and opacity entropy terms. The coverage loss encourages hybrid blocks to cover only meaningful regions; the overlap loss prevents blocks from overlapping; the parsimony loss regularizes the number of existing blocks; and the opacity entropy encourages block opacities to be close to binary.

Rendering Loss. The rendering loss follows 3DGS [25], combining an $L_1$ term with a D-SSIM term:

$$
\mathcal{L}_{\text{ren}} = (1 - \lambda) L_1 + \lambda L_{\text{D-SSIM}}. \tag{4}
$$
**Coverage Loss.** The coverage loss ensures that the block set $\{\mathcal{B}_1, \ldots, \mathcal{B}_M\}$ covers the object while preventing it from extending beyond its boundaries. To determine the 3D occupancy field defined by the blocks, we first define the approximate signed distance of a 3D point $x$ to the $i$-th block, $D_i(x) = \Psi_i(x) - 1$. Here, $\Psi_i$ denotes the superquadric inside-outside function [2] associated with block $i$. Consequently, $D_i(x) \leq 0$ if the point $x$ lies inside the $i$-th block, and $D_i(x) > 0$ if $x$ is outside the block. Inspired by NeRF [34], we sample a ray set $\mathcal{R}$ using ray casting based on the camera poses. Given the object mask, we associate each ray $r \in \mathcal{R}$ with a binary label: $l_r = 1$ if the ray $r$ lies inside the mask, otherwise $l_r = 0$. The coverage loss is defined as:

$$
\mathcal{L}_{\text{cov}}(\mathcal{R}) = \sum_{r \in \mathcal{R}} l_r L_{\text{cross}}(r) + \left(1 - l_r\right) L_{\text{non-cross}}(r). \tag{5}
$$

Here, $L_{\text{cross}}$ encourages rays to intersect with blocks, while $L_{\text{non-cross}}$ discourages rays from intersecting with blocks:

$$
L_{\text{cross}}(r) = \operatorname{ReLU}\left(\min_{i \in \mathcal{M}} \min_{x_j^r \in \mathcal{X}_r} D_i\left(x_j^r\right)\right), \tag{6}
$$

$$
L_{\text{non-cross}}(r) = \operatorname{ReLU}\left(\max_{i \in \mathcal{M}} \max_{x_j^r \in \mathcal{X}_r} -D_i\left(x_j^r\right)\right), \tag{7}
$$

where $x_j^r$ is the $j$-th sampled point along the ray $r$ and $\mathcal{M} = \{1, \dots, M\}$. Intuitively, at least one of the sampled points $\mathcal{X}_r$ along a ray $r$ inside the mask must lie within some block, while all points along a ray outside the mask should not belong to any block.
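Assuming the standard superquadric inside-outside function evaluated in each block's local frame, and ray points pre-sampled into arrays, Eqs. 5-7 might be sketched as follows (helper names are ours, not the paper's):

```python
import numpy as np

def inside_outside(x, eps1, eps2, s):
    """Superquadric inside-outside function Psi [2]; D(x) = Psi(x) - 1 is <= 0 inside."""
    x0, y0, z0 = x[..., 0] / s[0], x[..., 1] / s[1], x[..., 2] / s[2]
    f = (np.abs(x0) ** (2.0 / eps2) + np.abs(z0) ** (2.0 / eps2)) ** (eps2 / eps1)
    return f + np.abs(y0) ** (2.0 / eps1)

def coverage_loss(ray_points, labels, blocks):
    """Eq. 5 over a batch of rays.

    ray_points: (R, P, 3) sampled points per ray (assumed already expressed in
    the blocks' local frames, for simplicity); labels: (R,) mask labels l_r;
    blocks: list of (eps1, eps2, scales) superquadric parameters.
    """
    D = np.stack([inside_outside(ray_points, e1, e2, s) - 1.0
                  for (e1, e2, s) in blocks])           # (M, R, P) signed distances
    cross = np.maximum(D.min(axis=(0, 2)), 0.0)         # Eq. 6: masked rays must hit a block
    non_cross = np.maximum((-D).max(axis=(0, 2)), 0.0)  # Eq. 7: other rays must miss all blocks
    return float(np.sum(labels * cross + (1.0 - labels) * non_cross))
```

A ray sampling the interior of some block contributes zero to the first term, and a ray whose samples all fall outside every block contributes zero to the second.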
|
| 142 |
+
|
| 143 |
+
Overlap Loss. We introduce a regularization term to prevent overlap between individual blocks. Given the difficulty of directly calculating block overlap, we adopt a Monte Carlo method similar to [36]. Specifically, we sample multiple 3D points in space and penalize those that lie inside more than $k$ blocks. Based on the superquadric inside-outside function, a soft occupancy function is defined as:

$$
\mathcal{O}_{i}^{x} = \tau_{i} \operatorname{sigmoid}\left(\frac{-D_{i}(x)}{\gamma}\right), \tag{8}
$$

where $\gamma$ is a temperature parameter. Thus, for a set of $N$ sampled 3D points $\Omega$ , the overlap loss is expressed as:

$$
\mathcal{L}_{\mathrm{over}} = \frac{1}{N} \sum_{x \in \Omega} \operatorname{ReLU}\left(\sum_{i \in \mathcal{M}} \mathcal{O}_{i}^{x} - k\right). \tag{9}
$$

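A minimal sketch of this Monte Carlo penalty, assuming the signed distances of the samples to every block are precomputed (all names here are illustrative):

```python
import numpy as np

def soft_occupancy(D, tau, gamma=0.005):
    """Eq. (8): O_i^x = tau_i * sigmoid(-D_i(x) / gamma); D is (M, N) signed
    distances, tau is (M,) block opacities. The clip only guards exp overflow."""
    z = np.clip(-D / gamma, -60.0, 60.0)
    return tau[:, None] / (1.0 + np.exp(-z))

def overlap_loss(D, tau, k=1.95, gamma=0.005):
    """Eq. (9): averaged over N sampled points; a point whose summed soft
    occupancy exceeds k (it sits inside more than ~k blocks) contributes
    the excess."""
    occ = soft_occupancy(D, tau, gamma)
    return float(np.mean(np.maximum(occ.sum(axis=0) - k, 0.0)))
```

Because the sigmoid saturates quickly at the small temperature $\gamma$, the sum of occupancies effectively counts how many blocks contain each sample.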
**Parsimony Loss.** To encourage the use of a minimal number of blocks and achieve a parsimonious decomposition, we introduce a regularization term that penalizes block opacity $(\tau)$. This loss is defined as:

$$
\mathcal{L}_{\mathrm{par}} = \frac{1}{M} \sum_{i \in \mathcal{M}} \sqrt{\tau_{i}}. \tag{10}
$$

**Opacity Entropy Loss.** During optimization, the opacity of blocks inside the object region should approach 1, while the opacity of blocks outside it should approach 0. To facilitate this, we associate block opacities with the labeled rays and apply a cross-entropy loss $L_{ce}$ between the block opacity and the mask labels, defined as follows:

$$
\mathcal{L}_{\mathrm{opa}}(\mathcal{R}) = \frac{1}{|\mathcal{R}|} \sum_{r \in \mathcal{R}} L_{ce}\left(\max_{i \in \mathcal{M}} \tau_{i}\left(x^{r}\right), l_{r}\right). \tag{11}
$$

Here, only points $x^r$ inside the blocks are sampled, i.e., $\{x^r \in \mathcal{X}_r \mid D_i(x^r) \leq 0\}$.
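The two opacity regularizers (Eqs. 10 and 11) are short enough to sketch together; the function names and tensor shapes below are our own, not the paper's code:

```python
import numpy as np

def parsimony_loss(tau):
    """Eq. (10): the square root makes even small residual opacities costly,
    pushing unused blocks toward zero opacity."""
    return float(np.mean(np.sqrt(tau)))

def opacity_entropy_loss(max_tau, labels, eps=1e-7):
    """Eq. (11): binary cross-entropy between the maximum block opacity
    encountered along each ray (max_tau, shape (R,)) and the ray's mask
    label l_r (labels, shape (R,))."""
    p = np.clip(max_tau, eps, 1.0 - eps)  # clip to keep the logs finite
    return float(np.mean(-(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))))
```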
The total loss is the weighted sum of the loss terms described above:

$$
\mathcal{L} = \mathcal{L}_{\mathrm{ren}} + \lambda_{\mathrm{cov}} \mathcal{L}_{\mathrm{cov}} + \lambda_{\mathrm{over}} \mathcal{L}_{\mathrm{over}} + \lambda_{\mathrm{par}} \mathcal{L}_{\mathrm{par}} + \lambda_{\mathrm{opa}} \mathcal{L}_{\mathrm{opa}}. \tag{12}
$$

# 3.2.2. Adaptive Number of Blocks.
Since the number of parts may vary across scenes, we allow the number of blocks to adjust adaptively during optimization. Specifically, when the opacity of an existing block falls below a threshold $t$, the block is removed immediately. We further employ a block-adding mechanism to dynamically incorporate new components and ensure comprehensive coverage of the target object: DBSCAN [44] clusters the initial points that are not covered by any block, where the point cloud is either randomly initialized or obtained as a by-product of COLMAP [43]. For each cluster, a new block is introduced at its center, with the remaining block parameters initialized randomly.
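The block-adding step can be sketched as follows. We substitute a simple radius-graph connected-components pass for DBSCAN [44] so the snippet stays self-contained; the function name, radius, and minimum cluster size are all illustrative:

```python
import numpy as np

def propose_new_blocks(points, covered, radius=0.5, min_pts=5):
    """Sketch of the block-adding step: points not covered by any block are
    grouped by radius-graph connected components (a stand-in for DBSCAN);
    each sufficiently large cluster proposes one new block at its centroid."""
    free = points[~covered]
    n = len(free)
    labels = np.arange(n)  # every point starts as its own cluster
    adj = ((free[:, None] - free[None]) ** 2).sum(-1) <= radius ** 2
    for _ in range(n):     # propagate the smallest label over the radius graph
        new = np.array([labels[adj[i]].min() for i in range(n)])
        if np.array_equal(new, labels):
            break
        labels = new
    return [free[labels == lab].mean(axis=0)       # one centroid per cluster
            for lab in np.unique(labels)
            if (labels == lab).sum() >= min_pts]   # drop tiny noise clusters
```

Pruning is the mirror operation: blocks whose opacity $\tau_i$ drops below the threshold $t$ are simply deleted from the block set.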
The final block-level reconstruction generates a textured 3D scene represented by multiple superquadrics, with surface details rendered through Gaussians, as illustrated in Fig. 2.
# 3.3. Point-level Decomposition

The primitive-based hybrid representation effectively decomposes the shape into parts but performs suboptimally on complex objects. To address this limitation, we perform a refinement stage that enhances the geometric fit. In this stage, the constraint between Gaussians and superquadrics is decoupled, enabling independent Gaussian optimization. Furthermore, to prevent Gaussian components of one block from penetrating adjacent blocks and disrupting the continuity and plausibility of the part decomposition, we impose a new constraint that minimizes the negative signed distance of each Gaussian entering other blocks:

$$
\mathcal{L}_{\mathrm{enter}} = \frac{1}{N} \sum_{x \in \Omega} \sum_{m \in \mathcal{M} \backslash \{\delta\}} \operatorname{ReLU}\left(-D_{m}(x)\right), \tag{13}
$$

where $\Omega$ is the sampled Gaussian set, and $\delta$ represents the identifier of the block to which the Gaussian $x$ belongs. As seen in Fig. 2, the reconstruction becomes more accurate and aligns better with the target shape.
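A sketch of Eq. 13, assuming the signed distances of the sampled Gaussian centres to every block are precomputed (names and shapes are ours):

```python
import numpy as np

def enter_loss(D, owner):
    """Eq. (13): D is (M, N) signed distances of N Gaussian centres to M
    blocks; owner[j] is the index delta of the block Gaussian j belongs to.
    A Gaussian is penalised for entering (having a negative distance to)
    any block other than its own."""
    M, N = D.shape
    pen = np.maximum(-D, 0.0)          # ReLU(-D_m(x)) for every pair
    pen[owner, np.arange(N)] = 0.0     # exclude each Gaussian's own block delta
    return float(pen.sum() / N)
```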
# 4. Experiments
We comprehensively evaluate our approach across four key aspects: 3D reconstruction quality, view synthesis performance, shape parsimony, and computational efficiency, comparing it with state-of-the-art methods in the field.
# 4.1. Datasets
Extensive experiments are conducted on two widely-used public datasets: DTU [23] and ShapeNet [6]. Specifically, 15 standard scenes from the DTU dataset that are widely adopted for reconstruction quality assessment are used for evaluation. Each scene contains either 49 or 64 images. Additionally, we construct a ShapeNet subset consisting of four categories: Chair, Table, Gun, and Airplane, with 15 objects selected per category. For each object, 50 rendered images are generated for training. We further validate real-world applicability on both the BlendedMVS dataset [61] and self-captured scenes, demonstrating robust performance across synthetic and natural environments.
# 4.2. Implementation Details
We build upon the 2DGS [21] architecture and add a non-learnable part attribute to each Gaussian component to enforce the constraint in Eq. 13. On the ShapeNet benchmark, we observe a common degradation in mesh reconstruction, as shown in Fig. 7. To address this, we propose a Gaussian scale regularization, validated in Sec. 4.4. We set the initial number of primitives $M$ to 8. The temperature parameter $\gamma$ in Eq. 8 is 0.005 and the overlap number $k$ in Eq. 9 is 1.95. We train the hybrid representation for 30k iterations, followed by refinement optimization for another 30k iterations. During block-level optimization, the block-adding operation is carried out at the 5k-th and 10k-th iterations. The loss weights $\lambda_{\mathrm{cov}}$, $\lambda_{\mathrm{over}}$, $\lambda_{\mathrm{par}}$, and $\lambda_{\mathrm{opa}}$ are set to 10, 1, 0.002, and 0.01, respectively. For more details about datasets, metrics, and baselines, please refer to the supplementary material.
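For reference, the hyperparameters quoted above can be collected in one place. Only the values are taken from the paper; the dictionary layout and key names are ours, not the authors' configuration format:

```python
# Hyperparameters from Sec. 4.2; values from the paper, key names illustrative.
CONFIG = {
    "initial_num_blocks_M": 8,
    "gamma": 0.005,                   # temperature in Eq. 8
    "overlap_k": 1.95,                # soft overlap budget in Eq. 9
    "block_level_iters": 30000,       # hybrid-representation stage
    "refinement_iters": 30000,        # point-level refinement stage
    "block_add_iters": (5000, 10000), # when block-adding runs
    "loss_weights": {"cov": 10, "over": 1, "par": 0.002, "opa": 0.01},
}
```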

Figure 3. Qualitative comparison on DTU [23] and ShapeNet [6]. The first two rows show DTU examples, and the last two show ShapeNet examples. Our method is the only one that provides a reasonable 3D part decomposition while capturing detailed geometry.
<table><tr><td rowspan="2">Method</td><td rowspan="2">Input</td><td rowspan="2">Renderable</td><td colspan="15">Chamfer distance per scene</td><td>Mean</td><td>Mean</td></tr><tr><td>S24</td><td>S37</td><td>S40</td><td>S55</td><td>S63</td><td>S65</td><td>S69</td><td>S83</td><td>S97</td><td>S105</td><td>S106</td><td>S110</td><td>S114</td><td>S118</td><td>S122</td><td>CD</td><td>#P</td></tr><tr><td>EMS [32]</td><td>3D GT</td><td>✗</td><td>6.32</td><td>3.54</td><td>2.99</td><td>4.30</td><td>4.16</td><td>4.01</td><td>3.75</td><td>3.24</td><td>4.97</td><td>4.34</td><td>4.16</td><td>7.62</td><td>7.58</td><td>4.46</td><td>4.03</td><td>4.65</td><td>7.7</td></tr><tr><td>MBF [41]</td><td>3D GT</td><td>✗</td><td>3.12</td><td>2.66</td><td>3.84</td><td>2.54</td><td>1.59</td><td>2.11</td><td>2.19</td><td>2.01</td><td>2.32</td><td>2.45</td><td>2.17</td><td>2.12</td><td>3.83</td><td>2.02</td><td>2.55</td><td>2.50</td><td>34.1</td></tr><tr><td>EMS [32] + Neus [54]</td><td>Image</td><td>✗</td><td>5.99</td><td>5.56</td><td>4.43</td><td>4.32</td><td>5.42</td><td>6.14</td><td>3.75</td><td>3.96</td><td>4.63</td><td>4.34</td><td>5.88</td><td>5.11</td><td>4.29</td><td>4.83</td><td>3.53</td><td>4.97</td><td>8.87</td></tr><tr><td>MBF [41] + Neus [54]</td><td>Image</td><td>✗</td><td>2.69</td><td>3.37</td><td>3.22</td><td>2.69</td><td>3.63</td><td>2.60</td><td>2.59</td><td>3.13</td><td>2.85</td><td>2.51</td><td>2.45</td><td>3.72</td><td>2.24</td><td>2.49</td><td>2.52</td><td>2.85</td><td>46.7</td></tr><tr><td>PartNeRF [47]</td><td>Image</td><td>✓</td><td>9.38</td><td>10.46</td><td>9.08</td><td>8.63</td><td>6.04</td><td>7.25</td><td>7.22</td><td>9.15</td><td>8.72</td><td>10.01</td><td>6.72</td><td>9.85</td><td>7.85</td><td>8.68</td><td>9.21</td><td>8.54</td><td>8.0</td></tr><tr><td>DBW [36]</td><td>Image</td><td>✓</td><td>5.41</td><td>8.35</td><td>1.57</td><td>3.08</td><td>3.40</td><td>4.15</td><td>7.46</td><td>3.94</td><td>6.63</td><td>4.85</td><td>4.38</td><td>4.65</td><td>6.29</td><td>4.34</td><td>3.04</td><td>4.76</td><td>4.8</td></tr><tr><td>Ours (Block-level)</td><td>Image</td><td>✓</td><td>5.68</td><td>4.91</td><td>1.85</td><td>2.61</td><td>3.75</td><td>4.66</td><td>3.75</td><td>7.57</td><td>4.27</td><td>4.38</td><td>3.49</td><td>4.48</td><td>3.61</td><td>4.21</td><td>3.70</td><td>4.19</td><td>5.9</td></tr><tr><td>Ours (Point-level)</td><td>Image</td><td>✓</td><td>0.70</td><td>1.17</td><td>0.55</td><td>0.65</td><td>1.06</td><td>1.23</td><td>1.10</td><td>1.36</td><td>1.37</td><td>0.78</td><td>0.92</td><td>1.41</td><td>0.69</td><td>1.05</td><td>0.71</td><td>0.98</td><td>5.9</td></tr></table>
Table 1. Quantitative comparison on DTU [23]. The Chamfer distance between the 3D reconstruction and the ground-truth is reported in 15 scenes. The best results are bolded, and the average numbers of primitives found (#P) that are greater than 10 are underlined.
<table><tr><td>Method</td><td>Part-aware</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Time ↓</td></tr><tr><td>Neuralangelo [30]</td><td>✗</td><td>0.61</td><td>33.84</td><td>-</td><td>-</td><td>> 10 h</td></tr><tr><td>2DGS [21]</td><td>✗</td><td>0.81</td><td>34.07</td><td>0.99</td><td>0.019</td><td>~ 10 m</td></tr><tr><td>PartNeRF [47]</td><td>✓</td><td>9.59</td><td>17.97</td><td>0.77</td><td>0.246</td><td>~ 8 h</td></tr><tr><td>DBW [36]</td><td>✓</td><td>4.73</td><td>16.44</td><td>0.75</td><td>0.201</td><td>~ 2 h</td></tr><tr><td>Ours (Block-level)</td><td>✓</td><td>4.19</td><td>19.84</td><td>0.82</td><td>0.189</td><td>~ 30 m</td></tr><tr><td>Ours (Point-level)</td><td>✓</td><td>0.98</td><td>35.04</td><td>0.99</td><td>0.015</td><td>~ 40 m</td></tr></table>
Table 2. Quantitative results on DTU [23]. Our method outperforms all part-aware approaches in image synthesis quality, reconstruction accuracy, and efficiency. Neuralangelo's results are from the original paper, with all times measured on an RTX 3090 GPU.
# 4.3. Evaluations
# 4.3.1. DTU and ShapeNet Benchmark
**Geometry Reconstruction.** In Tab. 1 and Tab. 3, we compare our geometry reconstruction with state-of-the-art shape decomposition methods in terms of Chamfer distance and training time on the DTU and ShapeNet datasets.
<table><tr><td rowspan="2">Method</td><td rowspan="2">Input</td><td colspan="4">Chamfer Distance ↓</td><td colspan="4">Primitives (#P)</td><td>Mean</td><td>Mean</td></tr><tr><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>CD</td><td>#P</td></tr><tr><td>EMS [32]</td><td>3D GT</td><td>3.40</td><td>6.92</td><td>19.0</td><td>2.02</td><td>9.4</td><td>7.88</td><td>10.3</td><td>8.4</td><td>7.84</td><td>9.0</td></tr><tr><td>MBF [41]</td><td>3D GT</td><td>2.83</td><td>2.18</td><td>1.59</td><td>2.32</td><td>10.85</td><td>13.9</td><td>13.4</td><td>14.3</td><td>2.21</td><td>13.1</td></tr><tr><td>PartNeRF [47]</td><td>Image</td><td>2.29</td><td>2.77</td><td>2.30</td><td>2.46</td><td>8.0</td><td>8.0</td><td>8.0</td><td>8.0</td><td>2.46</td><td>8.0</td></tr><tr><td>DBW [36]</td><td>Image</td><td>3.61</td><td>7.33</td><td>6.19</td><td>2.09</td><td>2.7</td><td>5.2</td><td>3.6</td><td>3.3</td><td>4.81</td><td>3.7</td></tr><tr><td>Ours (Block-level)</td><td>Image</td><td>2.47</td><td>2.15</td><td>2.32</td><td>1.78</td><td>3.9</td><td>6.6</td><td>7.6</td><td>5.0</td><td>2.18</td><td>5.8</td></tr><tr><td>Ours (Point-level)</td><td>Image</td><td>1.29</td><td>1.72</td><td>0.94</td><td>1.07</td><td>3.9</td><td>6.6</td><td>7.6</td><td>5.0</td><td>1.25</td><td>5.8</td></tr></table>
Table 3. Quantitative comparison on ShapeNet [6]. We report Chamfer distance and the number of parts. The best results are bolded, and the second-best results are underlined.
The Chamfer distance metrics reported for ShapeNet are scaled by a factor of 100 for readability. Our method consistently outperforms prior works in all scenes. As shown in Fig. 3, our approach consistently produces interpretable 3D decompositions, with further refinement achieving more detailed geometry (the last two columns). MBF [41] achieves a low CD error but at the cost of using significantly more primitives,
<table><tr><td rowspan="2">Method</td><td rowspan="2">Part-aware</td><td colspan="4">Chamfer Distance ↓</td><td colspan="4">PSNR ↑</td><td colspan="4">SSIM ↑</td><td colspan="4">LPIPS ↓</td></tr><tr><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td><td>Airplane</td><td>Table</td><td>Chair</td><td>Gun</td></tr><tr><td>2DGS [21]</td><td>✗</td><td>1.47</td><td>2.37</td><td>0.50</td><td>1.03</td><td>40.89</td><td>39.80</td><td>39.05</td><td>41.74</td><td>0.994</td><td>0.990</td><td>0.990</td><td>0.994</td><td>0.009</td><td>0.027</td><td>0.017</td><td>0.009</td></tr><tr><td>PartNeRF [47]</td><td>✓</td><td>2.29</td><td>2.77</td><td>2.30</td><td>2.46</td><td>19.63</td><td>20.66</td><td>19.08</td><td>21.97</td><td>0.898</td><td>0.855</td><td>0.875</td><td>0.916</td><td>0.086</td><td>0.161</td><td>0.136</td><td>0.083</td></tr><tr><td>DBW [36]</td><td>✓</td><td>3.61</td><td>7.33</td><td>6.19</td><td>2.09</td><td>26.11</td><td>23.84</td><td>20.25</td><td>28.72</td><td>0.950</td><td>0.915</td><td>0.892</td><td>0.960</td><td>0.074</td><td>0.136</td><td>0.132</td><td>0.042</td></tr><tr><td>Ours (Block-level)</td><td>✓</td><td>2.47</td><td>2.15</td><td>2.32</td><td>1.78</td><td>27.94</td><td>27.98</td><td>24.92</td><td>29.95</td><td>0.959</td><td>0.925</td><td>0.906</td><td>0.963</td><td>0.072</td><td>0.129</td><td>0.122</td><td>0.047</td></tr><tr><td>Ours (Point-level)</td><td>✓</td><td>1.29</td><td>1.72</td><td>0.94</td><td>1.07</td><td>41.18</td><td>36.80</td><td>36.07</td><td>39.51</td><td>0.992</td><td>0.973</td><td>0.977</td><td>0.989</td><td>0.014</td><td>0.070</td><td>0.038</td><td>0.021</td></tr></table>

Figure 4. Qualitative results on BlendedMVS [61] and self-captured data. We present RGB renderings and decomposed parts from novel views. The top examples are from the BlendedMVS dataset, and the last example is from our captured scenes.



Figure 6. Qualitative comparison with SAM-based methods. Visual results demonstrate that our method produces more structurally coherent decompositions, whereas SAM-based approaches frequently exhibit spatial discontinuities.

Figure 5. Ablation studies on key strategies. The block-level visual comparisons illustrate the impact of adopting our proposed strategy. The first row shows results without the strategy, and the second with the strategy implemented.
leading to over-decomposition and the inclusion of meaningless parts. Moreover, as shown in Tab. 2 and Tab. 4, our approach achieves results competitive with the advanced Gaussian-based 2DGS [21] and the surface reconstruction method Neuralangelo [30], both of which are limited to producing unstructured meshes. Notably, our model is highly efficient, reconstructing roughly $10\times$ faster than PartNeRF [47] and over $3\times$ faster than DBW [36].
**Appearance Reconstruction.** In addition to part decomposition, our method enables high-fidelity image synthesis, attributed to integrating Gaussian splatting within the hybrid representation.




Table 4. Quantitative results on ShapeNet [6]. We report the Chamfer distance and novel view synthesis results across four categories.
<table><tr><td rowspan="2">Method</td><td colspan="4">Block-level</td><td colspan="4">Point-level</td><td rowspan="2">#P</td></tr><tr><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>Complete model</td><td>2.32</td><td>24.92</td><td>0.906</td><td>0.122</td><td>0.94</td><td>36.07</td><td>0.977</td><td>0.038</td><td>6.6</td></tr><tr><td>w/o $\mathcal{L}_{\mathrm{over}}$</td><td>2.11</td><td>25.17</td><td>0.914</td><td>0.118</td><td>1.34</td><td>36.04</td><td>0.977</td><td>0.037</td><td>7.6</td></tr><tr><td>w/o $\mathcal{L}_{\mathrm{opa}}$</td><td>5.72</td><td>21.12</td><td>0.853</td><td>0.174</td><td>1.06</td><td>35.60</td><td>0.975</td><td>0.040</td><td>3.8</td></tr><tr><td>w/o $\mathcal{L}_{\mathrm{cov}}$</td><td>3.21</td><td>22.96</td><td>0.896</td><td>0.134</td><td>1.34</td><td>36.00</td><td>0.976</td><td>0.038</td><td>8.5</td></tr><tr><td>w/o Adaptive</td><td>3.16</td><td>22.74</td><td>0.880</td><td>0.141</td><td>1.05</td><td>35.74</td><td>0.974</td><td>0.040</td><td>6.7</td></tr><tr><td>w/o $\mathcal{L}_{\mathrm{par}}$</td><td>1.91</td><td>25.48</td><td>0.918</td><td>0.115</td><td>0.93</td><td>36.18</td><td>0.977</td><td>0.037</td><td>10.1</td></tr></table>
Table 5. Ablation studies on the ShapeNet [6]. We report Chamfer Distance, rendering metrics, and the number of primitives (#P).
EMS [32] and MBF [41] operate directly on point clouds and consequently lack image rendering capabilities, while PartNeRF [47] and DBW [36] yield low-quality view synthesis. In contrast, our method achieves high-quality appearance rendering, as demonstrated in Tab. 2 and Tab. 4.
# 4.3.2. Real-life Data
To further demonstrate the applicability of our method for learning shape decomposition, we test our model on the real-life images from the BlendedMVS dataset [61] and a self-captured dataset. As shown in Fig. 4, our approach can robustly produce both realistic appearances and reasonable 3D decompositions across a variety of data types. More results are provided in the supplementary material.
# 4.3.3. Compared to SAM-based Methods
We also compare with SAM-based [27] methods for this task, as shown in Fig. 6. Despite impressive advances in 3D segmentation and editing [5, 26, 62, 63] achieved by

Figure 7. Impact of Gaussian scale regularization (SR). The degraded mesh produced by 2DGS [21] is effectively improved.

Figure 8. Impact of the initial number of primitives $(M)$ . Increasing $M$ yields a finer-grained decomposition, while decreasing it produces a coarser decomposition.




Figure 9. Applications. Part-Aware Editing (top): After optimization, we can easily edit the scene by adding, scaling, or moving specific parts. 3D Content Generation (bottom): By combining parts of different objects, we can create new 3D content.
<table><tr><td rowspan="2">Method</td><td colspan="4">Block-level</td><td colspan="4">Point-level</td><td rowspan="2">#P</td></tr><tr><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>CD ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>λpar=0.1</td><td>6.46</td><td>16.10</td><td>0.721</td><td>0.255</td><td>1.08</td><td>34.30</td><td>0.987</td><td>0.018</td><td>2.7</td></tr><tr><td>λpar=0.01 (*)</td><td>4.19</td><td>19.84</td><td>0.820</td><td>0.189</td><td>1.05</td><td>35.04</td><td>0.988</td><td>0.015</td><td>5.9</td></tr><tr><td>λpar=0.001</td><td>4.01</td><td>20.18</td><td>0.835</td><td>0.180</td><td>1.03</td><td>35.04</td><td>0.988</td><td>0.015</td><td>7.1</td></tr><tr><td>M=4</td><td>4.97</td><td>19.06</td><td>0.794</td><td>0.213</td><td>1.06</td><td>34.93</td><td>0.987</td><td>0.016</td><td>5.0</td></tr><tr><td>M=8 (*)</td><td>4.19</td><td>19.84</td><td>0.820</td><td>0.189</td><td>1.05</td><td>35.04</td><td>0.988</td><td>0.015</td><td>5.9</td></tr><tr><td>M=16</td><td>3.99</td><td>20.87</td><td>0.83</td><td>0.176</td><td>1.07</td><td>35.05</td><td>0.988</td><td>0.015</td><td>8.3</td></tr></table>
Table 6. Effect of parsimony weight $(\lambda_{\mathrm{par}})$ and initial primitives count $(M)$ on DTU [23]. We report Chamfer distance, rendering metrics, and primitives count (#P). * denotes the default setting.
distilling 2D information, such semantic disentanglement remains challenging due to indistinct textures and severe cross-view 2D inconsistencies. Lifting 2D segmentation features into 3D also causes feature misalignment, which produces visual artifacts. In contrast, our method segments directly in 3D space, yielding more reasonable and structurally coherent decompositions.
# 4.4. Ablations
In this section, we first analyze our design choices by isolating each strategy to assess its impact. We report the averaged quantitative performance on 15 instances of the chair category in Tab. 5 and provide visual comparisons in Fig. 5. Removing the overlap loss yields better reconstruction accuracy but more overlapping parts. Removing the opacity loss drastically affects the number of primitives and doubles the CD, highlighting its role in controlling primitive presence and reconstruction quality. The coverage loss is crucial for reconstruction accuracy, ensuring primitives align correctly with the target. The adaptive primitive strategy fills in missing parts and improves the reconstruction metric. Without the parsimony loss, CD and rendering metrics are optimal but the result is over-decomposed. Note that the full model does not yield the highest accuracy because part-aware reconstruction trades off accuracy against the rationality of the part decomposition; the objective is to achieve high accuracy while preserving reasonable parts.
We also validate the effectiveness of Gaussian scale regularization in Fig. 7. 2DGS [21] tends to use large-scale Gaussians in texture-less areas, creating holes in the mesh extracted by TSDF integration [67]. These holes stem from the inability of large-scale Gaussians to maintain view-consistent depth. Scale regularization effectively addresses this issue by suppressing large-scale Gaussians.
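The exact form of the regularizer is not spelled out in this section, so the sketch below assumes a simple hinge on each Gaussian's largest axis scale; the threshold `s_max` and the function name are hypothetical:

```python
import numpy as np

def scale_regularization(scales, s_max=0.02):
    """Hypothetical sketch: penalise each Gaussian whose largest axis scale
    (scales is (N, 2) for 2D Gaussians) exceeds a threshold s_max,
    suppressing the large texture-less Gaussians that break
    view-consistent depth."""
    return float(np.mean(np.maximum(scales.max(axis=1) - s_max, 0.0)))
```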
Lastly, in Tab. 6, we analyze the influence of two key hyperparameters on decomposition granularity: the parsimony loss weight $\lambda_{\mathrm{par}}$ and the initial primitive count $M$ . Stronger parsimony regularization reduces the number of primitives, while weaker regularization increases it. As $M$ rises, both reconstruction and view synthesis performance slightly improve. This demonstrates that adjusting $M$ and $\lambda_{\mathrm{par}}$ enables our method to effectively control the granularity of object or scene decomposition. Fig. 8 visually illustrates this impact.
# 4.5. Applications
Fig. 9 illustrates two applications of our method that original 3DGS- and NeRF-based approaches do not support. First, after optimization we obtain the part decomposition, facilitating easy editing of specific objects or scene components, e.g., adding, moving, removing, or scaling parts. Second, by combining parts of different objects, our method enables the creation of new high-quality 3D content.
# 5. Conclusions
We introduce PartGS, a hybrid representation of superquadrics and 2D Gaussians that learns part-aware representations of 3D scenes. Compared to prior works, the proposed method retains geometric details and supports high-quality image rendering, obtaining state-of-the-art performance in comprehensive evaluations. One limitation is that it requires background-free scene images as input, obtained with segmentation tools such as SAM [27]. In the future, we aim to explore how to model backgrounds and extend to larger and more complex scenes.
# Acknowledgements
This work is supported in part by the NSFC (62325211, 62132021, 62372457), the Major Program of Xiangjiang Laboratory (23XJ01009), Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), the Natural Science Foundation of Hunan Province of China (2022RC1104).
# References
[1] Stephan Alaniz, Massimiliano Mancini, and Zeynep Akata. Iterative superquadric decomposition of 3d objects from multiple views. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 18013-18023, 2023. 2
[2] Alan H. Barr. Superquadrics and Angle-Preserving Transformations. IEEE Computer Graphics and Applications, 1981. 1, 2, 3, 4
[3] Thomas Binford. Visual Perception by Computer. In IEEE Conference on Systems and Control, 1971. 2
[4] Åke Björck. Numerics of gram-schmidt orthogonalization. Linear Algebra and Its Applications, 197:297-316, 1994. 4
[5] Jiazhong Cen, Jiemin Fang, Chen Yang, Lingxi Xie, Xiaopeng Zhang, Wei Shen, and Qi Tian. Segment any 3d gaussians. arXiv preprint arXiv:2312.00860, 2023. 7
[6] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. arXiv:1512.03012 [cs.CV], 2015. 5, 6, 7
[7] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023. 2
[8] Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. Bsp-net: Generating compact meshes via binary space partitioning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
[9] Zhiqin Chen, Qimin Chen, Hang Zhou, and Hao Zhang. Dae-net: Deforming auto-encoder for fine-grained shape co-segmentation. arXiv preprint arXiv:2311.13125, 2023. 2
[10] Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi. Cvxnet: Learnable convex decomposition. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
[11] Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 1981. 2
[12] Huachen Gao, Shihe Shen, Zhe Zhang, Kaiqiang Xiong, Rui Peng, Zhirui Gao, Qi Wang, Yugui Xie, and Ronggang Wang. Fdc-nerf: learning pose-free neural radiance fields with flow-depth consistency. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3615-3619. IEEE, 2024. 1
[13] Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, and Hao Zhang. Sdm-net: Deep generative network for structured deformable mesh. ACM Transactions on Graphics (TOG), 38(6):1-15, 2019. 2
[14] Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, and Yu-Kun Lai. Mesh-based gaussian splatting for real-time large-scale deformation. arXiv preprint arXiv:2402.04796, 2024. 2, 3
[15] Zhirui Gao, Renjiao Yi, Zheng Qin, Yunfan Ye, Chenyang Zhu, and Kai Xu. Learning accurate template matching with differentiable coarse-to-fine correspondence refinement. Computational Visual Media, 10(2):309-330, 2024. 2
[16] Zhirui Gao, Renjiao Yi, Yaqiao Dai, Xuening Zhu, Wei Chen, Chenyang Zhu, and Kai Xu. Curve-aware gaussian splatting for 3d parametric curve reconstruction, 2025. 2
[17] Zhirui Gao, Renjiao Yi, Chenyang Zhu, Ke Zhuang, Wei Chen, and Kai Xu. Generic objects as pose probes for few-shot view synthesis. IEEE Transactions on Circuits and Systems for Video Technology, 2025. 1
[18] Zeqi Gu, Yin Cui, Zhaoshuo Li, Fangyin Wei, Yunhao Ge, Jinwei Gu, Ming-Yu Liu, Abe Davis, and Yifan Ding. Artiscene: Language-driven artistic 3d scene generation through image intermediary. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 2891-2901, 2025. 1
[19] Yanran Guan, Han Liu, Kun Liu, Kangxue Yin, Ruizhen Hu, Oliver van Kaick, Yan Zhang, Ersin Yumer, Nathan Carr, Radomir Mech, et al. Fame: 3d shape generation via functionality-aware model evolution. IEEE Transactions on Visualization and Computer Graphics, 28(4):1758-1772, 2020. 2
[20] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. arXiv preprint arXiv:2311.12775, 2023. 2, 3, 4
[21] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. SIGGRAPH, 2024. 2, 3, 5, 6, 7, 8
[22] Ka-Hei Hui, Ruihui Li, Jingyu Hu, and Chi-Wing Fu. Neural template: Topology-aware reconstruction and disentangled generation of 3d meshes. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 18572-18582, 2022. 2
[23] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engin Tola, and Henrik Aanaes. Large Scale Multi-view Stereopsis Evaluation. In CVPR, 2014. 5, 6, 8
[24] Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, and Na Zhao. Gaussianblock: Building part-aware compositional and editable 3d scene by primitives and gaussians. arXiv preprint arXiv:2410.01535, 2024. 2
[25] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):1-14, 2023. 2, 4
[26] Chung Min* Kim, Mingxuan* Wu, Justin* Kerr, Matthew Tancik, Ken Goldberg, and Angjoo Kanazawa. Garfield: Group anything with radiance fields. In Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 7
[27] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023. 2, 7, 8
[28] Jun Li, Kai Xu, Siddhartha Chaudhuri, Ersin Yumer, Hao Zhang, and Leonidas Guibas. Grass: Generative recursive autoencoders for shape structures. ACM Transactions on Graphics (TOG), 36(4):1-14, 2017. 1
[29] Lingxiao Li, Minhyuk Sung, Anastasia Dubrovina, Li Yi, and Leonidas J Guibas. Supervised Fitting of Geometric Primitives to 3D Point Clouds. In CVPR, 2019. 1
[30] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8456-8465, 2023. 6, 7
[31] Di Liu, Xiang Yu, Meng Ye, Qilong Zhangli, Zhuowei Li, Zhixing Zhang, and Dimitris N Metaxas. Deformer: Integrating transformers with deformable models for 3d shape abstraction from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14236-14246, 2023. 2
|
| 350 |
+
[32] Weixiao Liu, Yuwei Wu, Sipu Ruan, and Gregory S Chirikjian. Robust and Accurate Superquadric Recovery: a Probabilistic Approach. In CVPR, 2022. 1, 2, 3, 6, 7
|
| 351 |
+
[33] Romain Loiseau, Elliot Vincent, Mathieu Aubry, and Loic Landrieu. Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans. arXiv:2304.09704 [cs.CV], 2023. 1
|
| 352 |
+
[34] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV, 2020. 4
|
| 353 |
+
[35] Niloy Mitra, Michael Wand, Hao (Richard) Zhang, Daniel Cohen-Or, Vladimir Kim, and Qi-Xing Huang. Structure-aware shape processing. In SIGGRAPH Asia 2013 Courses, 2013. 1
|
| 354 |
+
[36] Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. Efros, and Mathieu Aubry. Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives. In NeurIPS, 2023. 1, 2, 3, 4, 6, 7
|
| 355 |
+
[37] Chengjie Niu, Jun Li, and Kai Xu. Im2struct: Recovering 3d shape structure from a single rgb image. Cornell University - arXiv, Cornell University - arXiv, 2018. 1
|
| 356 |
+
[38] Despoina Paschalidou, Ali Osman Ulusoy, and Andreas Geiger. Superquadrics Revisited: Learning 3D Shape Parsing Beyond Cuboids. In CVPR, 2019. 2
|
| 357 |
+
[39] Despoina Paschalidou, Luc Van Gool, and Andreas Geiger. Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image. In CVPR, 2020. 2
|
| 358 |
+
[40] Despoina Paschalidou, Angelos Katharopoulos, Andreas Geiger, and Sanja Fidler. Neural parts: Learning expressive 3d shape abstractions with invertible neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3204-3215, 2021. 2
|
| 359 |
+
|
| 360 |
+
[41] Michael Ramamonjisoa, Sinisa Stekovic, and Vincent Lepetit. MonteBoxFinder: Detecting and Filtering Primitives to Fit a Noisy Point Cloud. In ECCV, 2022. 2, 3, 6, 7
|
| 361 |
+
[42] Lawrence G. Roberts. Machine perception of three-dimensional solids. PhD thesis, Massachusetts Institute of Technology, 1963. 2
|
| 362 |
+
[43] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 5
|
| 363 |
+
[44] Erich Schubert, Jörg Sander, Martin Ester, Hans Peter Kriegel, and Xiaowei Xu. Dbscan revisited, revisited: why and how you should (still) use dbscan. ACM Transactions on Database Systems (TODS), 42(3):1-21, 2017. 5
|
| 364 |
+
[45] Qingyao Shuai, Chi Zhang, Kaizhi Yang, and Xuejin Chen. Dpf-net: combining explicit shape priors in deformable primitive field for unsupervised structural reconstruction of 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14321-14329, 2023. 2
|
| 365 |
+
[46] Chunyu Sun, Yuqi Yang, Haoxiang Guo, Pengshuai Wang, Xin Tong, Yang Liu, and Heung-Yeung Shum. Semi-supervised 3d shape segmentation with multilevel consistency and part substitution. Computational Visual Media, 2022. 2
|
| 366 |
+
[47] Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, and Leonidas Guibas. PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision. In CVPR, 2023. 1, 2, 6, 7
|
| 367 |
+
[48] Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, and Jitendra Malik. Learning Shape Abstractions by Assembling Volumetric Primitives. In CVPR, 2017. 2
|
| 368 |
+
[49] Joanna Waczyńska, Piotr Borycki, Sławomir Tadeja, Jacek Tabor, and Przemysław Spurek. Games: Mesh-based adapting and modification of gaussian splatting. arXiv preprint arXiv:2402.01459, 2024. 2, 3, 4
|
| 369 |
+
[50] Xinhang Wan, Jiyuan Liu, Xinbiao Gan, Xinwang Liu, Siwei Wang, Yi Wen, Tianjiao Wan, and En Zhu. One-step multiview clustering with diverse representation. IEEE Transactions on Neural Networks and Learning Systems, pages 1-13, 2024. 2
|
| 370 |
+
[51] Xinhang Wan, Jiyuan Liu, Hao Yu, Qian Qu, Ao Li, Xinwang Liu, Ke Liang, Zhibin Dong, and En Zhu. Contrastive continual multiview clustering with filtered structural fusion. IEEE Transactions on Neural Networks and Learning Systems, 2024. 2
|
| 371 |
+
[52] Fengxiang Wang, Mingshuo Chen, Yueying Li, Di Wang, Haotian Wang, Zonghao Guo, Zefan Wang, Boqi Shan, Long Lan, Yulin Wang, et al. Geollava-8k: Scaling remote-sensing multimodal large language models to 8k resolution. arXiv preprint arXiv:2505.21375, 2025. 1
|
| 372 |
+
[53] Fengxiang Wang, Hongzhen Wang, Zonghao Guo, Di Wang, Yulin Wang, Mingshuo Chen, Qiang Ma, Long Lan, Wenjing Yang, Jing Zhang, et al. Xlrs-bench: Could your multimodal llms understand extremely large ultra-high-resolution remote sensing imagery? In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 14325-14336, 2025. 1
|
| 373 |
+
[54] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning Neural Implicit
|
| 374 |
+
|
| 375 |
+
Surfaces by Volume Rendering for Multi-view Reconstruction. In NeurIPS, 2021. 1, 6
|
| 376 |
+
[55] Xiaogang Wang, Yuelang Xu, Kai Xu, Andrea Tagliafaschi, Bin Zhou, Ali Mahdavi-Amiri, and Hao Zhang. Pie-net: Parametric inference of point cloud edges. Advances in neural information processing systems, 33:20167-20178, 2020. 1
|
| 377 |
+
[56] Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, and Kai Xu. Vastsd: Learning 3d vascular tree-state space diffusion model for angiography synthesis. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 15693-15702, 2025. 1
|
| 378 |
+
[57] Tong Wu, Lin Gao, Ling-Xiao Zhang, Yu-Kun Lai, and Hao Zhang. Star-tm: Structure aware reconstruction of textured mesh from single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-14, 2021. 2
|
| 379 |
+
[58] Yuwei Wu, Weixiao Liu, Sipu Ruan, and Gregory S. Chirikjian. Primitive-based Shape Abstraction via Nonparametric Bayesian Inference. In ECCV, 2022. 2
|
| 380 |
+
[59] Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, and En Zhu. Interpolation-based contrastive learning for few-label semi-supervised learning. IEEE Transactions on Neural Networks and Learning Systems, 35(2):2054-2065, 2022. 2
|
| 381 |
+
[60] Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Wenxuan Tu, Qun Zheng, Xinwang Liu, Liming Fang, and En Zhu. Cluster-guided contrastive graph clustering network. In Proceedings of the AAAI conference on artificial intelligence, pages 10834-10842, 2023. 2
|
| 382 |
+
[61] Yao Yao, Zixin Luo, Shiwei Li, Jingyang Zhang, Yufan Ren, Lei Zhou, Tian Fang, and Long Quan. BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks. In CVPR, 2020. 5, 7
|
| 383 |
+
[62] Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. In ECCV, 2024. 7
|
| 384 |
+
[63] Haiyang Ying, Yixuan Yin, Jinzhi Zhang, Fan Wang, Tao Yu, Ruqi Huang, and Lu Fang. Omniseg3d: Omniversal 3d segmentation via hierarchical contrastive learning. arXiv preprint arXiv:2311.11666, 2023. 7
|
| 385 |
+
[64] Fenggen Yu, Kun Liu, Yan Zhang, Chenyang Zhu, and Kai Xu. Partnet: A recursive part decomposition network for fine-grained and hierarchical shape segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9491-9500, 2019. 3
|
| 386 |
+
[65] Fenggen Yu, Yimin Qian, Xu Zhang, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, and Hao Zhang. Dpa-net: Structured 3d abstraction from sparse views via differentiable primitive assembly. arXiv preprint arXiv:2404.00875, 2024. 1, 2
|
| 387 |
+
[66] Hongyi Zhou, Xiaogang Wang, Yulan Guo, and Kai Xu. Monomobility: Zero-shot 3d mobility analysis from monocular videos, 2025. 2
|
| 388 |
+
[67] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3D: A modern library for 3D data processing. arXiv:1801.09847, 2018.8
|
| 389 |
+
[68] Chenyang Zhu, Kai Xu, Siddhartha Chaudhuri, Li Yi, Leonidas J Guibas, and Hao Zhang. Adacoseg: Adaptive
|
| 390 |
+
|
| 391 |
+
shape co-segmentation with group consistency loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543-8552, 2020. 1
|
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7ffe0eaf2e7929ffb0ad805cbde11c23f913de25a5c5f47f8d7b5b44d009184
+size 756809
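The `images.zip` entry above is stored as a Git LFS pointer, so the diff only records the blob's SHA-256 (`oid`) and byte size rather than the archive itself. A minimal sketch for checking a locally downloaded file against those two pointer fields (`verify_lfs_pointer` is an illustrative helper, not part of any dataset tooling):

```python
import hashlib
import os

def verify_lfs_pointer(path, expected_oid, expected_size):
    """Compare a local file against the oid/size fields of a Git LFS pointer."""
    # Cheap check first: the pointer records the exact byte count.
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large archives need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid
```

For the archive above one would call it with the `oid sha256:...` hex string and `size 756809` taken verbatim from the pointer.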
2025/Self-supervised Learning of Hybrid Part-aware 3D Representations of 2D Gaussians and Superquadrics/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_content_list.json
ADDED
@@ -0,0 +1,1576 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "SemGes: Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
125,
|
| 8 |
+
130,
|
| 9 |
+
870,
|
| 10 |
+
176
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Lanmiao Liu $^{1,2,3}$ Esam Ghaleb $^{1,2}$ Asli Özyurek $^{1,2}$ and Zerrin Yumak $^{3}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
235,
|
| 19 |
+
202,
|
| 20 |
+
795,
|
| 21 |
+
220
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Max Planck Institute for Psycholinguistics $^{2}$ Donders Institute for Brain Cognition and Behaviour $^{3}$ Utrecht University {lanmiao.liu, esam.ghaleb, asli.ozyurek}@mpi.nl z.yumak@uu.nl",
|
| 28 |
+
"bbox": [
|
| 29 |
+
122,
|
| 30 |
+
222,
|
| 31 |
+
910,
|
| 32 |
+
256
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
258,
|
| 42 |
+
291,
|
| 43 |
+
336,
|
| 44 |
+
306
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Creating a virtual avatar with semantically coherent gestures that are aligned with speech is a challenging task. Existing gesture generation research mainly focused on generating rhythmic beat gestures, neglecting the semantic context of the gestures. In this paper, we propose a novel approach for semantic grounding in co-speech gesture generation that integrates semantic information at both fine-grained and global levels. Our approach starts with learning the motion prior through a vector-quantized variational autoencoder. Built on this model, a second-stage module is applied to automatically generate gestures from speech, text-based semantics and speaker identity that ensures consistency between the semantic relevance of generated gestures and co-occurring speech semantics through semantic coherence and relevance modules. Experimental results demonstrate that our approach enhances the realism and coherence of semantic gestures. Extensive experiments and user studies show that our method outperforms state-of-the-art approaches across two benchmarks in co-speech gesture generation in both objective and subjective metrics. The qualitative results of our model, code, dataset and pre-trained models can be viewed at https://semgesture.github.io/.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
109,
|
| 53 |
+
324,
|
| 54 |
+
485,
|
| 55 |
+
686
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1. Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
112,
|
| 65 |
+
718,
|
| 66 |
+
243,
|
| 67 |
+
734
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Human language is inherently multimodal, with gestures and speech complementing each other to convey pragmatic and semantic information [21, 35]. Cospeech gestures are non-verbal cues that are uniquely related to co-occurring speech, pragmatically, semantically, and temporally. For example, representational iconic gestures that visually express the semantic content of speech and interact with spoken language [13, 14, 19, 21, 41]. A long-standing goal in Computer Vision is to create digital humans that use non-verbal cues in sync with speech. Gesture generation—synthesizing",
|
| 74 |
+
"bbox": [
|
| 75 |
+
109,
|
| 76 |
+
744,
|
| 77 |
+
483,
|
| 78 |
+
912
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/cd16a7daec2693e07ed1d0fd1d4ecb8a33f5cb99f942f7a9f8240ae716d35d3e.jpg",
|
| 85 |
+
"image_caption": [
|
| 86 |
+
"Figure 1. SemGes integrates audio, text-based semantics, and speaker identity to produce both contextually relevant (discourse-level) and fine-grained (local) gestures. A semantic coherence module aligns text and motion embeddings. The multimodal consistency loss synchronizes the quantized multimodal representations to match the quantized learned motion features for final speech-driven semantics-aware gesture generation. The semantic relevance loss selectively emphasizes gestures with semantic annotations."
|
| 87 |
+
],
|
| 88 |
+
"image_footnote": [],
|
| 89 |
+
"bbox": [
|
| 90 |
+
535,
|
| 91 |
+
299,
|
| 92 |
+
867,
|
| 93 |
+
455
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "text",
|
| 99 |
+
"text": "movements from co-occurring speech, masked motion, or speaker identity—has advanced to enhance AI agents' expressiveness and realism [29]. However, much of the focus has gone into generating rhythmic beat gestures with limited semantic information, leaving representational gestures that convey semantic messages (e.g., iconic) less explored [29, 40].",
|
| 100 |
+
"bbox": [
|
| 101 |
+
511,
|
| 102 |
+
625,
|
| 103 |
+
883,
|
| 104 |
+
729
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 0
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "Generating spontaneous and semantically rich gestures from speech comes with multiple challenges. First, it requires capturing global discourse-level information and local fine-grained details (e.g., salient words) to generate speech-driven gestures that reflect the intended meaning and align with speech temporally and semantically. Second, existing methods often generate repetitive and short sequences that do not span the full range of expressive motions required for natural communication. To leverage semantics when generating gestures, researchers have attempted to align motion with speech representations at a global level, e.g., by leveraging pre",
|
| 111 |
+
"bbox": [
|
| 112 |
+
511,
|
| 113 |
+
731,
|
| 114 |
+
883,
|
| 115 |
+
912
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "header",
|
| 121 |
+
"text": "CVF",
|
| 122 |
+
"bbox": [
|
| 123 |
+
106,
|
| 124 |
+
2,
|
| 125 |
+
181,
|
| 126 |
+
42
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "header",
|
| 132 |
+
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
|
| 133 |
+
"bbox": [
|
| 134 |
+
238,
|
| 135 |
+
0,
|
| 136 |
+
807,
|
| 137 |
+
46
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "page_number",
|
| 143 |
+
"text": "13963",
|
| 144 |
+
"bbox": [
|
| 145 |
+
480,
|
| 146 |
+
950,
|
| 147 |
+
517,
|
| 148 |
+
963
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "trained semantic representations such as CLIP [58] or focusing on semantically important keywords [6, 57]. Nonetheless, they often fail to (i) unify global and local semantic modelling within a single framework and (ii) exploit the relevance of the semantic information in guiding gesture generation [27]. At the same time, raw audio features and speaker identity are relevant to the timing and style of gestures. In this paper, we address these limitations by integrating speech, speech semantics, and gesturing style, exploiting semantic information at different levels.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
109,
|
| 157 |
+
90,
|
| 158 |
+
480,
|
| 159 |
+
256
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 1
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "Specifically, we propose a two-stage framework, namely, SemGes, that integrates speech, text-based semantics, and speaker identity into a unified gesture-generation model (see Figure 1). In stage 1, we build motion prior of holistic gestures (i.e., body and hands) by training a vector-quantized variational autoencoder (VQ-VAE) to learn an efficient, compositional motion latent space. This stage results in a robust motion encoder & decoder and quantized codebooks that can reconstruct naturalistic gestures while allowing the reuse of learned codebook entries. Stage 2 leverages the learned motion priors to drive gesture synthesis by fusing three modalities using a cross-modal Transformer encoder: (i) text-based semantics, (ii) raw-audio speech features, and (iii) speaker identity for style consistency. We impose a semantic coherence loss that aligns text-based embeddings with the VQ-VAE motion latent space and a semantic relevance loss that emphasises representational gestures (e.g. iconic and metaphoric gestures). A multimodal consistency objective ensures the fused multimodal representations are compatible with the learned motion codebooks, enabling the generation of gestures that are both semantically rich and visually natural. Finally, we introduce a simple but effective long-sequence inference strategy that smoothly combines overlapping motion clips for extended durations. To summarize our contributions,",
|
| 166 |
+
"bbox": [
|
| 167 |
+
109,
|
| 168 |
+
260,
|
| 169 |
+
482,
|
| 170 |
+
667
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 1
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "list",
|
| 176 |
+
"sub_type": "text",
|
| 177 |
+
"list_items": [
|
| 178 |
+
"- We introduce a novel framework, SemGes, that first learns a robust VQ-VAE motion prior for body and hand gestures, and then generates gestures driven by fused speech audio, text-based semantics, and speaker identity in a cross-modal transformer.",
|
| 179 |
+
"- Our method jointly captures discourse-level context via a semantic coherence loss and fine-grained representational gestures (e.g., iconic, metaphoric) via a semantic relevance loss.",
|
| 180 |
+
"- We propose an overlap-and-combine inference algorithm that maintains smooth continuity over extended durations.",
|
| 181 |
+
"- Extensive experiments on two benchmarks, namely, the BEAT [27] and TED Expressive [33] datasets show that our method outperforms recent baselines in both objective metrics (e.g., Fréchet Gesture Distance"
|
| 182 |
+
],
|
| 183 |
+
"bbox": [
|
| 184 |
+
112,
|
| 185 |
+
670,
|
| 186 |
+
482,
|
| 187 |
+
911
|
| 188 |
+
],
|
| 189 |
+
"page_idx": 1
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"type": "text",
|
| 193 |
+
"text": "(FGD), diversity, semantic alignment) and user judgment of generated gestures.",
|
| 194 |
+
"bbox": [
|
| 195 |
+
527,
|
| 196 |
+
90,
|
| 197 |
+
880,
|
| 198 |
+
121
|
| 199 |
+
],
|
| 200 |
+
"page_idx": 1
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"type": "text",
|
| 204 |
+
"text": "2. Related Work",
|
| 205 |
+
"text_level": 1,
|
| 206 |
+
"bbox": [
|
| 207 |
+
513,
|
| 208 |
+
137,
|
| 209 |
+
653,
|
| 210 |
+
152
|
| 211 |
+
],
|
| 212 |
+
"page_idx": 1
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"text": "Data-driven Co-Speech Gesture Generation. Current gesture generation approaches are based on generative deep neural networks. These approaches use advanced models such as Transformers [28], Generative Adversarial Networks [23], Normalizing Flows [18, 30], Vector Quantized Variational Autoencoder(VQ-VAE) [16] and Denoising Diffusion Probabilistic Models [47]. In addition, researchers have explored the impact of different model inputs on the naturalness and appropriateness of generated gestures. Various modal inputs have been used, such as text [15], audio [50, 59], image [32, 39], and speaking style [3]. For a comprehensive survey, we refer to Nyatsanga et al. [40]. Although there have been significant improvements in this field, current methods fall short in generating semantically grounded gestures at a fine-grained level. In other words, while the generated motions look convincing at first glance, they do not match well with the meaning of the text, or they mostly focus on beat-type gestures.",
"bbox": [
511,
162,
883,
450
],
"page_idx": 1
},
{
"type": "text",
"text": "Semantics-aware Co-Speech Gesture Generation.",
"text_level": 1,
"bbox": [
511,
473,
880,
488
],
"page_idx": 1
},
{
"type": "text",
"text": "A group of work focused on semantics-aware gesture generation where the semantic information is handled in two ways: global semantics and local semantics. Methods that focus on global semantic information [10, 22, 58] align gestures with text or audio, but they fall short in generating gesture types matching the semantic context, such as iconic, metaphoric and deictic gestures. To capture a wider range of semantic gestures, works like [5, 6, 27] adopt local semantic-aware modelling by integrating the semantic salient words to the neural network. However, these approaches often fail to ensure that the generated gestures align with both the broader audio or textual context and a combination of global and local semantics. Liang et al. [26], Voß and Kopp [49] incorporate both global and local semantics, however, they require extensive annotations. Recently, Zhang et al. [57] employed a generative retrieval framework based on LLMs to address the sparsity problem in datasets with semantic gestures. However, they do not explicitly model the different types of gestures [27] or gesture phases [12] grounded in linguistic research. Moreover, there is still not enough understanding of the impact of different annotations and fine-grained semantics.",
"bbox": [
511,
489,
883,
848
],
"page_idx": 1
},
{
"type": "text",
"text": "Substantial research [6, 28, 57, 58] focused on two-stage latent space generative modelling to overcome the limitations of co-speech gesture generation and to generate more naturalistic and diverse gestures.",
"bbox": [
511,
851,
883,
912
],
"page_idx": 1
},
{
"type": "page_number",
"text": "13964",
"bbox": [
480,
950,
517,
962
],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/71af008c2730c688b5250cfcba0cceaa9c17e67ca54a1d376357b4a68f77ac40.jpg",
"image_caption": [
"Figure 2. We pre-train two VQ-VAEs by reconstructing body and hand motions with a dedicated codebook for each."
],
"image_footnote": [],
"bbox": [
130,
90,
467,
141
],
"page_idx": 2
},
{
"type": "text",
"text": "These approaches first learn a latent space and then model gestures probabilistically, effectively integrating the strengths of different methods in different stages. Liu et al. [28], Zhang et al. [57] capture complex dependencies in the latent space using VQ-VAE while Zhi et al. [58] employs CLIP [46, 56] to align text and motion embeddings. Ao et al. [6] introduces a diffusion-based model that leverages semantic awareness, while Liu et al. [28] utilizes a transformer-based approach to generate holistic body gestures. In contrast with two-stage generative modeling approaches, end-to-end methods such as [5, 59], are prone to jittering artifacts, especially in hand-motion generation.",
"bbox": [
109,
207,
483,
402
],
"page_idx": 2
},
{
"type": "text",
"text": "Our model contributes to the research line on semantics-aware co-speech gesture generation by taking into account both global and local semantics. Inspired by the previous work, we employ a two-stage latent space generative modelling for high-quality motion representation. We learn semantic coherence between text and gestures globally with cosine similarity. Moreover, our model takes into account the semantic relevancy of gesture types with minimally required annotations. In contrast with other semantic learning models, we focus on annotations with different gesture types embedded in linguistic research. Our work is closest to CAMN [27] in that sense; however, CAMN does not include semantic coherence learning by aligning text and gestures' latent space globally.",
"bbox": [
109,
404,
483,
630
],
"page_idx": 2
},
{
"type": "text",
"text": "3. Methodology",
"text_level": 1,
"bbox": [
112,
648,
246,
666
],
"page_idx": 2
},
{
"type": "text",
"text": "We propose a two-stage approach that generates cospeech gestures by grounding them in raw speech, text-based semantics, and speaker identity. In Section 3.1, we introduce a VQ-VAE encoder-decoder that learns a robust motion prior. Section 3.2 details our gesture synthesis and inference pipeline based on speech, semantics, and identity.",
"bbox": [
109,
674,
482,
780
],
"page_idx": 2
},
{
"type": "text",
"text": "Problem formulation. Our goal is to generate hand gestures $\\pmb{G}^{h} = (g_{1}^{h},\\dots,g_{T}^{h})\\in \\mathbb{R}^{T\\times J}$ and body gestures $\\pmb{G}^{b} = (g_{1}^{b},\\dots,g_{T}^{b})\\in \\mathbb{R}^{T\\times J}$ , where $T$ is the number of time steps and $J$ the number of joints (e.g., 38 for hands, 9 for body). Each motion vector $g_{t}^{h}$ or $g_{t}^{b}$ is encoded in a Rot6D representation, capturing joint rotations at time $t$ .",
"bbox": [
109,
805,
483,
910
],
"page_idx": 2
},
{
"type": "text",
"text": "To model human motion of body and hands, we first learn a motion generator $\\mathcal{M}_g$ (Stage 1), which synthesizes a plausible motion sequence:",
"bbox": [
511,
90,
883,
137
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\arg \\min _ {\\mathcal {M} _ {g}} \\left\\| \\boldsymbol {G} - \\mathcal {M} _ {g} \\left(g _ {1}, \\dots , g _ {T}\\right) \\right\\|. \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
586,
142,
883,
171
],
"page_idx": 2
},
{
"type": "text",
"text": "Next, we condition on (i) the raw input audio $\\mathbf{A} = (a_{1},\\ldots ,a_{T})$ , (ii) the speaker identity embedding $I$ , and (iii) the text-based semantic embeddings of the speech $S = (s_{1},\\dots ,s_{T})$ . Our second-stage model $\\mathcal{M}_{a,s,i}$ uses these inputs to generate a latent sequence that the motion generator $\\mathcal{M}_g$ then decodes into naturalistic gestures:",
"bbox": [
511,
178,
883,
270
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\arg \\min _ {\\mathcal {M} _ {a, s, i}} \\left\\| \\boldsymbol {G} - \\mathcal {M} _ {g} \\left(\\mathcal {M} _ {a, s, i} (\\boldsymbol {A}, \\boldsymbol {S}, I)\\right) \\right\\|. \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
584,
280,
883,
301
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1. Stage 1: Learning Efficient Codebooks & Compositional Motion Priors",
"text_level": 1,
"bbox": [
511,
309,
880,
339
],
"page_idx": 2
},
{
"type": "text",
"text": "Realistic co-speech gestures require modelling the sequential motion of both body and hand joints. Rather than learning a single representation for the entire body, we adopt a compositional approach, using a discrete codebook of learned representations specific to each part (hands & body). Any gesture motion can then be represented by selecting appropriate codebook entries. Following [28, 48, 53], we employ a VQ-VAE architecture (see Fig. 2) with encoder $\\mathcal{E}_m$ and decoder $\\mathcal{D}_m$ . Given hand motion $G^h \\in \\mathbb{R}^{T \\times J}$ and body motion $G^b \\in \\mathbb{R}^{T \\times J}$ , the encoder produces latent vectors $\\hat{z}^h$ and $\\hat{z}^b$ , which are quantized by selecting the nearest entries in the codebooks. Formally,",
"bbox": [
511,
345,
883,
542
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {q} (\\hat {\\boldsymbol {z}}) = \\arg \\min _ {z ^ {i} \\in \\mathcal {Z}} \\| \\hat {z} ^ {j} - z ^ {i} \\|, \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
607,
555,
883,
579
],
"page_idx": 2
},
{
"type": "text",
"text": "where $z^i$ are the learned codebook entries, and $\\hat{z}^j$ denotes an element of the latent vector for either hand or body. We train the VQ-VAE via a straight-through gradient estimator, minimizing:",
"bbox": [
511,
584,
883,
643
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {\\mathrm {V Q - V A E}} = \\left\\| \\mathbf {g} - \\hat {\\mathbf {g}} \\right\\| ^ {2} + \\left\\| \\dot {\\mathbf {g}} - \\hat {\\dot {\\mathbf {g}}} \\right\\| ^ {2} + \\left\\| \\ddot {\\mathbf {g}} - \\hat {\\ddot {\\mathbf {g}}} \\right\\| ^ {2} \\tag {4} \\\\ + \\left\\| \\operatorname {s g} [ \\mathbf {E} (\\mathbf {g}) ] - \\mathbf {q} (\\hat {z}) \\right\\| ^ {2} + \\left\\| \\mathbf {E} (\\mathbf {g}) - \\operatorname {s g} [ \\mathbf {q} (\\hat {z}) ] \\right\\| ^ {2}, \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [
545,
662,
883,
695
],
"page_idx": 2
},
{
"type": "text",
"text": "where the first three terms reconstruct joint positions, velocities, and accelerations, and the last two terms implement the VQ-VAE commitment loss [48].",
"bbox": [
511,
700,
880,
744
],
"page_idx": 2
},
{
"type": "text",
"text": "By the end of this stage, we have motion $(m)$ encoder $(\\mathcal{E}_m)$ , decoder $(\\mathcal{D}_m)$ and codebooks $(\\text{Quant}^m(\\cdot))$ for hands and body. In the next section (Section 3.2), we show how this discretized motion of hands and body guides speech, semantics and speaker identity-driven generation to produce realized co-speech gestures.",
"bbox": [
511,
746,
883,
837
],
"page_idx": 2
},
{
"type": "text",
"text": "3.2. Stage 2: Speech and Identity Driven Semantic Gesture Generator",
"text_level": 1,
"bbox": [
511,
844,
880,
873
],
"page_idx": 2
},
{
"type": "text",
"text": "This stage focuses on generating gestures conditioned on three inputs: speech embeddings, text-based seman",
"bbox": [
511,
881,
883,
912
],
"page_idx": 2
},
{
"type": "page_number",
"text": "13965",
"bbox": [
480,
950,
517,
962
],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/f390537d0780d93a79329de2bb3a674c27d7e5f1c1a52b19a07db673cc813312.jpg",
"image_caption": [
"Figure 3. SemGes employs three training pathways: (1) Global semantic coherence, which minimizes latent disparities between gesture and text encoders; (2) Multimodal Quantization learning, where integrated multimodal representation codes are aligned with quantized motion to decode them into hand and body movements; and (3) Semantic relevance learning, which emphasizes semantic gestures."
],
"image_footnote": [],
"bbox": [
147,
98,
857,
268
],
"page_idx": 3
},
{
"type": "text",
"text": "tic embeddings, and speaker identity. As illustrated in Figure 3, the second-stage architecture has three main modules, which we elaborate on in the following subsections.",
"bbox": [
111,
359,
483,
419
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/79def40ab4ae068b8c0a3781378d3b936df9ae93d335c8b434390dd14871c50a.jpg",
"image_caption": [
"Figure 4. Semantic Coherence Embedding Learning."
],
"image_footnote": [],
"bbox": [
179,
446,
428,
627
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2.1. Semantic Coherence Embedding Learning",
"text_level": 1,
"bbox": [
112,
683,
455,
699
],
"page_idx": 3
},
{
"type": "text",
"text": "To align text-based semantics with motion embeddings, we introduce a shared embedding space for both motion priors and speech transcripts. Specifically, we embed word tokens using a pre-trained FastText model [8], then feed these embeddings into a trainable text-based semantic encoder $\\mathcal{E}_s$ . At the same time, we use the pre-trained motion encoder $\\mathcal{E}_m$ from Stage 1 to encode ground-truth gesture sequences. Thus, for a batch of paired (gesture, transcript) samples, we get:",
"bbox": [
111,
702,
483,
838
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {Z} ^ {S} = \\mathcal {E} _ {s} (S), \\quad \\mathcal {Z} ^ {h} = \\mathcal {E} _ {m} ^ {h} (G ^ {h}), \\quad \\mathcal {Z} ^ {b} = \\mathcal {E} _ {m} ^ {b} (G ^ {b}), \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [
161,
844,
482,
859
],
"page_idx": 3
},
{
"type": "text",
"text": "where $S$ is the tokenized speech transcript, and $G^{h}$ and $G^{b}$ correspond to the ground-truth hand gesture sequence and body gesture sequence, respectively. $\\mathcal{Z}^h$ and",
"bbox": [
111,
866,
483,
912
],
"page_idx": 3
},
{
"type": "text",
"text": "$\\mathcal{Z}^b$ represent the hand and body ground-truth motion encodings from Stage 1, and $\\mathcal{Z}^s$ denotes the text-based semantic encoder output.",
"bbox": [
511,
359,
882,
405
],
"page_idx": 3
},
{
"type": "text",
"text": "Semantic Coherence Loss. We maximize the similarity of correct (gesture, transcript) pairs and minimize it for mismatched pairs, enforcing semantic coherence. This aligns gestures and textual semantics in a common space while keeping $\\mathcal{E}_m$ frozen, as illustrated in Figure 4. We impose the semantic coherence constraint separately on both hand and body movements to align gestures with transcripts in the shared embedding space. Specifically, we introduce two distinct cosine similarity losses: one between the text encoder output and the hand motion latent encoding and another between the text encoder output and the body motion latent encoding. Formally, we minimize:",
"bbox": [
511,
425,
883,
621
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {s e m a n t i c - c o h e r e n c e}} = 1 - \\cos \\left(\\mathcal {Z} ^ {h}, \\mathcal {Z} ^ {s}\\right) + 1 - \\cos \\left(\\mathcal {Z} ^ {b}, \\mathcal {Z} ^ {s}\\right), \\tag {6}\n$$\n",
"text_format": "latex",
"bbox": [
529,
645,
883,
662
],
"page_idx": 3
},
{
"type": "text",
"text": "where the function $\\cos (\\cdot ,\\cdot)$ measures cosine similarity.",
"bbox": [
511,
672,
875,
688
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2.2. Crossmodal Integration",
"text_level": 1,
"bbox": [
511,
696,
725,
710
],
"page_idx": 3
},
{
"type": "text",
"text": "SemGes supports multi-modal inputs in the second training stage by combining audio features and speaker identity with semantic text embeddings using a Transformer encoder with self and cross-attention layers (see Figure 3). We begin by extracting HuBERT features [20] from raw speech (keeping the HuBERT encoder frozen). We concatenate audio features $\\mathcal{Z}^a$ and speaker embeddings $\\mathcal{Z}^i$ , resulting in $\\mathcal{Z}^r$ , which we feed into a self-attention layer.",
"bbox": [
511,
715,
883,
849
],
"page_idx": 3
},
{
"type": "text",
"text": "Next, we use a cross-attention layer that takes $\\mathcal{Z}^r$ as the query and the motion-aligned text-based semantic features $\\mathcal{Z}^s$ as the key-value pair. The final hidden representation $\\mathcal{Z}^f$ serves as the multimodal latent code",
"bbox": [
511,
851,
883,
912
],
"page_idx": 3
},
{
"type": "page_number",
"text": "13966",
"bbox": [
480,
950,
519,
963
],
"page_idx": 3
},
{
"type": "text",
"text": "that drives gesture synthesis when passed to our vector quantization and VQ-VAE-based motion decoder, which is learned in our first stage (see the yellow box in Figure 3).",
"bbox": [
111,
90,
483,
151
],
"page_idx": 4
},
{
"type": "text",
"text": "Multimodal Quantization Consistency Loss. SemGes quantizes the multimodal latent code using separate hand and body codebooks. To align this code with the ground-truth motion latent codes, we apply independent quantization consistency losses for each component. Specifically, the quantization loss is defined as:",
"bbox": [
111,
169,
483,
273
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {q u a n t i z a t i o n}} = \\left\\| Q u a n t ^ {h} \\left(\\mathcal {Z} ^ {f}\\right) - Q u a n t ^ {h} \\left(\\mathcal {Z} ^ {h}\\right) \\right\\| ^ {2} + \\tag {7}\n$$\n",
"text_format": "latex",
"bbox": [
165,
280,
483,
303
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\left\\| Q u a n t ^ {b} \\left(\\mathcal {Z} ^ {f}\\right) - Q u a n t ^ {b} \\left(\\mathcal {Z} ^ {b}\\right) \\right\\| ^ {2} \\tag {17}\n$$\n",
"text_format": "latex",
"bbox": [
253,
303,
483,
321
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\text{Quant}^h(\\cdot)$ and $\\text{Quant}^b(\\cdot)$ denote the quantization functions for the hand and body codebooks, respectively.",
"bbox": [
111,
330,
482,
375
],
"page_idx": 4
},
{
"type": "text",
"text": "The multimodal quantization loss aligns the integrated latent code $\\mathcal{Z}^f$ with the learned motion code, a critical step since gesture synthesis is obtained through the quantized multimodal representation. Specifically, $\\mathcal{Z}^f$ is vector-quantized using separate hand and body codebooks before being decoded by their respective VQ decoders. This process ensures that both hand and body movements contribute effectively to the final output. Formally, the generated gestures are given by:",
"bbox": [
111,
377,
483,
512
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\hat {G} = \\hat {G} ^ {h} \\oplus \\hat {G} ^ {b} = \\mathcal {D} _ {m} ^ {h} \\left(\\operatorname {Q u a n t} ^ {h} \\left(\\mathcal {Z} ^ {f}\\right)\\right) \\oplus \\mathcal {D} _ {m} ^ {b} \\left(\\operatorname {Q u a n t} ^ {b} \\left(\\mathcal {Z} ^ {f}\\right)\\right), \\tag {8}\n$$\n",
"text_format": "latex",
"bbox": [
125,
518,
482,
536
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\oplus$ denotes concatenation, jointly synthesizing hand and body motions (i.e. $\\hat{G}$ ).",
"bbox": [
111,
544,
483,
575
],
"page_idx": 4
},
{
"type": "text",
"text": "3.2.3. Gesture Semantic Relevance Loss",
"text_level": 1,
"bbox": [
112,
580,
393,
594
],
"page_idx": 4
},
{
"type": "text",
"text": "To prioritize the generation of semantically meaningful gestures (e.g., iconic, metaphoric, or deictic), which are less frequent than beat gestures, we introduce a semantic relevance loss. This loss emphasizes semantic annotations while preventing over-penalization of minor deviations. Formally, it is defined as:",
"bbox": [
111,
599,
483,
690
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {s e m a n t i c - r e l e v a n c e}} = \\mathbb {E} \\left[ \\lambda \\Psi (\\mathbf {G} - \\hat {\\mathbf {G}}) \\right], \\tag {9}\n$$\n",
"text_format": "latex",
"bbox": [
191,
698,
482,
718
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\lambda$ is the annotation relevance factor, and $\\Psi(\\cdot)$ is a piecewise function that applies a quadratic penalty for small errors and a linear penalty for larger ones:",
"bbox": [
111,
724,
483,
771
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\Psi (\\mathbf {G} - \\hat {\\mathbf {G}}) = \\left\\{ \\begin{array}{l l} \\frac {1}{2} (\\mathbf {G} - \\hat {\\mathbf {G}}) ^ {2}, & \\text {i f} | \\mathbf {G} - \\hat {\\mathbf {G}} | < \\alpha , \\\\ \\alpha \\left(| \\mathbf {G} - \\hat {\\mathbf {G}} | - \\frac {1}{2} \\alpha\\right), & \\text {o t h e r w i s e}, \\end{array} \\right. \\tag {10}\n$$\n",
"text_format": "latex",
"bbox": [
140,
777,
482,
814
],
"page_idx": 4
},
{
"type": "text",
"text": "with $\\alpha = 0.01$",
"bbox": [
112,
821,
215,
834
],
"page_idx": 4
},
{
"type": "text",
"text": "Combined Objective Functions. Finally, the overall objective is:",
"bbox": [
111,
854,
483,
883
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {S e m G e s}} = \\mathcal {L} _ {\\text {s e m a n t i c - c o h e r e n c e}} + \\mathcal {L} _ {\\text {s e m a n t i c - r e l e v a n c e}} + \\mathcal {L} _ {\\text {q u a n t i z a t i o n}}, \\tag {11}\n$$\n",
"text_format": "latex",
"bbox": [
133,
893,
482,
907
],
"page_idx": 4
},
{
"type": "code",
"sub_type": "algorithm",
"code_caption": [
"Algorithm 1 Long Gesture Sequence Algorithm"
],
"code_body": "Require: Audio $\\mathcal{A}$, aligned speech transcript $S$, and speaker ID $\\mathcal{I}$ \nRequire: Pre-trained codebooks and motion decoder (Stage 1) \nEnsure: Long-sequence gesture M \n1: Partition $(\\mathcal{A},\\mathcal{S},\\mathcal{I})$ into clips $\\{(\\mathcal{A}_c,\\mathcal{S}_c,\\mathcal{I}_c)\\}_{c = 1}^C$ \n2: Compute latent representation: $\\mathcal{Z}^{f}\\gets$ Encode(A,S,I) \n3: Quantize: $\\mathcal{Z}^e\\gets$ VectorQuantize $(\\mathcal{Z}^{f})$ \n4: Decode initial clip: $\\hat{M}_1\\gets \\mathrm{Dec}(\\mathcal{Z}^e)$ \n5: for each clip $c = 2$ to $C$ do \n6: Set first 4 frames of $\\hat{M}_c$ to the last 4 frames of $\\hat{M}_{c - 1}$ \n7: Generate remaining frames of $\\hat{M}_c$ \n8: end for \n9: return $\\hat{M}$",
"bbox": [
514,
109,
883,
335
],
"page_idx": 4
},
{
"type": "text",
"text": "which jointly optimizes the model to generate gestures that are semantically coherent at both global and fine-grained levels while remaining faithful to the Stage 1 motion prior.",
"bbox": [
511,
364,
883,
426
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3. Inference of Long Gesture Sequences",
"text_level": 1,
"bbox": [
511,
435,
834,
452
],
"page_idx": 4
},
{
"type": "text",
"text": "Generating long sequences of gestures is challenging due to the need to maintain coherence and smooth transitions. Our Long-Sequence Gesture Motion algorithm (Alg. 1) addresses these challenges by partitioning the input speech, transcript, and speaker identity into aligned clips. For each clip, a multimodal latent representation is computed using our cross-modal encoder, vector-quantized via the Stage 1 codebooks, and decoded into gesture motions. Overlapping 4-frame segments between clips provides continuity, resulting in extended, naturalistic gesture sequences.",
"bbox": [
511,
458,
883,
625
],
"page_idx": 4
},
{
"type": "text",
"text": "4. Experimental Setup",
"text_level": 1,
"bbox": [
511,
638,
705,
656
],
"page_idx": 4
},
{
"type": "text",
"text": "4.1. Datasets",
"text_level": 1,
"bbox": [
511,
662,
614,
678
],
"page_idx": 4
},
{
"type": "text",
"text": "Our proposed methodology is evaluated on two benchmarks, namely, BEAT [27] and the TED expressive dataset [33]. The BEAT dataset consists of 76 hours of multimodal recordings, which include speech audio recordings, speech transcriptions, and, more importantly, motion data collected from 30 participants, leveraging Motion Capture (MOCAP) technology. The participants expressed emotions in eight distinct scenarios across four languages. The motion data contains joint rotation angles, which were designed for consistency across varying body sizes. The TED Expressive dataset [33] is segmented from TED Talk videos into smaller shots based on scene boundaries. Liu et al. [33] extracted each frame's 2D human pose using OpenPose BEAT [9]. Using these 2D pose priors, ExPose [43] was",
|
| 937 |
+
"bbox": [
|
| 938 |
+
511,
|
| 939 |
+
685,
|
| 940 |
+
883,
|
| 941 |
+
912
|
| 942 |
+
],
|
| 943 |
+
"page_idx": 4
|
| 944 |
+
},
|
| 945 |
+
{
|
| 946 |
+
"type": "page_number",
|
| 947 |
+
"text": "13967",
|
| 948 |
+
"bbox": [
|
| 949 |
+
480,
|
| 950 |
+
950,
|
| 951 |
+
517,
|
| 952 |
+
963
|
| 953 |
+
],
|
| 954 |
+
"page_idx": 4
|
| 955 |
+
},
|
| 956 |
+
{
|
| 957 |
+
"type": "table",
|
| 958 |
+
"img_path": "images/fe64ef64ce5708037abbe668334be41eacceca7f62ef9e0e84f81d5cab389ab9.jpg",
|
| 959 |
+
"table_caption": [
|
| 960 |
+
"Table 1. Comparison of SemGesGen with other methods on the BEAT and TED-Expressive datasets. For BEAT, we compare with CaMN [27], DiffGesture [59], LivelySpeaker [58], and DiffSheg [10]. The same methods are evaluated on TED-Expressive. SRGR is not applicable (denoted with $-$ ) for TED-Expressive as it does not contain annotations for semantic relevance of gestures."
|
| 961 |
+
],
|
| 962 |
+
"table_footnote": [],
|
| 963 |
+
"table_body": "<table><tr><td colspan=\"5\">BEAT</td><td colspan=\"4\">TED-Expressive</td></tr><tr><td>Method</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td><td>SRGR ↑</td><td>Method</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td></tr><tr><td>CaMN [27]</td><td>8.510</td><td>0.797</td><td>206.789</td><td>0.231</td><td>CaMN [27]</td><td>7.673</td><td>0.642</td><td>156.236</td></tr><tr><td>DiffGesture [59]</td><td>9.632</td><td>0.876</td><td>210.678</td><td>0.106</td><td>DiffGesture [59]</td><td>9.326</td><td>0.662</td><td>119.889</td></tr><tr><td>LivelySpeaker [58]</td><td>13.378</td><td>0.891</td><td>214.946</td><td>0.229</td><td>LivelySpeaker [58]</td><td>8.145</td><td>0.691</td><td>119.231</td></tr><tr><td>DiffSheg [10]</td><td>6.623</td><td>0.922</td><td>257.674</td><td>0.250</td><td>DiffSheg [10]</td><td>8.457</td><td>0.712</td><td>108.972</td></tr><tr><td>SemGes (Ours)</td><td>4.467</td><td>0.453</td><td>305.706</td><td>0.256</td><td>SemGes (Ours)</td><td>7.263</td><td>0.671</td><td>302.772</td></tr></table>",
|
| 964 |
+
"bbox": [
|
| 965 |
+
148,
|
| 966 |
+
142,
|
| 967 |
+
846,
|
| 968 |
+
258
|
| 969 |
+
],
|
| 970 |
+
"page_idx": 5
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "text",
|
| 974 |
+
"text": "employed to annotate the 3D upper body keypoints, including 13 upper body joints and 30 finger joints. Both datasets' training and validation samples are divided into 34-frame clips.",
|
| 975 |
+
"bbox": [
|
| 976 |
+
111,
|
| 977 |
+
276,
|
| 978 |
+
483,
|
| 979 |
+
338
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 5
|
| 982 |
+
},
|
| 983 |
+
{
"type": "text",
"text": "Cross-Validation. We evaluate our approach on the BEAT dataset, following the protocol in [27], where the data is randomly split into a 19:2:2 ratio for training, validation, and testing. Similarly, for the TED Expressive dataset, we adapt the protocol in [33], using a random split of 8:1:1 for training, validation, and testing.",
"bbox": [
111,
357,
483,
450
],
"page_idx": 5
},
{
"type": "text",
"text": "Implementation Details. The details of the model architectures and training are provided in Section 2 of the Supplementary Materials.",
"bbox": [
111,
468,
483,
513
],
"page_idx": 5
},
{
"type": "text",
"text": "4.2. State-of-the-Art Baselines",
"text_level": 1,
"bbox": [
112,
523,
348,
539
],
"page_idx": 5
},
{
"type": "text",
"text": "We compare SemGes against a set of representative state-of-the-art models that focus on semantic-driven gesture generation. The selected models achieved strong performance on the BEAT and TED-Expressive datasets, making them suitable for a fair comparison with our method. The selected models are as follows:",
"bbox": [
111,
546,
483,
637
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"1. Cascaded Motion Network (CaMN) [27] is the current benchmark model for the BEAT dataset. CaMN is based on LSTMs and integrates multiple input modalities, including audio, text, facial expressions, and emotion. Additionally, like SemGes, it leverages semantic relevance annotations to enhance gesture generation.",
"2. DiffSHEG [10] is a state-of-the-art diffusion-based model for real-time speech-driven holistic gesture generation. It is conditioned on noisy motion, audio, and speaker ID. DiffSHEG introduces a Fast Out-painting-based Partial Autoregressive Sampling method to efficiently generate arbitrary-length sequences in real time.",
"3. LivelySpeaker [58] generates semantically and rhythmically aware co-speech gestures by leveraging an MLP-based diffusion model. The model conditions gesture generation on text, noised motion,"
],
"bbox": [
112,
638,
483,
912
],
"page_idx": 5
},
{
"type": "text",
"text": "speaker ID, and audio to enable text-driven gesture control while incorporating global semantics.",
"bbox": [
532,
276,
882,
306
],
"page_idx": 5
},
{
"type": "text",
"text": "4. DiffGesture [59] models the diffusion and denoising processes within the gesture domain, enabling the generation of high-fidelity, audio-driven gestures conditioned on both audio and gesture inputs. Several recent studies [6, 26] have also demonstrated strong performance in this area.",
"bbox": [
511,
306,
883,
397
],
"page_idx": 5
},
{
"type": "text",
"text": "We exclude certain models from our comparison. For instance, SEEG [26] and Zhang et al. [57] rely on additional data annotations (e.g., Semantic Prompt Gallery or ChatGPT-generated annotations) that are not uniformly available. In addition, other works, such as Ao et al. [6], Pang et al. [42], Zhang et al. [57], are excluded from our analysis due to the inaccessibility of their codebases. Voß and Kopp [49] is omitted due to its high computational cost and the unavailability of annotations. Liu et al. [31, 34], Mughal et al. [36, 37], Ng et al. [38], Yi et al. [54] are excluded as they primarily focus on holistic gestures with face and mesh data, which fall outside the scope of this work. Similarly, Chhatre et al. [11], Qi et al. [44] are excluded, as their emphasis lies in emotion-driven gesture generation rather than the semantic aspects. Furthermore, Ahuja et al. [1, 2], Alexander et al. [4], Habibie et al. [17], Liu et al. [33], Sun et al. [45], Yang et al. [51], Ye et al. [52] are omitted due to their lack of relevance to semantic-driven gesture generation.",
"bbox": [
511,
400,
883,
704
],
"page_idx": 5
},
{
"type": "text",
"text": "5. Quantitative Objective Evaluations",
"text_level": 1,
"bbox": [
511,
719,
815,
736
],
"page_idx": 5
},
{
"type": "text",
"text": "Evaluation Metrics. We employ four standard objective metrics for evaluating the quality of gesture generation, namely, Fréchet Gesture Distance (FGD) [55], Beat Consistency Score (BC) [25], Diversity [24], and Semantic-Relevant Gesture Recall (SRGR) [27].",
"bbox": [
511,
744,
882,
820
],
"page_idx": 5
},
{
"type": "text",
"text": "FGD measures how closely the generated gestures resemble real motion distributions by embedding sequences into a latent space via a pre-trained autoencoder. In contrast, BC focuses on synchronization with speech, measuring the alignment between speech onsets (audio beats) and motion beats, which are identified as velocity minima in",
"bbox": [
511,
821,
883,
912
],
"page_idx": 5
},
{
"type": "page_number",
"text": "13968",
"bbox": [
480,
950,
517,
962
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/6ce76706a7dfae3d06af8432d0a430ad28cd1f590b81667879d4f45900acd19f.jpg",
"table_caption": [
"Table 2. Ablation studies evaluating the contributions of key components in SemGes on the BEAT and TED-Expressive Datasets. For BEAT, performance is measured using FGD (lower is better), BC, Diversity, and SRGR, while for TED-Expressive, SRGR is not applicable (denoted as -)."
],
"table_footnote": [],
"table_body": "<table><tr><td colspan=\"5\">BEAT</td><td colspan=\"4\">TED-Expressive</td></tr><tr><td>Model Variants</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td><td>SRGR ↑</td><td>Model Variants</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td></tr><tr><td>Baseline (VQVAE)</td><td>10.348</td><td>0.564</td><td>198.568</td><td>0.176</td><td>Baseline (VQVAE)</td><td>10.682</td><td>0.612</td><td>114.692</td></tr><tr><td>w/o Semantic Coherence Module</td><td>8.053</td><td>0.556</td><td>249.550</td><td>0.180</td><td>w/o Semantic Coherence Module</td><td>7.924</td><td>0.623</td><td>109.256</td></tr><tr><td>w/o Semantic Relevance Module</td><td>7.549</td><td>0.573</td><td>245.319</td><td>0.195</td><td>w/o Semantic Relevance Module</td><td>-</td><td>-</td><td>-</td></tr><tr><td>w/ SpeechCLIP Encoder</td><td>6.787</td><td>0.468</td><td>289.621</td><td>0.245</td><td>w/ SpeechCLIP Encoder</td><td>7.341</td><td>0.605</td><td>245.680</td></tr><tr><td>SemGes (Ours)</td><td>4.467</td><td>0.453</td><td>305.706</td><td>0.256</td><td>SemGes (Ours)</td><td>7.263</td><td>0.671</td><td>302.772</td></tr></table>",
"bbox": [
148,
141,
844,
234
],
"page_idx": 6
},
{
"type": "text",
"text": "upper-body joints (excluding fingers). Meanwhile, Diversity captures the variability of generated motions by computing the average $L1$ distance between pairs of $N$ generated clips. Finally, SRGR assesses semantic relevance by determining how well generated gestures align with the annotated semantic gestures. Further details on the objective metrics are included in the Supplementary Materials (Section 1).",
"bbox": [
109,
252,
480,
371
],
"page_idx": 6
},
{
"type": "text",
"text": "Comparisons with Other Models. Table 1 compares the performance of our approach against four baseline methods across four evaluation metrics. As highlighted in the table, SemGes outperforms the baselines in FGD, Diversity, and SRGR.",
"bbox": [
109,
396,
480,
470
],
"page_idx": 6
},
{
"type": "text",
"text": "For the BEAT dataset, our approach achieves the highest SRGR, which we attribute to the exploitation of semantic relevance information in our training objectives. In addition, our approach shows a significant improvement in FGD and Diversity, indicating a closer alignment with the ground truth gesture distribution and a broader range of generated gestures compared to the second-best baselines. The performance on the Beat Consistency (BC) metric is lower for our method. This is expected given our focus on improving semantic awareness of the generated gestures rather than optimizing for strict temporal alignment between rhythmic beat gestures and speech. In addition, the BC metric can be sensitive to rapid, jittery movements; even minor motion artefacts may be mistakenly counted as additional beats, thereby increasing the BC score artificially, a phenomenon also observed in the diffusion-based baselines, as further illustrated in our supplementary video.",
"bbox": [
109,
472,
482,
743
],
"page_idx": 6
},
{
"type": "text",
"text": "We evaluate how our model handles the trade-off between semantic and beat scores by testing the model on beat-dominant gestures (without semantic content). The results show a significantly higher Beat score (0.689) than the full dataset Beat score (0.453). This confirms rhythmic consistency in beat-focused contexts. We provide additional evaluation in the supplementary materials (Section 3) to show how the model handles difficult cases (such as noisy speech or misaligned speech).",
"bbox": [
109,
744,
482,
881
],
"page_idx": 6
},
{
"type": "text",
"text": "Note that the TED Expressive dataset lacks annotations for gesture semantic relevance, so SRGR is not",
"bbox": [
111,
882,
483,
912
],
"page_idx": 6
},
{
"type": "text",
"text": "applicable, and the semantic relevance loss was omitted during training. Nevertheless, SemGes produces diverse, naturalistic gestures on TED Expressive, outperforming baselines in FGD and Diversity metrics.",
"bbox": [
511,
252,
883,
313
],
"page_idx": 6
},
{
"type": "text",
"text": "Ablation Study. We evaluate the contributions of key components in SemGes through ablation experiments. First, we assess a baseline VQ-VAE model (Stage 1 only), which uses two stacked encoder-decoder blocks and an MLP. In this experiment, we test its ability to generate gestures, conditioned on audio, masked motion, and speaker identity. As shown in Table 2, this baseline underperforms compared to state-of-the-art methods (Table 1). This result motivates our two-stage design, in which the VQ-VAE is reserved for learning the motion latent space and Stage 2 leverages speech and identity conditioning to generate gestures.",
"bbox": [
511,
330,
883,
511
],
"page_idx": 6
},
{
"type": "text",
"text": "Next, we examine Stage 2 by removing its components: (i) the Semantic Coherence Loss, (ii) the Semantic Relevance Loss, and (iii) by replacing the HuBERT-based speech encoder with SpeechCLIP. Results in Table 2 show that removing either the Semantic Coherence or Relevance Loss degrades FGD, Diversity, and SRGR scores, highlighting their roles in aligning gesture representations with textual semantics and capturing semantic importance. In addition, replacing the speech encoder results in marginal gains. The semantic encoder is fixed as FastText, which we believe is sufficient to capture the necessary semantic information [7]. Overall, these results confirm the importance of each module in generating semantics-aware gestures.",
"bbox": [
511,
512,
883,
723
],
"page_idx": 6
},
{
"type": "text",
"text": "6. Qualitative & Subjective Evaluations",
"text_level": 1,
"bbox": [
511,
734,
848,
753
],
"page_idx": 6
},
{
"type": "text",
"text": "Visualization Comparisons. Before presenting the subjective ratings of the generated gestures, Figure 5 provides a visual comparison among the ground truth, our approach, and two baseline models. We use examples from the BEAT dataset. It is clear from the figure that our approach not only achieves better speech-gesture alignment but also produces gestures that are more naturalistic, diverse, and semantically aware. For example, while CaMN generates smooth movements, its gestures tend to be slower and less varied compared to",
"bbox": [
511,
760,
883,
912
],
"page_idx": 6
},
{
"type": "page_number",
"text": "13969",
"bbox": [
480,
950,
519,
963
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/b4028ad1da7211a158b3965e028ce1b10b9d5550fdc235e8203825ca3cebacf9.jpg",
"image_caption": [
"Figure 5. Comparisons with baselines and ground truth gestures. Compared to the baseline method, our approach generates gestures that are aligned with speech content (semantics). For instance, when the speaker says \"remix\", our method produces gestures where the character raises both hands to emphasize the word before gradually lowering them—a movement that other methods fail to achieve. Similarly, when uttering \"first\", our method generates a raised hand gesture, producing an iconic gesture."
],
"image_footnote": [],
"bbox": [
130,
89,
869,
224
],
"page_idx": 7
},
{
"type": "text",
"text": "our model. Additionally, the baseline methods show varying degrees of jitter—DiffGesture shows the highest jitter, followed by LivelySpeaker and DiffSheg, with CaMN displaying the least. Although CaMN includes semantic information, our approach strikes a more effective balance, generating gestures that align with actual motion, as also shown by the objective metrics. Based on these qualitative observations, our subsequent rating study focuses on evaluating gestures produced by the ground truth, our model, CaMN, and DiffSHEG.",
"bbox": [
109,
316,
483,
467
],
"page_idx": 7
},
{
"type": "text",
"text": "User Ratings of Generated Gestures. We conducted a user study using 40-second video clips from the BEAT test set, featuring subjects narrating six topics. Thirty native English speakers from the United Kingdom and the United States participated, with an average age of $36 \\pm 20$ years and a female-to-male ratio of approximately 2:1. Each participant evaluated 24 videos generated by the ground truth, CaMN, DiffSHEG, and our model in a study that lasted on average $27 \\pm 5$ minutes. For data quality, participants were required to pass attention verification questions, i.e., correctly answering at least two out of four questions regarding the narration topic. Participants rated the videos on a scale from 1 to 5 for three criteria: naturalness, diversity, and alignment with speech content and timing. The videos were presented in a randomized order to avoid bias. In Section 3 of the Supplementary Materials, we provide screenshots and more details on the user study and interface.",
"bbox": [
109,
488,
482,
773
],
"page_idx": 7
},
{
"type": "text",
"text": "Figure 6 shows that ground-truth gestures received an average rating of 4 across all metrics, establishing an upper bound and validating the participant survey. Our model received the highest ratings among the generated gestures, significantly outperforming CaMN and DiffSHEG in naturalness, synchronization, and diversity (indicated in Figure 6). These results show that our approach produces gestures that are more natural, better aligned with speech, and more diverse than those gener",
"bbox": [
109,
776,
482,
912
],
"page_idx": 7
},
{
"type": "text",
"text": "ated by SOTA baselines.",
"bbox": [
513,
316,
679,
330
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/e3d5c4222b692542165121d51ee5f693894562de6161cdae88f72d56ef8b033e.jpg",
"image_caption": [
"Figure 6. Average ratings of users for ground truth gestures and gestures generated through our approach, CAMN, and DiffSHEG. The bars illustrate the average user ratings across three metrics: naturalness, diversity, and alignment with speech content and timing. Statistical t-tests show that our approach received significantly higher ratings than CAMN and DiffSHEG, with $p < 0.05$."
],
"image_footnote": [],
"bbox": [
535,
347,
849,
508
],
"page_idx": 7
},
{
"type": "text",
"text": "7. Conclusion",
"text_level": 1,
"bbox": [
513,
643,
633,
660
],
"page_idx": 7
},
{
"type": "text",
"text": "We proposed SemGes, a novel two-stage approach to semantic grounding in co-speech gesture generation by integrating semantic information at both fine-grained and global levels. In the first stage, a motion prior generation module is trained using a vector-quantized variational autoencoder to produce realistic and smooth gesture motions. Building upon this model, the second stage generates gestures from speech, text-based semantics, and speaker identity while maintaining consistency between gesture semantics and co-occurring speech through semantic coherence and relevance modules. Subjective and objective evaluations show that our work achieves state-of-the-art performance across two public benchmarks, generating semantics-aware and diverse gestures. Future directions and limitations are discussed in Section 5 of the Supplementary Materials.",
"bbox": [
511,
670,
883,
912
],
"page_idx": 7
},
{
"type": "page_number",
"text": "13970",
"bbox": [
480,
950,
519,
963
],
"page_idx": 7
},
{
"type": "text",
"text": "Acknowledgement",
"text_level": 1,
"bbox": [
112,
90,
272,
107
],
"page_idx": 8
},
{
"type": "text",
"text": "The project is funded by the Max Planck Society. We thank Sachit Misra for his invaluable assistance with rendering Avatar characters. We extend our gratitude to the members of the Multimodal Language Department at Max Planck Institute for Psycholinguistics for their feedback.",
"bbox": [
111,
114,
483,
200
],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
114,
228,
209,
244
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20566-20576, 2022. 6",
"[2] Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, and Louis-Philippe Morency. Continual learning for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20893-20903, 2023. 6",
"[3] Simon Alexander, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Style-controllable speech-driven gesture synthesis using normalising flows. In Computer Graphics Forum, pages 487-496. Wiley Online Library, 2020. 2",
"[4] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audio-driven motion synthesis with diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-20, 2023. 6",
"[5] Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. Rhythmic gesticulator: Rhythm-aware co-speech gesture synthesis with hierarchical neural embeddings. ACM Transactions on Graphics (TOG), 41(6): 1-19, 2022. 2, 3",
"[6] Tenglong Ao, Zeyi Zhang, and Libin Liu. Gesture diffusion model with clip latents. ACM Transactions on Graphics (TOG), 42(4):1-18, 2023. 2, 3, 6",
"[7] Ben Athiwaratkun, Andrew Gordon Wilson, and Anima Anandkumar. Probabilistic fasttext for multi-sense word embeddings. arXiv preprint arXiv:1806.02901, 2018. 7",
"[8] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the association for computational linguistics, 5:135-146, 2017. 4",
"[9] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7291-7299, 2017. 5",
"[10] Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In CVPR, 2024. 2, 6",
"[11] Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al."
],
"bbox": [
114,
253,
483,
912
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Emotional speech-driven 3d body animation via disentangled latent diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1942-1953, 2024. 6",
"[12] Ylva Ferstl, Michael Neff, and Rachel McDonnell. Adversarial gesture generation with realistic gesture phasing. Computers & Graphics, 89:117-130, 2020. 2",
"[13] Esam Ghaleb, Bulat Khaertdinov, Wim Pouw, Marlou Rasenberg, Judith Holler, Asli Ozyurek, and Raquel Fernandez. Learning co-speech gesture representations in dialogue through contrastive learning: An intrinsic evaluation. In Proceedings of the 26th International Conference on Multimodal Interaction, pages 274-283, 2024. 1",
"[14] Esam Ghaleb, Bulat Khaertdinov, Asli Özyürek, and Raquel Fernández. I see what you mean: Co-speech gestures for reference resolution in multimodal dialogue. In Proceedings of the 63rd Conference of the Association for Computational Linguistics (ACL Findings), 2025. To appear. 1",
"[15] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5152-5161, 2022. 2",
"[16] Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pages 580-597. Springer, 2022. 2",
"[17] Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah, Simbarashe Nyatsanga, Michael Neff, and Christian Theobalt. A motion matching-based framework for controllable gesture synthesis from speech. In ACM SIGGRAPH 2022 conference proceedings, pages 1-9, 2022. 6",
"[18] Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow. Moglow: Probabilistic and controllable motion synthesis using normalising flows. ACM Transactions on Graphics (TOG), 39(6):1-14, 2020. 2",
"[19] Judith Holler and Stephen C Levinson. Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8):639-652, 2019. 1",
"[20] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29:3451-3460, 2021. 4",
"[21] Adam Kendon. Gesture units, gesture phrases and speech. In Gesture: Visible Action as Utterance, chapter 7, pages 108-126. Cambridge University Press, 2004. 1",
"[22] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 international conference on multimodal interaction, pages 242-250, 2020. 2"
],
"bbox": [
516,
92,
883,
912
],
"page_idx": 8
},
{
"type": "page_number",
"text": "13971",
"bbox": [
480,
950,
517,
962
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[23] Buyu Li, Yongchi Zhao, Shi Zhelun, and Lu Sheng. Danceformer: Music conditioned 3d dance generation with parametric motion transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1272-1279, 2022. 2",
"[24] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and Linchao Bao. Audio2gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11293-11302, 2021. 6",
"[25] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13401-13412, 2021. 6",
"[26] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. Seeg: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473-10482, 2022. 2, 6",
"[27] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Beat: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European Conference on Computer Vision, pages 612-630. Springer, 2022. 2, 3, 5, 6",
"[28] Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1144-1154, 2024. 2, 3",
"[29] Li Liu, Lufei Gao, Wentao Lei, Fengji Ma, Xiaotian Lin, and Jinting Wang. A survey on deep multi-modal learning for body language recognition and generation. arXiv preprint arXiv:2308.08849, 2023. 1",
"[30] Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, and Adriana Tapus. Human gesture recognition with a flow-based model for human robot interaction. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, pages 548-551, 2023. 2",
"[31] Pinxin Liu, Luchuan Song, Junhua Huang, Haiyang Liu, and Chenliang Xu. Gesturelsm: Latent shortcut based co-speech gesture generation with spatial-temporal modeling. arXiv preprint arXiv:2501.18898, 2025. 6",
"[32] Xian Liu, Qianyi Wu, Hang Zhou, Yuanqi Du, Wayne Wu, Dahua Lin, and Ziwei Liu. Audio-driven co-speech gesture video generation. Advances in Neural Information Processing Systems, 35:21386-21399, 2022. 2",
"[33] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, and Bolei Zhou. Learning hierarchical cross-modal association for co-speech gesture generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10462-10472, 2022. 2, 5, 6",
"[34] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated"
],
"bbox": [
114,
90,
485,
912
],
"page_idx": 9
},
{
|
| 1489 |
+
"type": "list",
|
| 1490 |
+
"sub_type": "ref_text",
|
| 1491 |
+
"list_items": [
|
| 1492 |
+
"holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1566-1576, 2024. 6",
"[35] David McNeill. Hand and mind. Advances in Visual Semiotics, 351, 1992. 1",
"[36] Muhammad Hamza Mughal, Rishabh Dabral, Ikhsanul Habibie, Lucia Donatelli, Marc Habermann, and Christian Theobalt. Convofusion: Multi-modal conversational diffusion for co-speech gesture synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1388-1398, 2024. 6",
"[37] M Hamza Mughal, Rishabh Dabral, Merel CJ Scholman, Vera Demberg, and Christian Theobalt. Retrieving semantics from the deep: an RAG solution for gesture synthesis. arXiv preprint arXiv:2412.06786, 2024. 6",
"[38] Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, and Alexander Richard. From audio to photoreal embodiment: Synthesizing humans in conversations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1001-1010, 2024. 6",
"[39] Mang Ning, Mingxiao Li, Jianlin Su, Haozhe Jia, Lanmiao Liu, Martin Beneš, Wenshuo Chen, Albert Ali Salah, and Itir Onal Ertugrul. Dctdiff: Intriguing properties of image generative modeling in the dct space. arXiv preprint arXiv:2412.15032, 2024. 2",
"[40] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, and Michael Neff. A comprehensive review of data-driven co-speech gesture generation. In Computer Graphics Forum, pages 569-596. Wiley Online Library, 2023. 1, 2",
"[41] Asli Özyürek. Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651):20130296, 2014. 1",
"[42] Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. Bodyformer: Semantics-guided 3d body gesture synthesis with transformer. ACM Transactions on Graphics (TOG), 42(4):1-12, 2023. 6",
"[43] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10975-10985, 2019. 5",
"[44] Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, et al. Weakly-supervised emotion transition learning for diverse 3d co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10424-10434, 2024. 6",
"[45] Mingyang Sun, Mengchen Zhao, Yaqing Hou, Minglei Li, Huang Xu, Songcen Xu, and Jianye Hao. Co-speech gesture synthesis by reinforcement learning with contrastive pre-trained rewards. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2331-2340, 2023. 6"
],
"bbox": [516, 92, 883, 912],
"page_idx": 9
},
{
"type": "page_number",
"text": "13972",
"bbox": [480, 950, 519, 963],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[46] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. Motionclip: Exposing human motion generation to clip space. In European Conference on Computer Vision, pages 358-374. Springer, 2022. 3",
"[47] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In The Eleventh International Conference on Learning Representations, 2022. 2",
"[48] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017. 3",
"[49] Hendric Voß and Stefan Kopp. Augmented co-speech gesture generation: Including form and meaning features to guide learning-based gesture synthesis. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, pages 1-8, 2023. 2, 6",
"[50] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. 2",
"[51] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, and Haolin Zhuang. Qpgesture: Quantization-based and phase-guided motion matching for natural speech-driven gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2321-2330, 2023. 6",
"[52] Sheng Ye, Yu-Hui Wen, Yanan Sun, Ying He, Ziyang Zhang, Yaoyuan Wang, Weihua He, and Yong-Jin Liu. Audio-driven stylized gesture generation with flow-based model. In European Conference on Computer Vision, pages 712-728. Springer, 2022. 6",
"[53] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In CVPR, 2023. 3",
"[54] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 469-480, 2023. 6",
"[55] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39(6):1-16, 2020. 6",
"[56] Pengfei Zhang, Pinxin Liu, Hyeongwoo Kim, Pablo Garrido, and Bindita Chaudhuri. Kinmo: Kinematic-aware human motion understanding and generation. arXiv preprint arXiv:2411.15472, 2024. 3",
"[57] Zeyi Zhang, Tenglong Ao, Yuyao Zhang, Qingzhe Gao, Chuan Lin, Baoquan Chen, and Libin Liu. Semantic gesticulator: Semantics-aware co-speech gesture synthesis. ACM Transactions on Graphics (TOG), 43(4):1-17, 2024. 2, 3, 6",
"[58] Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao. Livelyspeaker:"
],
"bbox": [114, 92, 482, 911],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Towards semantic-aware co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20807-20817, 2023. 2, 3, 6",
"[59] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10544-10553, 2023. 2, 3, 6"
],
"bbox": [516, 93, 883, 218],
"page_idx": 10
},
{
"type": "page_number",
"text": "13973",
"bbox": [480, 950, 517, 962],
"page_idx": 10
}
]
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/d193eee8-ef57-4a70-9609-6d91a9953312_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cafb5bef682ae8d821e531952f6236f5dff33cae1fa0c50c2440e4c6744b70e5
size 5470833
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/full.md
ADDED
@@ -0,0 +1,337 @@
# SemGes: Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning

Lanmiao Liu$^{1,2,3}$, Esam Ghaleb$^{1,2}$, Asli Özyurek$^{1,2}$, and Zerrin Yumak$^{3}$

$^{1}$ Max Planck Institute for Psycholinguistics $^{2}$ Donders Institute for Brain, Cognition and Behaviour $^{3}$ Utrecht University {lanmiao.liu, esam.ghaleb, asli.ozyurek}@mpi.nl z.yumak@uu.nl

# Abstract

Creating a virtual avatar with semantically coherent gestures that are aligned with speech is a challenging task. Existing gesture generation research has mainly focused on generating rhythmic beat gestures, neglecting the semantic context of the gestures. In this paper, we propose a novel approach for semantic grounding in co-speech gesture generation that integrates semantic information at both fine-grained and global levels. Our approach starts with learning the motion prior through a vector-quantized variational autoencoder. Built on this model, a second-stage module is applied to automatically generate gestures from speech, text-based semantics, and speaker identity; through semantic coherence and relevance modules, it ensures consistency between the semantic relevance of generated gestures and co-occurring speech semantics. Experimental results demonstrate that our approach enhances the realism and coherence of semantic gestures. Extensive experiments and user studies show that our method outperforms state-of-the-art approaches across two benchmarks in co-speech gesture generation in both objective and subjective metrics. The qualitative results of our model, code, dataset and pre-trained models can be viewed at https://semgesture.github.io/.

# 1. Introduction

Human language is inherently multimodal, with gestures and speech complementing each other to convey pragmatic and semantic information [21, 35]. Co-speech gestures are non-verbal cues that are pragmatically, semantically, and temporally related to co-occurring speech; for example, representational iconic gestures visually express the semantic content of speech and interact with spoken language [13, 14, 19, 21, 41]. A long-standing goal in Computer Vision is to create digital humans that use non-verbal cues in sync with speech. Gesture generation—synthesizing movements from co-occurring speech, masked motion, or speaker identity—has advanced to enhance AI agents' expressiveness and realism [29]. However, much of the focus has gone into generating rhythmic beat gestures with limited semantic information, leaving representational gestures that convey semantic messages (e.g., iconic) less explored [29, 40].

Figure 1. SemGes integrates audio, text-based semantics, and speaker identity to produce both contextually relevant (discourse-level) and fine-grained (local) gestures. A semantic coherence module aligns text and motion embeddings. The multimodal consistency loss synchronizes the quantized multimodal representations to match the quantized learned motion features for final speech-driven semantics-aware gesture generation. The semantic relevance loss selectively emphasizes gestures with semantic annotations.

Generating spontaneous and semantically rich gestures from speech comes with multiple challenges. First, it requires capturing both global discourse-level information and local fine-grained details (e.g., salient words) to generate speech-driven gestures that reflect the intended meaning and align with speech temporally and semantically. Second, existing methods often generate repetitive and short sequences that do not span the full range of expressive motions required for natural communication. To leverage semantics when generating gestures, researchers have attempted to align motion with speech representations at a global level, e.g., by leveraging pre-trained semantic representations such as CLIP [58] or focusing on semantically important keywords [6, 57]. Nonetheless, they often fail to (i) unify global and local semantic modelling within a single framework and (ii) exploit the relevance of the semantic information in guiding gesture generation [27]. At the same time, raw audio features and speaker identity are relevant to the timing and style of gestures. In this paper, we address these limitations by integrating speech, speech semantics, and gesturing style, exploiting semantic information at different levels.

Specifically, we propose a two-stage framework, namely SemGes, that integrates speech, text-based semantics, and speaker identity into a unified gesture-generation model (see Figure 1). In stage 1, we build a motion prior of holistic gestures (i.e., body and hands) by training a vector-quantized variational autoencoder (VQ-VAE) to learn an efficient, compositional motion latent space. This stage yields a robust motion encoder and decoder and quantized codebooks that can reconstruct naturalistic gestures while allowing the reuse of learned codebook entries. Stage 2 leverages the learned motion priors to drive gesture synthesis by fusing three modalities using a cross-modal Transformer encoder: (i) text-based semantics, (ii) raw-audio speech features, and (iii) speaker identity for style consistency. We impose a semantic coherence loss that aligns text-based embeddings with the VQ-VAE motion latent space and a semantic relevance loss that emphasizes representational gestures (e.g., iconic and metaphoric gestures). A multimodal consistency objective ensures the fused multimodal representations are compatible with the learned motion codebooks, enabling the generation of gestures that are both semantically rich and visually natural. Finally, we introduce a simple but effective long-sequence inference strategy that smoothly combines overlapping motion clips for extended durations. To summarize our contributions:

- We introduce a novel framework, SemGes, that first learns a robust VQ-VAE motion prior for body and hand gestures, and then generates gestures driven by fused speech audio, text-based semantics, and speaker identity in a cross-modal transformer.
- Our method jointly captures discourse-level context via a semantic coherence loss and fine-grained representational gestures (e.g., iconic, metaphoric) via a semantic relevance loss.
- We propose an overlap-and-combine inference algorithm that maintains smooth continuity over extended durations.
- Extensive experiments on two benchmarks, the BEAT [27] and TED Expressive [33] datasets, show that our method outperforms recent baselines in both objective metrics (e.g., Fréchet Gesture Distance (FGD), diversity, semantic alignment) and user judgments of generated gestures.

# 2. Related Work

Data-driven Co-Speech Gesture Generation. Current gesture generation approaches are based on generative deep neural networks. These approaches use advanced models such as Transformers [28], Generative Adversarial Networks [23], Normalizing Flows [18, 30], Vector-Quantized Variational Autoencoders (VQ-VAE) [16], and Denoising Diffusion Probabilistic Models [47]. In addition, researchers have explored the impact of different model inputs on the naturalness and appropriateness of generated gestures. Various modal inputs have been used, such as text [15], audio [50, 59], image [32, 39], and speaking style [3]. For a comprehensive survey, we refer to Nyatsanga et al. [40]. Although there have been significant improvements in this field, current methods fall short in generating semantically grounded gestures at a fine-grained level. In other words, while the generated motions look convincing at first glance, they do not match the meaning of the text well, or they mostly focus on beat-type gestures.

# Semantics-aware Co-Speech Gesture Generation.

A group of works focuses on semantics-aware gesture generation, where the semantic information is handled in two ways: global semantics and local semantics. Methods that focus on global semantic information [10, 22, 58] align gestures with text or audio, but they fall short in generating gesture types matching the semantic context, such as iconic, metaphoric, and deictic gestures. To capture a wider range of semantic gestures, works like [5, 6, 27] adopt local semantics-aware modelling by integrating semantically salient words into the neural network. However, these approaches often fail to ensure that the generated gestures align with both the broader audio or textual context and a combination of global and local semantics. Liang et al. [26] and Voß and Kopp [49] incorporate both global and local semantics; however, they require extensive annotations. Recently, Zhang et al. [57] employed a generative retrieval framework based on LLMs to address the sparsity problem in datasets with semantic gestures. However, they do not explicitly model the different types of gestures [27] or gesture phases [12] grounded in linguistic research. Moreover, there is still not enough understanding of the impact of different annotations and fine-grained semantics.

Substantial research [6, 28, 57, 58] has focused on two-stage latent-space generative modelling to overcome the limitations of co-speech gesture generation and to generate more naturalistic and diverse gestures.

Figure 2. We pre-train two VQ-VAEs by reconstructing body and hand motions with a dedicated codebook for each.

These approaches first learn a latent space and then model gestures probabilistically, effectively integrating the strengths of different methods in different stages. Liu et al. [28] and Zhang et al. [57] capture complex dependencies in the latent space using a VQ-VAE, while Zhi et al. [58] employs CLIP [46, 56] to align text and motion embeddings. Ao et al. [6] introduces a diffusion-based model that leverages semantic awareness, while Liu et al. [28] utilizes a transformer-based approach to generate holistic body gestures. In contrast with two-stage generative modeling approaches, end-to-end methods such as [5, 59] are prone to jittering artifacts, especially in hand-motion generation.

Our model contributes to the research line on semantics-aware co-speech gesture generation by taking into account both global and local semantics. Inspired by previous work, we employ two-stage latent-space generative modelling for high-quality motion representation. We learn semantic coherence between text and gestures globally with cosine similarity. Moreover, our model takes into account the semantic relevance of gesture types with minimally required annotations. In contrast with other semantic learning models, we focus on annotations with different gesture types grounded in linguistic research. Our work is closest to CAMN [27] in that sense; however, CAMN does not include semantic coherence learning by aligning text and gestures' latent spaces globally.

# 3. Methodology

We propose a two-stage approach that generates co-speech gestures by grounding them in raw speech, text-based semantics, and speaker identity. In Section 3.1, we introduce a VQ-VAE encoder-decoder that learns a robust motion prior. Section 3.2 details our gesture synthesis and inference pipeline based on speech, semantics, and identity.

Problem formulation. Our goal is to generate hand gestures $\pmb{G}^{h} = (g_{1}^{h},\dots,g_{T}^{h})\in \mathbb{R}^{T\times J}$ and body gestures $\pmb{G}^{b} = (g_{1}^{b},\dots,g_{T}^{b})\in \mathbb{R}^{T\times J}$, where $T$ is the number of time steps and $J$ the number of joints (e.g., 38 for hands, 9 for body). Each motion vector $g_{t}^{h}$ or $g_{t}^{b}$ is encoded in a Rot6D representation, capturing joint rotations at time $t$.

To model human motion of body and hands, we first learn a motion generator $\mathcal{M}_g$ (Stage 1), which synthesizes a plausible motion sequence:

$$
\arg\min_{\mathcal{M}_g} \left\| \boldsymbol{G} - \mathcal{M}_g\left(g_1, \dots, g_T\right) \right\|. \tag{1}
$$

Next, we condition on (i) the raw input audio $\mathbf{A} = (a_{1},\ldots,a_{T})$, (ii) the speaker identity embedding $I$, and (iii) the text-based semantic embeddings of the speech $S = (s_{1},\dots,s_{T})$. Our second-stage model $\mathcal{M}_{a,s,i}$ uses these inputs to generate a latent sequence that the motion generator $\mathcal{M}_g$ then decodes into naturalistic gestures:

$$
\arg\min_{\mathcal{M}_{a,s,i}} \left\| \boldsymbol{G} - \mathcal{M}_g\left(\mathcal{M}_{a,s,i}(\boldsymbol{A}, \boldsymbol{S}, I)\right) \right\|. \tag{2}
$$

# 3.1. Stage 1: Learning Efficient Codebooks & Compositional Motion Priors

Realistic co-speech gestures require modelling the sequential motion of both body and hand joints. Rather than learning a single representation for the entire body, we adopt a compositional approach, using a discrete codebook of learned representations specific to each part (hands & body). Any gesture motion can then be represented by selecting appropriate codebook entries. Following [28, 48, 53], we employ a VQ-VAE architecture (see Fig. 2) with encoder $\mathcal{E}_m$ and decoder $\mathcal{D}_m$. Given hand motion $G^h \in \mathbb{R}^{T \times J}$ and body motion $G^b \in \mathbb{R}^{T \times J}$, the encoder produces latent vectors $\hat{z}^h$ and $\hat{z}^b$, which are quantized by selecting the nearest entries in the codebooks. Formally,

$$
\mathbf{q}(\hat{\boldsymbol{z}}) = \arg\min_{z^{i} \in \mathcal{Z}} \| \hat{z}^{j} - z^{i} \|, \tag{3}
$$

where $z^i$ are the learned codebook entries, and $\hat{z}^j$ denotes an element of the latent vector for either hand or body. We train the VQ-VAE via a straight-through gradient estimator, minimizing:

$$
\begin{array}{l} \mathcal{L}_{\text{VQ-VAE}} = \left\| \mathbf{g} - \hat{\mathbf{g}} \right\|^{2} + \left\| \dot{\mathbf{g}} - \hat{\dot{\mathbf{g}}} \right\|^{2} + \left\| \ddot{\mathbf{g}} - \hat{\ddot{\mathbf{g}}} \right\|^{2} \tag{4} \\ + \left\| \operatorname{sg}[\mathbf{E}(\mathbf{g})] - \mathbf{q}(\hat{z}) \right\|^{2} + \left\| \mathbf{E}(\mathbf{g}) - \operatorname{sg}[\mathbf{q}(\hat{z})] \right\|^{2}, \end{array}
$$

where the first three terms reconstruct joint positions, velocities, and accelerations, and the last two terms implement the VQ-VAE commitment loss [48].
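The nearest-entry lookup of Eq. 3 and the two quantization terms of Eq. 4 can be sketched in a few lines of NumPy. This is an illustrative re-implementation under our own naming (`quantize`, `vqvae_commitment_terms`), not the released code; since no autograd is involved, the straight-through estimator and the stop-gradient `sg[·]` are shown by value only.

```python
import numpy as np

def quantize(z_hat, codebook):
    """Eq. 3: replace each latent vector by its nearest codebook entry (L2)."""
    # pairwise distances between T latents and K codebook entries -> (T, K)
    d = np.linalg.norm(z_hat[:, None, :] - codebook[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

def vqvae_commitment_terms(z_hat, codebook, beta=1.0):
    """The last two terms of Eq. 4, evaluated by value (sg[.] has no effect
    outside autograd): codebook loss plus beta-weighted commitment loss."""
    q, _ = quantize(z_hat, codebook)
    codebook_loss = np.mean((q - z_hat) ** 2)       # ||sg[E(g)] - q(z)||^2
    commitment_loss = beta * np.mean((z_hat - q) ** 2)  # ||E(g) - sg[q(z)]||^2
    return codebook_loss + commitment_loss
```

In a framework with autograd, the codebook term would update only the codebook and the commitment term only the encoder, with the decoder gradient copied straight through the quantizer.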

By the end of this stage, we have a motion $(m)$ encoder $(\mathcal{E}_m)$, decoder $(\mathcal{D}_m)$, and codebooks $(\text{Quant}^m(\cdot))$ for hands and body. In the next section (Section 3.2), we show how this discretized motion of hands and body guides speech-, semantics-, and speaker-identity-driven generation to produce realistic co-speech gestures.

# 3.2. Stage 2: Speech and Identity Driven Semantic Gesture Generator

This stage focuses on generating gestures conditioned on three inputs: speech embeddings, text-based semantic embeddings, and speaker identity. As illustrated in Figure 3, the second-stage architecture has three main modules, which we elaborate on in the following subsections.

Figure 3. SemGes employs three training pathways: (1) Global semantic coherence, which minimizes latent disparities between gesture and text encoders; (2) Multimodal quantization learning, where integrated multimodal representation codes are aligned with quantized motion to decode them into hand and body movements; and (3) Semantic relevance learning, which emphasizes semantic gestures.

Figure 4. Semantic Coherence Embedding Learning.

# 3.2.1. Semantic Coherence Embedding Learning

To align text-based semantics with motion embeddings, we introduce a shared embedding space for both motion priors and speech transcripts. Specifically, we embed word tokens using a pre-trained FastText model [8], then feed these embeddings into a trainable text-based semantic encoder $\mathcal{E}_s$. At the same time, we use the pre-trained motion encoder $\mathcal{E}_m$ from Stage 1 to encode ground-truth gesture sequences. Thus, for a batch of paired (gesture, transcript) samples, we get:

$$
\mathcal{Z}^{s} = \mathcal{E}_{s}(S), \quad \mathcal{Z}^{h} = \mathcal{E}_{m}^{h}(G^{h}), \quad \mathcal{Z}^{b} = \mathcal{E}_{m}^{b}(G^{b}), \tag{5}
$$

where $S$ is the tokenized speech transcript, and $G^{h}$ and $G^{b}$ correspond to the ground-truth hand gesture sequence and body gesture sequence, respectively. $\mathcal{Z}^h$ and $\mathcal{Z}^b$ represent the hand and body ground-truth motion encodings from Stage 1, and $\mathcal{Z}^s$ denotes the text-based semantic encoder output.

Semantic Coherence Loss. We maximize the similarity of correct (gesture, transcript) pairs and minimize it for mismatched pairs, enforcing semantic coherence. This aligns gestures and textual semantics in a common space while keeping $\mathcal{E}_m$ frozen, as illustrated in Figure 4. We impose the semantic coherence constraint separately on both hand and body movements to align gestures with transcripts in the shared embedding space. Specifically, we introduce two distinct cosine similarity losses: one between the text encoder output and the hand motion latent encoding and another between the text encoder output and the body motion latent encoding. Formally, we minimize:

$$
\mathcal{L}_{\text{semantic-coherence}} = 1 - \cos\left(\mathcal{Z}^{h}, \mathcal{Z}^{s}\right) + 1 - \cos\left(\mathcal{Z}^{b}, \mathcal{Z}^{s}\right), \tag{6}
$$

where the function $\cos(\cdot,\cdot)$ measures cosine similarity.
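For a single (gesture, transcript) pair, Eq. 6 reduces to two cosine-distance terms; a minimal NumPy sketch (illustrative naming, not the released code):

```python
import numpy as np

def cosine_loss(z_motion, z_text):
    """One term of Eq. 6: 1 - cos(z_motion, z_text)."""
    cos = np.dot(z_motion, z_text) / (
        np.linalg.norm(z_motion) * np.linalg.norm(z_text))
    return 1.0 - cos

def semantic_coherence_loss(z_hand, z_body, z_text):
    """Eq. 6: hand term plus body term against the shared text embedding."""
    return cosine_loss(z_hand, z_text) + cosine_loss(z_body, z_text)
```

The loss is zero when both motion embeddings point in the same direction as the text embedding and grows toward 2 per term as they become anti-aligned.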

# 3.2.2. Cross-modal Integration

SemGes supports multi-modal inputs in the second training stage by combining audio features and speaker identity with semantic text embeddings using a Transformer encoder with self- and cross-attention layers (see Figure 3). We begin by extracting HuBERT features [20] from raw speech (keeping the HuBERT encoder frozen). We concatenate audio features $\mathcal{Z}^a$ and speaker embeddings $\mathcal{Z}^i$, resulting in $\mathcal{Z}^r$, which we feed into a self-attention layer.

Next, we use a cross-attention layer that takes $\mathcal{Z}^r$ as the query and the motion-aligned text-based semantic features $\mathcal{Z}^s$ as the key-value pair. The final hidden representation $\mathcal{Z}^f$ serves as the multimodal latent code that drives gesture synthesis when passed to our vector quantization and VQ-VAE-based motion decoder, which is learned in our first stage (see the yellow box in Figure 3).
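The fusion step above can be sketched as single-head scaled dot-product cross-attention. This is a simplified illustration: the learned query/key/value projections, multi-head structure, and the preceding self-attention layer are omitted, and the array names (`Z_r`, `Z_s`, `Z_f`) are our own shorthand.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value, d_k):
    """Scaled dot-product cross-attention: the fused audio+identity
    sequence (query) attends over the text-based semantic features."""
    scores = query @ key_value.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ key_value

T, d = 4, 8
rng = np.random.default_rng(0)
Z_r = rng.standard_normal((T, d))  # concatenated audio + speaker identity
Z_s = rng.standard_normal((T, d))  # motion-aligned text-based semantics
Z_f = cross_attention(Z_r, Z_s, d)  # multimodal latent code, shape (T, d)
```

Each output row of `Z_f` is a convex combination of the semantic features, weighted by how strongly the corresponding audio/identity frame attends to each text position.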

Multimodal Quantization Consistency Loss. SemGes quantizes the multimodal latent code using separate hand and body codebooks. To align this code with the ground-truth motion latent codes, we apply independent quantization consistency losses for each component. Specifically, the quantization loss is defined as:

$$
\mathcal{L}_{\text{quantization}} = \left\| \mathrm{Quant}^{h}\left(\mathcal{Z}^{f}\right) - \mathrm{Quant}^{h}\left(\mathcal{Z}^{h}\right) \right\|^{2} + \left\| \mathrm{Quant}^{b}\left(\mathcal{Z}^{f}\right) - \mathrm{Quant}^{b}\left(\mathcal{Z}^{b}\right) \right\|^{2}, \tag{7}
$$

where $\text{Quant}^h(\cdot)$ and $\text{Quant}^b(\cdot)$ denote the quantization functions for the hand and body codebooks, respectively.

The multimodal quantization loss aligns the integrated latent code $\mathcal{Z}^f$ with the learned motion code, a critical step since gesture synthesis is obtained through the quantized multimodal representation. Specifically, $\mathcal{Z}^f$ is vector-quantized using separate hand and body codebooks before being decoded by their respective VQ decoders. This process ensures that both hand and body movements contribute effectively to the final output. Formally, the generated gestures are given by:

$$
\hat{G} = \hat{G}^{h} \oplus \hat{G}^{b} = \mathcal{D}_{m}^{h}\left(\operatorname{Quant}^{h}\left(\mathcal{Z}^{f}\right)\right) \oplus \mathcal{D}_{m}^{b}\left(\operatorname{Quant}^{b}\left(\mathcal{Z}^{f}\right)\right), \tag{8}
$$

where $\oplus$ denotes concatenation, jointly synthesizing hand and body motions (i.e., $\hat{G}$).
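Under the same conventions, the consistency objective of Eq. 7 can be sketched as follows, with `quantize` standing in for the part-specific $\text{Quant}(\cdot)$ lookup (an illustrative NumPy re-implementation, not the released code):

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-entry lookup into a part-specific codebook."""
    idx = np.linalg.norm(z[:, None, :] - codebook[None, :, :],
                         axis=-1).argmin(axis=1)
    return codebook[idx]

def quantization_consistency_loss(z_f, z_h, z_b, cb_hand, cb_body):
    """Eq. 7: squared distance between the quantized multimodal code and
    the quantized ground-truth hand/body motion codes."""
    loss_hand = np.sum((quantize(z_f, cb_hand) - quantize(z_h, cb_hand)) ** 2)
    loss_body = np.sum((quantize(z_f, cb_body) - quantize(z_b, cb_body)) ** 2)
    return loss_hand + loss_body
```

At inference time, the same `quantize` calls on $\mathcal{Z}^f$ would feed the hand and body VQ decoders, whose outputs are concatenated as in Eq. 8.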
|
| 145 |
+
|
| 146 |
+
# 3.2.3. Gesture Semantic Relevance Loss
To prioritize the generation of semantically meaningful gestures (e.g., iconic, metaphoric, or deictic), which are less frequent than beat gestures, we introduce a semantic relevance loss. This loss emphasizes semantic annotations while preventing over-penalization of minor deviations. Formally, it is defined as:
$$
\mathcal{L}_{\text{semantic-relevance}} = \mathbb{E}\left[\lambda\, \Psi(\mathbf{G} - \hat{\mathbf{G}})\right], \tag{9}
$$
where $\lambda$ is the annotation relevance factor, and $\Psi(\cdot)$ is a piecewise function that applies a quadratic penalty for small errors and a linear penalty for larger ones:
$$
\Psi(\mathbf{G} - \hat{\mathbf{G}}) = \begin{cases} \frac{1}{2}(\mathbf{G} - \hat{\mathbf{G}})^{2}, & \text{if } |\mathbf{G} - \hat{\mathbf{G}}| < \alpha, \\ \alpha\left(|\mathbf{G} - \hat{\mathbf{G}}| - \frac{1}{2}\alpha\right), & \text{otherwise}, \end{cases} \tag{10}
$$
with $\alpha = 0.01$.
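The piecewise penalty of Eq. (10) is a Huber-style function: quadratic inside $|\mathbf{G}-\hat{\mathbf{G}}| < \alpha$ and linear outside, so small pose deviations are not over-penalized. A minimal elementwise sketch:

```python
import numpy as np

def psi(err, alpha=0.01):
    """Piecewise penalty of Eq. (10): quadratic for |err| < alpha, linear beyond.

    Applied elementwise over the pose error; the two branches meet at
    |err| = alpha, so the penalty is continuous there.
    """
    a = np.abs(err)
    return np.where(a < alpha, 0.5 * err ** 2, alpha * (a - 0.5 * alpha))

small = float(psi(np.array([0.005]))[0])  # quadratic branch: 0.5 * 0.005^2
large = float(psi(np.array([0.5]))[0])    # linear branch: 0.01 * (0.5 - 0.005)
```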
Combined Objective Functions. Finally, the overall objective is:
$$
\mathcal{L}_{\text{SemGes}} = \mathcal{L}_{\text{semantic-coherence}} + \mathcal{L}_{\text{semantic-relevance}} + \mathcal{L}_{\text{quantization}}, \tag{11}
$$
Algorithm 1 Long Gesture Sequence Algorithm

Require: Audio $\mathcal{A}$, aligned speech transcript $\mathcal{S}$, and speaker ID $\mathcal{I}$; pre-trained codebooks and motion decoder (Stage 1)

Ensure: Long-sequence gesture $\hat{M}$

1: Partition $(\mathcal{A},\mathcal{S},\mathcal{I})$ into clips $\{(\mathcal{A}_c,\mathcal{S}_c,\mathcal{I}_c)\}_{c = 1}^C$

2: Compute latent representation: $\mathcal{Z}^{f}\gets \mathrm{Encode}(\mathcal{A},\mathcal{S},\mathcal{I})$

3: Quantize: $\mathcal{Z}^e\gets \mathrm{VectorQuantize}(\mathcal{Z}^{f})$

4: Decode initial clip: $\hat{M}_1\gets \mathrm{Dec}(\mathcal{Z}^e)$

5: for each clip $c = 2$ to $C$ do

6: Set first 4 frames of $\hat{M}_c$ to the last 4 frames of $\hat{M}_{c-1}$

7: Generate remaining frames of $\hat{M}_c$

8: end for

9: return $\hat{M}$
which jointly optimizes the model to generate gestures that are semantically coherent at both global and fine-grained levels while remaining faithful to the Stage 1 motion prior.
# 3.3. Inference of Long Gesture Sequences
Generating long sequences of gestures is challenging due to the need to maintain coherence and smooth transitions. Our Long-Sequence Gesture Motion algorithm (Alg. 1) addresses these challenges by partitioning the input speech, transcript, and speaker identity into aligned clips. For each clip, a multimodal latent representation is computed using our cross-modal encoder, vector-quantized via the Stage 1 codebooks, and decoded into gesture motions. Overlapping 4-frame segments between clips provide continuity, resulting in extended, naturalistic gesture sequences.
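A minimal sketch of the seam handling in Algorithm 1, assuming a fixed 4-frame overlap. In the actual pipeline the copied seam frames condition the decoder when generating the rest of each clip; here the overwrite simply marks where clips join, and the duplicated seam frames are dropped from the output. The `stitch` helper and clip shapes are illustrative.

```python
import numpy as np

OVERLAP = 4  # frames shared between consecutive clips (Alg. 1, line 6)

def stitch(clips):
    """Concatenate decoded clips, reusing the last OVERLAP frames of each
    clip as the first frames of the next one.

    clips: list of (T, J) motion arrays, one per decoded clip.
    """
    out = [clips[0]]
    for clip in clips[1:]:
        clip = clip.copy()
        clip[:OVERLAP] = out[-1][-OVERLAP:]  # copy seam frames from previous clip
        out.append(clip[OVERLAP:])           # keep only the newly generated frames
    return np.concatenate(out, axis=0)

# Three toy 10-frame clips -> 10 + 2 * (10 - 4) = 22 output frames.
clips = [np.full((10, 3), float(c)) for c in range(3)]
motion = stitch(clips)
```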
# 4. Experimental Setup
# 4.1. Datasets
Our proposed methodology is evaluated on two benchmarks, namely, BEAT [27] and the TED Expressive dataset [33]. The BEAT dataset consists of 76 hours of multimodal recordings, which include speech audio recordings, speech transcriptions, and, more importantly, motion data collected from 30 participants, leveraging Motion Capture (MOCAP) technology. The participants expressed emotions in eight distinct scenarios across four languages. The motion data contains joint rotation angles, which were designed for consistency across varying body sizes. The TED Expressive dataset [33] is segmented from TED Talk videos into smaller shots based on scene boundaries. Liu et al. [33] extracted each frame's 2D human pose using OpenPose [9]. Using these 2D pose priors, ExPose [43] was
Table 1. Comparison of SemGes with other methods on the BEAT and TED-Expressive datasets. For BEAT, we compare with CaMN [27], DiffGesture [59], LivelySpeaker [58], and DiffSheg [10]. The same methods are evaluated on TED-Expressive. SRGR is not applicable (denoted with $-$ ) for TED-Expressive as it does not contain annotations for semantic relevance of gestures.
<table><tr><td colspan="5">BEAT</td><td colspan="4">TED-Expressive</td></tr><tr><td>Method</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td><td>SRGR ↑</td><td>Method</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td></tr><tr><td>CaMN [27]</td><td>8.510</td><td>0.797</td><td>206.789</td><td>0.231</td><td>CaMN [27]</td><td>7.673</td><td>0.642</td><td>156.236</td></tr><tr><td>DiffGesture [59]</td><td>9.632</td><td>0.876</td><td>210.678</td><td>0.106</td><td>DiffGesture [59]</td><td>9.326</td><td>0.662</td><td>119.889</td></tr><tr><td>LivelySpeaker [58]</td><td>13.378</td><td>0.891</td><td>214.946</td><td>0.229</td><td>LivelySpeaker [58]</td><td>8.145</td><td>0.691</td><td>119.231</td></tr><tr><td>DiffSheg [10]</td><td>6.623</td><td>0.922</td><td>257.674</td><td>0.250</td><td>DiffSheg [10]</td><td>8.457</td><td>0.712</td><td>108.972</td></tr><tr><td>SemGes (Ours)</td><td>4.467</td><td>0.453</td><td>305.706</td><td>0.256</td><td>SemGes (Ours)</td><td>7.263</td><td>0.671</td><td>302.772</td></tr></table>
employed to annotate the 3D upper body keypoints, including 13 upper body joints and 30 finger joints. Both datasets' training and validation samples are divided into 34-frame clips.
Cross-Validation. We evaluate our approach on the BEAT dataset, following the protocol in [27], where the data is randomly split into a 19:2:2 ratio for training, validation, and testing. Similarly, for the TED Expressive dataset, we adopt the protocol in [33], using a random split of 8:1:1 for training, validation, and testing.
Implementation Details. The details of the model architectures and training are provided in Section 2 of the Supplementary Materials.
# 4.2. State-of-the-Art Baselines
We compare SemGes against a set of representative state-of-the-art models that focus on semantic-driven gesture generation. The selected models achieved strong performance on the BEAT and TED-Expressive datasets, making them suitable for a fair comparison with our method. The selected models are as follows:
1. Cascaded Motion Network (CaMN) [27] is the current benchmark model for the BEAT dataset. CaMN is based on LSTMs and integrates multiple input modalities, including audio, text, facial expressions, and emotion. Additionally, like SemGes, it leverages semantic relevance annotations to enhance gesture generation.
2. DiffSHEG [10] is a state-of-the-art diffusion-based model for real-time speech-driven holistic gesture generation. It is conditioned on noisy motion, audio, and speaker ID. DiffSHEG introduces a Fast Out-painting-based Partial Autoregressive Sampling method to efficiently generate arbitrary-length sequences in real time.
3. LivelySpeaker [58] generates semantically and rhythmically aware co-speech gestures by leveraging an MLP-based diffusion model. The model conditions gesture generation on text, noised motion, speaker ID, and audio to enable text-driven gesture control while incorporating global semantics.
4. DiffGesture [59] models the diffusion and denoising processes within the gesture domain, enabling the generation of high-fidelity, audio-driven gestures conditioned on both audio and gesture inputs. Several recent studies [6, 26] have also demonstrated strong performance in this area.
We exclude certain models from our comparison. For instance, SEEG [26] and Zhang et al. [57] rely on additional data annotations (e.g., Semantic Prompt Gallery or ChatGPT-generated annotations) that are not uniformly available. In addition, other works, such as those of Ao et al. [6], Pang et al. [42], and Zhang et al. [57], are excluded from our analysis due to the inaccessibility of their codebases. Voß and Kopp [49] is omitted due to its high computational cost and the unavailability of annotations. Liu et al. [31, 34], Mughal et al. [36, 37], Ng et al. [38], and Yi et al. [54] are excluded as they primarily focus on holistic gestures with face and mesh data, which fall outside the scope of this work. Similarly, Chhatre et al. [11] and Qi et al. [44] are excluded, as their emphasis lies in emotion-driven gesture generation rather than the semantic aspects. Furthermore, Ahuja et al. [1, 2], Alexanderson et al. [4], Habibie et al. [17], Liu et al. [33], Sun et al. [45], Yang et al. [51], and Ye et al. [52] are omitted due to their lack of relevance to semantic-driven gesture generation.
# 5. Quantitative Objective Evaluations
Evaluation Metrics. We employ four standard objective metrics for evaluating the quality of gesture generation, namely, Fréchet Gesture Distance (FGD) [55], Beat Consistency Score (BC) [25], Diversity [24], and Semantic-Relevant Gesture Recall (SRGR) [27].
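Of these metrics, FGD admits a compact sketch: it is the Fréchet distance between Gaussian fits of real and generated feature sets. The version below assumes diagonal covariances (so the matrix square root in the trace term reduces to elementwise standard deviations) and uses random features in place of the pre-trained autoencoder embeddings; it is an illustration of the idea, not the benchmark implementation.

```python
import numpy as np

def fgd_diagonal(feats_real, feats_gen):
    """Frechet distance between Gaussian fits of two feature sets.

    Assumes diagonal covariances: d^2 = ||mu_r - mu_g||^2 + sum (sd_r - sd_g)^2.
    """
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    sd_r, sd_g = feats_real.std(0), feats_gen.std(0)
    return float(((mu_r - mu_g) ** 2).sum() + ((sd_r - sd_g) ** 2).sum())

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, size=(500, 16))  # stand-in for real-motion features
good = rng.normal(0.0, 1.0, size=(500, 16))  # matches the real distribution
bad = rng.normal(2.0, 1.0, size=(500, 16))   # shifted distribution
# fgd_diagonal(real, good) stays near 0; fgd_diagonal(real, bad) is much larger.
```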
FGD measures how the generated gestures resemble real motion distributions by embedding sequences into a latent space via a pre-trained autoencoder. In contrast, BC focuses on synchronization with speech, measuring the alignment between speech onsets (audio beats) and motion beats, which are identified as velocity minima in
Table 2. Ablation studies evaluating the contributions of key components in SemGes on the BEAT and TED-Expressive Datasets. For BEAT, performance is measured using FGD (lower is better), BC, Diversity, and SRGR, while for TED-Expressive, SRGR is not applicable (denoted as -).
<table><tr><td colspan="5">BEAT</td><td colspan="4">TED-Expressive</td></tr><tr><td>Model Variants</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td><td>SRGR ↑</td><td>Model Variants</td><td>FGD ↓</td><td>BC ↑</td><td>Diversity ↑</td></tr><tr><td>Baseline (VQVAE)</td><td>10.348</td><td>0.564</td><td>198.568</td><td>0.176</td><td>Baseline (VQVAE)</td><td>10.682</td><td>0.612</td><td>114.692</td></tr><tr><td>w/o Semantic Coherence Module</td><td>8.053</td><td>0.556</td><td>249.550</td><td>0.180</td><td>w/o Semantic Coherence Module</td><td>7.924</td><td>0.623</td><td>109.256</td></tr><tr><td>w/o Semantic Relevance Module</td><td>7.549</td><td>0.573</td><td>245.319</td><td>0.195</td><td>w/o Semantic Relevance Module</td><td>-</td><td>-</td><td>-</td></tr><tr><td>w/ SpeechCLIP Encoder</td><td>6.787</td><td>0.468</td><td>289.621</td><td>0.245</td><td>w/ SpeechCLIP Encoder</td><td>7.341</td><td>0.605</td><td>245.680</td></tr><tr><td>SemGes (Ours)</td><td>4.467</td><td>0.453</td><td>305.706</td><td>0.256</td><td>SemGes (Ours)</td><td>7.263</td><td>0.671</td><td>302.772</td></tr></table>
upper-body joints (excluding fingers). Meanwhile, Diversity captures the variability of generated motions by computing the average $L_1$ distance between pairs of $N$ generated clips. Finally, SRGR assesses semantic relevance by determining how well generated gestures align with the annotated semantic gestures. Further details on the objective metrics are included in the Supplementary Materials (Section 1).
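The Diversity computation described above can be sketched directly. Whether the $L_1$ distance is averaged or summed over elements is a normalization choice not fixed by the description; we average here, and the toy clips are illustrative stand-ins for generated motion.

```python
import numpy as np
from itertools import combinations

def diversity(clips):
    """Average per-element L1 distance over all pairs of generated clips.

    clips: array of shape (N, T, J) holding N generated motion clips.
    """
    pairs = combinations(range(len(clips)), 2)
    dists = [np.abs(clips[i] - clips[j]).mean() for i, j in pairs]
    return float(np.mean(dists))

# Identical clips give zero diversity; distinct clips give a positive score.
identical = np.zeros((3, 5, 2))
varied = np.stack([np.full((5, 2), v) for v in (0.0, 1.0, 3.0)])
```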
Comparisons with Other Models. Table 1 compares the performance of our approach against four baseline methods across four evaluation metrics. As highlighted in the table, SemGes outperforms the baselines in FGD, Diversity, and SRGR.
For the BEAT dataset, our approach achieves the highest SRGR, which we attribute to the exploitation of semantic relevance information in our training objectives. In addition, our approach shows a significant improvement in FGD and Diversity, indicating a closer alignment with the ground truth gesture distribution and a broader range of generated gestures compared to the second-best baselines. The performance on the Beat Consistency (BC) metric is lower for our method. This is expected given our focus on improving semantic awareness of the generated gestures rather than optimizing for strict temporal alignment between rhythmic beat gestures and speech. In addition, the BC metric can be sensitive to rapid, jittery movements; even minor motion artefacts may be mistakenly counted as additional beats, thereby increasing the BC score artificially, a phenomenon also observed in the diffusion-based baselines, as further illustrated in our supplementary video.
We evaluate how our model handles the trade-off between semantic and beat scores by testing the model on beat-dominant gestures (without semantic content). The results show a significantly higher Beat score (0.689) than the full dataset Beat score (0.453). This confirms rhythmic consistency in beat-focused contexts. We provide additional evaluation in the supplementary materials (Section 3) to show how the model handles difficult cases (such as noisy speech or misaligned speech).
Note that the TED Expressive dataset lacks annotations for gesture semantic relevance, so SRGR is not applicable, and the semantic relevance loss was omitted during training. Nevertheless, SemGes produces diverse, naturalistic gestures on TED Expressive, outperforming baselines in FGD and Diversity metrics.
Ablation Study. We evaluate the contributions of key components in SemGes through ablation experiments. First, we assess a baseline VQ-VAE model (Stage 1 only), which uses two stacked encoder-decoder blocks and an MLP. In this experiment, we test its ability to generate gestures conditioned on audio, masked motion, and speaker identity. As shown in Table 2, this baseline underperforms compared to state-of-the-art methods (Table 1). This result motivates our two-stage design, in which the VQ-VAE is reserved for learning the motion latent space and Stage 2 leverages speech and identity conditioning to generate gestures.
Next, we examine Stage 2 by removing its components: (i) the Semantic Coherence Loss, (ii) the Semantic Relevance Loss, and (iii) replacing the HuBERT-based speech encoder with SpeechCLIP. Results in Table 2 show that removing either the Semantic Coherence or Relevance Loss degrades FGD, Diversity, and SRGR scores, highlighting their roles in aligning gesture representations with textual semantics and capturing semantic importance. In addition, replacing the HuBERT speech encoder with SpeechCLIP degrades performance relative to our full model. The semantic encoder is fixed as FastText, which we believe is sufficient to capture the necessary semantic information [7]. Overall, these results confirm the importance of each module in generating semantics-aware gestures.
# 6. Qualitative & Subjective Evaluations
Visualization Comparisons. Before presenting the subjective ratings of the generated gestures, Figure 5 provides a visual comparison between the ground truth, our approach, and baseline models, using examples from the BEAT dataset. It is clear from the figure that our approach not only achieves better speech-gesture alignment but also produces gestures that are more naturalistic, diverse, and semantically aware. For example, while CaMN generates smooth movements, its gestures tend to be slower and less varied compared to

Figure 5. Comparisons with baselines and ground truth gestures. Compared to the baseline method, our approach generates gestures that are aligned with speech content (semantics). For instance, when the speaker says "remix", our method produces gestures where the character raises both hands to emphasize the word before gradually lowering them—a movement that other methods fail to achieve. Similarly, when uttering "first", our method generates a raised hand gesture, producing an iconic gesture.
our model. Additionally, the baseline methods show varying degrees of jitter—DiffGesture shows the highest jitter, followed by LivelySpeaker and DiffSheg, with CaMN displaying the least. Although CaMN includes semantic information, our approach strikes a more effective balance, generating gestures that align with actual motion, as also reflected in the objective metrics. Based on these qualitative observations, our subsequent rating study focuses on evaluating gestures produced by the ground truth, our model, CaMN, and DiffSHEG.
User Ratings of Generated Gestures. We conducted a user study using 40-second video clips from the BEAT test set, featuring subjects narrating six topics. Thirty native English speakers from the United Kingdom and the United States participated, with an average age of $36 \pm 20$ years and a female-to-male ratio of approximately 2:1. Each participant evaluated 24 videos generated by the ground truth, CaMN, DiffSHEG, and our model, in a study that lasted on average $27 \pm 5$ minutes. For data quality, participants were required to pass attention verification questions, i.e., correctly answering at least two out of four questions regarding the narration topic. Participants rated the videos on three criteria: naturalness, diversity, and alignment with speech content and timing, on a scale from 1 to 5. The videos were presented in a randomized order to avoid bias. In Section 3 of the Supplementary Materials, we provide screenshots and more details on the user study and interface.
Figure 6 shows that ground-truth gestures received an average rating of 4 across all metrics, establishing an upper bound and validating the participant survey. Our model received the highest ratings among the generated gestures, significantly outperforming CaMN and DiffSHEG in naturalness, synchronization, and diversity (indicated in Figure 6). These results demonstrate that our approach produces gestures that are more natural, better aligned with speech, and more diverse than those generated by SOTA baselines.

Figure 6. Average ratings of users for ground truth gestures and gestures generated through our approach, CaMN, and DiffSHEG. The bars illustrate the average user ratings across three metrics: naturalness, diversity, and alignment with speech content and timing. Statistical t-tests show that our approach received significantly higher ratings than CaMN and DiffSHEG, with $p < 0.05$.
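The significance test reported in the caption can be sketched as a paired t-test over per-participant ratings. The toy ratings below are illustrative, and the actual study presumably used a standard statistics package; the resulting t statistic would be compared against the t distribution with $n - 1$ degrees of freedom.

```python
import math

def paired_t_statistic(a, b):
    """t statistic for paired ratings (one rating pair per participant)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Toy 1-5 ratings from five participants for two conditions.
ours = [4, 5, 4, 4, 5]
base = [3, 4, 2, 4, 4]
t = paired_t_statistic(ours, base)  # compare with the critical t at p = 0.05, df = 4
```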
# 7. Conclusion
We proposed SemGes, a novel two-stage approach to semantic grounding in co-speech gesture generation by integrating semantic information at both fine-grained and global levels. In the first stage, a motion prior generation module is trained using a vector-quantized variational autoencoder to produce realistic and smooth gesture motions. Building upon this model, the second stage generates gestures from speech, text-based semantics, and speaker identity while maintaining consistency between gesture semantics and co-occurring speech through semantic coherence and relevance modules. Subjective and objective evaluations show that our work achieves state-of-the-art performance across two public benchmarks, generating semantics-aware and diverse gestures. Future directions and limitations are discussed in Section 5 of the Supplementary Materials.
# Acknowledgement
The project is funded by the Max Planck Society. We thank Sachit Misra for his invaluable assistance with rendering Avatar characters. We extend our gratitude to the members of the Multimodal Language Department at Max Planck Institute for Psycholinguistics for their feedback.
# References
[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20566-20576, 2022. 6
[2] Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, and Louis-Philippe Morency. Continual learning for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20893-20903, 2023. 6
[3] Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Style-controllable speech-driven gesture synthesis using normalising flows. In Computer Graphics Forum, pages 487-496. Wiley Online Library, 2020. 2
[4] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audiodriven motion synthesis with diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-20, 2023. 6
[5] Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. Rhythmic gesticulator: Rhythm-aware co-speech gesture synthesis with hierarchical neural embeddings. ACM Transactions on Graphics (TOG), 41(6): 1-19, 2022. 2, 3
[6] Tenglong Ao, Zeyi Zhang, and Libin Liu. Gesture diffusion model with clip latents. ACM Transactions on Graphics (TOG), 42(4):1-18, 2023. 2, 3, 6
[7] Ben Athiwaratkun, Andrew Gordon Wilson, and Anima Anandkumar. Probabilistic fasttext for multi-sense word embeddings. arXiv preprint arXiv:1806.02901, 2018. 7
[8] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the association for computational linguistics, 5:135-146, 2017. 4
[9] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7291–7299, 2017. 5
[10] Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In CVPR, 2024. 2, 6
[11] Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al. Emotional speech-driven 3d body animation via disentangled latent diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1942-1953, 2024. 6
[12] Ylva Ferstl, Michael Neff, and Rachel McDonnell. Adversarial gesture generation with realistic gesture phasing. Computers & Graphics, 89:117-130, 2020. 2
[13] Esam Ghaleb, Bulat Khaertdinov, Wim Pouw, Marlou Rasenberg, Judith Holler, Asli Ozyurek, and Raquel Fernandez. Learning co-speech gesture representations in dialogue through contrastive learning: An intrinsic evaluation. In Proceedings of the 26th International Conference on Multimodal Interaction, pages 274-283, 2024. 1
[14] Esam Ghaleb, Bulat Khaertdinov, Asli Özyürek, and Raquel Fernández. I see what you mean: Co-speech gestures for reference resolution in multimodal dialogue. In Proceedings of the 63rd Conference of the Association for Computational Linguistics (ACL Findings), 2025. To appear. 1
[15] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5152-5161, 2022. 2
[16] Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pages 580-597. Springer, 2022. 2
[17] Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah, Simbarashe Nyatsanga, Michael Neff, and Christian Theobalt. A motion matching-based framework for controllable gesture synthesis from speech. In ACM SIGGRAPH 2022 conference proceedings, pages 1-9, 2022. 6
[18] Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow. Moglow: Probabilistic and controllable motion synthesis using normalising flows. ACM Transactions on Graphics (TOG), 39(6):1-14, 2020. 2
[19] Judith Holler and Stephen C Levinson. Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8):639-652, 2019. 1
[20] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29:3451-3460, 2021. 4
[21] Adam Kendon. Gesture units, gesture phrases and speech. In *Gesture: Visible Action as Utterance*, chapter 7, page 108–126. Cambridge University Press, 2004. 1
[22] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 international conference on multimodal interaction, pages 242-250, 2020. 2
[23] Buyu Li, Yongchi Zhao, Shi Zhelun, and Lu Sheng. Danceformer: Music conditioned 3d dance generation with parametric motion transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1272-1279, 2022. 2
[24] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and Linchao Bao. Audio2gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11293-11302, 2021. 6
[25] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13401-13412, 2021. 6
[26] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. Seeg: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473-10482, 2022. 2, 6
[27] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Beat: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European Conference on Computer Vision, pages 612-630. Springer, 2022. 2, 3, 5, 6
[28] Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1144-1154, 2024. 2, 3
[29] Li Liu, Lufei Gao, Wentao Lei, Fengji Ma, Xiaotian Lin, and Jinting Wang. A survey on deep multi-modal learning for body language recognition and generation. arXiv preprint arXiv:2308.08849, 2023. 1
[30] Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, and Adriana Tapus. Human gesture recognition with a flow-based model for human robot interaction. In *Compan- ion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction*, pages 548–551, 2023. 2
[31] Pinxin Liu, Luchuan Song, Junhua Huang, Haiyang Liu, and Chenliang Xu. Gesturelsm: Latent shortcut based co-speech gesture generation with spatial-temporal modeling. arXiv preprint arXiv:2501.18898, 2025. 6
[32] Xian Liu, Qianyi Wu, Hang Zhou, Yuanqi Du, Wayne Wu, Dahua Lin, and Ziwei Liu. Audio-driven co-speech gesture video generation. Advances in Neural Information Processing Systems, 35:21386-21399, 2022. 2
[33] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, and Bolei Zhou. Learning hierarchical cross-modal association for co-speech gesture generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10462-10472, 2022. 2, 5, 6
[34] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1566-1576, 2024. 6
[35] David McNeill. Hand and mind. Advances in Visual Semiotics, 351, 1992. 1
[36] Muhammad Hamza Mughal, Rishabh Dabral, Ikhsanul Habibie, Lucia Donatelli, Marc Habermann, and Christian Theobalt. Convofusion: Multi-modal conversational diffusion for co-speech gesture synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1388-1398, 2024. 6
[37] M Hamza Mughal, Rishabh Dabral, Merel CJ Scholman, Vera Demberg, and Christian Theobalt. Retrieving semantics from the deep: an rag solution for gesture synthesis. arXiv preprint arXiv:2412.06786, 2024. 6
[38] Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, and Alexander Richard. From audio to photoreal embodiment: Synthesizing humans in conversations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1001-1010, 2024. 6
[39] Mang Ning, Mingxiao Li, Jianlin Su, Haozhe Jia, Lanmiao Liu, Martin Beneš, Wenshuo Chen, Albert Ali Salah, and Itir Onal Ertugrul. Dctdiff: Intriguing properties of image generative modeling in the dct space. arXiv preprint arXiv:2412.15032, 2024. 2
[40] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, and Michael Neff. A comprehensive review of data-driven co-speech gesture generation. In Computer Graphics Forum, pages 569-596. Wiley Online Library, 2023. 1, 2
[41] Asli Özyürek. Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651):20130296, 2014. 1
[42] Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, and Taku Komura. Bodyformer: Semantics-guided 3d body gesture synthesis with transformer. ACM Transactions on Graphics (TOG), 42(4):1-12, 2023. 6
[43] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975-10985, 2019. 5
[44] Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, et al. Weakly-supervised emotion transition learning for diverse 3d co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10424-10434, 2024. 6
[45] Mingyang Sun, Mengchen Zhao, Yaqing Hou, Minglei Li, Huang Xu, Songcen Xu, and Jianye Hao. Cospeech gesture synthesis by reinforcement learning with contrastive pre-trained rewards. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2331-2340, 2023. 6
[46] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. Motionclip: Exposing human motion generation to clip space. In European Conference on Computer Vision, pages 358–374. Springer, 2022. 3
|
| 323 |
+
[47] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In The Eleventh International Conference on Learning Representations, 2022. 2
|
| 324 |
+
[48] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 3
|
| 325 |
+
[49] Hendric Voß and Stefan Kopp. Augmented co-speech gesture generation: Including form and meaning features to guide learning-based gesture synthesis. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, pages 1-8, 2023. 2, 6
|
| 326 |
+
[50] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven cospeech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. 2
|
| 327 |
+
[51] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, and Haolin Zhuang. Qpgesture: Quantization-based and phase-guided motion matching for natural speech-driven gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2321-2330, 2023. 6
|
| 328 |
+
[52] Sheng Ye, Yu-Hui Wen, Yanan Sun, Ying He, Ziyang Zhang, Yaoyuan Wang, Weihua He, and Yong-Jin Liu. Audio-driven stylized gesture generation with flow-based model. In European Conference on Computer Vision, pages 712-728. Springer, 2022. 6
|
| 329 |
+
[53] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In CVPR, 2023. 3
|
| 330 |
+
[54] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yan-dong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 469-480, 2023. 6
|
| 331 |
+
[55] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39(6):1-16, 2020. 6
|
| 332 |
+
[56] Pengfei Zhang, Pinxin Liu, Hyeongwoo Kim, Pablo Garrido, and Bindita Chaudhuri. Kinmo: Kinematic-aware human motion understanding and generation. arXiv preprint arXiv:2411.15472, 2024.3
|
| 333 |
+
[57] Zeyi Zhang, Tenglong Ao, Yuyao Zhang, Qingzhe Gao, Chuan Lin, Baoquan Chen, and Libin Liu. Semantic gesticulator: Semantics-aware co-speech gesture synthesis. ACM Transactions on Graphics (TOG), 43(4):1-17, 2024. 2, 3, 6
|
| 334 |
+
[58] Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao. Livelyspeaker:
|
| 335 |
+
|
| 336 |
+
Towards semantic-aware co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20807-20817, 2023. 2, 3, 6
|
| 337 |
+
[59] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audiodriven co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10544-10553, 2023. 2, 3, 6
|
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/images.zip
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:37caf8fb614e1ea0ff177d6dbd1fc5b984bdfc144501d18b65c22b2da3e83f00
size 342436
2025/SemGes_ Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_content_list.json
ADDED
|
@@ -0,0 +1,1446 @@
[
    {
        "type": "text",
        "text": "SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis",
        "text_level": 1,
        "bbox": [163, 128, 833, 176],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/71477c7f42ee897a304a1baaff56e265bfc8ff144b3bf0b898e4065bbbead50f.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [116, 200, 875, 276],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/49b2f3acde280b07ad3698249e8c8c9186570f4d3f6fe253805a2336977286cd.jpg",
        "image_caption": [
            "Figure 1. On the left, we analyze semantic labels from the BEAT2 dataset [31] and visualize frame-level motion, revealing that semantically relevant motions are rare and sparse, aligning with real-life observations. On the right, this observation drives the design of SemTalk, which establishes a rhythm-aligned base motion and dynamically emphasizes sparse semantic gestures at the frame-level. In this example, SemTalk amplifies expressiveness on words like \"watching\" and \"just,\" enhancing gesture and torso movements. The semantic scores below are automatically generated by SemTalk to modulate semantic emphasis over time."
        ],
        "image_footnote": [],
        "bbox": [114, 282, 370, 452],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/0d782c86a891238469b65982dca38bade915fe2835c8fecb4deb855fcb404705.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [375, 282, 885, 452],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Abstract",
        "text_level": 1,
        "bbox": [246, 546, 326, 561],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "A good co-speech motion generation cannot be achieved without a careful integration of common rhythmic motion and rare yet essential semantic motion. In this work, we propose SemTalk for holistic co-speech motion generation with frame-level semantic emphasis. Our key insight is to separately learn base motions and sparse motions, and then adaptively fuse them. In particular, coarse2fine cross-attention module and rhythmic consistency learning are explored to establish rhythm-related base motion, ensuring a coherent foundation that synchronizes gestures with the speech rhythm. Subsequently, semantic emphasis learning is designed to generate semantic-aware sparse motion, focusing on frame-level semantic cues. Finally, to integrate sparse motion into the base motion and generate semantic-emphasized co-speech gestures, we further leverage a learned semantic score for adaptive synthesis. Qualitative and quantitative comparisons on two public datasets demonstrate that our method outperforms the state-of-the-art, delivering high-quality co-speech motion",
        "bbox": [88, 578, 485, 867],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "with enhanced semantic richness over a stable base motion.",
        "bbox": [513, 547, 906, 561],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1. Introduction",
        "text_level": 1,
        "bbox": [513, 613, 645, 628],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Nonverbal communication, including body language, hand gestures, and facial expressions, is integral to human interactions. It enriches conversations with contextual cues and enhances understanding among participants [6, 14, 20, 24]. This aspect is particularly significant in holistic co-speech motion generation, where the challenge lies in synthesizing gestures that align with speech rhythm while also capturing the infrequent yet critical semantic gestures [25, 38].",
        "bbox": [509, 641, 906, 762],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Most existing methods [17, 30, 48] rely heavily on rhythm-related audio features as conditions for gesture generation. While these rhythm-based features successfully align gestures with the timing of speech, they often overshadow the sparse yet expressive semantic motion (see Fig. 1). As a result, the generated motions may lack the contextual depth necessary and nuanced expressiveness for natural interaction. Some methods try to address this by incorporating semantic information like emotion, style, and",
        "bbox": [509, 763, 908, 902],
        "page_idx": 0
    },
    {
        "type": "header",
        "text": "CVF",
        "bbox": [106, 2, 181, 42],
        "page_idx": 0
    },
    {
        "type": "header",
        "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
        "bbox": [238, 0, 807, 46],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "* Equal contribution.",
        "bbox": [107, 875, 222, 887],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "† Corresponding author.",
        "bbox": [109, 887, 238, 898],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "13761",
        "bbox": [480, 944, 517, 955],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "content[10, 12, 23, 32]. However, the rhythm features tend to dominate, making the models difficult to capture sparse, semantically relevant gestures at the frame level. These rare but impactful gestures are often diluted or overlooked, highlighting the challenge of balancing rhythmic alignment with semantic expressiveness in co-speech motion generation.",
        "bbox": [89, 90, 480, 181],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "In real-world human conversations, we have an observation that while most speech-related gestures are indeed rhythm-related, only a limited number of frames involve semantically emphasized gestures. This insight suggests that co-speech motions can be decomposed into two distinct components: (i) Rhythm-related base motion. These provide a continuous, coherent base motion aligned with the speech rhythm, reflecting the natural timing of speaking. (ii) Semantic-aware sparse motion: These occur infrequently but are essential for conveying specific meanings or emphasizing key points within the conversation.",
        "bbox": [89, 184, 482, 349],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Inspired by this observation, we propose a new framework SemTalk. SemTalk models the base motion and the sparse motion separately and then fuses them adaptively to generate high-fidelity co-speech motion. Specifically, we first focus on generating rhythm-related base motion by introducing coarse2fine cross-attention module and rhythmic consistency learning. We design a hierarchical coarse2fine cross-attention module, which progressively refines the base motion cues in a coarse-to-fine manner, starting from the face and moving through the hands, upper body, and lower body. This approach ensures consistent rhythmic transmission across all body parts, enhancing coherence base motion. Moreover, we propose a local-global rhythmic consistency learning approach, which enforces alignment at both the frame and sequence levels. Locally, a frame-level consistency loss ensures that each frame is precisely synchronized with its corresponding speech features, guaranteeing accurate temporal alignment. Globally, a sequence-level consistency loss sustains a coherent rhythmic flow across the entire motion sequence, preserving consistency throughout the generated gestures.",
        "bbox": [91, 353, 482, 669],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Furthermore, we introduce semantic emphasis learning approach, which focuses on generating semantic-aware sparse motion. This approach utilizes frame-level semantic cues from textual information, high-level speech features, and emotion to identify frames that require emphasis through a learned semantic score produced by a gating strategy, i.e., sem-gate. The sem-gate is designed to dynamically activate semantic motions at key frames through two weighting methods applied on the motion condition and the loss, respectively, and semantic label guidance, allowing the model to produce motion that enhances the motion with deeper semantic meaning and contextual relevance.",
        "bbox": [89, 672, 482, 852],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Finally, the base motion and sparse motion are integrated through semantic score-based motion fusion, which adaptively amplifies expressiveness by incorporating semantic-",
        "bbox": [89, 854, 482, 900],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "aware key frames into the rhythm-related base motion. Our contributions are summarized below:",
        "bbox": [513, 90, 872, 119],
        "page_idx": 1
    },
    {
        "type": "list",
        "sub_type": "text",
        "list_items": [
            "- We propose SemTalk, a novel framework for holistic co-speech motion generation that separately models rhythm-related base motion and semantic-aware sparse motion, adaptively integrating them via a learned semantic gate.",
            "- We propose a hierarchical coarse2fine cross-attention module to refine base motion and a local-global rhythmic consistency learning to integrate latent face and hand features with rhythm-related priors, ensuring coherence and rhythmic consistency. We then propose semantic emphasis learning to generate semantic gestures at certain frames, enhancing semantic-aware sparse motion.",
            "- Experimental results show that our model surpasses state-of-the-art methods qualitatively and quantitatively, achieving higher motion quality and richer semantics."
        ],
        "bbox": [514, 121, 903, 332],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2. Related Work",
        "text_level": 1,
        "bbox": [513, 345, 653, 361],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Co-speech Gesture Generation. Co-speech gesture generation aims to produce gestures aligned with speech. Early rule-based methods [7, 19, 21, 22, 41] lacked variability, while deterministic models [5, 7, 29, 36, 46, 49] mapped speech directly to gestures. Probabilistic models, including GANs [1, 17, 40] and diffusion models [2, 10, 47, 54], introduced variability. Some methods incorporated semantic cues, such as HA2G [32] and SEEG [28], which used hierarchical networks and alignment techniques. SynTalk [8] employs prompt-based control but treats inputs as signal strengths rather than fully interpreting semantics. LivelySpeaker [53] combines rhythmic features and semantic cues using CLIP [39] but struggles to integrate gestures with rhythm and capture semantics consistently, moreover, it only provides global control, limiting fine-grained refinement. DisCo [29] disentangles content and rhythm but lacks explicit modeling of sparse semantic gestures. SemTalk addresses this by separately modeling rhythm-related base motion and semantic-aware sparse motion, integrating them adaptively through a learned semantic score.",
        "bbox": [511, 372, 903, 672],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Holistic Co-speech Motion Generation. Generating synchronized, expressive full-body motion from speech remains challenging, especially in coordinating the face, hands, and torso [9, 31, 34, 37, 48, 52]. Early methods introduced generative models to improve synchronization, but issues persisted. TalkSHOW [48] improved with VQ-VAE [42] cross-conditioning but handled facial expressions separately, causing fragmented outputs. DiffSHEG [9] and EMAGE [31] used separate encoders for expressions and gestures, but their unidirectional flow limited coherence. ProbTalk [33] leverages PQ-VAE [43] for improved body-facial synchronization but mainly relies on rhythmic cues, risking the loss of nuanced semantic gestures. Inspired by TM2D [15], which decomposes dance motion into music-related components, we separately model co-speech motion",
        "bbox": [511, 674, 903, 900],
        "page_idx": 1
    },
    {
        "type": "page_number",
        "text": "13762",
        "bbox": [480, 944, 517, 955],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "into rhythm-related and semantic-aware motion.",
        "bbox": [89, 90, 408, 104],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3. Method",
        "text_level": 1,
        "bbox": [89, 119, 181, 135],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.1. Preliminary on RVQ-VAE",
        "text_level": 1,
        "bbox": [89, 145, 328, 162],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Following [4, 16, 51], our approach uses a residual vector-quantized autoencoder (RVQ-VAE) to progressively capture complex body movements in a few players. To retain unique motion characteristics across body regions, we segment the body into four parts—face, upper body, hands, and lower body—each with a dedicated RVQ-VAE, following [3, 31]. This segmentation preserves each part's dynamics and prevents feature entanglement.",
        "bbox": [89, 167, 483, 289],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.2. Overview",
        "text_level": 1,
        "bbox": [89, 297, 200, 313],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "As shown in Figure 2, our SemTalk pipeline includes two main components: the Base Motion Blocks $f_{r}(\\cdot)$ and the Sparse Motion Blocks $f_{b}(\\cdot)$ . Given rhythmic features $\\gamma_{b}, \\gamma_{h}$ , a seed pose $\\tilde{m}$ , and a speaker ID $id$ , the Base Motion Blocks generate rhythm-aligned codes $q^{b}$ , forming the rhythmic foundation of the base motion:",
        "bbox": [89, 321, 483, 411],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nf _ {r}: \\left(\\gamma_ {b}, \\gamma_ {h}, \\tilde {m}, i d; \\theta_ {f _ {r}}\\right)\\rightarrow q ^ {b}, \\tag {1}\n$$\n",
        "text_format": "latex",
        "bbox": [184, 422, 480, 441],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "where $\\theta_{f_r}$ denotes the learnable parameters of the Base Motion Blocks. The Sparse Motion Blocks then take semantic features $\\phi_l$ , $\\phi_g$ , $\\phi_e$ , along with $\\gamma_h$ , $\\tilde{m}$ and $id$ , to produce frame-level semantic codes $q^s$ and semantic score $\\psi$ . $\\psi$ then triggers these codes only for semantically significant frames, producing a sparse motion representation:",
        "bbox": [89, 453, 483, 544],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nf _ {s}: \\left(\\phi_ {l}, \\phi_ {g}, \\phi_ {e}, \\tilde {m}, i d; \\theta_ {f _ {a}}\\right)\\rightarrow \\left(q ^ {s}, \\psi\\right), \\tag {2}\n$$\n",
        "text_format": "latex",
        "bbox": [156, 556, 480, 573],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "where $\\theta_{f_s}$ represents the Sparse Motion Block parameters. Finally, the semantic emphasis mechanism $\\mathcal{E}$ combines $q^b$ and $q^s$ , guided by $\\psi$ , to form the final motion codes $q^m$ :",
        "bbox": [89, 585, 482, 631],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nq ^ {m} = \\mathcal {E} \\left(q ^ {b}, q ^ {s}; \\psi\\right). \\tag {3}\n$$\n",
        "text_format": "latex",
        "bbox": [220, 642, 480, 659],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "The motion decoder then uses $q^{m}$ to generate the output $m'$ .",
        "bbox": [89, 671, 482, 686],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.3. Generating Rhythm-related Base Motion",
        "text_level": 1,
        "bbox": [89, 710, 441, 728],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "The Base Motion Generation (Fig. 3 a) in SemTalk establishes a rhythmically aligned foundation by leveraging both rhythmic and speaker-specific features, enhancing the naturalness and personalization of generated motion.",
        "bbox": [89, 734, 482, 794],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Rhythmic Speech Encoding. To synchronize motion with speech, SemTalk incorporates rhythmic features: beats $\\gamma_{b}$ and HuBERT features $\\gamma_{h}$ . $\\gamma_{b}$ , derived from amplitude, short-time energy [11], and onset detection, mark key rhythmic points for aligning gestures with speech. Meanwhile, $\\gamma_{h}$ , extracted by the HuBERT encoder [18], captures high-level audio traits. In addition to rhythmic features $\\gamma$ ,",
        "bbox": [89, 795, 483, 901],
        "page_idx": 2
    },
    {
        "type": "image",
        "img_path": "images/1299c229de3e4d85275790079a0d9224f4c1d716d31f7f6960ad7aecdf6289bd.jpg",
        "image_caption": [
            "Figure 2. An overview of the SemTalk pipeline. SemTalk generates holistic co-speech motion by first constructing rhythm-aligned $q^r$ in $f_r$ , guided by rhythmic consistency loss $L_{\\mathrm{Rhy}}$ . Meanwhile, $f_s$ produce frame-level semantic codes $q^s$ , activated selectively by the semantic score $\\psi$ . Finally, $q^m$ is achieved by fusing $q^r$ and $q^s$ based on $\\psi$ , with motion decoder, yielding synchronized and contextually enriched motions."
|
| 482 |
+
],
|
| 483 |
+
"image_footnote": [],
|
| 484 |
+
"bbox": [
|
| 485 |
+
526,
|
| 486 |
+
88,
|
| 487 |
+
887,
|
| 488 |
+
393
|
| 489 |
+
],
|
| 490 |
+
"page_idx": 2
|
| 491 |
+
},
|
| 492 |
+
{
|
| 493 |
+
"type": "text",
|
| 494 |
+
"text": "SemTalk uses a seed pose $\\tilde{m}$ and speaker identity $id$ to generate a personalized, rhythm-aligned latent pose $p$ . Then MLP-based Face Enhancement and Body Part-Aware modules utilize $\\gamma$ , $p$ and $id$ to obtain latent face $f_{e}$ , hands $f_{h}$ , upper body $f_{u}$ and lower body $f_{l}$ .",
|
| 495 |
+
"bbox": [
|
| 496 |
+
511,
|
| 497 |
+
526,
|
| 498 |
+
905,
|
| 499 |
+
602
|
| 500 |
+
],
|
| 501 |
+
"page_idx": 2
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"type": "text",
|
| 505 |
+
"text": "Coarse2Fine Cross-Attention Module. To facilitate the learning of base motion, we first proposed a transformer-based hierarchical Coarse2Fine Cross-Attn Module utilize $f_{e}$ , $f_{h}$ , $f_{u}$ and $f_{l}$ to obtain latent base motion $f_{b}$ . The refinement begins with $\\gamma$ for $f_{e}$ , which guides the rhythmic representation for $f_{h}$ , followed by conditioning $f_{u}$ and finally influencing $f_{l}$ . Since mouth movements closely correspond to speech syllables with minimal delay, we use the face to guide hand motions, inspired by DiffSHEG [9]. As the upper and lower body movements are less directly driven by speech and instead reflect the natural swinging of the hands and torso, we adopt cascading guidance: hands influence the upper body, which in turn drives the lower body. This structured approach, moving from the face to the hands, upper body, and lower body, ensures smooth and coherent motion propagation across the entire body.",
|
| 506 |
+
"bbox": [
|
| 507 |
+
511,
|
| 508 |
+
607,
|
| 509 |
+
906,
|
| 510 |
+
849
|
| 511 |
+
],
|
| 512 |
+
"page_idx": 2
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"type": "text",
|
| 516 |
+
"text": "Rhythmic Consistency Learning. Inspired by CoG's use of InfoNCE loss [45] to synchronize facial expressions with audio cues, our approach adopts a similar philosophy of",
|
| 517 |
+
"bbox": [
|
| 518 |
+
511,
|
| 519 |
+
854,
|
| 520 |
+
908,
|
| 521 |
+
902
|
| 522 |
+
],
|
| 523 |
+
"page_idx": 2
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"type": "page_number",
|
| 527 |
+
"text": "13763",
|
| 528 |
+
"bbox": [
|
| 529 |
+
480,
|
| 530 |
+
944,
|
| 531 |
+
517,
|
| 532 |
+
955
|
| 533 |
+
],
|
| 534 |
+
"page_idx": 2
|
| 535 |
+
},
|
| 536 |
+
{
|
| 537 |
+
"type": "image",
|
| 538 |
+
"img_path": "images/144d417bbc31eefff39cb9d8e50d3900e837a9275475db832198428d94277b60.jpg",
|
| 539 |
+
"image_caption": [
|
| 540 |
+
"Figure 3. Architecture of SemTalk. SemTalk generates holistic co-speech motion in three stages. (a) Base Motion Generation uses rhythmic consistency learning to produce rhythm-aligned codes $q^b$ , conditioned on rhythmic features $\\gamma_b, \\gamma_h$ . (b) Sparse Motion Generation employs semantic emphasis learning to generate semantic codes $q^s$ , activated by semantic score $\\psi$ . (c) Adaptively Fusion automatically combines $q^b$ and $q^s$ based on $\\psi$ to produce mixed codes $q^m$ at frame level for rhythmically aligned and contextually rich motions."
|
| 541 |
+
],
|
| 542 |
+
"image_footnote": [],
|
| 543 |
+
"bbox": [
|
| 544 |
+
133,
|
| 545 |
+
89,
|
| 546 |
+
859,
|
| 547 |
+
305
|
| 548 |
+
],
|
| 549 |
+
"page_idx": 3
|
| 550 |
+
},
|
| 551 |
+
{
|
| 552 |
+
"type": "text",
|
| 553 |
+
"text": "aligning motion and speech rhythm. It can be defined as:",
|
| 554 |
+
"bbox": [
|
| 555 |
+
89,
|
| 556 |
+
382,
|
| 557 |
+
465,
|
| 558 |
+
398
|
| 559 |
+
],
|
| 560 |
+
"page_idx": 3
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"type": "equation",
|
| 564 |
+
"text": "\n$$\n\\mathcal {L} _ {\\mathrm {R h y}} = - \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\log \\frac {\\exp \\left(\\operatorname {s i m} \\left(h \\left(f _ {i}\\right) , \\gamma_ {h} ^ {i}\\right) / \\tau\\right)}{\\sum_ {j = 1} ^ {N} \\exp \\left(\\operatorname {s i m} \\left(h \\left(f _ {i}\\right) , \\gamma_ {h} ^ {j}\\right) / \\tau\\right)}, \\tag {4}\n$$\n",
|
| 565 |
+
"text_format": "latex",
|
| 566 |
+
"bbox": [
|
| 567 |
+
99,
|
| 568 |
+
409,
|
| 569 |
+
480,
|
| 570 |
+
468
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 3
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "text",
|
| 576 |
+
"text": "where $N$ denotes the number of frames(or the batch size), $\\tau$ denotes the temperature hyperparameter, $h(\\cdot)$ is the projection head for latent motion, $f_{i}$ and $\\gamma_h^i$ are the latent motion and rhythmic features at frame (or sample) $i$ , and $\\mathrm{sim}(\\cdot)$ represents cosine similarity.",
|
| 577 |
+
"bbox": [
|
| 578 |
+
89,
|
| 579 |
+
468,
|
| 580 |
+
482,
|
| 581 |
+
544
|
| 582 |
+
],
|
| 583 |
+
"page_idx": 3
|
| 584 |
+
},
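The loss in Eq. (4) is a standard InfoNCE objective over frame pairs. A minimal NumPy sketch (an illustrative stand-in, not the authors' implementation; `h_f` plays the role of the projected latent motion $h(f_i)$ and `gamma_h` the rhythmic features):

```python
import numpy as np

def rhythmic_consistency_loss(h_f, gamma_h, tau=0.1):
    """InfoNCE loss of Eq. (4): each projected motion frame h(f_i) should be
    closest in cosine similarity to its own rhythmic feature gamma_h^i
    among all N candidates; the positives sit on the diagonal."""
    h_f = h_f / np.linalg.norm(h_f, axis=1, keepdims=True)
    g = gamma_h / np.linalg.norm(gamma_h, axis=1, keepdims=True)
    logits = (h_f @ g.T) / tau                       # (N, N) scaled cosine sims
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))
aligned = rhythmic_consistency_loss(x, x)                        # paired features match
mismatched = rhythmic_consistency_loss(x, rng.normal(size=(32, 64)))
assert aligned < mismatched
```

With matched motion/rhythm features the diagonal dominates and the loss approaches zero; random pairings give roughly log N, which is the gap that drives alignment.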
{
"type": "text",
"text": "Our approach fundamentally differs from CoG by incorporating separate local and global rhythmic consistency losses, which are applied to both the latent face $f_{e}$ and latent hands $f_{h}$, ensuring a more cohesive and synchronized representation across the entire motion sequence. This rhythmic consistency loss ensures that the motions are not only synchronized at the frame level but also maintain a consistent rhythmic flow across the entire sequence.",
"bbox": [89, 545, 482, 665],
"page_idx": 3
},
{
"type": "text",
"text": "The local frame-level consistency loss $\\mathcal{L}_{\\mathrm{Rhy}}^{(L)}$ aligns the motion features of each frame with the corresponding rhythmic cues $\\gamma_{h}$. By leveraging HuBERT features $\\gamma_{h}$ instead of basic beat features $\\gamma_{b}$, which only capture rhythmic pauses, we incorporate rich, high-level audio representations that enhance the model's ability to capture rhythm-related motion patterns and maintain temporal coherence.",
"bbox": [89, 666, 482, 772],
"page_idx": 3
},
{
"type": "text",
"text": "The global sentence-level consistency loss $\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}$ is designed to ensure rhythmic coherence at a global level. Unlike the local loss, $\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}$ reinforces rhythm consistency throughout the sequence, ensuring that the generated motion remains smooth and rhythm-aligned throughout its duration.",
"bbox": [89, 773, 482, 852],
"page_idx": 3
},
{
"type": "text",
"text": "By jointly minimizing $\\mathcal{L}_{\\mathrm{Rhy}}^{(L)}$ and $\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}$, rhythmic consistency learning enables SemTalk to produce base motions that are rhythmically aligned and temporally cohesive,",
"bbox": [89, 852, 483, 901],
"page_idx": 3
},
{
"type": "text",
"text": "forming a solid rhythm-related base motion foundation.",
"bbox": [511, 382, 879, 398],
"page_idx": 3
},
{
"type": "text",
"text": "3.4. Generating Semantic-aware Sparse Motion",
"text_level": 1,
"bbox": [511, 409, 880, 425],
"page_idx": 3
},
{
"type": "text",
"text": "The Sparse Motion Generation (Fig. 3 b) in SemTalk adds semantic-aware sparse motion to the base motion by incorporating semantic cues drawn from speech content and emotional tone. By separating rhythm and semantics, this stage enhances motion generation by emphasizing contextually meaningful motion at key semantic moments.",
"bbox": [511, 431, 903, 521],
"page_idx": 3
},
{
"type": "text",
"text": "Semantic Speech Encoding. To capture semantic cues in speech, similar to [10], Semantic Emphasis Learning combines frame-level text embeddings $\\phi_t$, sentence-level features $\\phi_g$ from the CLIP model [39], and emotion features $\\phi_e$ from the emotion2vec model [35]. Together with the audio feature $\\gamma_h$, these form a comprehensive semantic representation $f_t$ that reflects both the content and emotional undertones of speech, enabling SemTalk to activate motions that are sensitive to nuanced semantic cues.",
"bbox": [511, 522, 905, 657],
"page_idx": 3
},
{
"type": "text",
"text": "Semantic Emphasis Learning. The process begins by generating $f_{t}$, combining local and global cues from text, speech, emotion embeddings and HuBERT features $\\gamma_{h}$. Then, the sem-gate leverages multi-modal inputs to generate a semantic score, identifying frames that require enhanced semantic emphasis. The sem-gate in SemTalk refines keyframe motion by applying two forms of weighting $\\mathcal{W}$: feature weighting $\\mathcal{W}_{f}$ and loss weighting $\\mathcal{W}_{l}$. First, using $f_{t}$ and $\\gamma_{h}$, SemTalk computes a semantic score $\\psi$ that dynamically scales feature weighting, filtering back semantic features $f_{t}$ to activate frames with significant relevance and ensuring that the model emphasizes frames aligned with specific communicative intentions. Second, loss weighting is applied by supervising $\\psi$ with a classification loss $\\mathcal{L}_{cls}^{G}$ based on semantic labels, further enhancing the model's ability to identify key frames. The two weight-",
"bbox": [511, 659, 906, 901],
"page_idx": 3
},
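As an illustrative sketch of the gating idea (all names, shapes, and the logistic form are hypothetical stand-ins for the learned sem-gate, which the paper does not specify at this level), feature weighting $\mathcal{W}_f$ can be read as scaling $f_t$ by the per-frame score:

```python
import numpy as np

def sem_gate(f_t, gamma_h, w, b=0.0):
    """Toy sem-gate: map concatenated semantic (f_t) and audio (gamma_h)
    features to a per-frame semantic score psi in (0, 1)."""
    x = np.concatenate([f_t, gamma_h], axis=-1)      # (T, d_t + d_h)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))        # (T,)

def feature_weighting(f_t, psi):
    """W_f sketch: let semantic features through only where the score is high."""
    return psi[:, None] * f_t

rng = np.random.default_rng(0)
T, d_t, d_h = 6, 8, 4
f_t, gamma_h = rng.normal(size=(T, d_t)), rng.normal(size=(T, d_h))
psi = sem_gate(f_t, gamma_h, rng.normal(size=d_t + d_h))
assert psi.shape == (T,) and np.all((psi > 0) & (psi < 1))
assert feature_weighting(f_t, psi).shape == f_t.shape
```

The loss-weighting branch $\mathcal{W}_l$ would additionally supervise `psi` against frame-level semantic labels with a classification loss, which this sketch omits.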
{
"type": "page_number",
"text": "13764",
"bbox": [480, 944, 517, 955],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/c2aefad23af9a6868a75803373bfe37fae4fe3203eb7f4810dda18663f9b900a.jpg",
"image_caption": [
"Figure 4. Concept comparison with LivelySpeaker [53]. (Top) LivelySpeaker generates semantic gestures with CLIP embeddings in SAG and refines rhythm-related gestures separately using diffusion, causing potential jitter. (Bottom) SemTalk integrates text and speech, uses a semantic gate for fine-grained control, and unifies rhythm and semantics for smoother, more coherent motions."
],
"image_footnote": [],
"bbox": [101, 90, 460, 218],
"page_idx": 4
},
{
"type": "text",
"text": "ing methods allow SemTalk to selectively enhance semantic gestures while suppressing uninformative motion, leading to more expressive co-speech motion.",
"bbox": [89, 320, 482, 364],
"page_idx": 4
},
{
"type": "text",
"text": "Once $\\psi$ is established, it modulates the integration of the rhythm-aligned base motion $f_{b}$ and the sparse semantic motion $f_{s}$. Through alpha-blending, frames with high semantic relevance draw more from $f_{s}$, while others rely on $f_{b}$. The sparse motion codes $q^{s}$ are computed as:",
"bbox": [89, 364, 483, 441],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nq^{s} = \\mathrm{MLP}\\left(\\psi f_{s} + (1 - \\psi) f_{b}\\right), \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [179, 455, 482, 472],
"page_idx": 4
},
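Eq. (5) is a per-frame convex combination followed by an MLP. A NumPy sketch (the two weight matrices are placeholders for the actual learned MLP):

```python
import numpy as np

def fuse_sparse(f_s, f_b, psi, W1, W2):
    """Eq. (5): alpha-blend sparse (f_s) and base (f_b) latents with the
    per-frame semantic score psi, then apply a 2-layer ReLU MLP stand-in."""
    z = psi[:, None] * f_s + (1.0 - psi[:, None]) * f_b   # (T, D) blended latents
    return np.maximum(z @ W1, 0.0) @ W2                   # -> sparse codes q^s

rng = np.random.default_rng(0)
T, D, H = 5, 8, 16
f_s, f_b = rng.normal(size=(T, D)), rng.normal(size=(T, D))
W1, W2 = rng.normal(size=(D, H)), rng.normal(size=(H, D))
# psi = 0 everywhere: the sparse branch contributes nothing
q = fuse_sparse(f_s, f_b, np.zeros(T), W1, W2)
assert np.allclose(q, np.maximum(f_b @ W1, 0.0) @ W2)
```

Setting `psi` to all ones would symmetrically reduce the blend to the sparse branch alone.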
{
"type": "text",
"text": "To ensure cohesive propagation of semantic emphasis across body regions, we employ the Coarse2Fine Cross-Attention Module, similar to Sec. 3.3. In this stage, we focus solely on body motion, excluding facial movements, as body gestures play a more critical role in conveying semantic meaning in co-speech interactions.",
"bbox": [89, 478, 482, 568],
"page_idx": 4
},
{
"type": "text",
"text": "To foster diverse motion generation, SemTalk includes a code classification loss $\\mathcal{L}_{cls}$ and a reconstruction loss $\\mathcal{L}_{rec}$. These losses are specifically focused on frames with high semantic scores, guiding the model to prioritize the generation of sparse, meaningful gestures.",
"bbox": [89, 569, 482, 643],
"page_idx": 4
},
{
"type": "text",
"text": "Discussion. Recently, LivelySpeaker [53] introduced the Semantic-Aware Generator (SAG) and Rhythm-Aware Generator (RAG) for co-speech gesture generation, combining them through beat empowerment. While effective, it differs from SemTalk in several key ways; see Fig. 4. First, SAG generates gestures from text using CLIP embeddings, but bridging words and expressive gestures is challenging, causing jitter. SemTalk incorporates speech features (pitch, tone, emotion) alongside text and GT supervision for adaptive gestures. Second, LivelySpeaker applies global control, missing local semantic details, while SemTalk uses fine-grained, frame-level semantic control for subtle variations. Third, LivelySpeaker fuses SAG and RAG in separate latent spaces, leading to misalignment and inconsistencies. SemTalk jointly models rhythm and semantics in a unified framework, ensuring smoother transitions and coherence. We further compare SAG with our",
"bbox": [89, 643, 482, 900],
"page_idx": 4
},
{
"type": "text",
"text": "semantic gate in experiments.",
"bbox": [513, 92, 712, 106],
"page_idx": 4
},
{
"type": "text",
"text": "3.5. Semantic Score-based Motion Fusion",
"text_level": 1,
"bbox": [513, 117, 828, 132],
"page_idx": 4
},
{
"type": "text",
"text": "The Adaptive Fusion stage (Fig. 3 c) in SemTalk seamlessly integrates semantic-aware sparse motion into the rhythm-related base motion. By strategically enhancing frames based on their semantic importance, it maintains a smooth and natural motion flow across sequences. For each frame $i$, the semantic score $\\psi_i$ computed during the Sparse Motion Generation stage is compared to a threshold $\\beta$. If $\\psi_i > \\beta$, the base motion's latent code $q_i^b$ is replaced with the sparse semantic code $q_i^s$, effectively highlighting expressive gestures where they are most relevant; otherwise, $q_i = q_i^b$.",
"bbox": [511, 140, 903, 291],
"page_idx": 4
},
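The replacement rule of this stage reduces to a per-frame mask. A minimal sketch (codes shown as dense vectors rather than quantizer indices, which is a simplification):

```python
import numpy as np

def adaptive_fusion(q_b, q_s, psi, beta=0.5):
    """Sec. 3.5 fusion: take the sparse code q^s_i where psi_i > beta,
    otherwise keep the rhythm-aligned base code q^b_i."""
    return np.where((psi > beta)[:, None], q_s, q_b)

q_b = np.zeros((3, 4))                 # toy base codes
q_s = np.ones((3, 4))                  # toy sparse codes
psi = np.array([0.9, 0.2, 0.7])       # frame-level semantic scores
q_m = adaptive_fusion(q_b, q_s, psi)
assert q_m[0].sum() == 4 and q_m[1].sum() == 0 and q_m[2].sum() == 4
```

Frames 0 and 2 exceed the threshold and take sparse codes; frame 1 keeps its base code.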
{
"type": "text",
"text": "This selective replacement emphasizes semantically critical gestures while preserving the natural rhythmic base motion. By blending $q^{b}$ and $q^{s}$ based on semantic scores, SemTalk adapts to the expressive needs of the speech context while ensuring coherence. Additionally, the convolutional structure of the RVQ-VAE decoder ensures smooth transitions between frames, preserving motion continuity.",
"bbox": [511, 292, 903, 398],
"page_idx": 4
},
{
"type": "text",
"text": "4. Experiments",
"text_level": 1,
"bbox": [513, 412, 645, 430],
"page_idx": 4
},
{
"type": "text",
"text": "4.1. Experimental Setup",
"text_level": 1,
"bbox": [513, 439, 702, 455],
"page_idx": 4
},
{
"type": "text",
"text": "Datasets. For training and evaluation, we use two datasets: BEAT2 and SHOW. BEAT2, introduced in EMAGE [31], extends BEAT [30] with 76 hours of data from 30 speakers, standardized into a mesh representation with paired audio, text, and frame-level semantic labels. We follow [31] and use the BEAT2-standard subset with an $85\\% / 7.5\\% / 7.5\\%$ train/val/test split. SHOW [48] includes 26.9 hours of high-quality talk show videos with 3D body meshes at 30 fps. Since it lacks frame-level semantic labels, we use the sem-gate from SemTalk, pre-trained on BEAT2, to generate them. Following [48], we select video clips longer than 10 seconds and split the data $80\\% / 10\\% / 10\\%$ for train/val/test.",
"bbox": [511, 460, 906, 643],
"page_idx": 4
},
{
"type": "text",
"text": "Implementation Details. Our model is trained on a single NVIDIA A100 GPU for 200 epochs with a batch size of 64. We use RVQ-VAE [42], downscaling by 4. The residual quantization has 6 layers, a codebook size of 256 and a dropout rate of 0.2. We use five transformer layers to predict the codes of the last five quantization layers. In Base Motion Learning, $\\tau = 0.1$; in Sparse Motion Learning, $\\beta = 0.5$, set empirically. Training uses ADAM with a 1e-4 learning rate. Following [31], we start with a 4-frame seed pose, gradually increasing the proportion of masked frames from 0 to $40\\%$ over 120 epochs.",
"bbox": [511, 643, 908, 794],
"page_idx": 4
},
{
"type": "text",
"text": "Metrics. We evaluate generated body gestures using FGD [50] to measure distributional alignment with GT, reflecting realism. DIV [26] quantifies gesture variation via the average L1 distance across clips. BC [27] assesses speech-motion synchrony. For facial expressions, we use MSE [47] to quantify positional differences and LVD [48] to measure discrepancies between GT and generated facial vertices.",
"bbox": [511, 795, 908, 902],
"page_idx": 4
},
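Of these metrics, DIV has the simplest form. A sketch matching the description above (average L1 distance between flattened motion clips; not the reference implementation of [26]):

```python
import numpy as np

def diversity(clips):
    """DIV sketch: mean pairwise L1 distance between flattened motion clips."""
    flat = [np.asarray(c).reshape(-1) for c in clips]
    n = len(flat)
    dists = [np.abs(flat[i] - flat[j]).mean()
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
a, b, c = (rng.normal(size=(10, 6)) for _ in range(3))
assert diversity([a, a, a]) == 0.0   # identical clips: no diversity
assert diversity([a, b, c]) > 0.0    # distinct clips: positive diversity
```

Higher DIV indicates more varied generated gestures across clips of the same speech input.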
{
"type": "page_number",
"text": "13765",
"bbox": [480, 944, 517, 955],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/ecfa083670d714d79dd4c95afe184dc95c93b4c4df5646584ffc0173e277becc.jpg",
"image_caption": [
"Figure 5. Comparison on the BEAT2 [31] Dataset. SemTalk* refers to the model trained solely on the Base Motion Generation stage, capturing rhythmic alignment but lacking semantic gestures. In contrast, SemTalk successfully emphasizes sparse yet vivid motions. For instance, when saying \"my opinion,\" SemTalk generates a hand-raising gesture followed by an index finger extension for emphasis. Similarly, for \"never tell,\" our model produces a clear, repeated gesture matching the rhythm, reinforcing the intended emphasis."
],
"image_footnote": [],
"bbox": [109, 90, 897, 510],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/f09db62adf066365352e12b4bbf519f8db3b6c167df8738489b0a3ac7649784f.jpg",
"image_caption": [
"Figure 6. Comparison on the SHOW [48] Dataset. Our method performs better in motion diversity and semantic richness."
],
"image_footnote": [],
"bbox": [133, 578, 282, 672],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/d15e04660b009631faa3ac6e9e6bff96583a48bd97717a24d032bc6b4078ce89.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [282, 578, 864, 674],
"page_idx": 5
},
{
"type": "text",
"text": "4.2. Qualitative Results",
"text_level": 1,
"bbox": [89, 705, 274, 720],
"page_idx": 5
},
{
"type": "text",
"text": "Qualitative Comparisons. We encourage readers to watch our demo video for a clearer understanding of SemTalk's qualitative performance. Our method achieves superior speech-motion alignment, generating more realistic, diverse, and semantically consistent gestures than the baselines. As shown in Fig. 5, LivelySpeaker, TalkSHOW, EMAGE, and DiffSHEG exhibit jitter: EMAGE mainly in the legs and shoulders, while in TalkSHOW it affects the entire body. LivelySpeaker and DiffSHEG, which focus primarily on the upper body, produce slow and inconsistent motions, especially at speech clip boundaries. DiffSHEG improves",
"bbox": [89, 734, 485, 902],
"page_idx": 5
},
{
"type": "text",
"text": "gesture diversity over EMAGE and TalkSHOW, though EMAGE maintains greater naturalness. SemTalk surpasses all baselines in both realism and diversity. Compared to SemTalk*, SemTalk generates more expressive gestures, emphasizing key phrases (e.g., raising hands for “dream job” or pointing for “that is why”). While SemTalk* ensures rhythmic consistency, it lacks semantic expressiveness. By integrating frame-level semantic emphasis, SemTalk aligns motion with both rhythm and semantics, demonstrating the effectiveness of rhythmic consistency learning and semantic emphasis learning. In facial comparisons (Fig. 7), EMAGE shows minimal lip movement, while both DiffSHEG and",
"bbox": [511, 705, 908, 888],
"page_idx": 5
},
{
"type": "page_number",
"text": "13766",
"bbox": [480, 944, 519, 955],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/a717cf177762c731d3be4305e5fb997193b304e071da862c3175abdaaafa8d47.jpg",
"image_caption": [
"Figure 7. Facial Comparison on the BEAT2 [31] Dataset."
],
"image_footnote": [],
"bbox": [91, 89, 486, 218],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/a501893e2c1a842880f342ab42f131c7d71c2b26442d82b3b6605150e683eb60.jpg",
"image_caption": [
"Figure 8. Qualitative study on the semantic score. The semantic score aligns with keywords, influencing gesture intensity."
],
"image_footnote": [],
"bbox": [109, 246, 455, 401],
"page_idx": 6
},
{
"type": "text",
"text": "EMAGE reveal inconsistencies between lip motion and the rhythm of speech. In contrast, SemTalk produces smooth, natural transitions across syllables, resulting in realistic and expressive lips, significantly surpassing the baselines.",
"bbox": [89, 446, 482, 508],
"page_idx": 6
},
{
"type": "text",
"text": "On the SHOW dataset (Fig. 6), SemTalk shows more agile gestures than all baselines when applied to unseen data. Our method captures natural and contextually rich gestures, particularly in moments of emphasis such as \"I like to do\" and \"relaxing,\" where our model produces lively hand and body movements that align with the speech content.",
"bbox": [89, 508, 482, 598],
"page_idx": 6
},
{
|
| 1032 |
+
"type": "text",
|
| 1033 |
+
"text": "Semantic Score. Fig. 8 shows how semantic emphasis influences gesture intensity, with peaks in the semantic score aligning with keywords like \"comes,\" \"fantastic,\" and \"captured.\" By extracting semantic scores from key frames, we track gesture emphasis trends. Furthermore, as shown in Fig. 9, SemTalk adapts to different emotional tones even when the text remains unchanged. This adaptability prevents overfitting to the text itself, allowing the model to generate gestures that vary according to the emotional delivery of the speech. The learned semantic score provides fine-grained, frame-level control, keeping gestures both rhythmically synchronized and semantically aligned in real time. User Study. We conducted a user study with 10 video samples and 25 participants from diverse backgrounds, evaluating realism, semantic consistency, motion-speech synchrony, and diversity. Participants were required to rank shuffled videos across different methods. As shown in Fig. 10, our approach received dominant preferences across all metrics, especially in semantic consistency and realism.",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
89,
|
| 1036 |
+
598,
|
| 1037 |
+
482,
|
| 1038 |
+
883
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 6
|
| 1041 |
+
},
|
| 1042 |
+
{
|
| 1043 |
+
"type": "image",
|
| 1044 |
+
"img_path": "images/760d1fd802be9416574db3e923376768c8d359df700277ab247316886991aed9.jpg",
|
| 1045 |
+
"image_caption": [
|
| 1046 |
+
"Figure 9. Same words with different speech from the internet. \"emo\" represents different emotional tones extracted from speech. SemTalk can generate different motions, even when the text script is the same, preventing overfitting to the text itself."
|
| 1047 |
+
],
|
| 1048 |
+
"image_footnote": [],
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
521,
|
| 1051 |
+
88,
|
| 1052 |
+
905,
|
| 1053 |
+
223
|
| 1054 |
+
],
|
| 1055 |
+
"page_idx": 6
|
| 1056 |
+
},
|
| 1057 |
+
{
|
| 1058 |
+
"type": "image",
|
| 1059 |
+
"img_path": "images/d1f5db3de63a6fab1a877bb143eb01bf4d2b4cb70944e878ac8e7d740c826275.jpg",
|
| 1060 |
+
"image_caption": [
|
| 1061 |
+
"Figure 10. Results of the user study."
|
| 1062 |
+
],
|
| 1063 |
+
"image_footnote": [],
|
| 1064 |
+
"bbox": [
|
| 1065 |
+
539,
|
| 1066 |
+
291,
|
| 1067 |
+
885,
|
| 1068 |
+
402
|
| 1069 |
+
],
|
| 1070 |
+
"page_idx": 6
|
| 1071 |
+
},
|
| 1072 |
+
{
|
| 1073 |
+
"type": "text",
|
| 1074 |
+
"text": "4.3.Quantitative Results",
|
| 1075 |
+
"text_level": 1,
|
| 1076 |
+
"bbox": [
|
| 1077 |
+
511,
|
| 1078 |
+
436,
|
| 1079 |
+
705,
|
| 1080 |
+
452
|
| 1081 |
+
],
|
| 1082 |
+
"page_idx": 6
|
| 1083 |
+
},
|
| 1084 |
+
{
|
| 1085 |
+
"type": "text",
|
| 1086 |
+
"text": "Comparison with Baselines. As shown in Tab. 1, SemTalk outperforms previous methods on BEAT2, achieving lower FGD, MSE, and LVD, indicating better distribution alignment and reduced motion errors. For fairness, we follow [31] and add a lower-body VQ-VAE to TalkSHOW, Diff-SHEG, and SemTalk. Notably, SemTalk significantly reduces FGD, ensuring strong distribution matching. While TalkSHOW and EMAGE achieve competitive diversity (DIV) scores, SemTalk balances high semantic relevance with natural motion flow.",
|
| 1087 |
+
"bbox": [
|
| 1088 |
+
511,
|
| 1089 |
+
459,
|
| 1090 |
+
906,
|
| 1091 |
+
609
|
| 1092 |
+
],
|
| 1093 |
+
"page_idx": 6
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "text",
|
| 1097 |
+
"text": "On the SHOW dataset, SemTalk excels with the lowest FGD, MSE, and the highest BC, indicating precise beat alignment with the audio and enhanced semantic consistency in generated motions. Although EMAGE exhibits high DIV, our model achieves comparable results while maintaining smooth, realistic motion free from jitter.",
|
| 1098 |
+
"bbox": [
|
| 1099 |
+
511,
|
| 1100 |
+
612,
|
| 1101 |
+
905,
|
| 1102 |
+
702
|
| 1103 |
+
],
|
| 1104 |
+
"page_idx": 6
|
| 1105 |
+
},
|
| 1106 |
+
{
|
| 1107 |
+
"type": "text",
|
| 1108 |
+
"text": "Sem-gate. Tab. 2 highlights the effectiveness of sem-gate. Without sem-gate, the model fails to emphasize key moments. Randomized semantic scores led to poor performance by preventing meaningful frame distinction. Introducing a learned sem-gate even (w/o $\\mathcal{W}$ ) significantly improves semantic alignment and classification accuracy. Refinement is further enhanced through weighting strategies: feature weighting $\\mathcal{W}_f$ enhances motion emphasis, while loss weighting $\\mathcal{W}_l$ improves FGD and overall accuracy. These results suggest that weighting methods enhance the accuracy of the semantic score and help the model prioritize important frames. The best results come from applying two weighting methods together, where frames with stronger se",
|
| 1109 |
+
"bbox": [
|
| 1110 |
+
511,
|
| 1111 |
+
704,
|
| 1112 |
+
906,
|
| 1113 |
+
900
|
| 1114 |
+
],
|
| 1115 |
+
"page_idx": 6
|
| 1116 |
+
},
|
| 1117 |
+
{
|
| 1118 |
+
"type": "page_number",
|
| 1119 |
+
"text": "13767",
|
| 1120 |
+
"bbox": [
|
| 1121 |
+
480,
|
| 1122 |
+
944,
|
| 1123 |
+
517,
|
| 1124 |
+
955
|
| 1125 |
+
],
|
| 1126 |
+
"page_idx": 6
|
| 1127 |
+
},
|
| 1128 |
+
{
    "type": "table",
    "img_path": "images/58edef6b5eff11c192d309fb2a029ed365faf04ed21a1a6efe927bab1cd944e8.jpg",
    "table_caption": [],
    "table_footnote": [],
    "table_body": "<table><tr><td>Dataset</td><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td rowspan=\"10\">BEAT2</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>7.787</td><td>7.593</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>8.026</td><td>7.766</td></tr><tr><td>CaMN [30]</td><td>6.644</td><td>6.769</td><td>10.86</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>8.811</td><td>7.241</td><td>11.49</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [17]</td><td>11.80</td><td>6.659</td><td>11.28</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>9.040</td><td>7.716</td><td>8.213</td><td>8.614</td><td>8.043</td></tr><tr><td>TalkSHOW [48]</td><td>6.209</td><td>6.947</td><td>13.47</td><td>7.791</td><td>7.771</td></tr><tr><td>EMAGE [31]</td><td>5.512</td><td>7.724</td><td>13.06</td><td>7.680</td><td>7.556</td></tr><tr><td>DiffSHEG [9]</td><td>8.986</td><td>7.142</td><td>11.91</td><td>7.665</td><td>8.673</td></tr><tr><td>SemTalk (Ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr><tr><td rowspan=\"10\">SHOW</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>138.1</td><td>43.69</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>140.7</td><td>45.84</td></tr><tr><td>CaMN [30]</td><td>22.12</td><td>7.712</td><td>10.37</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>24.84</td><td>8.027</td><td>10.23</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [17]</td><td>32.17</td><td>7.844</td><td>10.14</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>27.22</td><td>8.209</td><td>8.541</td><td>145.6</td><td>47.35</td></tr><tr><td>TalkSHOW [48]</td><td>24.43</td><td>8.249</td><td>10.98</td><td>139.6</td><td>45.17</td></tr><tr><td>EMAGE [31]</td><td>22.12</td><td>8.280</td><td>12.46</td><td>136.1</td><td>42.44</td></tr><tr><td>DiffSHEG [9]</td><td>24.87</td><td>8.061</td><td>10.79</td><td>139.0</td><td>45.77</td></tr><tr><td>SemTalk (Ours)</td><td>20.18</td><td>8.304</td><td>11.36</td><td>134.1</td><td>39.15</td></tr></table>",
    "bbox": [99, 88, 475, 343],
    "page_idx": 7
},
{
    "type": "text",
    "text": "mantic signals receive higher emphasis. We also compare sem-gate with LivelySpeaker's SAG [53]. We find that replacing the Sparse Motion stage with SAG and substituting motion using GT semantic labels led to poor performance. SAG relies only on text-motion alignment, ignoring emotional tone, making it more prone to overfitting the text. In contrast, our sem-gate applies GT supervision with two weighting methods, achieving more accurate and stable semantic motion.",
    "bbox": [89, 441, 483, 577],
    "page_idx": 7
},
{
    "type": "text",
    "text": "Ablation Study on Components. We assess the impact of each component of our model on BEAT2 and present the results in Tab. 3, which reveals several key insights (more ablation results please see supplementary material):",
    "bbox": [89, 580, 483, 642],
    "page_idx": 7
},
{
    "type": "list",
    "sub_type": "text",
    "list_items": [
        "- Rhythmic Consistency Learning (RC) not only boosts performance on key metrics like FGD, LVD, and BC but also reduces the MSE, contributing to smoother and more realistic base motion.",
        "- Semantic Emphasis Learning (SE) proves essential for selectively enhancing semantic-rich gestures. The inclusion of SE, as shown in rows with SE enabled, improves both diversity (DIV) and FGD, enabling the model to emphasize semantically relevant motions. SE demonstrates its effectiveness in focusing on frame-level semantic information, which contributes to the generation of lifelike gestures with enriched contextual meaning.",
        "- Coarse2Fine Cross-Attention Module (C2F) effectively refines motion details, improving BC, FGD, and DIV. When combined with RVQ and RC, C2F achieves the best MSE and LVD, highlighting its role in enhancing motion realism and diversity hierarchically."
    ],
    "bbox": [89, 643, 482, 901],
    "page_idx": 7
},
|
| 1180 |
+
{
    "type": "table",
    "img_path": "images/46044737566a791b5e411fd91acb5ab4722fbdb7212022197ec7a2d652deb453.jpg",
    "table_caption": [
        "Table 1. Quantitative comparison with SOTA. SemTalk consistently outperforms baselines across both the BEAT2 and SHOW datasets. Lower values are better for FMD, FGD, MSE, and LVD. Higher values are better for BC and DIV. We report $\\mathrm{FGD} \\times 10^{-1}$ , $\\mathrm{BC} \\times 10^{-1}$ , $\\mathrm{MSE} \\times 10^{-8}$ and $\\mathrm{LVD} \\times 10^{-5}$ for simplify."
    ],
    "table_footnote": [],
    "table_body": "<table><tr><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>Acc(%)↑</td></tr><tr><td>w/o Sem-gate</td><td>4.893</td><td>7.702</td><td>12.42</td><td>-</td></tr><tr><td>SAG (LivelySpeaker [53])</td><td>4.618</td><td>7.682</td><td>12.45</td><td>-</td></tr><tr><td>Sem-gate (Random ψ)</td><td>4.634</td><td>7.700</td><td>12.44</td><td>50.07</td></tr><tr><td>Sem-gate (w/o W)</td><td>4.495</td><td>7.633</td><td>12.26</td><td>72.32</td></tr><tr><td>Sem-gate (w/ Wf)</td><td>4.408</td><td>7.679</td><td>12.28</td><td>78.52</td></tr><tr><td>Sem-gate (w/ Wl)</td><td>4.366</td><td>7.772</td><td>11.94</td><td>77.83</td></tr><tr><td>Sem-gate (ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>82.76</td></tr></table>",
    "bbox": [526, 88, 895, 200],
    "page_idx": 7
},
{
    "type": "table",
    "img_path": "images/44f4ce8af71fc71a6a99c3ea070234be949d6df598113e8ea29ab0ea3b82177e.jpg",
    "table_caption": [
        "Table 2. Ablation study on Sem-gate. \"Acc\" denotes semantic classification performance on BEAT2. \"w/o Sem-gate\" means directly input $f_{t}$ and $\\gamma_{h}$ without Sem-gate. \"SAG (LivelySpeaker [53])\" replaces the Sparse Motion Generation stage with LivelySpeaker's SAG method. \"Random $\\psi$ \" assigns frame-level scores randomly. \"w/o $\\mathcal{W}$ \" applies the semantic gate but excludes frame-level weighting. \"w/ $\\mathcal{W}_{f}$ \" applies feature weighting. \"w/ $\\mathcal{W}_{l}$ \" applies loss weighting. (as mentioned in Sec. 3.4). Sem-gate (ours) integrates both the semantic gate and frame-level weighting to enhance emphasis."
    ],
    "table_footnote": [],
    "table_body": "<table><tr><td>RC</td><td>SE</td><td>C2F</td><td>RVQ</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td><td>6.234</td><td>7.628</td><td>11.44</td><td>8.239</td><td>7.831</td></tr><tr><td>-</td><td>-</td><td>-</td><td>√</td><td>5.484</td><td>7.641</td><td>11.84</td><td>13.882</td><td>15.42</td></tr><tr><td>√</td><td>-</td><td>-</td><td>√</td><td>4.867</td><td>7.701</td><td>12.38</td><td>6.201</td><td>6.928</td></tr><tr><td>√</td><td>√</td><td>-</td><td>√</td><td>4.526</td><td>7.751</td><td>12.83</td><td>6.215</td><td>6.997</td></tr><tr><td>-</td><td>-</td><td>√</td><td>√</td><td>4.897</td><td>7.702</td><td>12.42</td><td>13.416</td><td>15.72</td></tr><tr><td>√</td><td>√</td><td>-</td><td>-</td><td>5.831</td><td>7.758</td><td>11.97</td><td>6.587</td><td>7.106</td></tr><tr><td>√</td><td>-</td><td>√</td><td>√</td><td>4.397</td><td>7.776</td><td>12.49</td><td>6.100</td><td>6.898</td></tr><tr><td>√</td><td>√</td><td>√</td><td>√</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr></table>",
    "bbox": [526, 345, 895, 460],
    "page_idx": 7
},
|
| 1212 |
+
{
    "type": "text",
    "text": "Table 3. Ablation study on each key component. \"RC\" denotes rhythmic consistency learning, \"SE\" denotes the semantic emphasis learning, and \"C2F\" denotes Coarse2Fine Cross-Att Module, \"RVQ\" denotes the RVQ-VAE.",
    "bbox": [511, 465, 903, 521],
    "page_idx": 7
},
{
    "type": "text",
    "text": "- RVQ-VAE (RVQ) enhances the diversity and realism of generated motion. Though it slightly increases MSE and LVD, it notably improves FGD, leading to more natural motion generation compared to standard VQ-VAE.",
    "bbox": [513, 542, 906, 604],
    "page_idx": 7
},
{
    "type": "text",
    "text": "5. Conclusion",
    "text_level": 1,
    "bbox": [513, 617, 633, 633],
    "page_idx": 7
},
{
    "type": "text",
    "text": "We propose SemTalk, a novel approach for holistic cospeech motion generation with frame-level semantic emphasis. Our method addresses the integration of sparse yet expressive motion into foundational rhythm-related motion, which has received less attention in previous works. We develop a framework that separately learns rhythm-related base motion through coarse2fine cross-attention module and rhythmic consistency learning, while capturing semantic-aware motion through Semantic Emphasis Learning. These components are then adaptively fused based on a learned semantic score. Our approach has demonstrated state-of-the-art performance on two public datasets quantitatively and qualitatively. The qualitative results and user study show that our method can generate high-quality cospeech motion sequences that enhance frame-level semantics over robust base motions, reflecting the full spectrum of human expressiveness.",
    "bbox": [509, 643, 906, 900],
    "page_idx": 7
},
{
    "type": "page_number",
    "text": "13768",
    "bbox": [480, 944, 519, 955],
    "page_idx": 7
},
|
| 1268 |
+
{
    "type": "text",
    "text": "Acknowledgments. This work was supported by Alibaba Research Intern Program, the Young Scientists Fund of the National Natural Science Foundation of China No. 624B2110, the National Key Research and Development Program of China No. 2024YFC3015600, the Fundamental Research Funds for Central Universities No.2042023KF0180 & No.2042025KF0053. The numerical calculation is supported by supercomputing system in Super-computing Center of Wuhan University and Tongyi Lab, Alibaba Group.",
    "bbox": [89, 90, 485, 233],
    "page_idx": 8
},
{
    "type": "text",
    "text": "References",
    "text_level": 1,
    "bbox": [91, 257, 187, 273],
    "page_idx": 8
},
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized cospeech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20566–20576, 2022. 2",
        "[2] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audio-driven motion synthesis with diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-20, 2023. 2",
        "[3] Tenglong Ao, Zeyi Zhang, and Libin Liu. Gesture diffuclip: Gesture diffusion model with clip latents. ACM Transactions on Graphics (TOG), 42(4):1-18, 2023. 3",
        "[4] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. Audiolm: a language modeling approach to audio generation. IEEE/ACM transactions on audio, speech, and language processing, 31:2523-2533, 2023. 3",
        "[5] Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 413-420, 1994. 2",
        "[6] Justine Cassell, David McNeill, and Karl-Erik McCullough. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics & cognition, 7(1):1-34, 1999. 1",
        "[7] Justine Cassell, Hannes Högni Vilhjalmsson, and Timothy Bickmore. Beat: the behavior expression animation toolkit. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 477-486, 2001. 2",
        "[8] Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. Enabling synergistic full-body control in prompt-based co-speech motion generation. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6774-6783, 2024. 2",
        "[9] Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7352-7361, 2024. 2, 3, 8"
    ],
    "bbox": [99, 282, 480, 898],
    "page_idx": 8
},
|
| 1313 |
+
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "[10] Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al. Emotional speech-driven 3d body animation via disentangled latent diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1942-1953, 2024. 2, 4",
        "[11] Selina Chu, Shrikanth Narayanan, and C-C Jay Kuo. Environmental sound recognition with time-frequency audio features. IEEE Transactions on Audio, Speech, and Language Processing, 17(6):1142-1158, 2009. 3",
        "[12] Radek Daneček, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael Black, and Timo Bolkart. Emotional speech-driven animation with content-emotion disentanglement. In SIGGRAPH Asia 2023 Conference Papers, pages 1-13, 2023. 2",
        "[13] Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, and Taku Komura. Faceformer: Speech-driven 3d facial animation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18770-18780, 2022. 8",
        "[14] Susan Goldin-Meadow. The role of gesture in communication and thinking. Trends in cognitive sciences, 3(11):419-429, 1999. 1",
        "[15] Kehong Gong, Dongze Lian, Heng Chang, Chuan Guo, Zihang Jiang, Xinxin Zuo, Michael Bi Mi, and Xinchao Wang. Tm2d: Bimodality driven 3d dance generation via music-text integration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9942–9952, 2023. 2",
        "[16] Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1900-1910, 2024. 3",
        "[17] Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, and Christian Theobalt. Learning speech-driven 3d conversational gestures from video. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, pages 101-108, 2021. 1, 2, 8",
        "[18] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29: 3451-3460, 2021. 3",
        "[19] Chien-Ming Huang and Bilge Mutlu. Robot behavior toolkit: generating effective social behaviors for robots. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pages 25-32, 2012. 2",
        "[20] Adam Kendon. Gesture: Visible action as utterance. Cambridge University Press, 2004. 1",
        "[21] Michael Kipp. Gesture generation by imitation: From human behavior to computer character animation. Universal-Publishers, 2005. 2",
        "[22] Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R Thorisson, and Hannes Vilhjalmsson. Towards a common"
    ],
    "bbox": [516, 92, 903, 898],
    "page_idx": 8
},
{
    "type": "page_number",
    "text": "13769",
    "bbox": [480, 944, 519, 955],
    "page_idx": 8
},
|
| 1350 |
+
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "framework for multimodal generation: The behavior markup language. In Intelligent Virtual Agents: 6th International Conference, IVA 2006, Marina Del Rey, CA, USA, August 21-23, 2006. Proceedings 6, pages 205-217. Springer, 2006. 2",
        "[23] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 international conference on multimodal interaction, pages 242-250, 2020. 2",
        "[24] Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, and Gustav Eje Henter. A large, crowdsourced evaluation of gesture generation systems on common data: The genea challenge 2020. In Proceedings of the 26th International Conference on Intelligent User Interfaces, pages 11-21, 2021. 1",
        "[25] Alex Lascarides and Matthew Stone. A formal semantic analysis of gesture. Journal of Semantics, 26(4):393-449, 2009. 1",
        "[26] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and Linchao Bao. Audio2gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11293-11302, 2021. 5",
        "[27] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13401-13412, 2021. 5",
        "[28] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. Seeg: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473-10482, 2022. 2",
        "[29] Haiyang Liu, Naoya Iwamoto, Zihao Zhu, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Disco: Disentangled implicit content and rhythm learning for diverse co-speech gestures synthesis. In Proceedings of the 30th ACM international conference on multimedia, pages 3764-3773, 2022. 2",
        "[30] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Beat: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European conference on computer vision, pages 612-630. Springer, 2022. 1, 5, 8",
        "[31] Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1144-1154, 2024. 1, 2, 3, 5, 6, 7, 8",
        "[32] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, and Bolei"
    ],
    "bbox": [91, 90, 483, 900],
    "page_idx": 9
},
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "Zhou. Learning hierarchical cross-modal association for cospeech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10462-10472, 2022. 2",
        "[33] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1566-1576, 2024. 2",
        "[34] Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, and Heung-Yeung Shum. Humantomato: Text-aligned whole-body motion generation. arXiv preprint arXiv:2310.12978, 2023. 2",
        "[35] Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. emotion2vec: Self-supervised pre-training for speech emotion representation. arXiv preprint arXiv:2312.15185, 2023. 4",
        "[36] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro. Virtual character performance from speech. In Proceedings of the 12th ACM SIGGRAPH/Eurographics symposium on computer animation, pages 25-35, 2013. 2",
        "[37] Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, and Alexander Richard. From audio to photoreal embodiment: Synthesizing humans in conversations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1001-1010, 2024. 2",
        "[38] Ashi Özyürek, Roel M Willems, Sotaro Kita, and Peter Hagoort. On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of cognitive neuroscience, 19(4):605-616, 2007. 1",
        "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 4",
        "[40] Manuel Rebol, Christian Güttl, and Krzysztof Pietroszek. Real-time gesture animation generation from speech for virtual human interaction. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-4, 2021. 2",
        "[41] Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, and Jiwen Lu. Difftalk: Crafting diffusion models for generalized talking head synthesis. arXiv preprint arXiv:2301.03786, 2(4):5, 2023. 2",
        "[42] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2, 5",
        "[43] Hanwei Wu and Markus Flierl. Learning product codebooks using vector-quantized autoencoders for image retrieval. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1-5. IEEE, 2019. 2",
        "[44] Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, and Tien-Tsin Wong. Codetalker: Speech-driven"
    ],
    "bbox": [516, 92, 906, 900],
    "page_idx": 9
},
|
| 1400 |
+
{
    "type": "page_number",
    "text": "13770",
    "bbox": [480, 944, 519, 955],
    "page_idx": 9
},
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "3d facial animation with discrete motion prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12780-12790, 2023. 8",
        "[45] Zunnan Xu, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. Chain of generation: Multi-modal gesture synthesis via cascaded conditional control. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6387-6395, 2024. 3",
        "[46] Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, et al. Unified gesture: A unified gesture synthesis model for multiple skeletons. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1033-1044, 2023. 2",
        "[47] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. 2, 5, 8",
        "[48] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 469-480, 2023. 1, 2, 5, 6, 8",
        "[49] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jae-hong Kim, and Geehyuk Lee. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In 2019 International Conference on Robotics and Automation (ICRA), pages 4303-4309. IEEE, 2019. 2",
        "[50] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39 (6):1-16, 2020. 5",
        "[51] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495-507, 2021. 3",
        "[52] Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Zerong Zheng, Yebin Liu, and Kun Li. Speechact: Towards generating whole-body motion from speech. IEEE Transactions on Visualization and Computer Graphics, 2025. 2",
        "[53] Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao. Livelyspeaker: Towards semantic-aware co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20807-20817, 2023. 2, 5, 8",
        "[54] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10544-10553, 2023. 2"
    ],
    "bbox": [91, 90, 483, 852],
    "page_idx": 10
},
{
    "type": "page_number",
    "text": "13771",
    "bbox": [480, 945, 517, 955],
    "page_idx": 10
}
]
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_model.json
ADDED
@@ -0,0 +1,2147 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
  [
    {
      "type": "header",
      "bbox": [0.107, 0.003, 0.182, 0.044],
      "angle": 0,
      "content": "CVF"
    },
    {
      "type": "header",
      "bbox": [0.239, 0.001, 0.808, 0.047],
      "angle": 0,
      "content": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore."
    },
    {
      "type": "title",
      "bbox": [0.165, 0.13, 0.834, 0.178],
      "angle": 0,
      "content": "SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis"
    },
    {
      "type": "image",
      "bbox": [0.117, 0.202, 0.877, 0.277],
      "angle": 0,
      "content": null
    },
    {
      "type": "image",
      "bbox": [0.115, 0.284, 0.372, 0.453],
      "angle": 0,
      "content": null
    },
    {
      "type": "image",
      "bbox": [0.376, 0.284, 0.886, 0.453],
      "angle": 0,
      "content": null
    },
    {
      "type": "image_caption",
      "bbox": [0.089, 0.459, 0.908, 0.531],
      "angle": 0,
      "content": "Figure 1. On the left, we analyze semantic labels from the BEAT2 dataset [31] and visualize frame-level motion, revealing that semantically relevant motions are rare and sparse, aligning with real-life observations. On the right, this observation drives the design of SemTalk, which establishes a rhythm-aligned base motion and dynamically emphasizes sparse semantic gestures at the frame-level. In this example, SemTalk amplifies expressiveness on words like \"watching\" and \"just,\" enhancing gesture and torso movements. The semantic scores below are automatically generated by SemTalk to modulate semantic emphasis over time."
    },
    {
      "type": "title",
      "bbox": [0.248, 0.547, 0.328, 0.563],
      "angle": 0,
      "content": "Abstract"
    },
    {
      "type": "text",
      "bbox": [0.089, 0.579, 0.486, 0.868],
      "angle": 0,
      "content": "A good co-speech motion generation cannot be achieved without a careful integration of common rhythmic motion and rare yet essential semantic motion. In this work, we propose SemTalk for holistic co-speech motion generation with frame-level semantic emphasis. Our key insight is to separately learn base motions and sparse motions, and then adaptively fuse them. In particular, coarse2fine cross-attention module and rhythmic consistency learning are explored to establish rhythm-related base motion, ensuring a coherent foundation that synchronizes gestures with the speech rhythm. Subsequently, semantic emphasis learning is designed to generate semantic-aware sparse motion, focusing on frame-level semantic cues. Finally, to integrate sparse motion into the base motion and generate semantic-emphasized co-speech gestures, we further leverage a learned semantic score for adaptive synthesis. Qualitative and quantitative comparisons on two public datasets demonstrate that our method outperforms the state-of-the-art, delivering high-quality co-speech motion"
    },
    {
      "type": "text",
      "bbox": [0.514, 0.549, 0.907, 0.563],
      "angle": 0,
      "content": "with enhanced semantic richness over a stable base motion."
    },
    {
      "type": "title",
      "bbox": [0.514, 0.614, 0.646, 0.63],
      "angle": 0,
      "content": "1. Introduction"
    },
    {
      "type": "text",
      "bbox": [0.511, 0.642, 0.907, 0.763],
      "angle": 0,
      "content": "Nonverbal communication, including body language, hand gestures, and facial expressions, is integral to human interactions. It enriches conversations with contextual cues and enhances understanding among participants [6, 14, 20, 24]. This aspect is particularly significant in holistic co-speech motion generation, where the challenge lies in synthesizing gestures that align with speech rhythm while also capturing the infrequent yet critical semantic gestures [25, 38]."
    },
    {
      "type": "text",
      "bbox": [0.511, 0.765, 0.909, 0.903],
      "angle": 0,
      "content": "Most existing methods [17, 30, 48] rely heavily on rhythm-related audio features as conditions for gesture generation. While these rhythm-based features successfully align gestures with the timing of speech, they often overshadow the sparse yet expressive semantic motion (see Fig. 1). As a result, the generated motions may lack the necessary contextual depth and nuanced expressiveness for natural interaction. Some methods try to address this by incorporating semantic information like emotion, style, and"
    },
    {
      "type": "page_footnote",
      "bbox": [0.109, 0.875, 0.223, 0.888],
      "angle": 0,
      "content": "* Equal contribution."
    },
    {
      "type": "page_footnote",
      "bbox": [0.11, 0.888, 0.24, 0.9],
      "angle": 0,
      "content": "† Corresponding author."
    },
    {
      "type": "list",
      "bbox": [0.109, 0.875, 0.24, 0.9],
      "angle": 0,
      "content": null
    },
    {
      "type": "page_number",
      "bbox": [0.481, 0.945, 0.518, 0.957],
      "angle": 0,
      "content": "13761"
    }
  ],
  [
    {
      "type": "text",
      "bbox": [0.09, 0.092, 0.482, 0.182],
      "angle": 0,
      "content": "content[10, 12, 23, 32]. However, the rhythm features tend to dominate, making it difficult for the models to capture sparse, semantically relevant gestures at the frame level. These rare but impactful gestures are often diluted or overlooked, highlighting the challenge of balancing rhythmic alignment with semantic expressiveness in co-speech motion generation."
    },
    {
      "type": "text",
      "bbox": [0.09, 0.185, 0.483, 0.35],
      "angle": 0,
      "content": "In real-world human conversations, we observe that while most speech-related gestures are indeed rhythm-related, only a limited number of frames involve semantically emphasized gestures. This insight suggests that co-speech motions can be decomposed into two distinct components: (i) Rhythm-related base motion: these provide a continuous, coherent base motion aligned with the speech rhythm, reflecting the natural timing of speaking. (ii) Semantic-aware sparse motion: these occur infrequently but are essential for conveying specific meanings or emphasizing key points within the conversation."
    },
    {
      "type": "text",
      "bbox": [0.093, 0.354, 0.483, 0.67],
      "angle": 0,
      "content": "Inspired by this observation, we propose a new framework, SemTalk. SemTalk models the base motion and the sparse motion separately and then fuses them adaptively to generate high-fidelity co-speech motion. Specifically, we first focus on generating rhythm-related base motion by introducing a coarse2fine cross-attention module and rhythmic consistency learning. We design a hierarchical coarse2fine cross-attention module, which progressively refines the base motion cues in a coarse-to-fine manner, starting from the face and moving through the hands, upper body, and lower body. This approach ensures consistent rhythmic transmission across all body parts, enhancing the coherence of the base motion. Moreover, we propose a local-global rhythmic consistency learning approach, which enforces alignment at both the frame and sequence levels. Locally, a frame-level consistency loss ensures that each frame is precisely synchronized with its corresponding speech features, guaranteeing accurate temporal alignment. Globally, a sequence-level consistency loss sustains a coherent rhythmic flow across the entire motion sequence, preserving consistency throughout the generated gestures."
    },
    {
      "type": "text",
      "bbox": [0.09, 0.673, 0.483, 0.853],
      "angle": 0,
      "content": "Furthermore, we introduce a semantic emphasis learning approach, which focuses on generating semantic-aware sparse motion. This approach utilizes frame-level semantic cues from textual information, high-level speech features, and emotion to identify frames that require emphasis through a learned semantic score produced by a gating strategy, i.e., sem-gate. The sem-gate is designed to dynamically activate semantic motions at key frames through two weighting methods applied on the motion condition and the loss, respectively, together with semantic label guidance, allowing the model to produce motion enriched with deeper semantic meaning and contextual relevance."
    },
    {
      "type": "text",
      "bbox": [0.09, 0.856, 0.483, 0.901],
      "angle": 0,
      "content": "Finally, the base motion and sparse motion are integrated through semantic score-based motion fusion, which adaptively amplifies expressiveness by incorporating semantic-"
    },
    {
      "type": "text",
      "bbox": [0.514, 0.092, 0.874, 0.121],
      "angle": 0,
      "content": "aware key frames into the rhythm-related base motion. Our contributions are summarized below:"
    },
    {
      "type": "text",
      "bbox": [0.515, 0.122, 0.905, 0.182],
      "angle": 0,
      "content": "- We propose SemTalk, a novel framework for holistic co-speech motion generation that separately models rhythm-related base motion and semantic-aware sparse motion, adaptively integrating them via a learned semantic gate."
    },
    {
      "type": "text",
      "bbox": [0.515, 0.183, 0.905, 0.287],
      "angle": 0,
      "content": "- We propose a hierarchical coarse2fine cross-attention module to refine base motion and a local-global rhythmic consistency learning to integrate latent face and hand features with rhythm-related priors, ensuring coherence and rhythmic consistency. We then propose semantic emphasis learning to generate semantic gestures at certain frames, enhancing semantic-aware sparse motion."
    },
    {
      "type": "text",
      "bbox": [0.515, 0.289, 0.904, 0.333],
      "angle": 0,
      "content": "- Experimental results show that our model surpasses state-of-the-art methods qualitatively and quantitatively, achieving higher motion quality and richer semantics."
    },
    {
      "type": "list",
      "bbox": [0.515, 0.122, 0.905, 0.333],
      "angle": 0,
      "content": null
    },
    {
      "type": "title",
      "bbox": [0.514, 0.347, 0.655, 0.362],
      "angle": 0,
      "content": "2. Related Work"
    },
    {
      "type": "text",
      "bbox": [0.513, 0.373, 0.905, 0.674],
      "angle": 0,
      "content": "Co-speech Gesture Generation. Co-speech gesture generation aims to produce gestures aligned with speech. Early rule-based methods [7, 19, 21, 22, 41] lacked variability, while deterministic models [5, 7, 29, 36, 46, 49] mapped speech directly to gestures. Probabilistic models, including GANs [1, 17, 40] and diffusion models [2, 10, 47, 54], introduced variability. Some methods incorporated semantic cues, such as HA2G [32] and SEEG [28], which used hierarchical networks and alignment techniques. SynTalk [8] employs prompt-based control but treats inputs as signal strengths rather than fully interpreting semantics. LivelySpeaker [53] combines rhythmic features and semantic cues using CLIP [39] but struggles to integrate gestures with rhythm and capture semantics consistently; moreover, it only provides global control, limiting fine-grained refinement. DisCo [29] disentangles content and rhythm but lacks explicit modeling of sparse semantic gestures. SemTalk addresses this by separately modeling rhythm-related base motion and semantic-aware sparse motion, integrating them adaptively through a learned semantic score."
    },
    {
      "type": "text",
      "bbox": [0.513, 0.675, 0.905, 0.901],
      "angle": 0,
      "content": "Holistic Co-speech Motion Generation. Generating synchronized, expressive full-body motion from speech remains challenging, especially in coordinating the face, hands, and torso [9, 31, 34, 37, 48, 52]. Early methods introduced generative models to improve synchronization, but issues persisted. TalkSHOW [48] improved with VQ-VAE [42] cross-conditioning but handled facial expressions separately, causing fragmented outputs. DiffSHEG [9] and EMAGE [31] used separate encoders for expressions and gestures, but their unidirectional flow limited coherence. ProbTalk [33] leverages PQ-VAE [43] for improved body-facial synchronization but mainly relies on rhythmic cues, risking the loss of nuanced semantic gestures. Inspired by TM2D [15], which decomposes dance motion into music-related components, we separately model co-speech motion"
    },
    {
      "type": "page_number",
      "bbox": [0.481, 0.945, 0.519, 0.957],
      "angle": 0,
      "content": "13762"
    }
  ],
| 347 |
+
[
|
| 348 |
+
{
|
| 349 |
+
"type": "text",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.091,
|
| 352 |
+
0.092,
|
| 353 |
+
0.41,
|
| 354 |
+
0.106
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "into rhythm-related and semantic-aware motion."
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "title",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.091,
|
| 363 |
+
0.121,
|
| 364 |
+
0.182,
|
| 365 |
+
0.136
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "3. Method"
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "title",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.091,
|
| 374 |
+
0.146,
|
| 375 |
+
0.329,
|
| 376 |
+
0.163
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "3.1. Preliminary on RVQ-VAE"
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.09,
|
| 385 |
+
0.169,
|
| 386 |
+
0.484,
|
| 387 |
+
0.29
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "Following [4, 16, 51], our approach uses a residual vector-quantized autoencoder (RVQ-VAE) to progressively capture complex body movements in a few players. To retain unique motion characteristics across body regions, we segment the body into four parts—face, upper body, hands, and lower body—each with a dedicated RVQ-VAE, following [3, 31]. This segmentation preserves each part's dynamics and prevents feature entanglement."
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "title",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.091,
|
| 396 |
+
0.299,
|
| 397 |
+
0.201,
|
| 398 |
+
0.314
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "3.2. Overview"
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "text",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.09,
|
| 407 |
+
0.322,
|
| 408 |
+
0.484,
|
| 409 |
+
0.412
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "As shown in Figure 2, our SemTalk pipeline includes two main components: the Base Motion Blocks \\( f_{r}(\\cdot) \\) and the Sparse Motion Blocks \\( f_{b}(\\cdot) \\). Given rhythmic features \\( \\gamma_{b}, \\gamma_{h} \\), a seed pose \\( \\tilde{m} \\), and a speaker ID \\( id \\), the Base Motion Blocks generate rhythm-aligned codes \\( q^{b} \\), forming the rhythmic foundation of the base motion:"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "equation",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.185,
|
| 418 |
+
0.424,
|
| 419 |
+
0.482,
|
| 420 |
+
0.442
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "\\[\nf _ {r}: \\left(\\gamma_ {b}, \\gamma_ {h}, \\tilde {m}, i d; \\theta_ {f _ {r}}\\right)\\rightarrow q ^ {b}, \\tag {1}\n\\]"
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.09,
|
| 429 |
+
0.454,
|
| 430 |
+
0.484,
|
| 431 |
+
0.545
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "where \\(\\theta_{f_r}\\) denotes the learnable parameters of the Base Motion Blocks. The Sparse Motion Blocks then take semantic features \\(\\phi_l\\), \\(\\phi_g\\), \\(\\phi_e\\), along with \\(\\gamma_h\\), \\(\\tilde{m}\\) and \\(id\\), to produce frame-level semantic codes \\(q^s\\) and semantic score \\(\\psi\\). \\(\\psi\\) then triggers these codes only for semantically significant frames, producing a sparse motion representation:"
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "equation",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.158,
|
| 440 |
+
0.557,
|
| 441 |
+
0.482,
|
| 442 |
+
0.574
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "\\[\nf _ {s}: \\left(\\phi_ {l}, \\phi_ {g}, \\phi_ {e}, \\tilde {m}, i d; \\theta_ {f _ {a}}\\right)\\rightarrow \\left(q ^ {s}, \\psi\\right), \\tag {2}\n\\]"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "text",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.09,
|
| 451 |
+
0.586,
|
| 452 |
+
0.483,
|
| 453 |
+
0.632
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "where \\(\\theta_{f_s}\\) represents the Sparse Motion Block parameters. Finally, the semantic emphasis mechanism \\(\\mathcal{E}\\) combines \\(q^b\\) and \\(q^s\\), guided by \\(\\psi\\), to form the final motion codes \\(q^m\\):"
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "equation",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.221,
|
| 462 |
+
0.643,
|
| 463 |
+
0.482,
|
| 464 |
+
0.66
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "\\[\nq ^ {m} = \\mathcal {E} \\left(q ^ {b}, q ^ {s}; \\psi\\right). \\tag {3}\n\\]"
|
| 468 |
+
},
{"type": "text", "bbox": [0.091, 0.672, 0.483, 0.687], "angle": 0, "content": "The motion decoder then uses \\( q^{m} \\) to generate the output \\( m' \\)."},
{"type": "title", "bbox": [0.091, 0.712, 0.442, 0.729], "angle": 0, "content": "3.3. Generating Rhythm-related Base Motion"},
{"type": "text", "bbox": [0.09, 0.735, 0.483, 0.795], "angle": 0, "content": "The Base Motion Generation (Fig. 3 a) in SemTalk establishes a rhythmically aligned foundation by leveraging both rhythmic and speaker-specific features, enhancing the naturalness and personalization of generated motion."},
{"type": "text", "bbox": [0.09, 0.796, 0.484, 0.902], "angle": 0, "content": "Rhythmic Speech Encoding. To synchronize motion with speech, SemTalk incorporates rhythmic features: beats \\(\\gamma_{b}\\) and HuBERT features \\(\\gamma_{h}\\). \\(\\gamma_{b}\\), derived from amplitude, short-time energy [11], and onset detection, mark key rhythmic points for aligning gestures with speech. Meanwhile, \\(\\gamma_{h}\\), extracted by the HuBERT encoder [18], captures high-level audio traits. In addition to rhythmic features \\(\\gamma\\),"},
{"type": "image", "bbox": [0.527, 0.089, 0.888, 0.395], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.513, 0.403, 0.907, 0.501], "angle": 0, "content": "Figure 2. An overview of the SemTalk pipeline. SemTalk generates holistic co-speech motion by first constructing rhythm-aligned \\( q^r \\) in \\( f_r \\), guided by rhythmic consistency loss \\( L_{\\mathrm{Rhy}} \\). Meanwhile, \\( f_s \\) produces frame-level semantic codes \\( q^s \\), activated selectively by the semantic score \\( \\psi \\). Finally, \\( q^m \\) is obtained by fusing \\( q^r \\) and \\( q^s \\) based on \\( \\psi \\) and passed through the motion decoder, yielding synchronized and contextually enriched motions."},
{"type": "text", "bbox": [0.512, 0.527, 0.906, 0.603], "angle": 0, "content": "SemTalk uses a seed pose \\(\\tilde{m}\\) and speaker identity \\(id\\) to generate a personalized, rhythm-aligned latent pose \\(p\\). Then MLP-based Face Enhancement and Body Part-Aware modules utilize \\(\\gamma\\), \\(p\\) and \\(id\\) to obtain latent face \\(f_{e}\\), hands \\(f_{h}\\), upper body \\(f_{u}\\) and lower body \\(f_{l}\\)."},
{"type": "text", "bbox": [0.512, 0.608, 0.907, 0.85], "angle": 0, "content": "Coarse2Fine Cross-Attention Module. To facilitate the learning of base motion, we first propose a transformer-based hierarchical Coarse2Fine Cross-Attention Module that utilizes \\( f_{e} \\), \\( f_{h} \\), \\( f_{u} \\) and \\( f_{l} \\) to obtain the latent base motion \\( f_{b} \\). The refinement begins with \\( \\gamma \\) for \\( f_{e} \\), which guides the rhythmic representation for \\( f_{h} \\), followed by conditioning \\( f_{u} \\) and finally influencing \\( f_{l} \\). Since mouth movements closely correspond to speech syllables with minimal delay, we use the face to guide hand motions, inspired by DiffSHEG [9]. As the upper and lower body movements are less directly driven by speech and instead reflect the natural swinging of the hands and torso, we adopt cascading guidance: hands influence the upper body, which in turn drives the lower body. This structured approach, moving from the face to the hands, upper body, and lower body, ensures smooth and coherent motion propagation across the entire body."},
{"type": "text", "bbox": [0.512, 0.856, 0.909, 0.903], "angle": 0, "content": "Rhythmic Consistency Learning. Inspired by CoG's use of InfoNCE loss [45] to synchronize facial expressions with audio cues, our approach adopts a similar philosophy of"},
{"type": "page_number", "bbox": [0.481, 0.945, 0.519, 0.957], "angle": 0, "content": "13763"}
],
[
{"type": "image", "bbox": [0.134, 0.09, 0.861, 0.306], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.089, 0.313, 0.908, 0.371], "angle": 0, "content": "Figure 3. Architecture of SemTalk. SemTalk generates holistic co-speech motion in three stages. (a) Base Motion Generation uses rhythmic consistency learning to produce rhythm-aligned codes \\( q^b \\), conditioned on rhythmic features \\( \\gamma_b, \\gamma_h \\). (b) Sparse Motion Generation employs semantic emphasis learning to generate semantic codes \\( q^s \\), activated by the semantic score \\( \\psi \\). (c) Adaptive Fusion automatically combines \\( q^b \\) and \\( q^s \\) based on \\( \\psi \\) to produce mixed codes \\( q^m \\) at the frame level for rhythmically aligned and contextually rich motions."},
{"type": "text", "bbox": [0.091, 0.383, 0.467, 0.399], "angle": 0, "content": "aligning motion and speech rhythm. It can be defined as:"},
{"type": "equation", "bbox": [0.1, 0.41, 0.482, 0.469], "angle": 0, "content": "\\[\n\\mathcal{L}_{\\mathrm{Rhy}} = -\\frac{1}{N} \\sum_{i=1}^{N} \\log \\frac{\\exp\\left(\\operatorname{sim}\\left(h\\left(f_{i}\\right), \\gamma_{h}^{i}\\right) / \\tau\\right)}{\\sum_{j=1}^{N} \\exp\\left(\\operatorname{sim}\\left(h\\left(f_{i}\\right), \\gamma_{h}^{j}\\right) / \\tau\\right)}, \\tag{4}\n\\]"},
{"type": "text", "bbox": [0.09, 0.469, 0.483, 0.545], "angle": 0, "content": "where \\(N\\) denotes the number of frames (or the batch size), \\(\\tau\\) denotes the temperature hyperparameter, \\(h(\\cdot)\\) is the projection head for latent motion, \\(f_{i}\\) and \\(\\gamma_h^i\\) are the latent motion and rhythmic features at frame (or sample) \\(i\\), and \\(\\mathrm{sim}(\\cdot)\\) represents cosine similarity."},
{"type": "text", "bbox": [0.09, 0.546, 0.483, 0.666], "angle": 0, "content": "Our approach differs fundamentally from CoG by incorporating separate local and global rhythmic consistency losses, applied to both the latent face \\( f_{e} \\) and latent hands \\( f_{h} \\), ensuring a more cohesive and synchronized representation across the entire motion sequence. This rhythmic consistency loss ensures that the motions are not only synchronized at the frame level but also maintain a consistent rhythmic flow across the entire sequence."},
{"type": "text", "bbox": [0.09, 0.667, 0.483, 0.773], "angle": 0, "content": "The local frame-level consistency loss \\(\\mathcal{L}_{\\mathrm{Rhy}}^{(L)}\\) aligns the motion features of each frame with the corresponding rhythmic cues \\(\\gamma_{h}\\). By leveraging HuBERT features \\(\\gamma_{h}\\) instead of basic beat features \\(\\gamma_{b}\\), which only capture rhythmic pauses, we incorporate rich, high-level audio representations that enhance the model's ability to capture rhythm-related motion patterns and maintain temporal coherence."},
{"type": "text", "bbox": [0.09, 0.774, 0.483, 0.853], "angle": 0, "content": "The global sentence-level consistency loss \\(\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}\\) is designed to ensure rhythmic coherence at a global level. Unlike the local loss, \\(\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}\\) reinforces rhythm consistency throughout the sequence, ensuring that the generated motion remains smooth and rhythm-aligned throughout its duration."},
{"type": "text", "bbox": [0.09, 0.853, 0.484, 0.902], "angle": 0, "content": "By jointly minimizing \\(\\mathcal{L}_{\\mathrm{Rhy}}^{(L)}\\) and \\(\\mathcal{L}_{\\mathrm{Rhy}}^{(G)}\\), rhythmic consistency learning enables SemTalk to produce base motions that are rhythmically aligned and temporally cohesive,"},
{"type": "text", "bbox": [0.513, 0.383, 0.88, 0.399], "angle": 0, "content": "forming a solid rhythm-related base motion foundation."},
{"type": "title", "bbox": [0.513, 0.41, 0.881, 0.426], "angle": 0, "content": "3.4. Generating Semantic-aware Sparse Motion"},
{"type": "text", "bbox": [0.512, 0.432, 0.905, 0.522], "angle": 0, "content": "The Sparse Motion Generation (Fig. 3 b) in SemTalk adds semantic-aware sparse motion to base motion by incorporating semantic cues drawn from speech content and emotional tone. By separating rhythm and semantics, this stage enhances motion generation by emphasizing contextually meaningful motion at key semantic moments."},
{"type": "text", "bbox": [0.512, 0.523, 0.906, 0.658], "angle": 0, "content": "Semantic Speech Encoding. To capture semantic cues in speech, similar to [10], Semantic Emphasis Learning combines frame-level text embeddings \\(\\phi_t\\), sentence-level features \\(\\phi_g\\) from the CLIP model [39], and emotion features \\(\\phi_e\\) from the emotion2vec model [35]. Together with the audio feature \\(\\gamma_h\\), these features form a comprehensive semantic representation \\(f_t\\) that reflects both the content and emotional undertones of speech, enabling SemTalk to activate motions that are sensitive to nuanced semantic cues."},
{"type": "text", "bbox": [0.512, 0.66, 0.907, 0.902], "angle": 0, "content": "Semantic Emphasis Learning. The process begins by generating \\( f_{t} \\), combining local and global cues from text, speech, emotion embeddings and HuBERT features \\( \\gamma_{h} \\). Then, the sem-gate leverages multi-modal inputs to generate a semantic score, identifying frames that require enhanced semantic emphasis. The sem-gate in SemTalk refines keyframe motion by applying two forms of weighting \\( \\mathcal{W} \\): feature weighting \\( \\mathcal{W}_{f} \\) and loss weighting \\( \\mathcal{W}_{l} \\). First, using \\( f_{t} \\) and \\( \\gamma_{h} \\), SemTalk computes a semantic score \\( \\psi \\) that dynamically scales the feature weighting, filtering semantic features \\( f_{t} \\) back to activate frames with significant relevance and ensuring that the model emphasizes frames aligned with specific communicative intentions. Second, the loss weighting is applied by supervising \\( \\psi \\) with a classification loss \\( \\mathcal{L}_{cls}^{G} \\) based on semantic labels, further enhancing the model's ability to identify key frames. The two weight-"},
{"type": "page_number", "bbox": [0.481, 0.945, 0.519, 0.957], "angle": 0, "content": "13764"}
],
[
{"type": "image", "bbox": [0.102, 0.092, 0.462, 0.219], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.09, 0.224, 0.485, 0.309], "angle": 0, "content": "Figure 4. Concept comparison with LivelySpeaker [53]. (Top) LivelySpeaker generates semantic gestures with CLIP embeddings in SAG and refines rhythm-related gestures separately using diffusion, causing potential jitter. (Bottom) SemTalk integrates text and speech, uses a semantic gate for fine-grained control, and unifies rhythm and semantics for smoother, more coherent motions."},
{"type": "text", "bbox": [0.09, 0.321, 0.483, 0.366], "angle": 0, "content": "ing methods allow SemTalk to selectively enhance semantic gestures while suppressing uninformative motion, leading to more expressive co-speech motion."},
{"type": "text", "bbox": [0.09, 0.366, 0.484, 0.442], "angle": 0, "content": "Once \\(\\psi\\) is established, it modulates the integration of rhythm-aligned base motion \\(f_{b}\\) and sparse semantic motion \\(f_{s}\\). Through alpha-blending, frames with high semantic relevance draw more from \\(f_{s}\\), while others rely on \\(f_{b}\\). The final motion codes \\(q^{s}\\) are computed as:"},
{"type": "equation", "bbox": [0.18, 0.456, 0.483, 0.473], "angle": 0, "content": "\\[\nq^{s} = \\mathrm{MLP}\\left(\\psi f_{s} + (1 - \\psi) f_{b}\\right), \\tag{5}\n\\]"},
{"type": "text", "bbox": [0.09, 0.479, 0.483, 0.569], "angle": 0, "content": "To ensure cohesive propagation of semantic emphasis across body regions, we employ the Coarse2Fine Cross-Attention Module, similar to Sec. 3.3. In this stage, we focus solely on body motion, excluding facial movements, as body gestures play a more critical role in conveying semantic meaning in co-speech interactions."},
{"type": "text", "bbox": [0.09, 0.57, 0.483, 0.645], "angle": 0, "content": "To foster diverse motion generation, SemTalk includes a code classification loss \\(\\mathcal{L}_{cls}\\) and a reconstruction loss \\(\\mathcal{L}_{rec}\\). These losses are specifically focused on frames with high semantic scores, guiding the model to prioritize the generation of sparse, meaningful gestures."},
{"type": "text", "bbox": [0.09, 0.645, 0.483, 0.901], "angle": 0, "content": "Discussion. Recently, LivelySpeaker [53] designed the Semantic-Aware Generator (SAG) and Rhythm-Aware Generator (RAG) for co-speech gesture generation, combining them through beat empowerment. While effective, key differences exist between LivelySpeaker and SemTalk (see Fig. 4). First, SAG generates gestures from text using CLIP embeddings, but bridging words and expressive gestures is challenging, causing jitter. SemTalk incorporates speech features (pitch, tone, emotion) alongside text and GT supervision for adaptive gestures. Second, LivelySpeaker applies global control, missing local semantic details, while SemTalk uses fine-grained, frame-level semantic control for subtle variations. Third, LivelySpeaker fuses SAG and RAG in separate latent spaces, leading to misalignment and inconsistencies. SemTalk jointly models rhythm and semantics in a unified framework, ensuring smoother transitions and coherence. We further compare SAG with our"},
{"type": "text", "bbox": [0.514, 0.093, 0.714, 0.107], "angle": 0, "content": "semantic gate in experiments."},
{"type": "title", "bbox": [0.514, 0.118, 0.829, 0.133], "angle": 0, "content": "3.5. Semantic Score-based Motion Fusion"},
{"type": "text", "bbox": [0.512, 0.141, 0.905, 0.292], "angle": 0, "content": "The Adaptive Fusion stage (Fig. 3 c) in SemTalk seamlessly integrates semantic-aware sparse motion into the rhythmic-related base motion. By strategically enhancing frames based on their semantic importance, it maintains a smooth and natural motion flow across sequences. For each frame \\( i \\), the semantic score \\( \\psi_i \\) computed during the Sparse Motion Generation stage is compared to a threshold \\( \\beta \\). If \\( \\psi_i > \\beta \\), the base motion's latent code \\( q_i^r \\) is replaced with the sparse semantic code \\( q_i^s \\), effectively highlighting expressive gestures where they are most relevant; otherwise, \\( q_i = q_i^r \\)."},
{"type": "text", "bbox": [0.512, 0.293, 0.905, 0.4], "angle": 0, "content": "This selective replacement emphasizes semantically critical gestures while preserving the natural rhythmic base motion. By blending \\( q^{b} \\) and \\( q^{s} \\) based on semantic scores, SemTalk adapts to the expressive needs of the speech context while ensuring coherence. Additionally, the convolution structure of the RVQ-VAE decoder ensures smooth transitions between frames, preserving motion continuity."},
{"type": "title", "bbox": [0.514, 0.414, 0.646, 0.431], "angle": 0, "content": "4. Experiments"},
{"type": "title", "bbox": [0.514, 0.44, 0.704, 0.456], "angle": 0, "content": "4.1. Experimental Setup"},
{"type": "text", "bbox": [0.512, 0.462, 0.907, 0.644], "angle": 0, "content": "Datasets. For training and evaluation, we use two datasets: BEAT2 and SHOW. BEAT2, introduced in EMAGE [31], extends BEAT [30] with 76 hours of data from 30 speakers, standardized into a mesh representation with paired audio, text, and frame-level semantic labels. We follow [31] and use the BEAT2-standard subset with an \\(85\\% / 7.5\\% / 7.5\\%\\) train/val/test split. SHOW [48] includes 26.9 hours of high-quality talk show videos with 3D body meshes at 30fps. Since it lacks frame-level semantic labels, we use the sem-gate from SemTalk, pre-trained on BEAT2, to generate them. Following [48], we select video clips longer than 10 seconds and split the data \\(80\\% / 10\\% / 10\\%\\) for train/val/test."},
{"type": "text", "bbox": [0.512, 0.644, 0.909, 0.795], "angle": 0, "content": "Implementation Details. Our model is trained on a single NVIDIA A100 GPU for 200 epochs with a batch size of 64. We use RVQ-VAE [42], downscaling by 4. The residual quantization has 6 layers, a codebook size of 256 and a dropout rate of 0.2. We use five transformer layers to predict the last five layer codes. In Base Motion Learning, \\(\\tau = 0.1\\); in Sparse Motion Learning, \\(\\beta = 0.5\\) empirically. Training uses the Adam optimizer with a learning rate of 1e-4. Following [31], we start with a 4-frame seed pose, gradually increasing masked frames from 0 to \\(40\\%\\) over 120 epochs."},
{"type": "text", "bbox": [0.512, 0.796, 0.909, 0.903], "angle": 0, "content": "Metrics. We evaluate generated body gestures using FGD [50] to measure distributional alignment with GT, reflecting realism. DIV [26] quantifies gesture variation via the average L1 distance across clips. BC [27] assesses speech-motion synchrony. For facial expressions, we use MSE [47] to quantify positional differences and LVD [48] to measure discrepancies between GT and generated facial vertices."},
{"type": "page_number", "bbox": [0.481, 0.945, 0.519, 0.957], "angle": 0, "content": "13765"}
],
[
{"type": "image", "bbox": [0.111, 0.092, 0.898, 0.511], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.09, 0.514, 0.908, 0.573], "angle": 0, "content": "Figure 5. Comparison on BEAT2 [31] Dataset. SemTalk* refers to the model trained solely on the Base Motion Generation stage, capturing rhythmic alignment but lacking semantic gestures. In contrast, SemTalk successfully emphasizes sparse yet vivid motions. For instance, when saying \"my opinion,\" SemTalk generates a hand-raising gesture followed by an index finger extension for emphasis. Similarly, for \"never tell,\" our model produces a clear, repeated gesture matching the rhythm, reinforcing the intended emphasis."},
{"type": "image", "bbox": [0.134, 0.579, 0.283, 0.674], "angle": 0, "content": null},
{"type": "image", "bbox": [0.284, 0.579, 0.865, 0.675], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.142, 0.684, 0.854, 0.699], "angle": 0, "content": "Figure 6. Comparison on SHOW [48] Dataset. Our method performs better in motion diversity and semantic richness."},
{"type": "title", "bbox": [0.091, 0.706, 0.275, 0.722], "angle": 0, "content": "4.2. Qualitative Results"},
{"type": "text", "bbox": [0.09, 0.735, 0.486, 0.903], "angle": 0, "content": "Qualitative Comparisons. We encourage readers to watch our demo video for a clearer understanding of SemTalk's qualitative performance. Our method achieves superior speech-motion alignment, generating more realistic, diverse, and semantically consistent gestures than the baselines. As shown in Fig. 5, LivelySpeaker, TalkSHOW, EMAGE, and DiffSHEG exhibit jitter: EMAGE mainly in the legs and shoulders, while in TalkSHOW it affects the entire body. LivelySpeaker and DiffSHEG, which focus primarily on the upper body, produce slow and inconsistent motions, especially at speech clip boundaries. DiffSHEG improves"},
{"type": "text", "bbox": [0.512, 0.707, 0.909, 0.89], "angle": 0, "content": "gesture diversity over EMAGE and TalkSHOW, though EMAGE maintains greater naturalness. SemTalk surpasses all baselines in both realism and diversity. Compared to SemTalk*, SemTalk generates more expressive gestures, emphasizing key phrases (e.g., raising hands for “dream job” or pointing for “that is why”). While SemTalk* ensures rhythmic consistency, it lacks semantic expressiveness. By integrating frame-level semantic emphasis, SemTalk aligns motion with both rhythm and semantics, demonstrating the effectiveness of rhythmic consistency learning and semantic emphasis learning. In facial comparisons (Fig. 7), TalkSHOW shows minimal lip movement, while both DiffSHEG and"},
{"type": "page_number", "bbox": [0.481, 0.945, 0.52, 0.957], "angle": 0, "content": "13766"}
],
[
{"type": "image", "bbox": [0.092, 0.09, 0.488, 0.219], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.109, 0.226, 0.465, 0.24], "angle": 0, "content": "Figure 7. Facial Comparison on the BEAT2 [31] Dataset."},
{"type": "image", "bbox": [0.11, 0.247, 0.456, 0.402], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.09, 0.405, 0.483, 0.434], "angle": 0, "content": "Figure 8. Qualitative study on semantic score. Semantic score aligns with keywords, influencing gesture intensity."},
{"type": "text", "bbox": [0.09, 0.447, 0.483, 0.509], "angle": 0, "content": "EMAGE reveal inconsistencies between lip motion and the rhythm of speech. In contrast, SemTalk produces smooth, natural transitions across syllables, resulting in realistic and expressive lips, significantly surpassing the baselines."},
{"type": "text", "bbox": [0.09, 0.509, 0.483, 0.599], "angle": 0, "content": "On the SHOW dataset (Fig. 6), SemTalk shows more agile gestures than all baselines when applied to unseen data. Our method captures natural and contextually rich gestures, particularly in moments of emphasis such as \"I like to do\" and \"relaxing,\" where our model produces lively hand and body movements that align with the speech content."},
{"type": "text", "bbox": [0.09, 0.599, 0.483, 0.884], "angle": 0, "content": "Semantic Score. Fig. 8 shows how semantic emphasis influences gesture intensity, with peaks in the semantic score aligning with keywords like \"comes,\" \"fantastic,\" and \"captured.\" By extracting semantic scores from key frames, we track gesture emphasis trends. Furthermore, as shown in Fig. 9, SemTalk adapts to different emotional tones even when the text remains unchanged. This adaptability prevents overfitting to the text itself, allowing the model to generate gestures that vary according to the emotional delivery of the speech. The learned semantic score provides fine-grained, frame-level control, keeping gestures both rhythmically synchronized and semantically aligned in real time. User Study. We conducted a user study with 10 video samples and 25 participants from diverse backgrounds, evaluating realism, semantic consistency, motion-speech synchrony, and diversity. Participants were required to rank shuffled videos across different methods. As shown in Fig. 10, our approach received dominant preferences across all metrics, especially in semantic consistency and realism."},
{"type": "image", "bbox": [0.522, 0.089, 0.906, 0.224], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.513, 0.231, 0.907, 0.287], "angle": 0, "content": "Figure 9. Same words with different speech from the internet. \"emo\" represents different emotional tones extracted from speech. SemTalk can generate different motions, even when the text script is the same, preventing overfitting to the text itself."},
{"type": "image", "bbox": [0.54, 0.292, 0.886, 0.404], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.597, 0.413, 0.822, 0.427], "angle": 0, "content": "Figure 10. Results of the user study."},
{"type": "title", "bbox": [0.513, 0.437, 0.707, 0.453], "angle": 0, "content": "4.3. Quantitative Results"},
{"type": "text", "bbox": [0.512, 0.46, 0.907, 0.61], "angle": 0, "content": "Comparison with Baselines. As shown in Tab. 1, SemTalk outperforms previous methods on BEAT2, achieving lower FGD, MSE, and LVD, indicating better distribution alignment and reduced motion errors. For fairness, we follow [31] and add a lower-body VQ-VAE to TalkSHOW, DiffSHEG, and SemTalk. Notably, SemTalk significantly reduces FGD, ensuring strong distribution matching. While TalkSHOW and EMAGE achieve competitive diversity (DIV) scores, SemTalk balances high semantic relevance with natural motion flow."},
{"type": "text", "bbox": [0.512, 0.613, 0.906, 0.703], "angle": 0, "content": "On the SHOW dataset, SemTalk excels with the lowest FGD, MSE, and the highest BC, indicating precise beat alignment with the audio and enhanced semantic consistency in generated motions. Although EMAGE exhibits high DIV, our model achieves comparable results while maintaining smooth, realistic motion free from jitter."},
{"type": "text", "bbox": [0.512, 0.705, 0.907, 0.901], "angle": 0, "content": "Sem-gate. Tab. 2 highlights the effectiveness of the sem-gate. Without the sem-gate, the model fails to emphasize key moments. Randomized semantic scores led to poor performance by preventing meaningful frame distinction. Introducing a learned sem-gate, even without weighting (w/o \\(\\mathcal{W}\\)), significantly improves semantic alignment and classification accuracy. Refinement is further enhanced through weighting strategies: feature weighting \\(\\mathcal{W}_f\\) enhances motion emphasis, while loss weighting \\(\\mathcal{W}_l\\) improves FGD and overall accuracy. These results suggest that the weighting methods enhance the accuracy of the semantic score and help the model prioritize important frames. The best results come from applying the two weighting methods together, where frames with stronger se"},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "page_number",
|
| 1216 |
+
"bbox": [
|
| 1217 |
+
0.481,
|
| 1218 |
+
0.945,
|
| 1219 |
+
0.519,
|
| 1220 |
+
0.957
|
| 1221 |
+
],
|
| 1222 |
+
"angle": 0,
|
| 1223 |
+
"content": "13767"
|
| 1224 |
+
}
|
| 1225 |
+
],
|
| 1226 |
+
[
{
"type": "table",
"bbox": [0.1, 0.089, 0.477, 0.344],
"angle": 0,
"content": "<table><tr><td>Dataset</td><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td rowspan=\"10\">BEAT2</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>7.787</td><td>7.593</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>8.026</td><td>7.766</td></tr><tr><td>CaMN [30]</td><td>6.644</td><td>6.769</td><td>10.86</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>8.811</td><td>7.241</td><td>11.49</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [53]</td><td>11.80</td><td>6.659</td><td>11.28</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>9.040</td><td>7.716</td><td>8.213</td><td>8.614</td><td>8.043</td></tr><tr><td>TalkSHOW [48]</td><td>6.209</td><td>6.947</td><td>13.47</td><td>7.791</td><td>7.771</td></tr><tr><td>EMAGE [31]</td><td>5.512</td><td>7.724</td><td>13.06</td><td>7.680</td><td>7.556</td></tr><tr><td>DiffSHEG [9]</td><td>8.986</td><td>7.142</td><td>11.91</td><td>7.665</td><td>8.673</td></tr><tr><td>SemTalk (Ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr><tr><td rowspan=\"10\">SHOW</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>138.1</td><td>43.69</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>140.7</td><td>45.84</td></tr><tr><td>CaMN [30]</td><td>22.12</td><td>7.712</td><td>10.37</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>24.84</td><td>8.027</td><td>10.23</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [53]</td><td>32.17</td><td>7.844</td><td>10.14</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>27.22</td><td>8.209</td><td>8.541</td><td>145.6</td><td>47.35</td></tr><tr><td>TalkSHOW [48]</td><td>24.43</td><td>8.249</td><td>10.98</td><td>139.6</td><td>45.17</td></tr><tr><td>EMAGE [31]</td><td>22.12</td><td>8.280</td><td>12.46</td><td>136.1</td><td>42.44</td></tr><tr><td>DiffSHEG [9]</td><td>24.87</td><td>8.061</td><td>10.79</td><td>139.0</td><td>45.77</td></tr><tr><td>SemTalk (Ours)</td><td>20.18</td><td>8.304</td><td>11.36</td><td>134.1</td><td>39.15</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.09, 0.347, 0.483, 0.418],
"angle": 0,
"content": "Table 1. Quantitative comparison with SOTA. SemTalk consistently outperforms baselines across both the BEAT2 and SHOW datasets. Lower values are better for FGD, MSE, and LVD. Higher values are better for BC and DIV. We report \\(\\mathrm{FGD} \\times 10^{-1}\\), \\(\\mathrm{BC} \\times 10^{-1}\\), \\(\\mathrm{MSE} \\times 10^{-8}\\) and \\(\\mathrm{LVD} \\times 10^{-5}\\) for simplicity."
},
{
"type": "text",
"bbox": [0.09, 0.443, 0.484, 0.578],
"angle": 0,
"content": "mantic signals receive higher emphasis. We also compare sem-gate with LivelySpeaker's SAG [53]. We find that replacing the Sparse Motion Generation stage with SAG and substituting motion using GT semantic labels leads to poor performance. SAG relies only on text-motion alignment and ignores emotional tone, making it more prone to overfitting the text. In contrast, our sem-gate applies GT supervision with two weighting methods, achieving more accurate and stable semantic motion."
},
{
"type": "text",
"bbox": [0.09, 0.581, 0.484, 0.643],
"angle": 0,
"content": "Ablation Study on Components. We assess the impact of each component of our model on BEAT2 and present the results in Tab. 3, which reveals several key insights (see the supplementary material for more ablation results):"
},
{
"type": "text",
"bbox": [0.091, 0.645, 0.483, 0.703],
"angle": 0,
"content": "- Rhythmic Consistency Learning (RC) not only boosts performance on key metrics such as FGD, LVD, and BC but also reduces MSE, contributing to smoother and more realistic base motion."
},
{
"type": "text",
"bbox": [0.091, 0.705, 0.483, 0.825],
"angle": 0,
"content": "- Semantic Emphasis Learning (SE) proves essential for selectively enhancing semantic-rich gestures. As shown in the rows with SE enabled, its inclusion improves both diversity (DIV) and FGD, enabling the model to emphasize semantically relevant motions. SE demonstrates its effectiveness in focusing on frame-level semantic information, which contributes to the generation of lifelike gestures with enriched contextual meaning."
},
{
"type": "text",
"bbox": [0.091, 0.826, 0.483, 0.902],
"angle": 0,
"content": "- Coarse2Fine Cross-Attention Module (C2F) effectively refines motion details, improving BC, FGD, and DIV. When combined with RVQ and RC, C2F achieves the best MSE and LVD, highlighting its role in enhancing motion realism and diversity hierarchically."
},
{
"type": "list",
"bbox": [0.091, 0.645, 0.483, 0.902],
"angle": 0,
"content": null
},
{
"type": "table",
"bbox": [0.527, 0.089, 0.897, 0.201],
"angle": 0,
"content": "<table><tr><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>Acc(%)↑</td></tr><tr><td>w/o Sem-gate</td><td>4.893</td><td>7.702</td><td>12.42</td><td>-</td></tr><tr><td>SAG (LivelySpeaker [53])</td><td>4.618</td><td>7.682</td><td>12.45</td><td>-</td></tr><tr><td>Sem-gate (Random ψ)</td><td>4.634</td><td>7.700</td><td>12.44</td><td>50.07</td></tr><tr><td>Sem-gate (w/o W)</td><td>4.495</td><td>7.633</td><td>12.26</td><td>72.32</td></tr><tr><td>Sem-gate (w/ Wf)</td><td>4.408</td><td>7.679</td><td>12.28</td><td>78.52</td></tr><tr><td>Sem-gate (w/ Wl)</td><td>4.366</td><td>7.772</td><td>11.94</td><td>77.83</td></tr><tr><td>Sem-gate (ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>82.76</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.513, 0.206, 0.905, 0.345],
"angle": 0,
"content": "Table 2. Ablation study on Sem-gate. \"Acc\" denotes semantic classification accuracy on BEAT2. \"w/o Sem-gate\" means directly inputting \\( f_{t} \\) and \\( \\gamma_{h} \\) without Sem-gate. \"SAG (LivelySpeaker [53])\" replaces the Sparse Motion Generation stage with LivelySpeaker's SAG method. \"Random \\( \\psi \\)\" assigns frame-level scores randomly. \"w/o \\( \\mathcal{W} \\)\" applies the semantic gate but excludes frame-level weighting. \"w/ \\( \\mathcal{W}_{f} \\)\" applies feature weighting, and \"w/ \\( \\mathcal{W}_{l} \\)\" applies loss weighting (as described in Sec. 3.4). Sem-gate (ours) integrates both the semantic gate and frame-level weighting to enhance emphasis."
},
{
"type": "table",
"bbox": [0.527, 0.347, 0.897, 0.462],
"angle": 0,
"content": "<table><tr><td>RC</td><td>SE</td><td>C2F</td><td>RVQ</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td><td>6.234</td><td>7.628</td><td>11.44</td><td>8.239</td><td>7.831</td></tr><tr><td>-</td><td>-</td><td>-</td><td>√</td><td>5.484</td><td>7.641</td><td>11.84</td><td>13.882</td><td>15.42</td></tr><tr><td>√</td><td>-</td><td>-</td><td>√</td><td>4.867</td><td>7.701</td><td>12.38</td><td>6.201</td><td>6.928</td></tr><tr><td>√</td><td>√</td><td>-</td><td>√</td><td>4.526</td><td>7.751</td><td>12.83</td><td>6.215</td><td>6.997</td></tr><tr><td>-</td><td>-</td><td>√</td><td>√</td><td>4.897</td><td>7.702</td><td>12.42</td><td>13.416</td><td>15.72</td></tr><tr><td>√</td><td>√</td><td>-</td><td>-</td><td>5.831</td><td>7.758</td><td>11.97</td><td>6.587</td><td>7.106</td></tr><tr><td>√</td><td>-</td><td>√</td><td>√</td><td>4.397</td><td>7.776</td><td>12.49</td><td>6.100</td><td>6.898</td></tr><tr><td>√</td><td>√</td><td>√</td><td>√</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.513, 0.466, 0.905, 0.522],
"angle": 0,
"content": "Table 3. Ablation study on each key component. \"RC\" denotes rhythmic consistency learning, \"SE\" denotes semantic emphasis learning, \"C2F\" denotes the Coarse2Fine Cross-Attention Module, and \"RVQ\" denotes the RVQ-VAE."
},
{
"type": "text",
"bbox": [0.514, 0.543, 0.907, 0.605],
"angle": 0,
"content": "- RVQ-VAE (RVQ) enhances the diversity and realism of generated motion. Though it slightly increases MSE and LVD, it notably improves FGD, leading to more natural motion generation compared to a standard VQ-VAE."
},
{
"type": "title",
"bbox": [0.514, 0.618, 0.634, 0.635],
"angle": 0,
"content": "5. Conclusion"
},
{
"type": "text",
"bbox": [0.511, 0.644, 0.907, 0.901],
"angle": 0,
"content": "We propose SemTalk, a novel approach for holistic co-speech motion generation with frame-level semantic emphasis. Our method addresses the integration of sparse yet expressive motion into foundational rhythm-related motion, which has received less attention in previous works. We develop a framework that separately learns rhythm-related base motion, through a coarse2fine cross-attention module and rhythmic consistency learning, and semantic-aware motion, through Semantic Emphasis Learning. These components are then adaptively fused based on a learned semantic score. Our approach demonstrates state-of-the-art performance on two public datasets, both quantitatively and qualitatively. The qualitative results and a user study show that our method can generate high-quality co-speech motion sequences that enhance frame-level semantics over robust base motions, reflecting the full spectrum of human expressiveness."
},
{
"type": "page_number",
"bbox": [0.481, 0.945, 0.52, 0.957],
"angle": 0,
"content": "13768"
}
],
[
{
"type": "text",
"bbox": [0.09, 0.092, 0.486, 0.234],
"angle": 0,
"content": "Acknowledgments. This work was supported by the Alibaba Research Intern Program, the Young Scientists Fund of the National Natural Science Foundation of China (No. 624B2110), the National Key Research and Development Program of China (No. 2024YFC3015600), and the Fundamental Research Funds for the Central Universities (No. 2042023KF0180 and No. 2042025KF0053). The numerical calculation was supported by the supercomputing system at the Supercomputing Center of Wuhan University and by Tongyi Lab, Alibaba Group."
},
{
"type": "title",
"bbox": [0.093, 0.258, 0.188, 0.274],
"angle": 0,
"content": "References"
},
{
"type": "ref_text",
"bbox": [0.101, 0.284, 0.482, 0.352],
"angle": 0,
"content": "[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20566–20576, 2022. 2"
},
{
"type": "ref_text",
"bbox": [0.101, 0.355, 0.482, 0.408],
"angle": 0,
"content": "[2] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! Audio-driven motion synthesis with diffusion models. ACM Transactions on Graphics (TOG), 42(4):1–20, 2023. 2"
},
{
"type": "ref_text",
"bbox": [0.102, 0.411, 0.482, 0.451],
"angle": 0,
"content": "[3] Tenglong Ao, Zeyi Zhang, and Libin Liu. GestureDiffuCLIP: Gesture diffusion model with CLIP latents. ACM Transactions on Graphics (TOG), 42(4):1–18, 2023. 3"
},
{
"type": "ref_text",
"bbox": [0.102, 0.454, 0.482, 0.535],
"angle": 0,
"content": "[4] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. AudioLM: A language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523–2533, 2023. 3"
},
{
"type": "ref_text",
"bbox": [0.102, 0.538, 0.482, 0.633],
"angle": 0,
"content": "[5] Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 413–420, 1994. 2"
},
{
"type": "ref_text",
"bbox": [0.102, 0.635, 0.482, 0.689],
"angle": 0,
"content": "[6] Justine Cassell, David McNeill, and Karl-Erik McCullough. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics & Cognition, 7(1):1–34, 1999. 1"
},
{
"type": "ref_text",
"bbox": [0.102, 0.692, 0.482, 0.746],
"angle": 0,
"content": "[7] Justine Cassell, Hannes Högni Vilhjalmsson, and Timothy Bickmore. BEAT: The behavior expression animation toolkit. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 477–486, 2001. 2"
},
{
"type": "ref_text",
"bbox": [0.102, 0.748, 0.482, 0.816],
"angle": 0,
"content": "[8] Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. Enabling synergistic full-body control in prompt-based co-speech motion generation. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6774–6783, 2024. 2"
},
{
"type": "ref_text",
"bbox": [0.102, 0.818, 0.482, 0.899],
"angle": 0,
"content": "[9] Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. DiffSHEG: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7352–7361, 2024. 2, 3, 8"
},
{
"type": "list",
"bbox": [0.101, 0.284, 0.482, 0.899],
"angle": 0,
"content": null
},
{
"type": "ref_text",
"bbox": [0.517, 0.093, 0.905, 0.174],
"angle": 0,
"content": "[10] Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al. Emotional speech-driven 3d body animation via disentangled latent diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1942–1953, 2024. 2, 4"
},
{
"type": "ref_text",
"bbox": [0.518, 0.178, 0.905, 0.233],
"angle": 0,
"content": "[11] Selina Chu, Shrikanth Narayanan, and C-C Jay Kuo. Environmental sound recognition with time-frequency audio features. IEEE Transactions on Audio, Speech, and Language Processing, 17(6):1142–1158, 2009. 3"
},
{
"type": "ref_text",
"bbox": [0.518, 0.235, 0.905, 0.302],
"angle": 0,
"content": "[12] Radek Daneček, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael Black, and Timo Bolkart. Emotional speech-driven animation with content-emotion disentanglement. In SIGGRAPH Asia 2023 Conference Papers, pages 1–13, 2023. 2"
},
{
"type": "ref_text",
"bbox": [0.518, 0.306, 0.905, 0.374],
"angle": 0,
"content": "[13] Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, and Taku Komura. FaceFormer: Speech-driven 3d facial animation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18770–18780, 2022. 8"
},
{
"type": "ref_text",
"bbox": [0.518, 0.377, 0.905, 0.416],
"angle": 0,
"content": "[14] Susan Goldin-Meadow. The role of gesture in communication and thinking. Trends in Cognitive Sciences, 3(11):419–429, 1999. 1"
},
{
"type": "ref_text",
"bbox": [0.518, 0.42, 0.905, 0.488],
"angle": 0,
"content": "[15] Kehong Gong, Dongze Lian, Heng Chang, Chuan Guo, Zihang Jiang, Xinxin Zuo, Michael Bi Mi, and Xinchao Wang. TM2D: Bimodality driven 3d dance generation via music-text integration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9942–9952, 2023. 2"
},
{
"type": "ref_text",
"bbox": [0.518, 0.491, 0.905, 0.559],
"angle": 0,
"content": "[16] Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. MoMask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1900–1910, 2024. 3"
},
{
"type": "ref_text",
"bbox": [0.518, 0.562, 0.905, 0.643],
"angle": 0,
"content": "[17] Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, and Christian Theobalt. Learning speech-driven 3d conversational gestures from video. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, pages 101–108, 2021. 1, 2, 8"
},
{
"type": "ref_text",
"bbox": [0.518, 0.646, 0.905, 0.727],
"angle": 0,
"content": "[18] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460, 2021. 3"
},
{
"type": "ref_text",
"bbox": [0.518, 0.731, 0.905, 0.785],
"angle": 0,
"content": "[19] Chien-Ming Huang and Bilge Mutlu. Robot behavior toolkit: Generating effective social behaviors for robots. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pages 25–32, 2012. 2"
},
{
"type": "ref_text",
"bbox": [0.518, 0.788, 0.905, 0.814],
"angle": 0,
"content": "[20] Adam Kendon. Gesture: Visible action as utterance. Cambridge University Press, 2004. 1"
},
{
"type": "ref_text",
"bbox": [0.518, 0.817, 0.905, 0.856],
"angle": 0,
"content": "[21] Michael Kipp. Gesture generation by imitation: From human behavior to computer character animation. Universal-Publishers, 2005. 2"
},
{
"type": "ref_text",
"bbox": [0.518, 0.86, 0.905, 0.9],
"angle": 0,
"content": "[22] Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R Thorisson, and Hannes Vilhjalmsson. Towards a common"
},
{
"type": "list",
"bbox": [0.517, 0.093, 0.905, 0.9],
"angle": 0,
"content": null
},
{
"type": "page_number",
"bbox": [0.481, 0.945, 0.52, 0.957],
"angle": 0,
"content": "13769"
}
],
[
{
"type": "ref_text",
"bbox": [0.125, 0.092, 0.484, 0.161],
"angle": 0,
"content": "framework for multimodal generation: The behavior markup language. In Intelligent Virtual Agents: 6th International Conference, IVA 2006, Marina Del Rey, CA, USA, August 21–23, 2006. Proceedings 6, pages 205–217. Springer, 2006. 2"
},
{
"type": "ref_text",
"bbox": [0.093, 0.164, 0.484, 0.247],
"angle": 0,
"content": "[23] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 International Conference on Multimodal Interaction, pages 242–250, 2020. 2"
},
{
"type": "ref_text",
"bbox": [0.093, 0.249, 0.483, 0.331],
"angle": 0,
"content": "[24] Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, and Gustav Eje Henter. A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020. In Proceedings of the 26th International Conference on Intelligent User Interfaces, pages 11–21, 2021. 1"
},
{
"type": "ref_text",
"bbox": [0.093, 0.334, 0.483, 0.375],
"angle": 0,
"content": "[25] Alex Lascarides and Matthew Stone. A formal semantic analysis of gesture. Journal of Semantics, 26(4):393–449, 2009. 1"
},
{
"type": "ref_text",
"bbox": [0.093, 0.377, 0.483, 0.46],
"angle": 0,
"content": "[26] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and Linchao Bao. Audio2Gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11293–11302, 2021. 5"
},
{
"type": "ref_text",
"bbox": [0.093, 0.462, 0.483, 0.531],
"angle": 0,
"content": "[27] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. AI choreographer: Music conditioned 3d dance generation with AIST++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13401–13412, 2021. 5"
},
{
"type": "ref_text",
"bbox": [0.093, 0.533, 0.483, 0.602],
"angle": 0,
"content": "[28] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. SEEG: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473–10482, 2022. 2"
},
{
"type": "ref_text",
"bbox": [0.093, 0.604, 0.483, 0.687],
"angle": 0,
"content": "[29] Haiyang Liu, Naoya Iwamoto, Zihao Zhu, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. DisCo: Disentangled implicit content and rhythm learning for diverse co-speech gestures synthesis. In Proceedings of the 30th ACM International Conference on Multimedia, pages 3764–3773, 2022. 2"
},
{
"type": "ref_text",
"bbox": [0.093, 0.689, 0.483, 0.773],
"angle": 0,
"content": "[30] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. BEAT: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European Conference on Computer Vision, pages 612–630. Springer, 2022. 1, 5, 8"
},
{
"type": "ref_text",
"bbox": [0.093, 0.775, 0.483, 0.871],
"angle": 0,
"content": "[31] Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. EMAGE: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1144–1154, 2024. 1, 2, 3, 5, 6, 7, 8"
},
{
"type": "ref_text",
"bbox": [0.093, 0.873, 0.483, 0.901],
"angle": 0,
"content": "[32] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, and Bolei"
},
{
"type": "list",
"bbox": [0.093, 0.092, 0.484, 0.901],
"angle": 0,
"content": null
},
{
"type": "ref_text",
"bbox": [0.545, 0.093, 0.905, 0.149],
"angle": 0,
"content": "Zhou. Learning hierarchical cross-modal association for co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10462–10472, 2022. 2"
},
{
"type": "ref_text",
"bbox": [0.517, 0.151, 0.907, 0.219],
"angle": 0,
"content": "[33] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1566–1576, 2024. 2"
},
{
"type": "ref_text",
"bbox": [0.517, 0.221, 0.905, 0.275],
"angle": 0,
"content": "[34] Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, and Heung-Yeung Shum. HumanTOMATO: Text-aligned whole-body motion generation. arXiv preprint arXiv:2310.12978, 2023. 2"
},
{
"type": "ref_text",
"bbox": [0.517, 0.278, 0.905, 0.332],
"angle": 0,
"content": "[35] Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. emotion2vec: Self-supervised pre-training for speech emotion representation. arXiv preprint arXiv:2312.15185, 2023. 4"
},
{
"type": "ref_text",
"bbox": [0.517, 0.334, 0.905, 0.404],
"angle": 0,
"content": "[36] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro. Virtual character performance from speech. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 25–35, 2013. 2"
},
{
"type": "ref_text",
"bbox": [0.517, 0.406, 0.905, 0.488],
"angle": 0,
"content": "[37] Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, and Alexander Richard. From audio to photoreal embodiment: Synthesizing humans in conversations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1001–1010, 2024. 2"
},
{
"type": "ref_text",
"bbox": [0.517, 0.49, 0.905, 0.558],
"angle": 0,
"content": "[38] Aslı Özyürek, Roel M Willems, Sotaro Kita, and Peter Hagoort. On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4):605–616, 2007. 1"
},
{
"type": "ref_text",
"bbox": [0.517, 0.561, 0.905, 0.643],
"angle": 0,
"content": "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021. 2, 4"
},
{
"type": "ref_text",
"bbox": [0.517, 0.645, 0.905, 0.715],
"angle": 0,
"content": "[40] Manuel Rebol, Christian Güttl, and Krzysztof Pietroszek. Real-time gesture animation generation from speech for virtual human interaction. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–4, 2021. 2"
},
{
"type": "ref_text",
|
| 1937 |
+
"bbox": [
|
| 1938 |
+
0.517,
|
| 1939 |
+
0.717,
|
| 1940 |
+
0.905,
|
| 1941 |
+
0.771
|
| 1942 |
+
],
|
| 1943 |
+
"angle": 0,
|
| 1944 |
+
"content": "[41] Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, and Jiwen Lu. Difftalk: Crafting diffusion models for generalized talking head synthesis. arXiv preprint arXiv:2301.03786, 2(4):5, 2023. 2"
|
| 1945 |
+
},
|
| 1946 |
+
{
|
| 1947 |
+
"type": "ref_text",
|
| 1948 |
+
"bbox": [
|
| 1949 |
+
0.517,
|
| 1950 |
+
0.773,
|
| 1951 |
+
0.905,
|
| 1952 |
+
0.815
|
| 1953 |
+
],
|
| 1954 |
+
"angle": 0,
|
| 1955 |
+
"content": "[42] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2, 5"
|
| 1956 |
+
},
|
| 1957 |
+
{
|
| 1958 |
+
"type": "ref_text",
|
| 1959 |
+
"bbox": [
|
| 1960 |
+
0.517,
|
| 1961 |
+
0.817,
|
| 1962 |
+
0.905,
|
| 1963 |
+
0.872
|
| 1964 |
+
],
|
| 1965 |
+
"angle": 0,
|
| 1966 |
+
"content": "[43] Hanwei Wu and Markus Flierl. Learning product codebooks using vector-quantized autoencoders for image retrieval. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1-5. IEEE, 2019. 2"
|
| 1967 |
+
},
|
| 1968 |
+
{
|
| 1969 |
+
"type": "ref_text",
|
| 1970 |
+
"bbox": [
|
| 1971 |
+
0.517,
|
| 1972 |
+
0.874,
|
| 1973 |
+
0.905,
|
| 1974 |
+
0.901
|
| 1975 |
+
],
|
| 1976 |
+
"angle": 0,
|
| 1977 |
+
"content": "[44] Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, and Tien-Tsin Wong. Codetalker: Speech-driven"
|
| 1978 |
+
},
|
| 1979 |
+
{
|
| 1980 |
+
"type": "list",
|
| 1981 |
+
"bbox": [
|
| 1982 |
+
0.517,
|
| 1983 |
+
0.093,
|
| 1984 |
+
0.907,
|
| 1985 |
+
0.901
|
| 1986 |
+
],
|
| 1987 |
+
"angle": 0,
|
| 1988 |
+
"content": null
|
| 1989 |
+
},
|
| 1990 |
+
{
|
| 1991 |
+
"type": "page_number",
|
| 1992 |
+
"bbox": [
|
| 1993 |
+
0.481,
|
| 1994 |
+
0.945,
|
| 1995 |
+
0.52,
|
| 1996 |
+
0.957
|
| 1997 |
+
],
|
| 1998 |
+
"angle": 0,
|
| 1999 |
+
"content": "13770"
|
| 2000 |
+
}
|
| 2001 |
+
],
|
| 2002 |
+
[
{
    "type": "ref_text",
    "bbox": [0.125, 0.092, 0.484, 0.134],
    "angle": 0,
    "content": "3d facial animation with discrete motion prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12780-12790, 2023. 8"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.136, 0.484, 0.205],
    "angle": 0,
    "content": "[45] Zunnan Xu, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. Chain of generation: Multi-modal gesture synthesis via cascaded conditional control. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6387-6395, 2024. 3"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.207, 0.483, 0.289],
    "angle": 0,
    "content": "[46] Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, et al. Unified gesture: A unified gesture synthesis model for multiple skeletons. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1033-1044, 2023. 2"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.291, 0.483, 0.359],
    "angle": 0,
    "content": "[47] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. 2, 5, 8"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.361, 0.483, 0.43],
    "angle": 0,
    "content": "[48] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 469-480, 2023. 1, 2, 5, 6, 8"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.432, 0.483, 0.513],
    "angle": 0,
    "content": "[49] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jae-hong Kim, and Geehyuk Lee. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In 2019 International Conference on Robotics and Automation (ICRA), pages 4303-4309. IEEE, 2019. 2"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.516, 0.483, 0.584],
    "angle": 0,
    "content": "[50] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39 (6):1-16, 2020. 5"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.586, 0.483, 0.653],
    "angle": 0,
    "content": "[51] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495-507, 2021. 3"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.656, 0.483, 0.712],
    "angle": 0,
    "content": "[52] Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Zerong Zheng, Yebin Liu, and Kun Li. Speechact: Towards generating whole-body motion from speech. IEEE Transactions on Visualization and Computer Graphics, 2025. 2"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.714, 0.483, 0.782],
    "angle": 0,
    "content": "[53] Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao. Livelyspeaker: Towards semantic-aware co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20807-20817, 2023. 2, 5, 8"
},
{
    "type": "ref_text",
    "bbox": [0.093, 0.784, 0.483, 0.853],
    "angle": 0,
    "content": "[54] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10544-10553, 2023. 2"
},
{
    "type": "list",
    "bbox": [0.093, 0.092, 0.484, 0.853],
    "angle": 0,
    "content": null
},
{
    "type": "page_number",
    "bbox": [0.481, 0.946, 0.518, 0.956],
    "angle": 0,
    "content": "13771"
}
]
]
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/d0c52370-d24a-4c84-9ea3-ee5cd6ca6239_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2224ed3409a318d3f2294388518ba4046521a18cdb8b83410a4db03be8a1b811
size 14239274
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/full.md
ADDED
@@ -0,0 +1,283 @@
# SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis



Figure 1. On the left, we analyze semantic labels from the BEAT2 dataset [31] and visualize frame-level motion, revealing that semantically relevant motions are rare and sparse, aligning with real-life observations. On the right, this observation drives the design of SemTalk, which establishes a rhythm-aligned base motion and dynamically emphasizes sparse semantic gestures at the frame level. In this example, SemTalk amplifies expressiveness on words like "watching" and "just," enhancing gesture and torso movements. The semantic scores below are automatically generated by SemTalk to modulate semantic emphasis over time.



# Abstract
Good co-speech motion generation cannot be achieved without a careful integration of common rhythmic motion and rare yet essential semantic motion. In this work, we propose SemTalk for holistic co-speech motion generation with frame-level semantic emphasis. Our key insight is to learn base motions and sparse motions separately, and then fuse them adaptively. In particular, a coarse2fine cross-attention module and rhythmic consistency learning are explored to establish rhythm-related base motion, ensuring a coherent foundation that synchronizes gestures with the speech rhythm. Subsequently, semantic emphasis learning is designed to generate semantic-aware sparse motion, focusing on frame-level semantic cues. Finally, to integrate sparse motion into the base motion and generate semantic-emphasized co-speech gestures, we further leverage a learned semantic score for adaptive synthesis. Qualitative and quantitative comparisons on two public datasets demonstrate that our method outperforms the state of the art, delivering high-quality co-speech motion with enhanced semantic richness over a stable base motion.

# 1. Introduction
Nonverbal communication, including body language, hand gestures, and facial expressions, is integral to human interactions. It enriches conversations with contextual cues and enhances understanding among participants [6, 14, 20, 24]. This aspect is particularly significant in holistic co-speech motion generation, where the challenge lies in synthesizing gestures that align with speech rhythm while also capturing the infrequent yet critical semantic gestures [25, 38].

Most existing methods [17, 30, 48] rely heavily on rhythm-related audio features as conditions for gesture generation. While these rhythm-based features successfully align gestures with the timing of speech, they often overshadow the sparse yet expressive semantic motion (see Fig. 1). As a result, the generated motions may lack the contextual depth and nuanced expressiveness necessary for natural interaction. Some methods try to address this by incorporating semantic information such as emotion, style, and content [10, 12, 23, 32]. However, the rhythm features tend to dominate, making it difficult for the models to capture sparse, semantically relevant gestures at the frame level. These rare but impactful gestures are often diluted or overlooked, highlighting the challenge of balancing rhythmic alignment with semantic expressiveness in co-speech motion generation.
In real-world human conversations, we observe that while most speech-related gestures are indeed rhythm-related, only a limited number of frames involve semantically emphasized gestures. This insight suggests that co-speech motions can be decomposed into two distinct components: (i) Rhythm-related base motion: continuous, coherent motion aligned with the speech rhythm, reflecting the natural timing of speaking. (ii) Semantic-aware sparse motion: gestures that occur infrequently but are essential for conveying specific meanings or emphasizing key points within the conversation.

Inspired by this observation, we propose a new framework, SemTalk. SemTalk models the base motion and the sparse motion separately and then fuses them adaptively to generate high-fidelity co-speech motion. Specifically, we first focus on generating rhythm-related base motion by introducing a coarse2fine cross-attention module and rhythmic consistency learning. The hierarchical coarse2fine cross-attention module progressively refines the base motion cues in a coarse-to-fine manner, starting from the face and moving through the hands, upper body, and lower body. This approach ensures consistent rhythmic transmission across all body parts, enhancing the coherence of the base motion. Moreover, we propose a local-global rhythmic consistency learning approach, which enforces alignment at both the frame and sequence levels. Locally, a frame-level consistency loss ensures that each frame is precisely synchronized with its corresponding speech features, guaranteeing accurate temporal alignment. Globally, a sequence-level consistency loss sustains a coherent rhythmic flow across the entire motion sequence, preserving consistency throughout the generated gestures.
Furthermore, we introduce a semantic emphasis learning approach that generates semantic-aware sparse motion. This approach utilizes frame-level semantic cues from textual information, high-level speech features, and emotion to identify frames that require emphasis, through a learned semantic score produced by a gating strategy, i.e., the sem-gate. The sem-gate is designed to dynamically activate semantic motions at key frames through two weighting methods, applied to the motion condition and the loss respectively, together with semantic label guidance, allowing the model to produce motion with deeper semantic meaning and contextual relevance.

Finally, the base motion and sparse motion are integrated through semantic score-based motion fusion, which adaptively amplifies expressiveness by incorporating semantic-aware key frames into the rhythm-related base motion. Our contributions are summarized below:
- We propose SemTalk, a novel framework for holistic co-speech motion generation that separately models rhythm-related base motion and semantic-aware sparse motion, adaptively integrating them via a learned semantic gate.
- We propose a hierarchical coarse2fine cross-attention module to refine base motion and local-global rhythmic consistency learning to integrate latent face and hand features with rhythm-related priors, ensuring coherence and rhythmic consistency. We then propose semantic emphasis learning to generate semantic gestures at certain frames, enhancing semantic-aware sparse motion.
- Experimental results show that our model surpasses state-of-the-art methods qualitatively and quantitatively, achieving higher motion quality and richer semantics.

# 2. Related Work

Co-speech Gesture Generation. Co-speech gesture generation aims to produce gestures aligned with speech. Early rule-based methods [7, 19, 21, 22, 41] lacked variability, while deterministic models [5, 7, 29, 36, 46, 49] mapped speech directly to gestures. Probabilistic models, including GANs [1, 17, 40] and diffusion models [2, 10, 47, 54], introduced variability. Some methods incorporated semantic cues, such as HA2G [32] and SEEG [28], which used hierarchical networks and alignment techniques. SynTalk [8] employs prompt-based control but treats inputs as signal strengths rather than fully interpreting semantics. LivelySpeaker [53] combines rhythmic features and semantic cues using CLIP [39] but struggles to integrate gestures with rhythm and to capture semantics consistently; moreover, it provides only global control, limiting fine-grained refinement. DisCo [29] disentangles content and rhythm but lacks explicit modeling of sparse semantic gestures. SemTalk addresses this by separately modeling rhythm-related base motion and semantic-aware sparse motion, integrating them adaptively through a learned semantic score.
Holistic Co-speech Motion Generation. Generating synchronized, expressive full-body motion from speech remains challenging, especially in coordinating the face, hands, and torso [9, 31, 34, 37, 48, 52]. Early methods introduced generative models to improve synchronization, but issues persisted. TalkSHOW [48] improved results with VQ-VAE [42] cross-conditioning but handled facial expressions separately, causing fragmented outputs. DiffSHEG [9] and EMAGE [31] used separate encoders for expressions and gestures, but their unidirectional flow limited coherence. ProbTalk [33] leverages PQ-VAE [43] for improved body-facial synchronization but mainly relies on rhythmic cues, risking the loss of nuanced semantic gestures. Inspired by TM2D [15], which decomposes dance motion into music-related components, we separately model co-speech motion into rhythm-related and semantic-aware motion.

# 3. Method
# 3.1. Preliminary on RVQ-VAE

Following [4, 16, 51], our approach uses a residual vector-quantized autoencoder (RVQ-VAE) to progressively capture complex body movements in a few quantization layers. To retain unique motion characteristics across body regions, we segment the body into four parts—face, upper body, hands, and lower body—each with a dedicated RVQ-VAE, following [3, 31]. This segmentation preserves each part's dynamics and prevents feature entanglement.
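The residual-quantization idea above can be sketched in a few lines: each quantization layer picks the nearest codeword for whatever residual the previous layers left unexplained, and the reconstruction is the sum of the selected codewords. This is a minimal numpy illustration of generic residual VQ, not the paper's actual model; the function name, shapes, and two-layer toy codebooks are assumptions for the example.

```python
import numpy as np

def residual_vq(x, codebooks):
    """Residual VQ sketch: each layer quantizes the residual left by the
    previous layers; reconstruction is the sum of selected codewords."""
    residual = x.astype(float)
    recon = np.zeros_like(residual)
    codes = []
    for cb in codebooks:  # cb: (K, D) codebook for one quantization layer
        # squared distance from every frame (row) to every codeword
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)          # nearest codeword per frame
        q = cb[idx]
        codes.append(idx)
        recon += q
        residual = residual - q    # next layer sees only what is unexplained
    return np.stack(codes), recon

# toy example: 2 residual layers, 3 codewords each, 4 "frames" of 2-D features
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))
codebooks = [rng.normal(size=(3, 2)), 0.1 * rng.normal(size=(3, 2))]
codes, recon = residual_vq(x, codebooks)
```

Later layers typically hold smaller-magnitude codewords (here scaled by 0.1), mirroring how RVQ refines coarse codes with progressively finer corrections.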
# 3.2. Overview

As shown in Figure 2, our SemTalk pipeline includes two main components: the Base Motion Blocks $f_{r}(\cdot)$ and the Sparse Motion Blocks $f_{s}(\cdot)$. Given rhythmic features $\gamma_{b}, \gamma_{h}$, a seed pose $\tilde{m}$, and a speaker ID $id$, the Base Motion Blocks generate rhythm-aligned codes $q^{b}$, forming the rhythmic foundation of the base motion:

$$
f_{r}: \left(\gamma_{b}, \gamma_{h}, \tilde{m}, id; \theta_{f_{r}}\right) \rightarrow q^{b}, \tag{1}
$$

where $\theta_{f_r}$ denotes the learnable parameters of the Base Motion Blocks. The Sparse Motion Blocks then take semantic features $\phi_l$, $\phi_g$, $\phi_e$, along with $\gamma_h$, $\tilde{m}$ and $id$, to produce frame-level semantic codes $q^s$ and a semantic score $\psi$. $\psi$ then triggers these codes only for semantically significant frames, producing a sparse motion representation:
$$
f_{s}: \left(\phi_{l}, \phi_{g}, \phi_{e}, \tilde{m}, id; \theta_{f_{s}}\right) \rightarrow \left(q^{s}, \psi\right), \tag{2}
$$

where $\theta_{f_s}$ represents the Sparse Motion Block parameters. Finally, the semantic emphasis mechanism $\mathcal{E}$ combines $q^b$ and $q^s$, guided by $\psi$, to form the final motion codes $q^m$:

$$
q^{m} = \mathcal{E}\left(q^{b}, q^{s}; \psi\right). \tag{3}
$$

The motion decoder then uses $q^{m}$ to generate the output $m'$.
# 3.3. Generating Rhythm-related Base Motion

The Base Motion Generation stage (Fig. 3 a) in SemTalk establishes a rhythmically aligned foundation by leveraging both rhythmic and speaker-specific features, enhancing the naturalness and personalization of the generated motion.

Rhythmic Speech Encoding. To synchronize motion with speech, SemTalk incorporates two rhythmic features: beats $\gamma_{b}$ and HuBERT features $\gamma_{h}$. $\gamma_{b}$, derived from amplitude, short-time energy [11], and onset detection, marks key rhythmic points for aligning gestures with speech. Meanwhile, $\gamma_{h}$, extracted by the HuBERT encoder [18], captures high-level audio traits.



Figure 2. An overview of the SemTalk pipeline. SemTalk generates holistic co-speech motion by first constructing rhythm-aligned codes $q^b$ in $f_r$, guided by the rhythmic consistency loss $L_{\mathrm{Rhy}}$. Meanwhile, $f_s$ produces frame-level semantic codes $q^s$, activated selectively by the semantic score $\psi$. Finally, $q^m$ is obtained by fusing $q^b$ and $q^s$ based on $\psi$ and passing the result to the motion decoder, yielding synchronized and contextually enriched motions.

In addition to the rhythmic features $\gamma$, SemTalk uses a seed pose $\tilde{m}$ and speaker identity $id$ to generate a personalized, rhythm-aligned latent pose $p$. MLP-based Face Enhancement and Body Part-Aware modules then utilize $\gamma$, $p$ and $id$ to obtain latent face $f_{e}$, hands $f_{h}$, upper body $f_{u}$ and lower body $f_{l}$ features.
Coarse2Fine Cross-Attention Module. To facilitate the learning of base motion, we first propose a transformer-based hierarchical Coarse2Fine Cross-Attention Module that utilizes $f_{e}$, $f_{h}$, $f_{u}$ and $f_{l}$ to obtain the latent base motion $f_{b}$. The refinement begins with $\gamma$ for $f_{e}$, which guides the rhythmic representation for $f_{h}$, followed by conditioning $f_{u}$ and finally influencing $f_{l}$. Since mouth movements closely correspond to speech syllables with minimal delay, we use the face to guide hand motions, inspired by DiffSHEG [9]. As the upper and lower body movements are less directly driven by speech and instead reflect the natural swinging of the hands and torso, we adopt cascading guidance: hands influence the upper body, which in turn drives the lower body. This structured approach, moving from the face to the hands, upper body, and lower body, ensures smooth and coherent motion propagation across the entire body.
Rhythmic Consistency Learning. Inspired by CoG's use of the InfoNCE loss [45] to synchronize facial expressions with audio cues, our approach adopts a similar philosophy of aligning motion and speech rhythm.



Figure 3. Architecture of SemTalk. SemTalk generates holistic co-speech motion in three stages. (a) Base Motion Generation uses rhythmic consistency learning to produce rhythm-aligned codes $q^b$, conditioned on rhythmic features $\gamma_b, \gamma_h$. (b) Sparse Motion Generation employs semantic emphasis learning to generate semantic codes $q^s$, activated by the semantic score $\psi$. (c) Adaptive Fusion automatically combines $q^b$ and $q^s$ based on $\psi$ to produce mixed codes $q^m$ at the frame level for rhythmically aligned and contextually rich motions.

The loss can be defined as:
$$
\mathcal{L}_{\mathrm{Rhy}} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\left(\operatorname{sim}\left(h\left(f_{i}\right), \gamma_{h}^{i}\right) / \tau\right)}{\sum_{j=1}^{N} \exp\left(\operatorname{sim}\left(h\left(f_{i}\right), \gamma_{h}^{j}\right) / \tau\right)}, \tag{4}
$$

where $N$ denotes the number of frames (or the batch size), $\tau$ denotes the temperature hyperparameter, $h(\cdot)$ is the projection head for latent motion, $f_{i}$ and $\gamma_h^i$ are the latent motion and rhythmic features at frame (or sample) $i$, and $\operatorname{sim}(\cdot)$ denotes cosine similarity.
Unlike CoG, our approach fundamentally differs by incorporating separate local and global rhythmic consistency losses, applied to both the latent face $f_{e}$ and latent hands $f_{h}$, ensuring a more cohesive and synchronized representation across the entire motion sequence. This rhythmic consistency loss ensures that the motions are not only synchronized at the frame level but also maintain a consistent rhythmic flow across the entire sequence.

The local frame-level consistency loss $\mathcal{L}_{\mathrm{Rhy}}^{(L)}$ aligns the motion features of each frame with the corresponding rhythmic cues $\gamma_{h}$. By leveraging HuBERT features $\gamma_{h}$ instead of basic beat features $\gamma_{b}$, which only capture rhythmic pauses, we incorporate rich, high-level audio representations that enhance the model's ability to capture rhythm-related motion patterns and maintain temporal coherence.

The global sentence-level consistency loss $\mathcal{L}_{\mathrm{Rhy}}^{(G)}$ is designed to ensure rhythmic coherence at a global level. Unlike the local loss, $\mathcal{L}_{\mathrm{Rhy}}^{(G)}$ reinforces rhythm consistency throughout the sequence, ensuring that the generated motion remains smooth and rhythm-aligned throughout its duration.

By jointly minimizing $\mathcal{L}_{\mathrm{Rhy}}^{(L)}$ and $\mathcal{L}_{\mathrm{Rhy}}^{(G)}$, rhythmic consistency learning enables SemTalk to produce base motions that are rhythmically aligned and temporally cohesive, forming a solid rhythm-related foundation.
# 3.4. Generating Semantic-aware Sparse Motion

The Sparse Motion Generation stage (Fig. 3 b) in SemTalk adds semantic-aware sparse motion to the base motion by incorporating semantic cues drawn from speech content and emotional tone. By separating rhythm and semantics, this stage enhances motion generation by emphasizing contextually meaningful motion at key semantic moments.

Semantic Speech Encoding. To capture semantic cues in speech, similar to [10], semantic emphasis learning combines frame-level text embeddings $\phi_l$, sentence-level features $\phi_g$ from the CLIP model [39], and emotion features $\phi_e$ from the emotion2vec model [35]. Together with the audio feature $\gamma_h$, these features form a comprehensive semantic representation $f_t$ that reflects both the content and emotional undertones of speech, enabling SemTalk to activate motions that are sensitive to nuanced semantic cues.

Semantic Emphasis Learning. The process begins by generating $f_{t}$, combining local and global cues from the text, speech, and emotion embeddings and the HuBERT features $\gamma_{h}$. Then, the sem-gate leverages these multi-modal inputs to generate a semantic score, identifying frames that require enhanced semantic emphasis. The sem-gate in SemTalk refines keyframe motion by applying two forms of weighting $\mathcal{W}$: feature weighting $\mathcal{W}_{f}$ and loss weighting $\mathcal{W}_{l}$. First, using $f_{t}$ and $\gamma_{h}$, SemTalk computes a semantic score $\psi$ that dynamically scales the feature weighting—filtering the semantic features $f_{t}$ to activate frames with significant relevance, ensuring that the model emphasizes frames aligned with specific communicative intentions. Second, the loss weighting is applied by supervising $\psi$ with a classification loss $\mathcal{L}_{cls}^{G}$ based on semantic labels, further enhancing the model's ability to identify key frames.



Figure 4. Concept comparison with LivelySpeaker [53]. (Top) LivelySpeaker generates semantic gestures with CLIP embeddings in SAG and refines rhythm-related gestures separately using diffusion, causing potential jitter. (Bottom) SemTalk integrates text and speech, uses a semantic gate for fine-grained control, and unifies rhythm and semantics for smoother, more coherent motions.

The two weighting methods allow SemTalk to selectively enhance semantic gestures while suppressing uninformative motion, leading to more expressive co-speech motion.
|
| 122 |
+
|
| 123 |
+
Once $\psi$ is established, it modulates the integration of rhythm-aligned base motion $f_{b}$ and sparse semantic motion $f_{s}$ . Through alpha-blending, frames with high semantic relevance draw more from $f_{s}$ , while others rely on $f_{b}$ . The final motion codes $q^{s}$ are computed as:
$$
q^{s} = \mathrm{MLP}\left(\psi f_{s} + (1 - \psi) f_{b}\right), \tag{5}
$$
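Eq. (5) can be sketched numerically as follows, with a single random linear map standing in for the MLP (an assumption for illustration):

```python
import numpy as np

# Per-frame alpha-blending of sparse semantic motion f_s with rhythm-aligned
# base motion f_b, weighted by the semantic score psi, then projected.
rng = np.random.default_rng(1)
T, D = 4, 8
f_s, f_b = rng.normal(size=(T, D)), rng.normal(size=(T, D))
psi = rng.uniform(size=(T, 1))                 # per-frame semantic score

W_mlp = rng.normal(size=(D, D))                # hypothetical MLP weights
q_s = (psi * f_s + (1.0 - psi) * f_b) @ W_mlp  # motion codes q^s
assert q_s.shape == (T, D)
```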
To ensure cohesive propagation of semantic emphasis across body regions, we employ the Coarse2Fine Cross-Attention Module, similar to Sec. 3.3. In this stage, we focus solely on body motion, excluding facial movements, as body gestures play a more critical role in conveying semantic meaning in co-speech interactions.
To foster diverse motion generation, SemTalk includes a code classification loss $\mathcal{L}_{cls}$ and a reconstruction loss $\mathcal{L}_{rec}$ . These losses are specifically focused on frames with high semantic scores, guiding the model to prioritize the generation of sparse, meaningful gestures.
Discussion. Recently, LivelySpeaker [53] introduced a Semantic-Aware Generator (SAG) and a Rhythm-Aware Generator (RAG) for co-speech gesture generation, combining them through beat empowerment. While effective, key differences exist between LivelySpeaker and SemTalk; see Fig. 4. First, SAG generates gestures from text using CLIP embeddings, but bridging words and expressive gestures is challenging, causing jitter. SemTalk incorporates speech features (pitch, tone, emotion) alongside text and GT supervision for adaptive gestures. Second, LivelySpeaker applies global control, missing local semantic details, while SemTalk uses fine-grained, frame-level semantic control for subtle variations. Third, LivelySpeaker fuses SAG and RAG in separate latent spaces, leading to misalignment and inconsistencies. SemTalk jointly models rhythm and semantics in a unified framework, ensuring smoother transitions and coherence. We further compare SAG with our semantic gate in experiments.
# 3.5. Semantic Score-based Motion Fusion
The Adaptive Fusion stage (Fig. 3c) in SemTalk seamlessly integrates semantic-aware sparse motion into the rhythm-related base motion. By strategically enhancing frames based on their semantic importance, it maintains a smooth and natural motion flow across sequences. For each frame $i$, the semantic score $\psi_i$ computed during the Sparse Motion Generation stage is compared to a threshold $\beta$. If $\psi_i > \beta$, the base motion's latent code $q_i^r$ is replaced with the sparse semantic code $q_i^s$, effectively highlighting expressive gestures where they are most relevant; otherwise, $q_i = q_i^r$.
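The per-frame replacement rule can be sketched directly (scores and codes below are toy values):

```python
import numpy as np

# Keep the base code q_i^r unless the frame's semantic score exceeds beta,
# in which case substitute the sparse semantic code q_i^s.
beta = 0.5                                    # threshold from Sec. 4.1
psi = np.array([0.1, 0.7, 0.4, 0.9])          # per-frame semantic scores
q_r = np.array([[1.0], [1.0], [1.0], [1.0]])  # base motion latent codes (toy)
q_s = np.array([[2.0], [2.0], [2.0], [2.0]])  # sparse semantic codes (toy)

q = np.where(psi[:, None] > beta, q_s, q_r)
assert q.tolist() == [[1.0], [2.0], [1.0], [2.0]]
```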
This selective replacement emphasizes semantically critical gestures while preserving the natural rhythmic base motion. By blending $q^{r}$ and $q^{s}$ based on semantic scores, SemTalk adapts to the expressive needs of the speech context while ensuring coherence. Additionally, the convolutional structure of the RVQ-VAE decoder ensures smooth transitions between frames, preserving motion continuity.
# 4. Experiments
# 4.1. Experimental Setup
Datasets. For training and evaluation, we use two datasets: BEAT2 and SHOW. BEAT2, introduced in EMAGE [31], extends BEAT [30] with 76 hours of data from 30 speakers, standardized into a mesh representation with paired audio, text, and frame-level semantic labels. We follow [31] and use the BEAT2-standard subset with an $85\% / 7.5\% / 7.5\%$ train/val/test split. SHOW [48] includes 26.9 hours of high-quality talk show videos with 3D body meshes at 30 fps. Since it lacks frame-level semantic labels, we use the sem-gate from SemTalk, pre-trained on BEAT2, to generate them. Following [48], we select video clips longer than 10 seconds and split the data $80\% / 10\% / 10\%$ for train/val/test.
Implementation Details. Our model is trained on a single NVIDIA A100 GPU for 200 epochs with a batch size of 64. We use RVQ-VAE [42] with a temporal downscaling factor of 4. The residual quantization has 6 layers, a codebook size of 256, and a dropout rate of 0.2. We use five transformer layers to predict the last five layers of codes. In Base Motion Learning, $\tau = 0.1$; in Sparse Motion Learning, $\beta = 0.5$, both set empirically. Training uses Adam with a learning rate of 1e-4. Following [31], we start with a 4-frame seed pose, gradually increasing the masked-frame ratio from 0 to $40\%$ over 120 epochs.
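One plausible reading of the masking schedule is a linear ramp, sketched below; the exact curve used in training is not specified, so treat the linear form as an assumption:

```python
# Masked-frame ratio grows from 0 to 40% over the first 120 epochs,
# then stays capped at 40% for the remaining epochs.
def mask_ratio(epoch: int, warmup_epochs: int = 120, max_ratio: float = 0.4) -> float:
    return min(max_ratio, max_ratio * epoch / warmup_epochs)

assert mask_ratio(0) == 0.0
assert abs(mask_ratio(60) - 0.2) < 1e-9
assert mask_ratio(120) == 0.4
assert mask_ratio(200) == 0.4  # capped after warmup
```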
Metrics. We evaluate generated body gestures using FGD [50] to measure distributional alignment with GT, reflecting realism. DIV [26] quantifies gesture variation via the average L1 distance across clips. BC [27] assesses speech-motion synchrony. For facial expressions, we use MSE [47] to quantify positional differences and LVD [48] to measure discrepancies between GT and generated facial vertices.
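As a sketch, DIV can be computed as the mean L1 distance over clip pairs; the exact pairing protocol of [26] may differ:

```python
import numpy as np

# Diversity (DIV): average L1 distance between generated motion clips,
# taken here over all unordered pairs of clips.
def diversity(clips: np.ndarray) -> float:
    n = len(clips)
    dists = [
        np.abs(clips[i] - clips[j]).sum()
        for i in range(n) for j in range(i + 1, n)
    ]
    return float(np.mean(dists))

clips = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # toy "motion clips"
assert diversity(clips) == (2 + 4 + 2) / 3  # pairs: (0,1)=2, (0,2)=4, (1,2)=2
```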

Figure 5. Comparison on BEAT2 [31] Dataset. SemTalk* refers to the model trained solely on the Base Motion Generation stage, capturing rhythmic alignment but lacking semantic gestures. In contrast, SemTalk successfully emphasizes sparse yet vivid motions. For instance, when saying "my opinion," SemTalk generates a hand-raising gesture followed by an index finger extension for emphasis. Similarly, for "never tell," our model produces a clear, repeated gesture matching the rhythm, reinforcing the intended emphasis.

Figure 6. Comparison on SHOW [48] Dataset. Our method performs better in motion diversity and semantic richness.

# 4.2. Qualitative Results
Qualitative Comparisons. We encourage readers to watch our demo video for a clearer understanding of SemTalk's qualitative performance. Our method achieves superior speech-motion alignment, generating more realistic, diverse, and semantically consistent gestures than the baselines. As shown in Fig. 5, LivelySpeaker, TalkSHOW, EMAGE, and DiffSHEG exhibit jitter: EMAGE mainly in the legs and shoulders, while TalkSHOW affects the entire body. LivelySpeaker and DiffSHEG, which focus primarily on the upper body, produce slow and inconsistent motions, especially at speech clip boundaries. DiffSHEG improves gesture diversity over EMAGE and TalkSHOW, though EMAGE maintains greater naturalness. SemTalk surpasses all baselines in both realism and diversity. Compared to SemTalk*, SemTalk generates more expressive gestures, emphasizing key phrases (e.g., raising hands for "dream job" or pointing for "that is why"). While SemTalk* ensures rhythmic consistency, it lacks semantic expressiveness. By integrating frame-level semantic emphasis, SemTalk aligns motion with both rhythm and semantics, demonstrating the effectiveness of rhythmic consistency learning and semantic emphasis learning. In facial comparisons (Fig. 7), EMAGE shows minimal lip movement, while both DiffSHEG and EMAGE reveal inconsistencies between lip motion and the rhythm of speech. In contrast, SemTalk produces smooth, natural transitions across syllables, resulting in realistic and expressive lips, significantly surpassing the baselines.

Figure 7. Facial Comparison on the BEAT2 [31] Dataset.

Figure 8. Qualitative study on semantic score. The semantic score aligns with keywords, influencing gesture intensity.
On the SHOW dataset (Fig. 6), SemTalk shows more agile gestures than all baselines when applied to unseen data. Our method captures natural and contextually rich gestures, particularly in moments of emphasis such as "I like to do" and "relaxing," where our model produces lively hand and body movements that align with the speech content.
Semantic Score. Fig. 8 shows how semantic emphasis influences gesture intensity, with peaks in the semantic score aligning with keywords like "comes," "fantastic," and "captured." By extracting semantic scores from key frames, we track gesture emphasis trends. Furthermore, as shown in Fig. 9, SemTalk adapts to different emotional tones even when the text remains unchanged. This adaptability prevents overfitting to the text itself, allowing the model to generate gestures that vary according to the emotional delivery of the speech. The learned semantic score provides fine-grained, frame-level control, keeping gestures both rhythmically synchronized and semantically aligned in real time.

User Study. We conducted a user study with 10 video samples and 25 participants from diverse backgrounds, evaluating realism, semantic consistency, motion-speech synchrony, and diversity. Participants were asked to rank shuffled videos across the different methods. As shown in Fig. 10, our approach received dominant preferences across all metrics, especially in semantic consistency and realism.

Figure 9. Same words with different speech from the internet. "emo" represents different emotional tones extracted from speech. SemTalk can generate different motions, even when the text script is the same, preventing overfitting to the text itself.

Figure 10. Results of the user study.
# 4.3. Quantitative Results
Comparison with Baselines. As shown in Tab. 1, SemTalk outperforms previous methods on BEAT2, achieving lower FGD, MSE, and LVD, indicating better distribution alignment and reduced motion errors. For fairness, we follow [31] and add a lower-body VQ-VAE to TalkSHOW, DiffSHEG, and SemTalk. Notably, SemTalk significantly reduces FGD, ensuring strong distribution matching. While TalkSHOW and EMAGE achieve competitive diversity (DIV) scores, SemTalk balances high semantic relevance with natural motion flow.
On the SHOW dataset, SemTalk excels with the lowest FGD, MSE, and the highest BC, indicating precise beat alignment with the audio and enhanced semantic consistency in generated motions. Although EMAGE exhibits high DIV, our model achieves comparable results while maintaining smooth, realistic motion free from jitter.
Sem-gate. Tab. 2 highlights the effectiveness of sem-gate. Without sem-gate, the model fails to emphasize key moments. Randomized semantic scores lead to poor performance by preventing meaningful frame distinction. Introducing a learned sem-gate, even without weighting (w/o $\mathcal{W}$), significantly improves semantic alignment and classification accuracy. Refinement is further enhanced through the weighting strategies: feature weighting $\mathcal{W}_f$ enhances motion emphasis, while loss weighting $\mathcal{W}_l$ improves FGD and overall accuracy. These results suggest that the weighting methods enhance the accuracy of the semantic score and help the model prioritize important frames. The best results come from applying the two weighting methods together, where frames with stronger semantic signals receive higher emphasis. We also compare sem-gate with LivelySpeaker's SAG [53]. We find that replacing the Sparse Motion stage with SAG and substituting motion using GT semantic labels leads to poor performance. SAG relies only on text-motion alignment, ignoring emotional tone, making it more prone to overfitting the text. In contrast, our sem-gate applies GT supervision with two weighting methods, achieving more accurate and stable semantic motion.

<table><tr><td>Dataset</td><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td rowspan="10">BEAT2</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>7.787</td><td>7.593</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>8.026</td><td>7.766</td></tr><tr><td>CaMN [30]</td><td>6.644</td><td>6.769</td><td>10.86</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>8.811</td><td>7.241</td><td>11.49</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [53]</td><td>11.80</td><td>6.659</td><td>11.28</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>9.040</td><td>7.716</td><td>8.213</td><td>8.614</td><td>8.043</td></tr><tr><td>TalkSHOW [48]</td><td>6.209</td><td>6.947</td><td>13.47</td><td>7.791</td><td>7.771</td></tr><tr><td>EMAGE [31]</td><td>5.512</td><td>7.724</td><td>13.06</td><td>7.680</td><td>7.556</td></tr><tr><td>DiffSHEG [9]</td><td>8.986</td><td>7.142</td><td>11.91</td><td>7.665</td><td>8.673</td></tr><tr><td>SemTalk (Ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr><tr><td rowspan="10">SHOW</td><td>FaceFormer [13]</td><td>-</td><td>-</td><td>-</td><td>138.1</td><td>43.69</td></tr><tr><td>CodeTalker [44]</td><td>-</td><td>-</td><td>-</td><td>140.7</td><td>45.84</td></tr><tr><td>CaMN [30]</td><td>22.12</td><td>7.712</td><td>10.37</td><td>-</td><td>-</td></tr><tr><td>DSG [47]</td><td>24.84</td><td>8.027</td><td>10.23</td><td>-</td><td>-</td></tr><tr><td>LivelySpeaker [53]</td><td>32.17</td><td>7.844</td><td>10.14</td><td>-</td><td>-</td></tr><tr><td>Habibie et al. [17]</td><td>27.22</td><td>8.209</td><td>8.541</td><td>145.6</td><td>47.35</td></tr><tr><td>TalkSHOW [48]</td><td>24.43</td><td>8.249</td><td>10.98</td><td>139.6</td><td>45.17</td></tr><tr><td>EMAGE [31]</td><td>22.12</td><td>8.280</td><td>12.46</td><td>136.1</td><td>42.44</td></tr><tr><td>DiffSHEG [9]</td><td>24.87</td><td>8.061</td><td>10.79</td><td>139.0</td><td>45.77</td></tr><tr><td>SemTalk (Ours)</td><td>20.18</td><td>8.304</td><td>11.36</td><td>134.1</td><td>39.15</td></tr></table>
Ablation Study on Components. We assess the impact of each component of our model on BEAT2 and present the results in Tab. 3, which reveals several key insights (see the supplementary material for more ablation results):
- Rhythmic Consistency Learning (RC) not only boosts performance on key metrics like FGD, LVD, and BC but also reduces the MSE, contributing to smoother and more realistic base motion.
- Semantic Emphasis Learning (SE) proves essential for selectively enhancing semantic-rich gestures. Enabling SE improves both diversity (DIV) and FGD, allowing the model to emphasize semantically relevant motions. SE demonstrates its effectiveness in focusing on frame-level semantic information, which contributes to the generation of lifelike gestures with enriched contextual meaning.
- Coarse2Fine Cross-Attention Module (C2F) effectively refines motion details, improving BC, FGD, and DIV. When combined with RVQ and RC, C2F achieves the best MSE and LVD, highlighting its role in enhancing motion realism and diversity hierarchically.
Table 1. Quantitative comparison with SOTA. SemTalk consistently outperforms baselines across both the BEAT2 and SHOW datasets. Lower values are better for FGD, MSE, and LVD; higher values are better for BC and DIV. We report $\mathrm{FGD} \times 10^{-1}$, $\mathrm{BC} \times 10^{-1}$, $\mathrm{MSE} \times 10^{-8}$, and $\mathrm{LVD} \times 10^{-5}$ for simplicity.
<table><tr><td>Method</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>Acc(%)↑</td></tr><tr><td>w/o Sem-gate</td><td>4.893</td><td>7.702</td><td>12.42</td><td>-</td></tr><tr><td>SAG (LivelySpeaker [53])</td><td>4.618</td><td>7.682</td><td>12.45</td><td>-</td></tr><tr><td>Sem-gate (Random ψ)</td><td>4.634</td><td>7.700</td><td>12.44</td><td>50.07</td></tr><tr><td>Sem-gate (w/o W)</td><td>4.495</td><td>7.633</td><td>12.26</td><td>72.32</td></tr><tr><td>Sem-gate (w/ Wf)</td><td>4.408</td><td>7.679</td><td>12.28</td><td>78.52</td></tr><tr><td>Sem-gate (w/ Wl)</td><td>4.366</td><td>7.772</td><td>11.94</td><td>77.83</td></tr><tr><td>Sem-gate (ours)</td><td>4.278</td><td>7.770</td><td>12.91</td><td>82.76</td></tr></table>
Table 2. Ablation study on Sem-gate. "Acc" denotes semantic classification performance on BEAT2. "w/o Sem-gate" means directly inputting $f_{t}$ and $\gamma_{h}$ without the Sem-gate. "SAG (LivelySpeaker [53])" replaces the Sparse Motion Generation stage with LivelySpeaker's SAG method. "Random $\psi$" assigns frame-level scores randomly. "w/o $\mathcal{W}$" applies the semantic gate but excludes frame-level weighting. "w/ $\mathcal{W}_{f}$" applies feature weighting, and "w/ $\mathcal{W}_{l}$" applies loss weighting (as described in Sec. 3.4). Sem-gate (ours) integrates both the semantic gate and frame-level weighting to enhance emphasis.
<table><tr><td>RC</td><td>SE</td><td>C2F</td><td>RVQ</td><td>FGD↓</td><td>BC↑</td><td>DIV↑</td><td>MSE↓</td><td>LVD↓</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td><td>6.234</td><td>7.628</td><td>11.44</td><td>8.239</td><td>7.831</td></tr><tr><td>-</td><td>-</td><td>-</td><td>√</td><td>5.484</td><td>7.641</td><td>11.84</td><td>13.882</td><td>15.42</td></tr><tr><td>√</td><td>-</td><td>-</td><td>√</td><td>4.867</td><td>7.701</td><td>12.38</td><td>6.201</td><td>6.928</td></tr><tr><td>√</td><td>√</td><td>-</td><td>√</td><td>4.526</td><td>7.751</td><td>12.83</td><td>6.215</td><td>6.997</td></tr><tr><td>-</td><td>-</td><td>√</td><td>√</td><td>4.897</td><td>7.702</td><td>12.42</td><td>13.416</td><td>15.72</td></tr><tr><td>√</td><td>√</td><td>-</td><td>-</td><td>5.831</td><td>7.758</td><td>11.97</td><td>6.587</td><td>7.106</td></tr><tr><td>√</td><td>-</td><td>√</td><td>√</td><td>4.397</td><td>7.776</td><td>12.49</td><td>6.100</td><td>6.898</td></tr><tr><td>√</td><td>√</td><td>√</td><td>√</td><td>4.278</td><td>7.770</td><td>12.91</td><td>6.153</td><td>6.938</td></tr></table>
Table 3. Ablation study on each key component. "RC" denotes rhythmic consistency learning, "SE" denotes semantic emphasis learning, "C2F" denotes the Coarse2Fine Cross-Attention Module, and "RVQ" denotes the RVQ-VAE.
- RVQ-VAE (RVQ) enhances the diversity and realism of generated motion. Though it slightly increases MSE and LVD, it notably improves FGD, leading to more natural motion generation compared to standard VQ-VAE.
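Residual quantization as used in RVQ-VAE can be sketched as follows, with random codebooks standing in for learned ones (the 6-layer, 256-entry configuration matches the implementation details above; everything else is illustrative):

```python
import numpy as np

# Each of the 6 layers quantizes the residual left by the previous layers
# against its own codebook of 256 entries.
rng = np.random.default_rng(2)
D, LAYERS, CODES = 8, 6, 256
codebooks = rng.normal(size=(LAYERS, CODES, D))  # stand-ins for learned codebooks

def rvq_encode(z: np.ndarray):
    residual, indices = z.copy(), []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=-1)))
        indices.append(idx)
        residual = residual - cb[idx]  # next layer quantizes what is left
    return indices, z - residual       # code indices and the reconstruction

z = rng.normal(size=D)
codes, z_hat = rvq_encode(z)
assert len(codes) == LAYERS and all(0 <= i < CODES for i in codes)
assert z_hat.shape == z.shape
```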
# 5. Conclusion
We propose SemTalk, a novel approach for holistic co-speech motion generation with frame-level semantic emphasis. Our method addresses the integration of sparse yet expressive motion into foundational rhythm-related motion, which has received less attention in previous works. We develop a framework that separately learns rhythm-related base motion through a coarse2fine cross-attention module and rhythmic consistency learning, while capturing semantic-aware motion through Semantic Emphasis Learning. These components are then adaptively fused based on a learned semantic score. Our approach has demonstrated state-of-the-art performance on two public datasets, both quantitatively and qualitatively. The qualitative results and user study show that our method can generate high-quality co-speech motion sequences that enhance frame-level semantics over robust base motions, reflecting the full spectrum of human expressiveness.
Acknowledgments. This work was supported by the Alibaba Research Intern Program, the Young Scientists Fund of the National Natural Science Foundation of China No. 624B2110, the National Key Research and Development Program of China No. 2024YFC3015600, and the Fundamental Research Funds for Central Universities No. 2042023KF0180 & No. 2042025KF0053. The numerical calculation is supported by the supercomputing system in the Supercomputing Center of Wuhan University and Tongyi Lab, Alibaba Group.
# References
[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized cospeech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20566–20576, 2022. 2
[2] Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audio-driven motion synthesis with diffusion models. ACM Transactions on Graphics (TOG), 42(4):1-20, 2023. 2
[3] Tenglong Ao, Zeyi Zhang, and Libin Liu. Gesture diffuclip: Gesture diffusion model with clip latents. ACM Transactions on Graphics (TOG), 42(4):1-18, 2023. 3
[4] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. Audiolm: a language modeling approach to audio generation. IEEE/ACM transactions on audio, speech, and language processing, 31:2523-2533, 2023. 3
[5] Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 413-420, 1994. 2
[6] Justine Cassell, David McNeill, and Karl-Erik McCullough. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. Pragmatics & cognition, 7(1):1-34, 1999. 1
[7] Justine Cassell, Hannes Högni Vilhjalmsson, and Timothy Bickmore. Beat: the behavior expression animation toolkit. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 477-486, 2001. 2
[8] Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. Enabling synergistic full-body control in prompt-based co-speech motion generation. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6774-6783, 2024. 2
[9] Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7352-7361, 2024. 2, 3, 8
[10] Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al. Emotional speech-driven 3d body animation via disentangled latent diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1942-1953, 2024. 2, 4
[11] Selina Chu, Shrikanth Narayanan, and C-C Jay Kuo. Environmental sound recognition with time-frequency audio features. IEEE Transactions on Audio, Speech, and Language Processing, 17(6):1142-1158, 2009. 3
[12] Radek Daneček, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael Black, and Timo Bolkart. Emotional speech-driven animation with content-emotion disentanglement. In SIGGRAPH Asia 2023 Conference Papers, pages 1-13, 2023. 2
[13] Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, and Taku Komura. Faceformer: Speech-driven 3d facial animation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18770-18780, 2022. 8
[14] Susan Goldin-Meadow. The role of gesture in communication and thinking. Trends in cognitive sciences, 3(11):419-429, 1999. 1
[15] Kehong Gong, Dongze Lian, Heng Chang, Chuan Guo, Zihang Jiang, Xinxin Zuo, Michael Bi Mi, and Xinchao Wang. Tm2d: Bimodality driven 3d dance generation via music-text integration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9942–9952, 2023. 2
[16] Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1900-1910, 2024. 3
[17] Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, and Christian Theobalt. Learning speech-driven 3d conversational gestures from video. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, pages 101-108, 2021. 1, 2, 8
[18] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29: 3451-3460, 2021. 3
[19] Chien-Ming Huang and Bilge Mutlu. Robot behavior toolkit: generating effective social behaviors for robots. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pages 25-32, 2012. 2
[20] Adam Kendon. Gesture: Visible action as utterance. Cambridge University Press, 2004. 1
[21] Michael Kipp. Gesture generation by imitation: From human behavior to computer character animation. Universal-Publishers, 2005. 2
[22] Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R Thorisson, and Hannes Vilhjalmsson. Towards a common framework for multimodal generation: The behavior markup language. In Intelligent Virtual Agents: 6th International Conference, IVA 2006, Marina Del Rey, CA, USA, August 21-23, 2006. Proceedings 6, pages 205-217. Springer, 2006. 2
[23] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 international conference on multimodal interaction, pages 242-250, 2020. 2
[24] Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, and Gustav Eje Henter. A large, crowdsourced evaluation of gesture generation systems on common data: The genea challenge 2020. In Proceedings of the 26th International Conference on Intelligent User Interfaces, pages 11-21, 2021. 1
[25] Alex Lascarides and Matthew Stone. A formal semantic analysis of gesture. Journal of Semantics, 26(4):393-449, 2009. 1
[26] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and Linchao Bao. Audio2gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11293-11302, 2021. 5
[27] Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13401-13412, 2021. 5
[28] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. Seeg: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473-10482, 2022. 2
[29] Haiyang Liu, Naoya Iwamoto, Zihao Zhu, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Disco: Disentangled implicit content and rhythm learning for diverse co-speech gestures synthesis. In Proceedings of the 30th ACM international conference on multimedia, pages 3764-3773, 2022. 2
[30] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, and Bo Zheng. Beat: A large-scale semantic and emotional multi-modal dataset for conversational gestures synthesis. In European conference on computer vision, pages 612-630. Springer, 2022. 1, 5, 8
[31] Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1144-1154, 2024. 1, 2, 3, 5, 6, 7, 8
[32] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, and Bolei Zhou. Learning hierarchical cross-modal association for co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10462-10472, 2022. 2
[33] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1566-1576, 2024. 2
[34] Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, and Heung-Yeung Shum. Humantomato: Text-aligned whole-body motion generation. arXiv preprint arXiv:2310.12978, 2023. 2
[35] Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. emotion2vec: Self-supervised pre-training for speech emotion representation. arXiv preprint arXiv:2312.15185, 2023. 4
[36] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, and Ari Shapiro. Virtual character performance from speech. In Proceedings of the 12th ACM SIGGRAPH/Eurographics symposium on computer animation, pages 25-35, 2013. 2
[37] Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, and Alexander Richard. From audio to photoreal embodiment: Synthesizing humans in conversations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1001-1010, 2024. 2
[38] Ashi Özyürek, Roel M Willems, Sotaro Kita, and Peter Hagoort. On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of cognitive neuroscience, 19(4):605-616, 2007. 1
[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 4
[40] Manuel Rebol, Christian Güttl, and Krzysztof Pietroszek. Real-time gesture animation generation from speech for virtual human interaction. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-4, 2021. 2
[41] Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, and Jiwen Lu. Difftalk: Crafting diffusion models for generalized talking head synthesis. arXiv preprint arXiv:2301.03786, 2(4):5, 2023. 2
[42] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 2, 5
[43] Hanwei Wu and Markus Flierl. Learning product codebooks using vector-quantized autoencoders for image retrieval. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pages 1-5. IEEE, 2019. 2
[44] Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, and Tien-Tsin Wong. Codetalker: Speech-driven 3d facial animation with discrete motion prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12780-12790, 2023. 8
[45] Zunnan Xu, Yachao Zhang, Sicheng Yang, Ronghui Li, and Xiu Li. Chain of generation: Multi-modal gesture synthesis via cascaded conditional control. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6387-6395, 2024. 3
[46] Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, et al. Unified gesture: A unified gesture synthesis model for multiple skeletons. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1033-1044, 2023. 2
|
| 276 |
+
[47] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. 2, 5, 8
|
| 277 |
+
[48] Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 469-480, 2023. 1, 2, 5, 6, 8
|
| 278 |
+
[49] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jae-hong Kim, and Geehyuk Lee. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In 2019 International Conference on Robotics and Automation (ICRA), pages 4303-4309. IEEE, 2019. 2
|
| 279 |
+
[50] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39 (6):1-16, 2020. 5
|
| 280 |
+
[51] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495-507, 2021. 3
|
| 281 |
+
[52] Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Zerong Zheng, Yebin Liu, and Kun Li. Speechact: Towards generating whole-body motion from speech. IEEE Transactions on Visualization and Computer Graphics, 2025. 2
|
| 282 |
+
[53] Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, and Shenghua Gao. Livelyspeaker: Towards semantic-aware co-speech gesture generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20807-20817, 2023. 2, 5, 8
|
| 283 |
+
[54] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10544-10553, 2023. 2
|
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8d1c5d025316f3c328805c7149160f84bbe6b4715bbe4d421ad983f71d7dbcd
size 694650
2025/SemTalk_ Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_content_list.json
ADDED
@@ -0,0 +1,2022 @@
[
    {
        "type": "text",
        "text": "Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers",
        "text_level": 1,
        "bbox": [107, 128, 888, 174],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Yunshan Zhong $^{1,2}$ , Yuyao Zhou $^{2}$ , Yuxin Zhang $^{2}$ , Wanchen Sui $^{3}$ , Shen Li $^{3}$ , Yong Li $^{3}$ , Fei Chao $^{2}$ , Rongrong Ji $^{1,2*}$",
        "bbox": [163, 202, 833, 239],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "$^{1}$ Institute of Artificial Intelligence, Xiamen University",
        "bbox": [284, 238, 712, 256],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "$^{2}$ MAC Lab, School of Informatics, Xiamen University $^{3}$ Alibaba Group",
        "bbox": [214, 255, 782, 273],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "viper.zhong@gmail.com, {yuyaozhou, yuxinzhang}@stu.xmu.edu.cn \n{wanchen.swc, litan.ps, jiufeng.ly}@alibaba-inc.com, {fchao, rrji}@xmu.edu.cn",
        "bbox": [156, 277, 834, 311],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Abstract",
        "text_level": 1,
        "bbox": [246, 344, 326, 359],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Data-free quantization (DFQ) enables model quantization without accessing real data, addressing concerns regarding data security and privacy. With the growing adoption of Vision Transformers (ViTs), DFQ for ViTs has garnered significant attention. However, existing DFQ methods exhibit two limitations: (1) semantic distortion, where the semantics of synthetic images deviate substantially from those of real images, and (2) semantic inadequacy, where synthetic images contain extensive regions with limited content and oversimplified textures, leading to suboptimal quantization performance. To address these limitations, we propose SARDFQ, a novel Semantics Alignment and Reinforcement Data-Free Quantization method for ViTs. To address semantic distortion, SARDFQ incorporates Attention Priors Alignment (APA), which optimizes synthetic images to follow randomly generated structure attention priors. To mitigate semantic inadequacy, SARDFQ introduces Multi-Semantic Reinforcement (MSR), leveraging localized patch optimization to enhance semantic richness across synthetic images. Furthermore, SARDFQ employs Soft-Label Learning (SL), wherein multiple semantic targets are adapted to facilitate the learning of multi-semantic images augmented by MSR. Extensive experiments demonstrate the effectiveness of SARDFQ, significantly surpassing existing methods. For example, SARDFQ improves top-1 accuracy on ImageNet by $15.52\\%$ for W4A4 ViT-B<sup>1</sup>.",
        "bbox": [89, 378, 483, 772],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1. Introduction",
        "text_level": 1,
        "bbox": [91, 803, 220, 819],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Vision Transformers (ViTs) [20, 29, 36] have demonstrated remarkable success across various computer vision tasks [1,",
        "bbox": [89, 830, 483, 861],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/4b50f3a45edef337a0f4073e0ff2e040144da215e4fb9126beb74c732643f2fe.jpg",
        "image_caption": [],
        "image_footnote": [],
        "bbox": [575, 344, 846, 518],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "(a) The t-SNE[69] visualization of the penultimate-layer features (extracted by DeiT-S) of synthetic images. Each marker (circle, star, triangle) represents a distinct category. The red dashed circles highlight the features extracted from our APA and real images. Notably, the features produced by PSAQ-ViT[47] exhibit substantial deviation from those of real images, indicating semantic distortion. In contrast, our APA yields features that more closely align with those of real images, suggesting improved semantics alignment.",
        "bbox": [511, 518, 906, 616],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/249fb76fabd4d8d549bcfc3fb1bc6e46377268668c0690f604df88d19982974d.jpg",
        "image_caption": [
            "PSAQ"
        ],
        "image_footnote": [],
        "bbox": [573, 617, 665, 744],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/39086b1071ec1e2cac7df6a959b641a108c4ee0baa0e9e30a61a0d1ebef8644d.jpg",
        "image_caption": [
            "PSAQ V2"
        ],
        "image_footnote": [],
        "bbox": [666, 617, 754, 744],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/faf3b7d4f308588c04140baf283cbb66741b4e3f32e492455dcd0bf875558327.jpg",
        "image_caption": [
            "SARDFQ (Ours)"
        ],
        "image_footnote": [],
        "bbox": [754, 617, 844, 744],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "(b) The images of PSAQ-ViT and PSAQ-ViT V2 exhibit numerous dull regions with limited content and simplified textures, reflecting semantic inadequacy. In comparison, our SARDFQ generates images with greater diversity in both content and texture, demonstrating enhanced semantics.",
        "bbox": [511, 757, 906, 808],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Figure 1. Illustration of (a) semantic distortion and (b) semantic inadequacy.",
        "bbox": [511, 819, 906, 847],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "7, 56, 68]. However, their high computational cost and substantial memory footprint hinder deployment in resource-",
        "bbox": [511, 869, 908, 902],
        "page_idx": 0
    },
    {
        "type": "header",
        "text": "CVF",
        "bbox": [106, 2, 181, 42],
        "page_idx": 0
    },
    {
        "type": "header",
        "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
        "bbox": [238, 0, 807, 46],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "*Corresponding Author: rrji@xmu.edu.cn",
        "bbox": [107, 875, 334, 887],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "1The code is at https://github.com/zysxmu/SARDFQ.",
        "bbox": [109, 887, 447, 898],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "12479",
        "bbox": [480, 944, 519, 957],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "constrained environments [11, 30, 32, 46, 55, 67, 82]. To address these limitations, quantization [39] has emerged as a promising solution, which reduces model complexity by converting full-precision weight and activations into low-bit representations.",
        "bbox": [89, 90, 480, 165],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Traditional quantization methods typically require access to the original training dataset, which raises data privacy and security concerns [6, 27, 77, 84]. As a result, data-free quantization (DFQ) has gained increasing attention, allowing quantization without the need for real data [14, 76, 81]. However, most existing DFQ methods are designed specifically for convolutional neural networks (CNNs) and are not directly applicable to vision transformers (ViTs). These methods generally rely on batch normalization statistics (BNS), which capture the distribution of real data, to synthesize in-distribution synthetic data [6, 76, 81, 84]. Yet, BNS is unavailable for ViTs, which use layer normalization (LN) to dynamically compute distribution statistics during inference [47]. Recently, several DFQ methods have been proposed for ViTs [16, 33, 47, 48, 62]. For example, PSAQ-ViT [47] introduces patch similarity entropy (PSE) loss to optimize Gaussian noise towards usable synthetic images.",
        "bbox": [89, 167, 482, 422],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Nevertheless, we observe that existing methods suffer from semantic distortion and inadequacy issues. As shown in Fig. 1a, features of synthetic images generated by PSAQ-ViT deviate significantly from those of real images. Tab. 1 shows that the cosine similarity between synthetic images from PSAQ-ViT and real images is notably low, also indicating significant distortion. These results highlight the issue of semantic distortion. Moreover, as shown in Fig. $1\\mathrm{b}^2$ synthetic images generated by PSAQ-ViT and PSAQ-ViT V2 exhibit many regions with limited content diversity and overly simplified textures. These low-quality dull regions are useless or even detrimental to model learning [33], highlighting the issue of semantic inadequacy. Consequently, quantized models trained on such low-quality images suffer from degraded performance.",
        "bbox": [89, 424, 482, 648],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Motivated by the above analysis, we propose a novel Semantics Alignment and Reinforcement Data-Free Quantization method for ViTs, termed SARDFQ. The overall framework is depicted in Fig.2. To address the semantic distortion issue, SARDFQ introduces Attention Priors Alignment (APA), where synthetic images are optimized to follow structured attention priors generated using Gaussian Mixture Models (GMMs). APA effectively aligns the semantics of synthetic images with real images, as validated by both visual and quantitative analyses. As shown in Fig. 1a, features of APA exhibit a closer alignment to those of real images, while quantitative results in Tab. 1 also confirm that the semantics of APA are more consistent to the real images, indicating enhanced semantics alignment. To address the semantic inadequacy issue, SARDFQ incorpo",
        "bbox": [89, 651, 482, 876],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "rates Multi-Semantic Reinforcement (MSR) and Softlabel Learning (SL). MSR utilizes localized patch optimization, which encourages different sub-patches of synthetic images to capture various semantics, reinforcing the rich semantics across images. SL applies multiple semantic targets to accommodate the learning of multi-semantic images augmented by MSR. As shown in Fig. 1b, synthetic images after applying MSR exhibit greater diversity in content and texture, providing reinforced semantics.",
        "bbox": [511, 90, 903, 226],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Experimental results across various ViT models and tasks demonstrate that SARDFQ presents substantial performance improvements. For example, SARDFQ achieves a $15.52\\%$ increase in top-1 accuracy on the ImageNet dataset for the W4A4 ViT-B model.",
        "bbox": [511, 227, 903, 301],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2. Related Works",
        "text_level": 1,
        "bbox": [513, 319, 661, 335],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2.1. Vision Transformers",
        "text_level": 1,
        "bbox": [511, 344, 707, 359],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "The great success of transformers in the natural language processing field has driven widespread attempts in the computer vision community to apply them to vision tasks [12, 28, 72]. ViT [20] is the pioneer that builds a transformer-based model to handle images, boosting the performance on the image classification task. DeiT [68] introduces an efficient teacher-student training strategy where a distillation token is employed to distill knowledge from the teacher model to the student model. Swin Transformers [54] builds an efficient and effective hierarchical model by introducing a shifted window-based self-attention mechanism. Other than the image classification task, the applications of ViTs also have broadened considerably, manifesting groundbreaking performance in object detection [7], image segmentation [9, 83], low-level vision [50], video recognition [1, 59], and medical image processing [64], etc. Nevertheless, the impressive performance of ViTs relies on a high number of parameters and significant computational overhead, preventing deployment in resource-constrained environments. Several recent efforts design lightweight ViTs, such as MobileViT [57], MiniVit [80], and TinyViT [73]. However, the model complexity is still unsatisfactory [47].",
        "bbox": [511, 367, 903, 700],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2.2. Network Quantization",
        "text_level": 1,
        "bbox": [511, 710, 720, 727],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Data-Driven Quantization. Model quantization reduces the complexity of neural networks by replacing full-precision weight and activation with the low-bit format. Data-driven quantization can be roughly divided into two categories: quantization-aware training (QAT) and posttraining quantization (PTQ). QAT is compute-heavy since it re-trains the quantized model with the full training data to retain performance [22, 25, 41, 43, 46, 52, 65, 74]. PTQ perform quantization with a tiny dataset and a reduced time overhead, harvesting widespread attention [3, 42]. The specific architecture of ViTs, such as LayerNorm and the self",
        "bbox": [511, 734, 903, 900],
        "page_idx": 1
    },
{
|
| 386 |
+
"type": "page_footnote",
|
| 387 |
+
"text": "2More results are presented in the appendix.",
|
| 388 |
+
"bbox": [
|
| 389 |
+
107,
|
| 390 |
+
886,
|
| 391 |
+
344,
|
| 392 |
+
900
|
| 393 |
+
],
|
| 394 |
+
"page_idx": 1
|
| 395 |
+
},
|
| 396 |
+
{
|
| 397 |
+
"type": "page_number",
|
| 398 |
+
"text": "12480",
|
| 399 |
+
"bbox": [
|
| 400 |
+
480,
|
| 401 |
+
944,
|
| 402 |
+
517,
|
| 403 |
+
955
|
| 404 |
+
],
|
| 405 |
+
"page_idx": 1
|
| 406 |
+
},
|
| 407 |
+
{
|
| 408 |
+
"type": "image",
|
| 409 |
+
"img_path": "images/1b7b933fd4ed13c1c2693d8e289055bfe0cf694f3afb23c5f54be139c7e73d04.jpg",
"image_caption": [
"Figure 2. SARDFQ Framework overview: Attention Priors Alignment (APA) employs randomly generated attention priors to improve semantics alignment. Multi-Semantic Reinforcement (MSR) learns the different regions of synthetic images with various semantics to enhance overall semantic richness. Meanwhile, Softlabel Learning (SL) adopts multiple semantic targets to ensure consistent learning of multi-semantic images augmented by MSR."
],
"image_footnote": [],
"bbox": [
171,
85,
823,
236
],
"page_idx": 2
},
{
"type": "text",
"text": "attention module, urges distinct PTQ methods compared to CNNs [18, 24, 46, 51, 53, 86]. For example, Liu et al. [55] develop a ranking loss to maintain the relative order of the self-attention activation. Unfortunately, both QAT and PTQ involve the original training data, causing concerns about data privacy and security issues in data-sensitive scenarios.",
"bbox": [
88,
316,
480,
407
],
"page_idx": 2
},
{
"type": "text",
"text": "Data-Free Quantization. DFQ quantizes models without accessing real data [2, 13-15, 27, 35, 40, 44, 45, 61]. Most previous DFQ methods focus on CNN, where the BNS can be adopted as the regularization term [6, 81]. However, BNS is infeasible for ViTs built on the LN. Recently, few efforts have been explored to accommodate ViTs [16, 33, 47, 48, 62]. PSAQ-ViT [47] introduces the first DFQ method for ViTs. They discover that Gaussian noise yields homogeneous patches, while the real image yields heterogeneous patches. Thus, patch similarity entropy (PSE) loss is proposed to optimize the Gaussian noise towards real-like images by making them showcase heterogeneous patches. Based on PSAQ-ViT, PSAQ-ViT V2 [48] further introduces an adversarial learning strategy [26]. [62] incorporates contrastive learning and proposes an iterative generation-quantization PTQ-based DFQ method. [33] proposes a sparse generation method to remove noisy and hallucination backgrounds in synthetic images.",
"bbox": [
93,
410,
483,
681
],
"page_idx": 2
},
{
"type": "text",
"text": "3. Method",
"text_level": 1,
"bbox": [
89,
698,
181,
714
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1. Preliminaries",
"text_level": 1,
"bbox": [
89,
724,
230,
739
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1.1. Quantizers",
"text_level": 1,
"bbox": [
89,
747,
215,
762
],
"page_idx": 2
},
{
"type": "text",
"text": "We employ the linear quantizer for all weights and activations, except for the attention scores, which use a log2 quantizer to handle highly non-negative and uneven values [24, 49, 51]. For the linear quantizer, given a full-precision input $\\mathbf{x}$ and bit-width $b$ , the quantized value $\\mathbf{x}_q$ and the de-quantized value $\\bar{\\mathbf{x}}$ are computed as follows:",
"bbox": [
89,
767,
482,
858
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {x} _ {q} = \\operatorname {c l i p} \\left(\\left\\lfloor \\frac {\\mathbf {x}}{\\Delta} \\right\\rceil + z, 0, 2 ^ {b} - 1\\right), \\bar {\\mathbf {x}} = \\Delta \\cdot \\left(\\mathbf {x} _ {q} - z\\right), \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
99,
869,
482,
898
],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\lfloor \\cdot \\rfloor$ denotes rounding to the nearest integer, and clip limits the value to $[0,2^{b} - 1]$ . Here, $\\Delta$ and $z$ are the scale factor and zero-point, respectively. For the log2 quantizer:",
"bbox": [
511,
316,
906,
364
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {x} _ {q} = \\operatorname {c l i p} \\left(\\left\\lfloor - \\log_ {2} \\frac {\\mathbf {x}}{\\Delta} \\right\\rceil , 0, 2 ^ {b} - 1\\right), \\bar {\\mathbf {x}} = \\Delta \\cdot 2 ^ {- \\mathbf {x} _ {q}}. \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
526,
375,
906,
402
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1.2. Data Synthesis",
"text_level": 1,
"bbox": [
511,
411,
663,
426
],
"page_idx": 2
},
{
"type": "text",
"text": "DFQ methods parameterize synthetic images and optimize them toward real-like images with a pre-trained full-precision model $F$ . Given a image $\\tilde{I}$ initialized from Gaussian noise, the one-hot loss [75] is introduced to learn label-related semantics:",
"bbox": [
511,
430,
905,
503
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} ^ {\\mathrm {O H}} (\\tilde {\\boldsymbol {I}}) = C E (F (\\tilde {\\boldsymbol {I}}), c) \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
625,
516,
906,
536
],
"page_idx": 2
},
{
"type": "text",
"text": "where $CE(\\cdot, \\cdot)$ represents the cross entropy, $c$ is a random class label, and $F(\\cdot)$ returns the predicted probability for image $\\tilde{I}$ .",
"bbox": [
511,
547,
905,
592
],
"page_idx": 2
},
{
"type": "text",
"text": "Moreover, total variance (TV) [77] loss is a smoothing regularization term to improve the image quality:",
"bbox": [
511,
593,
905,
625
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} ^ {\\mathrm {T V}} (\\tilde {\\boldsymbol {I}}) = \\iint | \\nabla \\tilde {\\boldsymbol {I}} (\\tau_ {1}, \\tau_ {2}) | d \\tau_ {1} d \\tau_ {2}. \\tag {4}\n$$\n",
"text_format": "latex",
"bbox": [
589,
633,
906,
666
],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\nabla \\tilde{I} (\\tau_1,\\tau_2)$ denotes the gradient at $\\tilde{I}$ at $(\\tau_{1},\\tau_{2})$",
"bbox": [
511,
678,
870,
694
],
"page_idx": 2
},
{
"type": "text",
"text": "To perform DFQ for ViTs, PSAQ-ViT [47] proposes patch similarity entropy (PSE) loss. It first compute patch similarity $\\Gamma_{l}[i,j] = \\frac{u_{i} \\cdot u_{j}}{||u_{i}||||u_{j}||}$ , where $u_{i}, u_{j}$ are feature vectors of MHSA outputs in $l$ -th block and $||\\cdot||$ denotes the $l_{2}$ norm. Then, it estimates the density function $\\hat{f}_{l}(x) = \\frac{1}{Mh}\\sum_{m=1}^{M}K\\left(\\frac{x - x_{m}}{h}\\right)$ , where $K(\\cdot)$ is a normal kernel, $h$ is the bandwidth, and $x_{m}$ is the kernel center derived from $\\Gamma_{l}$ . Finally, the PSE loss is defined as:",
"bbox": [
511,
695,
905,
820
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} ^ {\\mathrm {P S E}} (\\tilde {\\boldsymbol {I}}) = \\sum_ {l = 1} ^ {L} \\int \\hat {f} _ {l} (x) \\log [ \\hat {f} _ {l} (x) ] d x, \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [
576,
833,
906,
875
],
"page_idx": 2
},
{
"type": "text",
"text": "where $L$ is the number of blocks in the model.",
"bbox": [
511,
885,
797,
898
],
"page_idx": 2
},
{
"type": "page_number",
"text": "12481",
"bbox": [
480,
944,
517,
955
],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/8e49475232f1a4334a49796ef4e3b995aa80fada2b27fb929cd6cf44f8dd8986.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Method Classes</td><td>Real</td><td>PSAQ-ViT [47]</td><td>APA (Ours)</td></tr><tr><td>Class 1</td><td>0.68</td><td>0.44</td><td>0.64</td></tr><tr><td>Class 2</td><td>0.32</td><td>0.26</td><td>0.31</td></tr><tr><td>Class 3</td><td>0.41</td><td>0.31</td><td>0.36</td></tr></table>",
"bbox": [
94,
88,
480,
175
],
"page_idx": 3
},
{
"type": "text",
"text": "Table 1. Average cosine similarity of three randomly selected classes. For real images, the similarity is measured within the class itself, while for PSAQ-ViT and APA, the similarity is measured between synthetic and real images of the same class. The results show that APA achieves higher similarity than PSE loss, indicating aligned semantics.",
"bbox": [
89,
186,
483,
270
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2. Observations",
"text_level": 1,
"bbox": [
89,
297,
230,
314
],
"page_idx": 3
},
{
"type": "text",
"text": "Existing DFQ methods have made significant progress. However, through carefully analyzing the synthetic images, we reveal that these images suffer from issues of semantic distortion and semantic inadequacy, both of which hinder further advancement in the DFQ of ViTs.",
"bbox": [
89,
321,
483,
397
],
"page_idx": 3
},
{
"type": "text",
"text": "The semantic distortion issue refers to the significant divergence between the semantics of synthetic images and real images. To demonstrate this, we visualize the features of synthetic and real images in Fig. 1a. Note that the penultimate feature typically is regarded to represent the semantics of the input [4, 58, 78, 79]. It is clear that the features of PSAQ-ViT diverge significantly from real images, suggesting that these images fail to capture the true semantic distribution of real data. Tab. 1 quantitatively measures the semantics using the average cosine similarity (ranging from $-1$ to $1$ ). We also report the intra-class similarity within real images as an approximate upper bound for comparison. This result further supports the observation of low similarity between synthetic images from PSAQ-ViT and real images. For instance, for Class 1, the intra-class similarity within real images is 0.68, while PSAQ-ViT achieves only 0.44.",
"bbox": [
89,
398,
483,
641
],
"page_idx": 3
},
{
"type": "text",
"text": "The semantic inadequacy issue refers to the presence of dull regions in synthetic images, which contain redundant or non-semantic content [33, 37], hindering the model's learning process. As indicated in [13, 14], a diverse content and textures generally suggests rich information. As shown in Fig. 1b, many regions of synthetic images generated by PSAQ-ViT and PSAQ-ViT V2 exhibit a lack of diversity in content, with overly simplified textures. Specifically, the central region of PSAQ-ViT images only contains faint object structures, while PSAQ-ViT V2 images appear excessively smoothed and indistinct.",
"bbox": [
89,
642,
483,
808
],
"page_idx": 3
},
{
"type": "text",
"text": "For high-bit quantization, where model capacity is largely retained [42], the performance degradation remains relatively minor even using semantic distorted and inadequate images [47, 62]. However, in low-bit quantization, where model capacity is severely damaged and informative images are essential for recovering performance [13, 84],",
"bbox": [
89,
810,
483,
901
],
"page_idx": 3
},
{
"type": "text",
"text": "fine-tuning on these poor-quality images leads to inferior generalization to real datasets, resulting in limited performance. For example, as shown in Tab. 2, the W4A4 ViT-B fine-tuned on real images yields $68.16\\%$ , whereas PSAQ-ViT only achieves $36.32\\%$ .",
"bbox": [
511,
90,
905,
167
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/a76c498bc5d105f8c3d2cd2c8ad6b212c421d9237e70e6e0570cd8cd08d8f7fd.jpg",
"image_caption": [
"Figure 3. Comparison between attention maps."
],
"image_footnote": [],
"bbox": [
573,
179,
848,
281
],
"page_idx": 3
},
{
"type": "text",
"text": "3.3. Semantics Alignment and Reinforcement Data-Free Quantization",
"text_level": 1,
"bbox": [
511,
321,
905,
353
],
"page_idx": 3
},
{
"type": "text",
"text": "In the following, we introduce the proposed SARDFQ, whose framework is illustrated in Fig. 2.",
"bbox": [
511,
359,
905,
390
],
"page_idx": 3
},
{
"type": "text",
"text": "3.3.1. Attention Priors Alignment",
"text_level": 1,
"bbox": [
511,
397,
751,
412
],
"page_idx": 3
},
{
"type": "text",
"text": "In ViTs, the self-attention mechanism encodes semantic correlations between image regions, where high-response areas in attention maps strongly correlate with semantic-discriminative content [10, 20, 31]. However, existing DFQ methods overlook this intrinsic property in the generation process. As a result, as shown in Fig. 3, synthetic images often exhibit disordered and unnatural attention patterns, with attention maps either overly diffuse or misaligned toward peripheral regions. This undermines their ability to preserve semantic-discriminative content, causing semantic distortion, as demonstrated in Fig. 1a and Tab. 1. In response, we propose Attention Priors Alignment (APA), which improves semantics alignment by optimizing synthetic images to follow randomly generated structure attention priors.",
"bbox": [
511,
416,
905,
628
],
"page_idx": 3
},
{
"type": "text",
"text": "Specifically, given a synthetic image $\\tilde{I}$ , we first obtain its attention maps in the $h$ -th head of the $l$ -th block, denoting as $\\mathbf{A}_{l,h} \\in \\mathbb{R}^{N \\times N}$ , where $N$ represents the total number of tokens. In DeiT, the attention of the classification token toward other tokens serves as the indicator for semantic versus non-semantic parts [5]. Thus, we extract $\\mathbf{A}_{l,h}^c \\in \\mathbb{R}^{1 \\times (N - 1)}$ from $\\mathbf{A}_{l,h}$ , representing the attention of the classification token to all tokens except itself. We then randomly generate attention priors $\\tilde{\\mathbf{A}}_{l,h}$ , whose generation is detailed in the next part, and align $\\mathbf{A}_{l,h}^c$ with $\\tilde{\\mathbf{A}}_{l,h}$ by:",
"bbox": [
511,
628,
906,
782
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {l, h} (\\tilde {\\boldsymbol {I}}) = \\operatorname {M S E} \\left(\\mathbf {A} _ {l, h} ^ {c} - \\tilde {\\mathbf {A}} _ {l, h}\\right), \\tag {6}\n$$\n",
"text_format": "latex",
"bbox": [
604,
794,
906,
813
],
"page_idx": 3
},
{
"type": "text",
"text": "where MSE represents the mean squared error. For Swin models that do not use a classification token, we substitute $\\mathbf{A}_{l,h}^{c}$ in Eq.6 with the average attention map of all tokens [10]. As noted in [17], ViTs initially focus on all regions to capture low-level information in shallow blocks",
"bbox": [
511,
824,
905,
900
],
"page_idx": 3
},
{
"type": "page_number",
"text": "12482",
"bbox": [
480,
944,
519,
955
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/47a751ddad2314c95508a2a12f85e247727bb9004ac501eed58fe564b277eaf4.jpg",
"image_caption": [
"Figure 4. Examples of generated attention priors."
],
"image_footnote": [],
"bbox": [
153,
88,
217,
138
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/3c735d92212b727d406c580060dcabab8f5fe18157239ef6f24724c4a4698a16.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
222,
88,
285,
138
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/e9511c52c1df3beaa3c5e411286edf72afd6a1e11835c3eaf0e2f6b22a8878de.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
290,
88,
354,
138
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/0d1652aa491f07440ea2ee43d6f9c93ab483199aba85c303bd36c784fca49c7c.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
359,
88,
423,
138
],
"page_idx": 4
},
{
"type": "text",
"text": "and gradually shift their focus toward semantic regions in deeper blocks to extract high-level semantic information. Leveraging this property, we selectively apply $\\mathcal{L}_{l,h}^{\\mathrm{APA}}$ to deeper blocks, progressively aligning attention towards semantically relevant areas. The total APA loss is computed as a depth-weighted sum of the individual Eq. 6 across these deeper blocks:",
"bbox": [
89,
179,
483,
285
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} ^ {\\mathrm {A P A}} (\\tilde {\\boldsymbol {I}}) = \\sum_ {l = S} ^ {L} \\sum_ {h = 1} ^ {H} \\frac {l}{L} \\mathcal {L} _ {l, h} (\\tilde {\\boldsymbol {I}}), \\tag {7}\n$$\n",
"text_format": "latex",
"bbox": [
183,
294,
480,
335
],
"page_idx": 4
},
{
"type": "text",
"text": "where $S$ is a pre-given hyper-parameter denoting the start of deep blocks and are experimentally set $S = \\frac{L}{2}$ .",
"bbox": [
89,
344,
483,
375
],
"page_idx": 4
},
{
"type": "text",
"text": "Attention Priors Generation. To generate attention priors $\\tilde{\\mathbf{A}}_{l,h}$ , Gaussian Mixture Models (GMMs) are employed as it is the most commonly used distribution with high flexibility. ViTs utilize different attention heads to capture diverse patterns and learn varied, informative representations [8]. Thus, we use distinct GMMs for each head. Note that the goal here is not to replicate real attention maps precisely, but to generate simulated structure attention priors to guide the learning of synthetic images.",
"bbox": [
89,
375,
483,
508
],
"page_idx": 4
},
{
"type": "text",
"text": "In particular, we first initialize an all zero matrix $\\tilde{\\mathbf{P}}\\in$ $\\mathbb{R}^{H\\times W}$ , where $H = W = \\sqrt{N - 1}$ for DeiT, $H =$ $W = \\sqrt{N}$ for Swin. For example, for DeiT-S, the $H =$ $W = \\sqrt{196} = 14$ . Then, we generate $k$ two-dimensional Gaussian distributions, where $k$ is randomly sampled from $1\\sim K_{APA}$ and $K_{APA}$ is set to 5 in all experiments. Each Gaussian has its mean and covariance3. Consequently, the matrix element at the $i$ -th row and $j$ -th column, $\\tilde{\\mathbf{P}} [i,j]$ , is determined by:",
"bbox": [
89,
510,
483,
647
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\tilde {\\mathbf {P}} [ i, j ] = \\max _ {m = 1, \\dots , k} \\mathbf {G} ^ {m} [ i, j ]. \\tag {8}\n$$\n",
"text_format": "latex",
"bbox": [
191,
654,
482,
679
],
"page_idx": 4
},
{
"type": "text",
"text": "Then, $\\tilde{\\mathbf{P}}$ is normalized by:",
"bbox": [
89,
691,
267,
705
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\tilde {\\mathbf {P}} _ {n} = \\frac {\\tilde {\\mathbf {P}}}{\\sum \\tilde {\\mathbf {P}}} \\cdot (1 - x). \\tag {9}\n$$\n",
"text_format": "latex",
"bbox": [
210,
715,
482,
753
],
"page_idx": 4
},
{
"type": "text",
"text": "Here, for DeiT that incorporates the classification token, $x$ is randomly sampled from a uniform distribution $U(0,1)$ , representing the proportion of the attention score that the classification token allocates to itself. For Swin, which does not use a classification token, $x$ is set to 0. Finally, $\\tilde{\\mathbf{P}}_n$ is flatten to match the dimensionality:",
"bbox": [
89,
760,
483,
851
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\tilde {\\mathbf {A}} _ {l, h} = \\operatorname {f l a t t e n} \\left(\\tilde {\\mathbf {P}} _ {n}\\right). \\tag {10}\n$$\n",
"text_format": "latex",
"bbox": [
217,
859,
482,
878
],
"page_idx": 4
},
{
"type": "text",
"text": "Fig. 4 displays examples of generated attention priors.",
"bbox": [
531,
90,
888,
106
],
"page_idx": 4
},
{
"type": "text",
"text": "Discussion. APA prevents attention disorder, ensuring that synthetic images exhibit more coherent and natural attention patterns, as demonstrated in Fig. 3. As a result, APA selectively enhances the responses in certain regions, effectively emphasizing semantic-discriminative regions within synthetic images and thus prompting the discriminative features. Although simple, APA enables synthetic images to align better with the real semantics, as validated by both visual and quantitative evaluations. As shown in Fig. 1a, compared to PSAQ-ViT, features obtained after applying the APA loss are more closely aligned with real images, indicating better semantic alignment. The quantitative results in Tab. 1 further support that APA achieves superior semantic alignment. For example, in Class 1, the intra-class similarity for PSAQ-ViT is 0.44, whereas APA achieves a higher similarity of 0.64.",
"bbox": [
511,
107,
906,
348
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3.2. Multi-Semantic Reinforcement",
"text_level": 1,
"bbox": [
511,
356,
774,
369
],
"page_idx": 4
},
{
"type": "text",
"text": "Current DFQ methods for ViTs [47, 48, 62] optimize images through global optimization, treating the entire image as a single semantic unit. However, this is affected by low-rank structural regularity [34], where adjacent pixels exhibit strong similarity, leading to dull regions with redundant or non-semantic content for model learning [37]. This issue is further exacerbated by the tokenization mechanism in ViTs, as processing images in fixed-size patches make dull regions increase at the patch level [19, 70]. Consequently, as shown in Fig. 1b, synthetic images generated by existing methods exhibit large dull regions, resulting in semantic inadequacy [37]. In response, we propose Multi-Semantic Reinforcement (MSR), which applies localized patch optimization to enhance semantic richness by learning local patches with distinct semantics.",
"bbox": [
511,
375,
906,
601
],
"page_idx": 4
},
{
"type": "text",
"text": "Specifically, for a synthetic image $\\tilde{I}$ , instead of feeding only the entire image, we also feed its patches and optimize them individually. Initially, we select $m$ nonoverlapping patches, where $m$ is chosen randomly from the set $\\{1,2,\\dots,K_{MSR}\\}$ , with $K_{MSR}$ set to 4 in all experiments. These $m$ patches are then cropped and resized to match the model's input dimensions:",
"bbox": [
511,
601,
905,
708
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\{\\tilde {\\boldsymbol {I}} _ {M S R} ^ {i} \\} _ {i = 1, \\dots , m} = \\operatorname {r e s i z e} \\left(\\operatorname {c r o p} _ {m} (\\tilde {\\boldsymbol {I}})\\right), \\tag {11}\n$$\n",
"text_format": "latex",
"bbox": [
578,
718,
903,
738
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\text{crop}_m(\\cdot)$ crops $m$ non-overlapping patches from input, and $\\text{resize}(\\cdot)$ is the resize function. Each patch, denoted as $\\tilde{I}_{MSR}^i$ , is treated as a new image with an assigned semantic target $c^i$ . Note that the gradient is backpropagated only to update the corresponding patch in the original image, leaving the rest of the image unaffected.",
"bbox": [
511,
750,
905,
839
],
"page_idx": 4
},
{
"type": "text",
"text": "Softlabel Learning. The one-hot loss (Eq. 3) only learns the semantics of the target class, making it unsuitable for $\\tilde{\\pmb{I}}$ under MSR, as its patches $\\{\\tilde{I}_{MSR}^i\\}_{i = 1,\\dots ,m}$ contain distinct semantics. In response, we propose Softlabel Learning",
"bbox": [
511,
840,
906,
901
],
"page_idx": 4
},
{
"type": "page_footnote",
"text": "3The pseudo code is detailed in appendix.",
"bbox": [
107,
886,
331,
900
],
"page_idx": 4
},
{
"type": "page_number",
"text": "12483",
"bbox": [
480,
944,
517,
955
],
"page_idx": 4
},
{
"type": "text",
"text": "(SL), which applies multiple semantic targets to accommodate the learning of images augmented by MSR. Specifically, we first sample $Z \\in \\mathbb{R}^{C} \\sim U(0,1)$ , then modify its values by:",
"bbox": [
89,
90,
480,
151
],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\left\\{ \\begin{array}{l l} Z [ c ^ {i} ] \\sim U (\\epsilon_ {1}, \\epsilon_ {2}), & \\text {f o r} \\tilde {\\boldsymbol {I}} _ {M S R} ^ {i}, \\\\ Z [ c ^ {1}, \\ldots , c ^ {m} ] \\sim U (\\epsilon_ {1}, \\epsilon_ {2}), & \\text {f o r} \\tilde {\\boldsymbol {I}}, \\end{array} \\right.\n$$\n",
"text_format": "latex",
"bbox": [
145,
161,
428,
200
],
"page_idx": 5
},
{
"type": "text",
"text": "where $U(\\epsilon_1, \\epsilon_2)$ denotes the uniform distribution over the interval $[\\epsilon_1, \\epsilon_2]$ , $m$ is the number of patches determined in MSR, and $\\epsilon_1$ and $\\epsilon_2$ control the softness, empirically set to 5 and 10, respectively, in all experiments. The soft target is defined as $T_s = \\mathrm{softmax}(Z)$ , and the SL loss is:",
|
| 1155 |
+
"bbox": [
|
| 1156 |
+
89,
|
| 1157 |
+
210,
|
| 1158 |
+
483,
|
| 1159 |
+
287
|
| 1160 |
+
],
|
| 1161 |
+
"page_idx": 5
|
| 1162 |
+
},
|
| 1163 |
+
{
|
| 1164 |
+
"type": "equation",
|
| 1165 |
+
"text": "\n$$\n\\mathcal{L}^{\\mathrm{SL}}\\left(\\tilde{I} / \\tilde{I}_{MSR}^{i}\\right) = SCE\\left(F\\left(\\tilde{I} / \\tilde{I}_{MSR}^{i}\\right), T_{s}\\right), \\tag{12}\n$$\n",
|
| 1166 |
+
"text_format": "latex",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
129,
|
| 1169 |
+
295,
|
| 1170 |
+
480,
|
| 1171 |
+
315
|
| 1172 |
+
],
|
| 1173 |
+
"page_idx": 5
|
| 1174 |
+
},
|
| 1175 |
+
{
|
| 1176 |
+
"type": "text",
|
| 1177 |
+
"text": "where $SCE(\\cdot, \\cdot)$ is the soft cross entropy and $F(\\cdot)$ returns the predicted probability for its input. SL facilitates smooth learning across semantic targets, ensuring that MSR-augmented images receive consistent rather than conflicting supervision.",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
89,
|
| 1180 |
+
321,
|
| 1181 |
+
483,
|
| 1182 |
+
398
|
| 1183 |
+
],
|
| 1184 |
+
"page_idx": 5
|
| 1185 |
+
},
|
| 1186 |
+
{
|
| 1187 |
+
"type": "text",
|
| 1188 |
+
"text": "Discussion. By leveraging localized patch optimization, MSR ensures that each patch contributes unique semantics, forcing synthetic images to capture diverse features rather than being dominated by large homogeneous dull regions. As demonstrated by the richer and more diverse content and textures in Fig. 1b, MSR effectively reduces dull regions and enhances semantic richness in synthetic images. Moreover, MSR transforms synthetic images into composites of multiple semantic objects rather than a single unit, thereby providing more distinct semantic samples, i.e., $\\{\\tilde{I}_{MSR}^i\\}_{i = 1,\\dots ,m}$ , for model training. Unlike traditional cropping used in data augmentation, which aims to improve classification robustness by training with cropped patches labeled with the original class, MSR aims for semantic richness within synthetic images, ultimately enabling accurate data-free quantization.",
|
| 1189 |
+
"bbox": [
|
| 1190 |
+
89,
|
| 1191 |
+
398,
|
| 1192 |
+
483,
|
| 1193 |
+
641
|
| 1194 |
+
],
|
| 1195 |
+
"page_idx": 5
|
| 1196 |
+
},
|
| 1197 |
+
{
|
| 1198 |
+
"type": "text",
|
| 1199 |
+
"text": "3.4. Overall Pipeline",
|
| 1200 |
+
"text_level": 1,
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
89,
|
| 1203 |
+
648,
|
| 1204 |
+
251,
|
| 1205 |
+
666
|
| 1206 |
+
],
|
| 1207 |
+
"page_idx": 5
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "text",
|
| 1211 |
+
"text": "The overall pipeline consists of two stages: data synthesis and quantized network learning. The first stage uses the proposed SARDFQ to produce synthetic images. The second stage fine-tunes the quantized model using the generated synthetic images.",
|
| 1212 |
+
"bbox": [
|
| 1213 |
+
89,
|
| 1214 |
+
671,
|
| 1215 |
+
483,
|
| 1216 |
+
747
|
| 1217 |
+
],
|
| 1218 |
+
"page_idx": 5
|
| 1219 |
+
},
|
| 1220 |
+
{
|
| 1221 |
+
"type": "text",
|
| 1222 |
+
"text": "3.4.1. Data Synthesis",
|
| 1223 |
+
"text_level": 1,
|
| 1224 |
+
"bbox": [
|
| 1225 |
+
89,
|
| 1226 |
+
753,
|
| 1227 |
+
241,
|
| 1228 |
+
768
|
| 1229 |
+
],
|
| 1230 |
+
"page_idx": 5
|
| 1231 |
+
},
|
| 1232 |
+
{
|
| 1233 |
+
"type": "text",
|
| 1234 |
+
"text": "In the data synthesis stage, we combine the proposed APA loss of Eq. 7, SL loss of Eq. 12, and TV loss of Eq. 4 to formulate the objective function as follows:",
|
| 1235 |
+
"bbox": [
|
| 1236 |
+
89,
|
| 1237 |
+
772,
|
| 1238 |
+
483,
|
| 1239 |
+
818
|
| 1240 |
+
],
|
| 1241 |
+
"page_idx": 5
|
| 1242 |
+
},
|
| 1243 |
+
{
|
| 1244 |
+
"type": "equation",
|
| 1245 |
+
"text": "\n$$\n\\mathcal {L} _ {G} (\\tilde {\\boldsymbol {I}}) = \\alpha_ {1} \\mathcal {L} ^ {\\mathrm {A P A}} (\\tilde {\\boldsymbol {I}}) + \\mathcal {L} ^ {\\mathrm {S L}} (\\tilde {\\boldsymbol {I}}) + 0. 0 5 \\mathcal {L} ^ {\\mathrm {T V}} (\\tilde {\\boldsymbol {I}}). \\tag {13}\n$$\n",
|
| 1246 |
+
"text_format": "latex",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
107,
|
| 1249 |
+
827,
|
| 1250 |
+
483,
|
| 1251 |
+
847
|
| 1252 |
+
],
|
| 1253 |
+
"page_idx": 5
|
| 1254 |
+
},
|
| 1255 |
+
{
|
| 1256 |
+
"type": "text",
|
| 1257 |
+
"text": "where $\\alpha_{1}$ is a hyperparameter determined by grid search. Note that the weight of the TV loss is fixed to 0.05, following [47], to avoid a cumbersome hyperparameter search.",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
89,
|
| 1260 |
+
854,
|
| 1261 |
+
483,
|
| 1262 |
+
901
|
| 1263 |
+
],
|
| 1264 |
+
"page_idx": 5
|
| 1265 |
+
},
|
| 1266 |
+
{
|
| 1267 |
+
"type": "text",
|
| 1268 |
+
"text": "3.4.2. Quantized Network Learning",
|
| 1269 |
+
"text_level": 1,
|
| 1270 |
+
"bbox": [
|
| 1271 |
+
513,
|
| 1272 |
+
90,
|
| 1273 |
+
766,
|
| 1274 |
+
107
|
| 1275 |
+
],
|
| 1276 |
+
"page_idx": 5
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "text",
|
| 1280 |
+
"text": "Recent DFQ methods have introduced PTQ techniques for learning quantized models due to their advantages in speed, memory efficiency, and performance [35, 62]. Thus, following the success of [42, 71], we fine-tune the quantized network block by block. Specifically, denote $\\mathbf{X}_l$ as the outputs of the $l$ -th block of the full-precision model, and $\\bar{\\mathbf{X}}_l$ as the outputs of the quantized counterpart. The reconstruction loss is defined as:",
|
| 1281 |
+
"bbox": [
|
| 1282 |
+
511,
|
| 1283 |
+
109,
|
| 1284 |
+
906,
|
| 1285 |
+
231
|
| 1286 |
+
],
|
| 1287 |
+
"page_idx": 5
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "equation",
|
| 1291 |
+
"text": "\n$$\n\\mathcal {L} _ {l} = \\left\\| \\mathbf {X} _ {l} - \\bar {\\mathbf {X}} _ {l} \\right\\| _ {2}. \\tag {14}\n$$\n",
|
| 1292 |
+
"text_format": "latex",
|
| 1293 |
+
"bbox": [
|
| 1294 |
+
643,
|
| 1295 |
+
239,
|
| 1296 |
+
906,
|
| 1297 |
+
258
|
| 1298 |
+
],
|
| 1299 |
+
"page_idx": 5
|
| 1300 |
+
},
|
| 1301 |
+
{
|
| 1302 |
+
"type": "text",
|
| 1303 |
+
"text": "Here, $\\mathcal{L}_l$ is back-propagated only to update the weights within the $l$ -th block. Note that for a fair comparison, all compared methods adopt the same quantized network learning stage.",
|
| 1304 |
+
"bbox": [
|
| 1305 |
+
511,
|
| 1306 |
+
266,
|
| 1307 |
+
906,
|
| 1308 |
+
313
|
| 1309 |
+
],
|
| 1310 |
+
"page_idx": 5
|
| 1311 |
+
},
|
| 1312 |
+
{
|
| 1313 |
+
"type": "text",
|
| 1314 |
+
"text": "4. Experiment",
|
| 1315 |
+
"text_level": 1,
|
| 1316 |
+
"bbox": [
|
| 1317 |
+
513,
|
| 1318 |
+
324,
|
| 1319 |
+
638,
|
| 1320 |
+
342
|
| 1321 |
+
],
|
| 1322 |
+
"page_idx": 5
|
| 1323 |
+
},
|
| 1324 |
+
{
|
| 1325 |
+
"type": "text",
|
| 1326 |
+
"text": "4.1. Implementation Details",
|
| 1327 |
+
"text_level": 1,
|
| 1328 |
+
"bbox": [
|
| 1329 |
+
511,
|
| 1330 |
+
349,
|
| 1331 |
+
730,
|
| 1332 |
+
366
|
| 1333 |
+
],
|
| 1334 |
+
"page_idx": 5
|
| 1335 |
+
},
|
| 1336 |
+
{
|
| 1337 |
+
"type": "text",
|
| 1338 |
+
"text": "Models and Tasks. We evaluate the performance of SARDFQ by testing quantized ViT-S/B [20], DeiT-T/S/B [68], and Swin-S/B [54] on the classification task using ImageNet [63]. The pre-trained models are downloaded from the timm library. In the appendix, we further provide results on detection and segmentation tasks.",
|
| 1339 |
+
"bbox": [
|
| 1340 |
+
511,
|
| 1341 |
+
371,
|
| 1342 |
+
905,
|
| 1343 |
+
462
|
| 1344 |
+
],
|
| 1345 |
+
"page_idx": 5
|
| 1346 |
+
},
|
| 1347 |
+
{
|
| 1348 |
+
"type": "text",
|
| 1349 |
+
"text": "Comparison methods. We compare our SARDFQ against Gaussian noise, real images, and previous methods, including SMI [33], PSAQ-ViT [47], and its subsequent version, PSAQ-ViT V2 [48]. For a fair comparison, we generate synthetic images using their methods and apply our quantized network learning strategy. We use the official code for SMI and PSAQ-ViT to reproduce their images, while PSAQ-ViT V2 is re-implemented by us, as no official code is available.",
|
| 1350 |
+
"bbox": [
|
| 1351 |
+
511,
|
| 1352 |
+
462,
|
| 1353 |
+
905,
|
| 1354 |
+
595
|
| 1355 |
+
],
|
| 1356 |
+
"page_idx": 5
|
| 1357 |
+
},
|
| 1358 |
+
{
|
| 1359 |
+
"type": "text",
|
| 1360 |
+
"text": "Experimental settings. All experiments were conducted using the PyTorch framework [60] on a single NVIDIA 3090 GPU. In the data synthesis stage, synthetic images were initialized with standard Gaussian noise, generating 32 images in total. The Adam optimizer [38] with $\\beta_{1} = 0.5$ , $\\beta_{2} = 0.9$ was used, with learning rates of 0.25 for Swin and 0.2 for others, and a total of 1,000 iterations. For all models, $K_{APA}$ , $K_{MSR}$ , $\\epsilon_{1}$ , and $\\epsilon_{2}$ were set to 5, 4, 5, and 10, respectively, based on a search with W4A4 DeiT-S. The value of $\\alpha_{1}$ was determined by grid search for each model: 1e5 for DeiT-T/S, 1e4 for DeiT-B, 100 for ViT-B and Swin-B, 10 for Swin-S, and 1 for ViT-S. Although further hyperparameter search may improve performance, the current settings already yield superior results. In the quantized network learning stage, following [85], the Adam optimizer with $\\beta_{1} = 0.9$ , $\\beta_{2} = 0.999$ was used, with weight decay set to 0 and an initial learning rate of 4e-5, adjusted",
|
| 1361 |
+
"bbox": [
|
| 1362 |
+
511,
|
| 1363 |
+
598,
|
| 1364 |
+
906,
|
| 1365 |
+
854
|
| 1366 |
+
],
|
| 1367 |
+
"page_idx": 5
|
| 1368 |
+
},
|
| 1369 |
+
{
|
| 1370 |
+
"type": "page_footnote",
|
| 1371 |
+
"text": "4In the appendix, we present practical efficiency results and additional performance comparisons with methods such as CLAMP-ViT [62] under W8/A8 and W4/A8 settings.",
|
| 1372 |
+
"bbox": [
|
| 1373 |
+
511,
|
| 1374 |
+
862,
|
| 1375 |
+
906,
|
| 1376 |
+
900
|
| 1377 |
+
],
|
| 1378 |
+
"page_idx": 5
|
| 1379 |
+
},
|
| 1380 |
+
{
|
| 1381 |
+
"type": "page_number",
|
| 1382 |
+
"text": "12484",
|
| 1383 |
+
"bbox": [
|
| 1384 |
+
480,
|
| 1385 |
+
944,
|
| 1386 |
+
519,
|
| 1387 |
+
957
|
| 1388 |
+
],
|
| 1389 |
+
"page_idx": 5
|
| 1390 |
+
},
|
| 1391 |
+
{
|
| 1392 |
+
"type": "table",
|
| 1393 |
+
"img_path": "images/33299b202b35ff497d1ff72b39a86307dea4795633c4b17dd51632f5f95dd691.jpg",
|
| 1394 |
+
"table_caption": [],
|
| 1395 |
+
"table_footnote": [],
|
| 1396 |
+
"table_body": "<table><tr><td>Model</td><td>W/A</td><td>Real</td><td>Gaussian noise</td><td>PSAQ-ViT [47]</td><td>PSAQ-ViT V2 [48]</td><td>SMI [33]</td><td>SARDFQ (Ours)</td></tr><tr><td rowspan=\"3\">ViT-S (81.39)</td><td>4/4</td><td>66.57</td><td>6.02</td><td>47.24</td><td>41.53</td><td>24.33 / 29.41</td><td>50.32</td></tr><tr><td>5/5</td><td>76.69</td><td>36.77</td><td>71.59</td><td>68.41</td><td>61.33 / 65.19</td><td>74.31</td></tr><tr><td>6/6</td><td>79.46</td><td>61.20</td><td>77.20</td><td>74.76</td><td>72.95 / 72.46</td><td>78.40</td></tr><tr><td rowspan=\"3\">ViT-B (84.54)</td><td>4/4</td><td>68.16</td><td>0.15</td><td>36.32</td><td>26.32</td><td>35.27 / 19.67</td><td>51.84</td></tr><tr><td>5/5</td><td>79.21</td><td>4.16</td><td>68.48</td><td>67.95</td><td>67.53 / 57.13</td><td>70.70</td></tr><tr><td>6/6</td><td>81.89</td><td>55.18</td><td>76.65</td><td>71.87</td><td>76.33 / 69.82</td><td>79.16</td></tr><tr><td rowspan=\"3\">DeiT-T (72.21)</td><td>4/4</td><td>56.60</td><td>17.43</td><td>47.75</td><td>30.20</td><td>30.14 / 13.18</td><td>52.06</td></tr><tr><td>5/5</td><td>67.09</td><td>43.49</td><td>64.10</td><td>55.16</td><td>56.44 / 39.35</td><td>66.41</td></tr><tr><td>6/6</td><td>69.81</td><td>56.23</td><td>68.37</td><td>62.77</td><td>64.03 / 44.39</td><td>69.73</td></tr><tr><td rowspan=\"3\">DeiT-S (79.85)</td><td>4/4</td><td>68.46</td><td>20.89</td><td>58.28</td><td>45.53</td><td>42.77 / 11.71</td><td>62.29</td></tr><tr><td>5/5</td><td>75.06</td><td>41.06</td><td>71.90</td><td>63.14</td><td>62.88 / 29.13</td><td>74.06</td></tr><tr><td>6/6</td><td>77.87</td><td>65.63</td><td>75.85</td><td>68.85</td><td>71.65 / 37.69</td><td>77.31</td></tr><tr><td rowspan=\"3\">DeiT-B (81.85)</td><td>4/4</td><td>77.07</td><td>47.20</td><td>71.75</td><td>66.43</td><td>65.33 / 59.04</td><td>72.17</td></tr><tr><td>5/5</td><td>79.86</td><td>65.46</td><td>78.45</td><td>76.77</td><td>76.74 / 75.33</td><td>78.72</td></tr><tr><td>6/6</td><td>80.90</td><td>62.79</td><td>80.00</td><td>79.22</td><td>78.81 / 77.66</td><td>80.15</td></tr><tr><td rowspan=\"3\">Swin-S (83.20)</td><td>4/4</td><td>78.12</td><td>31.92</td><td>73.19</td><td>65.55</td><td>65.85</td><td>74.74</td></tr><tr><td>5/5</td><td>80.51</td><td>52.10</td><td>78.15</td><td>74.37</td><td>75.41</td><td>79.56</td></tr><tr><td>6/6</td><td>80.60</td><td>65.66</td><td>79.74</td><td>78.50</td><td>78.25</td><td>80.56</td></tr><tr><td rowspan=\"3\">Swin-B (85.27)</td><td>4/4</td><td>78.80</td><td>30.14</td><td>71.84</td><td>67.42</td><td>65.23</td><td>76.42</td></tr><tr><td>5/5</td><td>82.51</td><td>35.28</td><td>78.50</td><td>77.20</td><td>75.25</td><td>80.82</td></tr><tr><td>6/6</td><td>82.64</td><td>67.37</td><td>82.00</td><td>81.41</td><td>80.30</td><td>83.03</td></tr></table>",
|
| 1397 |
+
"bbox": [
|
| 1398 |
+
102,
|
| 1399 |
+
89,
|
| 1400 |
+
893,
|
| 1401 |
+
474
|
| 1402 |
+
],
|
| 1403 |
+
"page_idx": 6
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "text",
|
| 1407 |
+
"text": "Table 2. Quantization results on ImageNet dataset, with top-1 accuracy $(\\%)$ reported. The performance of the full-precision model is listed below the model name. \"W/A\" denotes the bit-width of weights/activations. \"Real\" refers to using real images. For SMI [33], we provide the performance of using dense (normal-sized numbers) and sparse (smaller-sized numbers) synthetic images, respectively. Note that for Swin models, we do not provide the results for sparse synthetic images as the sparse generation method of SMI is infeasible.",
|
| 1408 |
+
"bbox": [
|
| 1409 |
+
89,
|
| 1410 |
+
484,
|
| 1411 |
+
906,
|
| 1412 |
+
542
|
| 1413 |
+
],
|
| 1414 |
+
"page_idx": 6
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "text",
|
| 1418 |
+
"text": "via cosine decay for 100 iterations. A channel-wise quantizer was used for weights, and a layer-wise quantizer for activations, with all matrix multiplications in ViTs quantized [47-49].",
|
| 1419 |
+
"bbox": [
|
| 1420 |
+
89,
|
| 1421 |
+
556,
|
| 1422 |
+
482,
|
| 1423 |
+
617
|
| 1424 |
+
],
|
| 1425 |
+
"page_idx": 6
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "text",
|
| 1429 |
+
"text": "4.2. Quantization Results",
|
| 1430 |
+
"text_level": 1,
|
| 1431 |
+
"bbox": [
|
| 1432 |
+
89,
|
| 1433 |
+
633,
|
| 1434 |
+
289,
|
| 1435 |
+
648
|
| 1436 |
+
],
|
| 1437 |
+
"page_idx": 6
|
| 1438 |
+
},
|
| 1439 |
+
{
|
| 1440 |
+
"type": "text",
|
| 1441 |
+
"text": "The quantization results are presented in Tab. 2. Our SARDFQ demonstrates consistent improvements across various quantization bit-width configurations, particularly with low bit-width settings. Specifically, for ViT-S, SARDFQ improves the performance by $3.08\\%$ in the W4/A4 setting, $2.72\\%$ in the W5/A5 setting, and $1.20\\%$ in the W6/A6 setting. For ViT-B, SARDFQ achieves performance gains of $15.52\\%$ in the W4/A4 setting, $2.22\\%$ in the W5/A5 setting, and $2.51\\%$ in the W6/A6 setting. Results on DeiT also demonstrate the effectiveness of SARDFQ. For example, on DeiT-T, SARDFQ shows a marked improvement by increasing top-1 accuracy by $4.31\\%$ in the W4/A4 setting, $2.31\\%$ in the W5/A5 setting, and $1.36\\%$ in the W6/A6 setting. For DeiT-S, SARDFQ enhances top-1 accuracy by $4.01\\%$ in the W4/A4 setting, $2.16\\%$ in the W5/A5 setting, and $1.46\\%$ in the W6/A6 setting. The quan",
|
| 1442 |
+
"bbox": [
|
| 1443 |
+
89,
|
| 1444 |
+
659,
|
| 1445 |
+
482,
|
| 1446 |
+
900
|
| 1447 |
+
],
|
| 1448 |
+
"page_idx": 6
|
| 1449 |
+
},
|
| 1450 |
+
{
|
| 1451 |
+
"type": "text",
|
| 1452 |
+
"text": "tization results of Swin-S/B also affirm the superiority of our SARDFQ in enhancing model accuracy under different quantization configurations. In particular, for Swin-S, the proposed SARDFQ increases the accuracy by $1.55\\%$ for the W4/A4 setting, $1.41\\%$ for the W5/A5 setting, and $0.82\\%$ for the W6/A6 setting, respectively. When it comes to Swin-B, the proposed SARDFQ increases the accuracy by $4.58\\%$ for the W4/A4 setting, $2.32\\%$ for the W5/A5 setting, and $1.03\\%$ for the W6/A6 setting, respectively.",
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
511,
|
| 1455 |
+
556,
|
| 1456 |
+
906,
|
| 1457 |
+
693
|
| 1458 |
+
],
|
| 1459 |
+
"page_idx": 6
|
| 1460 |
+
},
|
| 1461 |
+
{
|
| 1462 |
+
"type": "text",
|
| 1463 |
+
"text": "4.3. Ablation Study",
|
| 1464 |
+
"text_level": 1,
|
| 1465 |
+
"bbox": [
|
| 1466 |
+
511,
|
| 1467 |
+
708,
|
| 1468 |
+
666,
|
| 1469 |
+
724
|
| 1470 |
+
],
|
| 1471 |
+
"page_idx": 6
|
| 1472 |
+
},
|
| 1473 |
+
{
|
| 1474 |
+
"type": "text",
|
| 1475 |
+
"text": "All ablation studies are conducted on the W4A4 DeiT-S.",
|
| 1476 |
+
"bbox": [
|
| 1477 |
+
511,
|
| 1478 |
+
732,
|
| 1479 |
+
885,
|
| 1480 |
+
747
|
| 1481 |
+
],
|
| 1482 |
+
"page_idx": 6
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "text",
|
| 1486 |
+
"text": "Analysis of APA, MSR, and SL. We analyze the effectiveness of the proposed APA (Sec. 3.3.1), MSR (Sec. 3.3.2), and SL (Eq. 12) in Tab. 3. Adding APA and SL individually to the baseline increases accuracy. Notably, APA boosts performance from $51.73\\%$ to $60.26\\%$ , confirming its effectiveness in aligning semantics (Sec. 3.3.1). Applying MSR alone slightly decreases accuracy from $51.73\\%$ to $50.75\\%$ , indicating that the one-hot loss is unsuitable for MSR-augmented synthetic images. However, when both MSR and SL are applied, accuracy rises to $56.08\\%$ , sug-",
|
| 1487 |
+
"bbox": [
|
| 1488 |
+
511,
|
| 1489 |
+
750,
|
| 1490 |
+
906,
|
| 1491 |
+
902
|
| 1492 |
+
],
|
| 1493 |
+
"page_idx": 6
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"type": "page_number",
|
| 1497 |
+
"text": "12485",
|
| 1498 |
+
"bbox": [
|
| 1499 |
+
480,
|
| 1500 |
+
945,
|
| 1501 |
+
517,
|
| 1502 |
+
955
|
| 1503 |
+
],
|
| 1504 |
+
"page_idx": 6
|
| 1505 |
+
},
|
| 1506 |
+
{
|
| 1507 |
+
"type": "table",
|
| 1508 |
+
"img_path": "images/206a27669ba58a5391cf71ee5a1676d1fde71dbe5309b8c26d17b7bd17575392.jpg",
|
| 1509 |
+
"table_caption": [],
|
| 1510 |
+
"table_footnote": [],
|
| 1511 |
+
"table_body": "<table><tr><td>APA</td><td>MSR</td><td>SL</td><td>Acc. (%)</td></tr><tr><td colspan=\"3\">Baseline</td><td>51.73</td></tr><tr><td>✓</td><td></td><td></td><td>60.26</td></tr><tr><td></td><td>✓</td><td></td><td>50.75</td></tr><tr><td></td><td></td><td>✓</td><td>52.02</td></tr><tr><td>✓</td><td>✓</td><td></td><td>61.58</td></tr><tr><td>✓</td><td></td><td>✓</td><td>60.51</td></tr><tr><td></td><td>✓</td><td>✓</td><td>56.08</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>62.29</td></tr></table>",
|
| 1512 |
+
"bbox": [
|
| 1513 |
+
169,
|
| 1514 |
+
88,
|
| 1515 |
+
406,
|
| 1516 |
+
250
|
| 1517 |
+
],
|
| 1518 |
+
"page_idx": 7
|
| 1519 |
+
},
|
| 1520 |
+
{
|
| 1521 |
+
"type": "text",
|
| 1522 |
+
"text": "gesting SL is more compatible with MSR than one-hot loss. Combining APA with either MSR or SL further improves performance. For example, APA and SL together yield an accuracy of $60.51\\%$ , and when all three strategies are used, the best performance of $62.29\\%$ is achieved.",
|
| 1523 |
+
"bbox": [
|
| 1524 |
+
89,
|
| 1525 |
+
321,
|
| 1526 |
+
482,
|
| 1527 |
+
396
|
| 1528 |
+
],
|
| 1529 |
+
"page_idx": 7
|
| 1530 |
+
},
|
| 1531 |
+
{
|
| 1532 |
+
"type": "text",
|
| 1533 |
+
"text": "Analysis of priors distribution. Tab. 4a showcases the results of using other distributions to formulate the attention priors. The unevenly distributed GMM and Laplace present comparable performance of $62.29\\%$ and $62.16\\%$ , respectively. Moreover, GMM achieves performance close to that of real priors, indicating that it imitates the patterns of real images well.",
|
| 1534 |
+
"bbox": [
|
| 1535 |
+
89,
|
| 1536 |
+
397,
|
| 1537 |
+
483,
|
| 1538 |
+
503
|
| 1539 |
+
],
|
| 1540 |
+
"page_idx": 7
|
| 1541 |
+
},
|
| 1542 |
+
{
|
| 1543 |
+
"type": "image",
|
| 1544 |
+
"img_path": "images/a9648db1c30d9fa309266ebee1ec85a0f5984e694d58af35381fa07d3dce66ed.jpg",
|
| 1545 |
+
"image_caption": [
|
| 1546 |
+
"(a)",
|
| 1547 |
+
"Figure 5. Effect of varying (a) $\\alpha_{1}$ and (b) $K_{MSR}$ ."
|
| 1548 |
+
],
|
| 1549 |
+
"image_footnote": [],
|
| 1550 |
+
"bbox": [
|
| 1551 |
+
94,
|
| 1552 |
+
515,
|
| 1553 |
+
282,
|
| 1554 |
+
631
|
| 1555 |
+
],
|
| 1556 |
+
"page_idx": 7
|
| 1557 |
+
},
|
| 1558 |
+
{
|
| 1559 |
+
"type": "image",
|
| 1560 |
+
"img_path": "images/3e1333dc94a67c2dbdb0700a59515b6c89c5e3a10c0ae3c609fbbe5e2a6d3f60.jpg",
|
| 1561 |
+
"image_caption": [
|
| 1562 |
+
"(b)"
|
| 1563 |
+
],
|
| 1564 |
+
"image_footnote": [],
|
| 1565 |
+
"bbox": [
|
| 1566 |
+
282,
|
| 1567 |
+
515,
|
| 1568 |
+
478,
|
| 1569 |
+
631
|
| 1570 |
+
],
|
| 1571 |
+
"page_idx": 7
|
| 1572 |
+
},
|
| 1573 |
+
{
|
| 1574 |
+
"type": "text",
|
| 1575 |
+
"text": "Analysis of $\\alpha_{1}, K_{APA}$ , and $K_{MSR}$ . The $\\alpha_{1}$ from Eq. 13 balances the importance of the proposed APA loss during the update of the synthetic images. Fig. 5a demonstrates that incrementally increasing $\\alpha_{1}$ improves performance, reaching the optimum of $62.29\\%$ at $\\alpha_{1} = 1e5$ ; further increases in $\\alpha_{1}$ subsequently degrade performance. The $K_{APA}$ in APA is the upper limit on the number of Gaussian distributions used for prior generation. Tab. 4b displays the ablation study for different values of $K_{APA}$ ; the best accuracy is achieved when $K_{APA} = 5$ . The $K_{MSR}$ in MSR is the upper limit on the number of patches. Fig. 5b demonstrates that the optimal performance is achieved when $K_{MSR} = 4$ , while using a $K_{MSR}$ larger than 4 hurts accuracy. We consider this",
|
| 1576 |
+
"bbox": [
|
| 1577 |
+
89,
|
| 1578 |
+
688,
|
| 1579 |
+
483,
|
| 1580 |
+
901
|
| 1581 |
+
],
|
| 1582 |
+
"page_idx": 7
|
| 1583 |
+
},
|
| 1584 |
+
{
|
| 1585 |
+
"type": "table",
|
| 1586 |
+
"img_path": "images/3e3d62c782164115c35624252c98cb9d7fdd5faa0e95401c20fe58b13586962c.jpg",
|
| 1587 |
+
"table_caption": [
|
| 1588 |
+
"Table 3. Influence of the proposed APA, MSR, and SL on accuracy. The baseline adopts the one-hot loss. Applying APA, MSR, and SL yields SARDFQ."
|
| 1589 |
+
],
|
| 1590 |
+
"table_footnote": [],
|
| 1591 |
+
"table_body": "<table><tr><td>Priors Distribution</td><td>Top-1</td></tr><tr><td>GMM</td><td>62.29</td></tr><tr><td>Laplace</td><td>62.16</td></tr><tr><td>Real</td><td>63.19</td></tr></table>",
|
| 1592 |
+
"bbox": [
|
| 1593 |
+
514,
|
| 1594 |
+
103,
|
| 1595 |
+
720,
|
| 1596 |
+
181
|
| 1597 |
+
],
|
| 1598 |
+
"page_idx": 7
|
| 1599 |
+
},
|
| 1600 |
+
{
|
| 1601 |
+
"type": "table",
|
| 1602 |
+
"img_path": "images/29b607cecea06dacbd324c39babe14929a287cf17e47dc3308ce9afe27f79bbd.jpg",
|
| 1603 |
+
"table_caption": [
|
| 1604 |
+
"(a)"
|
| 1605 |
+
],
|
| 1606 |
+
"table_footnote": [],
|
| 1607 |
+
"table_body": "<table><tr><td>KAPA</td><td>Top-1</td></tr><tr><td>1</td><td>61.13</td></tr><tr><td>3</td><td>61.52</td></tr><tr><td>5</td><td>62.29</td></tr><tr><td>7</td><td>61.53</td></tr><tr><td>9</td><td>61.05</td></tr></table>",
|
| 1608 |
+
"bbox": [
|
| 1609 |
+
751,
|
| 1610 |
+
88,
|
| 1611 |
+
880,
|
| 1612 |
+
196
|
| 1613 |
+
],
|
| 1614 |
+
"page_idx": 7
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "table",
|
| 1618 |
+
"img_path": "images/428c915ab844ae292e8cac279f6eac909d561e07aa4b86888bd7f7747fb61b2f.jpg",
|
| 1619 |
+
"table_caption": [
|
| 1620 |
+
"(b)"
|
| 1621 |
+
],
|
| 1622 |
+
"table_footnote": [],
|
| 1623 |
+
"table_body": "<table><tr><td>S</td><td>Top-1</td></tr><tr><td>0</td><td>61.96</td></tr><tr><td>L/2</td><td>62.29</td></tr></table>",
|
| 1624 |
+
"bbox": [
|
| 1625 |
+
563,
|
| 1626 |
+
217,
|
| 1627 |
+
676,
|
| 1628 |
+
280
|
| 1629 |
+
],
|
| 1630 |
+
"page_idx": 7
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "table",
|
| 1634 |
+
"img_path": "images/8bb3b141338774aeae4a45994c2d336555dc3fc8f80865f42494534157eafc84.jpg",
|
| 1635 |
+
"table_caption": [
|
| 1636 |
+
"(c)"
|
| 1637 |
+
],
|
| 1638 |
+
"table_footnote": [],
|
| 1639 |
+
"table_body": "<table><tr><td>w. l/L</td><td>Top-1</td></tr><tr><td>✓</td><td>62.29</td></tr><tr><td>×</td><td>61.32</td></tr></table>",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
740,
|
| 1642 |
+
217,
|
| 1643 |
+
856,
|
| 1644 |
+
280
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 7
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "text",
|
| 1650 |
+
"text": "(d)",
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
790,
|
| 1653 |
+
282,
|
| 1654 |
+
808,
|
| 1655 |
+
295
|
| 1656 |
+
],
|
| 1657 |
+
"page_idx": 7
|
| 1658 |
+
},
|
| 1659 |
+
{
|
| 1660 |
+
"type": "text",
|
| 1661 |
+
"text": "Table 4. Effect of varying (a) prior types; (b) $K_{APA}$ ; (c) $S$ ; (d) $\\frac{l}{L}$ .",
|
| 1662 |
+
"bbox": [
|
| 1663 |
+
511,
|
| 1664 |
+
306,
|
| 1665 |
+
903,
|
| 1666 |
+
337
|
| 1667 |
+
],
|
| 1668 |
+
"page_idx": 7
|
| 1669 |
+
},
|
| 1670 |
+
{
|
| 1671 |
+
"type": "text",
|
| 1672 |
+
"text": "due to the limited patch resolution when $K_{MSR}$ is too large.",
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
511,
|
| 1675 |
+
352,
|
| 1676 |
+
898,
|
| 1677 |
+
367
|
| 1678 |
+
],
|
| 1679 |
+
"page_idx": 7
|
| 1680 |
+
},
|
| 1681 |
+
{
|
| 1682 |
+
"type": "text",
|
| 1683 |
+
"text": "Analysis of APA loss. Here, we conduct the ablation study by considering the $S$ and scale $\\frac{l}{L}$ in Eq.7. Tab.4c presents the effect of varying $S$ in Eq.7. If applying APA loss to all blocks $(S = 0)$ , the top-1 accuracy decreases to $61.96\\%$ . From Tab.4d, it can be seen that absorbing the scale $\\frac{l}{L}$ in Eq.7 presents $0.97\\%$ performance gains.",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
511,
|
| 1686 |
+
367,
|
| 1687 |
+
905,
|
| 1688 |
+
459
|
| 1689 |
+
],
|
| 1690 |
+
"page_idx": 7
|
| 1691 |
+
},
|
| 1692 |
+
{
|
| 1693 |
+
"type": "text",
|
| 1694 |
+
"text": "5. Limitations",
|
| 1695 |
+
"text_level": 1,
|
| 1696 |
+
"bbox": [
|
| 1697 |
+
511,
|
| 1698 |
+
472,
|
| 1699 |
+
635,
|
| 1700 |
+
488
|
| 1701 |
+
],
|
| 1702 |
+
"page_idx": 7
|
| 1703 |
+
},
|
| 1704 |
+
{
|
| 1705 |
+
"type": "text",
|
| 1706 |
+
"text": "We further discuss some limitations of the proposed SARDFQ, which will guide future research directions. First, although SARDFQ shows substantial performance improvement, a performance gap between SARDFQ and real data remains challenging, highlighting the need for a stronger semantics alignment and reinforcement method. Second, SARDFQ currently lacks a theoretical foundation. Future work could establish a theoretical framework for SARDFQ, particularly in understanding how APA and MSR influence synthetic images in a formalized manner.",
|
| 1707 |
+
"bbox": [
|
| 1708 |
+
511,
|
| 1709 |
+
497,
|
| 1710 |
+
906,
|
| 1711 |
+
650
|
| 1712 |
+
],
|
| 1713 |
+
"page_idx": 7
|
| 1714 |
+
},
|
| 1715 |
+
{
|
| 1716 |
+
"type": "text",
|
| 1717 |
+
"text": "6. Conclusion",
|
| 1718 |
+
"text_level": 1,
|
| 1719 |
+
"bbox": [
|
| 1720 |
+
511,
|
| 1721 |
+
662,
|
| 1722 |
+
633,
|
| 1723 |
+
679
|
| 1724 |
+
],
|
| 1725 |
+
"page_idx": 7
|
| 1726 |
+
},
|
| 1727 |
+
{
|
| 1728 |
+
"type": "text",
|
| 1729 |
+
"text": "In this paper, we investigate the DFQ method for ViTs. We first identify that synthetic images generated by existing methods suffer from semantic distortion and inadequacy issues, and propose SARDFQ to address these issues. To mitigate semantic distortion, SARDFQ introduces APA, which guides synthetic images to align with randomly generated structural attention patterns. To tackle semantic inadequacy, SARDFQ incorporates MSR. MSR optimizes different regions of synthetic images with unique semantics, thereby enhancing overall semantic richness. Moreover, SARDFQ employs SL, which adopts multiple semantic targets to ensure seamless learning of images augmented by MSR. Extensive experiments on various ViT models and tasks validate the effectiveness of SARDFQ.",
|
| 1730 |
+
"bbox": [
|
| 1731 |
+
509,
|
| 1732 |
+
688,
|
| 1733 |
+
906,
|
| 1734 |
+
900
|
| 1735 |
+
],
|
| 1736 |
+
"page_idx": 7
|
| 1737 |
+
},
|
| 1738 |
+
{
|
| 1739 |
+
"type": "page_number",
|
| 1740 |
+
"text": "12486",
|
| 1741 |
+
"bbox": [
|
| 1742 |
+
480,
|
| 1743 |
+
944,
|
| 1744 |
+
519,
|
| 1745 |
+
957
|
| 1746 |
+
],
|
| 1747 |
+
"page_idx": 7
|
| 1748 |
+
},
|
| 1749 |
+
{
|
| 1750 |
+
"type": "text",
|
| 1751 |
+
"text": "7. Acknowledgments",
|
| 1752 |
+
"text_level": 1,
|
| 1753 |
+
"bbox": [
|
| 1754 |
+
91,
|
| 1755 |
+
90,
|
| 1756 |
+
269,
|
| 1757 |
+
107
|
| 1758 |
+
],
|
| 1759 |
+
"page_idx": 8
|
| 1760 |
+
},
|
| 1761 |
+
{
|
| 1762 |
+
"type": "text",
|
| 1763 |
+
"text": "This work was supported by National Science and Technology Major Project (No. 2022ZD0118202), the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. 624B2119, No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No. 2021J01002, No. 2022J06001).",
|
| 1764 |
+
"bbox": [
|
| 1765 |
+
89,
|
| 1766 |
+
113,
|
| 1767 |
+
485,
|
| 1768 |
+
255
|
| 1769 |
+
],
|
| 1770 |
+
"page_idx": 8
|
| 1771 |
+
},
|
| 1772 |
+
{
|
| 1773 |
+
"type": "text",
|
| 1774 |
+
"text": "References",
|
| 1775 |
+
"text_level": 1,
|
| 1776 |
+
"bbox": [
|
| 1777 |
+
91,
|
| 1778 |
+
281,
|
| 1779 |
+
187,
|
| 1780 |
+
297
|
| 1781 |
+
],
|
| 1782 |
+
"page_idx": 8
|
| 1783 |
+
},
|
| 1784 |
+
{
|
| 1785 |
+
"type": "list",
|
| 1786 |
+
"sub_type": "ref_text",
|
| 1787 |
+
"list_items": [
|
| 1788 |
+
"[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In IEEE/CVF international conference on computer vision (ICCV), pages 6836-6846, 2021. 1, 2",
|
| 1789 |
+
"[2] Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuozhu Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, and Haoji Hu. Robustness-guided image synthesis for data-free quantization. arXiv preprint arXiv:2310.03661, 2023. 3",
|
| 1790 |
+
"[3] Ron Banner, Yury Nahshan, Daniel Soudry, et al. Post training 4-bit quantization of convolutional networks for rapid-deployment. In Advances in Neural Information Processing Systems (NeurIPS), pages 7950-7958, 2019. 2",
|
| 1791 |
+
"[4] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(8):1798-1828, 2013. 4",
|
| 1792 |
+
"[5] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token merging: Your vit but faster. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 4",
|
| 1793 |
+
"[6] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Zeroq: A novel zero shot quantization framework. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13169-13178, 2020. 2, 3",
|
| 1794 |
+
"[7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision (ECCV), pages 213-229. Springer, 2020. 1, 2",
|
| 1795 |
+
"[8] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 397-406, 2021. 5",
|
| 1796 |
+
"[9] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12299-12310, 2021. 2",
|
| 1797 |
+
"[10] Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, and Rongrong Ji. Cf-vit: A general"
|
| 1798 |
+
],
|
| 1799 |
+
"bbox": [
|
| 1800 |
+
93,
|
| 1801 |
+
306,
|
| 1802 |
+
483,
|
| 1803 |
+
901
|
| 1804 |
+
],
|
| 1805 |
+
"page_idx": 8
|
| 1806 |
+
},
|
| 1807 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"coarse-to-fine method for vision transformer. In AAAI Conference on Artificial Intelligence, pages 7042-7052, 2023. 4",
"[11] Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, and Ping Luo. Diffrate: Differentiable compression rate for efficient vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17164-17174, 2023. 2",
"[12] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8126-8135, 2021. 2",
"[13] Xinrui Chen, Yizhi Wang, Renao Yan, Yiqing Liu, Tian Guan, and Yonghong He. Texq: Zero-shot network quantization with texture feature distribution calibration. In Advances in Neural Information Processing Systems (NeurIPS), 2024. 3, 4, 1",
"[14] Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, and Jinho Lee. Qimera: Data-free quantization with synthetic boundary supporting samples. In Advances in Neural Information Processing Systems (NeurIPS), pages 14835-14847, 2021. 2, 4, 1",
"[15] Kanghyun Choi, Hye Yoon Lee, Deokki Hong, Joonsang Yu, Noseong Park, Youngsok Kim, and Jinho Lee. It's all in the teacher: Zero-shot quantization brought closer to the teacher. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8311-8321, 2022. 3",
"[16] Kanghyun Choi, Hye Yoon Lee, Dain Kwon, SunJong Park, Kyuyeun Kim, Noseong Park, and Jinho Lee. Mimiq: Low-bit data-free quantization of vision transformers. arXiv preprint arXiv:2407.20021, 2024. 2, 3",
"[17] Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In International Conference on Learning Representations (ICLR), 2020. 4",
"[18] Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, and Xianglong Liu. Towards accurate post-training quantization for vision transformer. In 30th ACM International Conference on Multimedia (ACMMM), pages 5380-5388, 2022. 3",
"[19] Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In International Conference on Machine Learning (ICML), pages 2793-2803. PMLR, 2021. 5",
"[20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. 1, 2, 4, 6",
"[21] Hoang Anh Dung, Cuong Pham, Trung Le, Jianfei Cai, and Thanh-Toan Do. Sharpness-aware data generation for zero-shot quantization. In International Conference on Machine Learning (ICML), 2024. 2",
"[22] Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S. Modha."
],
"bbox": [
516,
92,
903,
901
],
"page_idx": 8
},
{
"type": "page_number",
"text": "12487",
"bbox": [
480,
944,
517,
955
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Learned step size quantization. In International Conference on Learning Representations (ICLR), 2020. 2",
"[23] Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, and Mingli Song. Contrastive model inversion for data-free knowledge distillation. In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), 2021. 2",
"[24] Natalia Frumkin, Dibakar Gope, and Diana Marculescu. Jumping through local minima: Quantization in the loss landscape of vision transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 16978-16988, 2023. 3",
"[25] Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4852-4861, 2019. 2",
"[26] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), pages 2672-2680, 2014. 3",
"[27] Cong Guo, Yuxian Qiu, Jingwen Leng, Xiaotian Gao, Chen Zhang, Yunxin Liu, Fan Yang, Yuhao Zhu, and Minyi Guo. Squant: On-the-fly data-free quantization via diagonal hessian approximation. In The Eleventh International Conference on Learning Representations (ICLR), 2022. 2, 3",
"[28] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2",
"[29] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 45(1):87-110, 2022. 1",
"[30] Zhiwei Hao, Jianyuan Guo, Ding Jia, Kai Han, Yehui Tang, Chao Zhang, Han Hu, and Yunhe Wang. Learning efficient vision transformers via fine-grained manifold distillation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2",
"[31] Joakim Bruslund Haurum, Sergio Escalera, Graham W Taylor, and Thomas B Moeslund. Which tokens to use? investigating token reduction in vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 773-783, 2023. 4",
"[32] Zejiang Hou and Sun-Yuan Kung. Multi-dimensional vision transformer compression via dependency guided gaussian process search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3669-3678, 2022. 2",
"[33] Zixuan Hu, Yongxian Wei, Li Shen, Zhenyi Wang, Lei Li, Chun Yuan, and Dacheng Tao. Sparse model inversion: Efficient inversion of vision transformers for data-free applications. In International Conference on Machine Learning (ICML), 2024. 2, 3, 4, 6, 7, 1"
],
"bbox": [
91,
90,
482,
898
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[34] Jinggang Huang and David Mumford. Statistics of natural images and models. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 541-547. IEEE, 1999. 5",
"[35] Yongkweon Jeon, Chungman Lee, and Ho-young Kim. Genie: show me the data for quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12064-12073, 2023. 3, 6",
"[36] Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. ACM Computing Surveys (CSUR), 2021. 1",
"[37] Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, and Hyun Oh Song. Dataset condensation via efficient synthetic-data parameterization. In International Conference on Machine Learning (ICML), pages 11102-11118. PMLR, 2022. 4, 5",
"[38] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2014. 6",
"[39] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018. 2",
"[40] Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H Li, Yonggang Zhang, Bo Han, and Mingkui Tan. Hard sample matters a lot in zero-shot quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24417-24426, 2023. 3",
"[41] Yuhang Li, Xin Dong, and Wei Wang. Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks. In International Conference on Learning Representations (ICLR), 2020. 2",
"[42] Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, and Shi Gu. Brecq: Pushing the limit of post-training quantization by block reconstruction. In International Conference on Learning Representations (ICLR), 2021. 2, 4, 6",
"[43] Yanjing Li, Sheng Xu, Baochang Zhang, Xianbin Cao, Peng Gao, and Guodong Guo. Q-vit: Accurate and fully quantized low-bit vision transformer. In Advances in Neural Information Processing Systems (NeurIPS), pages 34451-34463, 2022. 2",
"[44] Yuhang Li, Youngeun Kim, Donghyun Lee, and Priyadarshini Panda. Stableq: Enhancing data-scarce quantization with text-to-image data. arXiv preprint arXiv:2312.05272, 2023. 3",
"[45] Yuhang Li, Youngeun Kim, Donghyun Lee, Souvik Kundu, and Priyadarshini Panda. Genq: Quantization in low data regimes with generative synthetic data. In European Conference on Computer Vision (ECCV), pages 216-235. Springer, 2024. 3",
"[46] Zhikai Li and Qingyi Gu. I-vit: Integer-only quantization for efficient vision transformer inference. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 17065-17075, 2023. 2, 3",
"[47] Zhikai Li, Liping Ma, Mengjuan Chen, Junrui Xiao, and Qingyi Gu. Patch similarity aware data-free quantization for"
],
"bbox": [
516,
92,
903,
901
],
"page_idx": 9
},
{
"type": "page_number",
"text": "12488",
"bbox": [
480,
945,
517,
955
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"vision transformers. In European Conference on Computer Vision (ECCV), pages 154-170. Springer, 2022. 1, 2, 3, 4, 5, 6, 7",
"[48] Zhikai Li, Mengjuan Chen, Junrui Xiao, and Qingyi Gu. Psaq-vit v2: Toward accurate and general data-free quantization for vision transformers. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023. 2, 3, 5, 6, 7, 1",
"[49] Zhikai Li, Junrui Xiao, Lianwei Yang, and Qingyi Gu. Repq-vit: Scale reparameterization for post-training quantization of vision transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 17227-17236, 2023. 3, 7",
"[50] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 1833-1844, 2021. 2",
"[51] Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, and Shuchang Zhou. Fq-vit: Post-training quantization for fully quantized vision transformer. In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), pages 1173-1179, 2022. 3",
"[52] Shih-Yang Liu, Zechun Liu, and Kwang-Ting Cheng. Oscillation-free quantization for low-bit vision transformers. In International Conference on Machine Learning (ICML), pages 21813-21824, 2023. 2",
"[53] Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, and Shanghang Zhang. Noisyquant: Noisy bias-enhanced post-training activation quantization for vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20321-20330, 2023. 3",
"[54] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 10012-10022, 2021. 2, 6",
"[55] Zhenhua Liu, Yunhe Wang, Kai Han, Wei Zhang, Siwei Ma, and Wen Gao. Post-training quantization for vision transformer. In Advances in Neural Information Processing Systems (NeurIPS), pages 28092-28103, 2021. 2, 3",
"[56] Yiwei Ma, Jiayi Ji, Xiaoshuai Sun, Yiyi Zhou, and Rongrong Ji. Towards local visual modeling for image captioning. Pattern Recognition, 138:109420, 2023. 1",
"[57] Sachin Mehta and Mohammad Rastegari. Mobilevit: Lightweight, general-purpose, and mobile-friendly vision transformer. In International Conference on Learning Representations (ICLR), 2022. 2",
"[58] Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In Advances in Neural Information Processing Systems (NeurIPS), pages 23296-23308, 2021. 4",
"[59] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 3163-3172, 2021. 2"
],
"bbox": [
91,
90,
483,
898
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[60] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pages 8026-8037, 2019. 6",
"[61] Biao Qian, Yang Wang, Richang Hong, and Meng Wang. Adaptive data-free quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7960-7968, 2023. 3",
"[62] Akshit Ramachandran, Souvik Kundu, and Tushar Krishna. Clamp-vit: contrastive data-free learning for adaptive post-training quantization of vits. In European Conference on Computer Vision (ECCV). Springer, 2024. 2, 3, 4, 5, 6, 1",
"[63] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115:211-252, 2015. 6",
"[64] Fahad Shamshad, Salman Khan, Syed Waqas Zamir, Muhammad Haris Khan, Munawar Hayat, Fahad Shahbaz Khan, and Huazhu Fu. Transformers in medical imaging: A survey. Medical Image Analysis, page 102802, 2023. 2",
"[65] Huixin Sun, Runqi Wang, Yanjing Li, Xianbin Cao, Xiaolong Jiang, Yao Hu, and Baochang Zhang. P4q: Learning to prompt for quantization in visual-language models. arXiv preprint arXiv:2409.17634, 2024. 2",
"[66] Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, Dacheng Tao, and Chang Xu. Manifold regularized dynamic network pruning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5018-5028, 2021. 2",
"[67] Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, and Dacheng Tao. Patch slimming for efficient vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12165-12174, 2022. 2",
"[68] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning (ICML), pages 10347-10357. PMLR, 2021. 1, 2, 6",
"[69] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research (JMLR), 9:2579-2605, 2008. 1",
"[70] Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Anti-oversmoothing in deep vision transformers via the fourier domain analysis: From theory to practice. arXiv preprint arXiv:2203.05962, 2022. 5",
"[71] Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, and Fengwei Yu. Qdrop: Randomly dropping quantization for extremely low-bit post-training quantization. In International Conference on Learning Representations (ICLR), 2022. 6",
"[72] Kan Wu, Houwen Peng, Minghao Chen, Jianlong Fu, and Hongyang Chao. Rethinking and improving relative position encoding for vision transformer. In IEEE/CVF In"
],
"bbox": [
516,
90,
905,
900
],
"page_idx": 10
},
{
"type": "page_number",
"text": "12489",
"bbox": [
480,
944,
519,
955
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"ternational Conference on Computer Vision (ICCV), pages 10033-10041, 2021. 2",
"[73] Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Tinyvit: Fast pretraining distillation for small vision transformers. In European Conference on Computer Vision (ECCV), pages 68-85, 2022. 2",
"[74] Xijie Huang, Zhiqiang Shen, and Kwang-Ting Cheng. Variation-aware vision transformer quantization. arXiv preprint arXiv:2307.00331, 2023. 2",
"[75] Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, and Mingkui Tan. Generative low-bitwidth data free quantization. In European Conference on Computer Vision (ECCV), pages 1-17, 2020. 3",
"[76] Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, and Mingkui Tan. Generative low-bitwidth data free quantization. In European Conference on Computer Vision (ECCV), pages 1-17. Springer, 2020. 2",
"[77] Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via deepinversion. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8715-8724, 2020. 2, 3, 1",
"[78] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. 4",
"[79] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 818-833, 2014. 4",
"[80] Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Minivit: Compressing vision transformers with weight multiplexing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12145-12154, 2022. 2",
"[81] Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qinghua Yan, Renshuai Tao, Yuhang Li, Fengwei Yu, and Xianglong Liu. Diversifying sample generation for accurate data-free quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15658-15667, 2021. 2, 3",
"[82] Dehua Zheng, Wenhui Dong, Hailin Hu, Xinghao Chen, and Yunhe Wang. Less is more: Focus attention for efficient detr. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 6674-6683, 2023. 2",
"[83] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6881-6890, 2021. 2",
"[84] Yunshan Zhong, Mingbao Lin, Gongrui Nan, Jianzhuang Liu, Baochang Zhang, Yonghong Tian, and Rongrong Ji. Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12339-12348, 2022. 2, 4, 1"
],
"bbox": [
91,
90,
482,
900
],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[85] Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, and Rongrong Ji. I&S-vit: An inclusive & stable method for pushing the limit of post-training vits quantization. arXiv preprint arXiv:2311.10126, 2023. 6",
"[86] Yunshan Zhong, Jiawei Hu, You Huang, Yuxin Zhang, and Rongrong Ji. Erq: Error reduction for post-training quantization of vision transformers. In International Conference on Machine Learning (ICML), 2024. 3"
],
"bbox": [
516,
90,
905,
204
],
"page_idx": 11
},
{
"type": "page_number",
"text": "12490",
"bbox": [
480,
945,
517,
955
],
"page_idx": 11
}
]
2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/33638821-3dce-4a55-a900-82748a75aeee_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a24929928c29339fcdb7983134465b5ac7b62bd61902eb273eaf994850c6aae
size 688280
2025/Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers/full.md
ADDED
@@ -0,0 +1,419 @@
# Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers

Yunshan Zhong $^{1,2}$, Yuyao Zhou $^{2}$, Yuxin Zhang $^{2}$, Wanchen Sui $^{3}$, Shen Li $^{3}$, Yong Li $^{3}$, Fei Chao $^{2}$, Rongrong Ji $^{1,2*}$

$^{1}$ Institute of Artificial Intelligence, Xiamen University

$^{2}$ MAC Lab, School of Informatics, Xiamen University $^{3}$ Alibaba Group

viper.zhong@gmail.com, {yuyaozhou, yuxinzhang}@stu.xmu.edu.cn
{wanchen.swc, litan.ps, jiufeng.ly}@alibaba-inc.com, {fchao, rrji}@xmu.edu.cn

# Abstract

Data-free quantization (DFQ) enables model quantization without accessing real data, addressing concerns regarding data security and privacy. With the growing adoption of Vision Transformers (ViTs), DFQ for ViTs has garnered significant attention. However, existing DFQ methods exhibit two limitations: (1) semantic distortion, where the semantics of synthetic images deviate substantially from those of real images, and (2) semantic inadequacy, where synthetic images contain extensive regions with limited content and oversimplified textures, leading to suboptimal quantization performance. To address these limitations, we propose SARDFQ, a novel Semantics Alignment and Reinforcement Data-Free Quantization method for ViTs. To address semantic distortion, SARDFQ incorporates Attention Priors Alignment (APA), which optimizes synthetic images to follow randomly generated structure attention priors. To mitigate semantic inadequacy, SARDFQ introduces Multi-Semantic Reinforcement (MSR), leveraging localized patch optimization to enhance semantic richness across synthetic images. Furthermore, SARDFQ employs Soft-Label Learning (SL), wherein multiple semantic targets are adapted to facilitate the learning of multi-semantic images augmented by MSR. Extensive experiments demonstrate the effectiveness of SARDFQ, significantly surpassing existing methods. For example, SARDFQ improves top-1 accuracy on ImageNet by $15.52\%$ for W4A4 ViT-B<sup>1</sup>.

# 1. Introduction

(a) The t-SNE [69] visualization of the penultimate-layer features (extracted by DeiT-S) of synthetic images. Each marker (circle, star, triangle) represents a distinct category. The red dashed circles highlight the features extracted from our APA and real images. Notably, the features produced by PSAQ-ViT [47] exhibit substantial deviation from those of real images, indicating semantic distortion. In contrast, our APA yields features that more closely align with those of real images, suggesting improved semantics alignment.

PSAQ

PSAQ V2

SARDFQ (Ours)

(b) The images of PSAQ-ViT and PSAQ-ViT V2 exhibit numerous dull regions with limited content and simplified textures, reflecting semantic inadequacy. In comparison, our SARDFQ generates images with greater diversity in both content and texture, demonstrating enhanced semantics.

Figure 1. Illustration of (a) semantic distortion and (b) semantic inadequacy.

Vision Transformers (ViTs) [20, 29, 36] have demonstrated remarkable success across various computer vision tasks [1, 7, 56, 68]. However, their high computational cost and substantial memory footprint hinder deployment in resource-constrained environments [11, 30, 32, 46, 55, 67, 82]. To address these limitations, quantization [39] has emerged as a promising solution, which reduces model complexity by converting full-precision weights and activations into low-bit representations.
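To make this low-bit conversion concrete, here is a minimal sketch of asymmetric uniform quantization, the common scheme behind the WxAy notation used later. Function names and the exact rounding details are illustrative assumptions, not the paper's specific quantizer.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, n_bits: int = 4):
    """Map a float tensor to n-bit integer codes, then dequantize back.

    Illustrative asymmetric uniform quantizer: codes lie in [0, 2^n_bits - 1].
    """
    qmax = 2 ** n_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0       # step size between levels
    zero_point = round(-lo / scale)                    # integer offset for lo
    codes = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    dequant = (codes.astype(np.float32) - zero_point) * scale
    return codes, dequant

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8)).astype(np.float32)
codes, x_hat = uniform_quantize(x, n_bits=4)  # 4-bit: codes in [0, 15]
```

The reconstruction error per element is bounded by roughly one quantization step, which is why accuracy degrades sharply as the bit-width drops to 4 bits and below.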

Traditional quantization methods typically require access to the original training dataset, which raises data privacy and security concerns [6, 27, 77, 84]. As a result, data-free quantization (DFQ) has gained increasing attention, allowing quantization without the need for real data [14, 76, 81]. However, most existing DFQ methods are designed specifically for convolutional neural networks (CNNs) and are not directly applicable to vision transformers (ViTs). These methods generally rely on batch normalization statistics (BNS), which capture the distribution of real data, to synthesize in-distribution synthetic data [6, 76, 81, 84]. Yet, BNS is unavailable for ViTs, which use layer normalization (LN) to dynamically compute distribution statistics during inference [47]. Recently, several DFQ methods have been proposed for ViTs [16, 33, 47, 48, 62]. For example, PSAQ-ViT [47] introduces patch similarity entropy (PSE) loss to optimize Gaussian noise towards usable synthetic images.

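The intuition behind a patch-similarity-entropy objective can be sketched as follows: score how spread out the pairwise cosine similarities of a layer's patch features are via histogram entropy. This is only an illustrative approximation (the binning and names are assumptions); PSAQ-ViT's actual objective is a differentiable formulation maximized by gradient ascent on the synthetic image.

```python
import numpy as np

def patch_similarity_entropy(patch_feats: np.ndarray, bins: int = 32) -> float:
    """Entropy of the pairwise cosine-similarity histogram of patch features.

    Diverse, content-rich inputs spread similarities over many bins (high
    entropy); dull, repetitive inputs collapse them into few bins (low entropy).
    """
    z = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    sim = z @ z.T                                   # (N, N) cosine similarities
    off_diag = sim[~np.eye(len(z), dtype=bool)]     # drop self-similarities
    hist, _ = np.histogram(off_diag, bins=bins, range=(-1.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
# random features mimic a diverse image; near-identical features a dull one
diverse = patch_similarity_entropy(rng.normal(size=(64, 32)))
collapsed = patch_similarity_entropy(np.ones((64, 32)) + 1e-3 * rng.normal(size=(64, 32)))
```

Under this score, the diverse features yield strictly higher entropy than the collapsed ones, which is the signal such a loss pushes synthetic images toward.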
Nevertheless, we observe that existing methods suffer from semantic distortion and inadequacy issues. As shown in Fig. 1a, features of synthetic images generated by PSAQ-ViT deviate significantly from those of real images. Tab. 1 shows that the cosine similarity between synthetic images from PSAQ-ViT and real images is notably low, also indicating significant distortion. These results highlight the issue of semantic distortion. Moreover, as shown in Fig. $1\mathrm{b}^2$ synthetic images generated by PSAQ-ViT and PSAQ-ViT V2 exhibit many regions with limited content diversity and overly simplified textures. These low-quality dull regions are useless or even detrimental to model learning [33], highlighting the issue of semantic inadequacy. Consequently, quantized models trained on such low-quality images suffer from degraded performance.
|
| 44 |
+
|
| 45 |
+
Motivated by the above analysis, we propose a novel Semantics Alignment and Reinforcement Data-Free Quantization method for ViTs, termed SARDFQ. The overall framework is depicted in Fig. 2. To address the semantic distortion issue, SARDFQ introduces Attention Priors Alignment (APA), where synthetic images are optimized to follow structured attention priors generated using Gaussian Mixture Models (GMMs). APA effectively aligns the semantics of synthetic images with real images, as validated by both visual and quantitative analyses. As shown in Fig. 1a, features of APA exhibit a closer alignment to those of real images, while quantitative results in Tab. 1 also confirm that the semantics of APA are more consistent with the real images, indicating enhanced semantics alignment. To address the semantic inadequacy issue, SARDFQ incorporates Multi-Semantic Reinforcement (MSR) and Softlabel Learning (SL). MSR utilizes localized patch optimization, which encourages different sub-patches of synthetic images to capture various semantics, reinforcing the rich semantics across images. SL applies multiple semantic targets to accommodate the learning of multi-semantic images augmented by MSR. As shown in Fig. 1b, synthetic images after applying MSR exhibit greater diversity in content and texture, providing reinforced semantics.
Experimental results across various ViT models and tasks demonstrate that SARDFQ presents substantial performance improvements. For example, SARDFQ achieves a $15.52\%$ increase in top-1 accuracy on the ImageNet dataset for the W4A4 ViT-B model.
# 2. Related Works
# 2.1. Vision Transformers
The great success of transformers in the natural language processing field has driven widespread attempts in the computer vision community to apply them to vision tasks [12, 28, 72]. ViT [20] is the pioneer that builds a transformer-based model to handle images, boosting the performance on the image classification task. DeiT [68] introduces an efficient teacher-student training strategy where a distillation token is employed to distill knowledge from the teacher model to the student model. The Swin Transformer [54] builds an efficient and effective hierarchical model by introducing a shifted window-based self-attention mechanism. Beyond image classification, the applications of ViTs have also broadened considerably, manifesting groundbreaking performance in object detection [7], image segmentation [9, 83], low-level vision [50], video recognition [1, 59], medical image processing [64], etc. Nevertheless, the impressive performance of ViTs relies on a high number of parameters and significant computational overhead, preventing deployment in resource-constrained environments. Several recent efforts design lightweight ViTs, such as MobileViT [57], MiniViT [80], and TinyViT [73]. However, the model complexity is still unsatisfactory [47].
# 2.2. Network Quantization
Data-Driven Quantization. Model quantization reduces the complexity of neural networks by replacing full-precision weights and activations with low-bit formats. Data-driven quantization can be roughly divided into two categories: quantization-aware training (QAT) and post-training quantization (PTQ). QAT is compute-heavy since it re-trains the quantized model on the full training data to retain performance [22, 25, 41, 43, 46, 52, 65, 74]. PTQ performs quantization with a tiny dataset and a reduced time overhead, harvesting widespread attention [3, 42]. The specific architecture of ViTs, such as LayerNorm and the self-attention module, calls for distinct PTQ methods compared to CNNs [18, 24, 46, 51, 53, 86]. For example, Liu et al. [55] develop a ranking loss to maintain the relative order of the self-attention activations. Unfortunately, both QAT and PTQ involve the original training data, causing concerns about data privacy and security in data-sensitive scenarios.



Figure 2. SARDFQ framework overview: Attention Priors Alignment (APA) employs randomly generated attention priors to improve semantics alignment. Multi-Semantic Reinforcement (MSR) learns the different regions of synthetic images with various semantics to enhance overall semantic richness. Meanwhile, Softlabel Learning (SL) adopts multiple semantic targets to ensure consistent learning of multi-semantic images augmented by MSR.
Data-Free Quantization. DFQ quantizes models without accessing real data [2, 13-15, 27, 35, 40, 44, 45, 61]. Most previous DFQ methods focus on CNNs, where BNS can be adopted as a regularization term [6, 81]. However, BNS is infeasible for ViTs, which are built on LN. Recently, a few efforts have explored DFQ for ViTs [16, 33, 47, 48, 62]. PSAQ-ViT [47] introduces the first DFQ method for ViTs. The authors observe that Gaussian noise yields homogeneous patches, while real images yield heterogeneous patches. Thus, the patch similarity entropy (PSE) loss is proposed to optimize Gaussian noise towards real-like images by making them exhibit heterogeneous patches. Building on PSAQ-ViT, PSAQ-ViT V2 [48] further introduces an adversarial learning strategy [26]. [62] incorporates contrastive learning and proposes an iterative generation-quantization PTQ-based DFQ method. [33] proposes a sparse generation method to remove noisy and hallucinated backgrounds in synthetic images.
# 3. Method
# 3.1. Preliminaries
# 3.1.1. Quantizers
We employ the linear quantizer for all weights and activations, except for the attention scores, which use a log2 quantizer to handle their non-negative and highly uneven values [24, 49, 51]. For the linear quantizer, given a full-precision input $\mathbf{x}$ and bit-width $b$ , the quantized value $\mathbf{x}_q$ and the de-quantized value $\bar{\mathbf{x}}$ are computed as follows:
$$
\mathbf {x} _ {q} = \operatorname {c l i p} \left(\left\lfloor \frac {\mathbf {x}}{\Delta} \right\rceil + z, 0, 2 ^ {b} - 1\right), \bar {\mathbf {x}} = \Delta \cdot \left(\mathbf {x} _ {q} - z\right), \tag {1}
$$
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer, and clip limits the value to $[0,2^{b} - 1]$ . Here, $\Delta$ and $z$ are the scale factor and zero-point, respectively. For the log2 quantizer:
$$
\mathbf {x} _ {q} = \operatorname {c l i p} \left(\left\lfloor - \log_ {2} \frac {\mathbf {x}}{\Delta} \right\rceil , 0, 2 ^ {b} - 1\right), \bar {\mathbf {x}} = \Delta \cdot 2 ^ {- \mathbf {x} _ {q}}. \tag {2}
$$
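As a concrete illustration, both quantizers of Eqs. 1 and 2 can be sketched in a few lines of NumPy; `delta` and `z` are assumed to come from a separate calibration step, which is not shown here:

```python
import numpy as np

def linear_quantize(x, delta, z, b):
    """Linear quantizer (Eq. 1): scale, round to nearest, shift by the
    zero-point, clip to the b-bit range, then de-quantize."""
    x_q = np.clip(np.round(x / delta) + z, 0, 2 ** b - 1)
    x_bar = delta * (x_q - z)
    return x_q, x_bar

def log2_quantize(x, delta, b):
    """Log2 quantizer (Eq. 2) for non-negative attention scores: the
    quantization levels are powers of two, matching their uneven range."""
    x_q = np.clip(np.round(-np.log2(x / delta)), 0, 2 ** b - 1)
    x_bar = delta * 2.0 ** (-x_q)
    return x_q, x_bar
```

For example, with `delta=0.25`, `z=0`, and `b=4`, the value 0.5 maps to integer level 2 and de-quantizes back to 0.5 exactly.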
# 3.1.2. Data Synthesis
DFQ methods parameterize synthetic images and optimize them toward real-like images using a pre-trained full-precision model $F$ . Given an image $\tilde{I}$ initialized from Gaussian noise, the one-hot loss [75] is introduced to learn label-related semantics:
$$
\mathcal {L} ^ {\mathrm {O H}} (\tilde {\boldsymbol {I}}) = C E (F (\tilde {\boldsymbol {I}}), c) \tag {3}
$$
where $CE(\cdot, \cdot)$ represents the cross entropy, $c$ is a random class label, and $F(\cdot)$ returns the predicted probability for image $\tilde{I}$ .
Moreover, the total variation (TV) loss [77] is a smoothing regularization term that improves image quality:
$$
\mathcal {L} ^ {\mathrm {T V}} (\tilde {\boldsymbol {I}}) = \iint | \nabla \tilde {\boldsymbol {I}} (\tau_ {1}, \tau_ {2}) | d \tau_ {1} d \tau_ {2}. \tag {4}
$$
where $\nabla \tilde{I} (\tau_1,\tau_2)$ denotes the gradient of $\tilde{I}$ at $(\tau_{1},\tau_{2})$ .
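In discrete form, the integral reduces to a sum of absolute finite differences between neighboring pixels; the following NumPy sketch is our discretization, not the authors' code:

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total-variation loss (Eq. 4) on an (H, W) image:
    sum of absolute finite differences along both spatial axes."""
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical gradients
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal gradients
    return dh + dw
```

A constant image has zero TV loss, so minimizing it pushes synthetic images toward smoother, less noisy content.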
To perform DFQ for ViTs, PSAQ-ViT [47] proposes the patch similarity entropy (PSE) loss. It first computes the patch similarity $\Gamma_{l}[i,j] = \frac{u_{i} \cdot u_{j}}{||u_{i}|| \, ||u_{j}||}$ , where $u_{i}, u_{j}$ are feature vectors of the MHSA outputs in the $l$ -th block and $||\cdot||$ denotes the $l_{2}$ norm. It then estimates the density function $\hat{f}_{l}(x) = \frac{1}{Mh}\sum_{m=1}^{M}K\left(\frac{x - x_{m}}{h}\right)$ , where $K(\cdot)$ is a normal kernel, $h$ is the bandwidth, and $x_{m}$ is a kernel center derived from $\Gamma_{l}$ . Finally, the PSE loss is defined as:
$$
\mathcal {L} ^ {\mathrm {P S E}} (\tilde {\boldsymbol {I}}) = \sum_ {l = 1} ^ {L} \int \hat {f} _ {l} (x) \log [ \hat {f} _ {l} (x) ] d x, \tag {5}
$$
where $L$ is the block number of the model.
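To make the pipeline concrete, the PSE loss for a single block can be sketched as follows; this is a NumPy sketch under our own choices of bandwidth and integration grid, and the official implementation may differ:

```python
import numpy as np

def pse_loss_block(u, h=0.05):
    """PSE loss (Eq. 5) for one block: kernel density estimate over the
    pairwise cosine similarities of patch features, then numeric
    integration of f*log(f) (the negative differential entropy)."""
    un = u / np.linalg.norm(u, axis=1, keepdims=True)
    gamma = un @ un.T                               # patch similarity matrix
    centers = gamma[np.triu_indices(len(u), k=1)]   # kernel centers x_m
    grid = np.linspace(-1.5, 1.5, 601)
    # \hat f_l(x) with a normal kernel K and bandwidth h
    f = np.exp(-0.5 * ((grid[:, None] - centers[None, :]) / h) ** 2)
    f = f.sum(axis=1) / (len(centers) * h * np.sqrt(2.0 * np.pi))
    integrand = np.where(f > 1e-12, f * np.log(np.maximum(f, 1e-12)), 0.0)
    dx = grid[1] - grid[0]                          # trapezoidal integration
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dx
```

Minimizing this value spreads the similarity distribution out, i.e., it pushes the image toward the heterogeneous patches that real images exhibit.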
<table><tr><td>Class \ Method</td><td>Real</td><td>PSAQ-ViT [47]</td><td>APA (Ours)</td></tr><tr><td>Class 1</td><td>0.68</td><td>0.44</td><td>0.64</td></tr><tr><td>Class 2</td><td>0.32</td><td>0.26</td><td>0.31</td></tr><tr><td>Class 3</td><td>0.41</td><td>0.31</td><td>0.36</td></tr></table>
Table 1. Average cosine similarity of three randomly selected classes. For real images, the similarity is measured within the class itself, while for PSAQ-ViT and APA, the similarity is measured between synthetic and real images of the same class. The results show that APA achieves higher similarity than PSAQ-ViT's PSE loss, indicating better-aligned semantics.
# 3.2. Observations
Existing DFQ methods have made significant progress. However, through carefully analyzing the synthetic images, we reveal that these images suffer from issues of semantic distortion and semantic inadequacy, both of which hinder further advancement in the DFQ of ViTs.
The semantic distortion issue refers to the significant divergence between the semantics of synthetic images and real images. To demonstrate this, we visualize the features of synthetic and real images in Fig. 1a. Note that the penultimate feature is typically regarded as representing the semantics of the input [4, 58, 78, 79]. It is clear that the features of PSAQ-ViT diverge significantly from real images, suggesting that these images fail to capture the true semantic distribution of real data. Tab. 1 quantitatively measures the semantics using the average cosine similarity (ranging from $-1$ to $1$ ). We also report the intra-class similarity within real images as an approximate upper bound for comparison. This result further supports the observation of low similarity between synthetic images from PSAQ-ViT and real images. For instance, for Class 1, the intra-class similarity within real images is 0.68, while PSAQ-ViT achieves only 0.44.
The semantic inadequacy issue refers to the presence of dull regions in synthetic images, which contain redundant or non-semantic content [33, 37], hindering the model's learning process. As indicated in [13, 14], diverse content and textures generally suggest rich information. As shown in Fig. 1b, many regions of synthetic images generated by PSAQ-ViT and PSAQ-ViT V2 exhibit a lack of diversity in content, with overly simplified textures. Specifically, the central region of PSAQ-ViT images only contains faint object structures, while PSAQ-ViT V2 images appear excessively smoothed and indistinct.
For high-bit quantization, where model capacity is largely retained [42], the performance degradation remains relatively minor even when using semantically distorted and inadequate images [47, 62]. However, in low-bit quantization, where model capacity is severely damaged and informative images are essential for recovering performance [13, 84], fine-tuning on these poor-quality images leads to inferior generalization to real datasets, resulting in limited performance. For example, as shown in Tab. 2, the W4A4 ViT-B fine-tuned on real images yields $68.16\%$ , whereas PSAQ-ViT only achieves $36.32\%$ .

Figure 3. Comparison between attention maps.
# 3.3. Semantics Alignment and Reinforcement Data-Free Quantization
In the following, we introduce the proposed SARDFQ, whose framework is illustrated in Fig. 2.
# 3.3.1. Attention Priors Alignment
In ViTs, the self-attention mechanism encodes semantic correlations between image regions, where high-response areas in attention maps strongly correlate with semantic-discriminative content [10, 20, 31]. However, existing DFQ methods overlook this intrinsic property in the generation process. As a result, as shown in Fig. 3, synthetic images often exhibit disordered and unnatural attention patterns, with attention maps either overly diffuse or misaligned toward peripheral regions. This undermines their ability to preserve semantic-discriminative content, causing semantic distortion, as demonstrated in Fig. 1a and Tab. 1. In response, we propose Attention Priors Alignment (APA), which improves semantics alignment by optimizing synthetic images to follow randomly generated structured attention priors.
Specifically, given a synthetic image $\tilde{I}$ , we first obtain its attention maps in the $h$ -th head of the $l$ -th block, denoted as $\mathbf{A}_{l,h} \in \mathbb{R}^{N \times N}$ , where $N$ represents the total number of tokens. In DeiT, the attention of the classification token toward other tokens serves as the indicator for semantic versus non-semantic parts [5]. Thus, we extract $\mathbf{A}_{l,h}^c \in \mathbb{R}^{1 \times (N - 1)}$ from $\mathbf{A}_{l,h}$ , representing the attention of the classification token to all tokens except itself. We then randomly generate attention priors $\tilde{\mathbf{A}}_{l,h}$ , whose generation is detailed below, and align $\mathbf{A}_{l,h}^c$ with $\tilde{\mathbf{A}}_{l,h}$ by:
$$
\mathcal {L} _ {l, h} (\tilde {\boldsymbol {I}}) = \operatorname {MSE} \left(\mathbf {A} _ {l, h} ^ {c}, \tilde {\mathbf {A}} _ {l, h}\right), \tag {6}
$$
where MSE represents the mean squared error. For Swin models, which do not use a classification token, we substitute $\mathbf{A}_{l,h}^{c}$ in Eq. 6 with the average attention map of all tokens [10].



Figure 4. Examples of generated attention priors.

As noted in [17], ViTs initially focus on all regions to capture low-level information in shallow blocks and gradually shift their focus toward semantic regions in deeper blocks to extract high-level semantic information. Leveraging this property, we selectively apply $\mathcal{L}_{l,h}$ to deeper blocks, progressively aligning attention towards semantically relevant areas. The total APA loss is computed as a depth-weighted sum of the individual Eq. 6 terms across these deeper blocks:
$$
\mathcal {L} ^ {\mathrm {A P A}} (\tilde {\boldsymbol {I}}) = \sum_ {l = S} ^ {L} \sum_ {h = 1} ^ {H} \frac {l}{L} \mathcal {L} _ {l, h} (\tilde {\boldsymbol {I}}), \tag {7}
$$
where $S$ is a pre-given hyper-parameter denoting the start of the deep blocks and is experimentally set to $S = \frac{L}{2}$ .
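Combining Eq. 6 with the depth weighting of Eq. 7, the total APA loss can be sketched as follows; the `(L, H, N-1)` array layout is our assumption, with `attn[l]` stacking the classification-token rows $\mathbf{A}_{l,h}^{c}$ of block $l$:

```python
import numpy as np

def apa_loss(attn, priors, start=None):
    """Total APA loss (Eq. 7): per-head MSE against the prior (Eq. 6),
    weighted by relative depth l/L and summed over deep blocks l >= S."""
    L = attn.shape[0]
    S = L // 2 if start is None else start   # default S = L/2
    total = 0.0
    for l in range(S, L):                    # deep blocks only
        w = (l + 1) / L                      # depth weight l/L (1-indexed l)
        total += w * ((attn[l] - priors[l]) ** 2).mean(axis=-1).sum()
    return total
```

The loss vanishes when every deep head already matches its prior, and deeper blocks contribute with larger weight.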
Attention Priors Generation. To generate attention priors $\tilde{\mathbf{A}}_{l,h}$ , Gaussian Mixture Models (GMMs) are employed, as they are flexible and widely used for modeling distributions. ViTs utilize different attention heads to capture diverse patterns and learn varied, informative representations [8]. Thus, we use a distinct GMM for each head. Note that the goal here is not to replicate real attention maps precisely, but to generate simulated structured attention priors to guide the learning of synthetic images.
In particular, we first initialize an all-zero matrix $\tilde{\mathbf{P}}\in \mathbb{R}^{H\times W}$ , where $H = W = \sqrt{N - 1}$ for DeiT and $H = W = \sqrt{N}$ for Swin. For example, for DeiT-S, $H = W = \sqrt{196} = 14$ . Then, we generate $k$ two-dimensional Gaussian distributions, where $k$ is randomly sampled from $1\sim K_{APA}$ and $K_{APA}$ is set to 5 in all experiments. Each Gaussian has its own mean and covariance. Consequently, the matrix element at the $i$ -th row and $j$ -th column, $\tilde{\mathbf{P}} [i,j]$ , is determined by:
$$
\tilde {\mathbf {P}} [ i, j ] = \max _ {m = 1, \dots , k} \mathbf {G} ^ {m} [ i, j ]. \tag {8}
$$
Then, $\tilde{\mathbf{P}}$ is normalized by:
$$
\tilde {\mathbf {P}} _ {n} = \frac {\tilde {\mathbf {P}}}{\sum \tilde {\mathbf {P}}} \cdot (1 - x). \tag {9}
$$
Here, for DeiT, which incorporates the classification token, $x$ is randomly sampled from a uniform distribution $U(0,1)$ , representing the proportion of the attention score that the classification token allocates to itself. For Swin, which does not use a classification token, $x$ is set to 0. Finally, $\tilde{\mathbf{P}}_n$ is flattened to match the dimensionality:
$$
\tilde {\mathbf {A}} _ {l, h} = \operatorname {f l a t t e n} \left(\tilde {\mathbf {P}} _ {n}\right). \tag {10}
$$
Fig. 4 displays examples of generated attention priors.
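The generation procedure of Eqs. 8-10 can be sketched as follows; how the means and spreads of the Gaussians are sampled is our assumption, since the paper only states that each Gaussian has its own mean and covariance:

```python
import numpy as np

def generate_attention_prior(side=14, k_max=5, cls_token=True, rng=None):
    """Sketch of APA's prior generation: element-wise max over k random 2D
    Gaussians (Eq. 8), mass normalized to 1 - x where x is the CLS token's
    self-attention share (Eq. 9), then flattened (Eq. 10)."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(1, k_max + 1))            # k ~ U{1, ..., K_APA}
    ii, jj = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    P = np.zeros((side, side))
    for _ in range(k):
        mu = rng.uniform(0, side, size=2)          # random center (assumption)
        sig = rng.uniform(1.0, side / 2, size=2)   # axis-aligned spread (assumption)
        G = np.exp(-0.5 * (((ii - mu[0]) / sig[0]) ** 2
                           + ((jj - mu[1]) / sig[1]) ** 2))
        P = np.maximum(P, G)                       # Eq. 8: element-wise max
    x = float(rng.uniform(0, 1)) if cls_token else 0.0
    P_n = P / P.sum() * (1 - x)                    # Eq. 9: normalize the mass
    return P_n.ravel(), x                          # Eq. 10: flatten
```

For DeiT-S (`side=14`), the returned prior has 196 entries summing to $1 - x$, matching the attention row of the classification token.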
Discussion. APA prevents attention disorder, ensuring that synthetic images exhibit more coherent and natural attention patterns, as demonstrated in Fig. 3. As a result, APA selectively enhances the responses in certain regions, effectively emphasizing semantic-discriminative regions within synthetic images and thus promoting discriminative features. Although simple, APA enables synthetic images to align better with the real semantics, as validated by both visual and quantitative evaluations. As shown in Fig. 1a, compared to PSAQ-ViT, features obtained after applying the APA loss are more closely aligned with real images, indicating better semantic alignment. The quantitative results in Tab. 1 further support that APA achieves superior semantic alignment. For example, in Class 1, the similarity for PSAQ-ViT is 0.44, whereas APA achieves a higher similarity of 0.64.
# 3.3.2. Multi-Semantic Reinforcement
Current DFQ methods for ViTs [47, 48, 62] optimize images through global optimization, treating the entire image as a single semantic unit. However, this is affected by low-rank structural regularity [34], where adjacent pixels exhibit strong similarity, leading to dull regions with redundant or non-semantic content for model learning [37]. This issue is further exacerbated by the tokenization mechanism in ViTs, as processing images in fixed-size patches makes dull regions increase at the patch level [19, 70]. Consequently, as shown in Fig. 1b, synthetic images generated by existing methods exhibit large dull regions, resulting in semantic inadequacy [37]. In response, we propose Multi-Semantic Reinforcement (MSR), which applies localized patch optimization to enhance semantic richness by learning local patches with distinct semantics.
Specifically, for a synthetic image $\tilde{I}$ , instead of feeding only the entire image, we also feed its patches and optimize them individually. Initially, we select $m$ non-overlapping patches, where $m$ is chosen randomly from the set $\{1,2,\dots,K_{MSR}\}$ , with $K_{MSR}$ set to 4 in all experiments. These $m$ patches are then cropped and resized to match the model's input dimensions:
$$
\{\tilde {\boldsymbol {I}} _ {M S R} ^ {i} \} _ {i = 1, \dots , m} = \operatorname {r e s i z e} \left(\operatorname {c r o p} _ {m} (\tilde {\boldsymbol {I}})\right), \tag {11}
$$
where $\text{crop}_m(\cdot)$ crops $m$ non-overlapping patches from input, and $\text{resize}(\cdot)$ is the resize function. Each patch, denoted as $\tilde{I}_{MSR}^i$ , is treated as a new image with an assigned semantic target $c^i$ . Note that the gradient is backpropagated only to update the corresponding patch in the original image, leaving the rest of the image unaffected.
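A minimal sketch of the patch selection in Eq. 11; the 2x2 candidate grid and the nearest-neighbor resize are our simplifications of the unspecified crop and resize functions:

```python
import numpy as np

def msr_patches(img, k_msr=4, rng=None):
    """Sample m ~ U{1, ..., K_MSR}, pick m non-overlapping patches from a
    2x2 grid over the (H, W) image, and resize each back to (H, W)."""
    rng = rng or np.random.default_rng()
    H, W = img.shape[:2]
    m = int(rng.integers(1, k_msr + 1))
    g = 2                                          # 2x2 grid (our assumption)
    cells = [(i, j) for i in range(g) for j in range(g)]
    chosen = rng.choice(len(cells), size=m, replace=False)
    patches = []
    for idx in chosen:
        i, j = cells[idx]
        p = img[i * H // g:(i + 1) * H // g, j * W // g:(j + 1) * W // g]
        rows = np.arange(H) * p.shape[0] // H      # nearest-neighbor resize
        cols = np.arange(W) * p.shape[1] // W
        patches.append(p[np.ix_(rows, cols)])
    return patches
```

In the actual method, each returned patch is assigned its own semantic target, and its gradient updates only the corresponding region of the original image.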
Softlabel Learning. The one-hot loss (Eq. 3) only learns the semantics of the target class, making it unsuitable for $\tilde{\pmb{I}}$ under MSR, as its patches $\{\tilde{I}_{MSR}^i\}_{i = 1,\dots ,m}$ contain distinct semantics. In response, we propose Softlabel Learning (SL), which applies multiple semantic targets to accommodate the learning of images augmented by MSR. Specifically, we first sample $Z \in \mathbb{R}^{C} \sim U(0,1)$ , then modify its values by:
$$
\left\{ \begin{array}{l l} Z [ c ^ {i} ] \sim U (\epsilon_ {1}, \epsilon_ {2}), & \text {f o r} \tilde {\boldsymbol {I}} _ {M S R} ^ {i}, \\ Z [ c ^ {1}, \ldots , c ^ {m} ] \sim U (\epsilon_ {1}, \epsilon_ {2}), & \text {f o r} \tilde {\boldsymbol {I}}, \end{array} \right.
$$
where $U(\epsilon_1, \epsilon_2)$ denotes the uniform distribution over the interval $[\epsilon_1, \epsilon_2]$ , $m$ is the number of patches determined in MSR, and $\epsilon_1$ and $\epsilon_2$ control the softness and are empirically set to 5 and 10, respectively, in all experiments. The soft target is defined as $T_s = \mathrm{softmax}(Z)$ , and the SL loss is:
$$
\mathcal {L} ^ {\mathrm {S L}} \left(\tilde {I} / \tilde {I} _ {M S R} ^ {i}\right) = S C E \left(F \left(\tilde {I} / \tilde {I} _ {M S R} ^ {i}\right), T _ {s}\right), \tag {12}
$$
where $SCE(\cdot, \cdot)$ is the soft cross entropy and $F(\cdot)$ returns the predicted probability for its input. SL facilitates smooth learning across semantic targets, ensuring that MSR-augmented images receive consistent rather than conflicting supervision.
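The soft-target construction and the SCE objective can be sketched as follows; this is a NumPy sketch in which `targets` holds the patch labels $c^{1}, \dots, c^{m}$:

```python
import numpy as np

def soft_target(num_classes, targets, eps1=5.0, eps2=10.0, rng=None):
    """Build T_s: background logits drawn from U(0, 1), target-class
    logits boosted to U(eps1, eps2), then softmax."""
    rng = rng or np.random.default_rng()
    z = rng.uniform(0, 1, size=num_classes)
    z[list(targets)] = rng.uniform(eps1, eps2, size=len(targets))
    e = np.exp(z - z.max())                  # numerically stable softmax
    return e / e.sum()

def soft_cross_entropy(pred_logits, t_s):
    """SCE(F(.), T_s): cross entropy against the soft target."""
    p = np.exp(pred_logits - pred_logits.max())
    p /= p.sum()
    return -(t_s * np.log(p + 1e-12)).sum()
```

Because the boosted logits lie well above the background draws, the target classes dominate $T_s$ while the remaining classes keep small non-zero mass, which is what makes the supervision soft.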
Discussion. By leveraging localized patch optimization, MSR ensures that each patch contributes unique semantics, forcing synthetic images to capture diverse features rather than being dominated by large homogeneous dull regions. As demonstrated by the richer and more diverse content and textures in Fig. 1b, MSR effectively reduces dull regions and enhances semantic richness in synthetic images. Moreover, MSR transforms synthetic images into composites of multiple semantic objects rather than a single unit, thereby providing more distinct semantic samples, i.e., $\{\tilde{I}_{MSR}^i\}_{i = 1,\dots ,m}$ , for model training. Unlike traditional cropping used in data augmentation, which aims to improve classification robustness by training with cropped patches labeled with the original class, MSR aims for semantic richness within synthetic images, ultimately enabling accurate data-free quantization.
# 3.4. Overall Pipeline
The overall pipeline consists of two stages: data synthesis and quantized network learning. The first stage uses the proposed SARDFQ to produce synthetic images. The second stage fine-tunes the quantized model using the generated synthetic images.
# 3.4.1. Data Synthesis
In the data synthesis stage, we combine the proposed APA loss of Eq. 7, SL loss of Eq. 12, and TV loss of Eq. 4 to formulate the objective function as follows:
$$
\mathcal {L} _ {G} (\tilde {\boldsymbol {I}}) = \alpha_ {1} \mathcal {L} ^ {\mathrm {A P A}} (\tilde {\boldsymbol {I}}) + \mathcal {L} ^ {\mathrm {S L}} (\tilde {\boldsymbol {I}}) + 0.05 \, \mathcal {L} ^ {\mathrm {T V}} (\tilde {\boldsymbol {I}}), \tag {13}
$$
where $\alpha_{1}$ is a hyperparameter determined by grid search. Note that the weight of the TV loss is fixed to 0.05, following [47], to avoid a cumbersome hyperparameter search.
# 3.4.2. Quantized Network Learning
Recent DFQ methods have introduced PTQ techniques for learning quantized models due to their advantages in speed, memory efficiency, and performance [35, 62]. Thus, following the success of [42, 71], we fine-tune the quantized network block-wise. Specifically, denote $\mathbf{X}_l$ as the outputs of the $l$ -th block of the full-precision model, and $\bar{\mathbf{X}}_l$ as the outputs of the quantized counterpart. The reconstruction loss is defined as:
$$
\mathcal {L} _ {l} = \left\| \mathbf {X} _ {l} - \bar {\mathbf {X}} _ {l} \right\| _ {2}. \tag {14}
$$
Here, $\mathcal{L}_l$ is backpropagated only to update the weights within the $l$ -th block. Note that for a fair comparison, all compared methods adopt the same quantized network learning stage.
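A minimal sketch of the block-wise reconstruction objective of Eq. 14, assuming the block outputs are available as arrays:

```python
import numpy as np

def block_reconstruction_loss(x_fp, x_q):
    """Eq. 14: l2 distance between the full-precision and quantized
    outputs of one block. In the actual pipeline this loss updates only
    the current block's weights before moving to the next block."""
    return float(np.linalg.norm((x_fp - x_q).ravel(), ord=2))
```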
# 4. Experiment
# 4.1. Implementation Details
Models and Tasks. We evaluate the performance of SARDFQ by testing quantized ViT-S/B [20], DeiT-T/S/B [68], and Swin-S/B [54] on the classification task using ImageNet [63]. The pre-trained models are downloaded from the timm library. In the appendix, we further provide results on detection and segmentation tasks.
Comparison methods. We compare our SARDFQ against Gaussian noise, real images, and previous methods including SMI [33], PSAQ-ViT [47], and its subsequent version, PSAQ-ViT V2 [48]. For a fair comparison, we generate synthetic images using their methods and apply our quantized network learning strategy. We use the official code for SMI and PSAQ-ViT to reproduce their images, while PSAQ-ViT V2 is re-implemented by us, as no official code is available.
Experimental settings. All experiments were conducted using the PyTorch framework [60] on a single NVIDIA 3090 GPU. In the data synthesis stage, synthetic images were initialized with standard Gaussian noise, generating 32 images in total. The Adam optimizer [38] with $\beta_{1} = 0.5$ , $\beta_{2} = 0.9$ was used, with learning rates of 0.25 for Swin and 0.2 for others, and a total of 1,000 iterations. For all models, $K_{APA}$ , $K_{MSR}$ , $\epsilon_{1}$ , and $\epsilon_{2}$ were set to 5, 4, 5, and 10, respectively, based on a search with W4A4 DeiT-S. The value of $\alpha_{1}$ was determined by grid search for each model: 1e5 for DeiT-T/S, 1e4 for DeiT-B, 100 for ViT-B and Swin-B, 10 for Swin-S, and 1 for ViT-S. Although further hyperparameter search may improve performance, the current settings already yield superior results. In the quantized network learning stage, following [85], the Adam optimizer with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ was used, with weight decay set to 0 and an initial learning rate of 4e-5, adjusted via cosine decay for 100 iterations. A channel-wise quantizer was used for weights, and a layer-wise quantizer for activations, with all matrix multiplications in ViTs quantized [47-49].

<table><tr><td>Model</td><td>W/A</td><td>Real</td><td>Gaussian noise</td><td>PSAQ-ViT [47]</td><td>PSAQ-ViT V2 [48]</td><td>SMI [33]</td><td>SARDFQ (Ours)</td></tr><tr><td rowspan="3">ViT-S (81.39)</td><td>4/4</td><td>66.57</td><td>6.02</td><td>47.24</td><td>41.53</td><td>24.33 / 29.41</td><td>50.32</td></tr><tr><td>5/5</td><td>76.69</td><td>36.77</td><td>71.59</td><td>68.41</td><td>61.33 / 65.19</td><td>74.31</td></tr><tr><td>6/6</td><td>79.46</td><td>61.20</td><td>77.20</td><td>74.76</td><td>72.95 / 72.46</td><td>78.40</td></tr><tr><td rowspan="3">ViT-B (84.54)</td><td>4/4</td><td>68.16</td><td>0.15</td><td>36.32</td><td>26.32</td><td>35.27 / 19.67</td><td>51.84</td></tr><tr><td>5/5</td><td>79.21</td><td>4.16</td><td>68.48</td><td>67.95</td><td>67.53 / 57.13</td><td>70.70</td></tr><tr><td>6/6</td><td>81.89</td><td>55.18</td><td>76.65</td><td>71.87</td><td>76.33 / 69.82</td><td>79.16</td></tr><tr><td rowspan="3">DeiT-T (72.21)</td><td>4/4</td><td>56.60</td><td>17.43</td><td>47.75</td><td>30.20</td><td>30.14 / 13.18</td><td>52.06</td></tr><tr><td>5/5</td><td>67.09</td><td>43.49</td><td>64.10</td><td>55.16</td><td>56.44 / 39.35</td><td>66.41</td></tr><tr><td>6/6</td><td>69.81</td><td>56.23</td><td>68.37</td><td>62.77</td><td>64.03 / 44.39</td><td>69.73</td></tr><tr><td rowspan="3">DeiT-S (79.85)</td><td>4/4</td><td>68.46</td><td>20.89</td><td>58.28</td><td>45.53</td><td>42.77 / 11.71</td><td>62.29</td></tr><tr><td>5/5</td><td>75.06</td><td>41.06</td><td>71.90</td><td>63.14</td><td>62.88 / 29.13</td><td>74.06</td></tr><tr><td>6/6</td><td>77.87</td><td>65.63</td><td>75.85</td><td>68.85</td><td>71.65 / 37.69</td><td>77.31</td></tr><tr><td rowspan="3">DeiT-B (81.85)</td><td>4/4</td><td>77.07</td><td>47.20</td><td>71.75</td><td>66.43</td><td>65.33 / 59.04</td><td>72.17</td></tr><tr><td>5/5</td><td>79.86</td><td>65.46</td><td>78.45</td><td>76.77</td><td>76.74 / 75.33</td><td>78.72</td></tr><tr><td>6/6</td><td>80.90</td><td>62.79</td><td>80.00</td><td>79.22</td><td>78.81 / 77.66</td><td>80.15</td></tr><tr><td rowspan="3">Swin-S (83.20)</td><td>4/4</td><td>78.12</td><td>31.92</td><td>73.19</td><td>65.55</td><td>65.85</td><td>74.74</td></tr><tr><td>5/5</td><td>80.51</td><td>52.10</td><td>78.15</td><td>74.37</td><td>75.41</td><td>79.56</td></tr><tr><td>6/6</td><td>80.60</td><td>65.66</td><td>79.74</td><td>78.50</td><td>78.25</td><td>80.56</td></tr><tr><td rowspan="3">Swin-B (85.27)</td><td>4/4</td><td>78.80</td><td>30.14</td><td>71.84</td><td>67.42</td><td>65.23</td><td>76.42</td></tr><tr><td>5/5</td><td>82.51</td><td>35.28</td><td>78.50</td><td>77.20</td><td>75.25</td><td>80.82</td></tr><tr><td>6/6</td><td>82.64</td><td>67.37</td><td>82.00</td><td>81.41</td><td>80.30</td><td>83.03</td></tr></table>

Table 2. Quantization results on the ImageNet dataset, with top-1 accuracy $(\%)$ reported. The performance of the full-precision model is listed below the model name. "W/A" denotes the bit-width of weights/activations. "Real" refers to using real images. For SMI [33], we report the performance of using dense and sparse synthetic images as "dense / sparse". Note that for Swin models, we do not provide results for sparse synthetic images, as the sparse generation method of SMI is infeasible for them.
# 4.2. Quantization Results
The quantization results are presented in Tab. 2. Our SARDFQ demonstrates consistent improvements across various quantization bit-width configurations, particularly with low bit-width settings. Specifically, for ViT-S, SARDFQ improves the performance by $3.08\%$ in the W4/A4 setting, $2.72\%$ in the W5/A5 setting, and $1.20\%$ in the W6/A6 setting. For ViT-B, SARDFQ achieves performance gains of $15.52\%$ in the W4/A4 setting, $2.22\%$ in the W5/A5 setting, and $2.51\%$ in the W6/A6 setting. Results on DeiT also demonstrate the effectiveness of SARDFQ. For example, on DeiT-T, SARDFQ shows a marked improvement by increasing top-1 accuracy by $4.31\%$ in the W4/A4 setting, $2.31\%$ in the W5/A5 setting, and $1.36\%$ in the W6/A6 setting. For DeiT-S, SARDFQ enhances top-1 accuracy by $4.01\%$ in the W4/A4 setting, $2.16\%$ in the W5/A5 setting, and $1.46\%$ in the W6/A6 setting. The quantization results of Swin-S/B also affirm the superiority of our SARDFQ in enhancing model accuracy under different quantization configurations. In particular, for Swin-S, the proposed SARDFQ increases the accuracy by $1.55\%$ for the W4/A4 setting, $1.41\%$ for the W5/A5 setting, and $0.82\%$ for the W6/A6 setting, respectively. For Swin-B, the proposed SARDFQ increases the accuracy by $4.58\%$ for the W4/A4 setting, $2.32\%$ for the W5/A5 setting, and $1.03\%$ for the W6/A6 setting, respectively.
# 4.3. Ablation Study
All ablation studies are conducted on the W4A4 DeiT-S.
Analysis of APA, MSR, and SL. We analyze the effectiveness of the proposed APA (Sec. 3.3.1), MSR (Sec. 3.3.2), and SL (Eq. 12) in Tab. 3. Adding APA and SL individually to the baseline increases accuracy. Notably, APA boosts performance from $51.73\%$ to $60.26\%$, confirming its effectiveness in aligning semantics (Sec. 3.3.1). Applying MSR alone slightly decreases accuracy from $51.73\%$ to $50.75\%$, indicating that the one-hot loss is unsuitable for MSR-augmented synthetic images. However, when both MSR and SL are applied, accuracy rises to $56.08\%$, suggesting SL is more compatible with MSR than the one-hot loss. Combining APA with either MSR or SL further improves performance. For example, APA and SL together yield an accuracy of $60.51\%$, and when all three strategies are used, the best performance of $62.29\%$ is achieved.

<table><tr><td>APA</td><td>MSR</td><td>SL</td><td>Acc. (%)</td></tr><tr><td colspan="3">Baseline</td><td>51.73</td></tr><tr><td>✓</td><td></td><td></td><td>60.26</td></tr><tr><td></td><td>✓</td><td></td><td>50.75</td></tr><tr><td></td><td></td><td>✓</td><td>52.02</td></tr><tr><td>✓</td><td>✓</td><td></td><td>61.58</td></tr><tr><td>✓</td><td></td><td>✓</td><td>60.51</td></tr><tr><td></td><td>✓</td><td>✓</td><td>56.08</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>62.29</td></tr></table>
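The intuition behind SL versus the one-hot loss can be illustrated with a generic soft-target cross-entropy: spreading the target mass over several classes mimics supervising an image region with multiple semantic targets. Everything below is a hypothetical sketch, not the exact SL objective of Eq. 12.

```python
import math

def cross_entropy(logits, target_probs):
    """Cross-entropy of softmax(logits) against an arbitrary target
    distribution. A one-hot target gives the standard one-hot loss; a soft
    target spreads supervision over several classes, as SL's multiple
    semantic targets do. (Hypothetical sketch.)"""
    m = max(logits)                      # max-shift for numerical stability
    log_z = math.log(sum(math.exp(z - m) for z in logits))
    return -sum(p * (z - m - log_z) for p, z in zip(target_probs, logits))

logits = [2.0, 1.0, 0.0]
one_hot_loss = cross_entropy(logits, [1.0, 0.0, 0.0])
soft_loss = cross_entropy(logits, [0.6, 0.3, 0.1])
```

The soft target still rewards the dominant class but no longer forces all probability mass onto it, which is what makes it compatible with MSR-augmented images that contain several semantics at once.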
Analysis of priors distribution. Tab. 4a shows the results of using other distributions to formulate the attention priors. The unevenly distributed GMM and Laplace yield comparable performance of $62.29\%$ and $62.16\%$, respectively. Moreover, GMM performs close to priors extracted from real images, indicating that it imitates the attention patterns of real images well.
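Sampling from such a mixture prior can be sketched as follows (1-D for brevity; the paper's attention priors are 2-D maps, and all names and parameter values here are illustrative assumptions):

```python
import random

def sample_gmm(weights, means, stds, n, seed=0):
    """Draw n samples from a 1-D Gaussian mixture: pick a component by its
    mixing weight, then sample from that Gaussian. (Illustrative sketch;
    the paper builds 2-D attention priors from such mixtures.)"""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = rng.choices(range(len(weights)), weights=weights)[0]
        samples.append(rng.gauss(means[k], stds[k]))
    return samples

# Two unevenly weighted modes, e.g. a dominant foreground peak plus a smaller one.
prior = sample_gmm([0.7, 0.3], means=[0.2, 0.8], stds=[0.05, 0.05], n=1000)
```

The uneven mixture weights are what make the resulting priors non-uniform, mirroring how real attention concentrates on a few regions.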

|
| 277 |
+
(a)
|
| 278 |
+
Figure 5. Effect of varying (a) $\alpha_{1}$ and (b) $K_{MSR}$ .
|
| 279 |
+
|
| 280 |
+

|
| 281 |
+
(b)
Analysis of $\alpha_{1}$, $K_{APA}$, and $K_{MSR}$. The $\alpha_{1}$ from Eq. 13 balances the importance of the proposed APA loss during the update of the synthetic images. Fig. 5a shows that incrementally increasing $\alpha_{1}$ improves performance up to the optimum of $62.29\%$ at $\alpha_{1} = 1e5$, while further increases subsequently degrade performance. The $K_{APA}$ in APA is the upper limit on the number of Gaussian distributions used for priors generation. Tab. 4b displays the ablation study for different values of $K_{APA}$; the best accuracy is achieved when $K_{APA} = 5$. The $K_{MSR}$ in MSR is the upper limit on the number of patches. Fig. 5b shows that the optimal performance is achieved at $K_{MSR} = 4$, and using a larger $K_{MSR}$ hurts accuracy. We consider this
Table 3. Influence of the proposed APA, MSR, and SL on accuracy. The baseline adopts the one-hot loss. Applying APA, MSR, and SL yields SARDFQ.
<table><tr><td>Priors Distribution</td><td>Top-1</td></tr><tr><td>GMM</td><td>62.29</td></tr><tr><td>Laplace</td><td>62.16</td></tr><tr><td>Real</td><td>63.19</td></tr></table>
(a)
<table><tr><td>$K_{APA}$</td><td>Top-1</td></tr><tr><td>1</td><td>61.13</td></tr><tr><td>3</td><td>61.52</td></tr><tr><td>5</td><td>62.29</td></tr><tr><td>7</td><td>61.53</td></tr><tr><td>9</td><td>61.05</td></tr></table>
(b)
<table><tr><td>S</td><td>Top-1</td></tr><tr><td>0</td><td>61.96</td></tr><tr><td>L/2</td><td>62.29</td></tr></table>
(c)
<table><tr><td>w. $\frac{l}{L}$</td><td>Top-1</td></tr><tr><td>✓</td><td>62.29</td></tr><tr><td>×</td><td>61.32</td></tr></table>
(d)
Table 4. Effect of varying (a) priors types; (b) $K_{APA}$ ; (c) $S$ ; (d) $\frac{l}{L}$ .
due to the limited patch resolution when using a too-large $K_{MSR}$.
Analysis of APA loss. Here, we conduct an ablation study on the $S$ and the scale $\frac{l}{L}$ in Eq. 7. Tab. 4c presents the effect of varying $S$ in Eq. 7: applying the APA loss to all blocks $(S = 0)$ decreases the top-1 accuracy to $61.96\%$. From Tab. 4d, it can be seen that absorbing the scale $\frac{l}{L}$ in Eq. 7 brings a $0.97\%$ performance gain.
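The roles of $S$ and the scale $\frac{l}{L}$ can be sketched as follows: the APA loss aggregates a per-block alignment term over only the deeper blocks (those with index $l > S$ out of $L$), each weighted by $l/L$ when the depth scale is enabled. This is an illustrative sketch over precomputed per-block terms, not the full Eq. 7; the function name and the assumption that $S$ is a starting-block threshold are ours.

```python
def apa_total(per_block_terms, S, use_scale=True):
    """Aggregate per-block alignment terms a_l (l = 1..L) into an APA-style
    loss: only blocks with l > S contribute, each scaled by l / L when the
    depth scale of Eq. 7 is enabled. (Sketch over given per-block terms.)"""
    L = len(per_block_terms)
    total = 0.0
    for l, a_l in enumerate(per_block_terms, start=1):
        if l > S:
            total += (l / L if use_scale else 1.0) * a_l
    return total

terms = [1.0] * 12                   # e.g. a 12-block ViT with unit terms
all_blocks = apa_total(terms, S=0)   # S = 0: every block contributes
deep_half = apa_total(terms, S=6)    # S = L/2: deeper half only (best in Tab. 4c)
```

With $S = L/2$ the shallow blocks, whose attention is less semantic, are excluded, and the $l/L$ scale further emphasizes the deepest blocks.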
# 5. Limitations
We further discuss some limitations of the proposed SARDFQ, which will guide future research directions. First, although SARDFQ shows substantial performance improvement, a performance gap between SARDFQ and quantization with real data remains, highlighting the need for stronger semantics alignment and reinforcement methods. Second, SARDFQ currently lacks a theoretical foundation. Future work could establish a theoretical framework for SARDFQ, particularly in understanding how APA and MSR influence synthetic images in a formalized manner.
# 6. Conclusion
In this paper, we investigate the DFQ method for ViTs. We first identify that synthetic images generated by existing methods suffer from semantic distortion and inadequacy issues, and propose SARDFQ to address these issues. To mitigate semantic distortion, SARDFQ introduces APA, which guides synthetic images to align with randomly generated structural attention patterns. To tackle semantic inadequacy, SARDFQ incorporates MSR. MSR optimizes different regions of synthetic images with unique semantics, thereby enhancing overall semantic richness. Moreover, SARDFQ employs SL, which adopts multiple semantic targets to ensure seamless learning of images augmented by MSR. Extensive experiments on various ViT models and tasks validate the effectiveness of SARDFQ.
# 7. Acknowledgments
This work was supported by the National Science and Technology Major Project (No. 2022ZD0118202), the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. 624B2119, No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No. 2021J01002, No. 2022J06001).
# References
[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In IEEE/CVF international conference on computer vision (ICCV), pages 6836-6846, 2021. 1, 2
[2] Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuozhu Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, and Haoji Hu. Robustness-guided image synthesis for data-free quantization. arXiv preprint arXiv:2310.03661, 2023. 3
[3] Ron Banner, Yury Nahshan, Daniel Soudry, et al. Post training 4-bit quantization of convolutional networks for rapid-deployment. In Advances in Neural Information Processing Systems (NeurIPS), pages 7950-7958, 2019. 2
[4] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(8):1798-1828, 2013. 4
[5] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token merging: Your vit but faster. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 4
[6] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Zeroq: A novel zero shot quantization framework. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13169-13178, 2020. 2, 3
[7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision (ECCV), pages 213-229. Springer, 2020. 1, 2
[8] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 397-406, 2021. 5
[9] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12299-12310, 2021. 2
[10] Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, and Rongrong Ji. Cf-vit: A general coarse-to-fine method for vision transformer. In AAAI Conference on Artificial Intelligence, pages 7042-7052, 2023. 4
[11] Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, and Ping Luo. Diffrate: Differentiable compression rate for efficient vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17164-17174, 2023. 2
[12] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8126-8135, 2021. 2
[13] Xinrui Chen, Yizhi Wang, Renao Yan, Yiqing Liu, Tian Guan, and Yonghong He. Texq: Zero-shot network quantization with texture feature distribution calibration. In Advances in Neural Information Processing Systems (NeurIPS), 2024. 3, 4, 1
[14] Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, and Jinho Lee. Qimera: Data-free quantization with synthetic boundary supporting samples. In Advances in Neural Information Processing Systems (NeurIPS), pages 14835-14847, 2021. 2, 4, 1
[15] Kanghyun Choi, Hye Yoon Lee, Deokki Hong, Joonsang Yu, Noseong Park, Youngsok Kim, and Jinho Lee. It's all in the teacher: Zero-shot quantization brought closer to the teacher. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8311-8321, 2022. 3
[16] Kanghyun Choi, Hye Yoon Lee, Dain Kwon, SunJong Park, Kyuyeun Kim, Noseong Park, and Jinho Lee. Mimiq: Low-bit data-free quantization of vision transformers. arXiv preprint arXiv:2407.20021, 2024. 2, 3
[17] Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In International Conference on Learning Representations (ICLR), 2020. 4
[18] Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, and Xianglong Liu. Towards accurate posttraining quantization for vision transformer. In 30th ACM International Conference on Multimedia (ACMMM), pages 5380-5388, 2022. 3
[19] Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In International conference on machine learning (ICML), pages 2793-2803. PMLR, 2021. 5
[20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. 1, 2, 4, 6
[21] Hoang Anh Dung, Cuong Pham, Trung Le, Jianfei Cai, and Thanh-Toan Do. Sharpness-aware data generation for zero-shot quantization. In International Conference on Machine Learning (ICML), 2024. 2
[22] Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S. Modha. Learned step size quantization. In International Conference on Learning Representations (ICLR), 2020. 2
[23] Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, and Mingli Song. Contrastive model inversion for data-free knowledge distillation. In Thirty-First International Joint Conference on Artificial Intelligence, (IJCAI), 2021. 2
[24] Natalia Frumkin, Dibakar Gope, and Diana Marculescu. Jumping through local minima: Quantization in the loss landscape of vision transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 16978-16988, 2023. 3
[25] Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4852–4861, 2019. 2
[26] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), pages 2672–2680, 2014. 3
[27] Cong Guo, Yuxian Qiu, Jingwen Leng, Xiaotian Gao, Chen Zhang, Yunxin Liu, Fan Yang, Yuhao Zhu, and Minyi Guo. Squant: On-the-fly data-free quantization via diagonal hessian approximation. In The Eleventh International Conference on Learning Representations (ICLR), 2022. 2, 3
[28] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2
[29] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 45(1):87-110, 2022. 1
[30] Zhiwei Hao, Jianyuan Guo, Ding Jia, Kai Han, Yehui Tang, Chao Zhang, Han Hu, and Yunhe Wang. Learning efficient vision transformers via fine-grained manifold distillation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2
[31] Joakim Bruslund Haurum, Sergio Escalera, Graham W Taylor, and Thomas B Moeslund. Which tokens to use? investigating token reduction in vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 773-783, 2023. 4
[32] Zejiang Hou and Sun-Yuan Kung. Multi-dimensional vision transformer compression via dependency guided gaussian process search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3669-3678, 2022. 2
[33] Zixuan Hu, Yongxian Wei, Li Shen, Zhenyi Wang, Lei Li, Chun Yuan, and Dacheng Tao. Sparse model inversion: Efficient inversion of vision transformers for data-free applications. In International Conference on Machine Learning (ICML), 2024. 2, 3, 4, 6, 7, 1
[34] Jinggang Huang and David Mumford. Statistics of natural images and models. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 541-547. IEEE, 1999. 5
[35] Yongkweon Jeon, Chungman Lee, and Ho-young Kim. Genie: show me the data for quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12064-12073, 2023. 3, 6
[36] Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. ACM Computing Surveys (CSUR), 2021. 1
[37] Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, and Hyun Oh Song. Dataset condensation via efficient synthetic-data parameterization. In International Conference on Machine Learning (ICML), pages 11102-11118. PMLR, 2022. 4, 5
[38] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2014. 6
[39] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018. 2
[40] Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H Li, Yonggang Zhang, Bo Han, and Mingkui Tan. Hard sample matters a lot in zero-shot quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24417-24426, 2023. 3
[41] Yuhang Li, Xin Dong, and Wei Wang. Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks. In International Conference on Learning Representations (ICLR), 2020. 2
[42] Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, and Shi Gu. Brecq: Pushing the limit of post-training quantization by block reconstruction. In International Conference on Learning Representations (ICLR), 2021. 2, 4, 6
[43] Yanjing Li, Sheng Xu, Baochang Zhang, Xianbin Cao, Peng Gao, and Guodong Guo. Q-vit: Accurate and fully quantized low-bit vision transformer. In Advances in Neural Information Processing Systems (NeurIPS), pages 34451-34463, 2022. 2
[44] Yuhang Li, Youngeun Kim, Donghyun Lee, and Priyadarshini Panda. Stableq: Enhancing data-scarce quantization with text-to-image data. arXiv preprint arXiv:2312.05272, 2023. 3
[45] Yuhang Li, Youngeun Kim, Donghyun Lee, Souvik Kundu, and Priyadarshini Panda. Genq: Quantization in low data regimes with generative synthetic data. In European Conference on Computer Vision (ECCV), pages 216-235. Springer, 2024. 3
[46] Zhikai Li and Qingyi Gu. I-vit: Integer-only quantization for efficient vision transformer inference. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 17065-17075, 2023. 2, 3
[47] Zhikai Li, Liping Ma, Mengjuan Chen, Junrui Xiao, and Qingyi Gu. Patch similarity aware data-free quantization for vision transformers. In European Conference on Computer Vision (ECCV), pages 154-170. Springer, 2022. 1, 2, 3, 4, 5, 6, 7
[48] Zhikai Li, Mengjuan Chen, Junrui Xiao, and Qingyi Gu. Psaq-vit v2: Toward accurate and general data-free quantization for vision transformers. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023. 2, 3, 5, 6, 7, 1
[49] Zhikai Li, Junrui Xiao, Lianwei Yang, and Qingyi Gu. Repqvit: Scale reparameterization for post-training quantization of vision transformers. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 17227-17236, 2023. 3, 7
[50] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In IEEE/CVF international conference on computer vision (ICCV), pages 1833-1844, 2021. 2
[51] Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, and Shuchang Zhou. Fq-vit: Post-training quantization for fully quantized vision transformer. In Thirty-First International Joint Conference on Artificial Intelligence, (IJCAI), pages 1173–1179, 2022. 3
[52] Shih-Yang Liu, Zechun Liu, and Kwang-Ting Cheng. Oscillation-free quantization for low-bit vision transformers. In International Conference on Machine Learning (ICML), pages 21813-21824, 2023. 2
[53] Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, and Shanghang Zhang. Noisyquant: Noisy bias-enhanced post-training activation quantization for vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20321-20330, 2023. 3
[54] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE/CVF international conference on computer vision (ICCV), pages 10012-10022, 2021. 2, 6
[55] Zhenhua Liu, Yunhe Wang, Kai Han, Wei Zhang, Siwei Ma, and Wen Gao. Post-training quantization for vision transformer. In Advances in Neural Information Processing Systems (NeurIPS), pages 28092-28103, 2021. 2, 3
[56] Yiwei Ma, Jiayi Ji, Xiaoshuai Sun, Yiyi Zhou, and Rongrong Ji. Towards local visual modeling for image captioning. Pattern Recognition, 138:109420, 2023. 1
[57] Sachin Mehta and Mohammad Rastegari. Mobilevit: Lightweight, general-purpose, and mobile-friendly vision transformer. In International Conference on Learning Representations (ICLR), 2022. 2
[58] Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In Advances in Neural Information Processing Systems (NeurIPS), pages 23296-23308, 2021. 4
[59] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 3163-3172, 2021. 2
[60] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pages 8026-8037, 2019. 6
[61] Biao Qian, Yang Wang, Richang Hong, and Meng Wang. Adaptive data-free quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7960-7968, 2023. 3
[62] Akshit Ramachandran, Souvik Kundu, and Tushar Krishna. Clamp-vit: contrastive data-free learning for adaptive posttraining quantization of vits. In European Conference on Computer Vision (ECCV). Springer, 2024. 2, 3, 4, 5, 6, 1
[63] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115:211-252, 2015. 6
[64] Fahad Shamshad, Salman Khan, Syed Waqas Zamir, Muhammad Haris Khan, Munawar Hayat, Fahad Shahbaz Khan, and Huazhu Fu. Transformers in medical imaging: A survey. Medical Image Analysis, page 102802, 2023. 2
[65] Huixin Sun, Runqi Wang, Yanjing Li, Xianbin Cao, Xiaolong Jiang, Yao Hu, and Baochang Zhang. P4q: Learning to prompt for quantization in visual-language models. arXiv preprint arXiv:2409.17634, 2024. 2
[66] Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, Dacheng Tao, and Chang Xu. Manifold regularized dynamic network pruning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5018-5028, 2021. 2
[67] Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, and Dacheng Tao. Patch slimming for efficient vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12165-12174, 2022. 2
[68] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning (ICML), pages 10347-10357. PMLR, 2021. 1, 2, 6
[69] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research (JMLR), 9:2579-2605, 2008. 1
[70] Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Anti-oversmoothing in deep vision transformers via the fourier domain analysis: From theory to practice. arXiv preprint arXiv:2203.05962, 2022. 5
[71] Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, and Fengwei Yu. Qdrop: Randomly dropping quantization for extremely low-bit post-training quantization. In International Conference on Learning Representations (ICLR), 2022. 6
[72] Kan Wu, Houwen Peng, Minghao Chen, Jianlong Fu, and Hongyang Chao. Rethinking and improving relative position encoding for vision transformer. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 10033-10041, 2021. 2
[73] Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Tinyvit: Fast pretraining distillation for small vision transformers. In European Conference on Computer Vision (ECCV), pages 68-85, 2022. 2
[74] Xijie Huang, Zhiqiang Shen, and Kwang-Ting Cheng. Variation-aware vision transformer quantization. arXiv preprint arXiv:2307.00331, 2023. 2
[75] Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, and Mingkui Tan. Generative low-bitwidth data free quantization. In European Conference on Computer Vision (ECCV), pages 1-17, 2020. 3
[76] Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, and Mingkui Tan. Generative low-bitwidth data free quantization. In European Conference on Computer Vision (ECCV), pages 1-17. Springer, 2020. 2
[77] Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via deepinversion. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8715-8724, 2020. 2, 3, 1
[78] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. 4
[79] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 818-833, 2014. 4
[80] Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Minivit: Compressing vision transformers with weight multiplexing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12145-12154, 2022. 2
[81] Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qinghua Yan, Renshuai Tao, Yuhang Li, Fengwei Yu, and Xianglong Liu. Diversifying sample generation for accurate data-free quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15658-15667, 2021. 2, 3
[82] Dehua Zheng, Wenhui Dong, Hailin Hu, Xinghao Chen, and Yunhe Wang. Less is more: Focus attention for efficient detr. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 6674-6683, 2023. 2
[83] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 6881-6890, 2021. 2
[84] Yunshan Zhong, Mingbao Lin, Gongrui Nan, Jianzhuang Liu, Baochang Zhang, Yonghong Tian, and Rongrong Ji. Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12339-12348, 2022. 2, 4, 1
[85] Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, and Rongrong Ji. I&S-vit: An inclusive & stable method for pushing the limit of post-training vits quantization. arXiv preprint arXiv:2311.10126, 2023. 6
[86] Yunshan Zhong, Jiawei Hu, You Huang, Yuxin Zhang, and Rongrong Ji. Erq: Error reduction for post-training quantization of vision transformers. In International Conference on Machine Learning (ICML), 2024. 3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Semantic Causality-Aware Vision-Based 3D Occupancy Prediction",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
163,
|
| 8 |
+
130,
|
| 9 |
+
833,
|
| 10 |
+
152
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Dubing Chen $^{1}$ , Huan Zheng $^{1}$ , Yucheng Zhou $^{1}$ , Xianfei Li $^{2}$ , Wenlong Liao $^{2}$ , Tao He $^{2}$ , Pai Peng $^{2}$ , Jianbing Shen $^{1\\boxtimes}$ $^{1}$ SKL-IOTSC, CIS, University of Macau $^{2}$ COWAROBOT Co. Ltd. https://github.com/cdb342/CausalOcc",
|
| 17 |
+
"bbox": [
|
| 18 |
+
223,
|
| 19 |
+
180,
|
| 20 |
+
774,
|
| 21 |
+
252
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Abstract",
|
| 28 |
+
"text_level": 1,
|
| 29 |
+
"bbox": [
|
| 30 |
+
246,
|
| 31 |
+
286,
|
| 32 |
+
326,
|
| 33 |
+
301
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "Vision-based 3D semantic occupancy prediction is a critical task in 3D vision that integrates volumetric 3D reconstruction with semantic understanding. Existing methods, however, often rely on modular pipelines. These modules are typically optimized independently or use pre-configured inputs, leading to cascading errors. In this paper, we address this limitation by designing a novel causal loss that enables holistic, end-to-end supervision of the modular 2D-to-3D transformation pipeline. Grounded in the principle of 2D-to-3D semantic causality, this loss regulates the gradient flow from 3D voxel representations back to the 2D features. Consequently, it renders the entire pipeline differentiable, unifying the learning process and making previously non-trainable components fully learnable. Building on this principle, we propose the Semantic Causality-Aware 2D-to-3D Transformation, which comprises three components guided by our causal loss: Channel-Groupled Lifting for adaptive semantic mapping, Learnable Camera Offsets for enhanced robustness against camera perturbations, and Normalized Convolution for effective feature propagation. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the Occ3D benchmark, demonstrating significant robustness to camera perturbations and improved 2D-to-3D semantic consistency.",
|
| 40 |
+
"bbox": [
|
| 41 |
+
86,
|
| 42 |
+
335,
|
| 43 |
+
483,
|
| 44 |
+
698
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "1. Introduction",
|
| 51 |
+
"text_level": 1,
|
| 52 |
+
"bbox": [
|
| 53 |
+
91,
|
| 54 |
+
728,
|
| 55 |
+
220,
|
| 56 |
+
744
|
| 57 |
+
],
|
| 58 |
+
"page_idx": 0
|
| 59 |
+
},
|
| 60 |
+
{
|
| 61 |
+
"type": "text",
|
| 62 |
+
"text": "Predicting dense 3D semantic occupancy is a fundamental task in 3D vision, providing a fine-grained voxel representation of scene geometry and semantics [39, 40, 43, 47]. The challenge of performing this prediction from vision alone,",
|
| 63 |
+
"bbox": [
|
| 64 |
+
89,
|
| 65 |
+
753,
|
| 66 |
+
483,
|
| 67 |
+
816
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "image",
|
| 73 |
+
"img_path": "images/688f0050e8c1b14bbd9adc04a36708ea013cc74e6c54793c41befb5f3f624927.jpg",
|
| 74 |
+
"image_caption": [
|
| 75 |
+
"(a) An Example of Inaccurate 2D-to-3D Transformation"
|
| 76 |
+
],
|
| 77 |
+
"image_footnote": [],
|
| 78 |
+
"bbox": [
|
| 79 |
+
519,
|
| 80 |
+
285,
|
| 81 |
+
898,
|
| 82 |
+
392
|
| 83 |
+
],
|
| 84 |
+
"page_idx": 0
|
| 85 |
+
},
|
| 86 |
+
{
|
| 87 |
+
"type": "image",
|
| 88 |
+
"img_path": "images/716ae459e91b2d010789cd4c87e05ea9d2b792c95a6dc3f8e68044f365e20358.jpg",
|
| 89 |
+
"image_caption": [
|
| 90 |
+
"(b) Modular 2D-to-3D Transformation"
|
| 91 |
+
],
|
| 92 |
+
"image_footnote": [],
|
| 93 |
+
"bbox": [
|
| 94 |
+
521,
|
| 95 |
+
411,
|
| 96 |
+
898,
|
| 97 |
+
467
|
| 98 |
+
],
|
| 99 |
+
"page_idx": 0
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"type": "image",
|
| 103 |
+
"img_path": "images/6bc752e1a1159bbd1d1122c17f2aedd2d4b26b23bcdbc93afbee95d150546f4f.jpg",
|
| 104 |
+
"image_caption": [
|
| 105 |
+
"(c) Our End-to-End Supervised 2D-to-3D Transformation",
|
| 106 |
+
"Figure 1. (a) Illustrates a Visual Analysis of Semantic Ambiguity in VisionOcc. Inaccurate 2D-to-3D transformations may lead to positional shifts, misaligning supervision signals and resulting in semantic ambiguity. (b) Depicts the Conventional Modular 2D-to-3D Transformation Paradigm [7, 13, 27, 35], which employs depth supervision for geometry estimation, pre-calibrated camera parameters, and fixed mapping for lifting. (c) Presents Our Holistic, End-to-End Supervised 2D-to-3D Transformation Paradigm, which eliminates the need for separate modular supervision or pre-calibration, enabling unified error propagation to supervise all components."
|
| 107 |
+
],
|
| 108 |
+
"image_footnote": [],
|
| 109 |
+
"bbox": [
|
| 110 |
+
522,
|
| 111 |
+
481,
|
| 112 |
+
898,
|
| 113 |
+
534
|
| 114 |
+
],
|
| 115 |
+
"page_idx": 0
|
| 116 |
+
},
|
| 117 |
+
{
|
| 118 |
+
"type": "text",
|
| 119 |
+
"text": "an approach known as vision-based 3D semantic occupancy prediction (VisionOcc), has recently become a focal point of research. By leveraging only commodity cameras, VisionOcc is pivotal for a wide range of 3D applications, serving as a comprehensive digital replica of the environment for tasks like analysis, simulation, and interactive visualization [7, 9, 14, 27, 39, 40, 44].",
|
| 120 |
+
"bbox": [
|
| 121 |
+
511,
|
| 122 |
+
731,
|
| 123 |
+
906,
|
| 124 |
+
837
|
| 125 |
+
],
|
| 126 |
+
"page_idx": 0
|
| 127 |
+
},
|
| 128 |
+
{
|
| 129 |
+
"type": "text",
|
| 130 |
+
"text": "VisionOcc unifies the challenges of feed-forward 3D reconstruction and dense semantic understanding. Existing pipelines typically decompose this task into two metaphases [7, 14, 27, 30, 35]. The initial 2D-to-3D transfor",
|
| 131 |
+
"bbox": [
|
| 132 |
+
511,
|
| 133 |
+
839,
|
| 134 |
+
906,
|
| 135 |
+
901
|
| 136 |
+
],
|
| 137 |
+
"page_idx": 0
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"type": "header",
|
| 141 |
+
"text": "CVF",
|
| 142 |
+
"bbox": [
|
| 143 |
+
106,
|
| 144 |
+
2,
|
| 145 |
+
181,
|
| 146 |
+
42
|
| 147 |
+
],
|
| 148 |
+
"page_idx": 0
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"type": "header",
|
| 152 |
+
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
|
| 153 |
+
"bbox": [
|
| 154 |
+
238,
|
| 155 |
+
0,
|
| 156 |
+
807,
|
| 157 |
+
46
|
| 158 |
+
],
|
| 159 |
+
"page_idx": 0
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"type": "page_footnote",
|
| 163 |
+
"text": "$\\boxtimes$ Corresponding author: Jianbing Shen. This work was supported in part by the Science and Technology Development Fund of Macau SAR (FDCT) under grants 0102/2023/RIA2 and 0154/2022/A3 and 001/2024/SKL and CG2025-IOTSC, the University of Macau SRG2022-00023-IOTSC grant, and the Jiangyin Hi-tech Industrial Development Zone under the Taihu Innovation Scheme (EF2025-00003-SKL-IOTSC).",
|
| 164 |
+
"bbox": [
|
| 165 |
+
89,
|
| 166 |
+
827,
|
| 167 |
+
483,
|
| 168 |
+
898
|
| 169 |
+
],
|
| 170 |
+
"page_idx": 0
|
| 171 |
+
},
|
| 172 |
+
{
|
| 173 |
+
"type": "page_number",
|
| 174 |
+
"text": "24878",
|
| 175 |
+
"bbox": [
|
| 176 |
+
478,
|
| 177 |
+
944,
|
| 178 |
+
517,
|
| 179 |
+
955
|
| 180 |
+
],
|
| 181 |
+
"page_idx": 0
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"type": "text",
|
| 185 |
+
"text": "mation uses large receptive field operators like view lifting [7, 37] or cross attention [14, 27] to construct an initial 3D feature volume. Subsequently, a 3D representation learning phase employs operators with a small receptive field (e.g., 3D convolutions or local self-attention) to refine 3D features and produce the final prediction. Our work targets the 2D-to-3D transformation phase, which is more critical and error-prone. Fig. 1a showcases a primary failure mode where features of one class (e.g., a 2D 'car') are erroneously transformed to the 3D location of another (e.g., a 'tree'). This creates a flawed learning objective, forcing the model to learn a spurious association between 'car' features and a 'tree' label. Such semantic ambiguity is a principal obstacle to achieving high performance.",
|
| 186 |
+
"bbox": [
|
| 187 |
+
89,
|
| 188 |
+
90,
|
| 189 |
+
480,
|
| 190 |
+
301
|
| 191 |
+
],
|
| 192 |
+
"page_idx": 1
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"type": "text",
|
| 196 |
+
"text": "The prevailing VisionOcc methods, typically based on Lift-Splat-Shoot (LSS), employ a modular approach for the 2D-to-3D transformation (Fig. 1b) [7, 35, 37]. This involves supervising geometry with a proxy depth loss while relying on fixed, pre-calibrated camera parameters and a static lifting map. However, this modularity raises critical questions about robustness and optimality. First, it is susceptible to compounding errors; for example, the reliance on fixed camera parameters makes the system vulnerable to real-world perturbations like camera jitter during motion. More fundamentally, the optimality of such proxy supervision is questionable. An intermediate representation ideal for depth estimation may not be optimal for the final semantic occupancy task, inherently limiting the transformation's expressive power due to this objective misalignment. This motivates our central research question: Can we devise an end-to-end supervision framework<sup>1</sup> that holistically optimizes the entire 2D-to-3D transformation, enabling unified semantic-aware error backpropagation and allowing traditionally fixed modules to become fully learnable?",
|
| 197 |
+
"bbox": [
|
| 198 |
+
91,
|
| 199 |
+
303,
|
| 200 |
+
482,
|
| 201 |
+
604
|
| 202 |
+
],
|
| 203 |
+
"page_idx": 1
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"type": "text",
|
| 207 |
+
"text": "We approach this problem from a causal perspective. In VisionOcc, the 2D image semantics are the \"cause\" of the final 3D semantic \"effect\". Semantic misalignment arises from disrupted information flow from cause to effect (Fig. 1a). Therefore, instead of correcting the erroneous output, we propose to directly regularize the information flow itself. We posit that a 3D prediction for a given class should be influenced predominantly by 2D image regions of that same class. To enforce this information flow, we leverage gradients as a proxy, inspired by prior work [18, 42, 53]. For each semantic class, the gradient of its aggregated 3D features is computed w.r.t. the 2D feature map, producing a saliency-like map of 2D influence. This map is then directly supervised with the ground truth 2D segmentation mask. As shown in Fig. 1c, this establishes a principled, end-to-end supervision signal for the 2D-to-3D transformation, enabling holistic optimization of all its components.",
|
| 208 |
+
"bbox": [
|
| 209 |
+
89,
|
| 210 |
+
606,
|
| 211 |
+
482,
|
| 212 |
+
863
|
| 213 |
+
],
|
| 214 |
+
"page_idx": 1
|
| 215 |
+
},
|
| 216 |
+
{
|
| 217 |
+
"type": "text",
|
| 218 |
+
"text": "To fully leverage our end-to-end supervision, we introduce a more expressive and learnable 2D-to-3D view transformation, termed the Semantic Causality-Aware Transformation (SCAT). A key challenge is that directly supervising gradients is inherently unstable. Therefore, the entire SCAT module is designed to constrain its gradient flow to a stable [0, 1] range. Specifically, SCAT introduces three targeted designs: i) Channel-Grouped Lifting: To better disentangle semantics, we move beyond LSS's uniform weighting and apply distinct learnable weights to different groups of feature channels. ii) Learnable Camera Offsets: To mitigate motion-induced pose errors, we introduce learnable offsets to the camera parameters, which are implicitly supervised by the 2D-3D semantic consistency enforced by our causal loss. iii) Normalized Convolution: Finally, we employ a normalized convolution to densify the sparse 3D features from LSS [28], ensuring this final step also adheres to our global gradient stability requirement.",
|
| 219 |
+
"bbox": [
|
| 220 |
+
511,
|
| 221 |
+
90,
|
| 222 |
+
903,
|
| 223 |
+
362
|
| 224 |
+
],
|
| 225 |
+
"page_idx": 1
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"type": "text",
|
| 229 |
+
"text": "Our contributions are as follows: i) We systematically analyze the 2D-to-3D transformation in VisionOCC, identifying a critical failure mode we term semantic ambiguity. We provide a theoretical analysis proving how the modularity of prior methods leads to error propagation, offering clear guidance for future work. ii) To address these problems, we propose the Causal Loss that directly regularizes the information flow of the 2D-to-3D transformation. This enables true end-to-end optimization of all constituent modules, mitigating error accumulation and making previously fixed components, such as camera parameters, fully learnable. iii) We instantiate our principles in the Semantic Causality-Aware Transformation, a novel 2D-to-3D transformation architecture. SCAT incorporates Channel-Groupled Lifting, Learnable Camera Offsets, and Normalized Convolution to explicitly tackle the challenges of semantic confusion, camera perturbations, and limited learnability. iv) Extensive experiments show our method significantly boosts existing models, achieving a $3.2\\%$ absolute mIoU gain on BEVDet. Furthermore, it demonstrates superior robustness to camera perturbations, reducing the relative performance drop on BEVDet from a severe $-32.2\\%$ to a mere $-7.3\\%$ .",
|
| 230 |
+
"bbox": [
|
| 231 |
+
511,
|
| 232 |
+
364,
|
| 233 |
+
903,
|
| 234 |
+
710
|
| 235 |
+
],
|
| 236 |
+
"page_idx": 1
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"type": "text",
|
| 240 |
+
"text": "2. Related Work",
|
| 241 |
+
"text_level": 1,
|
| 242 |
+
"bbox": [
|
| 243 |
+
511,
|
| 244 |
+
729,
|
| 245 |
+
653,
|
| 246 |
+
744
|
| 247 |
+
],
|
| 248 |
+
"page_idx": 1
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"type": "text",
|
| 252 |
+
"text": "2.1. Semantic Scene Completion",
|
| 253 |
+
"text_level": 1,
|
| 254 |
+
"bbox": [
|
| 255 |
+
511,
|
| 256 |
+
756,
|
| 257 |
+
761,
|
| 258 |
+
771
|
| 259 |
+
],
|
| 260 |
+
"page_idx": 1
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"type": "text",
|
| 264 |
+
"text": "Semantic Scene Completion (SSC) [10, 20, 24, 31, 51] refers to the task of simultaneously predicting both the occupancy and semantic labels of a scene. Existing methods can be classified into indoor and outdoor approaches based on the scene type, with the former focusing on occupancy and semantic label prediction in controlled environments [10, 20, 31], while the latter shifts towards more complex outdoor settings, particularly in the context of au",
|
| 265 |
+
"bbox": [
|
| 266 |
+
511,
|
| 267 |
+
779,
|
| 268 |
+
903,
|
| 269 |
+
900
|
| 270 |
+
],
|
| 271 |
+
"page_idx": 1
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"type": "page_footnote",
|
| 275 |
+
"text": "<sup>1</sup>Here, \"end to end\" refers to a unified supervision scheme for a process, distinct from the concept of a monolithic network architecture.",
|
| 276 |
+
"bbox": [
|
| 277 |
+
89,
|
| 278 |
+
875,
|
| 279 |
+
482,
|
| 280 |
+
900
|
| 281 |
+
],
|
| 282 |
+
"page_idx": 1
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"type": "page_number",
|
| 286 |
+
"text": "24879",
|
| 287 |
+
"bbox": [
|
| 288 |
+
478,
|
| 289 |
+
944,
|
| 290 |
+
517,
|
| 291 |
+
955
|
| 292 |
+
],
|
| 293 |
+
"page_idx": 1
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"type": "text",
|
| 297 |
+
"text": "tonomous driving [1, 6]. The core principle of SSC lies in its ability to infer the unseen, effectively bridging gaps in incomplete observations with accurate semantic understanding. MonoScene [6] introduces a 3D SSC framework that infers dense geometry and semantics from a single monocular RGB image. VoxFormer [24] presents a Transformer-based semantic scene completion framework that generates complete 3D volumetric semantics from 2D images by first predicting sparse visible voxel queries and then densifying them through self-attention with a masked autoencoder design. OccFormer [51] introduces a dual-path transformer network for 3D semantic occupancy prediction, efficiently processing camera-generated 3D voxel features through local and global pathways, and enhancing occupancy decoding with preserve-pooling and class-guided sampling to address sparsity and class imbalance.",
|
| 298 |
+
"bbox": [
|
| 299 |
+
89,
|
| 300 |
+
90,
|
| 301 |
+
483,
|
| 302 |
+
332
|
| 303 |
+
],
|
| 304 |
+
"page_idx": 2
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"type": "text",
|
| 308 |
+
"text": "2.2. Vision-based 3D Occupancy Prediction",
|
| 309 |
+
"text_level": 1,
|
| 310 |
+
"bbox": [
|
| 311 |
+
89,
|
| 312 |
+
340,
|
| 313 |
+
426,
|
| 314 |
+
358
|
| 315 |
+
],
|
| 316 |
+
"page_idx": 2
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"type": "text",
|
| 320 |
+
"text": "Vision-based 3D Occupancy Prediction [9, 14-16, 34, 38-40, 43?] aims to predict the spatial and semantic features of 3D voxel grids surrounding an autonomous vehicle from image data. This task is closely related to SSC, emphasizing the importance of multi-perspective joint perception for effective autonomous navigation. TPVFormer [14] is prior work that lifts image features into the 3D TPV space by leveraging an attention mechanism [26, 41]. Different from TPVFormer, which relies on sparse point clouds for supervision, subsequent studies, including OccNet [40], SurroundOcc [44], Occ3D [39], and OpenOccupancy [43], have developed denser occupancy annotations by incorporating temporal information or instance-level labels. Methods such as BEVDet [13], FBOcc [27], COTR [35], and ALOcc [7] leverage depth-based LSS [22, 23, 25, 37] for explicit geometric transformation, demonstrating strong performance. Some methods [2, 4, 17, 36, 46, 49, 52] have explored rendering-based methods that utilize 2D signal supervision, thereby bypassing the need for 3D annotations. Furthermore, recent research like [3, 7, 8, 21, 29, 32, 40] introduced 3D occupancy flow prediction, which addresses the movement of foreground objects in dynamic scenes by embedding 3D flow information to capture per-voxel dynamics. Unlike the above methods, we analyze the 2D-to-3D transformation process from the perspectives of error propagation and semantic causal consistency, proposing a novel approach that enhances causal consistency.",
|
| 321 |
+
"bbox": [
|
| 322 |
+
91,
|
| 323 |
+
363,
|
| 324 |
+
483,
|
| 325 |
+
772
|
| 326 |
+
],
|
| 327 |
+
"page_idx": 2
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"type": "text",
|
| 331 |
+
"text": "3. Method",
|
| 332 |
+
"text_level": 1,
|
| 333 |
+
"bbox": [
|
| 334 |
+
89,
|
| 335 |
+
784,
|
| 336 |
+
181,
|
| 337 |
+
799
|
| 338 |
+
],
|
| 339 |
+
"page_idx": 2
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"type": "text",
|
| 343 |
+
"text": "Preliminary. VisionOcc predicts a dense 3D semantic occupancy grid from surround-view images, modeled as a causal dependency chain as shown in Fig. 2. This process involves key variables: input image $\\mathbf{I}$ , estimated geometry $\\mathbf{G}$ , camera parameters $P$ (intrinsic & extrinsic), potential camera parameter errors $e_{P}$ , intermediate 3D fea",
|
| 344 |
+
"bbox": [
|
| 345 |
+
89,
|
| 346 |
+
809,
|
| 347 |
+
483,
|
| 348 |
+
900
|
| 349 |
+
],
|
| 350 |
+
"page_idx": 2
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"type": "image",
|
| 354 |
+
"img_path": "images/e6dc0ed532564f3ef101e617c132dcadb62b9d4e31e01db2e726f7aeeb832c15.jpg",
|
| 355 |
+
"image_caption": [
|
| 356 |
+
"Figure 2. The Causal Structure of VisionOcc. It illustrates the dependency chain from the image input $\\mathbf{I}$ to the semantic occupancy output $\\mathbf{O}$ . $\\mathbf{G}$ : geometry for 2D-to-3D transformation. $P$ : camera intrinsic and extrinsic. $\\mathbf{L}$ : intermediate 3D feature. $e_P$ : errors in camera parameters."
|
| 357 |
+
],
|
| 358 |
+
"image_footnote": [],
|
| 359 |
+
"bbox": [
|
| 360 |
+
517,
|
| 361 |
+
88,
|
| 362 |
+
898,
|
| 363 |
+
186
|
| 364 |
+
],
|
| 365 |
+
"page_idx": 2
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"type": "text",
|
| 369 |
+
"text": "ture $\\mathbf{L}$ , and final occupancy output $\\mathbf{O}$ . In LSS-based methods [7, 13, 35], the pipeline starts with the image backbone extracting features $\\mathbf{f}_i = F_i(\\mathbf{I}), \\mathbf{f}_i \\in \\mathbb{R}^{U,V,C}$ ; geometry $\\mathbf{G} \\in \\mathbb{R}^{U,V,D}$ is predicted as a probability distribution over discretized depth bins $D$ , i.e., $\\mathbf{G} = F_g(\\mathbf{f}_i)$ ; 2D features are then transformed to 3D via an outer product $\\mathbf{f}_L' = \\mathbf{G} \\otimes \\mathbf{f}_i, \\mathbf{f}_L' \\in \\mathbb{R}^{U,V,D,C}$ ; camera parameters $P$ map these to voxel coordinates $R_P(u,v,d) \\to (h,w,z) \\in [0,H-1] \\times [0,W-1] \\times [0,Z-1]$ , yielding $\\mathbf{f}_L \\in \\mathbb{R}^{H \\times W \\times Z}$ , where $H \\times W \\times Z$ defines the occupancy grid resolution; finally, $\\mathbf{L}$ is decoded to produce the semantic occupancy output: $\\mathbf{O} = F_o(\\mathbf{f}_L), \\mathbf{O} \\in \\mathbb{R}^{H \\times W \\times Z \\times S}$ , where $F_o$ is the decoding function and $S$ is the number of semantic classes. The prediction $\\mathbf{O}$ is supervised by the ground-truth $\\tilde{\\mathbf{O}}$ .",
|
| 370 |
+
"bbox": [
|
| 371 |
+
511,
|
| 372 |
+
268,
|
| 373 |
+
906,
|
| 374 |
+
479
|
| 375 |
+
],
|
| 376 |
+
"page_idx": 2
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"type": "text",
|
| 380 |
+
"text": "3.1. Error Propagation in Depth-Based LSS",
|
| 381 |
+
"text_level": 1,
|
| 382 |
+
"bbox": [
|
| 383 |
+
511,
|
| 384 |
+
489,
|
| 385 |
+
852,
|
| 386 |
+
506
|
| 387 |
+
],
|
| 388 |
+
"page_idx": 2
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "text",
|
| 392 |
+
"text": "Theorem 1. In Depth-Based LSS methods with a fixed $2D$ -to-3D mapping $M_{\\text{fixed}}$ , inherent mapping error $\\delta M$ leads to gradient deviations, preventing convergence to an $\\epsilon$ -optimal solution. This is formalized as:",
|
| 393 |
+
"bbox": [
|
| 394 |
+
511,
|
| 395 |
+
511,
|
| 396 |
+
906,
|
| 397 |
+
571
|
| 398 |
+
],
|
| 399 |
+
"page_idx": 2
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "equation",
|
| 403 |
+
"text": "\n$$\nM _ {f i x e d} = M _ {i d e a l} + \\delta M \\Longrightarrow \\nabla_ {\\theta} L _ {L S S} \\neq \\nabla_ {\\theta} L _ {i d e a l}, \\tag {1}\n$$\n",
|
| 404 |
+
"text_format": "latex",
|
| 405 |
+
"bbox": [
|
| 406 |
+
521,
|
| 407 |
+
583,
|
| 408 |
+
906,
|
| 409 |
+
599
|
| 410 |
+
],
|
| 411 |
+
"page_idx": 2
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"type": "text",
|
| 415 |
+
"text": "where $M_{ideal}$ is the ideal mapping and $L_{ideal}$ is the loss function using $M_{ideal}$ .",
|
| 416 |
+
"bbox": [
|
| 417 |
+
511,
|
| 418 |
+
611,
|
| 419 |
+
905,
|
| 420 |
+
642
|
| 421 |
+
],
|
| 422 |
+
"page_idx": 2
|
| 423 |
+
},
|
| 424 |
+
{
|
| 425 |
+
"type": "text",
|
| 426 |
+
"text": "Proof. Mapping Error Quantification: Let $\\mathbf{x} \\in \\mathbb{R}^2$ be 2D pixel coordinates and $d(\\mathbf{x})$ be the ground truth depth. Estimated depth is $\\hat{d}(\\mathbf{x}) = d(\\mathbf{x}) + \\epsilon_d(\\mathbf{x})$ , with $\\epsilon_d(\\mathbf{x})$ as depth error. Let $\\mathbf{K}_{ideal} \\in \\mathbb{R}^{3 \\times 3}$ be the ideal camera intrinsics, and $\\mathbf{K} = \\mathbf{K}_{ideal} + \\epsilon_K$ be the estimated camera intrinsics with error $\\epsilon_K$ . The fixed mapping is $M_{fixed} = M_{ideal} + \\delta M$ , where $\\delta M$ encompasses errors from various sources, including depth estimation error $\\epsilon_d(\\mathbf{x})$ and camera extrinsic error $\\epsilon_K$ . We assume the total mapping error is bounded $\\| \\delta M \\|_F \\leq \\Delta_M < \\infty$ . The 3D coordinates are:",
|
| 427 |
+
"bbox": [
|
| 428 |
+
511,
|
| 429 |
+
650,
|
| 430 |
+
906,
|
| 431 |
+
804
|
| 432 |
+
],
|
| 433 |
+
"page_idx": 2
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"type": "equation",
|
| 437 |
+
"text": "\n$$\n\\mathbf {X} = M _ {\\text {f i x e d}} (\\mathbf {x}, \\hat {d} (\\mathbf {x}), \\mathbf {K}) \\tag {2}\n$$\n",
|
| 438 |
+
"text_format": "latex",
|
| 439 |
+
"bbox": [
|
| 440 |
+
596,
|
| 441 |
+
816,
|
| 442 |
+
903,
|
| 443 |
+
833
|
| 444 |
+
],
|
| 445 |
+
"page_idx": 2
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "equation",
|
| 449 |
+
"text": "\n$$\n\\mathbf {X} _ {\\text {i d e a l}} = M _ {\\text {i d e a l}} (\\mathbf {x}, d (\\mathbf {x}), \\mathbf {K} _ {\\text {i d e a l}}) \\tag {3}\n$$\n",
|
| 450 |
+
"text_format": "latex",
|
| 451 |
+
"bbox": [
|
| 452 |
+
568,
|
| 453 |
+
835,
|
| 454 |
+
903,
|
| 455 |
+
852
|
| 456 |
+
],
|
| 457 |
+
"page_idx": 2
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"type": "equation",
|
| 461 |
+
"text": "\n$$\n\\Delta \\mathbf {X} = \\mathbf {X} - \\mathbf {X} _ {\\text {i d e a l}} = \\delta M (\\mathbf {x}, \\hat {d} (\\mathbf {x}), \\mathbf {K}), \\tag {4}\n$$\n",
|
| 462 |
+
"text_format": "latex",
|
| 463 |
+
"bbox": [
|
| 464 |
+
584,
|
| 465 |
+
856,
|
| 466 |
+
903,
|
| 467 |
+
875
|
| 468 |
+
],
|
| 469 |
+
"page_idx": 2
|
| 470 |
+
},
|
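The error propagation in Eqs. (2)-(4) can be made concrete with a minimal pinhole back-projection sketch in pure Python. This is an illustration under assumed values, not the paper's pipeline: `backproject`, the focal lengths, and the depth error of 0.5 m are all hypothetical. A depth error shifts the lifted 3D point by a bounded amount, a concrete instance of $\Delta\mathbf{X} = \delta M(\cdot)$.

```python
def backproject(u, v, d, fx, fy, cx, cy):
    # Invert K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] analytically:
    # X = d * K^{-1} [u, v, 1]^T for an ideal pinhole camera.
    return [d * (u - cx) / fx, d * (v - cy) / fy, d]

def l2(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# Ideal depth 10.0 m vs. an estimate with eps_d = 0.5 m at the same pixel.
X_ideal = backproject(400, 240, 10.0, 500.0, 500.0, 320.0, 240.0)
X_hat = backproject(400, 240, 10.5, 500.0, 500.0, 320.0, 240.0)
dev = l2(X_hat, X_ideal)
print(X_ideal, dev)  # ||Delta X||_2 grows linearly with eps_d
```

Because the mapping is fixed, this deviation is baked into every lifted feature and cannot be reduced by gradient descent on the network parameters.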
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"text": "where $\\| \\Delta \\mathbf{X}\\| _2\\leq C\\cdot \\Delta_M$ for bounded inputs.",
|
| 474 |
+
"bbox": [
|
| 475 |
+
511,
|
| 476 |
+
885,
|
| 477 |
+
820,
|
| 478 |
+
901
|
| 479 |
+
],
|
| 480 |
+
"page_idx": 2
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"type": "page_number",
|
| 484 |
+
"text": "24880",
|
| 485 |
+
"bbox": [
|
| 486 |
+
478,
|
| 487 |
+
944,
|
| 488 |
+
519,
|
| 489 |
+
955
|
| 490 |
+
],
|
| 491 |
+
"page_idx": 2
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "image",
|
| 495 |
+
"img_path": "images/653d4f3314c43e2cb0d40b3486023ef5483b3b62d55a030e67bddeed13ad7dbf.jpg",
|
| 496 |
+
"image_caption": [
|
| 497 |
+
"Figure 3. The Overall Framework of the Proposed Semantic Causality-Aware VisionOcc. The proposed framework consists of three primary components: a backbone network for extracting 2D features, an SCAT module for transforming these features into 3D space, and an Encoder-Decoder network for learning 3D semantics. The SCAT module is supervised by our causal loss."
|
| 498 |
+
],
|
| 499 |
+
"image_footnote": [],
|
| 500 |
+
"bbox": [
|
| 501 |
+
181,
|
| 502 |
+
85,
|
| 503 |
+
816,
|
| 504 |
+
210
|
| 505 |
+
],
|
| 506 |
+
"page_idx": 3
|
| 507 |
+
},
|
| 508 |
+
{
|
| 509 |
+
"type": "table",
|
| 510 |
+
"img_path": "images/96e2c267bf887b5ae625422ad3037bb568c80c636a8db887b2fc84ab9c78d7e5.jpg",
|
| 511 |
+
"table_caption": [],
|
| 512 |
+
"table_footnote": [],
|
| 513 |
+
"table_body": "<table><tr><td>Method</td><td>mIoU</td><td>mIoUD</td><td>IoU</td></tr><tr><td>Depth-Based LSS</td><td>44.5</td><td>40.4</td><td>78.9</td></tr><tr><td>SCL-Aware LSS</td><td>50.5↑6.0</td><td>46.9↑6.5</td><td>85.7↑6.8</td></tr></table>",
|
| 514 |
+
"bbox": [
|
| 515 |
+
120,
|
| 516 |
+
276,
|
| 517 |
+
450,
|
| 518 |
+
335
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 3
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "text",
|
| 524 |
+
"text": "Table 1. Performance of Depth-Based LSS vs. SCL-Aware LSS on Occ3D (in Ideal Conditions). BEVDetOcc is the baseline.",
|
| 525 |
+
"bbox": [
|
| 526 |
+
89,
|
| 527 |
+
338,
|
| 528 |
+
482,
|
| 529 |
+
367
|
| 530 |
+
],
|
| 531 |
+
"page_idx": 3
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"type": "text",
|
| 535 |
+
"text": "Feature Space Deviation and Loss: Let $F_{2D}(\\mathbf{x})$ be 2D features and $F_{3D}(\\cdot) = \\text{Lift}(F_{2D}(\\mathbf{x}), \\cdot)$ . Assuming Lift is $L_{\\text{Lift}}$ -Lipschitz continuous, the 3D feature deviation is:",
|
| 536 |
+
"bbox": [
|
| 537 |
+
89,
|
| 538 |
+
375,
|
| 539 |
+
483,
|
| 540 |
+
421
|
| 541 |
+
],
|
| 542 |
+
"page_idx": 3
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"type": "equation",
|
| 546 |
+
"text": "\n$$\n\\begin{array}{l} \\left\\| F _ {3 D} (\\mathbf {X}) - F _ {3 D} \\left(\\mathbf {X} _ {\\text {i d e a l}}\\right) \\right\\| _ {F} \\leq L _ {\\text {L i f t}} \\| \\mathbf {X} - \\mathbf {X} _ {\\text {i d e a l}} \\| _ {2} \\tag {5} \\\\ \\leq L _ {L i f t} \\| \\Delta \\mathbf {X} \\| _ {2} \\leq \\Delta_ {F _ {3 D}}. \\\\ \\end{array}\n$$\n",
|
| 547 |
+
"text_format": "latex",
|
| 548 |
+
"bbox": [
|
| 549 |
+
99,
|
| 550 |
+
431,
|
| 551 |
+
482,
|
| 552 |
+
468
|
| 553 |
+
],
|
| 554 |
+
"page_idx": 3
|
| 555 |
+
},
|
| 556 |
+
{
|
| 557 |
+
"type": "text",
|
| 558 |
+
"text": "The loss function is $L_{LSS} = \\mathcal{L}(P_{3D}(\\mathbf{X}),GT_{3D})$ , where $P_{3D}(\\mathbf{X}) = Seg_{3D}(F_{3D}(\\mathbf{X}))$",
|
| 559 |
+
"bbox": [
|
| 560 |
+
89,
|
| 561 |
+
478,
|
| 562 |
+
482,
|
| 563 |
+
508
|
| 564 |
+
],
|
| 565 |
+
"page_idx": 3
|
| 566 |
+
},
|
| 567 |
+
{
|
| 568 |
+
"type": "text",
|
| 569 |
+
"text": "Gradient Deviation and Optimization Limit: The gradient of $L_{LSS}$ w.r.t. parameters $\\theta$ is given by chain rule. However, since $M_{fixed}$ is fixed, $\\frac{\\partial\\mathbf{X}}{\\partial\\theta} = 0$ . Thus, the gradient becomes:",
|
| 570 |
+
"bbox": [
|
| 571 |
+
89,
|
| 572 |
+
508,
|
| 573 |
+
483,
|
| 574 |
+
568
|
| 575 |
+
],
|
| 576 |
+
"page_idx": 3
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"type": "equation",
|
| 580 |
+
"text": "\n$$\n\\nabla_ {\\theta} L _ {L S S} = \\frac {\\partial L _ {L S S}}{\\partial P _ {3 D}} \\frac {\\partial P _ {3 D}}{\\partial F _ {3 D}} \\left(\\frac {\\partial F _ {3 D}}{\\partial F _ {2 D}} \\frac {\\partial F _ {2 D}}{\\partial \\theta}\\right). \\tag {6}\n$$\n",
|
| 581 |
+
"text_format": "latex",
|
| 582 |
+
"bbox": [
|
| 583 |
+
135,
|
| 584 |
+
577,
|
| 585 |
+
482,
|
| 586 |
+
611
|
| 587 |
+
],
|
| 588 |
+
"page_idx": 3
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"type": "text",
|
| 592 |
+
"text": "Due to $\\Delta_{F_{3D}} > 0$ , the computed gradient $\\nabla_{\\theta}L_{LSS}$ is based on the deviated feature space, i.e.,",
|
| 593 |
+
"bbox": [
|
| 594 |
+
89,
|
| 595 |
+
619,
|
| 596 |
+
482,
|
| 597 |
+
651
|
| 598 |
+
],
|
| 599 |
+
"page_idx": 3
|
| 600 |
+
},
|
| 601 |
+
{
|
| 602 |
+
"type": "equation",
|
| 603 |
+
"text": "\n$$\n\\begin{array}{l} \\nabla_ {\\theta} L _ {L S S} (\\theta) = \\nabla_ {\\theta} \\mathcal {L} \\left(P _ {3 D} (\\mathbf {X}), G T _ {3 D}\\right) \\tag {7} \\\\ \\neq \\nabla_ {\\theta} \\mathcal {L} \\left(P _ {3 D} \\left(\\mathbf {X} _ {i d e a l}\\right), G T _ {3 D}\\right) = \\nabla_ {\\theta} L _ {i d e a l} (\\theta). \\\\ \\end{array}\n$$\n",
|
| 604 |
+
"text_format": "latex",
|
| 605 |
+
"bbox": [
|
| 606 |
+
89,
|
| 607 |
+
661,
|
| 608 |
+
483,
|
| 609 |
+
698
|
| 610 |
+
],
|
| 611 |
+
"page_idx": 3
|
| 612 |
+
},
|
| 613 |
+
{
|
| 614 |
+
"type": "text",
|
| 615 |
+
"text": "The gradient deviation prevents mapping's direct optimization and limits convergence to an $\\epsilon$ -optimal solution.",
|
| 616 |
+
"bbox": [
|
| 617 |
+
89,
|
| 618 |
+
708,
|
| 619 |
+
483,
|
| 620 |
+
739
|
| 621 |
+
],
|
| 622 |
+
"page_idx": 3
|
| 623 |
+
},
|
| 624 |
+
{
|
| 625 |
+
"type": "text",
|
| 626 |
+
"text": "The theoretical analysis reveals that the inherent error in the fixed 2D-to-3D mapping of Depth-Based LSS methods fundamentally hinders gradient-based optimization from achieving optimal performance.",
|
| 627 |
+
"bbox": [
|
| 628 |
+
89,
|
| 629 |
+
750,
|
| 630 |
+
483,
|
| 631 |
+
808
|
| 632 |
+
],
|
| 633 |
+
"page_idx": 3
|
| 634 |
+
},
|
| 635 |
+
{
|
| 636 |
+
"type": "text",
|
| 637 |
+
"text": "3.2. Semantic Causal Locality in VisionOcc",
|
| 638 |
+
"text_level": 1,
|
| 639 |
+
"bbox": [
|
| 640 |
+
89,
|
| 641 |
+
816,
|
| 642 |
+
424,
|
| 643 |
+
834
|
| 644 |
+
],
|
| 645 |
+
"page_idx": 3
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"text": "As revealed in our theoretical analysis (Sec. 3.1), Depth-Based LSS methods suffer from inherent error propagation due to their fixed 2D-to-3D mapping, which limits optimization efficacy and potential performance. To overcome",
|
| 650 |
+
"bbox": [
|
| 651 |
+
89,
|
| 652 |
+
839,
|
| 653 |
+
483,
|
| 654 |
+
900
|
| 655 |
+
],
|
| 656 |
+
"page_idx": 3
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "text",
|
| 660 |
+
"text": "these limitations, we particularly focus on strengthening the semantic causality of the 2D-to-3D transformation. We argue that VisionOcc's 2D-to-3D semantic occupancy prediction should exhibit semantic causal locality (SCL) for robust perception in autonomous driving. Ideally, 2D causes should drive 3D semantic effects. For instance, a predicted \"car\" at 3D location $(h,w,z)$ should originate from a matching 2D image region of \"car\". Per the causal chain, camera parameters $P$ and estimated geometry $\\mathbf{G}$ enable this dependency, with $\\mathbf{G}$ being crucial to maintain SCL.",
|
| 661 |
+
"bbox": [
|
| 662 |
+
511,
|
| 663 |
+
280,
|
| 664 |
+
906,
|
| 665 |
+
430
|
| 666 |
+
],
|
| 667 |
+
"page_idx": 3
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"text": "Next, we formulate the ideal SCL condition. For a 2D pixel $(u,v)$ with semantic label $s$ , the projection probability $p_d$ (the value of $\\mathbf{G}$ at this coordinate and depth $d \\in D$ ) should be high if its corresponding 3D ground-truth semantic is $s$ , and low otherwise:",
|
| 672 |
+
"bbox": [
|
| 673 |
+
511,
|
| 674 |
+
430,
|
| 675 |
+
906,
|
| 676 |
+
505
|
| 677 |
+
],
|
| 678 |
+
"page_idx": 3
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "equation",
|
| 682 |
+
"text": "\n$$\np _ {d} \\propto \\mathbb {1} (\\tilde {\\mathbf {O}} (R _ {P} (u, v, d) + e _ {P}) = s),\n$$\n",
|
| 683 |
+
"text_format": "latex",
|
| 684 |
+
"bbox": [
|
| 685 |
+
586,
|
| 686 |
+
516,
|
| 687 |
+
830,
|
| 688 |
+
534
|
| 689 |
+
],
|
| 690 |
+
"page_idx": 3
|
| 691 |
+
},
|
| 692 |
+
{
|
| 693 |
+
"type": "text",
|
| 694 |
+
"text": "where $\\mathbb{1}$ is the indicator function, and $e_P$ represents the potential coordinate transformation error caused by factors such as camera pose error. In practice, $p_d$ acts as a weight multiplied by 2D features (Eq. (8)), enabling a probabilistic mapping that supports differentiable backpropagation.",
|
| 695 |
+
"bbox": [
|
| 696 |
+
511,
|
| 697 |
+
544,
|
| 698 |
+
905,
|
| 699 |
+
619
|
| 700 |
+
],
|
| 701 |
+
"page_idx": 3
|
| 702 |
+
},
|
| 703 |
+
{
|
| 704 |
+
"type": "text",
|
| 705 |
+
"text": "Limitations of Depth-Based LSS in SCL. Depth-Based LSS does not fully account for semantic causal consistency. It only preserves semantic causal locality intuitively under ideal conditions where $e_P = 0$ and depth estimation is perfectly accurate. However, with coordinate transformation errors $e_P$ , even a high $p_d$ for the ideal depth may project 2D semantics to incorrect 3D locations, i.e.,",
|
| 706 |
+
"bbox": [
|
| 707 |
+
511,
|
| 708 |
+
619,
|
| 709 |
+
906,
|
| 710 |
+
726
|
| 711 |
+
],
|
| 712 |
+
"page_idx": 3
|
| 713 |
+
},
|
| 714 |
+
{
|
| 715 |
+
"type": "equation",
|
| 716 |
+
"text": "\n$$\n\\tilde {\\mathbf {O}} (R _ {P} (u, v, d) + 0) = s, \\text {b u t} \\tilde {\\mathbf {O}} (R _ {P} (u, v, d) + e _ {P}) \\neq s.\n$$\n",
|
| 717 |
+
"text_format": "latex",
|
| 718 |
+
"bbox": [
|
| 719 |
+
519,
|
| 720 |
+
734,
|
| 721 |
+
898,
|
| 722 |
+
753
|
| 723 |
+
],
|
| 724 |
+
"page_idx": 3
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "text",
|
| 728 |
+
"text": "This misalignment causes 2D semantics $s$ (e.g., \"car\") to link with wrong 3D semantics (e.g., \"tree\"), causing semantic ambiguity and hindering training. Moreover, depth-based LSS often propagates semantics to surface points, weakening semantic propagation to occluded regions [7].",
|
| 729 |
+
"bbox": [
|
| 730 |
+
511,
|
| 731 |
+
763,
|
| 732 |
+
905,
|
| 733 |
+
840
|
| 734 |
+
],
|
| 735 |
+
"page_idx": 3
|
| 736 |
+
},
|
| 737 |
+
{
|
| 738 |
+
"type": "text",
|
| 739 |
+
"text": "Empirical Validation. We conduct an empirical study to validate our analysis of SCL, comparing two ideal geometric transformations. Using BEVDetOcc [13] as the baseline, we replace its estimated depth-based geometry for LSS",
|
| 740 |
+
"bbox": [
|
| 741 |
+
511,
|
| 742 |
+
840,
|
| 743 |
+
905,
|
| 744 |
+
900
|
| 745 |
+
],
|
| 746 |
+
"page_idx": 3
|
| 747 |
+
},
|
| 748 |
+
{
|
| 749 |
+
"type": "page_number",
|
| 750 |
+
"text": "24881",
|
| 751 |
+
"bbox": [
|
| 752 |
+
478,
|
| 753 |
+
944,
|
| 754 |
+
517,
|
| 755 |
+
955
|
| 756 |
+
],
|
| 757 |
+
"page_idx": 3
|
| 758 |
+
},
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"text": "with: $i$ ) ground-truth LiDAR depths; $ii$ ) SCL-Aware geometry, which computes $p_d$ from 2D and 3D semantic ground truths, where for $(u,v,d,s)$ , $p_d = 1$ if $\\tilde{\\mathbf{O}}(R_P(u,v,d)) = s$ , else $p_d = 0$ . We evaluate their performance in semantic occupancy prediction. Tab. 1 demonstrates that 2D-to-3D Transformation achieves significant performance improvements over depth-based LSS in ideal conditions, validating the benefits of semantic causal locality.",
|
| 762 |
+
"bbox": [
|
| 763 |
+
89,
|
| 764 |
+
90,
|
| 765 |
+
483,
|
| 766 |
+
210
|
| 767 |
+
],
|
| 768 |
+
"page_idx": 4
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"text": "Summary. Building on the above analysis, we propose our solution to enforce semantic causality constraints during training. The overall framework is shown in Fig. 3.",
|
| 773 |
+
"bbox": [
|
| 774 |
+
89,
|
| 775 |
+
213,
|
| 776 |
+
483,
|
| 777 |
+
258
|
| 778 |
+
],
|
| 779 |
+
"page_idx": 4
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"text": "3.3. Semantic Causality-Aware Causal Loss",
|
| 784 |
+
"text_level": 1,
|
| 785 |
+
"bbox": [
|
| 786 |
+
89,
|
| 787 |
+
268,
|
| 788 |
+
426,
|
| 789 |
+
286
|
| 790 |
+
],
|
| 791 |
+
"page_idx": 4
|
| 792 |
+
},
|
| 793 |
+
{
|
| 794 |
+
"type": "text",
|
| 795 |
+
"text": "For lifting methods like LSS, we could directly supervise the transformation geometry $\\mathbf{G}$ using the causal semantic geometry derived from ground-truth labels (as described in the previous section). However, we aim to enhance the lifting method in the next section, rendering direct supervision impractical. Thus, we design a gradient-based approach to enforce semantic causality.",
|
| 796 |
+
"bbox": [
|
| 797 |
+
89,
|
| 798 |
+
292,
|
| 799 |
+
483,
|
| 800 |
+
398
|
| 801 |
+
],
|
| 802 |
+
"page_idx": 4
|
| 803 |
+
},
|
| 804 |
+
{
|
| 805 |
+
"type": "text",
|
| 806 |
+
"text": "We begin within the LSS framework. For a 2D pixel feature at location $(u, v)$ , LSS multiplies it by the depth-related transformation probability $p_d$ and projects it to the 3D coordinate corresponding to depth $d$ :",
|
| 807 |
+
"bbox": [
|
| 808 |
+
89,
|
| 809 |
+
398,
|
| 810 |
+
483,
|
| 811 |
+
460
|
| 812 |
+
],
|
| 813 |
+
"page_idx": 4
|
| 814 |
+
},
|
| 815 |
+
{
|
| 816 |
+
"type": "equation",
|
| 817 |
+
"text": "\n$$\n\\mathbf {f} _ {L} \\left(R _ {P} (u, v, d)\\right) = p _ {d} (u, v, d) \\cdot \\mathbf {f} _ {i} (u, v, d). \\tag {8}\n$$\n",
|
| 818 |
+
"text_format": "latex",
|
| 819 |
+
"bbox": [
|
| 820 |
+
143,
|
| 821 |
+
472,
|
| 822 |
+
482,
|
| 823 |
+
489
|
| 824 |
+
],
|
| 825 |
+
"page_idx": 4
|
| 826 |
+
},
|
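The lifting in Eq. (8) amounts to a probability-weighted scatter of 2D features into 3D cells. The following pure-Python sketch is illustrative only: the dictionaries, the toy depth distribution, and the identity-like stand-in for the projection $R_P$ are assumptions, not the paper's implementation.

```python
def lift(f_2d, p, R_P):
    # f_2d: {(u, v): feature list}; p: {(u, v): depth distribution over D bins}
    # R_P: projection from (u, v, d) to a 3D voxel index (assumed given).
    voxels = {}
    for (u, v), feat in f_2d.items():
        for d, p_d in enumerate(p[(u, v)]):
            cell = voxels.setdefault(R_P(u, v, d), [0.0] * len(feat))
            for c in range(len(feat)):
                cell[c] += p_d * feat[c]  # f_L = p_d * f_i, accumulated per cell
    return voxels

f_2d = {(0, 0): [1.0, 2.0]}          # one pixel, C = 2 channels
p = {(0, 0): [0.9, 0.1]}             # probability over D = 2 depth bins
vox = lift(f_2d, p, lambda u, v, d: (u, v, d))
print(vox[(0, 0, 0)])  # [0.9, 1.8]
```

The weighting by $p_d$ is what keeps the mapping probabilistic and differentiable, in contrast to a hard one-hot depth assignment.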
| 827 |
+
{
|
| 828 |
+
"type": "text",
|
| 829 |
+
"text": "Here, $\\mathbf{f}_L(R_P(u,v,d))\\in \\mathbb{R}^C$ represents the 3D voxel feature at the projected location, with $e_{P}$ omitted for notational simplicity. $\\mathbf{f}_i(u,v,d)\\in \\mathbb{R}^C$ denotes the 2D image feature at location $(u,v)$ . We backpropagate gradients from $\\mathbf{f}_L$ to $\\mathbf{f}_i$ , i.e.,",
|
| 830 |
+
"bbox": [
|
| 831 |
+
89,
|
| 832 |
+
501,
|
| 833 |
+
483,
|
| 834 |
+
575
|
| 835 |
+
],
|
| 836 |
+
"page_idx": 4
|
| 837 |
+
},
|
| 838 |
+
{
|
| 839 |
+
"type": "equation",
|
| 840 |
+
"text": "\n$$\n\\frac {\\partial \\sum \\mathbf {f} _ {L} \\left(R _ {P} (u , v , d)\\right)}{\\partial \\mathbf {f} _ {i} (u , v , d)} = p _ {d} \\cdot \\mathbf {I}, \\tag {9}\n$$\n",
|
| 841 |
+
"text_format": "latex",
|
| 842 |
+
"bbox": [
|
| 843 |
+
184,
|
| 844 |
+
577,
|
| 845 |
+
482,
|
| 846 |
+
609
|
| 847 |
+
],
|
| 848 |
+
"page_idx": 4
|
| 849 |
+
},
|
| 850 |
+
{
|
| 851 |
+
"type": "text",
|
| 852 |
+
"text": "where $\\mathbf{I}$ is the all-ones vector.",
|
| 853 |
+
"bbox": [
|
| 854 |
+
89,
|
| 855 |
+
617,
|
| 856 |
+
290,
|
| 857 |
+
631
|
| 858 |
+
],
|
| 859 |
+
"page_idx": 4
|
| 860 |
+
},
|
| 861 |
+
{
|
| 862 |
+
"type": "text",
|
| 863 |
+
"text": "For each semantic class $s$ , we aggregate the features $\\mathbf{f}_L$ at all 3D positions where the ground truth class equals $s$ . This aggregation is backpropagated to the 2D features $\\mathbf{f}_i$ , yielding a gradient map $\\nabla_s \\in \\mathbb{R}^{U \\times V \\times C}$ for class $s$ :",
|
| 864 |
+
"bbox": [
|
| 865 |
+
89,
|
| 866 |
+
633,
|
| 867 |
+
483,
|
| 868 |
+
694
|
| 869 |
+
],
|
| 870 |
+
"page_idx": 4
|
| 871 |
+
},
|
| 872 |
+
{
|
| 873 |
+
"type": "equation",
|
| 874 |
+
"text": "\n$$\n\\nabla_ {s} (u, v, c) = \\sum_ {\\left(h ^ {\\prime}, w ^ {\\prime}, z ^ {\\prime}\\right) \\in \\Omega_ {s}} \\frac {\\partial \\sum \\mathbf {f} _ {L} \\left(h ^ {\\prime} , w ^ {\\prime} , z ^ {\\prime} , c\\right)}{\\partial \\mathbf {f} _ {i} (u , v , c)}, \\tag {10}\n$$\n",
|
| 875 |
+
"text_format": "latex",
|
| 876 |
+
"bbox": [
|
| 877 |
+
109,
|
| 878 |
+
705,
|
| 879 |
+
482,
|
| 880 |
+
746
|
| 881 |
+
],
|
| 882 |
+
"page_idx": 4
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "text",
|
| 886 |
+
"text": "where $\\Omega_s = \\{(h', w', z') \\mid O(h', w', z') = s\\}$ is the set of 3D positions with semantic label $s$ , and $c$ indexes the feature channels. Averaging over channel $C$ produces an attention map $A_s \\in \\mathbb{R}^{U \\times V}$ :",
|
| 887 |
+
"bbox": [
|
| 888 |
+
89,
|
| 889 |
+
757,
|
| 890 |
+
483,
|
| 891 |
+
819
|
| 892 |
+
],
|
| 893 |
+
"page_idx": 4
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "equation",
|
| 897 |
+
"text": "\n$$\nA _ {s} (u, v) = \\frac {1}{C} \\sum_ {c = 1} ^ {C} \\nabla_ {s} (u, v, c). \\tag {11}\n$$\n",
|
| 898 |
+
"text_format": "latex",
|
| 899 |
+
"bbox": [
|
| 900 |
+
183,
|
| 901 |
+
832,
|
| 902 |
+
482,
|
| 903 |
+
872
|
| 904 |
+
],
|
| 905 |
+
"page_idx": 4
|
| 906 |
+
},
|
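Because the lifting in Eq. (8) is linear in the 2D features, the class-wise gradient of Eqs. (9)-(11) collapses analytically to a sum of $p_d$ over the depths whose projected cell carries label $s$; an autograd engine would compute the same values numerically. The sketch below is a pure-Python illustration under toy assumptions (`attention_map`, the two-bin depth distribution, and the identity-like $R_P$ are all hypothetical).

```python
def attention_map(p, R_P, occ_gt, s):
    # d(sum over Omega_s of f_L) / d(f_i) reduces to p_d per depth bin,
    # so A_s(u, v) is the total probability mass landing in cells labeled s.
    A = {}
    for (u, v), dist in p.items():
        A[(u, v)] = sum(p_d for d, p_d in enumerate(dist)
                        if occ_gt.get(R_P(u, v, d)) == s)
    return A

p = {(0, 0): [0.7, 0.3]}                         # two depth bins
occ_gt = {(0, 0, 0): "car", (0, 0, 1): "tree"}   # toy 3D ground truth
A_car = attention_map(p, lambda u, v, d: (u, v, d), occ_gt, "car")
print(A_car[(0, 0)])  # 0.7
```

A high $A_s(u, v)$ thus certifies that the pixel's semantics flow to matching 3D cells, which is exactly what the causal loss rewards.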
| 907 |
+
{
|
| 908 |
+
"type": "text",
|
| 909 |
+
"text": "Finally, we enforce per-pixel constraints using 2D",
|
| 910 |
+
"bbox": [
|
| 911 |
+
109,
|
| 912 |
+
885,
|
| 913 |
+
482,
|
| 914 |
+
901
|
| 915 |
+
],
|
| 916 |
+
"page_idx": 4
|
| 917 |
+
},
|
| 918 |
+
{
|
| 919 |
+
"type": "text",
|
| 920 |
+
"text": "ground-truth labels with a binary cross-entropy loss:",
|
| 921 |
+
"bbox": [
|
| 922 |
+
511,
|
| 923 |
+
90,
|
| 924 |
+
859,
|
| 925 |
+
107
|
| 926 |
+
],
|
| 927 |
+
"page_idx": 4
|
| 928 |
+
},
|
| 929 |
+
{
|
| 930 |
+
"type": "equation",
|
| 931 |
+
"text": "\n$$\n\\begin{array}{l} L _ {b c e} ^ {s} = - \\frac {1}{U \\cdot V} \\sum_ {u, v} \\left[ Y _ {s} (u, v) \\log A _ {s} (u, v) \\right. \\tag {12} \\\\ \\left. + \\left(1 - Y _ {s} (u, v)\\right) \\log (1 - A _ {s} (u, v)) \\right], \\\\ \\end{array}\n$$\n",
|
| 932 |
+
"text_format": "latex",
|
| 933 |
+
"bbox": [
|
| 934 |
+
550,
|
| 935 |
+
114,
|
| 936 |
+
903,
|
| 937 |
+
172
|
| 938 |
+
],
|
| 939 |
+
"page_idx": 4
|
| 940 |
+
},
|
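The per-class BCE in Eq. (12) can be sketched in a few lines of pure Python. This is a minimal illustration, not the training code: `bce_loss`, the clamp `eps`, and the two-pixel maps are assumptions for demonstration.

```python
import math

def bce_loss(A, Y, eps=1e-7):
    # A: attention-map values per pixel; Y: binary 2D labels per pixel.
    total = 0.0
    for key, y in Y.items():
        a = min(max(A[key], eps), 1.0 - eps)  # clamp for numerical safety
        total += -(y * math.log(a) + (1 - y) * math.log(1.0 - a))
    return total / len(Y)

A = {(0, 0): 0.9, (0, 1): 0.2}  # gradient map after Eq. (11)
Y = {(0, 0): 1, (0, 1): 0}      # 2D ground truth for class s
print(round(bce_loss(A, Y), 3))  # 0.164
```

The loss pushes $A_s$ toward 1 exactly where class $s$ appears in the image and toward 0 elsewhere, enforcing per-pixel causal alignment.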
| 941 |
+
{
|
| 942 |
+
"type": "text",
|
| 943 |
+
"text": "where $Y_{s}(u,v) \\in \\{0,1\\}$ is the 2D ground-truth label for semantic class $s$ at pixel $(u,v)$ .",
|
| 944 |
+
"bbox": [
|
| 945 |
+
511,
|
| 946 |
+
181,
|
| 947 |
+
903,
|
| 948 |
+
212
|
| 949 |
+
],
|
| 950 |
+
"page_idx": 4
|
| 951 |
+
},
|
| 952 |
+
{
|
| 953 |
+
"type": "text",
|
| 954 |
+
"text": "The gradient computation can be performed using the automatic differentiation calculator like torch.autograd, requiring $S$ backward passes to iterate over the $S$ semantic classes. This incurs a computational overhead scaling linearly with the class amount $S$ . To mitigate this overhead, we reformulate the loss computations in terms of the expectation of an unbiased estimator [11]. We define the expected BCE loss across all semantic classes $s \\in \\{1, \\dots, S\\}$ as:",
|
| 955 |
+
"bbox": [
|
| 956 |
+
511,
|
| 957 |
+
212,
|
| 958 |
+
905,
|
| 959 |
+
333
|
| 960 |
+
],
|
| 961 |
+
"page_idx": 4
|
| 962 |
+
},
|
| 963 |
+
{
|
| 964 |
+
"type": "equation",
|
| 965 |
+
"text": "\n$$\n\\mathbb {E} \\left[ L _ {b c e} ^ {s} \\right] = \\frac {1}{S} \\sum_ {s = 1} ^ {S} L _ {b c e} ^ {s}. \\tag {13}\n$$\n",
|
| 966 |
+
"text_format": "latex",
|
| 967 |
+
"bbox": [
|
| 968 |
+
630,
|
| 969 |
+
343,
|
| 970 |
+
903,
|
| 971 |
+
383
|
| 972 |
+
],
|
| 973 |
+
"page_idx": 4
|
| 974 |
+
},
|
| 975 |
+
{
|
| 976 |
+
"type": "text",
|
| 977 |
+
"text": "Based on this relationship, we uniformly sample a single semantic class $s$ during training:",
|
| 978 |
+
"bbox": [
|
| 979 |
+
511,
|
| 980 |
+
392,
|
| 981 |
+
903,
|
| 982 |
+
422
|
| 983 |
+
],
|
| 984 |
+
"page_idx": 4
|
| 985 |
+
},
|
| 986 |
+
{
|
| 987 |
+
"type": "equation",
|
| 988 |
+
"text": "\n$$\nL _ {c a u s a l} = L _ {b c e} ^ {s}, \\quad s \\sim \\operatorname {U n i f o r m} (1, S). \\tag {14}\n$$\n",
|
| 989 |
+
"text_format": "latex",
|
| 990 |
+
"bbox": [
|
| 991 |
+
578,
|
| 992 |
+
433,
|
| 993 |
+
903,
|
| 994 |
+
450
|
| 995 |
+
],
|
| 996 |
+
"page_idx": 4
|
| 997 |
+
},
|
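The estimator in Eqs. (13)-(14) is standard uniform subsampling: drawing one class per step gives an unbiased estimate of the mean loss over all $S$ classes at $\frac{1}{S}$ of the backward cost. A minimal sketch with placeholder per-class loss values (the numbers and the seed are illustrative):

```python
import random

def sampled_causal_loss(per_class_losses, rng):
    s = rng.randrange(len(per_class_losses))  # s ~ Uniform over S classes
    return per_class_losses[s]

losses = [0.2, 0.6, 1.0]  # placeholder per-class BCE values, S = 3
rng = random.Random(0)
est = sum(sampled_causal_loss(losses, rng) for _ in range(30000)) / 30000
print(est)  # unbiased estimate of mean(losses) = 0.6
```

Averaged over training iterations, the sampled loss matches the full sum in expectation, so no gradient bias is introduced.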
| 998 |
+
{
|
| 999 |
+
"type": "text",
|
| 1000 |
+
"text": "This sampling preserves the unbiased nature of the gradient and loss estimates and reduces the computational cost to $\\frac{1}{S}$ . $L_{\\text{causal}}$ focuses on enhancing the geometric transformation, serving as an auxiliary term to complement the occupancy loss (case-by-case, e.g., cross-entropy in BEVDet [13]).",
|
| 1001 |
+
"bbox": [
|
| 1002 |
+
511,
|
| 1003 |
+
459,
|
| 1004 |
+
905,
|
| 1005 |
+
536
|
| 1006 |
+
],
|
| 1007 |
+
"page_idx": 4
|
| 1008 |
+
},
|
| 1009 |
+
{
|
| 1010 |
+
"type": "text",
|
| 1011 |
+
"text": "3.4. Semantic Causality-Aware Transformation",
|
| 1012 |
+
"text_level": 1,
|
| 1013 |
+
"bbox": [
|
| 1014 |
+
511,
|
| 1015 |
+
544,
|
| 1016 |
+
877,
|
| 1017 |
+
559
|
| 1018 |
+
],
|
| 1019 |
+
"page_idx": 4
|
| 1020 |
+
},
|
| 1021 |
+
{
|
| 1022 |
+
"type": "text",
|
| 1023 |
+
"text": "We propose semantic causality-aware 2D-to-3D transformation to enhance 2D-to-3D lifting, as shown in Fig. 4. Eq. (14) constrains the geometry $\\mathbf{G}$ using 2D and 3D semantics. This overcomes the rigid, per-location hard alignment of geometric probabilities (e.g., using LiDAR depth supervision) in prior methods [7, 13, 22, 23, 27]. It enables advanced lifting designs, addressing errors from camera pose and other distortions.",
|
| 1024 |
+
"bbox": [
|
| 1025 |
+
511,
|
| 1026 |
+
566,
|
| 1027 |
+
905,
|
| 1028 |
+
686
|
| 1029 |
+
],
|
| 1030 |
+
"page_idx": 4
|
| 1031 |
+
},
|
| 1032 |
+
{
|
| 1033 |
+
"type": "text",
|
| 1034 |
+
"text": "3.4.1. Channel-Grouped Lifting",
|
| 1035 |
+
"text_level": 1,
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
511,
|
| 1038 |
+
694,
|
| 1039 |
+
740,
|
| 1040 |
+
709
|
| 1041 |
+
],
|
| 1042 |
+
"page_idx": 4
|
| 1043 |
+
},
|
| 1044 |
+
{
|
| 1045 |
+
"type": "text",
|
| 1046 |
+
"text": "Vanilla LSS applies uniform weights to all 2D feature channels. We argue this is trivial as 2D and 3D features have distinct locality biases. For instance, a 2D \"car\" edge may capture \"tree\" semantics via convolution, but in 3D, these objects are distant. Uniform weighting both semantics causes ambiguity. Since different channels typically encode distinct semantics, we group the feature channels and learn unique weights for each group:",
|
| 1047 |
+
"bbox": [
|
| 1048 |
+
511,
|
| 1049 |
+
712,
|
| 1050 |
+
905,
|
| 1051 |
+
834
|
| 1052 |
+
],
|
| 1053 |
+
"page_idx": 4
|
| 1054 |
+
},
|
| 1055 |
+
{
|
| 1056 |
+
"type": "equation",
|
| 1057 |
+
"text": "\n$$\n\\mathbf {f} _ {L, g} \\left(R _ {P} (u, v, d)\\right) = \\omega_ {g, d} \\cdot \\mathbf {f} _ {i, g} (u, v, d), g \\in \\{1, \\dots , N _ {g} \\}, \\tag {15}\n$$\n",
|
| 1058 |
+
"text_format": "latex",
|
| 1059 |
+
"bbox": [
|
| 1060 |
+
521,
|
| 1061 |
+
844,
|
| 1062 |
+
903,
|
| 1063 |
+
861
|
| 1064 |
+
],
|
| 1065 |
+
"page_idx": 4
|
| 1066 |
+
},
|
| 1067 |
+
{
|
| 1068 |
+
"type": "text",
|
| 1069 |
+
"text": "where $\\mathbf{f}_{L,g}\\in \\mathbb{R}^{C / N_g}$ and $\\mathbf{f}_{i,g}\\in \\mathbb{R}^{C / N_g}$ are the 3D and 2D features for group $g$ . $\\omega_{g,d}$ is the learned weight for group",
|
| 1070 |
+
"bbox": [
|
| 1071 |
+
511,
|
| 1072 |
+
868,
|
| 1073 |
+
903,
|
| 1074 |
+
902
|
| 1075 |
+
],
|
| 1076 |
+
"page_idx": 4
|
| 1077 |
+
},
|
| 1078 |
+
{
|
| 1079 |
+
"type": "page_number",
|
| 1080 |
+
"text": "24882",
|
| 1081 |
+
"bbox": [
|
| 1082 |
+
478,
|
| 1083 |
+
944,
|
| 1084 |
+
519,
|
| 1085 |
+
955
|
| 1086 |
+
],
|
| 1087 |
+
"page_idx": 4
|
| 1088 |
+
},
|
| 1089 |
+
{
|
| 1090 |
+
"type": "image",
|
| 1091 |
+
"img_path": "images/b35fc75d231592dfad58e5042f4dec235ce01c814d7024733f9396f3c81da568.jpg",
|
| 1092 |
+
"image_caption": [
|
| 1093 |
+
"(a) Channel-Grouped Lifting"
|
| 1094 |
+
],
|
| 1095 |
+
"image_footnote": [],
|
| 1096 |
+
"bbox": [
|
| 1097 |
+
96,
|
| 1098 |
+
88,
|
| 1099 |
+
480,
|
| 1100 |
+
186
|
| 1101 |
+
],
|
| 1102 |
+
"page_idx": 5
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "image",
|
| 1106 |
+
"img_path": "images/984cc9ef1d47c4edd313521fd39c3ec47f8a062f101a03dee85ded2f278e13f4.jpg",
|
| 1107 |
+
"image_caption": [],
|
| 1108 |
+
"image_footnote": [],
|
| 1109 |
+
"bbox": [
|
| 1110 |
+
96,
|
| 1111 |
+
203,
|
| 1112 |
+
478,
|
| 1113 |
+
268
|
| 1114 |
+
],
|
| 1115 |
+
"page_idx": 5
|
| 1116 |
+
},
|
| 1117 |
+
{
|
| 1118 |
+
"type": "image",
|
| 1119 |
+
"img_path": "images/0ea1a77761edd9899688dd59f5533e25f88bab654f20db6ecd83d107b909228b.jpg",
|
| 1120 |
+
"image_caption": [
|
| 1121 |
+
"(b) Coordinate Mapping with Learnable Camera Offsets",
|
| 1122 |
+
"(c) Post-Processing with Normalized Convolution",
|
| 1123 |
+
"Figure 4. The Detailed Structure of the Proposed Modules in Semantic Causality-Aware 2D-to-3D Transformation. There are three key components, including (a): Channel-Groupled Lifting, (b): Learnable Camera Offsets, and (c): Normalized Convolution, enabling accurate and robust 2D-to-3D transformation."
|
| 1124 |
+
],
|
| 1125 |
+
"image_footnote": [],
|
| 1126 |
+
"bbox": [
|
| 1127 |
+
109,
|
| 1128 |
+
290,
|
| 1129 |
+
473,
|
| 1130 |
+
345
|
| 1131 |
+
],
|
| 1132 |
+
"page_idx": 5
|
| 1133 |
+
},
|
| 1134 |
+
{
|
| 1135 |
+
"type": "text",
|
| 1136 |
+
"text": "$g$ , replacing $p_d$ which uniformly lifts all channels. $N_g$ is the number of groups. This preserves semantic distinction, ensuring channel-specific causal alignment.",
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
89,
|
| 1139 |
+
454,
|
| 1140 |
+
483,
|
| 1141 |
+
501
|
| 1142 |
+
],
|
| 1143 |
+
"page_idx": 5
|
| 1144 |
+
},
|
| 1145 |
+
{
|
| 1146 |
+
"type": "text",
|
| 1147 |
+
"text": "3.4.2. Learnable Camera Offsets",
|
| 1148 |
+
"text_level": 1,
|
| 1149 |
+
"bbox": [
|
| 1150 |
+
89,
|
| 1151 |
+
508,
|
| 1152 |
+
321,
|
| 1153 |
+
522
|
| 1154 |
+
],
|
| 1155 |
+
"page_idx": 5
|
| 1156 |
+
},
|
| 1157 |
+
{
|
| 1158 |
+
"type": "text",
|
| 1159 |
+
"text": "To address camera parameter errors, especially pose inaccuracies, we introduce learnable offsets into camera parameters. First, we ensure the lifting process is coordinatedifferentiable. This is crucial for the offsets to receive gradients and adapt during training. The transformation of 2D image coordinates $(u,v)$ and depth $d$ to 3D voxel coordinates can be represented as matrix multiplication:",
|
| 1160 |
+
"bbox": [
|
| 1161 |
+
89,
|
| 1162 |
+
527,
|
| 1163 |
+
483,
|
| 1164 |
+
633
|
| 1165 |
+
],
|
| 1166 |
+
"page_idx": 5
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "equation",
|
| 1170 |
+
"text": "\n$$\n[ h, w, z ] ^ {T} = P \\cdot [ u \\cdot d, v \\cdot d, d, 1 ] ^ {T}, \\tag {16}\n$$\n",
|
| 1171 |
+
"text_format": "latex",
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
169,
|
| 1174 |
+
643,
|
| 1175 |
+
482,
|
| 1176 |
+
664
|
| 1177 |
+
],
|
| 1178 |
+
"page_idx": 5
|
| 1179 |
+
},
|
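Eq. (16) is a single 3x4 matrix applied to the homogeneous vector $[u d, v d, d, 1]^T$. The sketch below is a toy illustration: the matrix values are made up (a scale plus translation), not a calibrated camera.

```python
def project(P, u, v, d):
    # Eq. (16): [h, w, z]^T = P @ [u*d, v*d, d, 1]^T with P in R^{3x4}.
    x = [u * d, v * d, d, 1.0]
    return [sum(P[r][c] * x[c] for c in range(4)) for r in range(3)]

# Toy P (illustrative, not calibrated): scale the image plane, then translate.
P = [[0.1, 0.0, 0.0, 5.0],
     [0.0, 0.1, 0.0, 5.0],
     [0.0, 0.0, 1.0, 0.0]]
print(project(P, 10, 20, 2.0))  # [7.0, 9.0, 2.0]
```

Writing the transform as one matrix product is what makes it differentiable end to end, so a predicted offset $\Delta P$ (Eq. (18)) receives gradients directly.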
| 1180 |
+
{
|
| 1181 |
+
"type": "text",
|
| 1182 |
+
"text": "where $P \\in \\mathbb{R}^{3 \\times 4}$ is the camera projection matrix combining intrinsics and extrinsics. LSS typically rounds floating-point coordinates to integers, rendering them nondifferentiable. Following ALOcc [7], we use the soft filling to enable differentiability w.r.t. position. This method calculates distances between floating-point 3D coordinates and their eight surrounding integer coordinates. These distances serve as weights to distribute a 2D feature at $(u, v, d)$ across multiple 3D locations. Lifting can be rewritten to",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
89,
|
| 1185 |
+
674,
|
| 1186 |
+
483,
|
| 1187 |
+
811
|
| 1188 |
+
],
|
| 1189 |
+
"page_idx": 5
|
| 1190 |
+
},
|
| 1191 |
+
{
|
| 1192 |
+
"type": "equation",
|
| 1193 |
+
"text": "\n$$\n\\begin{array}{l} \\mathbf {f} _ {L, g} \\left(h ^ {\\prime}, w ^ {\\prime}, z ^ {\\prime}\\right) = \\omega_ {g, d} \\cdot \\omega_ {h ^ {\\prime}, w ^ {\\prime}, z ^ {\\prime}} \\cdot \\mathbf {f} _ {i, g} (u, v, d), \\tag {17} \\\\ \\forall \\left(h ^ {\\prime}, w ^ {\\prime}, z ^ {\\prime}\\right) \\in \\text {n e i g h b o r s}, \\\\ \\end{array}\n$$\n",
|
| 1194 |
+
"text_format": "latex",
|
| 1195 |
+
"bbox": [
|
| 1196 |
+
116,
|
| 1197 |
+
821,
|
| 1198 |
+
482,
|
| 1199 |
+
857
|
| 1200 |
+
],
|
| 1201 |
+
"page_idx": 5
|
| 1202 |
+
},
|
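The trilinear weights $\omega_{h',w',z'}$ in Eq. (17) can be sketched directly. This is a generic trilinear-interpolation illustration of the soft-filling idea described in the text (the function name and the sample coordinate are assumptions): a floating-point 3D coordinate spreads its feature over the 8 neighboring integer cells, so gradients can flow back to the coordinate itself.

```python
import itertools, math

def trilinear_weights(h, w, z):
    # Weights w_{h', w', z'} for the 8 integer neighbors of a float coordinate.
    out = {}
    for dh, dw, dz in itertools.product((0, 1), repeat=3):
        n = (math.floor(h) + dh, math.floor(w) + dw, math.floor(z) + dz)
        out[n] = ((1 - abs(h - n[0])) * (1 - abs(w - n[1]))
                  * (1 - abs(z - n[2])))
    return out

wts = trilinear_weights(1.5, 2.0, 3.25)
print(wts[(1, 2, 3)])  # 0.5 * 1.0 * 0.75 = 0.375
```

The weights are nonnegative and sum to 1, so the scatter conserves feature mass while remaining differentiable in $(h, w, z)$.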
| 1203 |
+
{
|
| 1204 |
+
"type": "text",
|
| 1205 |
+
"text": "where $\\omega_{h',w',z'}$ are trilinear interpolation weights.",
|
| 1206 |
+
"bbox": [
|
| 1207 |
+
89,
|
| 1208 |
+
869,
|
| 1209 |
+
419,
|
| 1210 |
+
883
|
| 1211 |
+
],
|
| 1212 |
+
"page_idx": 5
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "text",
|
| 1216 |
+
"text": "Next, we propose learning two offsets. First, we directly",
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
109,
|
| 1219 |
+
885,
|
| 1220 |
+
480,
|
| 1221 |
+
900
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 5
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "text",
|
| 1227 |
+
"text": "predict an offset applied to the camera parameters:",
|
| 1228 |
+
"bbox": [
|
| 1229 |
+
511,
|
| 1230 |
+
90,
|
| 1231 |
+
849,
|
| 1232 |
+
106
|
| 1233 |
+
],
|
| 1234 |
+
"page_idx": 5
|
| 1235 |
+
},
|
| 1236 |
+
{
|
| 1237 |
+
"type": "equation",
|
| 1238 |
+
"text": "\n$$\nP := P + \\Delta P, \\quad \\Delta P = F _ {\\text {o f f s e t 1}} \\left(\\mathbf {f} _ {i}, P\\right), \\tag {18}\n$$\n",
|
| 1239 |
+
"text_format": "latex",
|
| 1240 |
+
"bbox": [
|
| 1241 |
+
570,
|
| 1242 |
+
114,
|
| 1243 |
+
903,
|
| 1244 |
+
132
|
| 1245 |
+
],
|
| 1246 |
+
"page_idx": 5
|
| 1247 |
+
},
|
| 1248 |
+
{
|
| 1249 |
+
"type": "text",
|
| 1250 |
+
"text": "where $\\Delta P$ is the predicted parameter offset, and $F_{offset1}$ denotes the network. Second, we estimate per-position offsets for each $(u,v,d)$ in the image coordinate system:",
|
| 1251 |
+
"bbox": [
|
| 1252 |
+
511,
|
| 1253 |
+
138,
|
| 1254 |
+
903,
|
| 1255 |
+
186
|
| 1256 |
+
],
|
| 1257 |
+
"page_idx": 5
|
| 1258 |
+
},
|
| 1259 |
+
{
|
| 1260 |
+
"type": "equation",
|
| 1261 |
+
"text": "\n$$\n(u, v, d) := (u + \\Delta u, v + \\Delta v, d + \\Delta d), \\tag {19}\n$$\n",
|
| 1262 |
+
"text_format": "latex",
|
| 1263 |
+
"bbox": [
|
| 1264 |
+
573,
|
| 1265 |
+
194,
|
| 1266 |
+
903,
|
| 1267 |
+
217
|
| 1268 |
+
],
|
| 1269 |
+
"page_idx": 5
|
| 1270 |
+
},
|
| 1271 |
+
{
|
| 1272 |
+
"type": "equation",
|
| 1273 |
+
"text": "\n$$\n\\left(\\Delta u, \\Delta v, \\Delta d\\right) = F _ {\\text {o f f s e t} 2} \\left(\\mathbf {f} _ {i} (u, v, d)\\right),\n$$\n",
|
| 1274 |
+
"text_format": "latex",
|
| 1275 |
+
"bbox": [
|
| 1276 |
+
589,
|
| 1277 |
+
212,
|
| 1278 |
+
843,
|
| 1279 |
+
229
|
| 1280 |
+
],
|
| 1281 |
+
"page_idx": 5
|
| 1282 |
+
},
|
| 1283 |
+
{
|
| 1284 |
+
"type": "text",
|
| 1285 |
+
"text": "where $F_{offset2}$ is another network. These offsets enable the model to adaptively compensate for camera parameter errors $e_P$ , improving geometric accuracy while preserving semantic causal locality under such errors.",
|
| 1286 |
+
"bbox": [
|
| 1287 |
+
511,
|
| 1288 |
+
237,
|
| 1289 |
+
906,
|
| 1290 |
+
297
|
| 1291 |
+
],
|
| 1292 |
+
"page_idx": 5
|
| 1293 |
+
},
|
| 1294 |
+
{
|
| 1295 |
+
"type": "text",
|
| 1296 |
+
"text": "3.4.3. Normalized Convolution",
|
| 1297 |
+
"text_level": 1,
|
| 1298 |
+
"bbox": [
|
| 1299 |
+
511,
|
| 1300 |
+
304,
|
| 1301 |
+
730,
|
| 1302 |
+
316
|
| 1303 |
+
],
|
| 1304 |
+
"page_idx": 5
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "text",
|
| 1308 |
+
"text": "Prior work [28] notes that the direct mapping in LSS yields sparse 3D features. To address this, an intuitive solution is to use local feature propagation operators (e.g., convolutions) with causal loss supervision for preserving semantic causality. However, vanilla convolutions lack gradient constraints during causal loss computation, producing poor results. We address this by normalizing convolution weights to keep gradient map values in [0, 1]. Given the difficulty of normalizing standard convolution weights, we follow MobileNet and ConvNext to adopt depthwise (spatial) and pointwise (channel) decomposition. For the depthwise kernel $W_{\\mathrm{spatial}} \\in \\mathbb{R}^{3 \\times 3 \\times 3 \\times C}$ , we apply softmax across spatial dimensions $(h, w, z)$ per channel $c$ :",
|
| 1309 |
+
"bbox": [
|
| 1310 |
+
511,
|
| 1311 |
+
321,
|
| 1312 |
+
906,
|
| 1313 |
+
518
|
| 1314 |
+
],
|
| 1315 |
+
"page_idx": 5
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "equation",
|
| 1319 |
+
"text": "\n$$\nW _ {\\text {s p a t i a l}} ^ {\\prime} [ h, w, z, c ] = \\frac {\\exp \\left(W _ {\\text {s p a t i a l}} [ h , w , z , c ]\\right)}{\\sum_ {h ^ {\\prime} , w ^ {\\prime} , z ^ {\\prime}} \\exp \\left(W _ {\\text {s p a t i a l}} \\left[ h ^ {\\prime} , w ^ {\\prime} , z ^ {\\prime} , c \\right]\\right)}. \\tag {20}\n$$\n",
|
| 1320 |
+
"text_format": "latex",
|
| 1321 |
+
"bbox": [
|
| 1322 |
+
519,
|
| 1323 |
+
526,
|
| 1324 |
+
903,
|
| 1325 |
+
555
|
| 1326 |
+
],
|
| 1327 |
+
"page_idx": 5
|
| 1328 |
+
},
|
| 1329 |
+
{
    "type": "text",
    "text": "For the pointwise kernel $W_{\\mathrm{channel}} \\in \\mathbb{R}^{C \\times C}$, we apply softmax across output channels per input channel:",
    "bbox": [511, 563, 903, 594],
    "page_idx": 5
},
{
    "type": "equation",
    "text": "\n$$\nW_{\\text{channel}}^{\\prime}\\left[c_{\\text{in}}, c_{\\text{out}}\\right] = \\frac{\\exp\\left(W_{\\text{channel}}\\left[c_{\\text{in}}, c_{\\text{out}}\\right]\\right)}{\\sum_{c_{\\text{out}}^{\\prime}} \\exp\\left(W_{\\text{channel}}\\left[c_{\\text{in}}, c_{\\text{out}}^{\\prime}\\right]\\right)}. \\tag{21}\n$$\n",
    "text_format": "latex",
    "bbox": [532, 599, 903, 638],
    "page_idx": 5
},
{
    "type": "text",
    "text": "Specifically, we use transposed convolution for the depthwise convolutions. It diffuses features from non-zero positions into zero positions, while zero positions do not affect others. We prove in the supplement that the derived gradient mask remains within [0, 1] even with our semantic causality-aware 2D-to-3D transformation.",
    "bbox": [511, 643, 905, 734],
    "page_idx": 5
},
{
    "type": "text",
    "text": "3.5. Validation of Gradient Error Mitigation",
    "text_level": 1,
    "bbox": [511, 742, 856, 758],
    "page_idx": 5
},
{
    "type": "text",
    "text": "Fig. 5 shows the occupancy loss curves during training for BEVDet, comparing performance with and without our method. The results show that BEVDet integrated with our approach (blue line) achieves a significantly faster and steeper loss reduction compared to the original BEVDet (red line). This empirical evidence validates our theoretical analysis of gradient error mitigation. As Theorem 1 formalizes, Depth-Based LSS methods are inherently limited by gradient deviations due to fixed 2D-to-3D",
    "bbox": [511, 763, 906, 900],
    "page_idx": 5
},
{
    "type": "page_number",
    "text": "24883",
    "bbox": [478, 944, 517, 955],
    "page_idx": 5
},
{
    "type": "image",
    "img_path": "images/8c7e40b5c0db6937e75f1633b2673a8b1d62dfa14ed19263daf7d4c34633847c.jpg",
    "image_caption": [
        "Figure 5. Occupancy Loss Curves with/without Our Method. Our method reduces training loss through semantic causal 2D-to-3D geometry transformation, as shown in the comparison before (red) and after (blue) its application."
    ],
    "image_footnote": [],
    "bbox": [145, 88, 428, 265],
    "page_idx": 6
},
{
    "type": "table",
    "img_path": "images/221af839a449acb9fb887c1b836d27fe4f4cf7e5bd09b236ac2b8d06ba334b47.jpg",
    "table_caption": [],
    "table_footnote": [],
    "table_body": "<table><tr><td>Method</td><td>mIoU ↑</td><td>Drop</td><td>mIoUD ↑</td><td>Drop</td><td>IoU ↑</td><td>Drop</td></tr><tr><td>BEVDetOcc [12]</td><td>37.1</td><td rowspan=\"2\">-32.3%</td><td>30.2</td><td rowspan=\"2\">-49.0%</td><td>70.4</td><td rowspan=\"2\">-8.5%</td></tr><tr><td>BEVDetOcc w/ Noise</td><td>25.1</td><td>15.4</td><td>64.4</td></tr><tr><td>BEVDetOcc+Ours</td><td>38.3</td><td rowspan=\"2\">-7.3%</td><td>31.5</td><td rowspan=\"2\">-10.8%</td><td>71.2</td><td rowspan=\"2\">-1.4%</td></tr><tr><td>BEVDetOcc+Ours w/ Noise</td><td>35.5</td><td>28.1</td><td>70.2</td></tr><tr><td>ALOcc [7]</td><td>40.1</td><td rowspan=\"2\">-21.9%</td><td>34.3</td><td rowspan=\"2\">-28.6%</td><td>70.2</td><td rowspan=\"2\">-8.4%</td></tr><tr><td>ALOcc w/ Noise</td><td>31.3</td><td>24.5</td><td>64.3</td></tr><tr><td>ALOcc+Ours</td><td>40.9</td><td rowspan=\"2\">-3.3%</td><td>35.5</td><td rowspan=\"2\">-4.8%</td><td>70.7</td><td rowspan=\"2\">-1.0%</td></tr><tr><td>ALOcc+Ours w/ Noise</td><td>39.6</td><td>33.8</td><td>70.0</td></tr></table>",
    "bbox": [91, 334, 485, 467],
    "page_idx": 6
},
{
    "type": "text",
    "text": "Table 2. Performance Comparison on the Occ3D Dataset with Gaussian Noise Added to Camera Parameters. The \"Drop (%)\" columns show degradation, with our methods (BEVDetOcc+Ours, ALOcc+Ours) achieving much smaller drops (e.g., -7.3% mIoU vs. -32.3% for BEVDetOcc).",
    "bbox": [89, 473, 483, 544],
    "page_idx": 6
},
{
    "type": "text",
    "text": "mapping errors. In contrast, our method alleviates this by enabling a learnable mapping and incorporating a causal loss. This indicates that by mitigating gradient error through the semantic causality-aware 2D-to-3D transformation, our approach facilitates more efficient gradient-based learning, leading to faster convergence and a lower loss curve.",
    "bbox": [89, 553, 483, 643],
    "page_idx": 6
},
{
    "type": "text",
    "text": "4. Experiment",
    "text_level": 1,
    "bbox": [89, 657, 217, 674],
    "page_idx": 6
},
{
    "type": "text",
    "text": "4.1. Experimental Setup",
    "text_level": 1,
    "bbox": [89, 681, 281, 698],
    "page_idx": 6
},
{
    "type": "text",
    "text": "Dataset. In this study, we leverage the Occ3D-nuScenes dataset [5, 39], a comprehensive dataset with diverse scenes for autonomous driving research. It encompasses 700 scenes for training, 150 for validation, and 150 for testing. Each scene integrates a 32-beam LiDAR point cloud alongside 6 RGB images, captured from multiple perspectives encircling the ego vehicle. Occ3D [39] introduces voxel-based annotations, covering a spatial extent of $-40\\mathrm{m}$ to $40\\mathrm{m}$ along the X and Y axes, and $-1\\mathrm{m}$ to $5.4\\mathrm{m}$ along the Z axis, with a consistent voxel size of $0.4\\mathrm{m}$ across all dimensions. Occ3D delineates 18 semantic categories, comprising 17 distinct object classes plus an empty class to signify unoccupied regions. Following [7, 27, 35, 39], we evalu",
    "bbox": [89, 704, 483, 900],
    "page_idx": 6
},
{
    "type": "image",
    "img_path": "images/6a25e7a9ca91d4b12c60c8b544975b4dae0d485f86571abe220867e095cae644.jpg",
    "image_caption": [],
    "image_footnote": [],
    "bbox": [519, 90, 903, 160],
    "page_idx": 6
},
{
    "type": "image",
    "img_path": "images/36e3fcbb843c288828c0a7573947a8a3e5ddecb152ab327f76a7f277a3d5dfac.jpg",
    "image_caption": [],
    "image_footnote": [],
    "bbox": [517, 162, 903, 232],
    "page_idx": 6
},
{
    "type": "image",
    "img_path": "images/0136db64c9a04fd9b4b8a19aa6469c5da59aba4c623a0ecb0d5af6e88d9e22bb.jpg",
    "image_caption": [
        "(a) BEVDet"
    ],
    "image_footnote": [],
    "bbox": [519, 234, 705, 304],
    "page_idx": 6
},
{
    "type": "image",
    "img_path": "images/fc7f33cc3030f9bbcf9ee9c392bfbdbb23a50a0f4c906b31ae2a168ac094e284.jpg",
    "image_caption": [
        "(b) BEVDet + Ours",
        "Figure 6. Visualization of 2D-to-3D Semantic Causal Consistency Using LayerCAM [18]. We enhance BEVDet with our method for comparison. Attention maps are computed for critical traffic classes: \"traffic cone\", \"pedestrian\", and \"car\". Areas of greatest difference are marked with boxes. Each class-specific localization highlights our method's precise focus over vanilla BEVDet, showing improved semantic alignment."
    ],
    "image_footnote": [],
    "bbox": [709, 234, 903, 304],
    "page_idx": 6
},
{
    "type": "text",
    "text": "ate the occupancy prediction performance with mIoU across 17 semantic object categories, $\\mathrm{mIoU}_D$ for 8 dynamic categories, and occupied/unoccupied IoU for scene geometry.",
    "bbox": [511, 431, 905, 477],
    "page_idx": 6
},
{
    "type": "text",
    "text": "Implementation Details. We integrate our approach into BEVDet [12] and ALOcc [7] for performance evaluation. The model parameters and the image and BEV augmentation strategies are kept the same as in the original models. For ALOcc, we remove its ground-truth depth denoising module to adapt to our method. We use multiple convolutional layers to predict the two camera parameter offsets. The loss weight of $L_{\\text{causal}}$ is set to 0.02 in the main experiments. We optimize using AdamW [33] with a learning rate of $2 \\times 10^{-4}$ and a global batch size of 16, for 24 epochs. Experiments use single-frame surrounding images, emphasizing improvements from enhanced 2D-to-3D transformations.",
    "bbox": [511, 477, 905, 656],
    "page_idx": 6
},
{
    "type": "text",
    "text": "4.2. Evaluation of Camera Perturbation Robustness",
    "text_level": 1,
    "bbox": [511, 666, 903, 681],
    "page_idx": 6
},
{
    "type": "text",
    "text": "To assess our method's robustness under extreme noise, we add Gaussian noise (variance 0.1) to the camera parameters in training and testing, as shown in Tab. 2. Compared to vanilla BEVDetOcc and ALOcc, our methods show smaller performance drops. BEVDetOcc+Ours has an mIoU drop of $-7.3\\%$ vs. $-32.3\\%$ for BEVDetOcc, and ALOcc+Ours drops $-3.3\\%$ vs. $-21.9\\%$ for ALOcc, demonstrating enhanced resilience to noisy parameters. This effectively counters motion-induced errors, benefiting self-driving tasks.",
    "bbox": [511, 688, 905, 824],
    "page_idx": 6
},
{
    "type": "text",
    "text": "4.3. Semantic Causality Visualization",
    "text_level": 1,
    "bbox": [511, 833, 803, 849],
    "page_idx": 6
},
{
    "type": "text",
    "text": "As shown in Fig. 6, we visualize 3D-to-2D semantic causal consistency using LayerCAM [18]. We backpropagate the final 3D semantic occupancy predictions of dis",
    "bbox": [511, 854, 905, 900],
    "page_idx": 6
},
{
    "type": "page_number",
    "text": "24884",
    "bbox": [478, 944, 519, 955],
    "page_idx": 6
},
{
    "type": "table",
    "img_path": "images/20d18994ba23a7982a510eb6e699ce1cc1597209f8aaf830df71988bd7982675.jpg",
    "table_caption": [],
    "table_footnote": [],
    "table_body": "<table><tr><td>Method</td><td>Backbone</td><td>Input Size</td><td>mIoU</td><td>mIoUD</td><td>IoU</td></tr><tr><td>MonoScene [6]</td><td>ResNet-101</td><td>928 × 1600</td><td>6.1</td><td>5.4</td><td>-</td></tr><tr><td>CTF-Occ [39]</td><td>ResNet-101</td><td>928 × 1600</td><td>28.5</td><td>27.4</td><td>-</td></tr><tr><td>TPVFormer [14]</td><td>ResNet-101</td><td>928 × 1600</td><td>27.8</td><td>27.2</td><td>-</td></tr><tr><td>COTR [35]</td><td>ResNet-50</td><td>256 × 704</td><td>39.1</td><td>33.8</td><td>69.6</td></tr><tr><td>ProtoOcc [19]</td><td>ResNet-50</td><td>256 × 704</td><td>39.6</td><td>34.3</td><td>-</td></tr><tr><td>LightOcc-S [50]</td><td>ResNet-50</td><td>256 × 704</td><td>37.9</td><td>32.4</td><td>-</td></tr><tr><td>DHD-S [45]</td><td>ResNet-50</td><td>256 × 704</td><td>36.5</td><td>30.7</td><td>-</td></tr><tr><td>FlashOCC [48]</td><td>ResNet-50</td><td>256 × 704</td><td>32.0</td><td>24.7</td><td>65.3</td></tr><tr><td>FB-Occ [27]</td><td>ResNet-50</td><td>256 × 704</td><td>35.7</td><td>30.9</td><td>66.5</td></tr><tr><td>BEVDetOcc [12]</td><td>ResNet-50</td><td>256 × 704</td><td>37.1</td><td>30.2</td><td>70.4</td></tr><tr><td>BEVDetOcc+Ours</td><td>ResNet-50</td><td>256 × 704</td><td>38.3 ↑1.2</td><td>31.5 ↑1.3</td><td>71.2 ↑1.2</td></tr><tr><td>ALOcc [7]</td><td>ResNet-50</td><td>256 × 704</td><td>40.1</td><td>34.3</td><td>70.2</td></tr><tr><td>ALOcc+Ours</td><td>ResNet-50</td><td>256 × 704</td><td>40.9 ↑0.8</td><td>35.5 ↑1.1</td><td>70.7 ↑0.5</td></tr></table>",
    "bbox": [94, 88, 478, 271],
    "page_idx": 7
},
{
    "type": "text",
    "text": "tinct classes to the 2D feature maps fed into SCAT. Notably, we are the first to apply LayerCAM for cross-dimensional analysis. Fig. 6 (b) shows that our method precisely focuses on class-specific locations, confirming LayerCAM's cross-dimensional effectiveness. Our approach surpasses vanilla BEVDet (Fig. 6 (a)) in targeting class-associated objects, demonstrating improved semantic localization.",
    "bbox": [89, 364, 483, 470],
    "page_idx": 7
},
{
    "type": "text",
    "text": "4.4. Benchmarking with Previous Methods",
    "text_level": 1,
    "bbox": [89, 481, 421, 498],
    "page_idx": 7
},
{
    "type": "text",
    "text": "As shown in Tab. 3, we compare our method with leading 3D semantic occupancy prediction approaches on Occ3D [39]. Specifically, compared to the baseline models BEVDet and ALOcc, our method achieves significant improvements in mIoU, mIoU$_{\\mathrm{D}}$, and IoU. For instance, the BEVDetOcc+Ours variant achieves an mIoU of 38.3, surpassing BEVDetOcc [12] by 1.2, while improving mIoU$_{\\mathrm{D}}$ by 1.3 and IoU by 1.2. Similarly, ALOcc+Ours shows gains of 0.8 in mIoU, 1.1 in mIoU$_{\\mathrm{D}}$, and 0.5 in IoU over ALOcc [7]. These results validate the superiority of our semantic causality-aware 2D-to-3D transformation.",
    "bbox": [89, 503, 483, 672],
    "page_idx": 7
},
{
    "type": "text",
    "text": "4.5. Ablation Study",
    "text_level": 1,
    "bbox": [89, 681, 243, 698],
    "page_idx": 7
},
{
    "type": "text",
    "text": "Effect of Causal Loss. We first investigate the effectiveness of the proposed Causal Loss in enhancing the occupancy prediction performance. As demonstrated in Tab. 4, the impact of the proposed Causal Loss is evaluated through a series of experiments. Specifically, using BEVDetOcc as the baseline (Exp. 0), we conducted two ablation studies: one removing the depth supervision loss (Exp. 1), and another incorporating the proposed Causal Loss (Exps. 2, 3). The results reveal that the removal of the depth supervision loss leads to a marginal decrease in performance, whereas the addition of the proposed Causal Loss yields a significant improvement. This suggests that Causal Loss facilitates superior 2D-to-3D transformation and ultimately enhances the",
    "bbox": [89, 703, 483, 900],
    "page_idx": 7
},
{
    "type": "table",
    "img_path": "images/aab475daf81f77cbb53c8fbf8bcdcf502b02c53e114491b7d448639be8e953b5.jpg",
    "table_caption": [
        "Table 3. Comparison of 3D Semantic Occupancy Prediction Using Single Frame on the Occ3D Dataset, Evaluated mIoU, mIoUD, and IoU Metrics. Performance gains are indicated by red arrows $\\uparrow$ . Our proposed approach (+Ours) consistently demonstrates superior enhancement over existing methods."
    ],
    "table_footnote": [],
    "table_body": "<table><tr><td>Exp.</td><td>Method</td><td>mIoU</td><td>Diff.</td><td>mIoUD</td><td>IoU</td><td>Latency</td></tr><tr><td>0</td><td>Baseline (BEVDetOcc) [12]</td><td>37.1</td><td>-</td><td>30.2</td><td>70.4</td><td>416/125</td></tr><tr><td>1</td><td>w/o Depth Sup</td><td>36.8</td><td>-0.3</td><td>29.6</td><td>70.3</td><td>414/125</td></tr><tr><td>2</td><td>+ Causal Loss</td><td>37.6</td><td>+0.8</td><td>31.0</td><td>70.1</td><td>450/125</td></tr><tr><td>3</td><td>+ Unbiased Estimator</td><td>37.5</td><td>-0.1</td><td>30.7</td><td>70.5</td><td>417/125</td></tr><tr><td>4</td><td>w/o Post Conv</td><td>37.3</td><td>-0.2</td><td>30.7</td><td>70.2</td><td>379/122</td></tr><tr><td>5</td><td>+ Channel-Grouped Lifting</td><td>37.6</td><td>+0.3</td><td>30.7</td><td>70.7</td><td>419/128</td></tr><tr><td>6</td><td>+ Soft Filling</td><td>37.6</td><td>-</td><td>30.6</td><td>70.7</td><td>434/149</td></tr><tr><td>7</td><td>+ Learnable Camera Offset</td><td>37.9</td><td>+0.3</td><td>31.1</td><td>71.0</td><td>446/150</td></tr><tr><td>8</td><td>+ Normalized Convolution</td><td>38.3</td><td>+0.4</td><td>31.5</td><td>71.2</td><td>466/159</td></tr></table>",
    "bbox": [517, 90, 903, 251],
    "page_idx": 7
},
{
    "type": "text",
    "text": "Table 4. Ablation Study of 3D Semantic Occupancy Prediction on Occ3D. We comprehensively evaluated the impact of each individual strategy (bolded rows) proposed in our paper, with BEVDetOcc as the baseline. The final column reports single-frame training/inference latency (ms) on an RTX 4090 GPU.",
    "bbox": [511, 258, 906, 330],
    "page_idx": 7
},
{
    "type": "text",
    "text": "precision of 3D semantic occupancy prediction. Comparing Exp. 3 to Exp. 2, the Unbiased Estimator simplifies the Causal Loss computation, reducing training overhead.",
    "bbox": [511, 339, 903, 383],
    "page_idx": 7
},
{
    "type": "text",
    "text": "Effect of Each Module. In Tab. 4 (Exps. 5-8), we systematically validate the effectiveness of the three proposed modules. We first remove the two post-lifting convolutional layers from the original BEVDet (Exp. 4), as they serve a similar role to our proposed modules in refining volume features. Subsequently, we incrementally integrate the three proposed modules (Channel-Grouped Lifting, Learnable Camera Offsets, and Normalized Convolution) into the baseline model. To ensure gradient flow for Learnable Camera Offsets, we introduce the Soft Filling strategy from ALOcc [7] in Exp. 6, enabling effective training of the camera parameter offsets in Exp. 7. The results show progressive performance improvements with each added component (Exps. 5, 7, 8), confirming the efficacy of the proposed SCAT method. Additionally, the proposed modules incur acceptable computational overhead.",
    "bbox": [511, 383, 905, 626],
    "page_idx": 7
},
{
    "type": "text",
    "text": "Please refer to the supplementary material for more comprehensive experimental results.",
    "bbox": [511, 626, 905, 656],
    "page_idx": 7
},
{
    "type": "text",
    "text": "5. Conclusion",
    "text_level": 1,
    "bbox": [511, 666, 633, 681],
    "page_idx": 7
},
{
    "type": "text",
    "text": "In this paper, we introduced a novel approach leveraging causal principles to address the reliability and interpretability limitations overlooked by existing methods. By exploring the causal foundations of 3D semantic occupancy prediction, we propose a causal loss that enhances semantic causal consistency. In addition, we develop the SCAT module with three main components: Channel-Grouped Lifting, Learnable Camera Offsets, and Normalized Convolution. This approach effectively mitigates transformation inaccuracies arising from uniform mapping weights, camera perturbations, and sparse mappings. Experiments demonstrate that our approach achieves significant improvements in accuracy, robustness to camera perturbations, and semantic causal consistency in 2D-to-3D transformations.",
    "bbox": [511, 688, 906, 900],
    "page_idx": 7
},
{
    "type": "page_number",
    "text": "24885",
    "bbox": [478, 944, 517, 955],
    "page_idx": 7
},
{
    "type": "text",
    "text": "References",
    "text_level": 1,
    "bbox": [91, 90, 187, 104],
    "page_idx": 8
},
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "[1] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyril Stachniss, and Jurgen Gall. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9297-9307, 2019. 3",
        "[2] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Langocc: Self-supervised open vocabulary occupancy estimation via volume rendering. arXiv preprint arXiv:2407.17310, 2024. 3",
        "[3] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Occflownet: Towards self-supervised occupancy estimation via differentiable rendering and occupancy flow. arXiv preprint arXiv:2402.12792, 2024. 3",
        "[4] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Gaussianflowocc: Sparse and weakly supervised occupancy estimation using gaussian splatting and temporal flow. arXiv preprint arXiv:2502.17288, 2025. 3",
        "[5] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11621-11631, 2020. 7",
        "[6] Anh-Quan Cao and Raoul de Charette. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3991-4001, 2022. 3, 8",
        "[7] Dubing Chen, Jin Fang, Wencheng Han, Xinjing Cheng, Junbo Yin, Chengzhong Xu, Fahad Shahbaz Khan, and Jianbing Shen. Alocc: adaptive lifting-based 3d semantic occupancy and cost volume-based flow prediction. arXiv preprint arXiv:2411.07725, 2024. 1, 2, 3, 4, 5, 6, 7, 8",
        "[8] Dubing Chen, Wencheng Han, Jin Fang, and Jianbing Shen. Adaocc: Adaptive forward view transformation and flow modeling for 3d occupancy and flow prediction. arXiv preprint arXiv:2407.01436, 2024. 3",
        "[9] Dubing Chen, Huan Zheng, Jin Fang, Xingping Dong, Xianfei Li, Wenlong Liao, Tao He, Pai Peng, and Jianbing Shen. Rethinking temporal fusion with a unified gradient descent view for 3d semantic occupancy prediction. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 1505-1515, 2025. 1, 3",
        "[10] Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, and Hongsheng Li. 3d sketch-aware semantic scene completion via semi-supervised structure prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4193-4202, 2020. 2",
        "[11] Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with jacobian regularization. arXiv preprint arXiv:1908.02729, 2019. 5",
        "[12] Junjie Huang and Guan Huang. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054, 2022. 7, 8",
        "[13] Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, and Dalong Du. Bevdet: High-performance multi-camera 3d object de"
    ],
    "bbox": [93, 114, 483, 900],
    "page_idx": 8
},
{
    "type": "list",
    "sub_type": "ref_text",
    "list_items": [
        "tection in bird-eye-view. arXiv preprint arXiv:2112.11790, 2021. 1, 3, 4, 5",
        "[14] Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, and Jiwen Lu. Tri-perspective view for vision-based 3d semantic occupancy prediction. arXiv preprint arXiv:2302.07817, 2023. 1, 2, 3, 8",
        "[15] Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, and Jiwen Lu. Selfocc: Self-supervised vision-based 3d occupancy prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19946-19956, 2024.",
        "[16] Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, and Jiwen Lu. Gaussianformer: Scene as gaussians for vision-based 3d semantic occupancy prediction. In European Conference on Computer Vision, pages 376-393. Springer, 2024. 3",
        "[17] Haoyi Jiang, Liu Liu, Tianheng Cheng, Xinjie Wang, Tianwei Lin, Zhizhong Su, Wenyu Liu, and Xinggang Wang. Gausstr: Foundation model-aligned gaussian transformer for self-supervised 3d spatial understanding. arXiv preprint arXiv:2412.13193, 2024. 3",
        "[18] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei. Layercam: Exploring hierarchical class activation maps for localization. IEEE Transactions on Image Processing, 30:5875-5888, 2021. 2, 7",
        "[19] Jungho Kim, Changwon Kang, Dongyoung Lee, Sehwan Choi, and Jun Won Choi. Protoocc: Accurate, efficient 3d occupancy prediction using dual branch encoder-prototype query decoder. arXiv preprint arXiv:2412.08774, 2024. 8",
        "[20] Jie Li, Yu Liu, Dong Gong, Qinfeng Shi, Xia Yuan, Chunxia Zhao, and Ian Reid. Rgbd based dimensional decomposition residual network for 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7693-7702, 2019. 2",
        "[21] Jinke Li, Xiao He, Chonghua Zhou, Xiaoqiang Cheng, Yang Wen, and Dan Zhang. Viewformer: Exploring spatiotemporal modeling for multi-view 3d occupancy perception via view-guided transformers. In Computer Vision-ECCV 2024: 18th European Conference, 2024. 3",
        "[22] Yinhao Li, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, and Zeming Li. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092, 2022. 3, 5",
        "[23] Yinhao Li, Han Bao, Zheng Ge, Jinrong Yang, Jianjian Sun, and Zeming Li. Bevstereo: Enhancing depth estimation in multi-view 3d object detection with temporal stereo. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1486-1494, 2023. 3, 5",
        "[24] Yiming Li, Zhiding Yu, Christopher Choy, Chaowei Xiao, Jose M Alvarez, Sanja Fidler, Chen Feng, and Anima Anandkumar. Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9087-9098, 2023. 2, 3",
        "[25] Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen, Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, et al. Fast-bev: A fast and strong bird's"
    ],
    "bbox": [517, 92, 903, 900],
    "page_idx": 8
},
{
|
| 1849 |
+
"type": "page_number",
|
| 1850 |
+
"text": "24886",
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
478,
|
| 1853 |
+
945,
|
| 1854 |
+
519,
|
| 1855 |
+
955
|
| 1856 |
+
],
|
| 1857 |
+
"page_idx": 8
|
| 1858 |
+
},
|
| 1859 |
+
{
|
| 1860 |
+
"type": "list",
|
| 1861 |
+
"sub_type": "ref_text",
|
| 1862 |
+
"list_items": [
|
| 1863 |
+
"eye view perception baseline. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 3",
|
| 1864 |
+
"[26] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. arXiv preprint arXiv:2203.17270, 2022. 3",
|
| 1865 |
+
"[27] Zhiqi Li, Zhiding Yu, David Austin, Mingsheng Fang, Shiyi Lan, Jan Kautz, and Jose M Alvarez. Fb-occ: 3d occupancy prediction based on forward-backward view transformation. arXiv preprint arXiv:2307.01492, 2023. 1, 2, 3, 5, 7, 8",
|
| 1866 |
+
"[28] Zhiqi Li, Zhiding Yu, Wenhai Wang, Anima Anandkumar, Tong Lu, and Jose M Alvarez. Fb-bev: Bev representation from forward-backward view transformations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6919-6928, 2023. 2, 6",
|
| 1867 |
+
"[29] Zhimin Liao and Ping Wei. Cascadeflow: 3d occupancy and flow prediction with cascaded sparsity sampling refinement framework. CVPR 2024 Autonomous Grand Challenge Track On Occupancy and Flow, 2024. 3",
|
| 1868 |
+
"[30] Lizhe Liu, Bohua Wang, Hongwei Xie, Daqi Liu, Li Liu, Zhiqiang Tian, Kuiyuan Yang, and Bing Wang. Surroundsdf: Implicit 3d scene understanding based on signed distance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 1",
|
| 1869 |
+
"[31] Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, and Xiaowei Li. See and think: Disentangling semantic scene completion. Advances in Neural Information Processing Systems, 31, 2018. 2",
|
| 1870 |
+
"[32] Yili Liu, Linzhan Mou, Xuan Yu, Chenrui Han, Sitong Mao, Rong Xiong, and Yue Wang. Let occ flow: Self-supervised 3d occupancy flow prediction. arXiv preprint arXiv:2407.07587, 2024. 3",
|
| 1871 |
+
"[33] Ilya Loshchilov, Frank Hutter, et al. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017. 7",
|
| 1872 |
+
"[34] Yuhang Lu, Xinge Zhu, Tai Wang, and Yuexin Ma. Octreeocc: Efficient and multi-granularity occupancy prediction using octree queries. arXiv preprint arXiv:2312.03774, 2023. 3",
|
| 1873 |
+
"[35] Qihang Ma, Xin Tan, Yanyun Qu, Lizhuang Ma, Zhizhong Zhang, and Yuan Xie. Cotr: Compact occupancy transformer for vision-based 3d occupancy prediction. arXiv preprint arXiv:2312.01919, 2023. 1, 2, 3, 7, 8",
|
| 1874 |
+
"[36] Mingjie Pan, Jiaming Liu, Renrui Zhang, Peixiang Huang, Xiaoqi Li, Hongwei Xie, Bing Wang, Li Liu, and Shang-hang Zhang. Renderocc: Vision-centric 3d occupancy prediction with 2d rendering supervision. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 12404-12411. IEEE, 2024. 3",
|
| 1875 |
+
"[37] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16, pages 194-210. Springer, 2020. 2, 3"
|
| 1876 |
+
],
|
| 1877 |
+
"bbox": [
|
| 1878 |
+
91,
|
| 1879 |
+
92,
|
| 1880 |
+
482,
|
| 1881 |
+
897
|
| 1882 |
+
],
|
| 1883 |
+
"page_idx": 9
|
| 1884 |
+
},
|
| 1885 |
+
{
|
| 1886 |
+
"type": "list",
|
| 1887 |
+
"sub_type": "ref_text",
|
| 1888 |
+
"list_items": [
|
| 1889 |
+
"[38] Yang Shi, Tianheng Cheng, Qian Zhang, Wenyu Liu, and Xinggang Wang. Occupancy as set of points. In Computer Vision-ECCV 2024: 18th European Conference, 2024. 3",
|
| 1890 |
+
"[39] Xiaoyu Tian, Tao Jiang, Longfei Yun, Yucheng Mao, Huitong Yang, Yue Wang, Yilun Wang, and Hang Zhao. Occ3d: A large-scale 3d occupancy prediction benchmark for autonomous driving. Advances in Neural Information Processing Systems, 36, 2024. 1, 3, 7, 8",
|
| 1891 |
+
"[40] Wenwen Tong, Chonghao Sima, Tai Wang, Li Chen, Silei Wu, Hanming Deng, Yi Gu, Lewei Lu, Ping Luo, Dahua Lin, et al. Scene as occupancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8406-8415, 2023. 1, 3",
|
| 1892 |
+
"[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 3",
|
| 1893 |
+
"[42] Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. Label words are anchors: An information flow perspective for understanding in-context learning. arXiv preprint arXiv:2305.14160, 2023. 2",
|
| 1894 |
+
"[43] Xiaofeng Wang, Zheng Zhu, Wenbo Xu, Yunpeng Zhang, Yi Wei, Xu Chi, Yun Ye, Dalong Du, Jiwen Lu, and Xinggang Wang. Openoccupancy: A large scale benchmark for surrounding semantic occupancy perception. arXiv preprint arXiv:2303.03991, 2023. 1, 3",
|
| 1895 |
+
"[44] Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, and Jiwen Lu. Surroundocc: Multi-camera 3d occupancy prediction for autonomous driving. arXiv preprint arXiv:2303.09551, 2023. 1, 3",
|
| 1896 |
+
"[45] Yuan Wu, Zhiqiang Yan, Zhengxue Wang, Xiang Li, Le Hui, and Jian Yang. Deep height decoupling for precise vision-based 3d occupancy prediction. arXiv preprint arXiv:2409.07972, 2024.8",
|
| 1897 |
+
"[46] Ziyang Yan, Wenzhen Dong, Yihua Shao, Yuhang Lu, Liu Haiyang, Jingwen Liu, Haozhe Wang, Zhe Wang, Yan Wang, Fabio Remondino, et al. Renderworld: World model with self-supervised 3d label. arXiv preprint arXiv:2409.11356, 2024.3",
|
| 1898 |
+
"[47] Shichao Yang, Yulan Huang, and Sebastian Scherer. Semantic 3d occupancy mapping through efficient high order crfs. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 590-597. IEEE, 2017. 1",
|
| 1899 |
+
"[48] Zichen Yu, Changyong Shu, Jiajun Deng, Kangjie Lu, Zong-dai Liu, Jiangyong Yu, Dawei Yang, Hui Li, and Yan Chen. Flashocc: Fast and memory-efficient occupancy prediction via channel-to-height plugin. arXiv preprint arXiv:2311.12058, 2023. 8",
|
| 1900 |
+
"[49] Chubin Zhang, Juncheng Yan, Yi Wei, Jiaxin Li, Li Liu, Yansong Tang, Yueqi Duan, and Jiwen Lu. Occnerf: Self-supervised multi-camera occupancy prediction with neural radiance fields. arXiv preprint arXiv:2312.09243, 2023. 3",
|
| 1901 |
+
"[50] Jinqing Zhang, Yanan Zhang, Qingjie Liu, and Yunhong Wang. Lightweight spatial embedding for vision-based 3d occupancy prediction. arXiv preprint arXiv:2412.05976, 2024.8"
|
| 1902 |
+
],
|
| 1903 |
+
"bbox": [
|
| 1904 |
+
516,
|
| 1905 |
+
92,
|
| 1906 |
+
905,
|
| 1907 |
+
898
|
| 1908 |
+
],
|
| 1909 |
+
"page_idx": 9
|
| 1910 |
+
},
|
| 1911 |
+
{
|
| 1912 |
+
"type": "page_number",
|
| 1913 |
+
"text": "24887",
|
| 1914 |
+
"bbox": [
|
| 1915 |
+
478,
|
| 1916 |
+
945,
|
| 1917 |
+
517,
|
| 1918 |
+
955
|
| 1919 |
+
],
|
| 1920 |
+
"page_idx": 9
|
| 1921 |
+
},
|
| 1922 |
+
{
|
| 1923 |
+
"type": "list",
|
| 1924 |
+
"sub_type": "ref_text",
|
| 1925 |
+
"list_items": [
|
| 1926 |
+
"[51] Yunpeng Zhang, Zheng Zhu, and Dalong Du. Occformer: Dual-path transformer for vision-based 3d semantic occupancy prediction. arXiv preprint arXiv:2304.05316, 2023. 2, 3",
|
| 1927 |
+
"[52] Jilai Zheng, Pin Tang, Zhongdao Wang, Guoqing Wang, Xiangxuan Ren, Bailan Feng, and Chao Ma. Veon: Vocabulary-enhanced occupancy prediction. In European Conference on Computer Vision, pages 92-108. Springer, 2024. 3",
|
| 1928 |
+
"[53] Yucheng Zhou, Xiang Li, Qianning Wang, and Jianbing Shen. Visual in-context learning for large vision-language models. In Findings of the Association for Computational Linguistics ACL 2024, pages 15890-15902, 2024. 2"
|
| 1929 |
+
],
|
| 1930 |
+
"bbox": [
|
| 1931 |
+
89,
|
| 1932 |
+
90,
|
| 1933 |
+
482,
|
| 1934 |
+
260
|
| 1935 |
+
],
|
| 1936 |
+
"page_idx": 10
|
| 1937 |
+
},
|
| 1938 |
+
{
|
| 1939 |
+
"type": "page_number",
|
| 1940 |
+
"text": "24888",
|
| 1941 |
+
"bbox": [
|
| 1942 |
+
478,
|
| 1943 |
+
945,
|
| 1944 |
+
517,
|
| 1945 |
+
955
|
| 1946 |
+
],
|
| 1947 |
+
"page_idx": 10
|
| 1948 |
+
}
|
| 1949 |
+
]

2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_model.json
ADDED
The diff for this file is too large to render.
See raw diff

2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/218a3708-3415-4c53-9a50-ef5f14ed0f9a_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be183a2ca17b9b55b3acd3cfa8db65e6feda3fa168bcaa740ddc691c34dc1070
size 1213523

2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/full.md
ADDED
@@ -0,0 +1,405 @@
# Semantic Causality-Aware Vision-Based 3D Occupancy Prediction

Dubing Chen $^{1}$ , Huan Zheng $^{1}$ , Yucheng Zhou $^{1}$ , Xianfei Li $^{2}$ , Wenlong Liao $^{2}$ , Tao He $^{2}$ , Pai Peng $^{2}$ , Jianbing Shen $^{1\boxtimes}$ $^{1}$ SKL-IOTSC, CIS, University of Macau $^{2}$ COWAROBOT Co. Ltd. https://github.com/cdb342/CausalOcc

# Abstract

Vision-based 3D semantic occupancy prediction is a critical task in 3D vision that integrates volumetric 3D reconstruction with semantic understanding. Existing methods, however, often rely on modular pipelines. These modules are typically optimized independently or use pre-configured inputs, leading to cascading errors. In this paper, we address this limitation by designing a novel causal loss that enables holistic, end-to-end supervision of the modular 2D-to-3D transformation pipeline. Grounded in the principle of 2D-to-3D semantic causality, this loss regulates the gradient flow from 3D voxel representations back to the 2D features. Consequently, it renders the entire pipeline differentiable, unifying the learning process and making previously non-trainable components fully learnable. Building on this principle, we propose the Semantic Causality-Aware 2D-to-3D Transformation, which comprises three components guided by our causal loss: Channel-Grouped Lifting for adaptive semantic mapping, Learnable Camera Offsets for enhanced robustness against camera perturbations, and Normalized Convolution for effective feature propagation. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the Occ3D benchmark, showing significant robustness to camera perturbations and improved 2D-to-3D semantic consistency.

# 1. Introduction

Predicting dense 3D semantic occupancy is a fundamental task in 3D vision, providing a fine-grained voxel representation of scene geometry and semantics [39, 40, 43, 47]. The challenge of performing this prediction from vision alone, an approach known as vision-based 3D semantic occupancy prediction (VisionOcc), has recently become a focal point of research. By leveraging only commodity cameras, VisionOcc is pivotal for a wide range of 3D applications, serving as a comprehensive digital replica of the environment for tasks like analysis, simulation, and interactive visualization [7, 9, 14, 27, 39, 40, 44].

(a) An Example of Inaccurate 2D-to-3D Transformation
(b) Modular 2D-to-3D Transformation
(c) Our End-to-End Supervised 2D-to-3D Transformation

Figure 1. (a) Illustrates a Visual Analysis of Semantic Ambiguity in VisionOcc. Inaccurate 2D-to-3D transformations may lead to positional shifts, misaligning supervision signals and resulting in semantic ambiguity. (b) Depicts the Conventional Modular 2D-to-3D Transformation Paradigm [7, 13, 27, 35], which employs depth supervision for geometry estimation, pre-calibrated camera parameters, and fixed mapping for lifting. (c) Presents Our Holistic, End-to-End Supervised 2D-to-3D Transformation Paradigm, which eliminates the need for separate modular supervision or pre-calibration, enabling unified error propagation to supervise all components.

VisionOcc unifies the challenges of feed-forward 3D reconstruction and dense semantic understanding. Existing pipelines typically decompose this task into two phases [7, 14, 27, 30, 35]. The initial 2D-to-3D transformation uses large receptive-field operators like view lifting [7, 37] or cross attention [14, 27] to construct an initial 3D feature volume. Subsequently, a 3D representation learning phase employs operators with a small receptive field (e.g., 3D convolutions or local self-attention) to refine 3D features and produce the final prediction. Our work targets the 2D-to-3D transformation phase, which is more critical and error-prone. Fig. 1a showcases a primary failure mode where features of one class (e.g., a 2D 'car') are erroneously transformed to the 3D location of another (e.g., a 'tree'). This creates a flawed learning objective, forcing the model to learn a spurious association between 'car' features and a 'tree' label. Such semantic ambiguity is a principal obstacle to achieving high performance.

The prevailing VisionOcc methods, typically based on Lift-Splat-Shoot (LSS), employ a modular approach for the 2D-to-3D transformation (Fig. 1b) [7, 35, 37]. This involves supervising geometry with a proxy depth loss while relying on fixed, pre-calibrated camera parameters and a static lifting map. However, this modularity raises critical questions about robustness and optimality. First, it is susceptible to compounding errors; for example, the reliance on fixed camera parameters makes the system vulnerable to real-world perturbations like camera jitter during motion. More fundamentally, the optimality of such proxy supervision is questionable. An intermediate representation ideal for depth estimation may not be optimal for the final semantic occupancy task, inherently limiting the transformation's expressive power due to this objective misalignment. This motivates our central research question: Can we devise an end-to-end supervision framework<sup>1</sup> that holistically optimizes the entire 2D-to-3D transformation, enabling unified semantic-aware error backpropagation and allowing traditionally fixed modules to become fully learnable?

We approach this problem from a causal perspective. In VisionOcc, the 2D image semantics are the "cause" of the final 3D semantic "effect". Semantic misalignment arises from disrupted information flow from cause to effect (Fig. 1a). Therefore, instead of correcting the erroneous output, we propose to directly regularize the information flow itself. We posit that a 3D prediction for a given class should be influenced predominantly by 2D image regions of that same class. To enforce this information flow, we leverage gradients as a proxy, inspired by prior work [18, 42, 53]. For each semantic class, the gradient of its aggregated 3D features is computed w.r.t. the 2D feature map, producing a saliency-like map of 2D influence. This map is then directly supervised with the ground truth 2D segmentation mask. As shown in Fig. 1c, this establishes a principled, end-to-end supervision signal for the 2D-to-3D transformation, enabling holistic optimization of all its components.
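For a purely linear lift, the gradient of class-aggregated 3D features w.r.t. the 2D features has a closed form, which lets the influence map be illustrated without an autograd framework. The following is a minimal numpy sketch of this gradient-as-influence idea (ours, not the paper's code); the shapes, the random projection `vox_of`, and the L2 penalty at the end are all toy assumptions:

```python
import numpy as np

# Toy setting: U pixels, D depth bins, H voxels (1-D for simplicity).
U, D, H = 4, 3, 5
rng = np.random.default_rng(0)

G = rng.random((U, D))                    # depth distribution per pixel
G /= G.sum(axis=1, keepdims=True)
vox_of = rng.integers(0, H, size=(U, D))  # voxel hit by each (pixel, depth)
f2d = rng.random(U)                       # scalar 2D feature per pixel

# Linear lift: each voxel accumulates G[u, d] * f2d[u] for rays landing in it.
f3d = np.zeros(H)
for u in range(U):
    for d in range(D):
        f3d[vox_of[u, d]] += G[u, d] * f2d[u]

# Aggregate 3D features over voxels labelled with class c; for this linear
# lift, d(agg)/d(f2d[u]) is exactly the sum of G-weights landing in class-c
# voxels, i.e. a saliency-like 2D influence map.
gt3d = rng.integers(0, 2, size=H)         # toy 3D semantic labels
c = 1
agg = f3d[gt3d == c].sum()
influence = np.zeros(U)
for u in range(U):
    for d in range(D):
        if gt3d[vox_of[u, d]] == c:
            influence[u] += G[u, d]

# A causal-style loss would supervise `influence` with the 2D mask of class
# c; an L2 penalty is used here purely for illustration.
mask2d = rng.integers(0, 2, size=U).astype(float)
causal_loss = ((influence - mask2d) ** 2).mean()
```

Supervising `influence` toward the 2D mask pushes the depth weights `G` to route each pixel's features into voxels of the same class, which is the information-flow constraint described above.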

To fully leverage our end-to-end supervision, we introduce a more expressive and learnable 2D-to-3D view transformation, termed the Semantic Causality-Aware Transformation (SCAT). A key challenge is that directly supervising gradients is inherently unstable. Therefore, the entire SCAT module is designed to constrain its gradient flow to a stable [0, 1] range. Specifically, SCAT introduces three targeted designs: i) Channel-Grouped Lifting: To better disentangle semantics, we move beyond LSS's uniform weighting and apply distinct learnable weights to different groups of feature channels. ii) Learnable Camera Offsets: To mitigate motion-induced pose errors, we introduce learnable offsets to the camera parameters, which are implicitly supervised by the 2D-3D semantic consistency enforced by our causal loss. iii) Normalized Convolution: Finally, we employ a normalized convolution to densify the sparse 3D features from LSS [28], ensuring this final step also adheres to our global gradient stability requirement.
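The channel-grouped idea above can be sketched with a per-group depth distribution in place of LSS's single one. This is an illustrative numpy sketch under assumed shapes (not the paper's implementation); with `n_groups = 1` it reduces to the standard LSS outer product:

```python
import numpy as np

# Toy shapes: U x V image, C channels split into n_groups, D depth bins.
U, V, C, D, n_groups = 2, 3, 8, 4, 2
assert C % n_groups == 0
rng = np.random.default_rng(0)

f2d = rng.random((U, V, C))             # 2D features
G = rng.random((U, V, n_groups, D))     # per-group depth weights
G /= G.sum(axis=-1, keepdims=True)      # normalise over depth bins

# Lift each channel group with its own weights; output is (U, V, D, C).
f_groups = f2d.reshape(U, V, n_groups, C // n_groups)
lifted = np.einsum('uvgd,uvgc->uvdgc', G, f_groups)
lifted = lifted.reshape(U, V, D, C)
```

Because each group's weights sum to one over depth, summing the lifted volume over the depth axis recovers the original 2D features, so the grouping only redistributes semantics across depth, per group.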

Our contributions are as follows: i) We systematically analyze the 2D-to-3D transformation in VisionOcc, identifying a critical failure mode we term semantic ambiguity. We provide a theoretical analysis proving how the modularity of prior methods leads to error propagation, offering clear guidance for future work. ii) To address these problems, we propose the Causal Loss that directly regularizes the information flow of the 2D-to-3D transformation. This enables true end-to-end optimization of all constituent modules, mitigating error accumulation and making previously fixed components, such as camera parameters, fully learnable. iii) We instantiate our principles in the Semantic Causality-Aware Transformation, a novel 2D-to-3D transformation architecture. SCAT incorporates Channel-Grouped Lifting, Learnable Camera Offsets, and Normalized Convolution to explicitly tackle the challenges of semantic confusion, camera perturbations, and limited learnability. iv) Extensive experiments show our method significantly boosts existing models, achieving a $3.2\%$ absolute mIoU gain on BEVDet. Furthermore, it demonstrates superior robustness to camera perturbations, reducing the relative performance drop on BEVDet from a severe $-32.2\%$ to a mere $-7.3\%$ .
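The normalized convolution mentioned among the components can be sketched in 1-D: the masked features and the mask are convolved separately and divided, so empty voxels are filled only from observed neighbours. The kernel, shapes, and values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sparse voxel features and an occupancy mask (1 where a ray landed).
feat = np.array([0.0, 2.0, 0.0, 0.0, 4.0])
mask = np.array([0.0, 1.0, 0.0, 0.0, 1.0])
kern = np.array([1.0, 1.0, 1.0])  # toy box kernel

# Normalized convolution: conv(feat * mask) / conv(mask).
num = np.convolve(feat * mask, kern, mode='same')
den = np.convolve(mask, kern, mode='same')
dense = np.where(den > 0, num / np.maximum(den, 1e-8), 0.0)
```

Dividing by the convolved mask keeps the propagated values a convex-like average of observed neighbours, which is one way to keep the operator's effective weights, and hence its gradients, bounded.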

# 2. Related Work

# 2.1. Semantic Scene Completion

Semantic Scene Completion (SSC) [10, 20, 24, 31, 51] refers to the task of simultaneously predicting both the occupancy and semantic labels of a scene. Existing methods can be classified into indoor and outdoor approaches based on the scene type, with the former focusing on occupancy and semantic label prediction in controlled environments [10, 20, 31], while the latter shifts towards more complex outdoor settings, particularly in the context of autonomous driving [1, 6]. The core principle of SSC lies in its ability to infer the unseen, effectively bridging gaps in incomplete observations with accurate semantic understanding. MonoScene [6] introduces a 3D SSC framework that infers dense geometry and semantics from a single monocular RGB image. VoxFormer [24] presents a Transformer-based semantic scene completion framework that generates complete 3D volumetric semantics from 2D images by first predicting sparse visible voxel queries and then densifying them through self-attention with a masked autoencoder design. OccFormer [51] introduces a dual-path transformer network for 3D semantic occupancy prediction, efficiently processing camera-generated 3D voxel features through local and global pathways, and enhancing occupancy decoding with preserve-pooling and class-guided sampling to address sparsity and class imbalance.

# 2.2. Vision-based 3D Occupancy Prediction

Vision-based 3D Occupancy Prediction [9, 14-16, 34, 38-40, 43] aims to predict the spatial and semantic features of 3D voxel grids surrounding an autonomous vehicle from image data. This task is closely related to SSC, emphasizing the importance of multi-perspective joint perception for effective autonomous navigation. TPVFormer [14] is a pioneering work that lifts image features into the 3D TPV space by leveraging an attention mechanism [26, 41]. Different from TPVFormer, which relies on sparse point clouds for supervision, subsequent studies, including OccNet [40], SurroundOcc [44], Occ3D [39], and OpenOccupancy [43], have developed denser occupancy annotations by incorporating temporal information or instance-level labels. Methods such as BEVDet [13], FBOcc [27], COTR [35], and ALOcc [7] leverage depth-based LSS [22, 23, 25, 37] for explicit geometric transformation, demonstrating strong performance. Some methods [2, 4, 17, 36, 46, 49, 52] have explored rendering-based methods that utilize 2D signal supervision, thereby bypassing the need for 3D annotations. Furthermore, recent research like [3, 7, 8, 21, 29, 32, 40] introduced 3D occupancy flow prediction, which addresses the movement of foreground objects in dynamic scenes by embedding 3D flow information to capture per-voxel dynamics. Unlike the above methods, we analyze the 2D-to-3D transformation process from the perspectives of error propagation and semantic causal consistency, proposing a novel approach that enhances causal consistency.

# 3. Method

Preliminary. VisionOcc predicts a dense 3D semantic occupancy grid from surround-view images, modeled as a causal dependency chain as shown in Fig. 2. This process involves key variables: input image $\mathbf{I}$ , estimated geometry $\mathbf{G}$ , camera parameters $P$ (intrinsic & extrinsic), potential camera parameter errors $e_{P}$ , intermediate 3D feature $\mathbf{L}$ , and final occupancy output $\mathbf{O}$ . In LSS-based methods [7, 13, 35], the pipeline starts with the image backbone extracting features $\mathbf{f}_i = F_i(\mathbf{I}), \mathbf{f}_i \in \mathbb{R}^{U,V,C}$ ; geometry $\mathbf{G} \in \mathbb{R}^{U,V,D}$ is predicted as a probability distribution over discretized depth bins $D$ , i.e., $\mathbf{G} = F_g(\mathbf{f}_i)$ ; 2D features are then transformed to 3D via an outer product $\mathbf{f}_L' = \mathbf{G} \otimes \mathbf{f}_i, \mathbf{f}_L' \in \mathbb{R}^{U,V,D,C}$ ; camera parameters $P$ map these to voxel coordinates $R_P(u,v,d) \to (h,w,z) \in [0,H-1] \times [0,W-1] \times [0,Z-1]$ , yielding $\mathbf{f}_L \in \mathbb{R}^{H \times W \times Z \times C}$ , where $H \times W \times Z$ defines the occupancy grid resolution; finally, $\mathbf{L}$ is decoded to produce the semantic occupancy output: $\mathbf{O} = F_o(\mathbf{f}_L), \mathbf{O} \in \mathbb{R}^{H \times W \times Z \times S}$ , where $F_o$ is the decoding function and $S$ is the number of semantic classes. The prediction $\mathbf{O}$ is supervised by the ground-truth $\tilde{\mathbf{O}}$ .

Figure 2. The Causal Structure of VisionOcc. It illustrates the dependency chain from the image input $\mathbf{I}$ to the semantic occupancy output $\mathbf{O}$ . $\mathbf{G}$ : geometry for 2D-to-3D transformation. $P$ : camera intrinsic and extrinsic. $\mathbf{L}$ : intermediate 3D feature. $e_P$ : errors in camera parameters.
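The lift-and-splat steps of this pipeline can be written in a few lines of numpy. This is a toy sketch with made-up shapes and a random stand-in for the projection $R_P$ (not the paper's implementation):

```python
import numpy as np

# Toy shapes: U x V image, C channels, D depth bins, H x W x Z voxel grid.
U, V, C, D = 2, 2, 4, 3
H = W = Z = 2
rng = np.random.default_rng(0)

f_i = rng.random((U, V, C))  # 2D backbone features
logits = rng.random((U, V, D))
G = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # depth bins

# Lift: outer product f_L' = G ⊗ f_i, shape (U, V, D, C).
f_lp = G[..., None] * f_i[:, :, None, :]

# R_P: map each frustum point (u, v, d) to a voxel (h, w, z); random here.
hwz = rng.integers(0, 2, size=(U, V, D, 3))

# Splat: accumulate frustum features into the voxel grid f_L.
f_L = np.zeros((H, W, Z, C))
for u in range(U):
    for v in range(V):
        for d in range(D):
            h, w, z = hwz[u, v, d]
            f_L[h, w, z] += f_lp[u, v, d]
```

In a real system $R_P$ is the calibrated camera projection and the splat is a scatter-add over a large frustum; the loops above only make the bookkeeping explicit.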

# 3.1. Error Propagation in Depth-Based LSS

Theorem 1. In Depth-Based LSS methods with a fixed 2D-to-3D mapping $M_{\text{fixed}}$ , the inherent mapping error $\delta M$ leads to gradient deviations, preventing convergence to an $\epsilon$ -optimal solution. This is formalized as:

$$
M_{\text{fixed}} = M_{\text{ideal}} + \delta M \Longrightarrow \nabla_{\theta} L_{LSS} \neq \nabla_{\theta} L_{ideal}, \tag{1}
$$

where $M_{ideal}$ is the ideal mapping and $L_{ideal}$ is the loss function using $M_{ideal}$ .

Proof. Mapping Error Quantification: Let $\mathbf{x} \in \mathbb{R}^2$ be 2D pixel coordinates and $d(\mathbf{x})$ be the ground-truth depth. The estimated depth is $\hat{d}(\mathbf{x}) = d(\mathbf{x}) + \epsilon_d(\mathbf{x})$ , with $\epsilon_d(\mathbf{x})$ as the depth error. Let $\mathbf{K}_{ideal} \in \mathbb{R}^{3 \times 3}$ be the ideal camera intrinsics, and $\mathbf{K} = \mathbf{K}_{ideal} + \epsilon_K$ be the estimated camera intrinsics with error $\epsilon_K$ . The fixed mapping is $M_{fixed} = M_{ideal} + \delta M$ , where $\delta M$ encompasses errors from various sources, including the depth estimation error $\epsilon_d(\mathbf{x})$ and the camera intrinsic error $\epsilon_K$ . We assume the total mapping error is bounded, $\| \delta M \|_F \leq \Delta_M < \infty$ . The 3D coordinates are:

$$
\mathbf{X} = M_{\text{fixed}}(\mathbf{x}, \hat{d}(\mathbf{x}), \mathbf{K}) \tag{2}
$$

$$
\mathbf{X}_{\text{ideal}} = M_{\text{ideal}}(\mathbf{x}, d(\mathbf{x}), \mathbf{K}_{\text{ideal}}) \tag{3}
$$

$$
\Delta \mathbf{X} = \mathbf{X} - \mathbf{X}_{\text{ideal}} = \delta M(\mathbf{x}, \hat{d}(\mathbf{x}), \mathbf{K}), \tag{4}
$$

where $\| \Delta \mathbf{X}\| _2 \leq C \cdot \Delta_M$ for bounded inputs.

Figure 3. The Overall Framework of the Proposed Semantic Causality-Aware VisionOcc. The proposed framework consists of three primary components: a backbone network for extracting 2D features, an SCAT module for transforming these features into 3D space, and an Encoder-Decoder network for learning 3D semantics. The SCAT module is supervised by our causal loss.

<table><tr><td>Method</td><td>mIoU</td><td>mIoUD</td><td>IoU</td></tr><tr><td>Depth-Based LSS</td><td>44.5</td><td>40.4</td><td>78.9</td></tr><tr><td>SCL-Aware LSS</td><td>50.5↑6.0</td><td>46.9↑6.5</td><td>85.7↑6.8</td></tr></table>

Table 1. Performance of Depth-Based LSS vs. SCL-Aware LSS on Occ3D (in Ideal Conditions). BEVDetOcc is the baseline.

Feature Space Deviation and Loss: Let $F_{2D}(\mathbf{x})$ be 2D features and $F_{3D}(\cdot) = \text{Lift}(F_{2D}(\mathbf{x}), \cdot)$ . Assuming Lift is $L_{\text{Lift}}$ -Lipschitz continuous, the 3D feature deviation is:

$$
\left\| F_{3D}(\mathbf{X}) - F_{3D}(\mathbf{X}_{\text{ideal}}) \right\|_F \leq L_{\text{Lift}} \| \mathbf{X} - \mathbf{X}_{\text{ideal}} \|_2 \leq L_{\text{Lift}} \| \Delta \mathbf{X} \|_2 \leq \Delta_{F_{3D}}. \tag{5}
$$

The loss function is $L_{LSS} = \mathcal{L}(P_{3D}(\mathbf{X}), GT_{3D})$ , where $P_{3D}(\mathbf{X}) = Seg_{3D}(F_{3D}(\mathbf{X}))$ .

Gradient Deviation and Optimization Limit: The gradient of $L_{LSS}$ w.r.t. the parameters $\theta$ is given by the chain rule. However, since $M_{fixed}$ is fixed, $\frac{\partial\mathbf{X}}{\partial\theta} = 0$ . Thus, the gradient becomes:

$$
\nabla_{\theta} L_{LSS} = \frac{\partial L_{LSS}}{\partial P_{3D}} \frac{\partial P_{3D}}{\partial F_{3D}} \left(\frac{\partial F_{3D}}{\partial F_{2D}} \frac{\partial F_{2D}}{\partial \theta}\right). \tag{6}
$$

Due to $\Delta_{F_{3D}} > 0$ , the computed gradient $\nabla_{\theta}L_{LSS}$ is based on the deviated feature space, i.e.,

$$
\nabla_{\theta} L_{LSS}(\theta) = \nabla_{\theta} \mathcal{L}\left(P_{3D}(\mathbf{X}), GT_{3D}\right) \neq \nabla_{\theta} \mathcal{L}\left(P_{3D}(\mathbf{X}_{ideal}), GT_{3D}\right) = \nabla_{\theta} L_{ideal}(\theta). \tag{7}
$$

This gradient deviation prevents direct optimization of the mapping and limits convergence to an $\epsilon$ -optimal solution.

The theoretical analysis reveals that the inherent error in the fixed 2D-to-3D mapping of Depth-Based LSS methods fundamentally hinders gradient-based optimization from achieving optimal performance.

# 3.2. Semantic Causal Locality in VisionOcc

As revealed in our theoretical analysis (Sec. 3.1), Depth-Based LSS methods suffer from inherent error propagation due to their fixed 2D-to-3D mapping, which limits optimization efficacy and potential performance. To overcome these limitations, we particularly focus on strengthening the semantic causality of the 2D-to-3D transformation. We argue that VisionOcc's 2D-to-3D semantic occupancy prediction should exhibit semantic causal locality (SCL) for robust perception in autonomous driving. Ideally, 2D causes should drive 3D semantic effects. For instance, a predicted "car" at 3D location $(h,w,z)$ should originate from a matching 2D image region of "car". Per the causal chain, camera parameters $P$ and estimated geometry $\mathbf{G}$ enable this dependency, with $\mathbf{G}$ being crucial to maintaining SCL.

Next, we formulate the ideal SCL condition. For a 2D pixel $(u,v)$ with semantic label $s$ , the projection probability $p_d$ (the value of $\mathbf{G}$ at this coordinate and depth $d \in D$ ) should be high if its corresponding 3D ground-truth semantic is $s$ , and low otherwise:
$$
p_{d} \propto \mathbb{1}\left(\tilde{\mathbf{O}}\left(R_{P}(u, v, d) + e_{P}\right) = s\right),
$$
where $\mathbb{1}$ is the indicator function, and $e_P$ represents the potential coordinate transformation error caused by factors such as camera pose error. In practice, $p_d$ acts as a weight multiplied by 2D features (Eq. (8)), enabling a probabilistic mapping that supports differentiable backpropagation.
Limitations of Depth-Based LSS in SCL. Depth-Based LSS does not fully account for semantic causal consistency. It preserves semantic causal locality only under ideal conditions where $e_P = 0$ and depth estimation is perfectly accurate. With coordinate transformation errors $e_P$ , however, even a high $p_d$ at the ideal depth may project 2D semantics to incorrect 3D locations, i.e.,
$$
\tilde{\mathbf{O}}\left(R_{P}(u, v, d) + 0\right) = s, \quad \text{but} \quad \tilde{\mathbf{O}}\left(R_{P}(u, v, d) + e_{P}\right) \neq s.
$$
This misalignment links 2D semantics $s$ (e.g., "car") with wrong 3D semantics (e.g., "tree"), introducing semantic ambiguity and hindering training. Moreover, depth-based LSS often concentrates semantics at surface points, weakening semantic propagation to occluded regions [7].
Empirical Validation. We conduct an empirical study to validate our analysis of SCL, comparing two ideal geometric transformations. Using BEVDetOcc [13] as the baseline, we replace its estimated depth-based geometry for LSS with: $i$ ) ground-truth LiDAR depths; $ii$ ) SCL-aware geometry, which computes $p_d$ from 2D and 3D semantic ground truths, where for $(u,v,d,s)$ , $p_d = 1$ if $\tilde{\mathbf{O}}(R_P(u,v,d)) = s$ , else $p_d = 0$ . We evaluate their performance in semantic occupancy prediction. Tab. 1 demonstrates that the SCL-aware 2D-to-3D transformation achieves significant performance improvements over depth-based LSS under ideal conditions, validating the benefits of semantic causal locality.
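The SCL-aware geometry above can be sketched as follows. This is an illustrative NumPy construction under assumed array shapes, with a toy `project` callable standing in for $R_P$ ; it is not the paper's implementation.

```python
import numpy as np

def scl_aware_geometry(occ_gt, sem_2d, project, depths):
    """Toy SCL-aware transformation geometry G.

    occ_gt : (H, W, Z) int array of 3D ground-truth semantic labels.
    sem_2d : (U, V) int array of 2D ground-truth semantic labels.
    project: callable (u, v, d) -> (h, w, z) integer voxel indices (R_P).
    depths : list of candidate depth bins D.
    Returns p of shape (U, V, len(depths)) with p[u, v, k] = 1 iff the
    voxel hit at depth d_k carries the pixel's 2D semantic label s.
    """
    U, V = sem_2d.shape
    p = np.zeros((U, V, len(depths)))
    for u in range(U):
        for v in range(V):
            s = sem_2d[u, v]
            for k, d in enumerate(depths):
                h, w, z = project(u, v, d)
                if (0 <= h < occ_gt.shape[0] and 0 <= w < occ_gt.shape[1]
                        and 0 <= z < occ_gt.shape[2]):
                    # p_d = 1 if O(R_P(u, v, d)) == s, else 0
                    p[u, v, k] = float(occ_gt[h, w, z] == s)
    return p
```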
Summary. Building on the above analysis, we propose our solution to enforce semantic causality constraints during training. The overall framework is shown in Fig. 3.
# 3.3. Semantic Causality-Aware Causal Loss
For lifting methods like LSS, we could directly supervise the transformation geometry $\mathbf{G}$ using the causal semantic geometry derived from ground-truth labels (as described in the previous section). However, we aim to enhance the lifting method in the next section, rendering direct supervision impractical. Thus, we design a gradient-based approach to enforce semantic causality.
We begin within the LSS framework. For a 2D pixel feature at location $(u, v)$ , LSS multiplies it by the depth-related transformation probability $p_d$ and projects it to the 3D coordinate corresponding to depth $d$ :
$$
\mathbf{f}_{L}\left(R_{P}(u, v, d)\right) = p_{d}(u, v, d) \cdot \mathbf{f}_{i}(u, v, d). \tag{8}
$$
Here, $\mathbf{f}_L(R_P(u,v,d))\in \mathbb{R}^C$ represents the 3D voxel feature at the projected location, with $e_{P}$ omitted for notational simplicity. $\mathbf{f}_i(u,v,d)\in \mathbb{R}^C$ denotes the 2D image feature at location $(u,v)$ . We backpropagate gradients from $\mathbf{f}_L$ to $\mathbf{f}_i$ , i.e.,
$$
\frac{\partial \sum \mathbf{f}_{L}\left(R_{P}(u, v, d)\right)}{\partial \mathbf{f}_{i}(u, v, d)} = p_{d} \cdot \mathbf{I}, \tag{9}
$$
where $\mathbf{I}$ is the all-ones vector.
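Eqs. (8) and (9) can be checked numerically with a toy NumPy sketch; the feature values and depth distribution below are illustrative assumptions:

```python
import numpy as np

def lift_pixel(f_i, p):
    """Eq. (8) for one pixel: scale the 2D feature f_i (C,) by the
    transformation probability p_d of each of the D depth bins,
    producing the lifted voxel features f_L of shape (D, C)."""
    return p[:, None] * f_i[None, :]

f_i = np.array([1.0, 2.0, 3.0])   # toy pixel feature, C = 3
p = np.array([0.1, 0.7, 0.2])     # toy depth distribution, D = 3
f_L = lift_pixel(f_i, p)

# Eq. (9): perturbing any single channel of f_i changes sum_c f_L(d, c)
# at rate p_d, i.e. the Jacobian entry for every depth bin is p_d.
eps = 1e-6
f_pert = f_i.copy()
f_pert[1] += eps
num_grad = (lift_pixel(f_pert, p).sum(axis=1) - f_L.sum(axis=1)) / eps
```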
For each semantic class $s$ , we aggregate the features $\mathbf{f}_L$ at all 3D positions where the ground truth class equals $s$ . This aggregation is backpropagated to the 2D features $\mathbf{f}_i$ , yielding a gradient map $\nabla_s \in \mathbb{R}^{U \times V \times C}$ for class $s$ :
$$
\nabla_{s}(u, v, c) = \sum_{\left(h', w', z'\right) \in \Omega_{s}} \frac{\partial \sum \mathbf{f}_{L}\left(h', w', z', c\right)}{\partial \mathbf{f}_{i}(u, v, c)}, \tag{10}
$$
where $\Omega_s = \{(h', w', z') \mid O(h', w', z') = s\}$ is the set of 3D positions with semantic label $s$ , and $c$ indexes the feature channels. Averaging over the $C$ channels produces an attention map $A_s \in \mathbb{R}^{U \times V}$ :
$$
A_{s}(u, v) = \frac{1}{C} \sum_{c = 1}^{C} \nabla_{s}(u, v, c). \tag{11}
$$
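In the plain LSS case, Eq. (9) gives the same gradient $p_d$ for every channel, so the channel-averaged map of Eqs. (10)-(11) reduces to summing $p_d$ over depth bins whose projected voxel lies in $\Omega_s$ . A toy NumPy sketch of that simplified setting, with `project` standing in for $R_P$ (shapes are assumptions):

```python
import numpy as np

def attention_map(p, project, occ_gt, s, depths):
    """Eqs. (10)-(11) in the analytic LSS case: the class-s attention
    map is the sum of p_d over depth bins whose projected voxel carries
    ground-truth label s (i.e. lies in Omega_s)."""
    U, V, _ = p.shape
    A = np.zeros((U, V))
    for u in range(U):
        for v in range(V):
            for k, d in enumerate(depths):
                h, w, z = project(u, v, d)
                if (0 <= h < occ_gt.shape[0] and 0 <= w < occ_gt.shape[1]
                        and 0 <= z < occ_gt.shape[2]
                        and occ_gt[h, w, z] == s):
                    A[u, v] += p[u, v, k]
    return A
```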
Finally, we enforce per-pixel constraints using 2D ground-truth labels with a binary cross-entropy loss:
$$
\begin{array}{l} L_{bce}^{s} = -\frac{1}{U \cdot V} \sum_{u, v} \left[ Y_{s}(u, v) \log A_{s}(u, v) \right. \tag{12} \\ \left. + \left(1 - Y_{s}(u, v)\right) \log \left(1 - A_{s}(u, v)\right) \right], \end{array}
$$
where $Y_{s}(u,v) \in \{0,1\}$ is the 2D ground-truth label for semantic class $s$ at pixel $(u,v)$ .
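Eq. (12) is a standard per-pixel BCE; a minimal NumPy version (the clipping epsilon is an added numerical-stability assumption, not part of the paper's formulation):

```python
import numpy as np

def causal_bce(A_s, Y_s, eps=1e-7):
    """Eq. (12): per-pixel binary cross-entropy between the class-s
    attention map A_s and the 2D ground-truth mask Y_s, averaged over
    all U * V pixels."""
    A = np.clip(A_s, eps, 1.0 - eps)  # guard against log(0)
    return -np.mean(Y_s * np.log(A) + (1 - Y_s) * np.log(1 - A))
```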
The gradient computation can be performed with automatic differentiation tools such as torch.autograd, requiring $S$ backward passes to iterate over the $S$ semantic classes. This incurs a computational overhead that scales linearly with the number of classes $S$ . To mitigate this overhead, we reformulate the loss computation in terms of the expectation of an unbiased estimator [11]. We define the expected BCE loss across all semantic classes $s \in \{1, \dots, S\}$ as:
$$
\mathbb{E}\left[ L_{bce}^{s} \right] = \frac{1}{S} \sum_{s = 1}^{S} L_{bce}^{s}. \tag{13}
$$
Based on this relationship, we uniformly sample a single semantic class $s$ during training:
$$
L_{causal} = L_{bce}^{s}, \quad s \sim \operatorname{Uniform}(1, S). \tag{14}
$$
This sampling preserves the unbiased nature of the gradient and loss estimates and reduces the computational cost to $\frac{1}{S}$ of the full loss. $L_{\text{causal}}$ focuses on enhancing the geometric transformation and serves as an auxiliary term complementing the occupancy loss (chosen per baseline, e.g., cross-entropy in BEVDet [13]).
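The single-class sampling of Eq. (14) can be sketched as below; repeated draws average to the expectation in Eq. (13). This toy uses precomputed per-class losses, an illustrative assumption:

```python
import numpy as np

def sampled_causal_loss(per_class_losses, rng):
    """Eq. (14): draw one class s ~ Uniform(1, S) and return its BCE
    loss; an unbiased estimator of the mean in Eq. (13) at 1/S cost."""
    s = rng.integers(len(per_class_losses))
    return per_class_losses[s]

rng = np.random.default_rng(0)
losses = [1.0, 2.0, 3.0]  # toy per-class BCE values, mean = 2.0
estimate = np.mean([sampled_causal_loss(losses, rng) for _ in range(20000)])
```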
# 3.4. Semantic Causality-Aware Transformation
We propose semantic causality-aware 2D-to-3D transformation to enhance 2D-to-3D lifting, as shown in Fig. 4. Eq. (14) constrains the geometry $\mathbf{G}$ using 2D and 3D semantics. This overcomes the rigid, per-location hard alignment of geometric probabilities (e.g., using LiDAR depth supervision) in prior methods [7, 13, 22, 23, 27]. It enables advanced lifting designs, addressing errors from camera pose and other distortions.
# 3.4.1. Channel-Grouped Lifting
Vanilla LSS applies uniform weights to all 2D feature channels. We argue this is suboptimal, as 2D and 3D features have distinct locality biases. For instance, a 2D "car" edge may absorb "tree" semantics via convolution, yet in 3D these objects are far apart. Weighting both semantics uniformly causes ambiguity. Since different channels typically encode distinct semantics, we group the feature channels and learn a unique weight for each group:
$$
\mathbf{f}_{L, g}\left(R_{P}(u, v, d)\right) = \omega_{g, d} \cdot \mathbf{f}_{i, g}(u, v, d), \quad g \in \{1, \dots, N_{g}\}, \tag{15}
$$
where $\mathbf{f}_{L,g}\in \mathbb{R}^{C / N_g}$ and $\mathbf{f}_{i,g}\in \mathbb{R}^{C / N_g}$ are the 3D and 2D features for group $g$ , and $\omega_{g,d}$ is the learned weight for group $g$ , replacing $p_d$ , which uniformly lifts all channels. $N_g$ is the number of groups. This preserves semantic distinction, ensuring channel-specific causal alignment.



(a) Channel-Grouped Lifting





(b) Coordinate Mapping with Learnable Camera Offsets

(c) Post-Processing with Normalized Convolution

Figure 4. The Detailed Structure of the Proposed Modules in Semantic Causality-Aware 2D-to-3D Transformation. There are three key components: (a) Channel-Grouped Lifting, (b) Learnable Camera Offsets, and (c) Normalized Convolution, enabling accurate and robust 2D-to-3D transformation.
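Channel-grouped lifting (Eq. (15)) for a single pixel at a single depth bin can be sketched as follows (the contiguous group layout is an assumption):

```python
import numpy as np

def grouped_lift(f_i, w):
    """Eq. (15) for one pixel at one depth bin: split f_i (C,) into
    N_g contiguous channel groups and scale group g by its own learned
    weight w[g] instead of one shared p_d."""
    C, Ng = f_i.shape[0], w.shape[0]
    assert C % Ng == 0, "C must be divisible by the number of groups"
    return (w[:, None] * f_i.reshape(Ng, C // Ng)).reshape(C)
```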
# 3.4.2. Learnable Camera Offsets
To address camera parameter errors, especially pose inaccuracies, we introduce learnable offsets into the camera parameters. First, we ensure the lifting process is coordinate-differentiable, which is crucial for the offsets to receive gradients and adapt during training. The transformation of 2D image coordinates $(u,v)$ and depth $d$ to 3D voxel coordinates can be represented as a matrix multiplication:
$$
[h, w, z]^{T} = P \cdot [u \cdot d, v \cdot d, d, 1]^{T}, \tag{16}
$$
where $P \in \mathbb{R}^{3 \times 4}$ is the camera projection matrix combining intrinsics and extrinsics. LSS typically rounds floating-point coordinates to integers, rendering them non-differentiable. Following ALOcc [7], we use soft filling to enable differentiability w.r.t. position. This method computes distances between a floating-point 3D coordinate and its eight surrounding integer coordinates; these distances serve as weights to distribute a 2D feature at $(u, v, d)$ across multiple 3D locations. Lifting can thus be rewritten as
$$
\begin{array}{l} \mathbf{f}_{L, g}\left(h', w', z'\right) = \omega_{g, d} \cdot \omega_{h', w', z'} \cdot \mathbf{f}_{i, g}(u, v, d), \tag{17} \\ \forall \left(h', w', z'\right) \in \text{neighbors}, \end{array}
$$
where $\omega_{h',w',z'}$ are trilinear interpolation weights.
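Soft filling distributes one feature over the eight integer neighbors of its floating-point coordinate with trilinear weights, as used in Eq. (17). A minimal sketch (the list-of-pairs return format is our assumption):

```python
import numpy as np

def soft_fill(coord):
    """Trilinear soft filling: return ((h', w', z'), weight) pairs for
    the eight integer neighbors of a floating-point 3D coordinate.
    The weights always sum to 1."""
    base = np.floor(coord).astype(int)
    frac = coord - base
    out = []
    for dh in (0, 1):
        for dw in (0, 1):
            for dz in (0, 1):
                # each axis contributes frac (far corner) or 1 - frac
                w = ((frac[0] if dh else 1 - frac[0])
                     * (frac[1] if dw else 1 - frac[1])
                     * (frac[2] if dz else 1 - frac[2]))
                out.append(((base[0] + dh, base[1] + dw, base[2] + dz), w))
    return out
```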
Next, we propose learning two offsets. First, we directly predict an offset applied to the camera parameters:
$$
P := P + \Delta P, \quad \Delta P = F_{\text{offset1}}\left(\mathbf{f}_{i}, P\right), \tag{18}
$$
where $\Delta P$ is the predicted parameter offset, and $F_{offset1}$ denotes the network. Second, we estimate per-position offsets for each $(u,v,d)$ in the image coordinate system:
$$
(u, v, d) := (u + \Delta u, v + \Delta v, d + \Delta d), \tag{19}
$$

$$
\left(\Delta u, \Delta v, \Delta d\right) = F_{\text{offset2}}\left(\mathbf{f}_{i}(u, v, d)\right),
$$
where $F_{offset2}$ is another network. These offsets enable the model to adaptively compensate for camera parameter errors $e_P$ , improving geometric accuracy while preserving semantic causal locality under such errors.
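A tiny NumPy sketch of Eqs. (16), (18), and (19), with linear maps `W1` and `W2` standing in for the offset networks $F_{offset1}$ and $F_{offset2}$ (both stand-ins and all shapes are illustrative assumptions):

```python
import numpy as np

def lift_with_offsets(P, uvd, f_i, W1, W2):
    """Project a pixel with learnable offset corrections.
    W1 (12, C) predicts a flattened 3x4 camera-matrix offset (Eq. (18));
    W2 (3, C) predicts the per-position offset (du, dv, dd) (Eq. (19));
    the corrected values then enter the projection of Eq. (16)."""
    dP = (W1 @ f_i).reshape(3, 4)      # Delta P = F_offset1(f_i)
    du, dv, dd = W2 @ f_i              # (du, dv, dd) = F_offset2(f_i)
    u, v, d = uvd
    u, v, d = u + du, v + dv, d + dd   # Eq. (19)
    return (P + dP) @ np.array([u * d, v * d, d, 1.0])  # Eqs. (16), (18)
```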
# 3.4.3. Normalized Convolution
Prior work [28] notes that the direct mapping in LSS yields sparse 3D features. An intuitive remedy is to apply local feature-propagation operators (e.g., convolutions) under causal-loss supervision to preserve semantic causality. However, vanilla convolutions place no constraint on the gradients entering the causal-loss computation and produce poor results. We address this by normalizing the convolution weights so that the gradient-map values stay in [0, 1]. Since normalizing a standard convolution's weights is difficult, we follow MobileNet and ConvNeXt in adopting a depthwise (spatial) and pointwise (channel) decomposition. For the depthwise kernel $W_{\mathrm{spatial}} \in \mathbb{R}^{3 \times 3 \times 3 \times C}$ , we apply softmax across the spatial dimensions $(h, w, z)$ for each channel $c$ :
$$
W_{\text{spatial}}'[h, w, z, c] = \frac{\exp\left(W_{\text{spatial}}[h, w, z, c]\right)}{\sum_{h', w', z'} \exp\left(W_{\text{spatial}}\left[h', w', z', c\right]\right)}. \tag{20}
$$
For the pointwise kernel $W_{\mathrm{channel}} \in \mathbb{R}^{C \times C}$ , we apply softmax across output channels for each input channel:
$$
W_{\text{channel}}'\left[c_{\text{in}}, c_{\text{out}}\right] = \frac{\exp\left(W_{\text{channel}}\left[c_{\text{in}}, c_{\text{out}}\right]\right)}{\sum_{c_{\text{out}}'} \exp\left(W_{\text{channel}}\left[c_{\text{in}}, c_{\text{out}}'\right]\right)}. \tag{21}
$$
Specifically, we use transposed convolution for the depthwise convolutions: it diffuses features from non-zero to zero positions, while zero positions do not affect others. We prove in the supplement that the derived gradient map remains within [0, 1] even with our semantic causality-aware 2D-to-3D transformation.
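The two softmax normalizations of Eqs. (20)-(21) can be sketched as follows (the NumPy array layout and the max-subtraction for numerical stability are our assumptions):

```python
import numpy as np

def normalize_kernels(W_spatial, W_channel):
    """Eq. (20): softmax the depthwise kernel (3, 3, 3, C) over its
    spatial support per channel.  Eq. (21): softmax the pointwise
    kernel (C_in, C_out) over output channels per input channel.
    Both keep each source's propagation weights in [0, 1], summing
    to 1."""
    es = np.exp(W_spatial - W_spatial.max(axis=(0, 1, 2), keepdims=True))
    Ws = es / es.sum(axis=(0, 1, 2), keepdims=True)
    ec = np.exp(W_channel - W_channel.max(axis=1, keepdims=True))
    Wc = ec / ec.sum(axis=1, keepdims=True)
    return Ws, Wc
```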
# 3.5. Validation of Gradient Error Mitigation
Fig. 5 shows the occupancy loss curves during training for BEVDet, comparing performance with and without our method. The results show that BEVDet integrated with our approach (blue line) achieves a significantly faster and steeper loss reduction than the original BEVDet (red line). This empirical evidence validates our theoretical analysis of gradient error mitigation. As Theorem 1 formalizes, Depth-Based LSS methods are inherently limited by gradient deviations due to fixed 2D-to-3D mapping errors. In contrast, our method alleviates this by enabling a learnable mapping and incorporating a causal loss. This indicates that by mitigating gradient error through the semantic causality-aware 2D-to-3D transformation, our approach facilitates more efficient gradient-based learning, leading to faster convergence and a lower loss curve.



Figure 5. Occupancy Loss Curves with/without Our Method. Our method reduces training loss through semantic causal 2D-to-3D geometry transformation, as shown in the comparison before (red) and after (blue) its application.

<table><tr><td>Method</td><td>mIoU ↑</td><td>Drop</td><td>mIoUD ↑</td><td>Drop</td><td>IoU ↑</td><td>Drop</td></tr><tr><td>BEVDetOcc [12]</td><td>37.1</td><td rowspan="2">-32.3%</td><td>30.2</td><td rowspan="2">-49.0%</td><td>70.4</td><td rowspan="2">-8.5%</td></tr><tr><td>BEVDetOcc w/ Noise</td><td>25.1</td><td>15.4</td><td>64.4</td></tr><tr><td>BEVDetOcc+Ours</td><td>38.3</td><td rowspan="2">-7.3%</td><td>31.5</td><td rowspan="2">-10.8%</td><td>71.2</td><td rowspan="2">-1.4%</td></tr><tr><td>BEVDetOcc+Ours w/ Noise</td><td>35.5</td><td>28.1</td><td>70.2</td></tr><tr><td>ALOcc [7]</td><td>40.1</td><td rowspan="2">-21.9%</td><td>34.3</td><td rowspan="2">-28.6%</td><td>70.2</td><td rowspan="2">-8.4%</td></tr><tr><td>ALOcc w/ Noise</td><td>31.3</td><td>24.5</td><td>64.3</td></tr><tr><td>ALOcc+Ours</td><td>40.9</td><td rowspan="2">-3.3%</td><td>35.5</td><td rowspan="2">-4.8%</td><td>70.7</td><td rowspan="2">-1.0%</td></tr><tr><td>ALOcc+Ours w/ Noise</td><td>39.6</td><td>33.8</td><td>70.0</td></tr></table>

Table 2. Performance Comparison on the Occ3D Dataset with Gaussian Noise Added to Camera Parameters. The "Drop" (%) columns show degradation, with our methods (BEVDetOcc+Ours, ALOcc+Ours) achieving much smaller drops (e.g., -7.3% mIoU vs. -32.3% for BEVDetOcc).
# 4. Experiment
# 4.1. Experimental Setup
Dataset. In this study, we leverage the Occ3D-nuScenes dataset [5, 39], a comprehensive dataset with diverse scenes for autonomous driving research. It encompasses 700 scenes for training, 150 for validation, and 150 for testing. Each scene integrates a 32-beam LiDAR point cloud alongside 6 RGB images captured from multiple perspectives encircling the ego vehicle. Occ3D [39] introduces voxel-based annotations, covering a spatial extent of $-40\mathrm{m}$ to $40\mathrm{m}$ along the X and Y axes, and $-1\mathrm{m}$ to $5.4\mathrm{m}$ along the Z axis, with a consistent voxel size of $0.4\mathrm{m}$ across all dimensions. Occ3D delineates 18 semantic categories, comprising 17 distinct object classes plus an empty class to signify unoccupied regions. Following [7, 27, 35, 39], we evaluate occupancy prediction performance with mIoU across 17 semantic object categories, $\mathrm{mIoU}_D$ for 8 dynamic categories, and occupied/unoccupied IoU for scene geometry.







(a) BEVDet



(b) BEVDet + Ours

Figure 6. Visualization of 2D-to-3D Semantic Causal Consistency Using LayerCAM [18]. We enhance BEVDet with our method for comparison. Attention maps are computed for critical traffic classes: "traffic cone", "pedestrian", and "car". Areas of greatest difference are marked with boxes. Each class-specific localization highlights our method's precise focus over vanilla BEVDet, showing improved semantic alignment.
Implementation Details. We integrate our approach into BEVDet [12] and ALOcc [7] for performance evaluation. The model parameters and the image and BEV augmentation strategies are retained as in the originals. For ALOcc, we remove its ground-truth depth denoising module to adapt it to our method. We use multiple convolutional layers to predict the two camera parameter offsets. The loss weight of $L_{\text{causal}}$ is set to 0.02 in the main experiments. We optimize using AdamW [33] with a learning rate of $2 \times 10^{-4}$ and a global batch size of 16 for 24 epochs. Experiments use single-frame surrounding images, emphasizing improvements from the enhanced 2D-to-3D transformation.
# 4.2. Evaluation of Camera Perturbation Robustness
To assess our method's robustness under extreme noise, we add Gaussian noise (0.1 variance) to the camera parameters during both training and testing, as shown in Tab. 2. Compared to vanilla BEVDetOcc and ALOcc, our methods show smaller performance drops: BEVDetOcc+Ours has an mIoU drop of $-7.3\%$ vs. $-32.3\%$ for BEVDetOcc, and ALOcc+Ours drops $-3.3\%$ vs. $-21.9\%$ for ALOcc, demonstrating enhanced resilience to noisy parameters. This effectively counters motion-induced errors, benefiting self-driving tasks.
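A sketch of the perturbation protocol as we read it here: i.i.d. zero-mean Gaussian noise of variance 0.1 added to each entry of the projection matrix. The exact noising scheme (per-entry, on the full matrix) is our assumption:

```python
import numpy as np

def perturb_camera(P, variance=0.1, rng=None):
    """Add zero-mean Gaussian noise of the given variance to every
    entry of the 3x4 camera projection matrix."""
    rng = rng if rng is not None else np.random.default_rng()
    return P + rng.normal(0.0, np.sqrt(variance), size=P.shape)
```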
# 4.3. Semantic Causality Visualization
As shown in Fig. 6, we visualize 3D-to-2D semantic causal consistency using LayerCAM [18]. We backpropagate the final 3D semantic occupancy predictions of distinct classes to the 2D feature maps fed into SCAT. Notably, we are the first to apply LayerCAM for cross-dimensional analysis. Fig. 6 (b) shows that our method precisely focuses on class-specific locations, confirming LayerCAM's cross-dimensional effectiveness. Our approach surpasses vanilla BEVDet (Fig. 6 (a)) in targeting class-associated objects, demonstrating improved semantic localization.

<table><tr><td>Method</td><td>Backbone</td><td>Input Size</td><td>mIoU</td><td>mIoUD</td><td>IoU</td></tr><tr><td>MonoScene [6]</td><td>ResNet-101</td><td>928 × 1600</td><td>6.1</td><td>5.4</td><td>-</td></tr><tr><td>CTF-Occ [39]</td><td>ResNet-101</td><td>928 × 1600</td><td>28.5</td><td>27.4</td><td>-</td></tr><tr><td>TPVFormer [14]</td><td>ResNet-101</td><td>928 × 1600</td><td>27.8</td><td>27.2</td><td>-</td></tr><tr><td>COTR [35]</td><td>ResNet-50</td><td>256 × 704</td><td>39.1</td><td>33.8</td><td>69.6</td></tr><tr><td>ProtoOcc [19]</td><td>ResNet-50</td><td>256 × 704</td><td>39.6</td><td>34.3</td><td>-</td></tr><tr><td>LightOcc-S [50]</td><td>ResNet-50</td><td>256 × 704</td><td>37.9</td><td>32.4</td><td>-</td></tr><tr><td>DHD-S [45]</td><td>ResNet-50</td><td>256 × 704</td><td>36.5</td><td>30.7</td><td>-</td></tr><tr><td>FlashOCC [48]</td><td>ResNet-50</td><td>256 × 704</td><td>32.0</td><td>24.7</td><td>65.3</td></tr><tr><td>FB-Occ [27]</td><td>ResNet-50</td><td>256 × 704</td><td>35.7</td><td>30.9</td><td>66.5</td></tr><tr><td>BEVDetOcc [12]</td><td>ResNet-50</td><td>256 × 704</td><td>37.1</td><td>30.2</td><td>70.4</td></tr><tr><td>BEVDetOcc+Ours</td><td>ResNet-50</td><td>256 × 704</td><td>38.3 ↑1.2</td><td>31.5 ↑1.3</td><td>71.2 ↑1.2</td></tr><tr><td>ALOcc [7]</td><td>ResNet-50</td><td>256 × 704</td><td>40.1</td><td>34.3</td><td>70.2</td></tr><tr><td>ALOcc+Ours</td><td>ResNet-50</td><td>256 × 704</td><td>40.9 ↑0.8</td><td>35.5 ↑1.1</td><td>70.7 ↑0.5</td></tr></table>
# 4.4. Benchmarking with Previous Methods
As shown in Tab. 3, we compare our method with leading 3D semantic occupancy prediction approaches on Occ3D [39]. Specifically, compared to baseline models BEVDet and ALOcc, our method achieves significant improvements in mIoU, mIoU $_{\mathrm{D}}$ , and IoU. For instance, the BEVDetOcc+Ours variant achieves an mIoU of 38.3, surpassing BEVDetOcc [12] by 1.2, while improving mIoU $_{\mathrm{D}}$ by 1.3 and IoU by 1.2. Similarly, ALOcc+Ours shows gains of 0.8 in mIoU, 1.1 in mIoU $_{\mathrm{D}}$ , and 0.5 in IoU over ALOcc [7]. These results validate the superiority of our semantic causality-aware 2D-to-3D transformation.
# 4.5. Ablation Study
Effect of Causal Loss. We first investigate the effectiveness of the proposed Causal Loss in enhancing occupancy prediction performance, as demonstrated in Tab. 4. Specifically, using BEVDetOcc as the baseline (Exp. 0), we conduct two ablation studies: one removing the depth supervision loss (Exp. 1), and another incorporating the proposed Causal Loss (Exps. 2, 3). The results reveal that removing the depth supervision loss leads to a marginal decrease in performance, whereas adding the proposed Causal Loss yields a significant improvement. This suggests that the Causal Loss facilitates superior 2D-to-3D transformation and ultimately enhances the precision of 3D semantic occupancy prediction. Comparing Exp. 3 to Exp. 2, the Unbiased Estimator simplifies the Causal Loss computation, reducing training overhead.

Table 3. Comparison of 3D Semantic Occupancy Prediction Using Single Frame on the Occ3D Dataset, Evaluated with mIoU, mIoUD, and IoU Metrics. Performance gains are indicated by red arrows $\uparrow$ . Our proposed approach (+Ours) consistently demonstrates superior enhancement over existing methods.

<table><tr><td>Exp.</td><td>Method</td><td>mIoU</td><td>Diff.</td><td>mIoUD</td><td>IoU</td><td>Latency</td></tr><tr><td>0</td><td>Baseline (BEVDetOcc) [12]</td><td>37.1</td><td>-</td><td>30.2</td><td>70.4</td><td>416/125</td></tr><tr><td>1</td><td>w/o Depth Sup</td><td>36.8</td><td>-0.3</td><td>29.6</td><td>70.3</td><td>414/125</td></tr><tr><td>2</td><td>+ Causal Loss</td><td>37.6</td><td>+0.8</td><td>31.0</td><td>70.1</td><td>450/125</td></tr><tr><td>3</td><td>+ Unbiased Estimator</td><td>37.5</td><td>-0.1</td><td>30.7</td><td>70.5</td><td>417/125</td></tr><tr><td>4</td><td>w/o Post Conv</td><td>37.3</td><td>-0.2</td><td>30.7</td><td>70.2</td><td>379/122</td></tr><tr><td>5</td><td>+ Channel-Grouped Lifting</td><td>37.6</td><td>+0.3</td><td>30.7</td><td>70.7</td><td>419/128</td></tr><tr><td>6</td><td>+ Soft Filling</td><td>37.6</td><td>-</td><td>30.6</td><td>70.7</td><td>434/149</td></tr><tr><td>7</td><td>+ Learnable Camera Offset</td><td>37.9</td><td>+0.3</td><td>31.1</td><td>71.0</td><td>446/150</td></tr><tr><td>8</td><td>+ Normalized Convolution</td><td>38.3</td><td>+0.4</td><td>31.5</td><td>71.2</td><td>466/159</td></tr></table>

Table 4. Ablation Study of 3D Semantic Occupancy Prediction on Occ3D. We comprehensively evaluate the impact of each individual strategy (bolded rows) proposed in our paper, with BEVDetOcc as the baseline. The final column reports single-frame training/inference latency (ms) on an RTX 4090 GPU.
Effect of Each Module. In Tab. 4 (Exps. 5-8), we systematically validate the effectiveness of the three proposed modules. We first remove the two post-lifting convolutional layers from the original BEVDet (Exp. 4), as they serve a similar role to our proposed modules in refining volume features. Subsequently, we incrementally integrate the three proposed modules (Channel-Grouped Lifting, Learnable Camera Offsets, and Normalized Convolution) into the baseline model. To ensure gradient flow for the Learnable Camera Offsets, we introduce the Soft Filling strategy from ALOcc [7] in Exp. 6, enabling effective training of the camera parameter offsets in Exp. 7. The results show progressive performance improvements with each added component (Exps. 5, 7, 8), confirming the efficacy of the proposed SCAT method. Additionally, the proposed modules incur acceptable computational overhead.
Please refer to the supplementary material for more comprehensive experimental results.
# 5. Conclusion
In this paper, we introduced a novel approach that leverages causal principles to address existing methods' neglect of reliability and interpretability. By exploring the causal foundations of 3D semantic occupancy prediction, we propose a causal loss that enhances semantic causal consistency. In addition, we develop the SCAT module with three main components: Channel-Grouped Lifting, Learnable Camera Offsets, and Normalized Convolution. This approach effectively mitigates transformation inaccuracies arising from uniform mapping weights, camera perturbations, and sparse mappings. Experiments demonstrate that our approach achieves significant improvements in accuracy, robustness to camera perturbations, and semantic causal consistency in 2D-to-3D transformations.
# References
[1] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyril Stachniss, and Jurgen Gall. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9297-9307, 2019. 3
[2] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Langocc: Self-supervised open vocabulary occupancy estimation via volume rendering. arXiv preprint arXiv:2407.17310, 2024. 3
[3] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Occflownet: Towards self-supervised occupancy estimation via differentiable rendering and occupancy flow. arXiv preprint arXiv:2402.12792, 2024. 3
[4] Simon Boeder, Fabian Gigengack, and Benjamin Risse. Gaussianflowocc: Sparse and weakly supervised occupancy estimation using gaussian splatting and temporal flow. arXiv preprint arXiv:2502.17288, 2025. 3
[5] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11621-11631, 2020. 7
[6] Anh-Quan Cao and Raoul de Charette. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3991–4001, 2022. 3, 8
[7] Dubing Chen, Jin Fang, Wencheng Han, Xinjing Cheng, Junbo Yin, Chengzhong Xu, Fahad Shahbaz Khan, and Jianbing Shen. Alocc: adaptive lifting-based 3d semantic occupancy and cost volume-based flow prediction. arXiv preprint arXiv:2411.07725, 2024. 1, 2, 3, 4, 5, 6, 7, 8
[8] Dubing Chen, Wencheng Han, Jin Fang, and Jianbing Shen. Adaocc: Adaptive forward view transformation and flow modeling for 3d occupancy and flow prediction. arXiv preprint arXiv:2407.01436, 2024. 3
[9] Dubing Chen, Huan Zheng, Jin Fang, Xingping Dong, Xianfei Li, Wenlong Liao, Tao He, Pai Peng, and Jianbing Shen. Rethinking temporal fusion with a unified gradient descent view for 3d semantic occupancy prediction. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 1505-1515, 2025. 1, 3
[10] Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, and Hongsheng Li. 3d sketch-aware semantic scene completion via semi-supervised structure prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4193-4202, 2020. 2
[11] Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with jacobian regularization. arXiv preprint arXiv:1908.02729, 2019. 5
[12] Junjie Huang and Guan Huang. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054, 2022. 7, 8
[13] Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, and Dalong Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790, 2021. 1, 3, 4, 5
[14] Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, and Jiwen Lu. Tri-perspective view for vision-based 3d semantic occupancy prediction. arXiv preprint arXiv:2302.07817, 2023. 1, 2, 3, 8
[15] Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, and Jiwen Lu. Selfocc: Self-supervised vision-based 3d occupancy prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19946-19956, 2024.
[16] Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, and Jiwen Lu. Gaussianformer: Scene as gaussians for vision-based 3d semantic occupancy prediction. In European Conference on Computer Vision, pages 376-393. Springer, 2024. 3
[17] Haoyi Jiang, Liu Liu, Tianheng Cheng, Xinjie Wang, Tianwei Lin, Zhizhong Su, Wenyu Liu, and Xinggang Wang. Gausstr: Foundation model-aligned gaussian transformer for self-supervised 3d spatial understanding. arXiv preprint arXiv:2412.13193, 2024. 3
[18] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei. Layercam: Exploring hierarchical class activation maps for localization. IEEE Transactions on Image Processing, 30:5875-5888, 2021. 2, 7
[19] Jungho Kim, Changwon Kang, Dongyoung Lee, Sehwan Choi, and Jun Won Choi. Protoocc: Accurate, efficient 3d occupancy prediction using dual branch encoder-prototype query decoder. arXiv preprint arXiv:2412.08774, 2024. 8
[20] Jie Li, Yu Liu, Dong Gong, Qinfeng Shi, Xia Yuan, Chunxia Zhao, and Ian Reid. Rgbd based dimensional decomposition residual network for 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7693-7702, 2019. 2
[21] Jinke Li, Xiao He, Chonghua Zhou, Xiaoqiang Cheng, Yang Wen, and Dan Zhang. Viewformer: Exploring spatiotemporal modeling for multi-view 3d occupancy perception via view-guided transformers. In Computer Vision-ECCV 2024: 18th European Conference, 2024. 3
|
| 370 |
+
[22] Yinhao Li, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, and Zeming Li. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092, 2022. 3, 5
|
| 371 |
+
[23] Yinhao Li, Han Bao, Zheng Ge, Jinrong Yang, Jianjian Sun, and Zeming Li. Bevstereo: Enhancing depth estimation in multi-view 3d object detection with temporal stereo. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1486-1494, 2023. 3, 5
|
| 372 |
+
[24] Yiming Li, Zhiding Yu, Christopher Choy, Chaowei Xiao, Jose M Alvarez, Sanja Fidler, Chen Feng, and Anima Anandkumar. Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9087-9098, 2023. 2, 3
|
| 373 |
+
[25] Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen, Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, et al. Fast-bev: A fast and strong bird's
|
| 374 |
+
|
| 375 |
+
eye view perception baseline. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 3
|
| 376 |
+
[26] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. arXiv preprint arXiv:2203.17270, 2022. 3
|
| 377 |
+
[27] Zhiqi Li, Zhiding Yu, David Austin, Mingsheng Fang, Shiyi Lan, Jan Kautz, and Jose M Alvarez. Fb-occ: 3d occupancy prediction based on forward-backward view transformation. arXiv preprint arXiv:2307.01492, 2023. 1, 2, 3, 5, 7, 8
|
| 378 |
+
[28] Zhiqi Li, Zhiding Yu, Wenhai Wang, Anima Anandkumar, Tong Lu, and Jose M Alvarez. Fb-bev: Bev representation from forward-backward view transformations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6919-6928, 2023. 2, 6
|
| 379 |
+
[29] Zhimin Liao and Ping Wei. Cascadeflow: 3d occupancy and flow prediction with cascaded sparsity sampling refinement framework. CVPR 2024 Autonomous Grand Challenge Track On Occupancy and Flow, 2024. 3
|
| 380 |
+
[30] Lizhe Liu, Bohua Wang, Hongwei Xie, Daqi Liu, Li Liu, Zhiqiang Tian, Kuiyuan Yang, and Bing Wang. Surroundsdf: Implicit 3d scene understanding based on signed distance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 1
|
| 381 |
+
[31] Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, and Xiaowei Li. See and think: Disentangling semantic scene completion. Advances in Neural Information Processing Systems, 31, 2018. 2
|
| 382 |
+
[32] Yili Liu, Linzhan Mou, Xuan Yu, Chenrui Han, Sitong Mao, Rong Xiong, and Yue Wang. Let occ flow: Self-supervised 3d occupancy flow prediction. arXiv preprint arXiv:2407.07587, 2024. 3
|
| 383 |
+
[33] Ilya Loshchilov, Frank Hutter, et al. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017. 7
|
| 384 |
+
[34] Yuhang Lu, Xinge Zhu, Tai Wang, and Yuexin Ma. Octreeocc: Efficient and multi-granularity occupancy prediction using octree queries. arXiv preprint arXiv:2312.03774, 2023. 3
|
| 385 |
+
[35] Qihang Ma, Xin Tan, Yanyun Qu, Lizhuang Ma, Zhizhong Zhang, and Yuan Xie. Cotr: Compact occupancy transformer for vision-based 3d occupancy prediction. arXiv preprint arXiv:2312.01919, 2023. 1, 2, 3, 7, 8
|
| 386 |
+
[36] Mingjie Pan, Jiaming Liu, Renrui Zhang, Peixiang Huang, Xiaoqi Li, Hongwei Xie, Bing Wang, Li Liu, and Shang-hang Zhang. Renderocc: Vision-centric 3d occupancy prediction with 2d rendering supervision. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 12404-12411. IEEE, 2024. 3
|
| 387 |
+
[37] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16, pages 194-210. Springer, 2020. 2, 3
|
| 388 |
+
|
| 389 |
+
[38] Yang Shi, Tianheng Cheng, Qian Zhang, Wenyu Liu, and Xinggang Wang. Occupancy as set of points. In Computer Vision-ECCV 2024: 18th European Conference, 2024. 3
|
| 390 |
+
[39] Xiaoyu Tian, Tao Jiang, Longfei Yun, Yucheng Mao, Huitong Yang, Yue Wang, Yilun Wang, and Hang Zhao. Occ3d: A large-scale 3d occupancy prediction benchmark for autonomous driving. Advances in Neural Information Processing Systems, 36, 2024. 1, 3, 7, 8
|
| 391 |
+
[40] Wenwen Tong, Chonghao Sima, Tai Wang, Li Chen, Silei Wu, Hanming Deng, Yi Gu, Lewei Lu, Ping Luo, Dahua Lin, et al. Scene as occupancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8406-8415, 2023. 1, 3
|
| 392 |
+
[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 3
|
| 393 |
+
[42] Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. Label words are anchors: An information flow perspective for understanding in-context learning. arXiv preprint arXiv:2305.14160, 2023. 2
|
| 394 |
+
[43] Xiaofeng Wang, Zheng Zhu, Wenbo Xu, Yunpeng Zhang, Yi Wei, Xu Chi, Yun Ye, Dalong Du, Jiwen Lu, and Xinggang Wang. Openoccupancy: A large scale benchmark for surrounding semantic occupancy perception. arXiv preprint arXiv:2303.03991, 2023. 1, 3
|
| 395 |
+
[44] Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, and Jiwen Lu. Surroundocc: Multi-camera 3d occupancy prediction for autonomous driving. arXiv preprint arXiv:2303.09551, 2023. 1, 3
|
| 396 |
+
[45] Yuan Wu, Zhiqiang Yan, Zhengxue Wang, Xiang Li, Le Hui, and Jian Yang. Deep height decoupling for precise vision-based 3d occupancy prediction. arXiv preprint arXiv:2409.07972, 2024.8
|
| 397 |
+
[46] Ziyang Yan, Wenzhen Dong, Yihua Shao, Yuhang Lu, Liu Haiyang, Jingwen Liu, Haozhe Wang, Zhe Wang, Yan Wang, Fabio Remondino, et al. Renderworld: World model with self-supervised 3d label. arXiv preprint arXiv:2409.11356, 2024.3
|
| 398 |
+
[47] Shichao Yang, Yulan Huang, and Sebastian Scherer. Semantic 3d occupancy mapping through efficient high order crfs. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 590-597. IEEE, 2017. 1
|
| 399 |
+
[48] Zichen Yu, Changyong Shu, Jiajun Deng, Kangjie Lu, Zong-dai Liu, Jiangyong Yu, Dawei Yang, Hui Li, and Yan Chen. Flashocc: Fast and memory-efficient occupancy prediction via channel-to-height plugin. arXiv preprint arXiv:2311.12058, 2023. 8
|
| 400 |
+
[49] Chubin Zhang, Juncheng Yan, Yi Wei, Jiaxin Li, Li Liu, Yansong Tang, Yueqi Duan, and Jiwen Lu. Occnerf: Self-supervised multi-camera occupancy prediction with neural radiance fields. arXiv preprint arXiv:2312.09243, 2023. 3
|
| 401 |
+
[50] Jinqing Zhang, Yanan Zhang, Qingjie Liu, and Yunhong Wang. Lightweight spatial embedding for vision-based 3d occupancy prediction. arXiv preprint arXiv:2412.05976, 2024.8
|
| 402 |
+
|
| 403 |
+
[51] Yunpeng Zhang, Zheng Zhu, and Dalong Du. Occformer: Dual-path transformer for vision-based 3d semantic occupancy prediction. arXiv preprint arXiv:2304.05316, 2023. 2, 3
|
| 404 |
+
[52] Jilai Zheng, Pin Tang, Zhongdao Wang, Guoqing Wang, Xiangxuan Ren, Bailan Feng, and Chao Ma. Veon: Vocabulary-enhanced occupancy prediction. In European Conference on Computer Vision, pages 92-108. Springer, 2024. 3
|
| 405 |
+
[53] Yucheng Zhou, Xiang Li, Qianning Wang, and Jianbing Shen. Visual in-context learning for large vision-language models. In Findings of the Association for Computational Linguistics ACL 2024, pages 15890-15902, 2024. 2
|
2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf343dda4be9569d47b4af2a77daaa7351fcaf16a2d0ca92d2db69dd67973a79
size 514360
2025/Semantic Causality-Aware Vision-Based 3D Occupancy Prediction/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_content_list.json
ADDED
@@ -0,0 +1,1627 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Semantic Discrepancy-aware Detector for Image Forgery Identification",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
138,
|
| 8 |
+
130,
|
| 9 |
+
856,
|
| 10 |
+
151
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Ziye Wang Minghang Yu Chunyan Xu* Zhen Cui Nanjing University of Science and Technology, Nanjing, China",
|
| 17 |
+
"bbox": [
|
| 18 |
+
250,
|
| 19 |
+
180,
|
| 20 |
+
748,
|
| 21 |
+
217
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "{wzynjust,mhyu,cyx}@njust.edu.cn,zhen.cui@bnu.edu.cn",
|
| 28 |
+
"bbox": [
|
| 29 |
+
256,
|
| 30 |
+
219,
|
| 31 |
+
733,
|
| 32 |
+
233
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
248,
|
| 42 |
+
268,
|
| 43 |
+
326,
|
| 44 |
+
282
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "With the rapid advancement of image generation techniques, robust forgery detection has become increasingly imperative to ensure the trustworthiness of digital media. Recent research indicates that the learned semantic concepts of pre-trained models are critical for identifying fake images. However, the misalignment between the forgery and semantic concept spaces hinders the model's forgery detection performance. To address this problem, we propose a novel Semantic Discrepancy-aware Detector (SDD) that leverages reconstruction learning to align the two spaces at a fine-grained visual level. By exploiting the conceptual knowledge embedded in the pre-trained vision-language model, we specifically design a semantic token sampling module to mitigate the space shifts caused by features irrelevant to both forgery traces and semantic concepts. A concept-level forgery discrepancy learning module, based on reconstruction, enhances the interaction between semantic concepts and forgery traces, effectively capturing discrepancies under the concepts' guidance. Finally, the low-level forgery feature enhancement integrates the learned concept-level forgery discrepancies to minimize redundant forgery information. Experiments conducted on two standard image forgery datasets demonstrate the efficacy of the proposed SDD, which achieves superior results compared to existing methods. The code is available at https://github.com/wzy111111/SSD.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
89,
|
| 53 |
+
301,
|
| 54 |
+
483,
|
| 55 |
+
694
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1. Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
91,
|
| 65 |
+
724,
|
| 66 |
+
220,
|
| 67 |
+
742
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "With the thriving of generative AI technologies, like Generative Adversarial Networks (GANs) [14] and diffusion models [2], the images generated by these models can easily create confusion by passing off the spurious as genuine. Therefore, it is crucial to develop a universal method for detecting fake images to mitigate the widespread dissemination of disinformation.",
|
| 74 |
+
"bbox": [
|
| 75 |
+
89,
|
| 76 |
+
751,
|
| 77 |
+
482,
|
| 78 |
+
854
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Pioneering research [26, 32] has shown that projecting",
|
| 85 |
+
"bbox": [
|
| 86 |
+
109,
|
| 87 |
+
858,
|
| 88 |
+
482,
|
| 89 |
+
875
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "image",
|
| 95 |
+
"img_path": "images/ebad5eb90f3ca9bb87c11fd2b66f803687de28196141364d2508083b6c397c5a.jpg",
|
| 96 |
+
"image_caption": [],
|
| 97 |
+
"image_footnote": [],
|
| 98 |
+
"bbox": [
|
| 99 |
+
532,
|
| 100 |
+
266,
|
| 101 |
+
888,
|
| 102 |
+
335
|
| 103 |
+
],
|
| 104 |
+
"page_idx": 0
|
| 105 |
+
},
|
| 106 |
+
{
|
| 107 |
+
"type": "image",
|
| 108 |
+
"img_path": "images/81954e406961bea5aa5a7ed67c1b28aa41cf3ad135b9f7908fb60b5698e8f433.jpg",
|
| 109 |
+
"image_caption": [
|
| 110 |
+
"Figure 1. The phenomenon of misalignment between semantic concept space and forgery space. Since $\\cos \\theta$ can reflect the similarity of image descriptions, we model the feature space in polar coordinates. As the semantic concept space in [32] is frozen, fake samples sharing similar concepts with real ones can be easily misclassified. With forgery-adaptive space like [26], the model can correctly distinguish between them based on re-learned forgery features. Nevertheless, due to the semantic concept bias introduced by coarse text prompts, the target samples may be projected into an inaccurate semantic concept dimension, causing them to drift away from the real source samples along the fake dimension."
|
| 111 |
+
],
|
| 112 |
+
"image_footnote": [],
|
| 113 |
+
"bbox": [
|
| 114 |
+
534,
|
| 115 |
+
337,
|
| 116 |
+
883,
|
| 117 |
+
542
|
| 118 |
+
],
|
| 119 |
+
"page_idx": 0
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"type": "text",
|
| 123 |
+
"text": "images in a joint embedding space of texts and images can effectively capture discrepancies between fake and real images. In contrast, methods [6, 13, 44, 50] overlooking the interplay between forgery traces and semantic concepts perform poorly when confronted with unseen generative models.",
|
| 124 |
+
"bbox": [
|
| 125 |
+
511,
|
| 126 |
+
718,
|
| 127 |
+
903,
|
| 128 |
+
808
|
| 129 |
+
],
|
| 130 |
+
"page_idx": 0
|
| 131 |
+
},
|
| 132 |
+
{
|
| 133 |
+
"type": "text",
|
| 134 |
+
"text": "To investigate the visual semantic concepts of pretrained models, we conduct a statistical analysis of the output features from CNNSpot [50] and CLIP-ViT [32] (See Appendix A for more details). Under different categories, CNNSpot exhibits a synchronized difference between real and fake features in its training space. However, when tran",
|
| 135 |
+
"bbox": [
|
| 136 |
+
511,
|
| 137 |
+
810,
|
| 138 |
+
906,
|
| 139 |
+
900
|
| 140 |
+
],
|
| 141 |
+
"page_idx": 0
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"type": "header",
|
| 145 |
+
"text": "CVF",
|
| 146 |
+
"bbox": [
|
| 147 |
+
106,
|
| 148 |
+
2,
|
| 149 |
+
181,
|
| 150 |
+
42
|
| 151 |
+
],
|
| 152 |
+
"page_idx": 0
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"type": "header",
|
| 156 |
+
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
|
| 157 |
+
"bbox": [
|
| 158 |
+
238,
|
| 159 |
+
0,
|
| 160 |
+
807,
|
| 161 |
+
46
|
| 162 |
+
],
|
| 163 |
+
"page_idx": 0
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"type": "page_footnote",
|
| 167 |
+
"text": "*Corresponding author",
|
| 168 |
+
"bbox": [
|
| 169 |
+
109,
|
| 170 |
+
887,
|
| 171 |
+
233,
|
| 172 |
+
898
|
| 173 |
+
],
|
| 174 |
+
"page_idx": 0
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"type": "page_number",
|
| 178 |
+
"text": "18388",
|
| 179 |
+
"bbox": [
|
| 180 |
+
480,
|
| 181 |
+
944,
|
| 182 |
+
517,
|
| 183 |
+
955
|
| 184 |
+
],
|
| 185 |
+
"page_idx": 0
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"type": "image",
|
| 189 |
+
"img_path": "images/3211c3b1e370bfa02685bd229d7de6b14fbe69599dd492db2cb33e55aba0b7cb.jpg",
|
| 190 |
+
"image_caption": [
|
| 191 |
+
"Figure 2. Different paradigms of image forgery identification with pre-trained vision-language model. (a) Fine-tune the frozen model only by fully connected (FC) layers [32]. (b) Prompt-based designs are tuned on text prompts and contrastive objectives [26]. (c) Our paradigm incorporating visual clues can capture fine-grained forge traces by reconstruction learning."
|
| 192 |
+
],
|
| 193 |
+
"image_footnote": [],
|
| 194 |
+
"bbox": [
|
| 195 |
+
109,
|
| 196 |
+
88,
|
| 197 |
+
465,
|
| 198 |
+
234
|
| 199 |
+
],
|
| 200 |
+
"page_idx": 1
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"type": "text",
|
| 204 |
+
"text": "sitioning to the CLIP's space, these differences become inconsistent. From this, we infer a nuanced relationship between semantic concepts and forgery traces: Different semantic concepts may guide the model to uncover distinct forgery traces.",
|
| 205 |
+
"bbox": [
|
| 206 |
+
89,
|
| 207 |
+
338,
|
| 208 |
+
483,
|
| 209 |
+
414
|
| 210 |
+
],
|
| 211 |
+
"page_idx": 1
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"type": "text",
|
| 215 |
+
"text": "Intuitively, relying on a frozen pre-trained vision-language model like UnivFD [32] is essential, but this tends to overlook fine-grained forgery details. Although Fat-Former [26] achieves a substantial enhancement in generalization by employing the forgery-aware adaptive transformer, we observe that soft prompts based on simple [CLASS] embeddings have an intrinsic limitation in their semantic description granularity (See Appendix B for more details). The constrained breadth of the conveyed concepts may lead the detection toward incorrect predictions. This limitation highlights a misalignment between the visual semantic concept space and the target forgery space, as illustrated in Fig. 1.",
|
| 216 |
+
"bbox": [
|
| 217 |
+
89,
|
| 218 |
+
415,
|
| 219 |
+
483,
|
| 220 |
+
611
|
| 221 |
+
],
|
| 222 |
+
"page_idx": 1
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"type": "text",
|
| 226 |
+
"text": "To address this, one empirical approach is to design more detailed text descriptions, but this method struggles to describe all visual forgery details due to the limited length of texts and brings more computational overhead. Drawing from the aforementioned findings and analysis, we make a first attempt to align the CLIP's visual semantic concept space with the forgery space by reconstructing semantic features.",
|
| 227 |
+
"bbox": [
|
| 228 |
+
89,
|
| 229 |
+
612,
|
| 230 |
+
483,
|
| 231 |
+
731
|
| 232 |
+
],
|
| 233 |
+
"page_idx": 1
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"type": "text",
|
| 237 |
+
"text": "We develop a vision-based paradigm, as outlined in Fig. 2. First, employing a pre-trained model only with nearest neighbor or linear probing (e.g. UnivFD [32], Fig. 2 (a)) is suboptimal for image forgery detection. Second, modifying the pre-trained model with task-specific prompts (e.g. FatFormer [26], Fig. 2 (b)) may favor models biased towards any particular semantic concept. These studies pave the way for exploring pre-trained space with rich semantic concepts. Inspired by image reconstruction [43, 53], our paradigm amplifies the concept-level forgery discrepancies of forgery images, which empowers the model to detect sus",
|
| 238 |
+
"bbox": [
|
| 239 |
+
89,
|
| 240 |
+
734,
|
| 241 |
+
483,
|
| 242 |
+
902
|
| 243 |
+
],
|
| 244 |
+
"page_idx": 1
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"type": "text",
|
| 248 |
+
"text": "picious forgery traces with the assistance of semantic concepts.",
|
| 249 |
+
"bbox": [
|
| 250 |
+
511,
|
| 251 |
+
90,
|
| 252 |
+
903,
|
| 253 |
+
119
|
| 254 |
+
],
|
| 255 |
+
"page_idx": 1
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"type": "text",
|
| 259 |
+
"text": "In this work, we present a novel Semantic Discrepancy-aware Detector (SDD) to accurately align the semantic concept space and the forgery space. Firstly, to mitigate interference from features unrelated to learned semantic concepts and forgery traces, we divide the real images into non-overlapping blocks and feed them to the frozen CLIP [36] to obtain diverse semantic patch tokens. These tokens acting as visual clues smoothly align the semantic concepts' space and forgery space. It is noteworthy that these tokens sampled by JS divergence are universally representative of the real semantic distribution. Then, the visual clues are fused into a concept-level forgery discrepancy module. Unlike FatFormer, LoRA layers are incorporated into the image encoder. The goal is to preserve the completeness and diversity of the learned semantic concepts of CLIP, while the forgery features sharing similar semantic concepts should be highlighted. During reconstruction, we only narrow the reconstruction gap for real samples to reinforce the reconstructed discrepancies of the synthetic images. Finally, we present low-level forgery feature enhancement to let the reconstruction difference map enhance the extraction of the highly generalizable forgery features while introducing minimal additional parameters. The main challenge is how to capture forgery features with strong semantic concept correlation and features with high forgery relevance but weak semantic concept ties to ensure the model converges to powerful features. Motivated by this, we apply convolutional modules and adaptive weight parameters to avoid over-relying on semantic concepts.",
|
| 260 |
+
"bbox": [
|
| 261 |
+
511,
|
| 262 |
+
121,
|
| 263 |
+
906,
|
| 264 |
+
559
|
| 265 |
+
],
|
| 266 |
+
"page_idx": 1
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"type": "text",
|
| 270 |
+
"text": "We thoroughly evaluate the generalization performance of our model on a UnivFD benchmark [32] and a SynRIS benchmark [5]. Surprisingly, our method achieves superior performance by a $ap_{m}$ of 98.51% and a $acc_{m}$ of 93.61% on the UnivFD benchmark [32] and an average AUROC of 95.1% on the SynRIS benchmark [5]. In summary, our contributions are as follows:",
"bbox": [
511,
559,
908,
662
],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We propose a robust model (SDD) for forgery detection, specifically designed to align the visual concept space and forgery space in terms of visual information.",
"- We sample semantic tokens to mitigate the space shifts and align the two spaces through reconstruction learning. Additionally, we strengthen low-level forgery features to enhance the model's robustness.",
"- Our method achieves superior performance on two benchmarks, demonstrating its superior capability in comparison to existing approaches."
],
"bbox": [
511,
665,
903,
816
],
"page_idx": 1
},
{
"type": "text",
"text": "2. Related Work",
"text_level": 1,
"bbox": [
511,
829,
653,
845
],
"page_idx": 1
},
{
"type": "text",
"text": "AI-generated Images Detection. Extensive efforts have been devoted to enhancing the performance of AI-generated image detection. Early works like [25, 44, 45] tend to mine",
"bbox": [
511,
854,
906,
902
],
"page_idx": 1
},
{
"type": "page_number",
"text": "18389",
"bbox": [
480,
944,
519,
955
],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/94dd20070c1e79972c6f8254b876b80beacef33b3db35c97c2297de23b51f24e.jpg",
"image_caption": [
"Figure 3. The architecture of SDD. First, Next, we sample semantic tokens from real images to learn features related to both concepts and forgery. the input images are mapped into a joint space of visual semantic concepts and forgery, which are transformed into learnable features $V_{H}$ . Then, we use transformer-based encoder and decoder to get reconstructed features $\\mathcal{R}_f$ . A reconstruction difference map $\\mathcal{D}_S$ is obtained and goes through the multi-scale convolutional network to refine forgery features. Finally, we concatenate the CLIP's CLS token with this output along the same dimension for classification. The whole system is trained by jointly minimizing the binary cross-entropy loss $L_{bce}$ , the reconstruction loss $L_{r}$ , and the triplet loss $L_{tri}$"
],
"image_footnote": [],
"bbox": [
94,
88,
906,
385
],
"page_idx": 2
},
{
"type": "text",
"text": "the common forgery traces between all real and fake images, such as noise patterns, texture statistics, and frequency signals. As an illustration, Liu et al. [24] designed a network that learns the consistent noise patterns in images for fake detection. Liu et al. [28] proposed to leverage the gram matrix to discover the global anomalous texture of fake images. An effective approach [13] demonstrated that frequency representation is an important factor in improving fake detection performance. However, these differences are rigorously specific to the monotonous features, which contribute to the issue of overfitting. Cutting-edge research [26, 32] shifted attention toward the semantic properties of images. Ojha et al. [32] showed that projecting images into the feature space of pre-trained vision-language model enables strong generalization ability. To build generalized forgery representations, Liu et al. [26] constructed forgery adaptive space by a forgery-aware adapter. The above research [5, 26, 32] has suggested that concept attributes are vital in the image forgery detection task. Assuming that diffusion-based models leave distinct forgery traces that are characteristic of specific concept distributions, we aim to extract robust forgery features guided by semantic concepts, rather than suppressing them. Therefore, even \"useless\" information can be useful by providing significant certainty about the content of the image.",
"bbox": [
91,
494,
483,
872
],
"page_idx": 2
},
{
"type": "text",
"text": "Reconstruction Learning. Reconstruction learning has",
"bbox": [
89,
873,
482,
888
],
"page_idx": 2
},
{
"type": "text",
"text": "great potential in unsupervised representation learning [16, 27]. Some works [5, 39] utilized reconstruction learning to reveal the nuances between real and fake images. For example, Wang et al. [51] found that reconstructing images by DDIM exposes an error between real images and their reconstructed replica. The new synthetic image detection method[5] used text-conditioned inversion maps to learn internal representations, which is conducive to predicting whether an image is fake. Ricker et al. [39] offered a simple detection approach by applying AE to measure the reconstruction error. Notably, these works are committed to reconstructing the distributions of both real and fake samples by leveraging generative models. Unlike previous works, we focus solely on reconstructing real images in the finetuned CLIP space in light of the authenticity and richness of semantic concepts. The distribution of real samples, learned from pre-trained vision-language model, helps to define an optimal boundary, thus alleviating overfitting.",
"bbox": [
511,
494,
908,
768
],
"page_idx": 2
},
{
"type": "text",
"text": "3. Methodology",
"text_level": 1,
"bbox": [
511,
782,
648,
800
],
"page_idx": 2
},
{
"type": "text",
"text": "Our goal is to align forgery and visual semantic concept spaces using reconstruction techniques for robust and generalizable synthetic image detection. To achieve this, we introduce a fine-grained model named Semantic Discrepancy-aware Detector (SDD). Building on prior works, we harness the generalization capability of vision-language mod",
"bbox": [
511,
810,
908,
902
],
"page_idx": 2
},
{
"type": "page_number",
"text": "18390",
"bbox": [
480,
944,
519,
955
],
"page_idx": 2
},
{
"type": "text",
"text": "els. Semantic concept space: The ideal joint embedding space of images and texts with four properties: semantic alignment, modality invariance, locality consistency, and structure preservation. Forgery space: The ideal space covers forgery traces. Notably, we derive semantic concept space via a vision language model pretrained solely on real images; thus we treat the two spaces as independent.",
"bbox": [
88,
90,
480,
196
],
"page_idx": 3
},
{
"type": "text",
"text": "First, the Semantic Tokens Sampling (STS) module utilizes Jensen-Shannon (JS) divergence to sample semantic patch tokens, which serve as a transitional bridge, facilitating model to establish the association between real and forgery images accurately. Next, the Concept-level forgery Discrepancy Learning (CFDL) module employs reconstruction learning to explore the forgery discrepancies within the visual semantic concept space, which focuses on identifying subtle variations between reconstructed forgery features. Finally, the reconstruction difference map is trained with Low-level forgery Feature Enhancement module, which aims to refine forgery features with more visual details.",
"bbox": [
89,
198,
482,
378
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1. Semantic Tokens Sampling",
"text_level": 1,
"bbox": [
89,
387,
334,
402
],
"page_idx": 3
},
{
"type": "text",
"text": "Initially, we considered directly aligning the visual semantic concept space and forgery space by leveraging fine-grained reconstruction learning to model real and fake semantic distributions. However, this strategy would treat the differences in features unrelated to semantics and forgery as crucial factors for identifying image's authenticity. To eliminate these redundant features, we sample real semantic image patch tokens as visual clues to bridge real and forged semantic domains. This module enables the model to focus on concept-related forgery traces and highlight the distinctions between real and fake images. In a tangible way, the image encoder of CLIP: ViT-L/14 is adapted to transform a real image $x_{r}$ into a set of features $f_{r}$ , without the image CLS token. We define the transformation as:",
"bbox": [
89,
409,
483,
619
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nf _ {r} = \\phi (x _ {r}), \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
243,
632,
482,
648
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\phi (\\cdot)$ is the CLIP:ViT-L/14's visual encoder, $x_{r}\\in$ $\\mathbb{I}_r^{H\\times W\\times 3}$ represents a real image characterized by a height of $H$ and a width of $W$ . Besides, $f_{r}\\in \\mathbb{R}^{N\\times D}$ , where $N$ is the number of tokens and $D$ denotes the dimension of each patch token.",
"bbox": [
89,
657,
482,
734
],
"page_idx": 3
},
{
"type": "text",
"text": "Since integrating all real patch tokens into the image reconstruction module is computationally intensive and memory-consuming, it is urgent to select a suitable subset of these tokens. From a distribution perspective, the Jensen-Shannon (JS) divergence, derived from the Kullback-Leibler divergence [48], is a symmetric and finite metric that can effectively measure the similarity between tokens by quantifying differences in their distributions.",
"bbox": [
89,
734,
482,
854
],
"page_idx": 3
},
{
"type": "text",
"text": "To calculate the JS divergence between two tokens, both are converted into computable probability distribution space using the softmax function. Let $f_{s} \\in \\mathbb{F}_{r}^{M \\times D}$ be the selected",
"bbox": [
89,
854,
483,
901
],
"page_idx": 3
},
{
"type": "text",
"text": "semantic patch tokens with the num of tokens $M = 1 / \\delta$ and dimension $D$ in terms of sampling rate $\\delta$ ( $0 \\leq \\delta \\leq 1$ , $\\delta$ is user-defined). Once the initial token $\\tilde{r}$ and $\\delta$ are determined, the JS divergence between $\\tilde{r}$ and other tokens $r$ falls within the range [0, 1]. Subsequently, the sampling interval is split into $M$ equal segments with one token selected from each segment. As a consequence, the semantic tokens sampling module can be formulated as:",
"bbox": [
511,
90,
903,
210
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} f _ {s} = \\mathcal {S} \\left(\\mathbb {R} ^ {N \\times D}, \\delta\\right) \\\\ = A _ {c} ^ {N _ {a} \\times M} \\times \\mathbb {R} ^ {N \\times D}, \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [
542,
214,
707,
253
],
"page_idx": 3
},
{
"type": "text",
"text": "s.t. JS(softmax $(\\tilde{r})$ ,softmax $(r)) = \\frac{i}{M}$ , if $a_{ij} = 1$ (2)",
"bbox": [
529,
256,
903,
290
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\sum_ {i = 0} ^ {N _ {a}} \\sum_ {j = 0} ^ {M} a _ {i j} = M; \\sum_ {j = 0} ^ {M} a _ {i j} = 1 \\text {o r} 0,\n$$\n",
"text_format": "latex",
"bbox": [
560,
287,
792,
330
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\ni = 1, \\dots , N _ {a}; j = 1, \\dots , M,\n$$\n",
"text_format": "latex",
"bbox": [
557,
333,
759,
349
],
"page_idx": 3
},
{
"type": "text",
"text": "where $S(\\cdot, \\cdot)$ represents the sampling process. $A_{c}^{N_{a} \\times M}$ is a constraint matrix of size $N_{a} \\times M$ whose element $a_{ij}$ is constrained to the binary pattern of $\\{0, 1\\}$ . Here JS $(\\cdot)$ refers to the Jensen-Shannon divergence, $N_{a}$ denotes the total number of real image patch tokens sampled from the training dataset of UnivFD and $M$ represents the required subset size. The softmax $(\\cdot)$ is the softmax function. The sampling tokens help the reconstruction module avoid becoming biased towards any particular forgery-unrelated distribution. Meanwhile, it avoids the semantic bias often introduced by text prompts, since the tokens are evenly distributed in a unified CLIP space.",
"bbox": [
511,
357,
906,
540
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2. Concept-level Forged Discrepancy Learning",
"text_level": 1,
"bbox": [
511,
546,
885,
563
],
"page_idx": 3
},
{
"type": "text",
"text": "A few words alone can hardly paint a picture. We argue that the fine-grained visual details can uncover more forgery traces concealed in the images. As such, we mix sampling tokens with extracted features and capitalize on reconstruction learning to compensate for the omission of forgery traces brought by coarse prompts. As previous work [26] has demonstrated that the pre-trained vision-language model necessitates fine-tuning to adapt to the forgery detection task. Therefore, we integrate LoRA [17] with the CLIP-ViT model to capture discriminative forgery features by making use of the bread semantic concepts. This method, denoted as LoRA-CLIP [54], is more streamlined and flexible. Given an input image $\\mathcal{I} \\in \\mathbb{I}^{H \\times W \\times 3}$ , we can get high-level visual features $V_{H}$ , as follows:",
"bbox": [
511,
568,
906,
780
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nV _ {H} = \\mathcal {F} _ {L o R A} (\\mathcal {I}). \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
647,
786,
903,
803
],
"page_idx": 3
},
{
"type": "text",
"text": "Here $\\mathcal{F}_{LoRA}$ refers to the CLIP image encoder fine-tuned by LoRA. The reconstruction module of CFDL encompasses two submodules, i.e., transformer-based encoder and decoder. Thanks to the transformer's capability of long-range relationship modeling, we capitalize on the multi-head attention (MHA) mechanism, the core mechanism of",
"bbox": [
511,
809,
906,
898
],
"page_idx": 3
},
{
"type": "page_number",
"text": "18391",
"bbox": [
480,
944,
517,
955
],
"page_idx": 3
},
{
"type": "text",
"text": "the transformer, to obtain a more discriminative distribution by utilizing contextual information, which is set as:",
"bbox": [
89,
90,
482,
121
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\left\\{ \\begin{array}{r l} \\operatorname {h e a d} _ {i} & = \\operatorname {A t t n} \\left(Q W _ {i} ^ {Q}, K W _ {i} ^ {K}, V W _ {i} ^ {V}\\right) \\\\ & = \\operatorname {S o f t m a x} \\left(\\frac {Q W _ {i} ^ {Q} \\left(K W _ {i} ^ {K}\\right) ^ {\\top}}{\\sqrt {d}}\\right) V W _ {i} ^ {V}, \\\\ \\operatorname {M H A} (Q, K, V) & = \\operatorname {C o n c a t} \\left(\\operatorname {h e a d} _ {1}, \\dots , \\operatorname {h e a d} _ {h}\\right) W ^ {O}, \\end{array} \\right. \\tag {4}\n$$\n",
"text_format": "latex",
"bbox": [
107,
131,
482,
226
],
"page_idx": 4
},
{
"type": "text",
"text": "where $Q$ (Query), $K$ (Key), $V$ (Value) refer to the input, $W_{i}^{Q}, W_{i}^{K}, W_{i}^{V}$ separately denote the corresponding weights of linear projection, $\\mathrm{Attn}(\\cdot)$ denotes the function of the scaled dot product, Softmax is the softmax function, $d$ refers to the dimension of input, Concat ( $\\cdot$ ) represents the concatenation used to stitch the discrete attention outputs of head $1 \\sim h$ together.",
"bbox": [
89,
227,
483,
332
],
"page_idx": 4
},
{
"type": "text",
"text": "To amplify the discrepancy between a fake image and its reconstructed counterpart, the sampled visual clues are employed for the initial processing by the encoder. The encoder's process can be formulated as follows:",
"bbox": [
89,
333,
483,
393
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nR _ {1} = \\operatorname {L N} \\left(\\mathrm {M H A} \\left(f _ {s}, V _ {H}, V _ {H}\\right)\\right), \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [
179,
406,
480,
422
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nR _ {2} = \\operatorname {L N} \\left(\\mathrm {M H A} \\left(R _ {1}, V _ {H}, V _ {H}\\right)\\right), \\tag {6}\n$$\n",
"text_format": "latex",
"bbox": [
181,
425,
480,
441
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\mathrm{LN}(\\cdot)$ denotes the Layer Normalization. Then, the encoder's outputs used as queries are injected into the decoder to get the final reconstructed features, which are similar to the encoder process and perform the following operation:",
"bbox": [
89,
453,
483,
527
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nR _ {3} = \\mathrm {L N} (\\mathrm {M H A} (R _ {2}, R _ {2}, R _ {2})), \\tag {7}\n$$\n",
"text_format": "latex",
"bbox": [
181,
541,
480,
556
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nR _ {e} = \\mathrm {L N} (\\mathrm {M H A} \\left(R _ {1}, R _ {3}, R _ {3}\\right)). \\tag {8}\n$$\n",
"text_format": "latex",
"bbox": [
184,
560,
480,
575
],
"page_idx": 4
},
{
"type": "text",
"text": "During the reconstruction process, we just calculate the reconstruction loss $\\mathcal{L}_r$ between the real input features and their reconstructed counterparts $\\mathcal{R}_e$ within a mini-batch as follows:",
"bbox": [
89,
588,
483,
646
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {r} = \\frac {1}{B} \\sum_ {i = 1} ^ {B} \\operatorname {M S E} \\left(R _ {e}, V _ {H}\\right), \\tag {9}\n$$\n",
"text_format": "latex",
"bbox": [
189,
645,
482,
685
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\mathrm{MSE}(\\cdot ,\\cdot)$ is mean squared error. Facilitating $\\mathcal{L}_r$ encourages preserving the completeness and richness of the visual semantic concept space and highlighting the concept-related forgery features. Given the reconstructed features $R_{f}$ and the original feature $f_{r}$ , the reconstruction difference map can be formally expressed as:",
"bbox": [
89,
691,
483,
784
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {D} _ {s} = \\left| R _ {f} - f _ {r} \\right|, \\tag {10}\n$$\n",
"text_format": "latex",
"bbox": [
227,
795,
482,
811
],
"page_idx": 4
},
{
"type": "text",
"text": "where $|\\cdot |$ denotes the absolute value function.",
"bbox": [
89,
823,
390,
838
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3. Low-level Forgery Feature Enhancement",
"text_level": 1,
"bbox": [
89,
848,
441,
864
],
"page_idx": 4
},
{
"type": "text",
"text": "Existing methods based on pre-trained vision-language models [26, 32] overlook the importance of concept",
"bbox": [
89,
869,
483,
901
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/de568ffeb00d073298b5f7dc1c0e06b896ff4b86f32ec8d1e4c74be043a2f038.jpg",
"image_caption": [
"Figure 4. The curve of exponential inverse. In the \"fast\" interval, the value drops sharply. In the \"low\" interval, the curve flattens out, showing a decay towards 0."
],
"image_footnote": [],
"bbox": [
584,
90,
836,
247
],
"page_idx": 4
},
{
"type": "text",
"text": "weakly-related features. We believe that a thorough alignment between the visual semantic concept space and the forgery space should include the exploration of concept weakly-related forgery features. To eliminate redundant forgery features, we come up with a novel feature enhancement that refines low-level forgery features. Empowered by the reconstruction difference map, our detector orchestrates the extraction of multi-scale features with exceptional robustness and markedly enhanced effectiveness. As shown in Fig. 3, the enhancer follows the typical architecture of a convolutional network. It involves the repeated application of convolutions, each followed by a batch normalization (BN) and a rectified linear unit (ReLU). For a given stage $n$ , $F(n)$ ( $N = 1,2,3$ ) corresponds to its output features. Then, We deconvolve the semantic difference map $D_{s}$ to the shape same as $F(N)$ and perform pixel-wise multiplication with $F(n)$ to get $F'(n)$ as:",
"bbox": [
511,
306,
906,
564
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nF ^ {\\prime} (n) = \\operatorname {d e c o n v} (F (n)) \\otimes D _ {s}, \\tag {11}\n$$\n",
"text_format": "latex",
"bbox": [
606,
575,
903,
594
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\otimes$ is the element-wise multiplication, deconv $(\\cdot)$ represents deconvolution operation and $\\mathcal{F}'(n)$ is the low-level feature aggregated with semantic information. To further enhance the reliability of the extracted features, we compute an adaptive weight coefficient $\\frac{1}{e_n}$ to indicate the importance of $D_{s}$ to $F(n)$ :",
"bbox": [
511,
606,
905,
696
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\frac {1}{e _ {n}} = \\frac {1}{e ^ {\\left| F ^ {\\prime} (n) - F (n) \\right|}}. \\tag {12}\n$$\n",
"text_format": "latex",
"bbox": [
638,
707,
903,
739
],
"page_idx": 4
},
{
"type": "text",
"text": "Here we explain the role of the exponential inverse through Fig. 4. As $x$ grows large, the curve of $e^x$ becomes flatter. Therefore, in the \"fast\" interval, forgery features with a significant divergence from the semantic difference map will be assigned smaller weights, which mobilizes the network to capture concept strongly-related features. However, in the \"low\" interval, features strongly associated with forgery can avoid being misguided by semantic concepts, which indicates that the order of importance is reversed. Next, we have the attended output features $F_{low}$",
"bbox": [
511,
750,
906,
901
],
"page_idx": 4
},
{
"type": "page_number",
"text": "18392",
"bbox": [
480,
944,
517,
955
],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/87d0f709d3c5beebf16fa5d4102d64bc12877c31ade452d4291d5c56b5d7b629.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td rowspan=\"2\">Ref</td><td colspan=\"6\">GAN</td><td rowspan=\"2\">Deep fakes</td><td colspan=\"2\">Low level</td><td colspan=\"2\">Perceptual loss</td><td rowspan=\"2\">Guided</td><td colspan=\"3\">LDM</td><td colspan=\"3\">Glide</td><td>Dalle</td><td>mAP</td><td></td></tr><tr><td>Pro-GAN</td><td>Cycle-GAN</td><td>Big-GAN</td><td>Style-GAN</td><td>Gau-GAN</td><td>Star-GAN</td><td>SITD</td><td>SAN</td><td>CRN</td><td>IMLE</td><td>200Steps</td><td>200w/cfg</td><td>100Steps</td><td>100w/ CFG</td><td>10027</td><td>5027</td><td>10</td><td></td><td></td></tr><tr><td>CNN-Spot</td><td>CVPR2020</td><td>100.0</td><td>93.47</td><td>84.50</td><td>99.54</td><td>89.49</td><td>98.15</td><td>89.02</td><td>73.75</td><td>59.47</td><td>98.24</td><td>98.40</td><td>73.72</td><td>70.62</td><td>71.00</td><td>70.54</td><td>80.65</td><td>84.91</td><td>82.07</td><td>70.59</td><td>83.58</td><td></td></tr><tr><td>PatchFor</td><td>ECCV2020</td><td>80.88</td><td>72.84</td><td>71.66</td><td>85.75</td><td>65.99</td><td>69.25</td><td>76.55</td><td>76.19</td><td>76.34</td><td>74.52</td><td>68.52</td><td>75.03</td><td>87.10</td><td>86.72</td><td>86.40</td><td>85.37</td><td>83.73</td><td>78.38</td><td>75.67</td><td>77.73</td><td></td></tr><tr><td>Co-occurrence</td><td>Elect.Imag.</td><td>99.74</td><td>80.95</td><td>50.61</td><td>98.63</td><td>53.11</td><td>67.99</td><td>59.14</td><td>68.98</td><td>60.42</td><td>73.06</td><td>87.21</td><td>70.20</td><td>91.21</td><td>89.02</td><td>92.39</td><td>89.32</td><td>88.35</td><td>82.79</td><td>80.96</td><td>78.11</td><td></td></tr><tr><td>Freq-spec</td><td>WIFS2019</td><td>55.39</td><td>100.0</td><td>75.08</td><td>55.11</td><td>66.08</td><td>100.0</td><td>45.18</td><td>47.46</td><td>57.12</td><td>53.61</td><td>50.98</td><td>57.72</td><td>77.72</td><td>77.25</td><td>76.47</td><td>68.58</td><td>64.58</td><td>61.92</td><td>67.77</td><td>66.21</td><td></td></tr><tr><td>Dire</td><td>ICCV2023</td><td>100.0<
/td><td>83.59</td><td>81.50</td><td>96.50</td><td>81.70</td><td>99.88</td><td>95.73</td><td>62.51</td><td>69.98</td><td>97.31</td><td>98.62</td><td>79.53</td><td>75.52</td><td>73.42</td><td>76.45</td><td>86.28</td><td>89.00</td><td>88.34</td><td>51.35</td><td>83.54</td><td></td></tr><tr><td>UnivFD</td><td>CVPR2023</td><td>100.0</td><td>99.46</td><td>99.59</td><td>97.24</td><td>99.98</td><td>99.60</td><td>82.45</td><td>61.32</td><td>79.02</td><td>96.72</td><td>99.00</td><td>87.77</td><td>99.14</td><td>92.15</td><td>99.17</td><td>94.74</td><td>95.34</td><td>94.57</td><td>97.15</td><td>93.38</td><td></td></tr><tr><td>NPR</td><td>CVPR2024</td><td>100.0</td><td>99.50</td><td>96.50</td><td>99.80</td><td>96.80</td><td>100.0</td><td>92.20</td><td>73.10</td><td>78.70</td><td>87.20</td><td>64.80</td><td>65.80</td><td>99.80</td><td>99.80</td><td>99.80</td><td>99.70</td><td>99.80</td><td>99.80</td><td>98.60</td><td>92.19</td><td></td></tr><tr><td>FatFormer</td><td>CVPR2024</td><td>100.0</td><td>100.0</td><td>99.98</td><td>99.75</td><td>100.0</td><td>100.0</td><td>97.99</td><td>97.94</td><td>81.23</td><td>99.84</td><td>99.93</td><td>91.92</td><td>99.83</td><td>99.22</td><td>99.89</td><td>99.27</td><td>99.50</td><td>99.33</td><td>99.84</td><td>98.18</td><td></td></tr><tr><td>Ours</td><td></td><td>100.0</td><td>99.77</td><td>99.93</td><td>99.48</td><td>99.98</td><td>99.97</td><td>97.23</td><td>97.91</td><td>93.10</td><td>99.79</td><td>99.96</td><td>92.06</td><td>99.88</td><td>98.95</td><td>99.92</td><td>98.06</td><td>98.29</td><td>97.73</td><td>99.81</td><td>98.51</td><td></td></tr></table>",
"bbox": [
94,
88,
906,
258
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/2ce8a5a892e90254564779c552ba0bbf2b349de34e09a0b5faf6cd58b3ff19c3.jpg",
"table_caption": [
"Table 1. Average precision comparisons with different methods on the UnivFD dataset. We replicate the results of CNNSpot, Patchfor, Co-occurrence, Freq-spec, and UnivFD from the paper[32]. In addition, we obtained the results for Dire, NPR and FatFormer using either the official pre-trained models or our re-implemented versions. Red and underline indicates the best and the second best result, respectively."
|
| 913 |
+
],
|
| 914 |
+
"table_footnote": [],
|
| 915 |
+
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td rowspan=\"2\">Ref</td><td colspan=\"6\">GAN</td><td rowspan=\"2\">Deep fakes</td><td colspan=\"2\">Low level</td><td colspan=\"2\">Perceptual loss</td><td rowspan=\"2\">Guided</td><td colspan=\"3\">LDM</td><td colspan=\"3\">Glide</td><td>Dalle</td><td>Avg-acc</td></tr><tr><td>Pro-GAN</td><td>Cycle-GAN</td><td>Big-GAN</td><td>Style-GAN</td><td>Gau-GAN</td><td>Star-GAN</td><td>SITD</td><td>SAN</td><td>CRN</td><td>IMLE</td><td>200 Steps</td><td>200 wcfg</td><td>100 Steps</td><td>100 27</td><td>50 27</td><td>100 10</td><td></td><td></td></tr><tr><td>CNN-Spot</td><td>CVPR2020</td><td>99.99</td><td>85.20</td><td>70.20</td><td>85.70</td><td>78.95</td><td>91.70</td><td>53.47</td><td>66.67</td><td>48.69</td><td>86.31</td><td>86.26</td><td>60.07</td><td>54.03</td><td>54.96</td><td>54.14</td><td>60.78</td><td>63.80</td><td>65.66</td><td>55.58</td><td>69.58</td></tr><tr><td>PatchFor</td><td>ECCV2020</td><td>75.03</td><td>68.97</td><td>68.47</td><td>79.16</td><td>64.23</td><td>63.94</td><td>75.54</td><td>75.14</td><td>75.28</td><td>72.33</td><td>55.30</td><td>67.41</td><td>76.50</td><td>76.10</td><td>75.77</td><td>74.81</td><td>73.28</td><td>68.52</td><td>67.91</td><td>71.24</td></tr><tr><td>Co-occurrence</td><td>Elect.Imag.</td><td>97.70</td><td>63.15</td><td>53.75</td><td>92.50</td><td>51.10</td><td>54.70</td><td>57.10</td><td>63.06</td><td>55.85</td><td>65.65</td><td>65.80</td><td>60.50</td><td>70.70</td><td>70.55</td><td>71.00</td><td>70.25</td><td>69.60</td><td>69.90</td><td>67.55</td><td>66.86</td></tr><tr><td>Freq-spec</td><td>WIFS2019</td><td>49.90</td><td>99.90</td><td>50.50</td><td>49.90</td><td>50.30</td><td>99.70</td><td>50.10</td><td>50.00</td><td>48.00</td><td>50.60</td><td>50.10</td><td>50.90</td><td>50.40</td><td>50.40</td><td>50.30</td><td>51.70</td><td>51.40</td><td>50.40</td><td>50.00</td><td>55.45</td></tr><tr><td>Dire</td><td>ICCV2023</td><td>99.86</td><td>73.47</td><td>60.68</td><td>72.39</td><td>65.15</td><td>93.60</td><td>88.86</td><td>52.78</td><td>56.39</td><td>90.07</td><td>94.05</td><td>61.05</td><td>59.35</td><td>59.95</td><td>60.65</td><td>69.30</td><td>72.70</td><td>71.00</td><td>52.75</td><td>71.19</td></tr><tr><td>UnivFD</td><td>CVPR2023</td><td>100.0</td><td>98.50</td><td>94.50</td><td>82.00</td><td>99.50</td><td>97.00</td><td>66.60</td><td>63.00</td><td>57.50</td><td>59.50</td><td>72.00</td><td>70.03</td><td>94.19</td><td>73.76</td><td>94.36</td><td>79.07</td><td>79.85</td><td>78.14</td><td>86.78</td><td>81.38</td></tr><tr><td>NPR</td><td>CVPR2024</td><td>99.80</td><td>92.00</td><td>89.50</td><td>96.30</td><td>87.60</td><td>99.70</td><td>79.40</td><td>61.40</td><td>70.60</td><td>74.50</td><td>57.10</td><td>55.23</td><td>97.40</td><td>98.70</td><td>97.90</td><td>97.00</td><td>97.90</td><td>97.00</td><td>88.80</td><td>86.20</td></tr><tr><td>FatFormer</td><td>CVPR2024</td><td>99.89</td><td>99.36</td><td>99.50</td><td>97.12</td><td>99.43</td><td>99.75</td><td>93.25</td><td>81.39</td><td>68.04</td><td>69.47</td><td>69.47</td><td>76.00</td><td>98.55</td><td>94.85</td><td>98.60</td><td>94.30</td><td>94.60</td><td>94.15</td><td>98.70</td><td>90.86</td></tr><tr><td>Ours</td><td></td><td>99.88</td><td>95.76</td><td>96.70</td><td>98.08</td><td>98.46</td><td>99.17</td><td>91.82</td><td>83.61</td><td>77.45</td><td>95.40</td><td>96.47</td><td>79.55</td><td>98.05</td><td>94.60</td><td>98.25</td><td>92.20</td><td>93.35</td><td>91.80</td><td>98.00</td><td>93.61</td></tr></table>",
"bbox": [
93,
324,
906,
496
],
"page_idx": 5
},
{
"type": "text",
"text": "Table 2. Accuracy comparisons with different methods on the UnivFD dataset.",
"bbox": [
263,
506,
730,
520
],
"page_idx": 5
},
{
"type": "text",
"text": "by the residual connection:",
"bbox": [
89,
530,
272,
544
],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nF _ {l o w} (n) = F ^ {\\prime} (n) + \\frac {F ^ {\\prime} (n)}{e _ {n}}. \\tag {13}\n$$\n",
"text_format": "latex",
"bbox": [
189,
551,
480,
585
],
"page_idx": 5
},
{
"type": "text",
"text": "For optimizing the anchor features $\\tilde{f}_a$ of the enhancer, the following triplet loss [35] is employed to bring positive samples $f_{p}$ closer while pushing negative samples $f_{n}$ apart:",
"bbox": [
89,
584,
482,
631
],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {t r i} = \\max \\left(0, d \\left(f _ {p}, f _ {a}\\right) - d \\left(f _ {n}, f _ {a}\\right) + \\alpha\\right), \\tag {14}\n$$\n",
"text_format": "latex",
"bbox": [
127,
637,
480,
656
],
"page_idx": 5
},
{
"type": "text",
"text": "where $d(\\cdot)$ represents the Euclidean distance between samples and $\\alpha$ is the margin.",
"bbox": [
89,
662,
482,
691
],
"page_idx": 5
},
{
"type": "text",
"text": "On top of that, we concatenate the LoRA-CLIP's CLS token $T_{CLS}$ with $F_{low}$ along the same dimension to yield the refined representation $F_{out}$. This ensures that forged features exhibit distinctiveness across different semantic identities while preserving their uniformity within similar semantic identities. The process is formulated as:",
"bbox": [
89,
691,
483,
782
],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nF _ {o u t} = F _ {l o w} \\| T _ {C L S}, \\tag {15}\n$$\n",
"text_format": "latex",
"bbox": [
215,
791,
482,
808
],
"page_idx": 5
},
{
"type": "text",
"text": "where $F_{out}$ is fed into a linear classifier to perform binary classification. Eventually, the total loss function $\\mathcal{L}$ of the proposed framework can be defined as:",
"bbox": [
89,
815,
483,
875
],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} = \\mathcal {L} _ {b c e} + \\lambda_ {1} \\mathcal {L} _ {t r i} + \\lambda_ {2} \\mathcal {L} _ {r}, \\tag {16}\n$$\n",
"text_format": "latex",
"bbox": [
189,
885,
482,
901
],
"page_idx": 5
},
{
"type": "text",
"text": "where $\\mathcal{L}_{bce}$ denotes the binary cross-entropy loss and $\\mathcal{L}_{tri}$ is the triplet loss. $\\mathcal{L}_{tri}$ and $\\mathcal{L}_r$ are scaled by the hyperparameters $\\lambda_{1}$ and $\\lambda_{2}$, respectively.",
"bbox": [
511,
529,
906,
575
],
"page_idx": 5
},
{
"type": "text",
"text": "4. Experiments",
"text_level": 1,
"bbox": [
511,
593,
645,
609
],
"page_idx": 5
},
{
"type": "text",
"text": "4.1. Experiment Setups",
"text_level": 1,
"bbox": [
511,
619,
694,
636
],
"page_idx": 5
},
{
"type": "text",
"text": "Datasets: We follow the protocol described in [32], using ProGAN's real and fake images as training data. Additionally, we adopt the protocol from [5], where the training data is composed of fake Stable Diffusion v1 images [52] and random real LAION images [41]. The UnivFD dataset [32] covers a broad range of generative models, primarily including GANs and diffusion models, such as ProGAN [18], StyleGAN [19], BigGAN [4], CycleGAN [59], StarGAN [10], GauGAN [46], CRN [9], IMLE [22], SAN [11], SITD [7], DeepFakes [20], Guided [12], Glide [31], LDM [40], and DALL-E [37]. The SynRIS dataset [5] is designed to avoid bias toward any specific topic, theme, or style and contains high-fidelity images generated by text-to-image models, such as Kandinsky2 [38], Kandinsky3 [1], PixArt-$\\alpha$ [8], SDXL-DPO [49], SDXL [34], SegMoE [23], SSD-1B [42], Stable-Cascade [33], Segmind-Vega [15], Würstchen2 [33], Midjourney [29], DALL-E 3 [3],",
"bbox": [
511,
643,
906,
902
],
"page_idx": 5
},
{
"type": "page_number",
"text": "18393",
"bbox": [
480,
944,
517,
955
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/21d2a7a74f868328d782afdbbed51e059299eb60369d9ada291a02909bf435ec.jpg",
"table_caption": [],
"table_footnote": [
"Table 3. AUROC comparisons with different methods on the SynRIS dataset. We retrieve the results of CNNSpot, UnivFD, and FakeInversion from [5] and obtain the results for Dire, NPR, FatFormer, and PatchFor using re-implemented models. Red and underline indicate the best and the second-best result, respectively."
],
"table_body": "<table><tr><td>Methods</td><td>CNN-Spot</td><td>Freq-spec</td><td>Dire</td><td>UnivFD</td><td>NPR</td><td>FatFormer</td><td>FakeInversion</td><td>PatchFor</td><td>Ours</td></tr><tr><td>Kandinsky2</td><td>60.0</td><td>57.0</td><td>71.6</td><td>56.2</td><td>97.5</td><td>75.6</td><td>69.9</td><td>53.5</td><td>97.1</td></tr><tr><td>Kandinsky3</td><td>65.9</td><td>45.7</td><td>74.9</td><td>61.4</td><td>93.7</td><td>80.1</td><td>74.3</td><td>51.4</td><td>94.8</td></tr><tr><td>PixArt-α</td><td>62.7</td><td>56.4</td><td>81.5</td><td>64.7</td><td>89.5</td><td>75.3</td><td>73.0</td><td>49.6</td><td>90.2</td></tr><tr><td>SDXL-DPO</td><td>84.3</td><td>69.8</td><td>69.9</td><td>70.2</td><td>97.6</td><td>86.0</td><td>88.1</td><td>54.5</td><td>95.5</td></tr><tr><td>Segmind-Vega</td><td>74.2</td><td>65.3</td><td>81.0</td><td>62.3</td><td>97.1</td><td>82.4</td><td>81.1</td><td>53.1</td><td>97.0</td></tr><tr><td>SDXL</td><td>81.4</td><td>61.2</td><td>86.2</td><td>66.3</td><td>96.0</td><td>85.1</td><td>80.7</td><td>63.9</td><td>97.6</td></tr><tr><td>Seg-MoE</td><td>66.3</td><td>54.6</td><td>71.9</td><td>62.0</td><td>93.6</td><td>70.8</td><td>71.3</td><td>49.8</td><td>97.6</td></tr><tr><td>SSD-1B</td><td>72.6</td><td>67.8</td><td>79.8</td><td>62.8</td><td>99.6</td><td>70.1</td><td>79.4</td><td>61.2</td><td>99.8</td></tr><tr><td>Stable-Cascade</td><td>70.5</td><td>62.1</td><td>74.1</td><td>68.2</td><td>97.6</td><td>81.6</td><td>74.9</td><td>57.4</td><td>99.2</td></tr><tr><td>Würstchen2</td><td>61.0</td><td>63.3</td><td>74.2</td><td>69.7</td><td>90.9</td><td>72.9</td><td>70.5</td><td>47.2</td><td>98.2</td></tr><tr><td>Midjourney</td><td>63.0</td><td>50.9</td><td>72.4</td><td>59.2</td><td>58.5</td><td>73.6</td><td>66.4</td><td>53.7</td><td>90.0</td></tr><tr><td>Playground</td><td>58.2</td><td>52.3</td><td>67.9</td><td>58.7</td><td>93.1</td><td>81.4</td><td>62.5</td><td>54.1</td><td>92.8</td></tr><tr><td>DALL-E3</td><td>71.6</td><td>59.9</td><td>80.8</td><td>48.0</td><td>69.1</td><td>79.2</td><td>75.9</td><td>50.1</td><td>85.9</td></tr><tr><td>Average</td><td>68.6</td><td>58.9</td><td>75.9</td><td>62.3</td><td>90.3</td><td>78.0</td><td>74.5</td><td>53.8</td><td>95.1</td></tr></table>",
"bbox": [
106,
88,
472,
381
],
"page_idx": 6
},
{
"type": "text",
"text": "and Playground [21].",
"bbox": [
89,
476,
236,
491
],
"page_idx": 6
},
{
"type": "text",
"text": "Metrics: As standard evaluation metrics, the average precision (AP), the accuracy (ACC), and the area under the ROC curve (AUROC) are considered to measure the effectiveness of different methods.",
"bbox": [
89,
491,
482,
535
],
"page_idx": 6
},
{
"type": "text",
"text": "Baselines: In our experiments, we perform thorough comparisons with state-of-the-art methods, as follows: 1) CNNSpot [50]: The method relies on images from only one CNN-based generator. 2) PatchFor [6]: The method performs detection at a patch level. 3) Co-occurrence [30]: The method converts input images into co-occurrence matrices for classification. 4) Freq-spec [57]: The method employs the frequency spectrum of images. 5) Dire [51]: The method exploits the error between an input image and its reconstruction counterpart. 6) UnivFD [32]: The method uses a pre-trained language-vision model to determine the authenticity of images. 7) NPR [45]: The method captures generalized artifacts according to the local interdependence among image pixels. 8) FatFormer [26]: The method is aimed at extracting forgery-adaptive features based on UnivFD. 9) FakeInversion [5]: The method employs text-conditioned inversion maps extracted from Stable Diffusion.",
"bbox": [
89,
537,
482,
792
],
"page_idx": 6
},
{
"type": "text",
"text": "Implementation details: Our training and testing settings are adapted from the approach outlined in the previous study [26] with several key modifications. Specifically, early stopping was employed during model training, with an initial learning rate of $1 \\times 10^{-4}$ and a batch size of 32. Additionally, the LoRA layers are configured with hyperparameters $lora_{r} = 6$ , $lora_{\\alpha} = 6$ , and a dropout rate of 0.8,",
"bbox": [
89,
795,
482,
901
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/b8e851f0a6c0a3b01c9ead4418bfc9f62fb6a7e0eaf71b764adcc5459768e91e.jpg",
"table_caption": [],
"table_footnote": [
"Table 4. Ablation study of the proposed modules on the UnivFD Dataset. We show the mean accuracy $(acc_m)$ and average precision $(ap_m)$ . Red and underline indicate the best and the second-best result, respectively."
],
"table_body": "<table><tr><td rowspan=\"2\">#</td><td rowspan=\"2\">STS module</td><td rowspan=\"2\">CFDL module</td><td rowspan=\"2\">feature enhancement</td><td colspan=\"2\">UnivFD Dataset</td></tr><tr><td>ap_m</td><td>acc_m</td></tr><tr><td>1</td><td></td><td>✓</td><td></td><td>97.37</td><td>81.64</td></tr><tr><td>2</td><td></td><td>✓</td><td>✓</td><td>97.41</td><td>90.17</td></tr><tr><td>3</td><td>✓</td><td>✓</td><td></td><td>97.39</td><td>89.98</td></tr><tr><td>4</td><td>✓</td><td>✓</td><td>✓</td><td>98.52</td><td>93.61</td></tr></table>",
"bbox": [
529,
89,
890,
172
],
"page_idx": 6
},
{
"type": "text",
"text": "while $\\alpha$ is set to 8.0. The proposed method is implemented using PyTorch on 2 NVIDIA RTX A6000 GPUs.",
"bbox": [
511,
250,
903,
279
],
"page_idx": 6
},
{
"type": "text",
"text": "4.2. Comparison Results",
"text_level": 1,
"bbox": [
511,
287,
710,
304
],
"page_idx": 6
},
{
"type": "text",
"text": "The UnivFD dataset includes a diverse range of models, allowing for a comprehensive evaluation of our method across both GAN and diffusion generative models. In addition, the SynRIS dataset provides images generated by cutting-edge generative models. The overall experimental results are presented in Tab. 1, Tab. 2, and Tab. 3.",
"bbox": [
511,
309,
903,
400
],
"page_idx": 6
},
{
"type": "text",
"text": "Results on UnivFD dataset. Results show that our proposed method achieves superior performance compared to UnivFD and FatFormer. Notably, without the biased interpretation introduced by coarse-grained text prompts, SDD surpasses the latest state-of-the-art method FatFormer by $0.34\\%$ in mean AP $(ap_{m})$ and $2.75\\%$ in mean accuracy $(acc_{m})$. Moreover, compared with the methods in Tab. 1 and Tab. 2 that rely on relatively monotonous forgery features, our approach outperforms all of them by a large margin. The above evidence indicates that effectively combining visual concepts and forgery features helps the model extract sufficient forgery patterns and eliminate superfluous features.",
"bbox": [
511,
402,
905,
597
],
"page_idx": 6
},
{
"type": "text",
"text": "Results on SynRIS dataset. As shown in Tab. 3, when confronted with high-fidelity images generated by text-to-image models, methods leveraging pre-trained vision-language models, such as UnivFD and FatFormer, lose their competitiveness. In contrast, NPR, which focuses on neighboring pixel relationships, retains its edge. We assume that current generative models grasp the relationships between visual information and semantic concepts in images but cannot refine local forgery details at the pixel level. Considering that excessive reliance on concepts misses abnormal pixel arrangements and focusing on monotonous forgery patterns can cause overfitting, our detector, which emphasizes low-level features with visual concepts, is trained on lower-fidelity fake images generated by Stable Diffusion [52] to capture concept-specific lacunae. We follow the evaluation protocol from SynRIS [5]. In comparison, our detector achieves an impressive mean AUROC of $95.1\\%$, surpassing the state-of-the-art method by $4.8\\%$. This demonstrates its superior ability to tackle the challenges posed by evolving generative models.",
"bbox": [
511,
599,
905,
900
],
"page_idx": 6
},
{
"type": "page_number",
"text": "18394",
"bbox": [
480,
944,
517,
955
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/0fc9b0eeef54d2a107d0b1c0f6d6c4e979f1491fe980fcbbc07675e0143222a1.jpg",
"image_caption": [
"Figure 5. Performance of function on adaptive weights. Figure 6. T-SNE visualization of real and fake images [47]. The feature space is based on our classifier. Each model randomly samples 500 real and 500 fake images."
],
],
"image_footnote": [],
"bbox": [
125,
90,
310,
213
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/9ba89e1496d6b16bdccd57c575440594fea729573cfd64ec69d877d04d24eeca.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
344,
87,
475,
215
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/53ea1b5f353a96c9738269ec99c6192074f98e4c9590ea3b568427e6f48b7455.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
491,
88,
617,
215
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/12508e5e81b25cf63bb1033d94fba7fce7241302bce0d94e1bdf758f953e09b0.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
635,
88,
759,
215
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/88bdb2dfb8a557c7e1f64a77e634075a41c2959d745cddd5ef7d47a86cb6ada7.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
781,
88,
906,
215
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/3f926a12a11783a8f3703d8b1e19a10696a9a49ef46cb72f9c96a4c3486c118f.jpg",
"image_caption": [
"Figure 7. A showcase of attention maps on images input into our model."
],
],
"image_footnote": [],
"bbox": [
91,
261,
480,
416
],
"page_idx": 7
},
{
"type": "text",
"text": "4.3. Ablation Study",
"text_level": 1,
"bbox": [
89,
462,
240,
478
],
"page_idx": 7
},
{
"type": "text",
"text": "We perform comprehensive ablation studies on the UnivFD dataset under the original experimental configurations, reporting the mean accuracy $(acc_{m})$ and mean average precision $(ap_{m})$ as the primary evaluation metrics.",
"bbox": [
89,
486,
483,
546
],
"page_idx": 7
},
{
"type": "text",
"text": "Effect of Each Component: We study the effects of removing the STS module, the CFDL module, and the feature enhancement from our method. The results, presented in Tab. 4, demonstrate that these components are essential for improving generalization to unseen models. This empirical finding suggests that the CFDL effectively captures forgery discrepancies associated with semantic concepts, while the enhancer module plays a crucial role in identifying robust forgery artifacts. The collaboration of all modules enhances the model's ability to distinguish between real and fake images.",
"bbox": [
89,
549,
483,
715
],
"page_idx": 7
},
{
"type": "text",
"text": "Effect of function on adaptive weights: To check how well the proposed function works in the SDD, we select two conventional functions for comparison: $f(x) = |x|$ and $f(x) = x^2$ . The corresponding results are presented in Fig. 5. We find that our proposed function yields improvements in both $ap_m$ and $acc_m$ compared to the selected functions. These results demonstrate that our proposed function is conducive to capturing robust and distinctive forgery features.",
"bbox": [
89,
717,
482,
851
],
"page_idx": 7
},
{
"type": "text",
"text": "Visualization of learned latent space: As shown in Fig. 6, the input images can be distinctly categorized into two clusters: real and fake. Nevertheless, why does the di",
"bbox": [
89,
854,
483,
900
],
"page_idx": 7
},
{
"type": "text",
"text": "vided boundary of ProGAN appear ambiguous in contrast to other models? Additionally, why do the real clusters of CycleGAN and StyleGAN separate from each other? We attribute these to the influence of visual semantic concepts. Perceptively, with the supervision of visual semantic concepts, the learned boundary of ProGAN is more complex and nuanced, rather than just simple straight lines or curves. Similarly, the images generated by StyleGAN and CycleGAN are projected into the corresponding semantic concept distribution and then separated from the real images based on the visual semantic concepts.",
"bbox": [
511,
265,
906,
430
],
"page_idx": 7
},
{
"type": "text",
"text": "Visualization of attention on images: We apply Class Activation Mapping (CAM) [58] to visualize the learned representations. From our perspective, Fig. 7 illustrates that with the aid of semantic information, our model can focus on different regions of fake images, including the background, local object regions, and marginal details. This suggests that our fine-grained model is capable of capturing intricate discrepancies generalized to unseen models. Notably, the real images nearly always show no forgery discrepancy regions, which demonstrates the effectiveness of the reconstruction loss in the forgery detection task.",
"bbox": [
511,
431,
908,
598
],
"page_idx": 7
},
{
"type": "text",
"text": "5. Conclusion",
"text_level": 1,
"bbox": [
511,
609,
633,
626
],
"page_idx": 7
},
{
"type": "text",
"text": "In this paper, we propose a novel method, SDD, for generalizable forgery image detection. The findings show that our method establishes a new state-of-the-art in detecting images generated by generative models from different periods, which underscores its robustness and superior generalization capability. To the best of our knowledge, in pre-trained vision-language paradigms, our approach is the first to rely solely on visual information, without text prompts. Based on experimental results, we conclude that leveraging sampled tokens and reconstruction techniques effectively aligns the visual semantic concept space with the forgery space. Additionally, refining low-level forgery features under the supervision of visual semantic concepts enhances the performance of forgery detection. Although SDD performs well across various generative methods, there is still room for improvement as generative technologies continue to advance. Future research will explore this further.",
"bbox": [
511,
636,
908,
892
],
"page_idx": 7
},
{
"type": "page_number",
"text": "18395",
"bbox": [
480,
944,
517,
955
],
"page_idx": 7
},
{
"type": "text",
"text": "Acknowledgement",
"text_level": 1,
"bbox": [
91,
90,
250,
107
],
"page_idx": 8
},
{
"type": "text",
"text": "This work is supported by the National Natural Science Foundation of China (Grant Nos. 62372238 and 62476133).",
"bbox": [
91,
114,
483,
146
],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
91,
161,
187,
176
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[1] V.Ya. Arkhipkin, Andrei Filatov, Viacheslav Vasilev, Anastasia Maltseva, Said Azizov, Igor Pavlov, Julia Agafonova, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky 3.0 technical report. ArXiv, abs/2312.03511, 2023. 6",
"[2] Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. ArXiv, abs/2107.03006, 2021. 1",
"[3] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. Improving image generation with better captions. 6",
"[4] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. ArXiv, abs/1809.11096, 2018. 6",
"[5] George Cazenavette, Avneesh Sud, Thomas Leung, and Ben Usman. Fake inversion: Learning to detect images from unseen text-to-image models by inverting stable diffusion. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10759-10769, 2024. 2, 3, 6, 7",
|
| 1476 |
+
"[6] Lucy Chai, David Bau, Ser-Nam Lim, and Phillip Isola. What makes fake images detectable? understanding properties that generalize. In European Conference on Computer Vision, 2020. 1, 7",
|
| 1477 |
+
"[7] Cheng Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3291-3300, 2018. 6",
|
| 1478 |
+
"[8] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart- $\\delta$ : Fast and controllable image generation with latent consistency models. ArXiv, abs/2401.05252, 2024. 6",
|
| 1479 |
+
"[9] Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 1520-1529, 2017. 6",
|
| 1480 |
+
"[10] Yunjey Choi, Min-Je Choi, Mun Su Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8789-8797, 2017. 6",
|
| 1481 |
+
"[11] Tao Dai, Jianrui Cai, Yongbing Zhang, Shutao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11057-11066, 2019. 6",
|
| 1482 |
+
"[12] Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis. ArXiv, abs/2105.05233, 2021. 6"
|
| 1483 |
+
],
"bbox": [
93,
186,
483,
900
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[13] Joel Cameron Frank, Thorsten Eisenhofer, Lea Schonherr, Asja Fischer, Dorothea Kolossa, and Thorsten Holz. Leveraging frequency analysis for deep fake image recognition. ArXiv, abs/2003.08685, 2020. 1, 3",
|
| 1497 |
+
"[14] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63:139 - 144, 2014. 1",
|
| 1498 |
+
"[15] Yatharth Gupta, Vishnu V. Jaddipal, Harish Prabhala, Sayak Paul, and Patrick von Platen. Progressive knowledge distillation of stable diffusion xl using layer level loss. ArXiv, abs/2401.02677, 2024. 6",
|
| 1499 |
+
"[16] Zhizhong Han, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. Multi-angle point cloud-vae: Unsupervised feature learning for 3d point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10441-10450. IEEE, 2019. 3",
|
| 1500 |
+
"[17] J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. ArXiv, abs/2106.09685, 2021. 4",
|
| 1501 |
+
"[18] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. ArXiv, abs/1710.10196, 2017. 6",
|
| 1502 |
+
"[19] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4396-4405, 2018. 6",
|
| 1503 |
+
"[20] Prabhat Kumar, Mayank Vatsa, and Richa Singh. Detecting face2face facial reenactment in videos. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2578-2586, 2020. 6",
|
| 1504 |
+
"[21] Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, and Suhail Doshi. Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation. ArXiv, abs/2402.17245, 2024. 7",
|
| 1505 |
+
"[22] Ke Li, Tianhao Zhang, and Jitendra Malik. Diverse image synthesis from semantic layouts via conditional imle. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4219-4228, 2018. 6",
|
| 1506 |
+
"[23] Zhenghong Li, Hao Chen, Jiangjiang Wu, Jun Li, and Ning Jing. Segmind: Semisupervised remote sensing image semantic segmentation with masked image modeling and contrastive learning method. IEEE Transactions on Geoscience and Remote Sensing, 61:1-17, 2023. 6",
|
| 1507 |
+
"[24] Bo Liu, Fan Yang, Xiuli Bi, Bin Xiao, Weisheng Li, and Xinbo Gao. Detecting generated images by real images. In European Conference on Computer Vision, 2022. 3",
|
| 1508 |
+
"[25] Honggu Liu, Xiaodan Li, Wenbo Zhou, Yuefeng Chen, Yuan He, Hui Xue, Weiming Zhang, and Nenghai Yu. Spatial-phase shallow learning: Rethinking face forgery detection in frequency domain. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 772-781, 2021. 2",
|
| 1509 |
+
"[26] Huan Liu, Zichang Tan, Chuangchuang Tan, Yunchao Wei, Yao Zhao, and Jingdong Wang. Forgery-aware adaptive transformer for generalizable synthetic image detection."
|
| 1510 |
+
],
"bbox": [
516,
92,
903,
900
],
"page_idx": 8
},
{
"type": "page_number",
"text": "18396",
"bbox": [
480,
944,
519,
955
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10770-10780, 2023. 1, 2, 3, 4, 5, 7",
|
| 1535 |
+
"[27] Xinhai Liu, Xinchen Liu, Zhizhong Han, and Yu-Shen Liu. Spu-net: Self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization. IEEE Transactions on Image Processing, 31:4213-4226, 2020. 3",
|
| 1536 |
+
"[28] Zhengzhe Liu, Xiaojuan Qi, Jiaya Jia, and Philip H. S. Torr. Global texture enhancement for fake face detection in the wild. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8057-8066, 2020. 3",
|
| 1537 |
+
"[29] Midjourney. Midjourney, n.d. Accessed: 2025-03-04. 6",
|
| 1538 |
+
"[30] Lakshmanan Nataraj, Tajuddin Manhar Mohammed, B. S. Manjunath, Shivkumar Chandrasekaran, Arjuna Flenner, Jawadul H. Bappy, and Amit K. Roy-Chowdhury. Detecting gan generated fake images using co-occurrence matrices. ArXiv, abs/1903.06836, 2019. 7",
|
| 1539 |
+
"[31] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, 2021. 6",
|
| 1540 |
+
"[32] Utkarsh Ojha, Yuheng Li, and Yong Jae Lee. Towards universal fake image detectors that generalize across generative models. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24480-24489, 2023. 1, 2, 3, 5, 6, 7",
|
| 1541 |
+
"[33] Pablo Pernias, Dominic Rampas, Mats L. Richter, and Marc Aubreville. Wuerstchen: Efficient pretraining of text-to-image models. ArXiv, abs/2306.00637, 2023. 6",
|
| 1542 |
+
"[34] Dustin Podell, Zion English, Kyle Lacey, A. Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. ArXiv, abs/2307.01952, 2023. 6",
|
| 1543 |
+
"[35] Zequn Qin, Pengyi Zhang, Fei Wu, and Xi Li. Fcanet: Frequency channel attention networks. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 763-772, 2020. 6",
|
| 1544 |
+
"[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. 2",
|
| 1545 |
+
"[37] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. ArXiv, abs/2102.12092, 2021. 6",
|
| 1546 |
+
"[38] Anton Razzhigaev, Arseniy Shakhmatov, Anastasia Maltseva, V.Ya. Arkhipkin, Igor Pavlov, Ilya Ryabov, Angelina Kuts, Alexander Panchenko, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky: an improved text-to-image synthesis with image prior and latent diffusion. In Conference on Empirical Methods in Natural Language Processing, 2023. 6",
|
| 1547 |
+
"[39] Jonas Ricker, Denis Lukovnikov, and Asja Fischer. Aeroblade: Training-free detection of latent diffusion images using"
|
| 1548 |
+
],
"bbox": [
91,
90,
483,
902
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"autoencoder reconstruction error. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9130-9140, 2024. 3, 2",
|
| 1562 |
+
"[40] Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674-10685, 2021. 6",
|
| 1563 |
+
"[41] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Laion-5b: An open large-scale dataset for training next generation image-text models. ArXiv, abs/2210.08402, 2022.6",
|
| 1564 |
+
"[42] Segmind. Announcing ssd-1b: A leap in efficient t2i generation., 2023. 6",
|
| 1565 |
+
"[43] Minghe Shen, Hongping Gan, Chao Ning, Yi Hua, and Tao Zhang. Transc: A transformer-based hybrid architecture for image compressed sensing. IEEE Transactions on Image Processing, 31:6991-7005, 2022. 2",
|
| 1566 |
+
"[44] Chuangchuang Tan, Yao Zhao, Shikui Wei, Guanghua Gu, and Yunchao Wei. Learning on gradients: Generalized artifacts representation for gan-generated images detection. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12105-12114, 2023. 1, 2",
|
| 1567 |
+
"[45] Chuangchuang Tan, Huan Liu, Yao Zhao, Shikui Wei, Guanghua Gu, Ping Liu, and Yunchao Wei. Rethinking the up-sampling operations in cnn-based generative network for generalizable deepfake detection. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28130-28139, 2024. 2, 7",
|
| 1568 |
+
"[46] Devavrat Tomar, Manana Lortkipanidze, Guillaume Vray, Behzad Bozorgtabar, and Jean-Philippe Thiran. Self-attentive spatial adaptive normalization for cross-modality domain adaptation. IEEE Transactions on Medical Imaging, 40:2926-2938, 2021. 6",
|
| 1569 |
+
"[47] Laurens van der Maaten and Geoffrey E. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9: 2579-2605, 2008. 8",
|
| 1570 |
+
"[48] Tim van Erven and Peter Harremoës. Rényi divergence and kullback-leibler divergence. IEEE Transactions on Information Theory, 60:3797-3820, 2012. 4",
|
| 1571 |
+
"[49] Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq R. Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8228-8238, 2023. 6",
|
| 1572 |
+
"[50] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A. Efros. Cnn-generated images are surprisingly easy to spot... for now. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8692-8701, 2019. 1, 7, 2, 3",
|
| 1573 |
+
"[51] Zhendong Wang, Jianmin Bao, Wen gang Zhou, Weilun Wang, Hezhen Hu, Hong Chen, and Houqiang Li. Dire for"
|
| 1574 |
+
],
"bbox": [
516,
92,
903,
902
],
"page_idx": 9
},
{
"type": "page_number",
"text": "18397",
"bbox": [
480,
944,
517,
955
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"diffusion-generated image detection. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 22388-22398, 2023. 3, 7",
|
| 1599 |
+
"[52] Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. In Annual Meeting of the Association for Computational Linguistics, 2022. 6, 7",
|
| 1600 |
+
"[53] Syed Waqas Zamir, Aditya Arora, Salman Hameed Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Learning enriched features for fast image restoration and enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45:1934-1948, 2022. 2",
|
| 1601 |
+
"[54] Maxime Zanella and Ismail Ben Ayed. Low-rank few-shot adaptation of vision-language models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1593-1603, 2024. 4",
|
| 1602 |
+
"[55] Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. ArXiv, abs/2306.10012, 2023. 4, 5",
|
| 1603 |
+
"[56] Lingzhi Zhang, Zhengjie Xu, Connelly Barnes, Yuqian Zhou, Qing Liu, He Zhang, Sohrab Amirghodsi, Zhe Lin, Eli Shechtman, and Jianbo Shi. Perceptual artifacts localization for image synthesis tasks. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7545-7556, 2023. 5",
|
| 1604 |
+
"[57] Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in gan fake images. 2019 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6, 2019. 7",
|
| 1605 |
+
"[58] Bolei Zhou, Aditya Khosla, Ågata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2921-2929, 2015. 8",
|
| 1606 |
+
"[59] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2242-2251, 2017. 6"
|
| 1607 |
+
],
"bbox": [
91,
90,
482,
654
],
"page_idx": 10
},
{
"type": "page_number",
"text": "18398",
"bbox": [
480,
945,
517,
955
],
"page_idx": 10
}
]
2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_model.json
ADDED
The diff for this file is too large to render.
2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/d257d4bc-770d-463b-8ab1-d4cb1749eea8_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8c6ea47e387de77af490cbdebb7e4566f3efb6bc76f15398e633ef5e18f096f
size 4618878
2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/full.md
ADDED
@@ -0,0 +1,344 @@
# Semantic Discrepancy-aware Detector for Image Forgery Identification

Ziye Wang, Minghang Yu, Chunyan Xu*, Zhen Cui
Nanjing University of Science and Technology, Nanjing, China

{wzynjust,mhyu,cyx}@njust.edu.cn, zhen.cui@bnu.edu.cn

# Abstract

With the rapid advancement of image generation techniques, robust forgery detection has become increasingly imperative to ensure the trustworthiness of digital media. Recent research indicates that the learned semantic concepts of pre-trained models are critical for identifying fake images. However, the misalignment between the forgery and semantic concept spaces hinders the model's forgery detection performance. To address this problem, we propose a novel Semantic Discrepancy-aware Detector (SDD) that leverages reconstruction learning to align the two spaces at a fine-grained visual level. By exploiting the conceptual knowledge embedded in the pre-trained vision-language model, we specifically design a semantic token sampling module to mitigate the space shifts caused by features irrelevant to both forgery traces and semantic concepts. A concept-level forgery discrepancy learning module, based on reconstruction, enhances the interaction between semantic concepts and forgery traces, effectively capturing discrepancies under the concepts' guidance. Finally, the low-level forgery feature enhancement integrates the learned concept-level forgery discrepancies to minimize redundant forgery information. Experiments conducted on two standard image forgery datasets demonstrate the efficacy of the proposed SDD, which achieves superior results compared to existing methods. The code is available at https://github.com/wzy111111/SSD.
# 1. Introduction

With the thriving of generative AI technologies, like Generative Adversarial Networks (GANs) [14] and diffusion models [2], the images generated by these models can easily create confusion by passing off the spurious as genuine. Therefore, it is crucial to develop a universal method for detecting fake images to mitigate the widespread dissemination of disinformation.

Pioneering research [26, 32] has shown that projecting images in a joint embedding space of texts and images can effectively capture discrepancies between fake and real images. In contrast, methods [6, 13, 44, 50] overlooking the interplay between forgery traces and semantic concepts perform poorly when confronted with unseen generative models.



Figure 1. The phenomenon of misalignment between semantic concept space and forgery space. Since $\cos \theta$ can reflect the similarity of image descriptions, we model the feature space in polar coordinates. As the semantic concept space in [32] is frozen, fake samples sharing similar concepts with real ones can be easily misclassified. With a forgery-adaptive space like [26], the model can correctly distinguish between them based on re-learned forgery features. Nevertheless, due to the semantic concept bias introduced by coarse text prompts, the target samples may be projected into an inaccurate semantic concept dimension, causing them to drift away from the real source samples along the fake dimension.
To investigate the visual semantic concepts of pre-trained models, we conduct a statistical analysis of the output features from CNNSpot [50] and CLIP-ViT [32] (see Appendix A for more details). Under different categories, CNNSpot exhibits a synchronized difference between real and fake features in its training space. However, when transitioning to the CLIP's space, these differences become inconsistent. From this, we infer a nuanced relationship between semantic concepts and forgery traces: different semantic concepts may guide the model to uncover distinct forgery traces.



Figure 2. Different paradigms of image forgery identification with a pre-trained vision-language model. (a) Fine-tune the frozen model only by fully connected (FC) layers [32]. (b) Prompt-based designs are tuned on text prompts and contrastive objectives [26]. (c) Our paradigm incorporating visual clues can capture fine-grained forgery traces by reconstruction learning.
Intuitively, relying on a frozen pre-trained vision-language model like UnivFD [32] is essential, but this tends to overlook fine-grained forgery details. Although FatFormer [26] achieves a substantial enhancement in generalization by employing the forgery-aware adaptive transformer, we observe that soft prompts based on simple [CLASS] embeddings have an intrinsic limitation in their semantic description granularity (see Appendix B for more details). The constrained breadth of the conveyed concepts may lead the detection toward incorrect predictions. This limitation highlights a misalignment between the visual semantic concept space and the target forgery space, as illustrated in Fig. 1.

To address this, one empirical approach is to design more detailed text descriptions, but this method struggles to describe all visual forgery details due to the limited length of texts and brings more computational overhead. Drawing from the aforementioned findings and analysis, we make a first attempt to align the CLIP's visual semantic concept space with the forgery space by reconstructing semantic features.
We develop a vision-based paradigm, as outlined in Fig. 2. First, employing a pre-trained model only with nearest neighbor or linear probing (e.g., UnivFD [32], Fig. 2 (a)) is suboptimal for image forgery detection. Second, modifying the pre-trained model with task-specific prompts (e.g., FatFormer [26], Fig. 2 (b)) may favor models biased towards any particular semantic concept. These studies pave the way for exploring pre-trained space with rich semantic concepts. Inspired by image reconstruction [43, 53], our paradigm amplifies the concept-level forgery discrepancies of forgery images, which empowers the model to detect suspicious forgery traces with the assistance of semantic concepts.
In this work, we present a novel Semantic Discrepancy-aware Detector (SDD) to accurately align the semantic concept space and the forgery space. Firstly, to mitigate interference from features unrelated to learned semantic concepts and forgery traces, we divide the real images into non-overlapping blocks and feed them to the frozen CLIP [36] to obtain diverse semantic patch tokens. These tokens, acting as visual clues, smoothly align the semantic concept space and the forgery space. It is noteworthy that these tokens, sampled by JS divergence, are universally representative of the real semantic distribution. Then, the visual clues are fused into a concept-level forgery discrepancy module. Unlike FatFormer, LoRA layers are incorporated into the image encoder. The goal is to preserve the completeness and diversity of the learned semantic concepts of CLIP, while the forgery features sharing similar semantic concepts should be highlighted. During reconstruction, we only narrow the reconstruction gap for real samples to reinforce the reconstructed discrepancies of the synthetic images. Finally, we present low-level forgery feature enhancement to let the reconstruction difference map enhance the extraction of highly generalizable forgery features while introducing minimal additional parameters. The main challenge is how to capture forgery features with strong semantic concept correlation, as well as features with high forgery relevance but weak semantic concept ties, to ensure the model converges to powerful features. Motivated by this, we apply convolutional modules and adaptive weight parameters to avoid over-relying on semantic concepts.

We thoroughly evaluate the generalization performance of our model on the UnivFD benchmark [32] and the SynRIS benchmark [5]. Our method achieves superior performance with an $ap_{m}$ of 98.51% and an $acc_{m}$ of 93.61% on the UnivFD benchmark [32] and an average AUROC of 95.1% on the SynRIS benchmark [5]. In summary, our contributions are as follows:

- We propose a robust model (SDD) for forgery detection, specifically designed to align the visual concept space and forgery space in terms of visual information.
- We sample semantic tokens to mitigate the space shifts and align the two spaces through reconstruction learning. Additionally, we strengthen low-level forgery features to enhance the model's robustness.
- Our method achieves superior performance on two benchmarks, demonstrating its superior capability in comparison to existing approaches.
# 2. Related Work

AI-generated Images Detection. Extensive efforts have been devoted to enhancing the performance of AI-generated image detection. Early works like [25, 44, 45] tend to mine the common forgery traces between all real and fake images, such as noise patterns, texture statistics, and frequency signals. As an illustration, Liu et al. [24] designed a network that learns the consistent noise patterns in images for fake detection. Liu et al. [28] proposed to leverage the gram matrix to discover the global anomalous texture of fake images. An effective approach [13] demonstrated that frequency representation is an important factor in improving fake detection performance. However, these differences are rigorously specific to the monotonous features, which contribute to the issue of overfitting. Cutting-edge research [26, 32] shifted attention toward the semantic properties of images. Ojha et al. [32] showed that projecting images into the feature space of a pre-trained vision-language model enables strong generalization ability. To build generalized forgery representations, Liu et al. [26] constructed a forgery-adaptive space by a forgery-aware adapter. The above research [5, 26, 32] has suggested that concept attributes are vital in the image forgery detection task. Assuming that diffusion-based models leave distinct forgery traces that are characteristic of specific concept distributions, we aim to extract robust forgery features guided by semantic concepts, rather than suppressing them. Therefore, even "useless" information can be useful by providing significant certainty about the content of the image.



Figure 3. The architecture of SDD. First, the input images are mapped into a joint space of visual semantic concepts and forgery, and transformed into learnable features $V_{H}$. Next, we sample semantic tokens from real images to learn features related to both concepts and forgery. Then, we use a transformer-based encoder and decoder to get reconstructed features $\mathcal{R}_f$. A reconstruction difference map $\mathcal{D}_S$ is obtained and goes through the multi-scale convolutional network to refine forgery features. Finally, we concatenate the CLIP's CLS token with this output along the same dimension for classification. The whole system is trained by jointly minimizing the binary cross-entropy loss $L_{bce}$, the reconstruction loss $L_{r}$, and the triplet loss $L_{tri}$.

Reconstruction Learning. Reconstruction learning has great potential in unsupervised representation learning [16, 27]. Some works [5, 39] utilized reconstruction learning to reveal the nuances between real and fake images. For example, Wang et al. [51] found that reconstructing images by DDIM exposes an error between real images and their reconstructed replica. The new synthetic image detection method [5] used text-conditioned inversion maps to learn internal representations, which is conducive to predicting whether an image is fake. Ricker et al. [39] offered a simple detection approach by applying an AE to measure the reconstruction error. Notably, these works are committed to reconstructing the distributions of both real and fake samples by leveraging generative models. Unlike previous works, we focus solely on reconstructing real images in the fine-tuned CLIP space in light of the authenticity and richness of semantic concepts. The distribution of real samples, learned from a pre-trained vision-language model, helps to define an optimal boundary, thus alleviating overfitting.
# 3. Methodology

Our goal is to align forgery and visual semantic concept spaces using reconstruction techniques for robust and generalizable synthetic image detection. To achieve this, we introduce a fine-grained model named Semantic Discrepancy-aware Detector (SDD). Building on prior works, we harness the generalization capability of vision-language models. Semantic concept space: the ideal joint embedding space of images and texts with four properties: semantic alignment, modality invariance, locality consistency, and structure preservation. Forgery space: the ideal space covering forgery traces. Notably, we derive the semantic concept space via a vision-language model pretrained solely on real images; thus we treat the two spaces as independent.

First, the Semantic Tokens Sampling (STS) module utilizes Jensen-Shannon (JS) divergence to sample semantic patch tokens, which serve as a transitional bridge, facilitating the model to accurately establish the association between real and forged images. Next, the Concept-level Forgery Discrepancy Learning (CFDL) module employs reconstruction learning to explore the forgery discrepancies within the visual semantic concept space, focusing on identifying subtle variations between reconstructed forgery features. Finally, the reconstruction difference map is trained with the Low-level Forgery Feature Enhancement module, which aims to refine forgery features with more visual details.
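The real-only reconstruction idea behind CFDL can be sketched numerically. This is a toy NumPy sketch with a hypothetical function name (`real_only_reconstruction_loss`), not the paper's actual $L_r$, which operates on CLIP features through a transformer encoder and decoder; it only illustrates that masking the loss to real samples leaves the reconstruction error of synthetic samples unconstrained, so their discrepancies stay large and informative.

```python
import numpy as np

def real_only_reconstruction_loss(features, reconstructed, is_real):
    """Mean squared reconstruction error computed over real samples only.

    features, reconstructed: arrays of shape (batch, dim).
    is_real: boolean mask over the batch; fake samples are excluded,
    so their reconstruction gap is never minimized.
    """
    is_real = np.asarray(is_real, dtype=bool)
    if not is_real.any():
        return 0.0  # a batch with no real samples contributes no loss
    diff = features[is_real] - reconstructed[is_real]
    return float(np.mean(diff ** 2))
```

In training, this term would be combined with the classification objectives; here it simply shows that a perfect reconstruction of the real samples yields zero loss regardless of how poorly the fake samples reconstruct.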
# 3.1. Semantic Tokens Sampling

Initially, we considered directly aligning the visual semantic concept space and forgery space by leveraging fine-grained reconstruction learning to model real and fake semantic distributions. However, this strategy would treat the differences in features unrelated to semantics and forgery as crucial factors for identifying an image's authenticity. To eliminate these redundant features, we sample real semantic image patch tokens as visual clues to bridge the real and forged semantic domains. This module enables the model to focus on concept-related forgery traces and highlight the distinctions between real and fake images. Concretely, the image encoder of CLIP ViT-L/14 is adapted to transform a real image $x_{r}$ into a set of features $f_{r}$, without the image CLS token. We define the transformation as:

$$
f_{r} = \phi(x_{r}), \tag{1}
$$

where $\phi(\cdot)$ is the CLIP ViT-L/14 visual encoder and $x_{r} \in \mathbb{R}^{H \times W \times 3}$ represents a real image characterized by a height of $H$ and a width of $W$. Besides, $f_{r} \in \mathbb{R}^{N \times D}$, where $N$ is the number of tokens and $D$ denotes the dimension of each patch token.
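To make the token bookkeeping of Eq. (1) concrete: ViT-L/14 at a 224x224 input uses 14x14 patches, so the grid is 16x16 and $N = 256$. The toy `patchify` below is an illustrative assumption, not CLIP's actual encoder (which additionally applies a learned linear projection to width $D$ and transformer blocks); it reproduces only the patch-grid arithmetic.

```python
import numpy as np

def patchify(image, patch=14):
    """Split an (H, W, C) image into non-overlapping patch tokens.

    Returns an array of shape (N, patch*patch*C) with
    N = (H // patch) * (W // patch), mirroring the token count of a ViT.
    """
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    p = image[:gh * patch, :gw * patch]            # drop any remainder pixels
    p = p.reshape(gh, patch, gw, patch, C)          # split rows and columns
    p = p.transpose(0, 2, 1, 3, 4)                  # group by patch position
    return p.reshape(gh * gw, patch * patch * C)    # one flat row per patch

x_r = np.zeros((224, 224, 3))
f = patchify(x_r)
assert f.shape == (256, 14 * 14 * 3)  # N = 256 tokens for ViT-L/14 at 224x224
```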
|
| 77 |
+
|
| 78 |
+
Since integrating all real patch tokens into the image reconstruction module is computationally intensive and memory-consuming, it is urgent to select a suitable subset of these tokens. From a distribution perspective, the Jensen-Shannon (JS) divergence, derived from the Kullback-Leibler divergence [48], is a symmetric and finite metric that can effectively measure the similarity between tokens by quantifying differences in their distributions.
|
| 79 |
+
|
| 80 |
+
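As a quick sanity check of these properties, the base-2 Jensen-Shannon divergence (the base that gives the $[0, 1]$ range used below) can be sketched as follows; the function names are illustrative, not taken from the paper's code:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q), in bits, for discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log2(p / q)))

def js(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by 1 in base 2."""
    m = 0.5 * (np.asarray(p) + np.asarray(q))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

a = softmax(np.array([1.0, 2.0, 3.0]))
b = softmax(np.array([3.0, 2.0, 1.0]))
assert abs(js(a, b) - js(b, a)) < 1e-9   # symmetric
assert 0.0 <= js(a, b) <= 1.0 + 1e-9     # finite and bounded
assert js(a, a) < 1e-9                   # zero for identical inputs
```

The symmetry and boundedness checked here are exactly why JS divergence (rather than raw KL) is a convenient distance for comparing tokens.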
To calculate the JS divergence between two tokens, both are converted into probability distributions using the softmax function. Let $f_{s} \in \mathbb{F}_{r}^{M \times D}$ be the selected semantic patch tokens, with the number of tokens $M = 1/\delta$ and dimension $D$, where the sampling rate $\delta$ ($0 < \delta \leq 1$) is user-defined. Once the initial token $\tilde{r}$ and $\delta$ are determined, the JS divergence between $\tilde{r}$ and any other token $r$ falls within the range $[0, 1]$. Subsequently, the sampling interval is split into $M$ equal segments, with one token selected from each segment. As a consequence, the semantic tokens sampling module can be formulated as:

$$
\begin{array}{l}
f_{s} = \mathcal{S}\left(\mathbb{R}^{N \times D}, \delta\right) = A_{c}^{N_{a} \times M} \times \mathbb{R}^{N \times D}, \\
\text{s.t. } \mathrm{JS}\left(\mathrm{softmax}(\tilde{r}), \mathrm{softmax}(r)\right) = \frac{i}{M}, \ \text{if } a_{ij} = 1, \\
\sum_{i=0}^{N_{a}} \sum_{j=0}^{M} a_{ij} = M; \quad \sum_{j=0}^{M} a_{ij} = 1 \ \text{or} \ 0, \\
i = 1, \dots, N_{a}; \quad j = 1, \dots, M,
\end{array} \tag{2}
$$

where $\mathcal{S}(\cdot, \cdot)$ represents the sampling process and $A_{c}^{N_{a} \times M}$ is a constraint matrix of size $N_{a} \times M$ whose elements $a_{ij}$ are binary, i.e., $a_{ij} \in \{0, 1\}$. Here $\mathrm{JS}(\cdot, \cdot)$ refers to the Jensen-Shannon divergence, $N_{a}$ denotes the total number of real image patch tokens sampled from the training dataset of UnivFD, $M$ represents the required subset size, and $\mathrm{softmax}(\cdot)$ is the softmax function. The sampled tokens keep the reconstruction module from becoming biased towards any particular forgery-unrelated distribution. Meanwhile, this avoids the semantic bias often introduced by text prompts, since the tokens are evenly distributed in a unified CLIP space.

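The interval-stratified sampling above can be sketched as follows. This is a simplified reading of Eq. (2), assuming a user-chosen reference token and base-2 JS divergence; `sample_tokens` and its first-in-bin selection rule are illustrative, not the paper's exact procedure:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def js_div(p, q, eps=1e-12):
    """Base-2 Jensen-Shannon divergence, bounded in [0, 1]."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log2(p / m)) + 0.5 * np.sum(q * np.log2(q / m))

def sample_tokens(tokens, delta, ref_idx=0):
    """Pick at most M = round(1/delta) tokens, one from each equal-width
    JS bin relative to a reference token (a sketch of the STS idea)."""
    M = max(1, round(1.0 / delta))
    probs = softmax(tokens, axis=-1)
    d = np.array([js_div(probs[ref_idx], probs[i]) for i in range(len(tokens))])
    picked = []
    for j in range(M):
        lo, hi = j / M, (j + 1) / M
        upper = (d <= hi) if j == M - 1 else (d < hi)
        idx = np.where((d >= lo) & upper)[0]
        if idx.size:                  # a bin may be empty in practice
            picked.append(int(idx[0]))
    return tokens[picked]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 8))          # N=196 patch tokens, D=8
subset = sample_tokens(tokens, delta=0.25)  # at most M=4 tokens
assert subset.shape[1] == 8 and 1 <= len(subset) <= 4
```

Binning the $[0, 1]$ JS range and taking one token per bin is what spreads the selected tokens evenly over the distribution, rather than clustering them near the reference.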
# 3.2. Concept-level Forgery Discrepancy Learning

A few words alone can hardly paint a picture. We argue that fine-grained visual details can uncover more forgery traces concealed in images. As such, we mix the sampled tokens with extracted features and capitalize on reconstruction learning to compensate for the forgery traces omitted by coarse prompts. Previous work [26] has demonstrated that a pre-trained vision-language model requires fine-tuning to adapt to the forgery detection task. Therefore, we integrate LoRA [17] with the CLIP-ViT model to capture discriminative forgery features by making use of the broad semantic concepts. This method, denoted as LoRA-CLIP [54], is more streamlined and flexible. Given an input image $\mathcal{I} \in \mathbb{I}^{H \times W \times 3}$, we obtain high-level visual features $V_{H}$ as follows:

$$
V_{H} = \mathcal{F}_{LoRA}(\mathcal{I}). \tag{3}
$$

Here $\mathcal{F}_{LoRA}$ refers to the CLIP image encoder fine-tuned by LoRA. The reconstruction module of CFDL comprises two submodules, a transformer-based encoder and decoder. Thanks to the transformer's capability for long-range relationship modeling, we capitalize on multi-head attention (MHA), the core mechanism of the transformer, to obtain a more discriminative distribution by exploiting contextual information:

$$
\left\{
\begin{aligned}
\operatorname{head}_{i} &= \operatorname{Attn}\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right) \\
&= \operatorname{Softmax}\left(\frac{Q W_{i}^{Q}\left(K W_{i}^{K}\right)^{\top}}{\sqrt{d}}\right) V W_{i}^{V}, \\
\operatorname{MHA}(Q, K, V) &= \operatorname{Concat}\left(\operatorname{head}_{1}, \dots, \operatorname{head}_{h}\right) W^{O},
\end{aligned}
\right. \tag{4}
$$

where $Q$ (Query), $K$ (Key), and $V$ (Value) are the inputs; $W_{i}^{Q}$, $W_{i}^{K}$, $W_{i}^{V}$ denote the corresponding linear projection weights; $\mathrm{Attn}(\cdot)$ is the scaled dot-product attention; Softmax is the softmax function; $d$ is the input dimension; and $\mathrm{Concat}(\cdot)$ stitches the outputs of heads $1 \sim h$ together.

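Eq. (4) can be sketched in NumPy as below. This is a generic single-layer multi-head attention, not the authors' implementation; the weight shapes and variable names are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mha(Q, K, V, Wq, Wk, Wv, Wo, h):
    """Multi-head attention per Eq. (4): project, attend per head, concat, project."""
    d = Q.shape[-1] // h                          # per-head dimension
    heads = []
    for i in range(h):
        sl = slice(i * d, (i + 1) * d)            # this head's slice of the weights
        q, k, v = Q @ Wq[:, sl], K @ Wk[:, sl], V @ Wv[:, sl]
        attn = softmax(q @ k.T / np.sqrt(d), axis=-1)
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1) @ Wo    # Concat(head_1..head_h) W^O

rng = np.random.default_rng(0)
D, h = 16, 4
Wq, Wk, Wv, Wo = (rng.normal(scale=0.1, size=(D, D)) for _ in range(4))
x = rng.normal(size=(10, D))                      # 10 tokens of dimension D
out = mha(x, x, x, Wq, Wk, Wv, Wo, h)
assert out.shape == (10, D)
```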
To amplify the discrepancy between a fake image and its reconstructed counterpart, the sampled visual clues are used for the initial processing by the encoder. The encoder's process can be formulated as follows:

$$
R_{1} = \operatorname{LN}\left(\operatorname{MHA}\left(f_{s}, V_{H}, V_{H}\right)\right), \tag{5}
$$

$$
R_{2} = \operatorname{LN}\left(\operatorname{MHA}\left(R_{1}, V_{H}, V_{H}\right)\right), \tag{6}
$$

where $\operatorname{LN}(\cdot)$ denotes Layer Normalization. Then, the encoder's outputs, used as queries, are fed into the decoder to obtain the final reconstructed features. The decoder mirrors the encoder and performs the following operations:

$$
R_{3} = \operatorname{LN}\left(\operatorname{MHA}\left(R_{2}, R_{2}, R_{2}\right)\right), \tag{7}
$$

$$
R_{e} = \operatorname{LN}\left(\operatorname{MHA}\left(R_{1}, R_{3}, R_{3}\right)\right). \tag{8}
$$

During the reconstruction process, we compute the reconstruction loss $\mathcal{L}_r$ only between the real input features and their reconstructed counterparts $R_e$ within a mini-batch:

$$
\mathcal{L}_{r} = \frac{1}{B} \sum_{i=1}^{B} \operatorname{MSE}\left(R_{e}, V_{H}\right), \tag{9}
$$

where $\operatorname{MSE}(\cdot,\cdot)$ is the mean squared error. Minimizing $\mathcal{L}_r$ encourages preserving the completeness and richness of the visual semantic concept space and highlights the concept-related forgery features. Given the reconstructed features $R_{f}$ and the original features $f_{r}$, the reconstruction difference map can be formally expressed as:

$$
\mathcal{D}_{s} = \left| R_{f} - f_{r} \right|, \tag{10}
$$

where $|\cdot|$ denotes the absolute value function.

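The encoder-decoder reconstruction of Eqs. (5)-(8) and a difference map in the spirit of Eq. (10) can be sketched as below. For brevity this uses a single attention head without learned projections and computes the difference against the sampled clues; the paper's module uses full MHA, and all helper names and shapes here are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn(q, k, v):
    """Single-head stand-in for MHA (learned projections omitted)."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1) @ v

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def reconstruct(f_s, V_H):
    """Eqs. (5)-(8): sampled clues query the features; the decoder rebuilds them."""
    R1 = layer_norm(attn(f_s, V_H, V_H))   # Eq. (5)
    R2 = layer_norm(attn(R1, V_H, V_H))    # Eq. (6)
    R3 = layer_norm(attn(R2, R2, R2))      # Eq. (7)
    return layer_norm(attn(R1, R3, R3))    # Eq. (8)

rng = np.random.default_rng(0)
f_s = rng.normal(size=(4, 16))             # M=4 sampled clues, D=16
V_H = rng.normal(size=(32, 16))            # N=32 high-level visual features
R_e = reconstruct(f_s, V_H)
D_s = np.abs(R_e - f_s)                    # Eq. (10)-style absolute difference map
assert R_e.shape == (4, 16) and (D_s >= 0).all()
```

The key structural point the sketch preserves is that the clues $f_s$ act as queries throughout, so the output stays in clue-shaped coordinates while drawing its content from $V_H$.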
# 3.3. Low-level Forgery Feature Enhancement

Figure 4. The curve of the exponential inverse. In the "fast" interval, the value drops sharply; in the "low" interval, the curve flattens out, decaying towards 0.

Existing methods based on pre-trained vision-language models [26, 32] overlook the importance of concept weakly-related features. We believe that a thorough alignment between the visual semantic concept space and the forgery space should include the exploration of concept weakly-related forgery features. To eliminate redundant forgery features, we propose a novel feature enhancement that refines low-level forgery features. Guided by the reconstruction difference map, our detector extracts multi-scale features with markedly enhanced robustness and effectiveness. As shown in Fig. 3, the enhancer follows the typical architecture of a convolutional network: repeated convolutions, each followed by batch normalization (BN) and a rectified linear unit (ReLU). For a given stage $n$, $F(n)$ ($n = 1, 2, 3$) denotes its output features. We then deconvolve the semantic difference map $D_{s}$ to the same shape as $F(n)$ and perform pixel-wise multiplication with $F(n)$ to obtain $F'(n)$:

$$
F^{\prime}(n) = \operatorname{deconv}(F(n)) \otimes D_{s}, \tag{11}
$$

where $\otimes$ is element-wise multiplication, $\operatorname{deconv}(\cdot)$ is the deconvolution operation, and $F^{\prime}(n)$ is the low-level feature aggregated with semantic information. To further enhance the reliability of the extracted features, we compute an adaptive weight coefficient $\frac{1}{e_n}$ that indicates the importance of $D_{s}$ to $F(n)$:

$$
\frac{1}{e_{n}} = \frac{1}{e^{\left| F^{\prime}(n) - F(n) \right|}}. \tag{12}
$$

Here we explain the role of the exponential inverse through Fig. 4. As $x$ grows, the curve of $e^{-x}$ becomes flatter. Therefore, in the "fast" interval, forgery features with a significant divergence from the semantic difference map are assigned smaller weights, which mobilizes the network to capture concept strongly-related features. In the "low" interval, however, features strongly associated with forgery avoid being misguided by semantic concepts, meaning the order of importance is reversed. Next, we obtain the attended output features $F_{low}$

<table><tr><td rowspan="2">Methods</td><td rowspan="2">Ref</td><td colspan="6">GAN</td><td rowspan="2">Deep fakes</td><td colspan="2">Low level</td><td colspan="2">Perceptual loss</td><td rowspan="2">Guided</td><td colspan="3">LDM</td><td colspan="3">Glide</td><td rowspan="2">Dalle</td><td rowspan="2">mAP</td></tr><tr><td>Pro-GAN</td><td>Cycle-GAN</td><td>Big-GAN</td><td>Style-GAN</td><td>Gau-GAN</td><td>Star-GAN</td><td>SITD</td><td>SAN</td><td>CRN</td><td>IMLE</td><td>200 Steps</td><td>200 w/CFG</td><td>100 Steps</td><td>100/27</td><td>50/27</td><td>100/10</td></tr><tr><td>CNN-Spot</td><td>CVPR2020</td><td>100.0</td><td>93.47</td><td>84.50</td><td>99.54</td><td>89.49</td><td>98.15</td><td>89.02</td><td>73.75</td><td>59.47</td><td>98.24</td><td>98.40</td><td>73.72</td><td>70.62</td><td>71.00</td><td>70.54</td><td>80.65</td><td>84.91</td><td>82.07</td><td>70.59</td><td>83.58</td></tr><tr><td>PatchFor</td><td>ECCV2020</td><td>80.88</td><td>72.84</td><td>71.66</td><td>85.75</td><td>65.99</td><td>69.25</td><td>76.55</td><td>76.19</td><td>76.34</td><td>74.52</td><td>68.52</td><td>75.03</td><td>87.10</td><td>86.72</td><td>86.40</td><td>85.37</td><td>83.73</td><td>78.38</td><td>75.67</td><td>77.73</td></tr><tr><td>Co-occurrence</td><td>Elect.Imag.</td><td>99.74</td><td>80.95</td><td>50.61</td><td>98.63</td><td>53.11</td><td>67.99</td><td>59.14</td><td>68.98</td><td>60.42</td><td>73.06</td><td>87.21</td><td>70.20</td><td>91.21</td><td>89.02</td><td>92.39</td><td>89.32</td><td>88.35</td><td>82.79</td><td>80.96</td><td>78.11</td></tr><tr><td>Freq-spec</td><td>WIFS2019</td><td>55.39</td><td>100.0</td><td>75.08</td><td>55.11</td><td>66.08</td><td>100.0</td><td>45.18</td><td>47.46</td><td>57.12</td><td>53.61</td><td>50.98</td><td>57.72</td><td>77.72</td><td>77.25</td><td>76.47</td><td>68.58</td><td>64.58</td><td>61.92</td><td>67.77</td><td>66.21</td></tr><tr><td>Dire</td><td>ICCV2023</td><td>100.0</td><td>83.59</td><td>81.50</td><td>96.50</td><td>81.70</td><td>99.88</td><td>95.73</td><td>62.51</td><td>69.98</td><td>97.31</td><td>98.62</td><td>79.53</td><td>75.52</td><td>73.42</td><td>76.45</td><td>86.28</td><td>89.00</td><td>88.34</td><td>51.35</td><td>83.54</td></tr><tr><td>UnivFD</td><td>CVPR2023</td><td>100.0</td><td>99.46</td><td>99.59</td><td>97.24</td><td>99.98</td><td>99.60</td><td>82.45</td><td>61.32</td><td>79.02</td><td>96.72</td><td>99.00</td><td>87.77</td><td>99.14</td><td>92.15</td><td>99.17</td><td>94.74</td><td>95.34</td><td>94.57</td><td>97.15</td><td>93.38</td></tr><tr><td>NPR</td><td>CVPR2024</td><td>100.0</td><td>99.50</td><td>96.50</td><td>99.80</td><td>96.80</td><td>100.0</td><td>92.20</td><td>73.10</td><td>78.70</td><td>87.20</td><td>64.80</td><td>65.80</td><td>99.80</td><td>99.80</td><td>99.80</td><td>99.70</td><td>99.80</td><td>99.80</td><td>98.60</td><td>92.19</td></tr><tr><td>FatFormer</td><td>CVPR2024</td><td>100.0</td><td>100.0</td><td>99.98</td><td>99.75</td><td>100.0</td><td>100.0</td><td>97.99</td><td>97.94</td><td>81.23</td><td>99.84</td><td>99.93</td><td>91.92</td><td>99.83</td><td>99.22</td><td>99.89</td><td>99.27</td><td>99.50</td><td>99.33</td><td>99.84</td><td>98.18</td></tr><tr><td>Ours</td><td></td><td>100.0</td><td>99.77</td><td>99.93</td><td>99.48</td><td>99.98</td><td>99.97</td><td>97.23</td><td>97.91</td><td>93.10</td><td>99.79</td><td>99.96</td><td>92.06</td><td>99.88</td><td>98.95</td><td>99.92</td><td>98.06</td><td>98.29</td><td>97.73</td><td>99.81</td><td>98.51</td></tr></table>

Table 1. Average precision comparisons with different methods on the UnivFD dataset. We replicate the results of CNN-Spot, PatchFor, Co-occurrence, Freq-spec, and UnivFD from the paper [32]. In addition, we obtained the results for Dire, NPR, and FatFormer using either the official pre-trained models or our re-implemented versions. Red and underline indicate the best and the second-best result, respectively.

<table><tr><td rowspan="2">Methods</td><td rowspan="2">Ref</td><td colspan="6">GAN</td><td rowspan="2">Deep fakes</td><td colspan="2">Low level</td><td colspan="2">Perceptual loss</td><td rowspan="2">Guided</td><td colspan="3">LDM</td><td colspan="3">Glide</td><td rowspan="2">Dalle</td><td rowspan="2">Avg-acc</td></tr><tr><td>Pro-GAN</td><td>Cycle-GAN</td><td>Big-GAN</td><td>Style-GAN</td><td>Gau-GAN</td><td>Star-GAN</td><td>SITD</td><td>SAN</td><td>CRN</td><td>IMLE</td><td>200 Steps</td><td>200 w/CFG</td><td>100 Steps</td><td>100/27</td><td>50/27</td><td>100/10</td></tr><tr><td>CNN-Spot</td><td>CVPR2020</td><td>99.99</td><td>85.20</td><td>70.20</td><td>85.70</td><td>78.95</td><td>91.70</td><td>53.47</td><td>66.67</td><td>48.69</td><td>86.31</td><td>86.26</td><td>60.07</td><td>54.03</td><td>54.96</td><td>54.14</td><td>60.78</td><td>63.80</td><td>65.66</td><td>55.58</td><td>69.58</td></tr><tr><td>PatchFor</td><td>ECCV2020</td><td>75.03</td><td>68.97</td><td>68.47</td><td>79.16</td><td>64.23</td><td>63.94</td><td>75.54</td><td>75.14</td><td>75.28</td><td>72.33</td><td>55.30</td><td>67.41</td><td>76.50</td><td>76.10</td><td>75.77</td><td>74.81</td><td>73.28</td><td>68.52</td><td>67.91</td><td>71.24</td></tr><tr><td>Co-occurrence</td><td>Elect.Imag.</td><td>97.70</td><td>63.15</td><td>53.75</td><td>92.50</td><td>51.10</td><td>54.70</td><td>57.10</td><td>63.06</td><td>55.85</td><td>65.65</td><td>65.80</td><td>60.50</td><td>70.70</td><td>70.55</td><td>71.00</td><td>70.25</td><td>69.60</td><td>69.90</td><td>67.55</td><td>66.86</td></tr><tr><td>Freq-spec</td><td>WIFS2019</td><td>49.90</td><td>99.90</td><td>50.50</td><td>49.90</td><td>50.30</td><td>99.70</td><td>50.10</td><td>50.00</td><td>48.00</td><td>50.60</td><td>50.10</td><td>50.90</td><td>50.40</td><td>50.40</td><td>50.30</td><td>51.70</td><td>51.40</td><td>50.40</td><td>50.00</td><td>55.45</td></tr><tr><td>Dire</td><td>ICCV2023</td><td>99.86</td><td>73.47</td><td>60.68</td><td>72.39</td><td>65.15</td><td>93.60</td><td>88.86</td><td>52.78</td><td>56.39</td><td>90.07</td><td>94.05</td><td>61.05</td><td>59.35</td><td>59.95</td><td>60.65</td><td>69.30</td><td>72.70</td><td>71.00</td><td>52.75</td><td>71.19</td></tr><tr><td>UnivFD</td><td>CVPR2023</td><td>100.0</td><td>98.50</td><td>94.50</td><td>82.00</td><td>99.50</td><td>97.00</td><td>66.60</td><td>63.00</td><td>57.50</td><td>59.50</td><td>72.00</td><td>70.03</td><td>94.19</td><td>73.76</td><td>94.36</td><td>79.07</td><td>79.85</td><td>78.14</td><td>86.78</td><td>81.38</td></tr><tr><td>NPR</td><td>CVPR2024</td><td>99.80</td><td>92.00</td><td>89.50</td><td>96.30</td><td>87.60</td><td>99.70</td><td>79.40</td><td>61.40</td><td>70.60</td><td>74.50</td><td>57.10</td><td>55.23</td><td>97.40</td><td>98.70</td><td>97.90</td><td>97.00</td><td>97.90</td><td>97.00</td><td>88.80</td><td>86.20</td></tr><tr><td>FatFormer</td><td>CVPR2024</td><td>99.89</td><td>99.36</td><td>99.50</td><td>97.12</td><td>99.43</td><td>99.75</td><td>93.25</td><td>81.39</td><td>68.04</td><td>69.47</td><td>69.47</td><td>76.00</td><td>98.55</td><td>94.85</td><td>98.60</td><td>94.30</td><td>94.60</td><td>94.15</td><td>98.70</td><td>90.86</td></tr><tr><td>Ours</td><td></td><td>99.88</td><td>95.76</td><td>96.70</td><td>98.08</td><td>98.46</td><td>99.17</td><td>91.82</td><td>83.61</td><td>77.45</td><td>95.40</td><td>96.47</td><td>79.55</td><td>98.05</td><td>94.60</td><td>98.25</td><td>92.20</td><td>93.35</td><td>91.80</td><td>98.00</td><td>93.61</td></tr></table>

Table 2. Accuracy comparisons with different methods on the UnivFD dataset.

by the residual connection:

$$
F_{low}(n) = F^{\prime}(n) + \frac{F^{\prime}(n)}{e_{n}}. \tag{13}
$$

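A small numeric sketch of the exponential-inverse weighting and residual of Eqs. (12)-(13). The deconvolution of Eq. (11) is omitted and `D_s` is assumed already resized to match `F(n)`; the function name is illustrative:

```python
import numpy as np

def enhance(F_n, D_s):
    """Eqs. (12)-(13): weight features by the exponential inverse of their
    divergence from the semantic difference map, then add back residually."""
    F_prime = F_n * D_s                       # Eq. (11) with deconv omitted
    inv_e = np.exp(-np.abs(F_prime - F_n))    # 1 / e^{|F'(n) - F(n)|}
    return F_prime + F_prime * inv_e          # Eq. (13)

F_n = np.array([[1.0, 2.0], [3.0, 4.0]])
D_s = np.array([[0.5, 1.0], [1.0, 0.1]])
out = enhance(F_n, D_s)
# Where F'(n) == F(n), the weight is exactly 1 and the residual doubles it.
assert np.isclose(out[0, 1], 4.0)
```

Entries where multiplication by `D_s` barely changes the feature get the full residual boost, while strongly diverging entries are damped, mirroring the "fast"/"low" interval discussion above.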
To optimize the anchor features $f_a$ of the enhancer, the following triplet loss [35] is employed to pull positive samples $f_{p}$ closer while pushing negative samples $f_{n}$ apart:

$$
\mathcal{L}_{tri} = \max\left(0, d\left(f_{p}, f_{a}\right) - d\left(f_{n}, f_{a}\right) + \alpha\right), \tag{14}
$$

where $d(\cdot, \cdot)$ represents the Euclidean distance between samples and $\alpha$ is the margin.

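Eq. (14) reduces to a few lines; the toy vectors below just illustrate the hinge behaviour at two margins:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=8.0):
    """Eq. (14): hinge on d(a, p) - d(a, n) + margin, with Euclidean distance."""
    d_ap = np.linalg.norm(f_a - f_p)
    d_an = np.linalg.norm(f_a - f_n)
    return max(0.0, d_ap - d_an + alpha)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])        # positive: distance 1 from the anchor
n = np.array([10.0, 0.0])       # negative: distance 10 from the anchor
assert triplet_loss(a, p, n, alpha=8.0) == 0.0   # already separated by the margin
assert triplet_loss(a, p, n, alpha=10.0) == 1.0  # violates a larger margin
```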
On top of that, we concatenate LoRA-CLIP's CLS token $T_{CLS}$ with $F_{low}$ along the same dimension to yield the refined representation $F_{out}$. This ensures that forged features remain distinctive across different semantic identities while preserving uniformity within similar semantic identities. The process is formulated as:

$$
F_{out} = F_{low} \| T_{CLS}, \tag{15}
$$

where $F_{out}$ is fed into a linear classifier for binary classification. Eventually, the total loss function $\mathcal{L}$ of the proposed framework is defined as:

$$
\mathcal{L} = \mathcal{L}_{bce} + \lambda_{1} \mathcal{L}_{tri} + \lambda_{2} \mathcal{L}_{r}, \tag{16}
$$

where $\mathcal{L}_{bce}$ denotes the binary cross-entropy loss and $\mathcal{L}_{tri}$ is the triplet loss. $\mathcal{L}_{tri}$ and $\mathcal{L}_r$ are scaled by the hyperparameters $\lambda_{1}$ and $\lambda_{2}$, respectively.

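Eq. (16) is a plain weighted sum; the $\lambda$ values below are placeholders, not the paper's settings:

```python
def total_loss(l_bce, l_tri, l_r, lam1=0.1, lam2=0.1):
    """Eq. (16): classification loss plus scaled triplet and reconstruction terms.
    lam1/lam2 defaults here are placeholders, not the paper's hyperparameters."""
    return l_bce + lam1 * l_tri + lam2 * l_r

assert total_loss(1.0, 2.0, 3.0, lam1=0.5, lam2=0.5) == 3.5
```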
# 4. Experiments

# 4.1. Experiment Setups

Datasets: We follow the protocol described in [32], using ProGAN's real and fake images as training data. Additionally, we adopt the protocol from [5], where the training data consists of fake Stable Diffusion v1 images [52] and random real LAION images [41]. The UnivFD dataset [32] covers a broad range of generative models, primarily GANs and diffusion models, such as ProGAN [18], StyleGAN [19], BigGAN [4], CycleGAN [59], StarGAN [10], GauGAN [46], CRN [9], IMLE [22], SAN [11], SITD [7], DeepFakes [20], Guided [12], Glide [31], LDM [40], and DALL-E [37]. The SynRIS dataset [5] is designed to avoid bias toward any specific topic, theme, or style and contains high-fidelity images generated by text-to-image models, such as Kandinsky2 [38], Kandinsky3 [1], PixArt-$\alpha$ [8], SDXL-DPO [49], SDXL [34], SegMoE [23], SSD-1B [42], Stable-Cascade [33], Segmind-Vega [15], Würstchen2 [33], Midjourney [29], DALL-E 3 [3],

<table><tr><td>Methods</td><td>CNN-Spot</td><td>Freq-spec</td><td>Dire</td><td>UnivFD</td><td>NPR</td><td>FatFormer</td><td>FakeInversion</td><td>PatchFor</td><td>Ours</td></tr><tr><td>Kandinsky2</td><td>60.0</td><td>57.0</td><td>71.6</td><td>56.2</td><td>97.5</td><td>75.6</td><td>69.9</td><td>53.5</td><td>97.1</td></tr><tr><td>Kandinsky3</td><td>65.9</td><td>45.7</td><td>74.9</td><td>61.4</td><td>93.7</td><td>80.1</td><td>74.3</td><td>51.4</td><td>94.8</td></tr><tr><td>PixArt-α</td><td>62.7</td><td>56.4</td><td>81.5</td><td>64.7</td><td>89.5</td><td>75.3</td><td>73.0</td><td>49.6</td><td>90.2</td></tr><tr><td>SDXL-DPO</td><td>84.3</td><td>69.8</td><td>69.9</td><td>70.2</td><td>97.6</td><td>86.0</td><td>88.1</td><td>54.5</td><td>95.5</td></tr><tr><td>Segmind-Vega</td><td>74.2</td><td>65.3</td><td>81.0</td><td>62.3</td><td>97.1</td><td>82.4</td><td>81.1</td><td>53.1</td><td>97.0</td></tr><tr><td>SDXL</td><td>81.4</td><td>61.2</td><td>86.2</td><td>66.3</td><td>96.0</td><td>85.1</td><td>80.7</td><td>63.9</td><td>97.6</td></tr><tr><td>Seg-MoE</td><td>66.3</td><td>54.6</td><td>71.9</td><td>62.0</td><td>93.6</td><td>70.8</td><td>71.3</td><td>49.8</td><td>97.6</td></tr><tr><td>SSD-1B</td><td>72.6</td><td>67.8</td><td>79.8</td><td>62.8</td><td>99.6</td><td>70.1</td><td>79.4</td><td>61.2</td><td>99.8</td></tr><tr><td>Stable-Cascade</td><td>70.5</td><td>62.1</td><td>74.1</td><td>68.2</td><td>97.6</td><td>81.6</td><td>74.9</td><td>57.4</td><td>99.2</td></tr><tr><td>Würstchen2</td><td>61.0</td><td>63.3</td><td>74.2</td><td>69.7</td><td>90.9</td><td>72.9</td><td>70.5</td><td>47.2</td><td>98.2</td></tr><tr><td>Midjourney</td><td>63.0</td><td>50.9</td><td>72.4</td><td>59.2</td><td>58.5</td><td>73.6</td><td>66.4</td><td>53.7</td><td>90.0</td></tr><tr><td>Playground</td><td>58.2</td><td>52.3</td><td>67.9</td><td>58.7</td><td>93.1</td><td>81.4</td><td>62.5</td><td>54.1</td><td>92.8</td></tr><tr><td>DALL-E3</td><td>71.6</td><td>59.9</td><td>80.8</td><td>48.0</td><td>69.1</td><td>79.2</td><td>75.9</td><td>50.1</td><td>85.9</td></tr><tr><td>Average</td><td>68.6</td><td>58.9</td><td>75.9</td><td>62.3</td><td>90.3</td><td>78.0</td><td>74.5</td><td>53.8</td><td>95.1</td></tr></table>

Table 3. AUROC comparisons with different methods on the SynRIS dataset. We retrieve the results of CNN-Spot, UnivFD, and FakeInversion from [5] and obtain the results for Dire, NPR, FatFormer, and PatchFor using re-implemented models. Red and underline indicate the best and the second-best result, respectively.

and Playground [21].

Metrics: As standard evaluation metrics, average precision (AP), accuracy (ACC), and AUROC are used to measure the effectiveness of different methods.

Baselines: In our experiments, we perform thorough comparisons with state-of-the-art methods, as follows: 1) CNN-Spot [50]: relies on images from only one CNN generator. 2) PatchFor [6]: performs detection at the patch level. 3) Co-occurrence [30]: converts input images into co-occurrence matrices for classification. 4) Freq-spec [57]: employs the frequency spectrum of images. 5) Dire [51]: exploits the error between an input image and its reconstructed counterpart. 6) UnivFD [32]: uses a pre-trained vision-language model to determine the authenticity of images. 7) NPR [45]: captures generalized artifacts from the local interdependence among image pixels. 8) FatFormer [26]: extracts forgery-adaptive features based on UnivFD. 9) FakeInversion [5]: employs text-conditioned inversion maps extracted from Stable Diffusion.

Implementation details: Our training and testing settings are adapted from the previous study [26] with several key modifications. Specifically, early stopping is employed during training, with an initial learning rate of $1 \times 10^{-4}$ and a batch size of 32. Additionally, the LoRA layers are configured with hyperparameters $lora_{r} = 6$, $lora_{\alpha} = 6$, and a dropout rate of 0.8,

<table><tr><td rowspan="2">#</td><td rowspan="2">STS module</td><td rowspan="2">CFDL module</td><td rowspan="2">feature enhancement</td><td colspan="2">UnivFD Dataset</td></tr><tr><td>$ap_m$</td><td>$acc_m$</td></tr><tr><td>1</td><td></td><td>✓</td><td></td><td>97.37</td><td>81.64</td></tr><tr><td>2</td><td></td><td>✓</td><td>✓</td><td>97.41</td><td>90.17</td></tr><tr><td>3</td><td>✓</td><td>✓</td><td></td><td>97.39</td><td>89.98</td></tr><tr><td>4</td><td>✓</td><td>✓</td><td>✓</td><td>98.52</td><td>93.61</td></tr></table>

Table 4. Ablation study of the proposed modules on the UnivFD dataset. We show the mean accuracy $(acc_m)$ and average precision $(ap_m)$. Red and underline indicate the best and the second-best result, respectively.

while $\alpha$ is set to 8.0. The proposed method is implemented in PyTorch on two NVIDIA RTX A6000 GPUs.

# 4.2. Comparison Results

The UnivFD dataset includes a diverse range of models, allowing for a comprehensive evaluation of our method across both GAN and diffusion generative models. In addition, the SynRIS dataset provides images generated by cutting-edge generative models. The overall experimental results are presented in Tab. 1, Tab. 2, and Tab. 3.

Results on UnivFD dataset. The results show that our proposed method achieves superior performance compared to UnivFD and FatFormer. Notably, without the biased interpretation introduced by coarse-grained text prompts, SDD surpasses the latest state-of-the-art method FatFormer by $0.34\%$ in mean AP $(ap_{m})$ and $2.75\%$ in mean accuracy $(acc_{m})$. Moreover, compared with the methods in Tab. 1 and Tab. 2 that rely on relatively monotonous forgery features, our approach outperforms all of them by a large margin. This evidence indicates that effectively combining visual concepts and forgery features helps the model extract sufficient forgery patterns and eliminate superfluous features.

Results on SynRIS dataset. As shown in Tab. 3, when confronted with high-fidelity images generated by text-to-image models, methods leveraging pre-trained vision-language models, such as UnivFD and FatFormer, lose their competitiveness. In contrast, NPR, which focuses on neighboring pixel relationships, retains its edge. We assume that current generative methods grasp the relationships between visual information and semantic concepts but cannot refine local forgery details at the pixel level. Considering that excessive reliance on concepts misses abnormal pixel arrangements while focusing on monotonous forgery patterns causes overfitting, our detector, which emphasizes low-level features together with visual concepts, is trained on lower-fidelity fake images generated by Stable Diffusion [52] to capture concept-specific lacunae. We follow the evaluation protocol from SynRIS [5]. Our detector achieves an impressive mean AUROC of $95.1\%$, surpassing the state-of-the-art method by $4.8\%$, demonstrating its superior ability to tackle the challenges posed by evolving generative models.


Figure 5. Performance of functions on adaptive weights.

Figure 6. T-SNE visualization of real and fake images [47]. The feature space is based on our classifier. Each plot randomly samples 500 real and 500 fake images.

Figure 7. Showcase of attention on images input into our model.

# 4.3. Ablation Study

We perform comprehensive ablation studies on the UnivFD dataset under the original experimental configurations, reporting the mean accuracy $(acc_{m})$ and mean average precision $(ap_{m})$ as the primary evaluation metrics.

Effect of Each Component: We study the effects of removing the STS module, the CFDL module, and the feature enhancement from our method. The results, presented in Tab. 4, demonstrate that these components are essential for improving generalization to unseen models. This empirical finding suggests that the CFDL effectively captures forgery discrepancies associated with semantic concepts, while the enhancer module plays a crucial role in identifying robust forgery artifacts. The collaboration of all modules enhances the model's ability to distinguish between real and fake images.

Effect of function on adaptive weights: To check how well the proposed function works in SDD, we select two conventional functions for comparison: $f(x) = |x|$ and $f(x) = x^2$. The corresponding results are presented in Fig. 5. Our proposed function yields improvements in both $ap_m$ and $acc_m$ over the selected functions, demonstrating that it is conducive to capturing robust and distinctive forgery features.

Visualization of learned latent space: As shown in Fig. 6, the input images can be distinctly categorized into two clusters: real and fake. Nevertheless, why does the divided boundary of ProGAN appear ambiguous in contrast to other models? And why do the real clusters of CycleGAN and StyleGAN separate from each other? We attribute both to the influence of visual semantic concepts. Under the supervision of visual semantic concepts, the learned boundary of ProGAN is more complex and nuanced, rather than a simple straight line or curve. Similarly, the images generated by StyleGAN and CycleGAN are projected into their corresponding semantic concept distributions and then separated from the real images based on the visual semantic concepts.

Visualization of attention on images: We apply Class Activation Mapping (CAM) [58] to visualize the learned representations. Fig. 7 illustrates that, with the aid of semantic information, our model can focus on different regions of fake images, including the background, local object regions, and marginal details. This suggests that our fine-grained model captures intricate discrepancies that generalize to unseen models. Notably, the real images almost always show no forgery discrepancy regions, which demonstrates the effectiveness of the reconstruction loss in the forgery detection task.

# 5. Conclusion

In this paper, we propose a novel method, SDD, for generalizable forgery image detection. The findings show that our method establishes a new state of the art in detecting images generated by generative models from different periods, underscoring its robustness and superior generalization capability. To the best of our knowledge, among pre-trained vision-language paradigms, our approach is the first to rely solely on visual information, without text prompts. Based on the experimental results, we conclude that leveraging sampled tokens and reconstruction techniques effectively aligns the visual semantic concept space with the forgery space. Additionally, refining low-level forgery features under the supervision of visual semantic concepts enhances forgery detection performance. Although SDD performs well across various generative methods, there is still room for improvement as generative technologies continue to advance. Future research will explore this further.

# Acknowledgement

This work is supported by the National Natural Science Foundation of China (Grant Nos. 62372238 and 62476133).

# References

[1] V.Ya. Arkhipkin, Andrei Filatov, Viacheslav Vasilev, Anastasia Maltseva, Said Azizov, Igor Pavlov, Julia Agafonova, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky 3.0 technical report. ArXiv, abs/2312.03511, 2023. 6
|
| 280 |
+
[2] Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. ArXiv, abs/2107.03006, 2021. 1
|
| 281 |
+
[3] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. Improving image generation with better captions. 6
|
| 282 |
+
[4] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. ArXiv, abs/1809.11096, 2018. 6
|
| 283 |
+
[5] George Cazenavette, Avneesh Sud, Thomas Leung, and Ben Usman. Fake inversion: Learning to detect images from unseen text-to-image models by inverting stable diffusion. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10759-10769, 2024. 2, 3, 6, 7
|
| 284 |
+
[6] Lucy Chai, David Bau, Ser-Nam Lim, and Phillip Isola. What makes fake images detectable? understanding properties that generalize. In European Conference on Computer Vision, 2020. 1, 7
|
| 285 |
+
[7] Cheng Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3291-3300, 2018. 6
|
| 286 |
+
[8] Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, and Zhenguo Li. Pixart-$\delta$: Fast and controllable image generation with latent consistency models. ArXiv, abs/2401.05252, 2024. 6
|
| 287 |
+
[9] Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 1520-1529, 2017. 6
|
| 288 |
+
[10] Yunjey Choi, Min-Je Choi, Mun Su Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8789-8797, 2017. 6
|
| 289 |
+
[11] Tao Dai, Jianrui Cai, Yongbing Zhang, Shutao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11057-11066, 2019. 6
|
| 290 |
+
[12] Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis. ArXiv, abs/2105.05233, 2021. 6
|
| 291 |
+
|
| 292 |
+
[13] Joel Cameron Frank, Thorsten Eisenhofer, Lea Schonherr, Asja Fischer, Dorothea Kolossa, and Thorsten Holz. Leveraging frequency analysis for deep fake image recognition. ArXiv, abs/2003.08685, 2020. 1, 3
|
| 293 |
+
[14] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63:139-144, 2014. 1
|
| 294 |
+
[15] Yatharth Gupta, Vishnu V. Jaddipal, Harish Prabhala, Sayak Paul, and Patrick von Platen. Progressive knowledge distillation of stable diffusion xl using layer level loss. ArXiv, abs/2401.02677, 2024. 6
|
| 295 |
+
[16] Zhizhong Han, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. Multi-angle point cloud-vae: Unsupervised feature learning for 3d point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10441-10450. IEEE, 2019. 3
|
| 296 |
+
[17] J. Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. ArXiv, abs/2106.09685, 2021. 4
|
| 297 |
+
[18] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. ArXiv, abs/1710.10196, 2017. 6
|
| 298 |
+
[19] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4396-4405, 2018. 6
|
| 299 |
+
[20] Prabhat Kumar, Mayank Vatsa, and Richa Singh. Detecting face2face facial reenactment in videos. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2578-2586, 2020. 6
|
| 300 |
+
[21] Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, and Suhail Doshi. Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation. ArXiv, abs/2402.17245, 2024. 7
|
| 301 |
+
[22] Ke Li, Tianhao Zhang, and Jitendra Malik. Diverse image synthesis from semantic layouts via conditional imle. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4219-4228, 2018. 6
|
| 302 |
+
[23] Zhenghong Li, Hao Chen, Jiangjiang Wu, Jun Li, and Ning Jing. Segmind: Semisupervised remote sensing image semantic segmentation with masked image modeling and contrastive learning method. IEEE Transactions on Geoscience and Remote Sensing, 61:1-17, 2023. 6
|
| 303 |
+
[24] Bo Liu, Fan Yang, Xiuli Bi, Bin Xiao, Weisheng Li, and Xinbo Gao. Detecting generated images by real images. In European Conference on Computer Vision, 2022. 3
|
| 304 |
+
[25] Honggu Liu, Xiaodan Li, Wenbo Zhou, Yuefeng Chen, Yuan He, Hui Xue, Weiming Zhang, and Nenghai Yu. Spatial-phase shallow learning: Rethinking face forgery detection in frequency domain. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 772-781, 2021. 2
|
| 305 |
+
[26] Huan Liu, Zichang Tan, Chuangchuang Tan, Yunchao Wei, Yao Zhao, and Jingdong Wang. Forgery-aware adaptive transformer for generalizable synthetic image detection. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10770-10780, 2023. 1, 2, 3, 4, 5, 7
|
| 308 |
+
[27] Xinhai Liu, Xinchen Liu, Zhizhong Han, and Yu-Shen Liu. Spu-net: Self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization. IEEE Transactions on Image Processing, 31:4213-4226, 2020. 3
|
| 309 |
+
[28] Zhengzhe Liu, Xiaojuan Qi, Jiaya Jia, and Philip H. S. Torr. Global texture enhancement for fake face detection in the wild. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8057-8066, 2020. 3
|
| 310 |
+
[29] Midjourney. Midjourney, n.d. Accessed: 2025-03-04. 6
|
| 311 |
+
[30] Lakshmanan Nataraj, Tajuddin Manhar Mohammed, B. S. Manjunath, Shivkumar Chandrasekaran, Arjuna Flenner, Jawadul H. Bappy, and Amit K. Roy-Chowdhury. Detecting gan generated fake images using co-occurrence matrices. ArXiv, abs/1903.06836, 2019. 7
|
| 312 |
+
[31] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, 2021. 6
|
| 313 |
+
[32] Utkarsh Ojha, Yuheng Li, and Yong Jae Lee. Towards universal fake image detectors that generalize across generative models. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24480-24489, 2023. 1, 2, 3, 5, 6, 7
|
| 314 |
+
[33] Pablo Pernias, Dominic Rampas, Mats L. Richter, and Marc Aubreville. Wuerstchen: Efficient pretraining of text-to-image models. ArXiv, abs/2306.00637, 2023. 6
|
| 315 |
+
[34] Dustin Podell, Zion English, Kyle Lacey, A. Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. ArXiv, abs/2307.01952, 2023. 6
|
| 316 |
+
[35] Zequn Qin, Pengyi Zhang, Fei Wu, and Xi Li. Fcanet: Frequency channel attention networks. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 763-772, 2020. 6
|
| 317 |
+
[36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. 2
|
| 318 |
+
[37] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. ArXiv, abs/2102.12092, 2021. 6
|
| 319 |
+
[38] Anton Razzhigaev, Arseniy Shakhmatov, Anastasia Maltseva, V.Ya. Arkhipkin, Igor Pavlov, Ilya Ryabov, Angelina Kuts, Alexander Panchenko, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky: an improved text-to-image synthesis with image prior and latent diffusion. In Conference on Empirical Methods in Natural Language Processing, 2023. 6
|
| 320 |
+
[39] Jonas Ricker, Denis Lukovnikov, and Asja Fischer. Aeroblade: Training-free detection of latent diffusion images using autoencoder reconstruction error. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9130-9140, 2024. 2, 3
|
| 323 |
+
[40] Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674-10685, 2021. 6
|
| 324 |
+
[41] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Laion-5b: An open large-scale dataset for training next generation image-text models. ArXiv, abs/2210.08402, 2022. 6
|
| 325 |
+
[42] Segmind. Announcing ssd-1b: A leap in efficient t2i generation., 2023. 6
|
| 326 |
+
[43] Minghe Shen, Hongping Gan, Chao Ning, Yi Hua, and Tao Zhang. Transc: A transformer-based hybrid architecture for image compressed sensing. IEEE Transactions on Image Processing, 31:6991-7005, 2022. 2
|
| 327 |
+
[44] Chuangchuang Tan, Yao Zhao, Shikui Wei, Guanghua Gu, and Yunchao Wei. Learning on gradients: Generalized artifacts representation for gan-generated images detection. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12105-12114, 2023. 1, 2
|
| 328 |
+
[45] Chuangchuang Tan, Huan Liu, Yao Zhao, Shikui Wei, Guanghua Gu, Ping Liu, and Yunchao Wei. Rethinking the up-sampling operations in cnn-based generative network for generalizable deepfake detection. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28130-28139, 2024. 2, 7
|
| 329 |
+
[46] Devavrat Tomar, Manana Lortkipanidze, Guillaume Vray, Behzad Bozorgtabar, and Jean-Philippe Thiran. Self-attentive spatial adaptive normalization for cross-modality domain adaptation. IEEE Transactions on Medical Imaging, 40:2926-2938, 2021. 6
|
| 330 |
+
[47] Laurens van der Maaten and Geoffrey E. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9:2579-2605, 2008. 8
|
| 331 |
+
[48] Tim van Erven and Peter Harremoës. Rényi divergence and kullback-leibler divergence. IEEE Transactions on Information Theory, 60:3797-3820, 2012. 4
|
| 332 |
+
[49] Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq R. Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8228-8238, 2023. 6
|
| 333 |
+
[50] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A. Efros. Cnn-generated images are surprisingly easy to spot... for now. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8692-8701, 2019. 1, 7, 2, 3
|
| 334 |
+
[51] Zhendong Wang, Jianmin Bao, Wengang Zhou, Weilun Wang, Hezhen Hu, Hong Chen, and Houqiang Li. Dire for diffusion-generated image detection. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 22388-22398, 2023. 3, 7
|
| 337 |
+
[52] Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. In Annual Meeting of the Association for Computational Linguistics, 2022. 6, 7
|
| 338 |
+
[53] Syed Waqas Zamir, Aditya Arora, Salman Hameed Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Learning enriched features for fast image restoration and enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45:1934-1948, 2022. 2
|
| 339 |
+
[54] Maxime Zanella and Ismail Ben Ayed. Low-rank few-shot adaptation of vision-language models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1593-1603, 2024. 4
|
| 340 |
+
[55] Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. ArXiv, abs/2306.10012, 2023. 4, 5
|
| 341 |
+
[56] Lingzhi Zhang, Zhengjie Xu, Connelly Barnes, Yuqian Zhou, Qing Liu, He Zhang, Sohrab Amirghodsi, Zhe Lin, Eli Shechtman, and Jianbo Shi. Perceptual artifacts localization for image synthesis tasks. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7545-7556, 2023. 5
|
| 342 |
+
[57] Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in gan fake images. 2019 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6, 2019. 7
|
| 343 |
+
[58] Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2921-2929, 2015. 8
|
| 344 |
+
[59] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2242-2251, 2017. 6
|
2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8aeff7e917a54dd90c917afc13b29d7ab399174ecf1ec80c99187c549202ee48
|
| 3 |
+
size 748640
|
2025/Semantic Discrepancy-aware Detector for Image Forgery Identification/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_content_list.json
ADDED
|
@@ -0,0 +1,1520 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Semantic Equitable Clustering: A Simple and Effective Strategy for Clustering Vision Tokens",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
86,
|
| 8 |
+
130,
|
| 9 |
+
883,
|
| 10 |
+
174
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Qihang Fan $^{1,2}$ , Huaibo Huang $^{1*}$ , Mingrui Chen $^{1,2}$ , Ran He $^{1,2}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
243,
|
| 19 |
+
202,
|
| 20 |
+
723,
|
| 21 |
+
220
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China \n $^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China \nfanqihang.159@gmail.com, huaibo.huang@cripac.ia.ac.cn, pharmier@hust.edu.cn, rhe@nlpr.ia.ac.cn",
|
| 28 |
+
"bbox": [
|
| 29 |
+
112,
|
| 30 |
+
220,
|
| 31 |
+
854,
|
| 32 |
+
292
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
233,
|
| 42 |
+
325,
|
| 43 |
+
313,
|
| 44 |
+
342
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "The Vision Transformer (ViT) has gained prominence for its superior relational modeling prowess. However, its global attention mechanism's quadratic complexity poses substantial computational burdens. A common remedy spatially groups tokens for self-attention, reducing computational requirements. Nonetheless, this strategy neglects semantic information in tokens, possibly scattering semantically-linked tokens across distinct groups, thus compromising the efficacy of self-attention intended for modeling inter-token dependencies. Motivated by these insights, we introduce a fast and balanced clustering method, named Semantic Equitable Clustering (SEC). SEC clusters tokens based on their global semantic relevance in an efficient, straightforward manner. In contrast to traditional clustering methods requiring multiple iterations, our method achieves token clustering in a single pass. Additionally, SEC regulates the number of tokens per cluster, ensuring a balanced distribution for effective parallel processing on current computational platforms without necessitating further optimization. Capitalizing on SEC, we propose a versatile vision backbone, SECViT. Comprehensive experiments in image classification, object detection, instance segmentation, and semantic segmentation validate the effectiveness of SECViT. Moreover, SEC can be conveniently and swiftly applied to multimodal large language models (MLLM), such as LLaVA, to serve as a vision language connector, effectively accelerating the model's efficiency while maintaining unchanged or better performance.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
75,
|
| 53 |
+
358,
|
| 54 |
+
473,
|
| 55 |
+
782
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1. Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
76,
|
| 65 |
+
808,
|
| 66 |
+
209,
|
| 67 |
+
824
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Since its inception, the Vision Transformer (ViT)[11] has drawn considerable interest from the research community due to its robust modeling prowess. However, the quadratic",
|
| 74 |
+
"bbox": [
|
| 75 |
+
75,
|
| 76 |
+
833,
|
| 77 |
+
468,
|
| 78 |
+
878
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/bfedae319fa83a0de59aba8a9fed4488ae3931ea412df57b82603b7d4dda41be.jpg",
|
| 85 |
+
"image_caption": [
|
| 86 |
+
"(a) Window Partition",
|
| 87 |
+
"X Semantic Information $\\checkmark$ Cluster Efficiency $\\checkmark$ Equi-partition",
|
| 88 |
+
"Figure 1. Comparison among Window Partition, Dynamic Group by k-means, and Semantic Equitable Clustering. Our Semantic Equitable Clustering incorporates image semantics while maintaining efficient clustering, eliminating the need for iterative processes such as in k-means. Furthermore, it enables equipartitioning of tokens, promoting efficient GPU processing without necessitating additional CUDA optimization."
|
| 89 |
+
],
|
| 90 |
+
"image_footnote": [],
|
| 91 |
+
"bbox": [
|
| 92 |
+
503,
|
| 93 |
+
335,
|
| 94 |
+
620,
|
| 95 |
+
426
|
| 96 |
+
],
|
| 97 |
+
"page_idx": 0
|
| 98 |
+
},
|
| 99 |
+
{
|
| 100 |
+
"type": "image",
|
| 101 |
+
"img_path": "images/0d5a36a6d9ba18eca47eadf175213d7a8aec7ab9e2fefba6be9152e6d3af78d6.jpg",
|
| 102 |
+
"image_caption": [
|
| 103 |
+
"(b) k-means",
|
| 104 |
+
"$\\checkmark$ Semantic Information \n $\\times$ Cluster Efficiency \n $\\times$ Equi-partition",
|
| 105 |
+
"(c)Semantic Equitable Clustering"
|
| 106 |
+
],
|
| 107 |
+
"image_footnote": [],
|
| 108 |
+
"bbox": [
|
| 109 |
+
627,
|
| 110 |
+
335,
|
| 111 |
+
745,
|
| 112 |
+
426
|
| 113 |
+
],
|
| 114 |
+
"page_idx": 0
|
| 115 |
+
},
|
| 116 |
+
{
|
| 117 |
+
"type": "image",
|
| 118 |
+
"img_path": "images/6ca7c9ba783fe071e548b9dfb0324f7fa73d27c460c85350ab380b43ec772af5.jpg",
|
| 119 |
+
"image_caption": [
|
| 120 |
+
"$\\checkmark$ Semantic Information $\\checkmark$ Cluster Efficiency $\\checkmark$ Equi-partition"
|
| 121 |
+
],
|
| 122 |
+
"image_footnote": [],
|
| 123 |
+
"bbox": [
|
| 124 |
+
754,
|
| 125 |
+
335,
|
| 126 |
+
874,
|
| 127 |
+
426
|
| 128 |
+
],
|
| 129 |
+
"page_idx": 0
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"type": "text",
|
| 133 |
+
"text": "complexity of Self-Attention leads to significant computational overhead, thus constraining the practicality of ViT. A variety of strategies have been devised to alleviate this computational load, the most prevalent of which involves token grouping, thereby constraining the attention span of each token[10, 38, 44, 47].",
|
| 134 |
+
"bbox": [
|
| 135 |
+
496,
|
| 136 |
+
580,
|
| 137 |
+
890,
|
| 138 |
+
671
|
| 139 |
+
],
|
| 140 |
+
"page_idx": 0
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"type": "text",
|
| 144 |
+
"text": "Specifically, the Swin-Transformer [38] partitions tokens into multiple small windows, restricting token attention within each window. The CSWin-Transformer [10] adopts a cross-shaped grouping, endowing each token with a global receptive field. MaxViT [44] amalgamates window and grid attention, facilitating intra-window tokens to attend to their counterparts in other windows. However, these methods, solely reliant on spatial positioning, neglect token semantics, potentially restricting the self-attention's capacity to model semantic dependencies. To mitigate this, DGT [37] employs k-means clustering for query grouping, considering the semantic information of tokens for enhanced feature learning. Nonetheless, the iterative nature of k-means clustering and the potential for uneven token counts per cluster can impact the efficiency of parallel attention operations.",
|
| 145 |
+
"bbox": [
|
| 146 |
+
496,
|
| 147 |
+
674,
|
| 148 |
+
892,
|
| 149 |
+
902
|
| 150 |
+
],
|
| 151 |
+
"page_idx": 0
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"type": "header",
|
| 155 |
+
"text": "CVF",
|
| 156 |
+
"bbox": [
|
| 157 |
+
106,
|
| 158 |
+
2,
|
| 159 |
+
181,
|
| 160 |
+
42
|
| 161 |
+
],
|
| 162 |
+
"page_idx": 0
|
| 163 |
+
},
|
| 164 |
+
{
|
| 165 |
+
"type": "header",
|
| 166 |
+
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
|
| 167 |
+
"bbox": [
|
| 168 |
+
238,
|
| 169 |
+
0,
|
| 170 |
+
807,
|
| 171 |
+
46
|
| 172 |
+
],
|
| 173 |
+
"page_idx": 0
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"type": "page_footnote",
|
| 177 |
+
"text": "*Huaibo Huang is the corresponding author.",
|
| 178 |
+
"bbox": [
|
| 179 |
+
94,
|
| 180 |
+
886,
|
| 181 |
+
330,
|
| 182 |
+
898
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 0
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "page_number",
|
| 188 |
+
"text": "4019",
|
| 189 |
+
"bbox": [
|
| 190 |
+
482,
|
| 191 |
+
944,
|
| 192 |
+
514,
|
| 193 |
+
955
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 0
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "image",
|
| 199 |
+
"img_path": "images/82c9bce75d7bf1a1970ecdfcfbe91b46fd54fe2042e2030049cec92004ce354c.jpg",
|
| 200 |
+
"image_caption": [
|
| 201 |
+
"Figure 2. Left: Top-1 accuracy v.s. FLOPs on ImageNet-1K of recent SOTA models. Right: Comparison among different vision language connectors on LLaVA-1.5"
|
| 202 |
+
],
|
| 203 |
+
"image_footnote": [],
|
| 204 |
+
"bbox": [
|
| 205 |
+
76,
|
| 206 |
+
87,
|
| 207 |
+
467,
|
| 208 |
+
303
|
| 209 |
+
],
|
| 210 |
+
"page_idx": 1
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"type": "text",
|
| 214 |
+
"text": "Given these considerations, an optimal token partitioning scheme should efficiently segregate tokens, incorporate semantic information, and efficiently utilize computational resources (e.g., GPU). In response, we introduce a simple, fast and equitable clustering approach named Semantic Equitable Clustering (SEC). SEC segments tokens based on their relevance to global semantic information. Specifically, we employ global pooling to generate a global token encapsulating global semantic information. The similarity between this global token and all other tokens is then computed, reflecting global semantic relevance. Upon obtaining the similarity matrix, tokens (excluding the global token) are sorted by similarity scores, and the tokens with similar scores are grouped into clusters, ensuring uniform token distribution across clusters. As depicted in Fig. 1, SEC comprehensively considers token semantics and completes the clustering process in a single iteration, unlike the multi-iteration k-means. The resulting clusters, containing an equal number of tokens, can be processed in parallel by the GPU efficiently.",
|
| 215 |
+
"bbox": [
|
| 216 |
+
75,
|
| 217 |
+
364,
|
| 218 |
+
468,
|
| 219 |
+
667
|
| 220 |
+
],
|
| 221 |
+
"page_idx": 1
|
| 222 |
+
},
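The single-iteration grouping described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code; the function name and the use of mean pooling for the global token are assumptions for exposition, and the token count is assumed divisible by the cluster count.

```python
import numpy as np

def sec_cluster_ids(tokens, num_clusters):
    """One-pass Semantic Equitable Clustering (illustrative sketch):
    group tokens by similarity to a global (average-pooled) token,
    giving every cluster the same number of tokens."""
    L = tokens.shape[0]
    g = tokens.mean(axis=0)                      # global token via pooling
    sim = (tokens @ g) / (np.linalg.norm(tokens, axis=1)
                          * np.linalg.norm(g) + 1e-8)
    order = np.argsort(-sim)                     # one sort, no iteration
    N = L // num_clusters                        # equal tokens per cluster
    ids = np.empty(L, dtype=int)
    for m in range(num_clusters):
        ids[order[m * N:(m + 1) * N]] = m
    return ids
```

Unlike k-means, there is no iterative center update: one pooling, one similarity computation, and one sort fully determine the equal-sized clusters.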
|
| 223 |
+
{
|
| 224 |
+
"type": "text",
|
| 225 |
+
"text": "Building upon Semantic Equitable Clustering (SEC), we introduce the Semantic Equitable Clustering Vision Transformer (SECViT), a versatile vision backbone that is adaptable to a wide spectrum of downstream tasks. As demonstrated in Fig. 2, SECViT exhibits significant performance improvements compared to previous state-of-the-art (SOTA) models. Impressively, SECViT attains an accuracy of $84.3\\%$ utilizing merely 4.6GFLOPS, without the need for additional training data or supervision. This superior performance is maintained across different model scales. Furthermore, SECViT proves its proficiency in downstream tasks, including but not limited to, object detection, instance segmentation, and semantic segmentation.",
|
| 226 |
+
"bbox": [
|
| 227 |
+
75,
|
| 228 |
+
670,
|
| 229 |
+
468,
|
| 230 |
+
867
|
| 231 |
+
],
|
| 232 |
+
"page_idx": 1
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"type": "text",
|
| 236 |
+
"text": "Beyond vision tasks, we also apply SEC to multimodal large language models (MLLM) such as LLaVA-1.5 [36] to",
|
| 237 |
+
"bbox": [
|
| 238 |
+
76,
|
| 239 |
+
869,
|
| 240 |
+
468,
|
| 241 |
+
902
|
| 242 |
+
],
|
| 243 |
+
"page_idx": 1
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "text",
|
| 247 |
+
"text": "serve as an efficient vision language connector. Specifically, we use SEC to cluster the vision tokens, and then merge all the tokens at corresponding positions within each cluster into a single token. Experiments demonstrate that this approach significantly enhances the efficiency of LLaVA-1.5 while improving the model's performance.",
|
| 248 |
+
"bbox": [
|
| 249 |
+
496,
|
| 250 |
+
90,
|
| 251 |
+
893,
|
| 252 |
+
183
|
| 253 |
+
],
|
| 254 |
+
"page_idx": 1
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"type": "text",
|
| 258 |
+
"text": "2. Related Works",
|
| 259 |
+
"text_level": 1,
|
| 260 |
+
"bbox": [
|
| 261 |
+
500,
|
| 262 |
+
199,
|
| 263 |
+
650,
|
| 264 |
+
215
|
| 265 |
+
],
|
| 266 |
+
"page_idx": 1
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"type": "text",
|
| 270 |
+
"text": "Vision Transformer. The Vision Transformer (ViT) [11] is considered a powerful visual architecture. Many works have improved the Vision Transformer, including enhancing its training efficiency and reducing its computational cost [10, 24, 38, 43, 61]. DeiT [43] uses distillation loss and incorporates extensive data augmentation methods into the ViT training process. Hierarchical structures represented by PVT [16, 41, 45, 46, 51] reduce the number of tokens in global attention by downsampling the keys and values (KV), thereby low the computational cost. In addition to them, some methods directly prune tokens based on their importance, retaining important tokens [32, 40]. This reduces the number of tokens and subsequently lowers the computational cost of the model. Another highly representative approach is to group all tokens such that each token can only attend to tokens within its own group [9, 10, 37, 38, 61]. This method also significantly reduces the computational cost of self-attention.",
|
| 271 |
+
"bbox": [
|
| 272 |
+
496,
|
| 273 |
+
226,
|
| 274 |
+
893,
|
| 275 |
+
500
|
| 276 |
+
],
|
| 277 |
+
"page_idx": 1
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"type": "text",
|
| 281 |
+
"text": "Grouping-Based Vision Transformer. Most grouping-based attention mechanisms perform grouping based on spatial structure [9, 10, 37, 38, 44]. Specifically, the SwinTransformer [38] divides all tokens into equally sized windows based on their spatial positions, where each token can only attend to tokens within its own window. This significantly reduces the model's computational cost. In addition to dividing tokens into small windows along the spatial dimension, DaViT [9] also splits channels into multiple groups along the channel dimension. Unlike the above methods that only consider positional information for grouping, DGT [37] takes semantic information into account by using k-means clustering to group the queries.",
|
| 282 |
+
"bbox": [
|
| 283 |
+
496,
|
| 284 |
+
513,
|
| 285 |
+
893,
|
| 286 |
+
712
|
| 287 |
+
],
|
| 288 |
+
"page_idx": 1
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"type": "text",
|
| 292 |
+
"text": "Vision Language Connector. The vision language connector is a critical component in MLLMs [3, 23, 36]. It aligns vision tokens with language tokens. Typical vision language connectors include MLP [36], Resampler [1], C-Abstractor [3], and others. Although MLP performs well, it introduces a significant number of vision tokens, which hampers the model's efficiency. On the other hand, connectors like Resampler improve the model's efficiency, but at the cost of reduced performance. Unlike these methods, our proposed SEC consider the semantic information of each token and significantly enhances the model's efficiency while maintaining its performance.",
|
| 293 |
+
"bbox": [
|
| 294 |
+
496,
|
| 295 |
+
727,
|
| 296 |
+
893,
|
| 297 |
+
910
|
| 298 |
+
],
|
| 299 |
+
"page_idx": 1
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"type": "page_number",
|
| 303 |
+
"text": "4020",
|
| 304 |
+
"bbox": [
|
| 305 |
+
482,
|
| 306 |
+
944,
|
| 307 |
+
516,
|
| 308 |
+
955
|
| 309 |
+
],
|
| 310 |
+
"page_idx": 1
|
| 311 |
+
},
|
| 312 |
+
{
|
| 313 |
+
"type": "image",
|
| 314 |
+
"img_path": "images/5ac3a4564f17d87dc7600153d4694641f9a1af6bd66688dad7304a0f57e71375.jpg",
|
| 315 |
+
"image_caption": [],
|
| 316 |
+
"image_footnote": [],
|
| 317 |
+
"bbox": [
|
| 318 |
+
99,
|
| 319 |
+
88,
|
| 320 |
+
869,
|
| 321 |
+
246
|
| 322 |
+
],
|
| 323 |
+
"page_idx": 2
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"type": "image",
|
| 327 |
+
"img_path": "images/a61e8739d6a4850d2400576335e0fa1a8e01d661b4868da9f8fff3f2bc62a9de.jpg",
|
| 328 |
+
"image_caption": [
|
| 329 |
+
"Figure 3. (a) Illustration of SECViT (b) Applying SEC to vision language connector. (c) Illustration of Semantic Equitable Clustering for ViT and vision language connector."
|
| 330 |
+
],
|
| 331 |
+
"image_footnote": [],
|
| 332 |
+
"bbox": [
|
| 333 |
+
99,
|
| 334 |
+
248,
|
| 335 |
+
872,
|
| 336 |
+
425
|
| 337 |
+
],
|
| 338 |
+
"page_idx": 2
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"type": "text",
|
| 342 |
+
"text": "3. Method",
|
| 343 |
+
"text_level": 1,
|
| 344 |
+
"bbox": [
|
| 345 |
+
76,
|
| 346 |
+
465,
|
| 347 |
+
171,
|
| 348 |
+
482
|
| 349 |
+
],
|
| 350 |
+
"page_idx": 2
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"type": "text",
|
| 354 |
+
"text": "3.1. Overall Architecture",
|
| 355 |
+
"text_level": 1,
|
| 356 |
+
"bbox": [
|
| 357 |
+
76,
|
| 358 |
+
491,
|
| 359 |
+
274,
|
| 360 |
+
505
|
| 361 |
+
],
|
| 362 |
+
"page_idx": 2
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"type": "text",
|
| 366 |
+
"text": "The overall architecture of SECViT is shown in Fig. 3(a). SECViT consists of four stages with downsampling factors of $\\frac{1}{4}$ , $\\frac{1}{8}$ , $\\frac{1}{16}$ , and $\\frac{1}{32}$ , respectively. This structural design facilitates downstream tasks, such as object detection, in constructing feature pyramids. A SECViT block is composed of three modules. For each block, the input tensor $X_{in} \\in \\mathbb{R}^{C \\times H \\times W}$ is fed into the CPE to introduce the positional information. Then, The Self-Attention based on the Semantic Equitable Clustering (SEC) is employed to serve as the token mixer. The final FFN is utilized to integrate channel-wise information of tokens.",
|
| 367 |
+
"bbox": [
|
| 368 |
+
75,
|
| 369 |
+
513,
|
| 370 |
+
470,
|
| 371 |
+
680
|
| 372 |
+
],
|
| 373 |
+
"page_idx": 2
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"type": "text",
|
| 377 |
+
"text": "Beyond the design of the backbone, we also utilize SEC in the design of the vision language connector in MLLM [36]. For the vision tokens output by ViT, we use SEC to cluster the tokens. For each position corresponding to a cluster, we use attentive pooling to merge them into a single token, thereby reducing the number of vision tokens. The process is shown in Fig. 3(b).",
|
| 378 |
+
"bbox": [
|
| 379 |
+
75,
|
| 380 |
+
681,
|
| 381 |
+
468,
|
| 382 |
+
789
|
| 383 |
+
],
|
| 384 |
+
"page_idx": 2
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"type": "text",
|
| 388 |
+
"text": "3.2. Semantic Equitable Clustering",
|
| 389 |
+
"text_level": 1,
|
| 390 |
+
"bbox": [
|
| 391 |
+
76,
|
| 392 |
+
800,
|
| 393 |
+
349,
|
| 394 |
+
816
|
| 395 |
+
],
|
| 396 |
+
"page_idx": 2
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"type": "text",
|
| 400 |
+
"text": "As previously mentioned, the design objectives of Semantic Equitable Clustering are threefold: 1) Fully consider the semantic information contained in different tokens during clustering. 2) Unlike k-means and other clustering methods that require multiple iterations, Semantic Equitable Cluster-",
|
| 401 |
+
"bbox": [
|
| 402 |
+
75,
|
| 403 |
+
824,
|
| 404 |
+
470,
|
| 405 |
+
902
|
| 406 |
+
],
|
| 407 |
+
"page_idx": 2
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"type": "text",
|
| 411 |
+
"text": "ing can complete clustering in a single step. 3) Ensure an equal number of tokens in each cluster to facilitate parallel processing on GPUs. In the following paragraphs, we will describe in detail how our Semantic Equitable Clustering achieves these three objectives. And the whole process is illustrated in the Fig. 3(c).",
|
| 412 |
+
"bbox": [
|
| 413 |
+
496,
|
| 414 |
+
467,
|
| 415 |
+
892,
|
| 416 |
+
558
|
| 417 |
+
],
|
| 418 |
+
"page_idx": 2
|
| 419 |
+
},
|
| 420 |
+
{
|
| 421 |
+
"type": "text",
|
| 422 |
+
"text": "Single Clustering Center Related to Semantics. k-means is relatively complex for two reasons. First, it has multiple cluster centers, and each token needs to calculate its distance to each cluster center to determine its cluster membership. Second, the determination of each cluster center in k-means is not precise and requires multiple iterations to accurately establish the cluster centers.",
|
| 423 |
+
"bbox": [
|
| 424 |
+
496,
|
| 425 |
+
574,
|
| 426 |
+
892,
|
| 427 |
+
679
|
| 428 |
+
],
|
| 429 |
+
"page_idx": 2
|
| 430 |
+
},
|
| 431 |
+
{
|
| 432 |
+
"type": "text",
|
| 433 |
+
"text": "To address these two issues, we first discard the use of multiple cluster centers and instead calculate the distance between each token and a single center. Based on each token's distance to this center, we divide the tokens into different intervals. Then, to ensure that our chosen center contains the most comprehensive semantic information, we directly use the result of average pooling of all tokens as the center token. This is because, in most vision foundation models, the output of the average pool is assumed to contain the richest semantic information and is thus used for classification [6, 10, 12, 38]. Specifically, the process for determining the cluster center is shown in Eq. 1:",
|
| 434 |
+
"bbox": [
|
| 435 |
+
496,
|
| 436 |
+
680,
|
| 437 |
+
893,
|
| 438 |
+
862
|
| 439 |
+
],
|
| 440 |
+
"page_idx": 2
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"type": "equation",
|
| 444 |
+
"text": "\n$$\nQ = W _ {Q} X, K = W _ {K} X, V = W _ {V} X, \\tag {1}\n$$\n",
|
| 445 |
+
"text_format": "latex",
|
| 446 |
+
"bbox": [
|
| 447 |
+
568,
|
| 448 |
+
867,
|
| 449 |
+
890,
|
| 450 |
+
891
|
| 451 |
+
],
|
| 452 |
+
"page_idx": 2
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"type": "equation",
|
| 456 |
+
"text": "\n$$\nk _ {c} = \\operatorname {P o o l} (K).\n$$\n",
|
| 457 |
+
"text_format": "latex",
|
| 458 |
+
"bbox": [
|
| 459 |
+
651,
|
| 460 |
+
887,
|
| 461 |
+
754,
|
| 462 |
+
902
|
| 463 |
+
],
|
| 464 |
+
"page_idx": 2
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"type": "page_number",
|
| 468 |
+
"text": "4021",
|
| 469 |
+
"bbox": [
|
| 470 |
+
482,
|
| 471 |
+
944,
|
| 472 |
+
513,
|
| 473 |
+
955
|
| 474 |
+
],
|
| 475 |
+
"page_idx": 2
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"type": "text",
|
| 479 |
+
"text": "Where $W_{K}$ is a learnable matrix. $k_{c}$ is the determined cluster center. $X$ is the set of input tokens.",
|
| 480 |
+
"bbox": [
|
| 481 |
+
76,
|
| 482 |
+
90,
|
| 483 |
+
468,
|
| 484 |
+
121
|
| 485 |
+
],
|
| 486 |
+
"page_idx": 3
|
| 487 |
+
},
|
| 488 |
+
{
|
| 489 |
+
"type": "text",
|
| 490 |
+
"text": "Distance Metric Suitable for ViT. Unlike the Euclidean distance calculation used in the k-means algorithm for computing the distance between tokens, during the actual computation of Self-Attention, similarity between query and key is computed through dot product. To better adapt to the characteristics of Self-Attention, we also measure the distance between tokens using a method similar to dot product. Specifically, we calculate the cosine similarity between the cluster center and each token, and then sort the tokens according to the magnitude of the computed results. The specific process is shown in Eq. 2:",
|
| 491 |
+
"bbox": [
|
| 492 |
+
76,
|
| 493 |
+
143,
|
| 494 |
+
468,
|
| 495 |
+
310
|
| 496 |
+
],
|
| 497 |
+
"page_idx": 3
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"type": "equation",
|
| 501 |
+
"text": "\n$$\ns i m = \\frac {K \\cdot k _ {c}}{| | K | | \\cdot | | k _ {c} | |},\n$$\n",
|
| 502 |
+
"text_format": "latex",
|
| 503 |
+
"bbox": [
|
| 504 |
+
215,
|
| 505 |
+
321,
|
| 506 |
+
354,
|
| 507 |
+
354
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 3
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"type": "equation",
|
| 513 |
+
"text": "\n$$\ni d x = \\operatorname {a r g s o r t} (s i m), \\tag {2}\n$$\n",
|
| 514 |
+
"text_format": "latex",
|
| 515 |
+
"bbox": [
|
| 516 |
+
215,
|
| 517 |
+
352,
|
| 518 |
+
468,
|
| 519 |
+
373
|
| 520 |
+
],
|
| 521 |
+
"page_idx": 3
|
| 522 |
+
},
|
| 523 |
+
{
|
| 524 |
+
"type": "equation",
|
| 525 |
+
"text": "\n$$\nQ ^ {*} = Q [ i d x ], K ^ {*} = K [ i d x ], V ^ {*} = V [ i d x ].\n$$\n",
|
| 526 |
+
"text_format": "latex",
|
| 527 |
+
"bbox": [
|
| 528 |
+
129,
|
| 529 |
+
376,
|
| 530 |
+
418,
|
| 531 |
+
393
|
| 532 |
+
],
|
| 533 |
+
"page_idx": 3
|
| 534 |
+
},
|
| 535 |
+
{
|
| 536 |
+
"type": "text",
|
| 537 |
+
"text": "Where $sim$ is the similarity matrix between $K$ and $k_{c}$ , the $\\operatorname{argsort}(sim)$ returns the indices of $sim$ sorted in descending order. $Q^{*}, K^{*}, V^{*}$ are $Q, K, V$ rearranged according to $\\operatorname{argsort}(sim)$ .",
|
| 538 |
+
"bbox": [
|
| 539 |
+
76,
|
| 540 |
+
405,
|
| 541 |
+
468,
|
| 542 |
+
465
|
| 543 |
+
],
|
| 544 |
+
"page_idx": 3
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"type": "text",
|
| 548 |
+
"text": "Equally Partition Tokens based on Distance. The obtained $Q^{*}, K^{*}$ , and $V^{*}$ from the previous step have been sorted based on their distances to the cluster center. For the design of vision backbone, we directly group them, so tokens with similar distances to the cluster center are classified into the same cluster. This allows us to directly control an equal number of tokens in each cluster. This process can be clearly illustrated in Fig. 3(c) and denoted as follows:",
|
| 549 |
+
"bbox": [
|
| 550 |
+
76,
|
| 551 |
+
488,
|
| 552 |
+
468,
|
| 553 |
+
609
|
| 554 |
+
],
|
| 555 |
+
"page_idx": 3
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"type": "equation",
|
| 559 |
+
"text": "\n$$\nQ _ {m} = Q ^ {*} [ m \\times N: (m + 1) N ],\n$$\n",
|
| 560 |
+
"text_format": "latex",
|
| 561 |
+
"bbox": [
|
| 562 |
+
165,
|
| 563 |
+
623,
|
| 564 |
+
383,
|
| 565 |
+
638
|
| 566 |
+
],
|
| 567 |
+
"page_idx": 3
|
| 568 |
+
},
|
| 569 |
+
{
|
| 570 |
+
"type": "equation",
|
| 571 |
+
"text": "\n$$\nK _ {m} = K ^ {*} [ m \\times N: (m + 1) N ], \\tag {3}\n$$\n",
|
| 572 |
+
"text_format": "latex",
|
| 573 |
+
"bbox": [
|
| 574 |
+
165,
|
| 575 |
+
641,
|
| 576 |
+
468,
|
| 577 |
+
657
|
| 578 |
+
],
|
| 579 |
+
"page_idx": 3
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "equation",
|
| 583 |
+
"text": "\n$$\nV _ {m} = V ^ {*} [ m \\times N: (m + 1) N ].\n$$\n",
|
| 584 |
+
"text_format": "latex",
|
| 585 |
+
"bbox": [
|
| 586 |
+
169,
|
| 587 |
+
660,
|
| 588 |
+
383,
|
| 589 |
+
676
|
| 590 |
+
],
|
| 591 |
+
"page_idx": 3
|
| 592 |
+
},
|
| 593 |
+
{
|
| 594 |
+
"type": "text",
|
| 595 |
+
"text": "where $N$ is the basic token number of each cluster for equal partition and $m$ is the index of the cluster",
|
| 596 |
+
"bbox": [
|
| 597 |
+
76,
|
| 598 |
+
689,
|
| 599 |
+
468,
|
| 600 |
+
718
|
| 601 |
+
],
|
| 602 |
+
"page_idx": 3
|
| 603 |
+
},
|
| 604 |
+
{
|
| 605 |
+
"type": "text",
|
| 606 |
+
"text": "Based on the above steps, we have completed the clustering process that captures semantic information in the image with minimal sorting cost. Moreover, compared to k-means, we have achieved equi-partitioning of each cluster. After clustering is completed, we apply standard Self-Attention to the tokens within each cluster, thereby completing the interaction of information between tokens:",
|
| 607 |
+
"bbox": [
|
| 608 |
+
76,
|
| 609 |
+
720,
|
| 610 |
+
468,
|
| 611 |
+
824
|
| 612 |
+
],
|
| 613 |
+
"page_idx": 3
|
| 614 |
+
},
|
| 615 |
+
{
|
| 616 |
+
"type": "equation",
|
| 617 |
+
"text": "\n$$\nY _ {m} = \\operatorname {A t t n} \\left(Q _ {m}, K _ {m}, V _ {m}\\right). \\tag {4}\n$$\n",
|
| 618 |
+
"text_format": "latex",
|
| 619 |
+
"bbox": [
|
| 620 |
+
179,
|
| 621 |
+
840,
|
| 622 |
+
468,
|
| 623 |
+
856
|
| 624 |
+
],
|
| 625 |
+
"page_idx": 3
|
| 626 |
+
},
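Eqs. 1-4 can be traced end to end in a short NumPy sketch. This is an illustrative single-head implementation, not the authors' code; the function and weight names are assumptions, and the token count is assumed divisible by the number of clusters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sec_attention(X, W_q, W_k, W_v, num_clusters):
    """SEC-based self-attention sketch (Eqs. 1-4), single head.

    X: (L, C) token matrix; L must be divisible by num_clusters.
    """
    L, C = X.shape
    Q, K, V = X @ W_q, X @ W_k, X @ W_v           # Eq. 1: projections
    k_c = K.mean(axis=0)                          # cluster center = Pool(K)
    sim = (K @ k_c) / (np.linalg.norm(K, axis=1)
                       * np.linalg.norm(k_c) + 1e-8)
    idx = np.argsort(-sim)                        # Eq. 2: sort by similarity
    Qs, Ks, Vs = Q[idx], K[idx], V[idx]           # rearranged Q*, K*, V*
    N = L // num_clusters                         # Eq. 3: equal partition
    Y = np.empty_like(Vs)
    for m in range(num_clusters):
        q, k, v = Qs[m*N:(m+1)*N], Ks[m*N:(m+1)*N], Vs[m*N:(m+1)*N]
        attn = softmax(q @ k.T / np.sqrt(C))      # Eq. 4: attention per cluster
        Y[m*N:(m+1)*N] = attn @ v
    out = np.empty_like(Y)
    out[idx] = Y                                  # restore original token order
    return out
```

In practice the per-cluster loop is a single batched attention over equal-sized clusters, which is what makes the equal partition GPU-friendly.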
|
| 627 |
+
{
|
| 628 |
+
"type": "text",
|
| 629 |
+
"text": "For the design of vision language connector, we group the tokens according to their similarity, and the tokens",
|
| 630 |
+
"bbox": [
|
| 631 |
+
76,
|
| 632 |
+
869,
|
| 633 |
+
468,
|
| 634 |
+
900
|
| 635 |
+
],
|
| 636 |
+
"page_idx": 3
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "text",
|
| 640 |
+
"text": "within each group are interleaved, as shown in Eq. 5:",
|
| 641 |
+
"bbox": [
|
| 642 |
+
500,
|
| 643 |
+
90,
|
| 644 |
+
849,
|
| 645 |
+
106
|
| 646 |
+
],
|
| 647 |
+
"page_idx": 3
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"type": "equation",
|
| 651 |
+
"text": "\n$$\nQ _ {n} = Q ^ {*} [ n: N: L ],\n$$\n",
|
| 652 |
+
"text_format": "latex",
|
| 653 |
+
"bbox": [
|
| 654 |
+
625,
|
| 655 |
+
119,
|
| 656 |
+
767,
|
| 657 |
+
135
|
| 658 |
+
],
|
| 659 |
+
"page_idx": 3
|
| 660 |
+
},
|
| 661 |
+
{
|
| 662 |
+
"type": "equation",
|
| 663 |
+
"text": "\n$$\nK _ {n} = K ^ {*} [ n: N: L ], \\tag {5}\n$$\n",
|
| 664 |
+
"text_format": "latex",
|
| 665 |
+
"bbox": [
|
| 666 |
+
624,
|
| 667 |
+
138,
|
| 668 |
+
890,
|
| 669 |
+
152
|
| 670 |
+
],
|
| 671 |
+
"page_idx": 3
|
| 672 |
+
},
|
| 673 |
+
{
|
| 674 |
+
"type": "equation",
|
| 675 |
+
"text": "\n$$\nV _ {n} = V ^ {*} [ n: N: L ].\n$$\n",
|
| 676 |
+
"text_format": "latex",
|
| 677 |
+
"bbox": [
|
| 678 |
+
630,
|
| 679 |
+
157,
|
| 680 |
+
767,
|
| 681 |
+
172
|
| 682 |
+
],
|
| 683 |
+
"page_idx": 3
|
| 684 |
+
},
|
| 685 |
+
{
|
| 686 |
+
"type": "text",
|
| 687 |
+
"text": "in which $L$ is the token's sequence length, $n$ is the index of group tokens. $N$ is the basic token number of each cluster. After obtaining the token groups, we perform pooling on $Q$ to effectively reduce the number of tokens input to the LLM, with each group's output becoming a single token, as shown in Eq 6.",
|
| 688 |
+
"bbox": [
|
| 689 |
+
498,
|
| 690 |
+
184,
|
| 691 |
+
890,
|
| 692 |
+
275
|
| 693 |
+
],
|
| 694 |
+
"page_idx": 3
|
| 695 |
+
},
|
| 696 |
+
{
|
| 697 |
+
"type": "equation",
|
| 698 |
+
"text": "\n$$\nY _ {n} = \\operatorname {A t t n} \\left(\\operatorname {P o o l} \\left(Q _ {n}\\right), K _ {n}, V _ {n}\\right). \\tag {6}\n$$\n",
|
| 699 |
+
"text_format": "latex",
|
| 700 |
+
"bbox": [
|
| 701 |
+
586,
|
| 702 |
+
287,
|
| 703 |
+
890,
|
| 704 |
+
305
|
| 705 |
+
],
|
| 706 |
+
"page_idx": 3
|
| 707 |
+
},
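The connector variant (Eqs. 5-6) differs from the backbone only in how sorted tokens are grouped and merged; a hedged NumPy sketch follows. The strided interleaving and the mean-pooled query are simplifying assumptions (the paper describes attentive pooling), and the names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sec_connector(Q, K, V, sim, num_groups):
    """SEC as a vision-language connector, sketched (Eqs. 5-6).

    Tokens are sorted by similarity to the pooled center, split into
    interleaved groups, and each group is merged into one output token
    by attending with a pooled query.
    """
    idx = np.argsort(-sim)                         # sort by similarity score
    Qs, Ks, Vs = Q[idx], K[idx], V[idx]
    C = Q.shape[1]
    out = []
    for n in range(num_groups):
        q, k, v = Qs[n::num_groups], Ks[n::num_groups], Vs[n::num_groups]
        q_pool = q.mean(axis=0, keepdims=True)     # stand-in for Pool(Q_n)
        attn = softmax(q_pool @ k.T / np.sqrt(C))  # Eq. 6: one token per group
        out.append(attn @ v)
    return np.concatenate(out, axis=0)             # (num_groups, C)
```

The output has one token per group, so an L-token vision sequence is compressed to `num_groups` tokens before entering the LLM.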
|
| 708 |
+
{
|
| 709 |
+
"type": "text",
|
| 710 |
+
"text": "3.3. Difference between SEC and EViT.",
|
| 711 |
+
"text_level": 1,
|
| 712 |
+
"bbox": [
|
| 713 |
+
500,
|
| 714 |
+
316,
|
| 715 |
+
803,
|
| 716 |
+
332
|
| 717 |
+
],
|
| 718 |
+
"page_idx": 3
|
| 719 |
+
},
|
| 720 |
+
{
|
| 721 |
+
"type": "text",
|
| 722 |
+
"text": "We use the most representative example, EViT [32], to illustrate the differences between SEC and other methods based on the similarity between the global token and other tokens.",
|
| 723 |
+
"bbox": [
|
| 724 |
+
498,
|
| 725 |
+
340,
|
| 726 |
+
890,
|
| 727 |
+
385
|
| 728 |
+
],
|
| 729 |
+
"page_idx": 3
|
| 730 |
+
},
|
| 731 |
+
{
|
| 732 |
+
"type": "text",
|
| 733 |
+
"text": "Pruning v.s. Clustering. Most similarity-based methods, such as EViT, are pruning methods, where tokens with low similarity to the [cls] token are merged during the forward process, thereby reducing the number of tokens and decreasing computational cost. In contrast, our proposed SECViT employs a clustering-based approach, performing attention operations within each cluster.",
|
| 734 |
+
"bbox": [
|
| 735 |
+
498,
|
| 736 |
+
387,
|
| 737 |
+
890,
|
| 738 |
+
491
|
| 739 |
+
],
|
| 740 |
+
"page_idx": 3
|
| 741 |
+
},
|
| 742 |
+
{
|
| 743 |
+
"type": "text",
|
| 744 |
+
"text": "The role of the [cls] token. In methods like EViT, the [cls] token serves as a measure of importance of a token. Each token computes its similarity to the [cls] token, with higher similarity tokens deemed more important. The less important tokens are abandoned. In contrast, in SEC, the [cls] token (obtained by average pooling over all tokens) measures similarity between tokens. Each token computes its similarity score to the [cls] token; tokens with similar scores are considered to be more similar and grouped into one cluster. Attention is calculated only within the same cluster.",
|
| 745 |
+
"bbox": [
|
| 746 |
+
498,
|
| 747 |
+
493,
|
| 748 |
+
890,
|
| 749 |
+
657
|
| 750 |
+
],
|
| 751 |
+
"page_idx": 3
|
| 752 |
+
},
|
| 753 |
+
{
|
| 754 |
+
"type": "text",
|
| 755 |
+
"text": "4. Experiments",
|
| 756 |
+
"text_level": 1,
|
| 757 |
+
"bbox": [
|
| 758 |
+
500,
|
| 759 |
+
674,
|
| 760 |
+
632,
|
| 761 |
+
690
|
| 762 |
+
],
|
| 763 |
+
"page_idx": 3
|
| 764 |
+
},
|
| 765 |
+
{
|
| 766 |
+
"type": "text",
|
| 767 |
+
"text": "We first make strict comparison with hierarchical/plain baselines. Then we conduct experiments on a wide range of vision tasks for SECViT, including image classification, object detection, instance segmentation, and semantic segmentation. We also verify the role of SEC in MLLM based on LLaVA-1.5 [36]. More details, experiments, and comparison of models' efficiency can be found in the Appendix.",
|
| 768 |
+
"bbox": [
|
| 769 |
+
498,
|
| 770 |
+
700,
|
| 771 |
+
890,
|
| 772 |
+
806
|
| 773 |
+
],
|
| 774 |
+
"page_idx": 3
|
| 775 |
+
},
|
| 776 |
+
{
|
| 777 |
+
"type": "text",
|
| 778 |
+
"text": "4.1. SEC for vision models",
|
| 779 |
+
"text_level": 1,
|
| 780 |
+
"bbox": [
|
| 781 |
+
500,
|
| 782 |
+
816,
|
| 783 |
+
705,
|
| 784 |
+
830
|
| 785 |
+
],
|
| 786 |
+
"page_idx": 3
|
| 787 |
+
},
|
| 788 |
+
{
|
| 789 |
+
"type": "text",
|
| 790 |
+
"text": "Strict Comparison with Baselines. We select two baselines: hierarchical backbone Swin-Transformer [38] and plain backbone DeiT [43] to make a comparison with our SEC based model. In the comparison models (SEC-Swin",
|
| 791 |
+
"bbox": [
|
| 792 |
+
498,
|
| 793 |
+
840,
|
| 794 |
+
890,
|
| 795 |
+
900
|
| 796 |
+
],
|
| 797 |
+
"page_idx": 3
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "page_number",
|
| 801 |
+
"text": "4022",
|
| 802 |
+
"bbox": [
|
| 803 |
+
482,
|
| 804 |
+
944,
|
| 805 |
+
514,
|
| 806 |
+
955
|
| 807 |
+
],
|
| 808 |
+
"page_idx": 3
|
| 809 |
+
},
|
| 810 |
+
{
|
| 811 |
+
"type": "table",
|
| 812 |
+
"img_path": "images/3b595e4907f31355ede69baec4904abb44339840974f98f930ec98ce67040d79.jpg",
|
| 813 |
+
"table_caption": [],
|
| 814 |
+
"table_footnote": [],
|
| 815 |
+
"table_body": "<table><tr><td>Model</td><td>Params (M)</td><td>FLOPs (G)</td><td>Throughput (imgs/s)</td><td>Acc</td><td>APb</td><td>APm</td><td>mIoU</td></tr><tr><td>DeiT-S [43]</td><td>22</td><td>4.6</td><td>3204</td><td>79.8</td><td>44.5</td><td>40.1</td><td>43.0</td></tr><tr><td>EViT-DeiT-S (keeprate=0.9)</td><td>22</td><td>4.0</td><td>3428</td><td>79.8</td><td>not suit</td><td>not suit</td><td>not suit</td></tr><tr><td>SEC-DeiT-S (num_cluster=4)</td><td>22</td><td>4.1</td><td>3412</td><td>80.5 (+0.7)</td><td>47.7 (+3.2)</td><td>42.7 (+2.6)</td><td>47.5 (+4.5)</td></tr><tr><td>DeiT-B</td><td>86</td><td>17.6</td><td>1502</td><td>81.8</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SEC-DeiT-B</td><td>86</td><td>14.8</td><td>1682</td><td>82.4 (+0.6)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Swin-T</td><td>29</td><td>4.5</td><td>1723</td><td>81.3</td><td>43.7</td><td>39.8</td><td>44.5</td></tr><tr><td>SEC-Swin-T</td><td>29</td><td>4.8</td><td>1482</td><td>83.8 (+2.5)</td><td>48.3 (+4.6)</td><td>43.4 (+3.6)</td><td>49.3 (+4.8)</td></tr><tr><td>Swin-S</td><td>50</td><td>8.8</td><td>1006</td><td>83.0</td><td>45.7</td><td>41.1</td><td>47.6</td></tr><tr><td>SEC-Swin-S</td><td>50</td><td>9.2</td><td>804</td><td>85.0 (+2.0)</td><td>50.2 (+4.5)</td><td>44.7 (+3.6)</td><td>51.3 (+3.7)</td></tr></table>",
|
| 816 |
+
"bbox": [
|
| 817 |
+
86,
|
| 818 |
+
78,
|
| 819 |
+
460,
|
| 820 |
+
291
|
| 821 |
+
],
|
| 822 |
+
"page_idx": 4
|
| 823 |
+
},
|
| 824 |
+
{
|
| 825 |
+
"type": "table",
|
| 826 |
+
"img_path": "images/f5037ae6cb005af4cbb8916d1bbe35e765478d36308ea9fd60728bbabd12f5ce.jpg",
|
| 827 |
+
"table_caption": [
|
| 828 |
+
"Table 1. Comparison with Hierarchy/Plain baselines. Inference speed are measured on the A100 GPU."
|
| 829 |
+
],
|
| 830 |
+
"table_footnote": [],
|
| 831 |
+
"table_body": "<table><tr><td>Model</td><td>Params (M)</td><td>FLOPs (G)</td><td>Method</td><td>Pretrain epoch</td><td>Acc(%)</td></tr><tr><td>Swin-B [38]</td><td>88</td><td>15.4</td><td>Supervised</td><td>-</td><td>83.5</td></tr><tr><td>ConvNeXt V2-B [50]</td><td>88</td><td>15.4</td><td>Supervised</td><td>-</td><td>84.3</td></tr><tr><td>SEC-Swin-B</td><td>88</td><td>16.2</td><td>Supervised</td><td>-</td><td>85.3</td></tr><tr><td>Swin-B [38]</td><td>88</td><td>15.4</td><td>SimMIM [53]</td><td>800</td><td>84.0(+0.5)</td></tr><tr><td>ConvNeXt V2-B [50]</td><td>88</td><td>15.4</td><td>FCMAE [50]</td><td>800</td><td>84.6(+0.3)</td></tr><tr><td>SEC-Swin-B</td><td>88</td><td>16.2</td><td>SimMIM [53]</td><td>800</td><td>85.9(+0.6)</td></tr></table>",
|
| 832 |
+
"bbox": [
|
| 833 |
+
83,
|
| 834 |
+
325,
|
| 835 |
+
465,
|
| 836 |
+
436
|
| 837 |
+
],
|
| 838 |
+
"page_idx": 4
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "text",
|
| 842 |
+
"text": "and SEC-DeiT), we merely substitute the attention mechanism in the original model with our SEC based Self-Attention and without introducing any other modules. As shown in Tab. 1, we conduct experiments on image classification, object detection, insatance segmentation and semantic segmentation, the simple replacement of the attention mechanism yields significant advantages in both performance and efficiency.",
|
| 843 |
+
"bbox": [
|
| 844 |
+
75,
|
| 845 |
+
469,
|
| 846 |
+
467,
|
| 847 |
+
589
|
| 848 |
+
],
|
| 849 |
+
"page_idx": 4
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "text",
|
| 853 |
+
"text": "In addition to the supervised scenario, we also train the model with SimMIM [53] in the self-supervised scenario. As shown in Tab. 2, SEC also performs exceptionally well in the self-supervised scenario.",
|
| 854 |
+
"bbox": [
|
| 855 |
+
75,
|
| 856 |
+
590,
|
| 857 |
+
467,
|
| 858 |
+
650
|
| 859 |
+
],
|
| 860 |
+
"page_idx": 4
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "text",
|
| 864 |
+
"text": "Image Classification. We compare our SECViT with numerous state-of-the-art models, the results are shown in Tab.3. We adopt the training strategy proposed in DeiT [43], with the only supervision is cross entropy loss. All of our models are trained from scratch for 300 epochs with the input resolution of $224 \\times 224$ . SECViT consistently outperforms preceding models across all scales. Notably, SECViT-S attains a Top1-accuracy of $84.3\\%$ with a mere 27M parameters and 4.6G FLOPs. The comparison of the models' efficiency can be found in Appendix.",
|
| 865 |
+
"bbox": [
|
| 866 |
+
75,
|
| 867 |
+
669,
|
| 868 |
+
467,
|
| 869 |
+
821
|
| 870 |
+
],
|
| 871 |
+
"page_idx": 4
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "text",
|
| 875 |
+
"text": "Object Detection and Instance Segmentation. We utilize MMDetection [4] to implement Mask-RCNN [21], Cascade Mask R-CNN [2], and RetinaNet [33] to evaluate the performance of the SECViT. Tab. 5 and Tab. 4 show the",
|
| 876 |
+
"bbox": [
|
| 877 |
+
75,
|
| 878 |
+
839,
|
| 879 |
+
467,
|
| 880 |
+
900
|
| 881 |
+
],
|
| 882 |
+
"page_idx": 4
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "table",
|
| 886 |
+
"img_path": "images/645e9c7be184e6fea1bb24e8fe9ec185ec719a723beef8558815a8fdfc664342.jpg",
|
| 887 |
+
"table_caption": [
|
| 888 |
+
"Table 2. Comparison with baselines on self-supervised setting."
|
| 889 |
+
],
"table_footnote": [],
"table_body": "<table><tr><td>Cost</td><td>Model</td><td>Params (M)</td><td>FLOPs (G)</td><td>Top1-acc (%)</td></tr><tr><td rowspan=\"11\">tiny model ~ 2.5G</td><td>PVTv2-b1 [46]</td><td>13</td><td>2.1</td><td>78.7</td></tr><tr><td>TCFormer-light [59]</td><td>14</td><td>3.8</td><td>79.4</td></tr><tr><td>QuadTree-B-b1 [42]</td><td>14</td><td>2.3</td><td>80.0</td></tr><tr><td>MPViT-XS [28]</td><td>11</td><td>2.9</td><td>80.9</td></tr><tr><td>BiFormer-T [61]</td><td>13</td><td>2.2</td><td>81.4</td></tr><tr><td>CrossFormer-T [47]</td><td>28</td><td>2.9</td><td>81.5</td></tr><tr><td>FAT-B2 [12]</td><td>14</td><td>2.0</td><td>81.9</td></tr><tr><td>GC-ViT-XT [20]</td><td>20</td><td>2.6</td><td>82.0</td></tr><tr><td>SMT-T [34]</td><td>12</td><td>2.4</td><td>82.2</td></tr><tr><td>RMT-T [13]</td><td>14</td><td>2.5</td><td>82.4</td></tr><tr><td>SECViT-T</td><td>15</td><td>2.5</td><td>82.7</td></tr><tr><td rowspan=\"13\">small model ~ 4.5G</td><td>PS-ViT-B14 [58]</td><td>21</td><td>5.4</td><td>81.7</td></tr><tr><td>DVT-T2T-ViT-19 [49]</td><td>39</td><td>6.2</td><td>81.9</td></tr><tr><td>ConvNeXt-T [39]</td><td>29</td><td>4.5</td><td>82.1</td></tr><tr><td>TCFormer [59]</td><td>26</td><td>5.8</td><td>82.3</td></tr><tr><td>SG-Former-S [15]</td><td>23</td><td>4.8</td><td>83.2</td></tr><tr><td>StructViT-S-8-1 [25]</td><td>24</td><td>5.4</td><td>83.3</td></tr><tr><td>InternImage-T [48]</td><td>30</td><td>5.0</td><td>83.5</td></tr><tr><td>MLLA-T [18]</td><td>25</td><td>4.2</td><td>83.5</td></tr><tr><td>MaxViT-T [44]</td><td>31</td><td>5.6</td><td>83.6</td></tr><tr><td>FAT-B3 [12]</td><td>29</td><td>4.4</td><td>83.6</td></tr><tr><td>SMT-S [34]</td><td>20</td><td>4.8</td><td>83.7</td></tr><tr><td>BiFormer-S [61]</td><td>26</td><td>4.5</td><td>83.8</td></tr><tr><td>SECViT-S</td><td>27</td><td>4.6</td><td>84.3</td></tr><tr><td rowspan=\"10\">base model ~ 9.0G</td><td>ConvNeXt-S [39]</td><td>50</td><td>8.7</td><td>83.1</td></tr><tr><td>NAT-S [19]</td><td>51</td><td>7.8</td><td>83.7</td></tr><tr><td>Quadtree-B-b4 [42]</td><td>64</td><td>11.5</td><td>84.0</td></tr><tr><td>MOAT-1 [54]</td><td>42</td><td>9.1</td><td>84.2</td></tr><tr><td>InternImage-S [48]</td><td>50</td><td>8.0</td><td>84.2</td></tr><tr><td>GC-ViT-S [20]</td><td>51</td><td>8.5</td><td>84.3</td></tr><tr><td>BiFormer-B [61]</td><td>57</td><td>9.8</td><td>84.3</td></tr><tr><td>iFormer-B [41]</td><td>48</td><td>9.4</td><td>84.6</td></tr><tr><td>FAT-B4 [12]</td><td>52</td><td>9.3</td><td>84.8</td></tr><tr><td>SECViT-B</td><td>57</td><td>9.8</td><td>85.2</td></tr><tr><td rowspan=\"9\">large model ~ 18.0G</td><td>CrossFormer-L [47]</td><td>92</td><td>16.1</td><td>84.0</td></tr><tr><td>SMT-L [34]</td><td>81</td><td>17.7</td><td>84.6</td></tr><tr><td>DaViT-B [9]</td><td>88</td><td>15.5</td><td>84.6</td></tr><tr><td>SG-Former-B [15]</td><td>78</td><td>15.6</td><td>84.7</td></tr><tr><td>iFormer-L [41]</td><td>87</td><td>14.0</td><td>84.8</td></tr><tr><td>InternImage-B [48]</td><td>97</td><td>16.0</td><td>84.9</td></tr><tr><td>GC-ViT-B [20]</td><td>90</td><td>14.8</td><td>85.0</td></tr><tr><td>RMT-L [13]</td><td>95</td><td>18.2</td><td>85.5</td></tr><tr><td>SECViT-L</td><td>101</td><td>18.2</td><td>85.7</td></tr></table>",
"bbox": [
519,
88,
880,
623
],
"page_idx": 4
},
{
"type": "text",
"text": "Table 3. Comparison with the state-of-the-art on ImageNet-1K classification.",
"bbox": [
500,
625,
890,
652
],
"page_idx": 4
},
{
"type": "text",
"text": "results of SECViT with different detection frameworks. The results show that SECViT performs better than its counterparts in all comparisons.",
"bbox": [
498,
667,
890,
714
],
"page_idx": 4
},
{
"type": "text",
"text": "Semantic Segmentation. We utilize Semantic FPN [26] and UperNet [52] to validate our SECViT's performance, implementing these frameworks via MMSegmentation [7]. The results of semantic segmentation can be found in Tab. 6. All FLOPs are measured at an input resolution of $512 \\times 2048$. SECViT achieves the best performance in all settings.",
"bbox": [
496,
731,
890,
838
],
"page_idx": 4
},
{
"type": "text",
"text": "4.2. SEC for MLLM",
"text_level": 1,
"bbox": [
500,
845,
660,
861
],
"page_idx": 4
},
{
"type": "text",
"text": "SEC can greatly facilitate the design of vision language connectors in MLLMs. First, we conduct a rigorous compari",
"bbox": [
498,
869,
890,
900
],
"page_idx": 4
},
{
"type": "page_number",
"text": "4023",
"bbox": [
482,
944,
514,
955
],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/30cb13a7c8ecab4dd19dc323b6db6be27d56a6c8b1818e27935212006b10083b.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Backbone</td><td rowspan=\"2\">Params (M)</td><td rowspan=\"2\">FLOPs (G)</td><td colspan=\"6\">Mask R-CNN 1×</td><td rowspan=\"2\">Params (M)</td><td rowspan=\"2\">FLOPs (G)</td><td colspan=\"6\">RetinaNet 1×</td></tr><tr><td>APb</td><td>APb50</td><td>APb75</td><td>APm</td><td>APm50</td><td>APm75</td><td>APb</td><td>APb50</td><td>APb75</td><td>APs</td><td>APb</td><td>APL</td></tr><tr><td>PVTv2-B1 [46]</td><td>33</td><td>243</td><td>41.8</td><td>54.3</td><td>45.9</td><td>38.8</td><td>61.2</td><td>41.6</td><td>23</td><td>225</td><td>41.2</td><td>61.9</td><td>43.9</td><td>25.4</td><td>44.5</td><td>54.3</td></tr><tr><td>FAT-B2 [12]</td><td>33</td><td>215</td><td>45.2</td><td>67.9</td><td>49.0</td><td>41.3</td><td>64.6</td><td>44.0</td><td>23</td><td>196</td><td>44.0</td><td>65.2</td><td>47.2</td><td>27.5</td><td>47.7</td><td>58.8</td></tr><tr><td>RMT-T [13]</td><td>33</td><td>218</td><td>47.1</td><td>68.8</td><td>51.7</td><td>42.6</td><td>65.8</td><td>45.9</td><td>23</td><td>199</td><td>45.1</td><td>66.2</td><td>48.1</td><td>28.8</td><td>48.9</td><td>61.1</td></tr><tr><td>SECViT-T</td><td>34</td><td>221</td><td>47.8</td><td>69.5</td><td>52.5</td><td>43.0</td><td>66.7</td><td>46.3</td><td>24</td><td>202</td><td>45.8</td><td>66.8</td><td>49.2</td><td>29.1</td><td>49.8</td><td>60.9</td></tr><tr><td>MPViT-S [28]</td><td>43</td><td>268</td><td>46.4</td><td>68.6</td><td>51.2</td><td>42.4</td><td>65.6</td><td>45.7</td><td>32</td><td>248</td><td>45.7</td><td>57.3</td><td>48.8</td><td>28.7</td><td>49.7</td><td>59.2</td></tr><tr><td>MLLA-T [18]</td><td>44</td><td>255</td><td>46.8</td><td>69.5</td><td>51.5</td><td>42.1</td><td>66.4</td><td>45.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>STViT-S 
[22]</td><td>44</td><td>252</td><td>47.6</td><td>70.0</td><td>52.3</td><td>43.1</td><td>66.8</td><td>46.5</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-S [13]</td><td>46</td><td>262</td><td>49.0</td><td>70.8</td><td>53.9</td><td>43.9</td><td>67.8</td><td>47.4</td><td>36</td><td>244</td><td>47.8</td><td>69.1</td><td>51.8</td><td>32.1</td><td>51.8</td><td>63.5</td></tr><tr><td>SECViT-S</td><td>45</td><td>262</td><td>49.9</td><td>70.9</td><td>54.7</td><td>44.6</td><td>68.3</td><td>47.7</td><td>35</td><td>240</td><td>48.4</td><td>69.4</td><td>52.0</td><td>31.3</td><td>53.3</td><td>63.8</td></tr><tr><td>ScalableViT-B [56]</td><td>95</td><td>349</td><td>46.8</td><td>68.7</td><td>51.5</td><td>42.5</td><td>65.8</td><td>45.9</td><td>85</td><td>330</td><td>45.8</td><td>67.3</td><td>49.2</td><td>29.9</td><td>49.5</td><td>61.0</td></tr><tr><td>InternImage-S [48]</td><td>69</td><td>340</td><td>47.8</td><td>69.8</td><td>52.8</td><td>43.3</td><td>67.1</td><td>46.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MLLA-S [18]</td><td>63</td><td>319</td><td>49.2</td><td>71.5</td><td>53.9</td><td>44.2</td><td>68.5</td><td>47.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>STViT-B [22]</td><td>70</td><td>359</td><td>49.7</td><td>71.7</td><td>54.7</td><td>44.8</td><td>68.9</td><td>48.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-B</td><td>76</td><td>371</td><td>51.5</td><td>72.9</td><td>56.7</td><td>45.4</td><td>69.9</td><td>48.7</td><td>63</td><td>349</td><td>49.3</td><td>70.3</td><td>52.9</td><td>32.0</td><td>53.8</td><td>64.8</td></tr><tr><td>Focal-B 
[55]</td><td>110</td><td>533</td><td>47.8</td><td>70.2</td><td>52.5</td><td>43.2</td><td>67.3</td><td>46.5</td><td>101</td><td>514</td><td>46.3</td><td>68.0</td><td>49.8</td><td>31.7</td><td>50.4</td><td>60.8</td></tr><tr><td>CSwin-B [10]</td><td>97</td><td>526</td><td>48.7</td><td>70.4</td><td>53.9</td><td>43.9</td><td>67.8</td><td>47.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>InternImage-B [48]</td><td>115</td><td>501</td><td>48.8</td><td>70.9</td><td>54.0</td><td>44.0</td><td>67.8</td><td>47.4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MLLA-B [18]</td><td>115</td><td>502</td><td>50.5</td><td>72.0</td><td>55.4</td><td>45.0</td><td>69.3</td><td>48.6</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-L</td><td>119</td><td>550</td><td>52.0</td><td>73.5</td><td>57.3</td><td>46.3</td><td>70.6</td><td>49.8</td><td>105</td><td>527</td><td>50.2</td><td>71.4</td><td>53.9</td><td>33.2</td><td>54.5</td><td>66.3</td></tr></table>",
"bbox": [
117,
77,
849,
330
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/8fbbe1628c6000ad91d39777b38ef9a2b5f60ca0170a1966b2b3e1f8fcedee81.jpg",
"table_caption": [
"Table 4. Comparison to other backbones using \"1×\" schedule on COCO."
],
"table_footnote": [],
"table_body": "<table><tr><td>Backbone</td><td>Params (M)</td><td>FLOPs (G)</td><td>\\(AP^b\\)</td><td>\\(AP_{50}^b\\)</td><td>\\(AP_{75}^b\\)</td><td>\\(AP^m\\)</td><td>\\(AP_{50}^m\\)</td><td>\\(AP_{75}^m\\)</td></tr><tr><td colspan=\"9\">Mask R-CNN 3×+MS</td></tr><tr><td>GC-ViT-T [20]</td><td>48</td><td>291</td><td>47.9</td><td>70.1</td><td>52.8</td><td>43.2</td><td>67.0</td><td>46.7</td></tr><tr><td>MLLA-T [18]</td><td>44</td><td>255</td><td>48.8</td><td>71.0</td><td>53.6</td><td>43.8</td><td>68.0</td><td>46.8</td></tr><tr><td>SMT-S [34]</td><td>40</td><td>265</td><td>49.0</td><td>70.1</td><td>53.4</td><td>43.4</td><td>67.3</td><td>46.7</td></tr><tr><td>InternImage-T [48]</td><td>49</td><td>270</td><td>49.1</td><td>70.4</td><td>54.1</td><td>43.7</td><td>67.3</td><td>47.3</td></tr><tr><td>RMT-S [13]</td><td>46</td><td>262</td><td>50.7</td><td>71.9</td><td>55.6</td><td>44.9</td><td>69.1</td><td>48.4</td></tr><tr><td>SECViT-S</td><td>45</td><td>262</td><td>51.6</td><td>72.5</td><td>55.9</td><td>45.6</td><td>69.9</td><td>48.8</td></tr><tr><td>NAT-S [19]</td><td>70</td><td>330</td><td>48.4</td><td>69.8</td><td>53.2</td><td>43.2</td><td>66.9</td><td>46.4</td></tr><tr><td>InternImage-S [48]</td><td>69</td><td>340</td><td>49.7</td><td>71.1</td><td>54.5</td><td>44.5</td><td>68.5</td><td>47.8</td></tr><tr><td>SMT-B [34]</td><td>52</td><td>328</td><td>49.8</td><td>71.0</td><td>54.4</td><td>44.0</td><td>68.0</td><td>47.3</td></tr><tr><td>MLLA-S [18]</td><td>63</td><td>319</td><td>50.5</td><td>71.8</td><td>55.2</td><td>44.9</td><td>69.1</td><td>48.2</td></tr><tr><td>RMT-B [13]</td><td>73</td><td>373</td><td>52.2</td><td>72.9</td><td>57.0</td><td>46.1</td><td>70.4</td><td>49.9</td></tr><tr><td>SECViT-B</td><td>75</td><td>371</td><td>52.8</td><td>73.6</td><td>57.7</td><td>46.4</td><td>70.8</td><td>49.9</td></tr><tr><td colspan=\"9\">Cascade Mask R-CNN 3×+MS</td></tr><tr><td>GC-ViT-T 
[20]</td><td>85</td><td>770</td><td>51.6</td><td>70.4</td><td>56.1</td><td>44.6</td><td>67.8</td><td>48.3</td></tr><tr><td>SMT-S [34]</td><td>78</td><td>744</td><td>51.9</td><td>70.5</td><td>56.3</td><td>44.7</td><td>67.8</td><td>48.6</td></tr><tr><td>UniFormer-S [30]</td><td>79</td><td>747</td><td>52.1</td><td>71.1</td><td>56.6</td><td>45.2</td><td>68.3</td><td>48.9</td></tr><tr><td>RMT-S [13]</td><td>83</td><td>741</td><td>53.2</td><td>72.0</td><td>57.8</td><td>46.1</td><td>69.8</td><td>49.8</td></tr><tr><td>SECViT-S</td><td>83</td><td>741</td><td>54.1</td><td>72.8</td><td>58.6</td><td>47.0</td><td>70.3</td><td>51.0</td></tr><tr><td>NAT-S [19]</td><td>108</td><td>809</td><td>51.9</td><td>70.4</td><td>56.2</td><td>44.9</td><td>68.2</td><td>48.6</td></tr><tr><td>GC-ViT-S [20]</td><td>108</td><td>866</td><td>52.4</td><td>71.0</td><td>57.1</td><td>45.4</td><td>68.5</td><td>49.3</td></tr><tr><td>CSWin-S [10]</td><td>92</td><td>820</td><td>53.7</td><td>72.2</td><td>58.4</td><td>46.4</td><td>69.6</td><td>50.6</td></tr><tr><td>UniFormer-B [30]</td><td>107</td><td>878</td><td>53.8</td><td>72.8</td><td>58.5</td><td>46.4</td><td>69.9</td><td>50.4</td></tr><tr><td>RMT-B [13]</td><td>111</td><td>852</td><td>54.5</td><td>72.8</td><td>59.0</td><td>47.2</td><td>70.5</td><td>51.4</td></tr><tr><td>SECViT-B</td><td>114</td><td>849</td><td>55.4</td><td>74.1</td><td>59.9</td><td>47.8</td><td>71.7</td><td>51.7</td></tr></table>",
"bbox": [
96,
349,
457,
676
],
"page_idx": 5
},
{
"type": "text",
"text": "son between SEC and various baseline vision language connectors based on LLaVA-1.5. Then, we compare LLaVA-1.5+SEC with several popular contemporary MLLMs.",
"bbox": [
76,
715,
467,
761
],
"page_idx": 5
},
{
"type": "text",
"text": "Strict Comparison with Baselines. In Tab. 7, we strictly compare various commonly used vision language connectors, including MLP, Resampler [1], Pooling, and EViT [32], which has achieved success in the design of ViT. Among these, MLP is the original design in LLaVA-1.5 [36], capable of achieving good results. However, it incurs significant computational cost due to the excessive vision tokens. To address this issue, some connectors at",
"bbox": [
76,
779,
467,
898
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/f3df275ca490173923467b7266a9a53400b06adca108bc4a933aeaed53d1e182.jpg",
"table_caption": [
"Table 5. Comparison with other backbones using $3 \\times +\\mathrm{MS}$ schedule on COCO."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"3\">Semantic FPN 80K Params FLOPs</td><td colspan=\"3\">Upernet 160K Params FLOPs</td></tr><tr><td>(M)</td><td>(G)</td><td>(%)</td><td>(M)</td><td>(G)</td><td>(%)</td></tr><tr><td>PVTv2-B1 [46]</td><td>18</td><td>136</td><td>42.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VAN-B1 [17]</td><td>18</td><td>140</td><td>42.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-T [13]</td><td>17</td><td>136</td><td>46.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-T</td><td>18</td><td>136</td><td>47.2</td><td>44</td><td>894</td><td>48.8</td></tr><tr><td>StructViT-S [25]</td><td>26</td><td>271</td><td>46.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MogaNet-S [31]</td><td>29</td><td>189</td><td>47.7</td><td>55</td><td>946</td><td>49.2</td></tr><tr><td>SMT-S [34]</td><td>-</td><td>-</td><td>-</td><td>50</td><td>935</td><td>49.2</td></tr><tr><td>SGFormer-S [15]</td><td>25</td><td>205</td><td>49.0</td><td>52.5</td><td>989</td><td>49.9</td></tr><tr><td>RMT-S [13]</td><td>30</td><td>180</td><td>49.4</td><td>56</td><td>937</td><td>49.8</td></tr><tr><td>SECViT-S</td><td>30</td><td>180</td><td>49.6</td><td>56</td><td>936</td><td>50.6</td></tr><tr><td>MogaNet-B [31]</td><td>-</td><td>-</td><td>-</td><td>74</td><td>1050</td><td>50.1</td></tr><tr><td>InterImage-S [48]</td><td>-</td><td>-</td><td>-</td><td>80</td><td>1017</td><td>50.2</td></tr><tr><td>StructViT-B [25]</td><td>54</td><td>529</td><td>48.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-B [13]</td><td>57</td><td>294</td><td>50.4</td><td>83</td><td>1051</td><td>52.0</td></tr><tr><td>SECViT-B</td><td>60</td><td>291</td><td>50.7</td><td>86</td><td>1048</td><td>52.2</td></tr><tr><td>MogaNet-L [31]</td><td>-</td><td>-</td><td>-</td><td>113</td><td>1176</td><td>50.9</td></tr><tr><td>MLLA-B [18]</td><td>-</td><td>-</td><td>-</td><td>128</td><td>1183</td><td>51.9</td></tr><tr><td>SGFormer-B 
[15]</td><td>81</td><td>475</td><td>50.6</td><td>109</td><td>1304</td><td>52.0</td></tr><tr><td>RMT-L [13]</td><td>98</td><td>482</td><td>51.4</td><td>125</td><td>1241</td><td>52.8</td></tr><tr><td>SECViT-L</td><td>103</td><td>475</td><td>52.2</td><td>131</td><td>1256</td><td>53.8</td></tr></table>",
"bbox": [
524,
359,
867,
665
],
"page_idx": 5
},
{
"type": "text",
"text": "Table 6. Comparison with the state-of-the-art on ADE20K.",
"bbox": [
519,
667,
869,
680
],
"page_idx": 5
},
{
"type": "text",
"text": "tempt to use fewer vision tokens to accelerate LLaVA-1.5. Nonetheless, these adjustments inevitably lead to performance degradation. The results in Tab. 7 show that using SEC can effectively accelerate the inference of LLaVA-1.5 without causing performance degradation, and can even improve the performance of LLaVA-1.5 to a certain extent.",
"bbox": [
496,
698,
890,
789
],
"page_idx": 5
},
{
"type": "text",
"text": "Comparison with Popular MLLMs. In Tab. 8 and Tab. 9, we compare LLaVA-1.5 equipped with SEC as a vision-language connector with other MLLMs. It is evident that SEC not only enhances the performance of MLLMs across various benchmarks but also significantly improves the efficiency of the models. This fully demonstrates the",
"bbox": [
496,
809,
890,
900
],
"page_idx": 5
},
{
"type": "page_number",
"text": "4024",
"bbox": [
482,
944,
514,
955
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/ff193ad143bc4fe3d22a757cc80a23d5395898256a23378743805a5e730243f8.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Connector</td><td>V-T Num</td><td>Time</td><td>Speed</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>POPE</td><td>MME</td></tr><tr><td>LLaVA-1.5</td><td>MLP</td><td>576+1</td><td>194s</td><td>1.0×</td><td>58.2</td><td>62.0</td><td>78.5</td><td>86.1</td><td>1510.7</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>288+1</td><td>126s</td><td>1.5×</td><td>52.1</td><td>56.8</td><td>76.0</td><td>83.1</td><td>1393.2</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>288+1</td><td>126s</td><td>1.5×</td><td>54.6</td><td>60.0</td><td>77.9</td><td>84.3</td><td>1483.2</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>288+1</td><td>126s</td><td>1.5×</td><td>60.1</td><td>63.5</td><td>78.9</td><td>87.7</td><td>1510.7</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>256+1</td><td>116s</td><td>1.7×</td><td>51.6</td><td>56.0</td><td>75.2</td><td>82.7</td><td>1387.2</td></tr><tr><td>LLaVA-1.5+Pool</td><td>MLP+Pool</td><td>256+1</td><td>116s</td><td>1.7×</td><td>52.4</td><td>57.6</td><td>76.4</td><td>83.3</td><td>1415.5</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>256+1</td><td>116s</td><td>1.7×</td><td>52.8</td><td>59.6</td><td>77.1</td><td>83.7</td><td>1443.7</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>256+1</td><td>116s</td><td>1.7×</td><td>59.6</td><td>63.2</td><td>78.6</td><td>87.1</td><td>1505.2</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>192+1</td><td>102s</td><td>1.9×</td><td>50.1</td><td>55.2</td><td>74.3</td><td>82.7</td><td>1337.6</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>192+1</td><td>102s</td><td>1.9×</td><td>51.6</td><td>58.6</td><td>76.3</td><td>83.1</td><td>1427.6</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>192+1</td><td>102s</td><td>1.9×</td><td>57.7</td><td>62.7</td><td>78.4</td><td>86.7</td><td>1500.1</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>144+1</td><td>94s</td><td>2.1×</td><td>47.6</td><td>54.
6</td><td>72.0</td><td>81.9</td><td>1293.7</td></tr><tr><td>LLaVA-1.5+Pool</td><td>MLP+Pool</td><td>144+1</td><td>94s</td><td>2.1×</td><td>50.0</td><td>56.2</td><td>73.6</td><td>81.9</td><td>1310.7</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>144+1</td><td>94s</td><td>2.1×</td><td>51.2</td><td>58.0</td><td>76.0</td><td>83.1</td><td>1393.6</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>144+1</td><td>94s</td><td>2.1×</td><td>56.8</td><td>62.0</td><td>78.0</td><td>86.1</td><td>1487.1</td></tr></table>",
"bbox": [
125,
77,
843,
275
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/ab4eade270e932acb2b264e8e6112de399977f63626d29c8b15ff839c83aad8f.jpg",
"table_caption": [
"Table 7. Comparison of different vision language connectors on LLaVA-1.5. \"V-T Num\" denotes the number of visual tokens; the computational cost grows with V-T Num. \"Speed\" is the inference speed relative to LLaVA-1.5, and \"Time\" is the average inference time. Inference speed is measured on the A100."
],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>LLM</td><td>Connector</td><td>V-T Num</td><td>Res</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>VisWiz</td><td>\\( SQA_{img} \\)</td><td>Speed (↑)</td></tr><tr><td colspan=\"11\">7B LLM</td></tr><tr><td>Shikra [5]</td><td>Vicuna-7B</td><td>MLP</td><td>257</td><td>224</td><td>-</td><td>-</td><td>77.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>IDEFICS-9B [27]</td><td>LLaMA-7B</td><td>Cross Attn</td><td>257</td><td>224</td><td>-</td><td>38.4</td><td>50.9</td><td>35.5</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>59.3</td><td>78.8</td><td>35.2</td><td>67.1</td><td>-</td></tr><tr><td>Qwen-VL-Chat [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>57.5</td><td>78.2</td><td>38.9</td><td>68.2</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-7B</td><td>MLP</td><td>577</td><td>336</td><td>58.2</td><td>62.0</td><td>78.5</td><td>50.0</td><td>66.8</td><td>1.0×</td></tr><tr><td>LLaVA-1.5+SEC (ours)</td><td>Vicuna-7B</td><td>MLP+SEC</td><td>257</td><td>336</td><td>59.6</td><td>63.2</td><td>78.9</td><td>52.8</td><td>69.6</td><td>1.7×</td></tr><tr><td colspan=\"11\">13B LLM</td></tr><tr><td>InstructBLIP [8]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>-</td><td>49.5</td><td>-</td><td>33.4</td><td>63.1</td><td>-</td></tr><tr><td>BLIP-2 [29]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>-</td><td>41.0</td><td>41.0</td><td>19.5</td><td>61.0</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-13B</td><td>MLP</td><td>577</td><td>336</td><td>61.2</td><td>63.3</td><td>80.0</td><td>53.6</td><td>71.6</td><td>1.0×</td></tr><tr><td>LLaVA1.5+SEC (ours)</td><td>Vicuna-13B</td><td>MLP+SEC</td><td>257</td><td>336</td><td>62.3</td><td>64.3</td><td>80.0</td><td>54.7</td><td>72.0</td><td>1.8×</td></tr></table>",
"bbox": [
117,
321,
854,
494
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 8. Results on General VQA tasks.",
"bbox": [
364,
494,
602,
507
],
"page_idx": 6
},
{
"type": "text",
"text": "effectiveness of SEC in extracting visual information.",
"bbox": [
76,
523,
431,
539
],
"page_idx": 6
},
{
"type": "text",
"text": "4.3. Ablation Study",
"text_level": 1,
"bbox": [
76,
550,
230,
566
],
"page_idx": 6
},
{
"type": "text",
"text": "In this section, we present some of the ablation study results for SEC; more results can be found in the Appendix.",
"bbox": [
76,
574,
468,
604
],
"page_idx": 6
},
{
"type": "text",
"text": "Number of Vision Tokens in Each Cluster. The number of vision tokens has a significant impact on the performance and speed of the model. We thoroughly investigate the effect of the number of vision tokens on SECViT. As shown in Tab. 10, the number of vision tokens in each cluster greatly influences the model's performance. Specifically, in downstream dense prediction tasks, having too few tokens in each cluster leads to substantial performance degradation. When the number of tokens in each cluster is too large, the model's performance does not improve significantly, but its speed decreases.",
"bbox": [
75,
625,
468,
790
],
"page_idx": 6
},
{
"type": "text",
"text": "4.4. Visualization of SEC",
"text_level": 1,
"bbox": [
76,
800,
272,
816
],
"page_idx": 6
},
{
"type": "text",
"text": "To further understand the working mechanism of SEC, we visualize some clustering results for SECViT. As shown in Fig. 4, the left side presents the clustering results of vision tokens at different stages of the model. From the clustering results, we analyze that in the shallow layers, the model",
"bbox": [
75,
824,
470,
902
],
"page_idx": 6
},
{
"type": "text",
"text": "distinguishes fine-grained features well, while in the deeper layers, it captures global semantic features effectively. The right side shows the Grad-CAM diagrams at different stages of the model, from which we can draw similar conclusions to the clustering results. More visualization results can be found in the Appendix.",
"bbox": [
496,
523,
890,
616
],
"page_idx": 6
},
{
"type": "text",
"text": "Number of Vision Tokens Output by SEC. MLLMs are quite sensitive to the number of vision tokens. We conduct a detailed exploration based on LLaVA-1.5 regarding the number of vision tokens output by SEC, as shown in Tab. 11. The first row represents the speed and performance of the original LLaVA-1.5 without SEC. Compared to LLaVA-1.5, employing SEC effectively reduces the number of vision tokens and improves training efficiency. As the number of vision tokens decreases, the model's performance shows a slight decline, but its efficiency is further enhanced.",
"bbox": [
496,
633,
892,
800
],
"page_idx": 6
},
{
"type": "text",
"text": "5. Conclusion",
"text_level": 1,
"bbox": [
500,
814,
619,
830
],
"page_idx": 6
},
{
"type": "text",
"text": "We propose a simple and straightforward clustering method for vision tokens—Semantic Equitable Clustering (SEC). This method assigns each token to a cluster by calculating the similarity between each token and a global token, and",
"bbox": [
496,
839,
893,
902
],
"page_idx": 6
},
{
"type": "page_number",
"text": "4025",
"bbox": [
482,
944,
514,
955
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/c1bba0b8bb52a32dc26933269128d730ddd2a9865c45bfa9fe9a360ff72e4cc1.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>LLM</td><td>Connector</td><td>V-T Num</td><td>Res</td><td>POPE</td><td>MMB</td><td>MM-Vet</td><td>Speed (↑)</td></tr><tr><td colspan=\"9\">7B LLM</td></tr><tr><td>MiniGPT-4 [60]</td><td>Vicuna-7B</td><td>Resampler</td><td>32</td><td>224</td><td>72.2</td><td>24.3</td><td>22.1</td><td>-</td></tr><tr><td>mPLUG-Ow12 [57]</td><td>LLaMA2-7B</td><td>Resampler</td><td>32</td><td>224</td><td>-</td><td>49.4</td><td>-</td><td>-</td></tr><tr><td>LLaMA-AdapterV2 [14]</td><td>LLaMA2-7B</td><td>LLaMA-Adapter</td><td>257</td><td>224</td><td>-</td><td>41.0</td><td>31.4</td><td>-</td></tr><tr><td>Shikra [5]</td><td>Vicuna-7B</td><td>MLP</td><td>257</td><td>224</td><td>-</td><td>58.8</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>38.2</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL-Chat [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>60.6</td><td>-</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-7B</td><td>MLP</td><td>577</td><td>336</td><td>86.1</td><td>64.3</td><td>31.1</td><td>1.0×</td></tr><tr><td>LLaVA1.5+SEC (ours)</td><td>Vicuna-7B</td><td>MLP+SEC</td><td>145</td><td>336</td><td>86.1</td><td>68.4</td><td>31.7</td><td>2.1×</td></tr><tr><td colspan=\"9\">13B LLM</td></tr><tr><td>MiniGPT-4 [60]</td><td>Vicuna-13B</td><td>Resampler</td><td>32</td><td>224</td><td>-</td><td>-</td><td>24.4</td><td>-</td></tr><tr><td>BLIP-2 [29]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>85.3</td><td>-</td><td>22.4</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-13B</td><td>MLP</td><td>577</td><td>336</td><td>86.2</td><td>67.7</td><td>36.1</td><td>1.0×</td></tr><tr><td>LLaVA-1.5+SEC (ours)</td><td>Vicuna-13B</td><td>MLP+SEC</td><td>145</td><td>336</td><td>86.4</td><td>69.2</td><td>37.3</td><td>2.2×</td></tr></table>",
"bbox": [
102,
88,
867,
284
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/c760fd9f8f0d078c2bb8be621b61cf32610c29ee8b18ee63cb67cebf90532cad.jpg",
"table_caption": [
"Table 9. Results on benchmarks designed for MLLMs."
],
"table_footnote": [],
"table_body": "<table><tr><td>V-T num</td><td>Params (M)</td><td>FLOPs (G)</td><td>Throughput (imgs/s)</td><td>Acc</td><td>APb</td><td>APm</td><td>mIoU</td></tr><tr><td>98</td><td>15</td><td>2.5</td><td>2004</td><td>82.7</td><td>47.8</td><td>43.0</td><td>47.2</td></tr><tr><td>196</td><td>15</td><td>3.1</td><td>1722</td><td>83.0 (+0.3)</td><td>48.2 (+0.4)</td><td>43.4 (+0.4)</td><td>47.5 (+0.3)</td></tr><tr><td>64</td><td>15</td><td>2.5</td><td>1946</td><td>82.7 (+0.0)</td><td>47.8 (+0.0)</td><td>42.8 (-0.2)</td><td>46.9 (-0.3)</td></tr><tr><td>49</td><td>15</td><td>2.4</td><td>2102</td><td>82.6 (-0.1)</td><td>47.5 (-0.3)</td><td>42.7 (-0.3)</td><td>47.7 (-0.5)</td></tr><tr><td>24</td><td>15</td><td>2.3</td><td>2186</td><td>82.0 (-0.7)</td><td>45.9 (-1.9)</td><td>40.6 (-2.4)</td><td>44.6 (-2.6)</td></tr></table>",
"bbox": [
98,
306,
447,
455
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/739005363da79434f53381e97d751e222d3b9da27106c33b781ac1e713904cbb.jpg",
"table_caption": [
"Table 10. Effect of the number of vision tokens in each cluster. \"V-T num\" means the number of vision tokens in each cluster. The experiments are conducted based on SECViT-T."
],
"table_footnote": [],
"table_body": "<table><tr><td>V-T num</td><td colspan=\"2\">Time Speed</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>POPE</td><td>MM-Vet</td></tr><tr><td>576+1</td><td>21h</td><td>1.0×</td><td>58.2</td><td>62.0</td><td>78.5</td><td>86.1</td><td>31.1</td></tr><tr><td>288+1</td><td>14h</td><td>1.5×</td><td>60.1(+1.9)</td><td>63.5(+1.5)</td><td>78.9(+0.4)</td><td>87.7(+1.6)</td><td>33.2(+2.1)</td></tr><tr><td>256+1</td><td>13h</td><td>1.6×</td><td>59.6(+1.4)</td><td>63.2(+0.3)</td><td>78.6(+0.1)</td><td>87.1(+1.0)</td><td>32.7(+1.6)</td></tr><tr><td>192+1</td><td>11h</td><td>1.9×</td><td>57.7(-0.5)</td><td>62.7(+0.7)</td><td>78.4(-0.1)</td><td>86.7(+0.6)</td><td>32.1(+1.0)</td></tr><tr><td>144+1</td><td>10h</td><td>2.1×</td><td>56.8(-1.4)</td><td>62.0(+0.0)</td><td>78.0(-0.5)</td><td>86.1(+0.0)</td><td>31.7(+0.6)</td></tr></table>",
"bbox": [
81,
503,
468,
589
],
"page_idx": 7
},
{
"type": "text",
"text": "Table 11. Effect of the number of vision tokens outputs by SEC. \"V-T num\" means the number of vision tokens output by SEC. The experiments are conducted based on LLaVA-1.5 [36].",
"bbox": [
76,
590,
468,
633
],
"page_idx": 7
},
{
"type": "text",
"text": "completes the whole clustering process in only one step. Our clustering method takes into account the semantic information contained in the tokens, and ensures an equal number of tokens in each cluster, facilitating efficient parallel processing on modern GPUs. Based on Semantic Equitable Clustering, we designed SECViT, a versatile vision backbone that achieves impressive results across various vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. Besides, SEC can also be conveniently applied to multimodal large language models (MLLM) to serve as a vision language connector and benefits the model's efficiency.",
"bbox": [
75,
648,
468,
830
],
"page_idx": 7
},
{
"type": "text",
"text": "6. Acknowledgements",
"text_level": 1,
"bbox": [
76,
845,
264,
862
],
"page_idx": 7
},
{
"type": "text",
"text": "This work is partially funded by Beijing Natural Science Foundation (4252054), Youth Innovation Promotion",
"bbox": [
75,
869,
468,
900
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/dad77df489835b054acdb275ca0d0404b2aec373fd532a5ac9ebaf2e3f7b9f41.jpg",
"image_caption": [
"Figure 4. Visualization for SEC."
],
"image_footnote": [],
"bbox": [
500,
305,
888,
795
],
"page_idx": 7
},
{
"type": "text",
"text": "Association CAS(Grant No.2022132), Beijing Nova Program(20230484276), and CCF-Kuaishou Large Model Explorer Fund (NO. CCF-KuaiShou 2024005).",
"bbox": [
498,
825,
890,
872
],
"page_idx": 7
},
{
"type": "page_number",
"text": "4026",
"bbox": [
482,
944,
514,
955
],
"page_idx": 7
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
78,
89,
173,
104
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[1] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond, 2023. 2, 6, 7, 8",
"[2] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In CVPR, 2018. 5",
"[3] Junbum Cha, Wooyoung Kang, Jonghwan Mun, and Byungseok Roh. Honeybee: Locality-enhanced projector for multimodal llm. In CVPR, 2024. 2",
"[4] Kai Chen, Jiaqi Wang, Jiangmiao Pang, et al. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. 5",
"[5] Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm's referential dialogue magic, 2023. 7, 8",
"[6] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. In ICLR, 2023. 3",
"[7] MMSegmentation Contributors. Mmsegmentation, an open source semantic segmentation toolbox, 2020. 5",
"[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. 7",
"[9] Mingyu Ding, Bin Xiao, Noel Codella, et al. Davit: Dual attention vision transformers. In ECCV, 2022. 2, 5",
"[10] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, et al. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In CVPR, 2022. 1, 2, 3, 6",
"[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 1, 2",
"[12] Qihang Fan, Huaibo Huang, Xiaoqiang Zhou, and Ran He. Lightweight vision transformer with bidirectional interaction. In NeurIPS, 2023. 3, 5, 6",
"[13] Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, and Ran He. Rmt: Retentive networks meet vision transformers. In CVPR, 2024. 5, 6",
"[14] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model, 2023. 8",
"[15] Sucheng Ren, Xingyi Yang, Songhua Liu, and Xinchao Wang. SG-Former: Self-guided transformer with evolving token reallocation. In ICCV, 2023. 5, 6",
"[16] Jianyuan Guo, Kai Han, Han Wu, Chang Xu, Yehui Tang, Chunjing Xu, and Yunhe Wang. Cmt: Convolutional neural networks meet vision transformers. In CVPR, 2022. 2",
"[17] Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, and Shi-Min Hu. Visual attention network. arXiv preprint arXiv:2202.09741, 2022. 6",
"[18] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and"
],
"bbox": [
78,
114,
468,
901
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Gao Huang. Demystify mamba in vision: A linear attention perspective. In NeurIPS, 2024. 5, 6",
"[19] Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. Neighborhood attention transformer. In CVPR, 2023. 5, 6",
"[20] Ali Hatamizadeh, Hongxu Yin, Greg Heinrich, Jan Kautz, and Pavlo Molchanov. Global context vision transformers. In ICML, 2023. 5, 6",
"[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask r-cnn. In ICCV, 2017. 5",
"[22] Huaibo Huang, Xiaoqiang Zhou, Jie Cao, Ran He, and Tieniu Tan. Vision transformer with super token sampling. In CVPR, 2023. 6",
"[23] Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver: General perception with iterative attention, 2021. 2",
"[24] Zi-Hang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, and Jiashi Feng. All tokens matter: Token labeling for training better vision transformers. In NeurIPS, 2021. 2",
"[25] Manjin Kim, Paul Hongsuck Seo, Cordelia Schmid, and Minsu Cho. Learning correlation structures for vision transformers. In CVPR, 2024. 5, 6",
"[26] Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dollár. Panoptic feature pyramid networks. In CVPR, 2019. 5",
"[27] Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander Rush, Douwe Kiela, et al. Obelics: An open web-scale filtered dataset of interleaved image-text documents. In NeurIPS, 2024. 7",
"[28] Youngwan Lee, Jonghee Kim, Jeffrey Willette, and Sung Ju Hwang. Mpvit: Multi-path vision transformer for dense prediction. In CVPR, 2022. 5, 6",
"[29] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, 2023. 7, 8",
"[30] Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unified transformer for efficient spatiotemporal representation learning, 2022. 6",
"[31] Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, Zhiyuan Chen, Jiangbin Zheng, and Stan Z. Li. Moganet: Multi-order gated aggregation network. In ICLR, 2024. 6",
"[32] Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang, and Pengtao Xie. Not all patches are what you need: Expediting vision transformers via token reorganizations. In ICLR, 2022. 2, 4, 6",
"[33] Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017. 5",
"[34] Weifeng Lin, Ziheng Wu, Jiayu Chen, Jun Huang, and Lianwen Jin. Scale-aware modulation meet transformer. In ICCV, 2023. 5, 6"
],
"bbox": [
501,
92,
890,
898
],
"page_idx": 8
},
{
"type": "page_number",
"text": "4027",
"bbox": [
482,
944,
514,
955
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[35] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. 7, 8",
"[36] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. 2, 3, 4, 6, 8",
"[37] Kai Liu, Tianyi Wu, Cong Liu, and Guodong Guo. Dynamic group transformer: A general vision transformer backbone with dynamic group attention. In IJCAI, 2022. 1, 2",
"[38] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 1, 2, 3, 4, 5",
"[39] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, et al. A convnet for the 2020s. In CVPR, 2022. 5",
"[40] Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, and Cho-Jui Hsieh. Dynamicvit: Efficient vision transformers with dynamic token sparsification. In NeurIPS, 2021. 2",
"[41] Chenyang Si, Weihao Yu, Pan Zhou, Yichen Zhou, Xinchao Wang, and Shuicheng Yan. Inception transformer. In NeurIPS, 2022. 2, 5",
"[42] Shitao Tang, Jiahui Zhang, Siyu Zhu, et al. Quadtree attention for vision transformers. In ICLR, 2022. 5",
"[43] Hugo Touvron, Matthieu Cord, Matthijs Douze, et al. Training data-efficient image transformers & distillation through attention. In ICML, 2021. 2, 4, 5",
"[44] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In ECCV, 2022. 1, 2, 5",
"[45] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021. 2",
"[46] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvtv2: Improved baselines with pyramid vision transformer. Computational Visual Media, 8(3):1-10, 2022. 2, 5, 6",
"[47] Wenxiao Wang, Lu Yao, Long Chen, Binbin Lin, Deng Cai, Xiaofei He, and Wei Liu. Crossformer: A versatile vision transformer hinging on cross-scale attention. In ICLR, 2022. 1, 5",
"[48] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In CVPR, 2023. 5, 6",
"[49] Yulin Wang, Rui Huang, Shiji Song, Zeyi Huang, and Gao Huang. Not all images are worth 16x16 words: Dynamic vision transformers with adaptive sequence length. In NeurIPS, 2021. 5",
"[50] Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext v2: Co-designing and scaling convnets with masked autoencoders. arXiv preprint arXiv:2301.00808, 2023. 5",
"[51] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In CVPR, 2022. 2"
],
"bbox": [
78,
90,
468,
898
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"[52] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018. 5",
"[53] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In CVPR, 2022. 5",
"[54] Chenglin Yang, Siyuan Qiao, Qihang Yu, et al. Moat: Alternating mobile convolution and attention brings strong vision models. In ICLR, 2023. 5",
"[55] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal self-attention for local-global interactions in vision transformers. In NeurIPS, 2021. 6",
"[56] Rui Yang, Hailong Ma, Jie Wu, Yansong Tang, Xuefeng Xiao, Min Zheng, and Xiu Li. Scalablevit: Rethinking the context-oriented generalization of vision transformer. In ECCV, 2022. 6",
"[57] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug-owl: Modularization empowers large language models with multimodality, 2024. 8",
"[58] Xiaoyu Yue, Shuyang Sun, Zhanghui Kuang, Meng Wei, Philip HS Torr, Wayne Zhang, and Dahua Lin. Vision transformer with progressive sampling. In ICCV, 2021. 5",
"[59] Wang Zeng, Sheng Jin, Wentao Liu, Chen Qian, Ping Luo, Wanli Ouyang, and Xiaogang Wang. Not all tokens are equal: Human-centric visual analysis via token clustering transformer. In CVPR, 2022. 5",
"[60] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. 8",
"[61] Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, and Rynson Lau. Biformer: Vision transformer with bi-level routing attention. In CVPR, 2023. 2, 5"
],
"bbox": [
501,
90,
890,
614
],
"page_idx": 9
},
{
"type": "page_number",
"text": "4028",
"bbox": [
482,
944,
514,
955
],
"page_idx": 9
}
]
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/755b1314-9e24-437c-8f51-19b61bb62095_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6192dd5e0b2121b2c6c78312e1d16fe963b8198ea17948a55133c7eada8e7c8f
size 2543314
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/full.md
ADDED
@@ -0,0 +1,329 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Semantic Equitable Clustering: A Simple and Effective Strategy for Clustering Vision Tokens

Qihang Fan $^{1,2}$ , Huaibo Huang $^{1*}$ , Mingrui Chen $^{1,2}$ , Ran He $^{1,2}$

$^{1}$ MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
$^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
fanqihang.159@gmail.com, huaibo.huang@cripac.ia.ac.cn, pharmier@hust.edu.cn, rhe@nlpr.ia.ac.cn

# Abstract

The Vision Transformer (ViT) has gained prominence for its superior relational modeling prowess. However, its global attention mechanism's quadratic complexity poses substantial computational burdens. A common remedy spatially groups tokens for self-attention, reducing computational requirements. Nonetheless, this strategy neglects semantic information in tokens, possibly scattering semantically-linked tokens across distinct groups, thus compromising the efficacy of self-attention intended for modeling inter-token dependencies. Motivated by these insights, we introduce a fast and balanced clustering method, named Semantic Equitable Clustering (SEC). SEC clusters tokens based on their global semantic relevance in an efficient, straightforward manner. In contrast to traditional clustering methods requiring multiple iterations, our method achieves token clustering in a single pass. Additionally, SEC regulates the number of tokens per cluster, ensuring a balanced distribution for effective parallel processing on current computational platforms without necessitating further optimization. Capitalizing on SEC, we propose a versatile vision backbone, SECViT. Comprehensive experiments in image classification, object detection, instance segmentation, and semantic segmentation validate the effectiveness of SECViT. Moreover, SEC can be conveniently and swiftly applied to multimodal large language models (MLLM), such as LLaVA, to serve as a vision language connector, substantially improving the model's efficiency while maintaining or even improving performance.

# 1. Introduction

(a) Window Partition: $\times$ Semantic Information, $\checkmark$ Cluster Efficiency, $\checkmark$ Equi-partition. (b) k-means: $\checkmark$ Semantic Information, $\times$ Cluster Efficiency, $\times$ Equi-partition. (c) Semantic Equitable Clustering: $\checkmark$ Semantic Information, $\checkmark$ Cluster Efficiency, $\checkmark$ Equi-partition.

Figure 1. Comparison among Window Partition, Dynamic Group by k-means, and Semantic Equitable Clustering. Our Semantic Equitable Clustering incorporates image semantics while maintaining efficient clustering, eliminating the need for iterative processes such as in k-means. Furthermore, it enables equipartitioning of tokens, promoting efficient GPU processing without necessitating additional CUDA optimization.

Since its inception, the Vision Transformer (ViT) [11] has drawn considerable interest from the research community due to its robust modeling prowess. However, the quadratic
complexity of Self-Attention leads to significant computational overhead, thus constraining the practicality of ViT. A variety of strategies have been devised to alleviate this computational load, the most prevalent of which involves token grouping, thereby constraining the attention span of each token[10, 38, 44, 47].

Specifically, the Swin-Transformer [38] partitions tokens into multiple small windows, restricting token attention within each window. The CSWin-Transformer [10] adopts a cross-shaped grouping, endowing each token with a global receptive field. MaxViT [44] amalgamates window and grid attention, enabling intra-window tokens to attend to their counterparts in other windows. However, these methods, solely reliant on spatial positioning, neglect token semantics, potentially restricting self-attention's capacity to model semantic dependencies. To mitigate this, DGT [37] employs k-means clustering for query grouping, considering the semantic information of tokens for enhanced feature learning. Nonetheless, the iterative nature of k-means clustering and the potential for uneven token counts per cluster can impact the efficiency of parallel attention operations.

Figure 2. Left: Top-1 accuracy vs. FLOPs on ImageNet-1K of recent SOTA models. Right: Comparison among different vision language connectors on LLaVA-1.5.

Given these considerations, an optimal token partitioning scheme should efficiently segregate tokens, incorporate semantic information, and efficiently utilize computational resources (e.g., GPU). In response, we introduce a simple, fast and equitable clustering approach named Semantic Equitable Clustering (SEC). SEC segments tokens based on their relevance to global semantic information. Specifically, we employ global pooling to generate a global token encapsulating global semantic information. The similarity between this global token and all other tokens is then computed, reflecting global semantic relevance. Upon obtaining the similarity matrix, tokens (excluding the global token) are sorted by similarity scores, and the tokens with similar scores are grouped into clusters, ensuring uniform token distribution across clusters. As depicted in Fig. 1, SEC comprehensively considers token semantics and completes the clustering process in a single iteration, unlike the multi-iteration k-means. The resulting clusters, containing an equal number of tokens, can be processed in parallel by the GPU efficiently.
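
The steps just described (pool a global token, score every token's similarity to it, sort once, split evenly) can be sketched in a few lines of NumPy. This is an illustrative sketch based on the description above, not the authors' implementation; the function name and the mean-pooled center are our assumptions:

```python
import numpy as np

def semantic_equitable_clustering(tokens: np.ndarray, num_clusters: int) -> np.ndarray:
    """Cluster tokens by their similarity to a global (average-pooled) token.

    tokens: (N, C) array; N must be divisible by num_clusters.
    Returns an index array of shape (num_clusters, N // num_clusters):
    row i holds the indices of the tokens assigned to cluster i.
    """
    n, _ = tokens.shape
    assert n % num_clusters == 0, "equal-sized clusters require divisibility"
    # Global token: average pooling over all tokens (the single cluster center).
    center = tokens.mean(axis=0)
    # Cosine similarity between every token and the global token.
    sim = tokens @ center / (np.linalg.norm(tokens, axis=1) * np.linalg.norm(center) + 1e-8)
    # A single sort replaces the iterative assignment step of k-means...
    order = np.argsort(sim)
    # ...and reshaping the sorted indices yields equal-sized clusters.
    return order.reshape(num_clusters, n // num_clusters)

idx = semantic_equitable_clustering(np.random.randn(16, 8), num_clusters=4)
print(idx.shape)  # (4, 4): four clusters of four tokens each
```

Because every cluster has exactly `N / num_clusters` tokens, the per-cluster attention can be run as one batched matrix multiplication with no padding.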

Building upon Semantic Equitable Clustering (SEC), we introduce the Semantic Equitable Clustering Vision Transformer (SECViT), a versatile vision backbone that is adaptable to a wide spectrum of downstream tasks. As demonstrated in Fig. 2, SECViT exhibits significant performance improvements compared to previous state-of-the-art (SOTA) models. Impressively, SECViT attains an accuracy of $84.3\%$ utilizing merely 4.6 GFLOPs, without the need for additional training data or supervision. This superior performance is maintained across different model scales. Furthermore, SECViT proves its proficiency in downstream tasks, including but not limited to object detection, instance segmentation, and semantic segmentation.

Beyond vision tasks, we also apply SEC to multimodal large language models (MLLM) such as LLaVA-1.5 [36] to serve as an efficient vision language connector. Specifically, we use SEC to cluster the vision tokens, and then merge all the tokens at corresponding positions within each cluster into a single token. Experiments demonstrate that this approach significantly enhances the efficiency of LLaVA-1.5 while improving the model's performance.

# 2. Related Works

Vision Transformer. The Vision Transformer (ViT) [11] is considered a powerful visual architecture. Many works have improved the Vision Transformer, including enhancing its training efficiency and reducing its computational cost [10, 24, 38, 43, 61]. DeiT [43] uses distillation loss and incorporates extensive data augmentation methods into the ViT training process. Hierarchical structures represented by PVT [16, 41, 45, 46, 51] reduce the number of tokens in global attention by downsampling the keys and values (KV), thereby lowering the computational cost. In addition, some methods directly prune tokens based on their importance, retaining only the important ones [32, 40]. This reduces the number of tokens and subsequently lowers the computational cost of the model. Another highly representative approach is to group all tokens such that each token can only attend to tokens within its own group [9, 10, 37, 38, 61]. This also significantly reduces the computational cost of self-attention.

Grouping-Based Vision Transformer. Most grouping-based attention mechanisms perform grouping based on spatial structure [9, 10, 37, 38, 44]. Specifically, the Swin-Transformer [38] divides all tokens into equally sized windows based on their spatial positions, where each token can only attend to tokens within its own window. This significantly reduces the model's computational cost. In addition to dividing tokens into small windows along the spatial dimension, DaViT [9] also splits channels into multiple groups along the channel dimension. Unlike the above methods that only consider positional information for grouping, DGT [37] takes semantic information into account by using k-means clustering to group the queries.
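
For comparison, the purely spatial window partition used by Swin-style models amounts to a reshape and transpose; a minimal NumPy sketch (the window size, tensor layout, and function name are illustrative assumptions):

```python
import numpy as np

def window_partition(x: np.ndarray, window: int) -> np.ndarray:
    """Split an (H, W, C) feature map into non-overlapping windows.

    Returns (num_windows, window*window, C): each row is the set of
    tokens allowed to attend to each other under window attention.
    """
    h, w, c = x.shape
    assert h % window == 0 and w % window == 0
    x = x.reshape(h // window, window, w // window, window, c)
    x = x.transpose(0, 2, 1, 3, 4)  # bring the two window axes together
    return x.reshape(-1, window * window, c)

windows = window_partition(np.zeros((8, 8, 3)), window=4)
print(windows.shape)  # (4, 16, 3)
```

Note that the grouping depends only on token coordinates, never on token content, which is exactly the limitation SEC targets.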

Vision Language Connector. The vision language connector is a critical component in MLLMs [3, 23, 36]. It aligns vision tokens with language tokens. Typical vision language connectors include MLP [36], Resampler [1], C-Abstractor [3], and others. Although MLP performs well, it introduces a significant number of vision tokens, which hampers the model's efficiency. On the other hand, connectors like Resampler improve the model's efficiency, but at the cost of reduced performance. Unlike these methods, our proposed SEC considers the semantic information of each token and significantly enhances the model's efficiency while maintaining its performance.

Figure 3. (a) Illustration of SECViT. (b) Applying SEC to vision language connector. (c) Illustration of Semantic Equitable Clustering for ViT and vision language connector.

# 3. Method

# 3.1. Overall Architecture

The overall architecture of SECViT is shown in Fig. 3(a). SECViT consists of four stages with downsampling factors of $\frac{1}{4}$ , $\frac{1}{8}$ , $\frac{1}{16}$ , and $\frac{1}{32}$ , respectively. This structural design facilitates downstream tasks, such as object detection, in constructing feature pyramids. A SECViT block is composed of three modules. For each block, the input tensor $X_{in} \in \mathbb{R}^{C \times H \times W}$ is fed into the CPE to introduce positional information. Then, the Self-Attention based on Semantic Equitable Clustering (SEC) serves as the token mixer. The final FFN integrates channel-wise information across tokens.
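
As a quick sanity check on the stage design (simple arithmetic, not code from the paper), the downsampling factors above give the following feature-map sizes for a standard $224 \times 224$ input:

```python
# Downsampling factors of SECViT's four stages, as stated in the text.
factors = [4, 8, 16, 32]
side = 224
sizes = [(side // f, side // f) for f in factors]
tokens = [h * w for h, w in sizes]
print(sizes)   # [(56, 56), (28, 28), (14, 14), (7, 7)]
print(tokens)  # [3136, 784, 196, 49]
```

The four resolutions match the usual pyramid expected by detection and segmentation heads such as FPN.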

Beyond the design of the backbone, we also utilize SEC in the design of the vision language connector in MLLM [36]. For the vision tokens output by ViT, we use SEC to cluster the tokens. We then use attentive pooling to merge the tokens of each cluster into a single token, thereby reducing the number of vision tokens. The process is shown in Fig. 3(b).
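
A rough NumPy sketch of this connector: cluster with SEC, then merge each cluster into one token. The paper uses attentive pooling for the merge; plain mean pooling stands in for it here, and the shapes (576 input tokens reduced to 144, as in the ablations) are illustrative:

```python
import numpy as np

def sec_connector(vision_tokens: np.ndarray, out_tokens: int) -> np.ndarray:
    """Reduce N vision tokens to out_tokens by merging each SEC cluster.

    Illustrative sketch only: mean pooling is used as a stand-in for
    the attentive pooling described in the text.
    """
    n, _ = vision_tokens.shape
    center = vision_tokens.mean(axis=0)
    sim = vision_tokens @ center / (
        np.linalg.norm(vision_tokens, axis=1) * np.linalg.norm(center) + 1e-8)
    # One sort, then equal-sized clusters of n // out_tokens tokens each.
    order = np.argsort(sim).reshape(out_tokens, n // out_tokens)
    # Merge every cluster into one token -> (out_tokens, C) fed to the LLM.
    return vision_tokens[order].mean(axis=1)

reduced = sec_connector(np.random.randn(576, 64), out_tokens=144)
print(reduced.shape)  # (144, 64)
```

Shrinking 576 vision tokens to 144 shortens the LLM's input sequence by a factor of four, which is where the reported speedups come from.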

# 3.2. Semantic Equitable Clustering

As previously mentioned, the design objectives of Semantic Equitable Clustering are threefold: 1) Fully consider the semantic information contained in different tokens during clustering. 2) Unlike k-means and other clustering methods that require multiple iterations, Semantic Equitable Clustering can complete clustering in a single step. 3) Ensure an equal number of tokens in each cluster to facilitate parallel processing on GPUs. In the following paragraphs, we describe in detail how our Semantic Equitable Clustering achieves these three objectives. The whole process is illustrated in Fig. 3(c).
Single Clustering Center Related to Semantics. k-means is relatively complex for two reasons. First, it has multiple cluster centers, and each token needs to calculate its distance to each cluster center to determine its cluster membership. Second, the determination of each cluster center in k-means is not precise and requires multiple iterations to accurately establish the cluster centers.
To address these two issues, we first discard the use of multiple cluster centers and instead calculate the distance between each token and a single center. Based on each token's distance to this center, we divide the tokens into different intervals. Then, to ensure that our chosen center contains the most comprehensive semantic information, we directly use the result of average pooling of all tokens as the center token. This is because, in most vision foundation models, the output of the average pool is assumed to contain the richest semantic information and is thus used for classification [6, 10, 12, 38]. Specifically, the process for determining the cluster center is shown in Eq. 1:
$$
Q = W_{Q} X, \quad K = W_{K} X, \quad V = W_{V} X, \tag{1}
$$

$$
k_{c} = \operatorname{Pool}(K).
$$
where $W_{Q}$ , $W_{K}$ , and $W_{V}$ are learnable projection matrices, $k_{c}$ is the determined cluster center, and $X$ is the set of input tokens.
Distance Metric Suitable for ViT. Whereas the k-means algorithm computes distances between tokens with the Euclidean metric, Self-Attention computes the similarity between query and key through a dot product. To better match this characteristic of Self-Attention, we also measure the distance between tokens with a dot-product-style metric. Specifically, we calculate the cosine similarity between the cluster center and each token, and then sort the tokens by the computed similarity. The specific process is shown in Eq. 2:
$$
sim = \frac{K \cdot k_{c}}{\|K\| \cdot \|k_{c}\|},
$$

$$
idx = \operatorname{argsort}(sim), \tag{2}
$$

$$
Q^{*} = Q[idx], \quad K^{*} = K[idx], \quad V^{*} = V[idx].
$$
where $sim$ is the similarity between $K$ and $k_{c}$ , and $\operatorname{argsort}(sim)$ returns the indices of $sim$ sorted in descending order. $Q^{*}$ , $K^{*}$ , and $V^{*}$ are $Q$ , $K$ , and $V$ rearranged according to these indices.
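As a concrete illustration, the center selection of Eq. 1 and the similarity ranking of Eq. 2 can be sketched in a few lines of NumPy. This is a minimal single-head sketch under our own assumptions (the function name, shapes, and the small epsilon for numerical stability are ours, not from the paper):

```python
import numpy as np

def rank_tokens(X, W_q, W_k, W_v):
    """Rank tokens by cosine similarity to a single pooled center (Eqs. 1-2).

    X: (L, C) token matrix; W_q, W_k, W_v: (C, C) projection matrices.
    Returns the reordered Q*, K*, V* and the sort indices.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v   # Eq. 1: linear projections
    k_c = K.mean(axis=0)                  # Eq. 1: average pool -> single center
    # Eq. 2: cosine similarity of every key to the center
    sim = (K @ k_c) / (np.linalg.norm(K, axis=1) * np.linalg.norm(k_c) + 1e-8)
    idx = np.argsort(-sim)                # indices sorted by descending similarity
    return Q[idx], K[idx], V[idx], idx
```

Because the center is a plain average pool and the metric is a single cosine similarity per token, the whole ranking costs one matrix-vector product plus one sort, with no iterative refinement.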
Equally Partition Tokens based on Distance. The $Q^{*}, K^{*}$ , and $V^{*}$ obtained in the previous step have been sorted by their distances to the cluster center. For the vision backbone, we group them contiguously, so tokens with similar distances to the cluster center fall into the same cluster. This allows us to enforce an equal number of tokens in each cluster. This process is illustrated in Fig. 3(c) and denoted as follows:
$$
Q_{m} = Q^{*}[m \times N : (m + 1) N],
$$

$$
K_{m} = K^{*}[m \times N : (m + 1) N], \tag{3}
$$

$$
V_{m} = V^{*}[m \times N : (m + 1) N].
$$
where $N$ is the basic number of tokens in each cluster for the equal partition and $m$ is the index of the cluster.
Based on the above steps, we have completed the clustering process that captures semantic information in the image with minimal sorting cost. Moreover, compared to k-means, we have achieved equi-partitioning of each cluster. After clustering is completed, we apply standard Self-Attention to the tokens within each cluster, thereby completing the interaction of information between tokens:
$$
Y_{m} = \operatorname{Attn}\left(Q_{m}, K_{m}, V_{m}\right). \tag{4}
$$
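The equal partition of Eq. 3 followed by the per-cluster attention of Eq. 4 can be sketched as follows. This is a minimal NumPy sketch, assuming a single head and a token count that divides evenly by the number of clusters; the function and variable names are our own:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sec_attention(Q, K, V, num_clusters):
    """Contiguously partition similarity-sorted tokens into equal-size clusters
    (Eq. 3), then run standard attention inside each cluster (Eq. 4)."""
    L, C = Q.shape
    N = L // num_clusters                        # tokens per cluster
    Y = np.empty_like(V)
    for m in range(num_clusters):
        s = slice(m * N, (m + 1) * N)            # one contiguous cluster
        A = softmax(Q[s] @ K[s].T / np.sqrt(C))  # scaled dot-product attention
        Y[s] = A @ V[s]
    return Y
```

Because every cluster holds exactly $N$ tokens, the per-cluster attentions have identical shapes and can be batched on a GPU; with `num_clusters=1` the sketch reduces to ordinary full attention.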
For the design of the vision language connector, we group the tokens according to their similarity, and the tokens within each group are interleaved, as shown in Eq. 5:
$$
Q_{n} = Q^{*}[n : N : L],
$$

$$
K_{n} = K^{*}[n : N : L], \tag{5}
$$

$$
V_{n} = V^{*}[n : N : L].
$$
in which $L$ is the sequence length of the tokens, $n$ is the index of the token group, and $N$ is the basic number of tokens in each cluster. After obtaining the token groups, we perform pooling on $Q$ to effectively reduce the number of tokens input to the LLM, with each group's output becoming a single token, as shown in Eq. 6.
$$
Y_{n} = \operatorname{Attn}\left(\operatorname{Pool}\left(Q_{n}\right), K_{n}, V_{n}\right). \tag{6}
$$
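The connector variant of Eqs. 5 and 6 can be sketched similarly. We read the paper's $Q^{*}[n:N:L]$ slicing as a strided (interleaved) selection, which is one plausible interpretation of "interleaved" groups; the sketch below is a single-head NumPy illustration under that assumption, with mean pooling standing in for $\operatorname{Pool}$:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sec_connector(Q, K, V, tokens_per_group):
    """Interleaved grouping of similarity-sorted tokens (Eq. 5), then attentive
    pooling (Eq. 6): each group's queries are pooled into one query, so every
    group emits a single output token."""
    L, C = Q.shape
    G = L // tokens_per_group                    # number of groups
    out = np.empty((G, C))
    for n in range(G):
        Qn, Kn, Vn = Q[n::G], K[n::G], V[n::G]   # strided slice = one group
        q = Qn.mean(axis=0, keepdims=True)       # Pool(Q_n): one query per group
        A = softmax(q @ Kn.T / np.sqrt(C))
        out[n] = (A @ Vn)[0]
    return out
```

With 16 sorted tokens and 4 tokens per group, the sketch compresses the sequence to 4 output tokens, mirroring how SEC shrinks the number of vision tokens fed to the LLM.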
# 3.3. Difference between SEC and EViT.
We use the most representative example, EViT [32], to illustrate the differences between SEC and other methods based on the similarity between the global token and other tokens.
Pruning v.s. Clustering. Most similarity-based methods, such as EViT, are pruning methods, where tokens with low similarity to the [cls] token are merged during the forward process, thereby reducing the number of tokens and decreasing computational cost. In contrast, our proposed SECViT employs a clustering-based approach, performing attention operations within each cluster.
The role of the [cls] token. In methods like EViT, the [cls] token serves as a measure of a token's importance. Each token computes its similarity to the [cls] token, and tokens with higher similarity are deemed more important; the less important tokens are discarded. In contrast, in SEC, the [cls] token (obtained by average pooling over all tokens) measures similarity between tokens: each token computes its similarity score to the [cls] token, tokens with similar scores are considered more alike and are grouped into one cluster, and attention is calculated only within the same cluster.
# 4. Experiments
We first make strict comparisons with hierarchical/plain baselines. Then we conduct experiments on a wide range of vision tasks for SECViT, including image classification, object detection, instance segmentation, and semantic segmentation. We also verify the role of SEC in MLLMs based on LLaVA-1.5 [36]. More details, experiments, and a comparison of the models' efficiency can be found in the Appendix.
# 4.1. SEC for vision models
Strict Comparison with Baselines. We select two baselines, the hierarchical backbone Swin-Transformer [38] and the plain backbone DeiT [43], to compare with our SEC-based models. In the comparison models (SEC-Swin
<table><tr><td>Model</td><td>Params (M)</td><td>FLOPs (G)</td><td>Throughput (imgs/s)</td><td>Acc</td><td>APb</td><td>APm</td><td>mIoU</td></tr><tr><td>DeiT-S [43]</td><td>22</td><td>4.6</td><td>3204</td><td>79.8</td><td>44.5</td><td>40.1</td><td>43.0</td></tr><tr><td>EViT-DeiT-S (keeprate=0.9)</td><td>22</td><td>4.0</td><td>3428</td><td>79.8</td><td>not suit</td><td>not suit</td><td>not suit</td></tr><tr><td>SEC-DeiT-S (num_cluster=4)</td><td>22</td><td>4.1</td><td>3412</td><td>80.5 (+0.7)</td><td>47.7 (+3.2)</td><td>42.7 (+2.6)</td><td>47.5 (+4.5)</td></tr><tr><td>DeiT-B</td><td>86</td><td>17.6</td><td>1502</td><td>81.8</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SEC-DeiT-B</td><td>86</td><td>14.8</td><td>1682</td><td>82.4 (+0.6)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Swin-T</td><td>29</td><td>4.5</td><td>1723</td><td>81.3</td><td>43.7</td><td>39.8</td><td>44.5</td></tr><tr><td>SEC-Swin-T</td><td>29</td><td>4.8</td><td>1482</td><td>83.8 (+2.5)</td><td>48.3 (+4.6)</td><td>43.4 (+3.6)</td><td>49.3 (+4.8)</td></tr><tr><td>Swin-S</td><td>50</td><td>8.8</td><td>1006</td><td>83.0</td><td>45.7</td><td>41.1</td><td>47.6</td></tr><tr><td>SEC-Swin-S</td><td>50</td><td>9.2</td><td>804</td><td>85.0 (+2.0)</td><td>50.2 (+4.5)</td><td>44.7 (+3.6)</td><td>51.3 (+3.7)</td></tr></table>
Table 1. Comparison with hierarchical/plain baselines. Inference speeds are measured on an A100 GPU.
<table><tr><td>Model</td><td>Params (M)</td><td>FLOPs (G)</td><td>Method</td><td>Pretrain epoch</td><td>Acc(%)</td></tr><tr><td>Swin-B [38]</td><td>88</td><td>15.4</td><td>Supervised</td><td>-</td><td>83.5</td></tr><tr><td>ConvNeXt V2-B [50]</td><td>88</td><td>15.4</td><td>Supervised</td><td>-</td><td>84.3</td></tr><tr><td>SEC-Swin-B</td><td>88</td><td>16.2</td><td>Supervised</td><td>-</td><td>85.3</td></tr><tr><td>Swin-B [38]</td><td>88</td><td>15.4</td><td>SimMIM [53]</td><td>800</td><td>84.0(+0.5)</td></tr><tr><td>ConvNeXt V2-B [50]</td><td>88</td><td>15.4</td><td>FCMAE [50]</td><td>800</td><td>84.6(+0.3)</td></tr><tr><td>SEC-Swin-B</td><td>88</td><td>16.2</td><td>SimMIM [53]</td><td>800</td><td>85.9(+0.6)</td></tr></table>
and SEC-DeiT), we merely substitute the attention mechanism in the original model with our SEC-based Self-Attention, without introducing any other modules. As shown in Tab. 1, we conduct experiments on image classification, object detection, instance segmentation, and semantic segmentation; the simple replacement of the attention mechanism yields significant advantages in both performance and efficiency.
In addition to the supervised scenario, we also train the model with SimMIM [53] in the self-supervised scenario. As shown in Tab. 2, SEC also performs exceptionally well in the self-supervised scenario.
Image Classification. We compare our SECViT with numerous state-of-the-art models; the results are shown in Tab. 3. We adopt the training strategy proposed in DeiT [43], with cross-entropy loss as the only supervision. All of our models are trained from scratch for 300 epochs with an input resolution of $224 \times 224$ . SECViT consistently outperforms preceding models across all scales. Notably, SECViT-S attains a Top-1 accuracy of $84.3\%$ with a mere 27M parameters and 4.6G FLOPs. A comparison of the models' efficiency can be found in the Appendix.
Object Detection and Instance Segmentation. We utilize MMDetection [4] to implement Mask-RCNN [21], Cascade Mask R-CNN [2], and RetinaNet [33] to evaluate the performance of the SECViT. Tab. 5 and Tab. 4 show the
Table 2. Comparison with baselines on self-supervised setting.
<table><tr><td>Cost</td><td>Model</td><td>Parmas (M)</td><td>FLOPs (G)</td><td>Top1-acc (%)</td></tr><tr><td rowspan="11">tiny model ~ 2.5G</td><td>PVTv2-b1 [46]</td><td>13</td><td>2.1</td><td>78.7</td></tr><tr><td>TCFormer-light [59]</td><td>14</td><td>3.8</td><td>79.4</td></tr><tr><td>QuadTree-B-b1 [42]</td><td>14</td><td>2.3</td><td>80.0</td></tr><tr><td>MPViT-XS [28]</td><td>11</td><td>2.9</td><td>80.9</td></tr><tr><td>BiFormer-T [61]</td><td>13</td><td>2.2</td><td>81.4</td></tr><tr><td>CrossFormer-T [47]</td><td>28</td><td>2.9</td><td>81.5</td></tr><tr><td>FAT-B2 [12]</td><td>14</td><td>2.0</td><td>81.9</td></tr><tr><td>GC-ViT-XT [20]</td><td>20</td><td>2.6</td><td>82.0</td></tr><tr><td>SMT-T [34]</td><td>12</td><td>2.4</td><td>82.2</td></tr><tr><td>RMT-T [13]</td><td>14</td><td>2.5</td><td>82.4</td></tr><tr><td>SECViT-T</td><td>15</td><td>2.5</td><td>82.7</td></tr><tr><td rowspan="13">small model ~ 4.5G</td><td>PS-ViT-B14 [58]</td><td>21</td><td>5.4</td><td>81.7</td></tr><tr><td>DVT-T2T-ViT-19 [49]</td><td>39</td><td>6.2</td><td>81.9</td></tr><tr><td>ConvNeXt-T [39]</td><td>29</td><td>4.5</td><td>82.1</td></tr><tr><td>TCFormer [59]</td><td>26</td><td>5.8</td><td>82.3</td></tr><tr><td>SG-Former-S [15]</td><td>23</td><td>4.8</td><td>83.2</td></tr><tr><td>StructViT-S-8-1 [25]</td><td>24</td><td>5.4</td><td>83.3</td></tr><tr><td>InternImage-T [48]</td><td>30</td><td>5.0</td><td>83.5</td></tr><tr><td>MLLA-T [18]</td><td>25</td><td>4.2</td><td>83.5</td></tr><tr><td>MaxViT-T [44]</td><td>31</td><td>5.6</td><td>83.6</td></tr><tr><td>FAT-B3 [12]</td><td>29</td><td>4.4</td><td>83.6</td></tr><tr><td>SMT-S [34]</td><td>20</td><td>4.8</td><td>83.7</td></tr><tr><td>BiFormer-S [61]</td><td>26</td><td>4.5</td><td>83.8</td></tr><tr><td>SECViT-S</td><td>27</td><td>4.6</td><td>84.3</td></tr><tr><td rowspan="10">base model ~ 9.0G</td><td>ConvNeXt-S [39]</td><td>50</td><td>8.7</td><td>83.1</td></tr><tr><td>NAT-S 
[19]</td><td>51</td><td>7.8</td><td>83.7</td></tr><tr><td>Quadtree-B-b4 [42]</td><td>64</td><td>11.5</td><td>84.0</td></tr><tr><td>MOAT-1 [54]</td><td>42</td><td>9.1</td><td>84.2</td></tr><tr><td>InternImage-S [48]</td><td>50</td><td>8.0</td><td>84.2</td></tr><tr><td>GC-ViT-S [20]</td><td>51</td><td>8.5</td><td>84.3</td></tr><tr><td>BiFormer-B [61]</td><td>57</td><td>9.8</td><td>84.3</td></tr><tr><td>iFormer-B [41]</td><td>48</td><td>9.4</td><td>84.6</td></tr><tr><td>FAT-B4 [12]</td><td>52</td><td>9.3</td><td>84.8</td></tr><tr><td>SECViT-B</td><td>57</td><td>9.8</td><td>85.2</td></tr><tr><td rowspan="9">large model ~ 18.0G</td><td>CrossFormer-L [47]</td><td>92</td><td>16.1</td><td>84.0</td></tr><tr><td>SMT-L [34]</td><td>81</td><td>17.7</td><td>84.6</td></tr><tr><td>DaViT-B [9]</td><td>88</td><td>15.5</td><td>84.6</td></tr><tr><td>SG-Former-B [15]</td><td>78</td><td>15.6</td><td>84.7</td></tr><tr><td>iFormer-L [41]</td><td>87</td><td>14.0</td><td>84.8</td></tr><tr><td>InterImage-B [48]</td><td>97</td><td>16.0</td><td>84.9</td></tr><tr><td>GC-ViT-B [20]</td><td>90</td><td>14.8</td><td>85.0</td></tr><tr><td>RMT-L [13]</td><td>95</td><td>18.2</td><td>85.5</td></tr><tr><td>SECViT-L</td><td>101</td><td>18.2</td><td>85.7</td></tr></table>
Table 3. Comparison with the state-of-the-art on ImageNet-1K classification.
results of SECViT with different detection frameworks. The results show that SECViT performs better than its counterparts in all comparisons.
Semantic Segmentation. We utilize Semantic FPN [26] and UperNet [52] to validate our SECViT's performance, implementing these frameworks via MMSegmentation [7]. The results of semantic segmentation can be found in the Tab. 6. All the FLOPs are measured with the input resolution of $512 \times 2048$ . SECViT achieves the best performance in all settings.
# 4.2. SEC for MLLM
SEC can greatly facilitate the design of vision language connectors in MLLMs. First, we conduct a rigorous compari
<table><tr><td rowspan="2">Backbone</td><td rowspan="2">Params (M)</td><td rowspan="2">FLOPs (G)</td><td colspan="6">Mask R-CNN 1×</td><td rowspan="2">Params (M)</td><td rowspan="2">FLOPs (G)</td><td colspan="6">RetinaNet 1×</td></tr><tr><td>APb</td><td>APb50</td><td>APb75</td><td>APm</td><td>APm50</td><td>APm75</td><td>APb</td><td>APb50</td><td>APb75</td><td>APs</td><td>APb</td><td>APL</td></tr><tr><td>PVTv2-B1 [46]</td><td>33</td><td>243</td><td>41.8</td><td>54.3</td><td>45.9</td><td>38.8</td><td>61.2</td><td>41.6</td><td>23</td><td>225</td><td>41.2</td><td>61.9</td><td>43.9</td><td>25.4</td><td>44.5</td><td>54.3</td></tr><tr><td>FAT-B2 [12]</td><td>33</td><td>215</td><td>45.2</td><td>67.9</td><td>49.0</td><td>41.3</td><td>64.6</td><td>44.0</td><td>23</td><td>196</td><td>44.0</td><td>65.2</td><td>47.2</td><td>27.5</td><td>47.7</td><td>58.8</td></tr><tr><td>RMT-T [13]</td><td>33</td><td>218</td><td>47.1</td><td>68.8</td><td>51.7</td><td>42.6</td><td>65.8</td><td>45.9</td><td>23</td><td>199</td><td>45.1</td><td>66.2</td><td>48.1</td><td>28.8</td><td>48.9</td><td>61.1</td></tr><tr><td>SECViT-T</td><td>34</td><td>221</td><td>47.8</td><td>69.5</td><td>52.5</td><td>43.0</td><td>66.7</td><td>46.3</td><td>24</td><td>202</td><td>45.8</td><td>66.8</td><td>49.2</td><td>29.1</td><td>49.8</td><td>60.9</td></tr><tr><td>MPViT-S [28]</td><td>43</td><td>268</td><td>46.4</td><td>68.6</td><td>51.2</td><td>42.4</td><td>65.6</td><td>45.7</td><td>32</td><td>248</td><td>45.7</td><td>57.3</td><td>48.8</td><td>28.7</td><td>49.7</td><td>59.2</td></tr><tr><td>MLLA-T [18]</td><td>44</td><td>255</td><td>46.8</td><td>69.5</td><td>51.5</td><td>42.1</td><td>66.4</td><td>45.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>STViT-S [22]</td><td>44</td><td>252</td><td>47.6</td><td>70.0</td><td>52.3</td><td>43.1</td><td>66.8</td><td>46.5</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-S 
[13]</td><td>46</td><td>262</td><td>49.0</td><td>70.8</td><td>53.9</td><td>43.9</td><td>67.8</td><td>47.4</td><td>36</td><td>244</td><td>47.8</td><td>69.1</td><td>51.8</td><td>32.1</td><td>51.8</td><td>63.5</td></tr><tr><td>SECViT-S</td><td>45</td><td>262</td><td>49.9</td><td>70.9</td><td>54.7</td><td>44.6</td><td>68.3</td><td>47.7</td><td>35</td><td>240</td><td>48.4</td><td>69.4</td><td>52.0</td><td>31.3</td><td>53.3</td><td>63.8</td></tr><tr><td>ScalableViT-B [56]</td><td>95</td><td>349</td><td>46.8</td><td>68.7</td><td>51.5</td><td>42.5</td><td>65.8</td><td>45.9</td><td>85</td><td>330</td><td>45.8</td><td>67.3</td><td>49.2</td><td>29.9</td><td>49.5</td><td>61.0</td></tr><tr><td>InternImage-S [48]</td><td>69</td><td>340</td><td>47.8</td><td>69.8</td><td>52.8</td><td>43.3</td><td>67.1</td><td>46.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MLLA-S [18]</td><td>63</td><td>319</td><td>49.2</td><td>71.5</td><td>53.9</td><td>44.2</td><td>68.5</td><td>47.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>STViT-B [22]</td><td>70</td><td>359</td><td>49.7</td><td>71.7</td><td>54.7</td><td>44.8</td><td>68.9</td><td>48.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-B</td><td>76</td><td>371</td><td>51.5</td><td>72.9</td><td>56.7</td><td>45.4</td><td>69.9</td><td>48.7</td><td>63</td><td>349</td><td>49.3</td><td>70.3</td><td>52.9</td><td>32.0</td><td>53.8</td><td>64.8</td></tr><tr><td>Focal-B [55]</td><td>110</td><td>533</td><td>47.8</td><td>70.2</td><td>52.5</td><td>43.2</td><td>67.3</td><td>46.5</td><td>101</td><td>514</td><td>46.3</td><td>68.0</td><td>49.8</td><td>31.7</td><td>50.4</td><td>60.8</td></tr><tr><td>CSwin-B 
[10]</td><td>97</td><td>526</td><td>48.7</td><td>70.4</td><td>53.9</td><td>43.9</td><td>67.8</td><td>47.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>InternImage-B [48]</td><td>115</td><td>501</td><td>48.8</td><td>70.9</td><td>54.0</td><td>44.0</td><td>67.8</td><td>47.4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MLLA-B [18]</td><td>115</td><td>502</td><td>50.5</td><td>72.0</td><td>55.4</td><td>45.0</td><td>69.3</td><td>48.6</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-L</td><td>119</td><td>550</td><td>52.0</td><td>73.5</td><td>57.3</td><td>46.3</td><td>70.6</td><td>49.8</td><td>105</td><td>527</td><td>50.2</td><td>71.4</td><td>53.9</td><td>33.2</td><td>54.5</td><td>66.3</td></tr></table>
Table 4. Comparison to other backbones using "1×" schedule on COCO.
<table><tr><td>Backbone</td><td>Params (M)</td><td>FLOPs (G)</td><td>\(AP^b\)</td><td>\(AP_{50}^b\)</td><td>\(AP_{75}^b\)</td><td>\(AP^m\)</td><td>\(AP_{50}^m\)</td><td>\(AP_{75}^m\)</td></tr><tr><td colspan="9">Mask R-CNN 3×+MS</td></tr><tr><td>GC-ViT-T [20]</td><td>48</td><td>291</td><td>47.9</td><td>70.1</td><td>52.8</td><td>43.2</td><td>67.0</td><td>46.7</td></tr><tr><td>MLLA-T [18]</td><td>44</td><td>255</td><td>48.8</td><td>71.0</td><td>53.6</td><td>43.8</td><td>68.0</td><td>46.8</td></tr><tr><td>SMT-S [34]</td><td>40</td><td>265</td><td>49.0</td><td>70.1</td><td>53.4</td><td>43.4</td><td>67.3</td><td>46.7</td></tr><tr><td>InternImage-T [48]</td><td>49</td><td>270</td><td>49.1</td><td>70.4</td><td>54.1</td><td>43.7</td><td>67.3</td><td>47.3</td></tr><tr><td>RMT-S [13]</td><td>46</td><td>262</td><td>50.7</td><td>71.9</td><td>55.6</td><td>44.9</td><td>69.1</td><td>48.4</td></tr><tr><td>SECViT-S</td><td>45</td><td>262</td><td>51.6</td><td>72.5</td><td>55.9</td><td>45.6</td><td>69.9</td><td>48.8</td></tr><tr><td>NAT-S [19]</td><td>70</td><td>330</td><td>48.4</td><td>69.8</td><td>53.2</td><td>43.2</td><td>66.9</td><td>46.4</td></tr><tr><td>InternImage-S [48]</td><td>69</td><td>340</td><td>49.7</td><td>71.1</td><td>54.5</td><td>44.5</td><td>68.5</td><td>47.8</td></tr><tr><td>SMT-B [34]</td><td>52</td><td>328</td><td>49.8</td><td>71.0</td><td>54.4</td><td>44.0</td><td>68.0</td><td>47.3</td></tr><tr><td>MLLA-S [18]</td><td>63</td><td>319</td><td>50.5</td><td>71.8</td><td>55.2</td><td>44.9</td><td>69.1</td><td>48.2</td></tr><tr><td>RMT-B [13]</td><td>73</td><td>373</td><td>52.2</td><td>72.9</td><td>57.0</td><td>46.1</td><td>70.4</td><td>49.9</td></tr><tr><td>SECViT-B</td><td>75</td><td>371</td><td>52.8</td><td>73.6</td><td>57.7</td><td>46.4</td><td>70.8</td><td>49.9</td></tr><tr><td colspan="9">Cascade Mask R-CNN 3×+MS</td></tr><tr><td>GC-ViT-T 
[20]</td><td>85</td><td>770</td><td>51.6</td><td>70.4</td><td>56.1</td><td>44.6</td><td>67.8</td><td>48.3</td></tr><tr><td>SMT-S [34]</td><td>78</td><td>744</td><td>51.9</td><td>70.5</td><td>56.3</td><td>44.7</td><td>67.8</td><td>48.6</td></tr><tr><td>UniFormer-S [30]</td><td>79</td><td>747</td><td>52.1</td><td>71.1</td><td>56.6</td><td>45.2</td><td>68.3</td><td>48.9</td></tr><tr><td>RMT-S [13]</td><td>83</td><td>741</td><td>53.2</td><td>72.0</td><td>57.8</td><td>46.1</td><td>69.8</td><td>49.8</td></tr><tr><td>SECViT-S</td><td>83</td><td>741</td><td>54.1</td><td>72.8</td><td>58.6</td><td>47.0</td><td>70.3</td><td>51.0</td></tr><tr><td>NAT-S [19]</td><td>108</td><td>809</td><td>51.9</td><td>70.4</td><td>56.2</td><td>44.9</td><td>68.2</td><td>48.6</td></tr><tr><td>GC-ViT-S [20]</td><td>108</td><td>866</td><td>52.4</td><td>71.0</td><td>57.1</td><td>45.4</td><td>68.5</td><td>49.3</td></tr><tr><td>CSWin-S [10]</td><td>92</td><td>820</td><td>53.7</td><td>72.2</td><td>58.4</td><td>46.4</td><td>69.6</td><td>50.6</td></tr><tr><td>UniFormer-B [30]</td><td>107</td><td>878</td><td>53.8</td><td>72.8</td><td>58.5</td><td>46.4</td><td>69.9</td><td>50.4</td></tr><tr><td>RMT-B [13]</td><td>111</td><td>852</td><td>54.5</td><td>72.8</td><td>59.0</td><td>47.2</td><td>70.5</td><td>51.4</td></tr><tr><td>SECViT-B</td><td>114</td><td>849</td><td>55.4</td><td>74.1</td><td>59.9</td><td>47.8</td><td>71.7</td><td>51.7</td></tr></table>
son between SEC and various baseline vision language connectors based on LLaVA-1.5. Then, we compare LLaVA-1.5+SEC with several popular contemporary MLLMs.
Strict Comparison with Baselines. In Tab. 7, we strictly compare various commonly used vision language connectors, including MLP, Resampler [1], Pooling, and EViT [32], which has achieved success in the design of ViTs. Among these, MLP is the original design in LLaVA-1.5 [36] and achieves good results, but it incurs significant computational cost due to the excessive number of vision tokens. To address this issue, some connectors at
Table 5. Comparison with other backbones using $3 \times +\mathrm{MS}$ schedule on COCO.
<table><tr><td rowspan="2">Model</td><td colspan="3">Semantic FPN 80K Params FLOPs</td><td colspan="3">Upernet 160K Params FLOPs</td></tr><tr><td>(M)</td><td>(G)</td><td>(%)</td><td>(M)</td><td>(G)</td><td>(%)</td></tr><tr><td>PVTv2-B1 [46]</td><td>18</td><td>136</td><td>42.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VAN-B1 [17]</td><td>18</td><td>140</td><td>42.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-T [13]</td><td>17</td><td>136</td><td>46.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SECViT-T</td><td>18</td><td>136</td><td>47.2</td><td>44</td><td>894</td><td>48.8</td></tr><tr><td>StructViT-S [25]</td><td>26</td><td>271</td><td>46.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MogaNet-S [31]</td><td>29</td><td>189</td><td>47.7</td><td>55</td><td>946</td><td>49.2</td></tr><tr><td>SMT-S [34]</td><td>-</td><td>-</td><td>-</td><td>50</td><td>935</td><td>49.2</td></tr><tr><td>SGFormer-S [15]</td><td>25</td><td>205</td><td>49.0</td><td>52.5</td><td>989</td><td>49.9</td></tr><tr><td>RMT-S [13]</td><td>30</td><td>180</td><td>49.4</td><td>56</td><td>937</td><td>49.8</td></tr><tr><td>SECViT-S</td><td>30</td><td>180</td><td>49.6</td><td>56</td><td>936</td><td>50.6</td></tr><tr><td>MogaNet-B [31]</td><td>-</td><td>-</td><td>-</td><td>74</td><td>1050</td><td>50.1</td></tr><tr><td>InterImage-S [48]</td><td>-</td><td>-</td><td>-</td><td>80</td><td>1017</td><td>50.2</td></tr><tr><td>StructViT-B [25]</td><td>54</td><td>529</td><td>48.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RMT-B [13]</td><td>57</td><td>294</td><td>50.4</td><td>83</td><td>1051</td><td>52.0</td></tr><tr><td>SECViT-B</td><td>60</td><td>291</td><td>50.7</td><td>86</td><td>1048</td><td>52.2</td></tr><tr><td>MogaNet-L [31]</td><td>-</td><td>-</td><td>-</td><td>113</td><td>1176</td><td>50.9</td></tr><tr><td>MLLA-B [18]</td><td>-</td><td>-</td><td>-</td><td>128</td><td>1183</td><td>51.9</td></tr><tr><td>SGFormer-B 
[15]</td><td>81</td><td>475</td><td>50.6</td><td>109</td><td>1304</td><td>52.0</td></tr><tr><td>RMT-L [13]</td><td>98</td><td>482</td><td>51.4</td><td>125</td><td>1241</td><td>52.8</td></tr><tr><td>SECViT-L</td><td>103</td><td>475</td><td>52.2</td><td>131</td><td>1256</td><td>53.8</td></tr></table>
Table 6. Comparison with the state-of-the-art on ADE20K.
tempt to use fewer vision tokens to accelerate LLaVA-1.5. Nonetheless, these adjustments inevitably lead to performance degradation. The results in Tab. 7 show that using SEC can effectively accelerate the inference of LLaVA-1.5 without causing performance degradation, and can even improve the performance of LLaVA-1.5 to a certain extent.
Comparison with Popular MLLMs. In Tab. 8 and Tab. 9, we compare LLaVA-1.5 equipped with SEC as a vision-language connector with other MLLMs. It is evident that SEC not only enhances the performance of MLLMs across various benchmarks but also significantly improves the efficiency of the models. This fully demonstrates the
<table><tr><td>Model</td><td>Connector</td><td>V-T Num</td><td>Time</td><td>Speed</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>POPE</td><td>MME</td></tr><tr><td>LLaVA-1.5</td><td>MLP</td><td>576+1</td><td>194s</td><td>1.0×</td><td>58.2</td><td>62.0</td><td>78.5</td><td>86.1</td><td>1510.7</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>288+1</td><td>126s</td><td>1.5×</td><td>52.1</td><td>56.8</td><td>76.0</td><td>83.1</td><td>1393.2</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>288+1</td><td>126s</td><td>1.5×</td><td>54.6</td><td>60.0</td><td>77.9</td><td>84.3</td><td>1483.2</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>288+1</td><td>126s</td><td>1.5×</td><td>60.1</td><td>63.5</td><td>78.9</td><td>87.7</td><td>1510.7</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>256+1</td><td>116s</td><td>1.7×</td><td>51.6</td><td>56.0</td><td>75.2</td><td>82.7</td><td>1387.2</td></tr><tr><td>LLaVA-1.5+Pool</td><td>MLP+Pool</td><td>256+1</td><td>116s</td><td>1.7×</td><td>52.4</td><td>57.6</td><td>76.4</td><td>83.3</td><td>1415.5</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>256+1</td><td>116s</td><td>1.7×</td><td>52.8</td><td>59.6</td><td>77.1</td><td>83.7</td><td>1443.7</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>256+1</td><td>116s</td><td>1.7×</td><td>59.6</td><td>63.2</td><td>78.6</td><td>87.1</td><td>1505.2</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>192+1</td><td>102s</td><td>1.9×</td><td>50.1</td><td>55.2</td><td>74.3</td><td>82.7</td><td>1337.6</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>192+1</td><td>102s</td><td>1.9×</td><td>51.6</td><td>58.6</td><td>76.3</td><td>83.1</td><td>1427.6</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>192+1</td><td>102s</td><td>1.9×</td><td>57.7</td><td>62.7</td><td>78.4</td><td>86.7</td><td>1500.1</td></tr><tr><td>LLaVA-1.5+Resampler</td><td>Resampler</td><td>144+1</td><td>94s</td><td>2.1×</td><td>47.6</td><td>54.6</td><td>72.0</td><td>81.9</td><td>1293.7</td></tr><tr><td>LLaVA-1.5+Pool</td><td>MLP+Pool</td><td>144+1</td><td>94s</td><td>2.1×</td><td>50.0</td><td>56.2</td><td>73.6</td><td>81.9</td><td>1310.7</td></tr><tr><td>LLaVA-1.5+EViT</td><td>MLP+EViT</td><td>144+1</td><td>94s</td><td>2.1×</td><td>51.2</td><td>58.0</td><td>76.0</td><td>83.1</td><td>1393.6</td></tr><tr><td>LLaVA-1.5+SEC</td><td>MLP+SEC</td><td>144+1</td><td>94s</td><td>2.1×</td><td>56.8</td><td>62.0</td><td>78.0</td><td>86.1</td><td>1487.1</td></tr></table>
Table 7. Comparison of different vision language connectors on LLaVA-1.5. "V-T Num" denotes the number of visual tokens; the computational cost grows with V-T Num. "Speed" refers to the inference speed relative to LLaVA-1.5, and "Time" is the average inference time. Inference speeds are measured on an A100.
<table><tr><td>Model</td><td>LLM</td><td>Connector</td><td>V-T Num</td><td>Res</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>VisWiz</td><td>\( SQA_{img} \)</td><td>Speed (↑)</td></tr><tr><td colspan="11">7B LLM</td></tr><tr><td>Shikra [5]</td><td>Vicuna-7B</td><td>MLP</td><td>257</td><td>224</td><td>-</td><td>-</td><td>77.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>IDEFICS-9B [27]</td><td>LLaMA-7B</td><td>Cross Attn</td><td>257</td><td>224</td><td>-</td><td>38.4</td><td>50.9</td><td>35.5</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>59.3</td><td>78.8</td><td>35.2</td><td>67.1</td><td>-</td></tr><tr><td>Qwen-VL-Chat [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>57.5</td><td>78.2</td><td>38.9</td><td>68.2</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-7B</td><td>MLP</td><td>577</td><td>336</td><td>58.2</td><td>62.0</td><td>78.5</td><td>50.0</td><td>66.8</td><td>1.0×</td></tr><tr><td>LLaVA-1.5+SEC (ours)</td><td>Vicuna-7B</td><td>MLP+SEC</td><td>257</td><td>336</td><td>59.6</td><td>63.2</td><td>78.9</td><td>52.8</td><td>69.6</td><td>1.7×</td></tr><tr><td colspan="11">13B LLM</td></tr><tr><td>InstructBLIP [8]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>-</td><td>49.5</td><td>-</td><td>33.4</td><td>63.1</td><td>-</td></tr><tr><td>BLIP-2 [29]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>-</td><td>41.0</td><td>41.0</td><td>19.5</td><td>61.0</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-13B</td><td>MLP</td><td>577</td><td>336</td><td>61.2</td><td>63.3</td><td>80.0</td><td>53.6</td><td>71.6</td><td>1.0×</td></tr><tr><td>LLaVA1.5+SEC (ours)</td><td>Vicuna-13B</td><td>MLP+SEC</td><td>257</td><td>336</td><td>62.3</td><td>64.3</td><td>80.0</td><td>54.7</td><td>72.0</td><td>1.8×</td></tr></table>
Table 8. Results on General VQA tasks.
effectiveness of SEC in extracting visual information.
# 4.3. Ablation Study
In this section, we present some of the ablation study results for SEC, and more results can be found in the Appendix.
Number of Vision Tokens in Each Cluster. The number of vision tokens has a significant impact on the performance and speed of the model. We thoroughly investigate the effect of the number of vision tokens per cluster on SECViT. As shown in Tab. 10, this number greatly influences the model's performance. Specifically, in downstream dense prediction tasks, too few tokens per cluster leads to substantial performance degradation, while too many tokens per cluster yields no significant performance improvement but slows the model down.
|
| 227 |
+
|
| 228 |
+
# 4.4. Visualization of SEC
|
| 229 |
+
|
| 230 |
+
To further understand the working mechanism of SEC, we visualize some clustering results of SECViT. As shown in Fig. 4, the left side presents the clustering results of vision tokens at different stages of the model. From these results, we observe that in the shallow layers, the model
|
| 231 |
+
|
| 232 |
+
distinguishes fine-grained features well, while in the deeper layers, it captures global semantic features effectively. The right side shows the Grad-CAM diagrams at different stages of the model, from which we draw similar conclusions. More visualization results can be found in the Appendix.
|
| 233 |
+
|
| 234 |
+
Number of Vision Tokens Output by SEC. MLLMs are quite sensitive to the number of vision tokens. We conduct a detailed exploration, based on LLaVA-1.5, of the number of vision tokens output by SEC, as shown in Tab. 11. The first row shows the speed and performance of the original LLaVA-1.5 without SEC. Compared to LLaVA-1.5, employing SEC effectively reduces the number of vision tokens and improves training efficiency. As the number of vision tokens decreases, the model's performance declines slightly while its efficiency improves further.
|
| 235 |
+
|
| 236 |
+
# 5. Conclusion
|
| 237 |
+
|
| 238 |
+
We propose a simple and straightforward clustering method for vision tokens—Semantic Equitable Clustering (SEC). This method assigns each token to a cluster by calculating the similarity between each token and a global token, and
|
| 239 |
+
|
| 240 |
+
<table><tr><td>Model</td><td>LLM</td><td>Connector</td><td>V-T Num</td><td>Res</td><td>POPE</td><td>MMB</td><td>MM-Vet</td><td>Speed (↑)</td></tr><tr><td colspan="9">7B LLM</td></tr><tr><td>MiniGPT-4 [60]</td><td>Vicuna-7B</td><td>Resampler</td><td>32</td><td>224</td><td>72.2</td><td>24.3</td><td>22.1</td><td>-</td></tr><tr><td>mPLUG-Owl2 [57]</td><td>LLaMA2-7B</td><td>Resampler</td><td>32</td><td>224</td><td>-</td><td>49.4</td><td>-</td><td>-</td></tr><tr><td>LLaMA-AdapterV2 [14]</td><td>LLaMA2-7B</td><td>LLaMA-Adapter</td><td>257</td><td>224</td><td>-</td><td>41.0</td><td>31.4</td><td>-</td></tr><tr><td>Shikra [5]</td><td>Vicuna-7B</td><td>MLP</td><td>257</td><td>224</td><td>-</td><td>58.8</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>38.2</td><td>-</td><td>-</td></tr><tr><td>Qwen-VL-Chat [1]</td><td>Qwen-7B</td><td>Resampler</td><td>256</td><td>448</td><td>-</td><td>60.6</td><td>-</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-7B</td><td>MLP</td><td>577</td><td>336</td><td>86.1</td><td>64.3</td><td>31.1</td><td>1.0×</td></tr><tr><td>LLaVA-1.5+SEC (ours)</td><td>Vicuna-7B</td><td>MLP+SEC</td><td>145</td><td>336</td><td>86.1</td><td>68.4</td><td>31.7</td><td>2.1×</td></tr><tr><td colspan="9">13B LLM</td></tr><tr><td>MiniGPT-4 [60]</td><td>Vicuna-13B</td><td>Resampler</td><td>32</td><td>224</td><td>-</td><td>-</td><td>24.4</td><td>-</td></tr><tr><td>BLIP-2 [29]</td><td>Vicuna-13B</td><td>Q-Former</td><td>32</td><td>224</td><td>85.3</td><td>-</td><td>22.4</td><td>-</td></tr><tr><td>LLaVA-1.5 [35]</td><td>Vicuna-13B</td><td>MLP</td><td>577</td><td>336</td><td>86.2</td><td>67.7</td><td>36.1</td><td>1.0×</td></tr><tr><td>LLaVA-1.5+SEC (ours)</td><td>Vicuna-13B</td><td>MLP+SEC</td><td>145</td><td>336</td><td>86.4</td><td>69.2</td><td>37.3</td><td>2.2×</td></tr></table>
|
| 241 |
+
|
| 242 |
+
Table 9. Results on benchmark designed for MLLMs.
|
| 243 |
+
|
| 244 |
+
<table><tr><td>V-T num</td><td>Params (M)</td><td>FLOPs (G)</td><td>Throughput (imgs/s)</td><td>Acc</td><td>APb</td><td>APm</td><td>mIoU</td></tr><tr><td>98</td><td>15</td><td>2.5</td><td>2004</td><td>82.7</td><td>47.8</td><td>43.0</td><td>47.2</td></tr><tr><td>196</td><td>15</td><td>3.1</td><td>1722</td><td>83.0 (+0.3)</td><td>48.2 (+0.4)</td><td>43.4 (+0.4)</td><td>47.5 (+0.3)</td></tr><tr><td>64</td><td>15</td><td>2.5</td><td>1946</td><td>82.7 (+0.0)</td><td>47.8 (+0.0)</td><td>42.8 (-0.2)</td><td>46.9 (-0.3)</td></tr><tr><td>49</td><td>15</td><td>2.4</td><td>2102</td><td>82.6 (-0.1)</td><td>47.5 (-0.3)</td><td>42.7 (-0.3)</td><td>47.7 (-0.5)</td></tr><tr><td>24</td><td>15</td><td>2.3</td><td>2186</td><td>82.0 (-0.7)</td><td>45.9 (-1.9)</td><td>40.6 (-2.4)</td><td>44.6 (-2.6)</td></tr></table>
|
| 245 |
+
|
| 246 |
+
Table 10. Effect of the number of vision tokens in each cluster. "V-T num" means the number of vision tokens in each cluster. The experiments are conducted based on SECViT-T.
|
| 247 |
+
|
| 248 |
+
<table><tr><td>V-T num</td><td>Time</td><td>Speed</td><td>TextVQA</td><td>GQA</td><td>VQAv2</td><td>POPE</td><td>MM-Vet</td></tr><tr><td>576+1</td><td>21h</td><td>1.0×</td><td>58.2</td><td>62.0</td><td>78.5</td><td>86.1</td><td>31.1</td></tr><tr><td>288+1</td><td>14h</td><td>1.5×</td><td>60.1(+1.9)</td><td>63.5(+1.5)</td><td>78.9(+0.4)</td><td>87.7(+1.6)</td><td>33.2(+2.1)</td></tr><tr><td>256+1</td><td>13h</td><td>1.6×</td><td>59.6(+1.4)</td><td>63.2(+0.3)</td><td>78.6(+0.1)</td><td>87.1(+1.0)</td><td>32.7(+1.6)</td></tr><tr><td>192+1</td><td>11h</td><td>1.9×</td><td>57.7(-0.5)</td><td>62.7(+0.7)</td><td>78.4(-0.1)</td><td>86.7(+0.6)</td><td>32.1(+1.0)</td></tr><tr><td>144+1</td><td>10h</td><td>2.1×</td><td>56.8(-1.4)</td><td>62.0(+0.0)</td><td>78.0(-0.5)</td><td>86.1(+0.0)</td><td>31.7(+0.6)</td></tr></table>
|
| 249 |
+
|
| 250 |
+
Table 11. Effect of the number of vision tokens output by SEC. "V-T num" means the number of vision tokens output by SEC. The experiments are conducted based on LLaVA-1.5 [36].
|
| 251 |
+
|
| 252 |
+
completes the whole clustering process in only one step. Our clustering method takes into account the semantic information contained in the tokens and ensures an equal number of tokens in each cluster, facilitating efficient parallel processing on modern GPUs. Based on Semantic Equitable Clustering, we design SECViT, a versatile vision backbone that achieves impressive results across various vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. Moreover, SEC can be conveniently applied to multimodal large language models (MLLMs) as a vision-language connector, improving their efficiency.
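The one-step clustering described above can be sketched as follows. This is a minimal NumPy sketch under two stated assumptions: the global token is taken to be the mean of all vision tokens, and cosine similarity ranks the tokens; the paper's actual choice of global token and similarity may differ.

```python
import numpy as np

def semantic_equitable_clustering(tokens, num_clusters):
    """Rank tokens by similarity to a global token, then split the
    ranking into equal-sized clusters in a single step.

    tokens: (N, C) array of vision tokens; N must be divisible by
    num_clusters so every cluster holds the same number of tokens.
    Returns an array of shape (num_clusters, N // num_clusters, C).
    """
    n, c = tokens.shape
    assert n % num_clusters == 0, "equal-sized clusters required"
    # Global token: here simply the mean of all tokens (an assumption;
    # the paper may obtain it differently, e.g. via learned pooling).
    g = tokens.mean(axis=0)
    # Cosine similarity of every token to the global token.
    sim = tokens @ g / (np.linalg.norm(tokens, axis=1)
                        * np.linalg.norm(g) + 1e-8)
    # A single sort completes the whole clustering in one step.
    order = np.argsort(-sim)
    # Equal-sized clusters map cleanly onto batched (parallel) attention.
    return tokens[order].reshape(num_clusters, n // num_clusters, c)
```

Because every cluster has exactly `N // num_clusters` tokens, attention within clusters can be computed as one batched matrix multiply, which is the GPU-friendliness the paper emphasizes.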
|
| 253 |
+
|
| 254 |
+
# 6. Acknowledgements
|
| 255 |
+
|
| 256 |
+
This work is partially funded by Beijing Natural Science Foundation (4252054), Youth Innovation Promotion
|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
Figure 4. Visualization for SEC.
|
| 260 |
+
|
| 261 |
+
Association CAS (Grant No. 2022132), Beijing Nova Program (20230484276), and CCF-Kuaishou Large Model Explorer Fund (No. CCF-KuaiShou 2024005).
|
| 262 |
+
|
| 263 |
+
# References
|
| 264 |
+
|
| 265 |
+
[1] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond, 2023. 2, 6, 7, 8
|
| 266 |
+
[2] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In CVPR, 2018. 5
|
| 267 |
+
[3] Junbum Cha, Wooyoung Kang, Jonghwan Mun, and Byungseok Roh. Honeybee: Locality-enhanced projector for multimodal llm. In CVPR, 2024. 2
|
| 268 |
+
[4] Kai Chen, Jiaqi Wang, Jiangmiao Pang, et al. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. 5
|
| 269 |
+
[5] Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm's referential dialogue magic, 2023. 7, 8
|
| 270 |
+
[6] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. In ICLR, 2023. 3
|
| 271 |
+
[7] MMSegmentation Contributors. Mmsegmentation, an open source semantic segmentation toolbox, 2020. 5
|
| 272 |
+
[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. 7
|
| 273 |
+
[9] Mingyu Ding, Bin Xiao, Noel Codella, et al. Davit: Dual attention vision transformers. In ECCV, 2022. 2, 5
|
| 274 |
+
[10] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, et al. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In CVPR, 2022. 1, 2, 3, 6
|
| 275 |
+
[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 1, 2
|
| 276 |
+
[12] Qihang Fan, Huaibo Huang, Xiaoqiang Zhou, and Ran He. Lightweight vision transformer with bidirectional interaction. In NeurIPS, 2023. 3, 5, 6
|
| 277 |
+
[13] Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, and Ran He. Rmt: Retentive networks meet vision transformers. In CVPR, 2024. 5, 6
|
| 278 |
+
[14] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model, 2023. 8
|
| 279 |
+
[15] Sucheng Ren, Xingyi Yang, Songhua Liu, and Xinchao Wang. SG-Former: Self-guided transformer with evolving token reallocation. In ICCV, 2023. 5, 6
|
| 280 |
+
[16] Jianyuan Guo, Kai Han, Han Wu, Chang Xu, Yehui Tang, Chunjing Xu, and Yunhe Wang. Cmt: Convolutional neural networks meet vision transformers. In CVPR, 2022. 2
|
| 281 |
+
[17] Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, and Shi-Min Hu. Visual attention network. arXiv preprint arXiv:2202.09741, 2022. 6
|
| 282 |
+
[18] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and
|
| 283 |
+
|
| 284 |
+
Gao Huang. Demystify mamba in vision: A linear attention perspective. In NeurIPS, 2024. 5, 6
|
| 285 |
+
[19] Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. Neighborhood attention transformer. In CVPR, 2023. 5, 6
|
| 286 |
+
[20] Ali Hatamizadeh, Hongxu Yin, Greg Heinrich, Jan Kautz, and Pavlo Molchanov. Global context vision transformers. In ICML, 2023. 5, 6
|
| 287 |
+
[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask r-cnn. In ICCV, 2017. 5
|
| 288 |
+
[22] Huaibo Huang, Xiaoqiang Zhou, Jie Cao, Ran He, and Tieniu Tan. Vision transformer with super token sampling. In CVPR, 2023. 6
|
| 289 |
+
[23] Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver: General perception with iterative attention, 2021. 2
|
| 290 |
+
[24] Zi-Hang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, and Jiashi Feng. All tokens matter: Token labeling for training better vision transformers. In NeurIPS, 2021. 2
|
| 291 |
+
[25] Manjin Kim, Paul Hongsuck Seo, Cordelia Schmid, and Minsu Cho. Learning correlation structures for vision transformers. In CVPR, 2024. 5, 6
|
| 292 |
+
[26] Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dollár. Panoptic feature pyramid networks. In CVPR, 2019. 5
|
| 293 |
+
[27] Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander Rush, Douwe Kiela, et al. Obelics: An open web-scale filtered dataset of interleaved image-text documents. In NeurIPS, 2024. 7
|
| 294 |
+
[28] Youngwan Lee, Jonghee Kim, Jeffrey Willette, and Sung Ju Hwang. Mpvit: Multi-path vision transformer for dense prediction. In CVPR, 2022. 5, 6
|
| 295 |
+
[29] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, 2023. 7, 8
|
| 296 |
+
[30] Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unified transformer for efficient spatiotemporal representation learning, 2022. 6
|
| 297 |
+
[31] Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, Zhiyuan Chen, Jiangbin Zheng, and Stan Z. Li. Moganet: Multi-order gated aggregation network. In ICLR, 2024. 6
|
| 298 |
+
[32] Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang, and Pengtao Xie. Not all patches are what you need: Expediting vision transformers via token reorganizations. In International Conference on Learning Representations, 2022. 2, 4, 6
|
| 299 |
+
[33] Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017. 5
|
| 300 |
+
[34] Weifeng Lin, Ziheng Wu, Jiayu Chen, Jun Huang, and Lianwen Jin. Scale-aware modulation meet transformer. In ICCV, 2023. 5, 6
|
| 301 |
+
|
| 302 |
+
[35] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. 7, 8
|
| 303 |
+
[36] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. 2, 3, 4, 6, 8
|
| 304 |
+
[37] Kai Liu, Tianyi Wu, Cong Liu, and Guodong Guo. Dynamic group transformer: A general vision transformer backbone with dynamic group attention. In IJCAI, 2022. 1, 2
|
| 305 |
+
[38] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 1, 2, 3, 4, 5
|
| 306 |
+
[39] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, et al. A convnet for the 2020s. In CVPR, 2022. 5
|
| 307 |
+
[40] Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, and Cho-Jui Hsieh. Dynamicvit: Efficient vision transformers with dynamic token sparsification. In NeurIPS, 2021. 2
|
| 308 |
+
[41] Chenyang Si, Weihao Yu, Pan Zhou, Yichen Zhou, Xinchao Wang, and Shuicheng Yan. Inception transformer. In NeurIPS, 2022. 2, 5
|
| 309 |
+
[42] Shitao Tang, Jiahui Zhang, Siyu Zhu, et al. Quadtree attention for vision transformers. In ICLR, 2022. 5
|
| 310 |
+
[43] Hugo Touvron, Matthieu Cord, Matthijs Douze, et al. Training data-efficient image transformers & distillation through attention. In ICML, 2021. 2, 4, 5
|
| 311 |
+
[44] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In ECCV, 2022. 1, 2, 5
|
| 312 |
+
[45] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021. 2
|
| 313 |
+
[46] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvtv2: Improved baselines with pyramid vision transformer. Computational Visual Media, 8(3):1-10, 2022. 2, 5, 6
|
| 314 |
+
[47] Wenxiao Wang, Lu Yao, Long Chen, Binbin Lin, Deng Cai, Xiaofei He, and Wei Liu. Crossformer: A versatile vision transformer hinging on cross-scale attention. In ICLR, 2022. 1, 5
|
| 315 |
+
[48] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In CVPR, 2023. 5, 6
|
| 316 |
+
[49] Yulin Wang, Rui Huang, Shiji Song, Zeyi Huang, and Gao Huang. Not all images are worth 16x16 words: Dynamic vision transformers with adaptive sequence length. In NeurIPS, 2021. 5
|
| 317 |
+
[50] Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext v2: Co-designing and scaling convnets with masked autoencoders. arXiv preprint arXiv:2301.00808, 2023. 5
|
| 318 |
+
[51] Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In CVPR, 2022. 2
|
| 319 |
+
|
| 320 |
+
[52] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018. 5
|
| 321 |
+
[53] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In CVPR, 2022. 5
|
| 322 |
+
[54] Chenglin Yang, Siyuan Qiao, Qihang Yu, et al. Moat: Alternating mobile convolution and attention brings strong vision models. In ICLR, 2023. 5
|
| 323 |
+
[55] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal self-attention for local-global interactions in vision transformers. In NeurIPS, 2021. 6
|
| 324 |
+
[56] Rui Yang, Hailong Ma, Jie Wu, Yansong Tang, Xuefeng Xiao, Min Zheng, and Xiu Li. Scalablevit: Rethinking the context-oriented generalization of vision transformer. In ECCV, 2022. 6
|
| 325 |
+
[57] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug-owl: Modularization empowers large language models with multimodality, 2024. 8
|
| 326 |
+
[58] Xiaoyu Yue, Shuyang Sun, Zhanghui Kuang, Meng Wei, Philip HS Torr, Wayne Zhang, and Dahua Lin. Vision transformer with progressive sampling. In ICCV, 2021. 5
|
| 327 |
+
[59] Wang Zeng, Sheng Jin, Wentao Liu, Chen Qian, Ping Luo, Wanli Ouyang, and Xiaogang Wang. Not all tokens are equal: Human-centric visual analysis via token clustering transformer. In CVPR, 2022. 5
|
| 328 |
+
[60] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. 8
|
| 329 |
+
[61] Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, and Rynson Lau. Biformer: Vision transformer with bi-level routing attention. In CVPR, 2023. 2, 5
|
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c981bb4f80f0f9865f693c960e23d5a6435cfc570e3a3d800a9c32341eeacadc
|
| 3 |
+
size 1123941
|
2025/Semantic Equitable Clustering_ A Simple and Effective Strategy for Clustering Vision Tokens/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_content_list.json
ADDED
|
@@ -0,0 +1,1447 @@
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Semantic Watermarking Reinvented: Enhancing Robustness and Generation Quality with Fourier Integrity",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
109,
|
| 8 |
+
128,
|
| 9 |
+
885,
|
| 10 |
+
176
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Sung Ju Lee Nam Ik Cho",
|
| 17 |
+
"bbox": [
|
| 18 |
+
366,
|
| 19 |
+
203,
|
| 20 |
+
630,
|
| 21 |
+
220
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Dept. of ECE & INMC, Seoul National University, Korea",
|
| 28 |
+
"bbox": [
|
| 29 |
+
269,
|
| 30 |
+
238,
|
| 31 |
+
727,
|
| 32 |
+
257
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "thomas11809@snu.ac.kr, nicho@snu.ac.kr",
|
| 39 |
+
"bbox": [
|
| 40 |
+
326,
|
| 41 |
+
258,
|
| 42 |
+
665,
|
| 43 |
+
273
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
248,
|
| 53 |
+
308,
|
| 54 |
+
326,
|
| 55 |
+
325
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "Semantic watermarking techniques for latent diffusion models (LDMs) are robust against regeneration attacks, but often suffer from detection performance degradation due to the loss of frequency integrity. To tackle this problem, we propose a novel embedding method called Hermitian Symmetric Fourier Watermarking (SFW), which maintains frequency integrity by enforcing Hermitian symmetry. Additionally, we introduce a center-aware embedding strategy that reduces the vulnerability of semantic watermarking due to cropping attacks by ensuring robust information retention. To validate our approach, we apply these techniques to existing semantic watermarking schemes, enhancing their frequency-domain structures for better robustness and retrieval accuracy. Extensive experiments demonstrate that our methods achieve state-of-the-art verification and identification performance, surpassing previous approaches across various attack scenarios. Ablation studies confirm the impact of SFW on detection capabilities, the effectiveness of the center-aware embedding against cropping, and how message capacity influences identification accuracy. Notably, our method achieves the highest detection accuracy while maintaining superior image fidelity, as evidenced by FID and CLIP scores. Conclusively, our proposed SFW is shown to be an effective framework for balancing robustness and image fidelity, addressing the inherent trade-offs in semantic watermarking. Code available at github.com/thomas11809/SFWMark.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
89,
|
| 64 |
+
342,
|
| 65 |
+
483,
|
| 66 |
+
750
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1. Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
91,
|
| 76 |
+
782,
|
| 77 |
+
220,
|
| 78 |
+
797
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "With the advancement of diffusion generative models [17, 25, 33, 35, 39, 41], their generated images are being increasingly used across various fields, including creative works, entertainment, and advertisement. In particular, the open-source release of large-scale language-image (LLI) models like Stable Diffusion [37] has led to an exponential growth",
|
| 85 |
+
"bbox": [
|
| 86 |
+
89,
|
| 87 |
+
809,
|
| 88 |
+
483,
|
| 89 |
+
901
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "image",
|
| 95 |
+
"img_path": "images/5bbcfca8fcbdc3cc4228a5a7344a6d4682b897c62e1a5eec1f2b6ad07fd5a977.jpg",
|
| 96 |
+
"image_caption": [
|
| 97 |
+
"Figure 1. Summary of watermarking performance across different semantic watermarking methods, as detailed in Sec. 5.2. All methods follow the merged-in-generation scheme with no additional processing time. Verification is evaluated using TPR@1%FPR (True Positive Rate at $1\\%$ False Positive Rate), while identification is assessed by Identification Accuracy (Perfect Match Rate). The proposed approaches achieve the best balance between detection robustness and image fidelity."
|
| 98 |
+
],
|
| 99 |
+
"image_footnote": [],
|
| 100 |
+
"bbox": [
|
| 101 |
+
516,
|
| 102 |
+
308,
|
| 103 |
+
906,
|
| 104 |
+
484
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 0
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "in script-based image generation and editing technologies, resulting in a surge of generative content. This has raised new concerns, such as the copyright of AI-generated content and the tracking of images created with malicious intent. As a solution to these problems, the technique of embedding invisible watermarks into content has been proposed.",
"bbox": [
511,
638,
906,
744
],
"page_idx": 0
},
{
"type": "text",
"text": "Digital content watermarking has been extensively studied through both classical signal processing techniques [5, 6, 12, 13, 15, 28, 38, 43, 44, 49] and deep learning-based approaches [1, 18, 19, 21, 26, 30, 42, 52, 57, 58]. Recently, watermarking techniques that directly intervene in the generation process of diffusion models [3, 7, 10, 20, 22, 27, 31, 36, 51, 54] have been explored; however, these approaches require additional setup or configuration, adding more complexity.",
"bbox": [
511,
747,
908,
883
],
"page_idx": 0
},
{
"type": "text",
"text": "Several studies have explored embedding watermarks di",
"bbox": [
532,
885,
903,
901
],
"page_idx": 0
},
{
"type": "header",
"text": "CVF",
"bbox": [
106,
2,
181,
42
],
"page_idx": 0
},
{
"type": "header",
"text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
"bbox": [
238,
0,
807,
46
],
"page_idx": 0
},
{
"type": "page_number",
"text": "18759",
"bbox": [
480,
944,
517,
955
],
"page_idx": 0
},
{
"type": "text",
"text": "rectly into the latent representation, eliminating the need for external models. These include a steganographic approach that constructs latent noise with stream cipher-randomized bits [50] and semantic watermarking methods that embed geometric patterns in the Fourier frequency domain of the latent representation [11, 48, 53]. Meanwhile, research on pixel-level perturbation-based watermarking has highlighted its vulnerability to regeneration attacks [55, 56], demonstrating that semantic watermarking serves as a more robust alternative against such attacks. However, the aforementioned semantic watermarking methods in the latent Fourier domain, which are the focus of this paper, suffer from degraded detection accuracy and generative quality due to their lack of frequency integrity preservation.",
"bbox": [
89,
90,
480,
303
],
"page_idx": 1
},
{
"type": "text",
"text": "In this context, we propose a novel semantic watermarking framework that enhances both detection robustness and image quality by preserving frequency integrity in the latent Fourier domain. Our method is named Hermitian Symmetric Fourier Watermarking (SFW), which ensures that watermark embeddings maintain statistical consistency with the latent noise distribution, leading to improved retrievability and stability in generative models. Additionally, we incorporate a center-aware embedding strategy that enhances robustness against cropping attacks by embedding watermarks in a spatially resilient region of the latent representation. We comprehensively evaluate our method across various attack scenarios, including signal processing distortions, regeneration attacks, and cropping attacks. Experimental results, as illustrated in Fig. 1, demonstrate that our method achieves state-of-the-art detection accuracy in both verification and identification tasks while simultaneously maintaining superior generative quality, as evidenced by FID and CLIP score evaluations. Our contributions can be summarized as follows:",
"bbox": [
91,
303,
482,
602
],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We propose Hermitian SFW to ensure frequency integrity, leading to improved watermark detection and generative quality.",
"- We introduce center-aware embedding, which significantly enhances robustness against cropping attacks.",
"- We present extensive evaluations demonstrating that our approach outperforms existing baselines in detection accuracy and generative quality."
],
"bbox": [
91,
604,
482,
724
],
"page_idx": 1
},
{
"type": "text",
"text": "2. Related Works",
"text_level": 1,
"bbox": [
91,
739,
238,
753
],
"page_idx": 1
},
{
"type": "text",
"text": "Digital Watermarking. Digital watermarking aims to achieve a balance between high embedding capacity and visual quality when inserting invisible watermarks while also ensuring robustness against various attacks [45]. This field originally began with the adoption of classical signal processing techniques. Embedding the watermark in the spatial domain is a straightforward and intuitive watermark insertion [43, 44], but it tends to be less robust against attacks such as filtering or compression. On the other hand,",
"bbox": [
89,
763,
482,
900
],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/79831f3006d8a799d55b67df55ba2fe6115882fb6449f42436d3e6ef4d026d16.jpg",
"image_caption": [
"Figure 2. Overview of the semantic watermarking process in the latent Fourier domain using the merged-in-generation scheme."
],
"image_footnote": [],
"bbox": [
516,
88,
903,
262
],
"page_idx": 1
},
{
"type": "text",
"text": "studies utilizing the frequency domain have demonstrated resilience against JPEG compression by embedding watermarks in the low and middle frequency bands [2, 5, 6, 12, 13, 15, 28, 32, 38, 47, 49].",
"bbox": [
511,
325,
903,
386
],
"page_idx": 1
},
{
"type": "text",
"text": "Watermarks for Latent Diffusion Models. Latent diffusion models (LDMs), such as Stable Diffusion [37], conduct the diffusion process in a lower-dimensional latent space rather than directly on high-resolution images. Recent research has focused on techniques for integrating watermarks into the latent diffusion process to ensure traceability and robustness. However, many of these approaches rely on model fine-tuning [3, 10, 20, 22, 27] or require separately trained encoders and decoders [7, 10, 14, 20, 22, 23, 27, 31, 36, 51, 54] to facilitate watermark embedding and detection. These dependencies constrain the flexibility of watermarking, limiting their applicability across diverse models.",
"bbox": [
511,
387,
905,
582
],
"page_idx": 1
},
{
"type": "text",
"text": "Several studies have explored embedding semantic watermarks in the Fourier domain of latent vectors using a merged-in-generation scheme. Wen et al. [48] introduced a tree-ring-shaped watermark constructed with Gaussian-distributed radii, while Ci et al. [11] proposed a modified pattern with high-energy signed constant rings for watermark identification, along with an additional random noise key embedded in a separate channel. On the other hand, Zodiac [53] followed a post-hoc approach, embedding a tree-ring pattern into generated images through multiple iterations of latent vector optimization and diffusion-based synthesis. However, in addition to its high computational cost, this method suffers from low practicality, as it relies on extensive linear interpolation between the original and optimized images to artificially improve visual quality.",
"bbox": [
511,
584,
905,
824
],
"page_idx": 1
},
{
"type": "text",
"text": "In these methods, when performing inverse Fourier transform, the imaginary component in the spatial domain is discarded, leading to a distorted frequency representation. As a result, the real component of the key region retains only partial information, while the imaginary component",
"bbox": [
511,
825,
903,
900
],
"page_idx": 1
},
{
"type": "page_number",
"text": "18760",
"bbox": [
480,
944,
517,
955
],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/dc28e9aa714e81edfecb78c3e654df8afd03e20873e931db02496f0b9bbf1ed4.jpg",
"image_caption": [
"Figure 3. Examples of various semantic watermarking patterns."
],
"image_footnote": [],
"bbox": [
93,
88,
480,
263
],
"page_idx": 2
},
{
"type": "text",
"text": "is almost entirely lost, creating an empty key region in the frequency domain. Since detection relies on analyzing this incomplete key region, the process is inherently limited, resulting in suboptimal retrieval performance.",
"bbox": [
89,
316,
482,
378
],
"page_idx": 2
},
{
"type": "text",
"text": "Fig. 2 illustrates the overall pipeline of semantic watermarking with the merged-in-generation scheme. The embedding process begins with the Fourier transform applied to the latent noise, generating the latent Fourier representation. A watermark key is then embedded into a designated key region, followed by the inverse Fourier transform, producing the watermarked latent vector. Finally, text-guided image generation synthesizes the watermarked image. For detection, the process starts with a clean or attacked image, from which the latent query is obtained via DDIM inversion [41]. The presence of a watermark is determined by analyzing the query key region in the latent representation. Further details on detection tasks and evaluation metrics are provided in Sec. 3.1.",
"bbox": [
89,
378,
483,
590
],
"page_idx": 2
},
{
"type": "text",
"text": "3. Preliminaries",
"text_level": 1,
"bbox": [
89,
607,
228,
623
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1. Task Formulation",
"text_level": 1,
"bbox": [
89,
633,
264,
648
],
"page_idx": 2
},
{
"type": "text",
"text": "The performance of watermark detection is evaluated following RingID's formulation for watermark verification and identification tasks [11]. The metric used to measure the distance $d$ is the $L_{1}$ distance, calculated specifically for the key region where the watermark is embedded.",
"bbox": [
89,
656,
482,
731
],
"page_idx": 2
},
{
"type": "text",
"text": "Verification. The objective of verification is to determine whether a watermark is present in an image by analyzing the distance between the reference key and the key region of the query in the latent Fourier domain. Let $\\hat{w}$ denote the watermarked latent Fourier key, and $\\hat{u}$ denote the unwatermarked latent Fourier null key. Verification is based on comparing the distances between the reference key $w$ and the watermarked/unwatermarked keys, i.e., $d(\\hat{w}, w) \\neq d(\\hat{u}, w)$ . Performance is assessed using statistical metrics derived from the ROC curve, considering different distance thresholds.",
"bbox": [
89,
733,
482,
883
],
"page_idx": 2
},
{
"type": "text",
"text": "Identification. In identification, given that a watermark",
"bbox": [
89,
885,
482,
900
],
"page_idx": 2
},
{
"type": "text",
"text": "is already embedded, the task is to accurately determine the embedded information. This is achieved by computing the distance between the watermarked key $\\hat{w}$ and multiple reference keys $w_{i}$ . Performance is evaluated based on the accuracy of the estimated message index, defined as $\\hat{i} = \\arg \\min_{i}d(\\hat{w},w_{i})$ .",
"bbox": [
511,
90,
903,
181
],
"page_idx": 2
},
{
"type": "text",
"text": "3.2. Fourier Considerations for Real Latent Noise",
"text_level": 1,
"bbox": [
511,
190,
897,
205
],
"page_idx": 2
},
{
"type": "text",
"text": "This section introduces the mathematical properties of the Fourier domain and highlights important considerations when embedding semantic watermarks.",
"bbox": [
511,
212,
903,
256
],
"page_idx": 2
},
{
"type": "text",
"text": "Hermitian Symmetry for Real Signals. To obtain a real-valued signal after inverse Fourier transform, modifications in the frequency domain must maintain Hermitian symmetry about the DC center:",
"bbox": [
511,
257,
903,
316
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nF [ M - k, N - l ] = \\overline {{F [ k , l ]}}, \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
614,
325,
903,
344
],
"page_idx": 2
},
{
"type": "text",
"text": "where $F$ represents the discrete Fourier transform, $M$ and $N$ denote the dimensions of the signal in the spatial domain along the row and column axes, respectively. Failure to preserve this symmetry introduces undesired imaginary components in the spatial domain, which are incompatible with the real-valued latent noise required for diffusion models.",
"bbox": [
511,
352,
905,
441
],
"page_idx": 2
},
{
"type": "text",
"text": "Impact of Ignoring Hermitian Symmetry. In existing baselines [11, 48, 53], this issue is simply addressed by discarding the imaginary component, resulting in a loss of frequency integrity. As illustrated in Fig. 3a, this disruption affects both real and imaginary components of the original watermark pattern. While the real component deviates from its intended structure due to frequency distortion, the imaginary component suffers even greater degradation, often becoming entirely empty. This significantly limits detection performance by preventing the effective use of frequency information in the imaginary domain. The impact of the frequency loss is further examined in Sec. 5.3.1.",
"bbox": [
511,
443,
905,
623
],
"page_idx": 2
},
{
"type": "text",
"text": "Preservation of Gaussianity. A real Gaussian noise signal in the spatial domain transforms into a complex Gaussian noise signal in the frequency domain while maintaining its statistical properties:",
"bbox": [
511,
625,
905,
685
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nf [ m, n ] \\sim \\mathcal {N} (0, \\sigma^ {2}) \\Rightarrow F [ k, l ] \\sim \\mathcal {C N} (0, M N \\sigma^ {2}), \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
519,
691,
903,
710
],
"page_idx": 2
},
{
"type": "text",
"text": "where $f$ represents the real-valued Gaussian noise signal in the spatial domain. When embedding watermarks in the frequency domain, some perturbation is inevitably introduced, altering the original distribution of complex Gaussian noise. However, if Hermitian symmetry is not preserved, the inverse Fourier transform produces incomplete noise characteristics in the spatial domain, as imaginary components must be discarded. This disrupts the statistical consistency of the latent noise, leading to a deviation from the expected real Gaussian distribution and potential degradation in generative quality. In contrast, embedding with Hermitian symmetry better retains the statistical structure of the latent",
"bbox": [
511,
719,
906,
900
],
"page_idx": 2
},
{
"type": "page_number",
"text": "18761",
"bbox": [
480,
944,
517,
955
],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/ab9bf4e0e2b5982d3489bb7c83734271773944aa4c33f486db60e72de68a140a.jpg",
"image_caption": [
"Figure 4. Overview of the proposed framework and qualitative results. (Left) The key components of our approach: Symmetric Fourier Watermarking (SFW) (blue region) and Center-Aware Embedding Strategy (green region). (Right) Qualitative results of Tree-Ring, RingID, HSTR, and HSQR. Notably, RingID exhibits visible ring-like artifacts, highlighting that its high-energy pattern disrupts generative quality, unlike other merged-in-generation semantic watermarking methods that achieve a better balance between robustness and image fidelity."
],
"image_footnote": [],
"bbox": [
93,
87,
509,
258
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/b50f6570acc8fbdaa56c28b3765c7d40fc0eb61030aa1612eec54a8483f7aa4e.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
517,
88,
905,
256
],
"page_idx": 3
},
{
"type": "text",
"text": "noise, ensuring that the transformed spatial-domain signal remains closer to a real Gaussian distribution. By preserving these statistical properties, the initialization process in diffusion models becomes more stable, ultimately enhancing generative performance, as observed in Sec. 5.2.",
"bbox": [
89,
349,
485,
426
],
"page_idx": 3
},
{
"type": "text",
"text": "4. Methods",
"text_level": 1,
"bbox": [
89,
441,
191,
455
],
"page_idx": 3
},
{
"type": "text",
"text": "4.1. Hermitian Symmetric Fourier Watermark",
"text_level": 1,
"bbox": [
89,
465,
452,
482
],
"page_idx": 3
},
{
"type": "text",
"text": "The proposed Hermitian SFW refers to a watermark designed to satisfy the Hermitian symmetry introduced in the previous section. As shown in the left side of Fig. 4, SFW allows free usage of the half-region in the frequency domain, while the remaining half is restricted by the symmetry condition. Certain constraints must be satisfied when embedding a pattern in the free half-region under a semantic watermarking scheme: the imaginary part at the DC center must be zero, and if the signal dimensions are even, the imaginary components of the points corresponding to the Nyquist frequency must also be zero. In cases where both frequency axes have even dimensions as in this study, the points where the imaginary part must be zero are $(0,0)$ , $\\left(\\frac{M}{2},0\\right)$ , $(0,\\frac{N}{2})$ , and $\\left(\\frac{M}{2},\\frac{N}{2}\\right)$ . Instead of directly modifying the watermark pattern to comply with locations where the imaginary part must be zero, a more effective strategy is to embed it while avoiding the DC axis as much as possible, as will be introduced in Sec. 4.3.2",
"bbox": [
89,
488,
483,
760
],
"page_idx": 3
},
{
"type": "text",
"text": "4.2. Center-Aware Embedding Strategy",
"text_level": 1,
"bbox": [
89,
771,
397,
787
],
"page_idx": 3
},
{
"type": "text",
"text": "Existing semantic watermarking baselines [11, 48, 53] embed patterns by applying the Fourier transform to the full spatial matrix of the latent vector. However, such methods are vulnerable to spatial attacks, particularly cropping, which can lead to the loss of watermark information and reduced detection performance. To address this issue, we propose a center-aware embedding strategy, which applies",
"bbox": [
89,
794,
483,
902
],
"page_idx": 3
},
{
"type": "text",
"text": "the Fourier transform only to the central area of the spatial domain before embedding. Specifically, for a latent vector with a signal dimension of 64, we utilize the central $44 \\times 44$ space. As demonstrated in Sec. 5.3.2, this design significantly improves robustness against cropping attacks on various scales.",
"bbox": [
511,
349,
906,
441
],
"page_idx": 3
},
{
"type": "text",
"text": "4.3. Integrating SFW and Center-Aware Design",
"text_level": 1,
"bbox": [
511,
450,
880,
468
],
"page_idx": 3
},
{
"type": "text",
"text": "4.3.1. Application to the Baseline",
"text_level": 1,
"bbox": [
511,
474,
746,
488
],
"page_idx": 3
},
{
"type": "text",
"text": "As shown in Fig. 3a, the Tree-Ring [48] baseline suffers from frequency loss. To mitigate this, we impose the Hermitian symmetry condition directly on the watermark patterns. By integrating SFW with the center-aware embedding strategy, we refine the baseline into Hermitian Symmetric Tree-Ring (HSTR). HSTR not only enhances detection performance by fully utilizing the imaginary components of the latent Fourier domain but also improves image generation quality by restoring frequency integrity.",
"bbox": [
511,
493,
906,
630
],
"page_idx": 3
},
{
"type": "text",
"text": "4.3.2. Hermitian Symmetric QR code",
"text_level": 1,
"bbox": [
511,
638,
777,
652
],
"page_idx": 3
},
{
"type": "text",
"text": "In this section, we introduce a novel approach that extends SFW beyond baselines to QR codes, which are widely recognized for their high versatility and robust error correction capabilities [16].",
"bbox": [
511,
657,
905,
717
],
"page_idx": 3
},
{
"type": "text",
"text": "**Embedding.** As illustrated in Fig. 3b, the Hermitian Symmetric QR Code (HSQR) watermark is constructed by splitting the QR code in half and embedding each part separately into the real and imaginary components of the free half-region in the Fourier domain. The binary pattern of the QR code is embedded using the following formulation:",
"bbox": [
511,
718,
905,
809
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\operatorname {H S Q R} (\\tilde {x}, c) = \\left\\{ \\begin{array}{l l} + | F (\\tilde {x}, c) |, & \\text {i f} \\operatorname {Q R} (x) = 1 \\\\ - | F (\\tilde {x}, c) |, & \\text {i f} \\operatorname {Q R} (x) = 0, \\end{array} \\right. \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
550,
834,
906,
875
],
"page_idx": 3
},
{
"type": "text",
"text": "where:",
"bbox": [
514,
886,
563,
898
],
"page_idx": 3
},
{
"type": "page_number",
"text": "18762",
"bbox": [
480,
944,
519,
955
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/d5f00951289a17b786af1fca0e97da387fb639b201d62c5587a010845934e893.jpg",
"table_caption": [
"Table 1. Verification performance of different watermarking methods under various attacks. Bit Accuracy is used for bitstream-based methods (DwtDct, DwtDctSvd, RivaGAN, S.Sign), while TPR@1%FPR is used for semantic methods. Best performances are highlighted. Our methods show superior detection accuracy and robustness against signal processing distortions, regeneration, and cropping attacks."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Datasets</td><td rowspan=\"2\">Methods</td><td colspan=\"2\">No Attack</td><td colspan=\"6\">Signal Processing Attack</td><td colspan=\"3\">Regeneration Attack</td><td colspan=\"2\">Cropping Attack</td><td>Avg</td></tr><tr><td>Clean</td><td>Bright.</td><td>Cont.</td><td>JPEG</td><td>Blur</td><td>Noise</td><td>BM3D</td><td>VAE-B</td><td>VAE-C</td><td>Diff.</td><td>C.C.</td><td>R.C.</td><td></td><td></td></tr><tr><td rowspan=\"9\">MS-COCO</td><td>DwtDct</td><td>0.863</td><td>0.572</td><td>0.522</td><td>0.516</td><td>0.677</td><td>0.859</td><td>0.532</td><td>0.523</td><td>0.521</td><td>0.519</td><td>0.729</td><td>0.810</td><td>0.637</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.555</td><td>0.473</td><td>0.602</td><td>1.000</td><td>1.000</td><td>0.784</td><td>0.648</td><td>0.596</td><td>0.644</td><td>0.744</td><td>0.861</td><td>0.742</td><td></td></tr><tr><td>RivaGAN</td><td>0.999</td><td>0.862</td><td>0.986</td><td>0.821</td><td>0.998</td><td>0.969</td><td>0.934</td><td>0.570</td><td>0.552</td><td>0.608</td><td>0.991</td><td>0.995</td><td>0.857</td><td></td></tr><tr><td>S.Sign.</td><td>0.995</td><td>0.894</td><td>0.978</td><td>0.806</td><td>0.911</td><td>0.721</td><td>0.838</td><td>0.717</td><td>0.715</td><td>0.478</td><td>0.987</td><td>0.991</td><td>0.836</td><td></td></tr><tr><td>Tree-Ring</td><td>0.957</td><td>0.463</td><td>0.900</td><td>0.548</td><td>0.934</td><td>0.412</td><td>0.815</td><td>0.509</td><td>0.536</td><td>0.543</td><td>0.509</td><td>0.734</td><td>0.655</td><td></td></tr><tr><td>Zodiac</td><td>0.998</td><td>0.843</td><td>0.998</td><td>0.973</td><td>0.998</td><td>0.880</td><td>0.997</td><td>0.944</td><td>0.958</td><td>0.972</td><td>0.989</td><td>0.995</td><td>0.962</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td>0.899</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.806</td><td>0.999</td><td>0.973</td><td>0.982</td><td>0.997</td><td>1.000</td><td>1.000</td><td>0.971</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.988</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.987</td><td>1.000</td><td>0.992</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.997</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.991</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.983</td><td>1.000</td><td>0.992</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.997</td><td></td></tr><tr><td rowspan=\"9\">SD-Prompts</td><td>DwtDct</td><td>0.819</td><td>0.557</td><td>0.516</td><td>0.506</td><td>0.685</td><td>0.822</td><td>0.530</td><td>0.513</td><td>0.512</td><td>0.509</td><td>0.723</td><td>0.794</td><td>0.624</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.537</td><td>0.459</td><td>0.610</td><td>0.999</td><td>0.998</td><td>0.859</td><td>0.659</td><td>0.620</td><td>0.623</td><td>0.743</td><td>0.860</td><td>0.747</td><td></td></tr><tr><td>RivaGAN</td><td>0.991</td><td>0.823</td><td>0.963</td><td>0.810</td><td>0.988</td><td>0.961</td><td>0.915</td><td>0.572</td><td>0.535</td><td>0.567</td><td>0.980</td><td>0.983</td><td>0.841</td><td></td></tr><tr><td>S.Sign.</td><td>0.994</td><td>0.899</td><td>0.967</td><td>0.769</td><td>0.888</td><td>0.742</td><td>0.809</td><td>0.677</td><td>0.671</td><td>0.493</td><td>0.983</td><td>0.990</td><td>0.824</td><td></td></tr><tr><td>Tree-Ring</td><td>0.944</td><td>0.471</td><td>0.894</td><td>0.466</td><td>0.912</td><td>0.423</td><td>0.802</td><td>0.509</td><td>0.514</td><td>0.543</td><td>0.469</td><td>0.749</td><td>0.641</td><td></td></tr><tr><td>Zodiac</td><td>0.998</td><td>0.748</td><td>0.999</td><td>0.979</td><td>0.999</td><td>0.903</td><td>1.000</td><td>0.940</td><td>0.975</td><td>0.958</td><td>0.994</td><td>0.996</td><td>0.957</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td>0.742</td><td>1.000</td><td>0.990</td><td>1.000</td><td>0.850</td><td>1.000</td><td>0.983</td><td>0.987</td><td>0.999</td><td>1.000</td><td>1.000</td><td>0.963</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.972</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.988</td><td>1.000</td><td>0.996</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.996</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.955</td><td>1.000</td><td>0.999</td><td>1.000</td><td>0.992</td><td>1.000</td><td>0.996</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr><tr><td rowspan=\"9\">DiffusionDB</td><td>DwtDct</td><td>0.842</td><td>0.563</td><td>0.515</td><td>0.509</td><td>0.672</td><td>0.829</td><td>0.526</td><td>0.513</td><td>0.514</td><td>0.512</td><td>0.723</td><td>0.801</td><td>0.627</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.998</td><td>0.558</td><td>0.463</td><td>0.593</td><td>0.997</td><td>0.995</td><td>0.830</td><td>0.658</td><td>0.608</td><td>0.621</td><td>0.742</td><td>0.860</td><td>0.744</td><td></td></tr><tr><td>RivaGAN</td><td>0.987</td><td>0.839</td><td>0.960</td><td>0.790</td><td>0.985</td><td>0.937</td><td>0.893</td><td>0.553</td><td>0.518</td><td>0.556</td><td>0.974</td><td>0.979</td><td>0.831</td><td></td></tr><tr><td>S.Sign.</td><td>0.990</td><td>0.890</td><td>0.967</td><td>0.787</td><td>0.889</td><td>0.726</td><td>0.819</td><td>0.690</td><td>0.687</td><td>0.496</td><td>0.981</td><td>0.986</td><td>0.826</td><td></td></tr><tr><td>Tree-Ring</td><td>0.940</td><td>0.487</td><td>0.889</td><td>0.434</td><td>0.904</td><td>0.392</td><td>0.799</td><td>0.454</td><td>0.503</td><td>0.454</td><td>0.499</td><td>0.715</td><td>0.622</td><td></td></tr><tr><td>Zodiac</td><td>0.992</td><td>0.752</td><td>0.988</td><td>0.933</td><td>0.988</td><td>0.834</td><td>0.984</td><td>0.911</td><td>0.926</td><td>0.903</td><td>0.971</td><td>0.985</td><td>0.931</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>0.999</td><td>0.792</td><td>0.996</td><td>0.981</td><td>0.996</td><td>0.792</td><td>0.991</td><td>0.968</td><td>0.969</td><td>0.989</td><td>1.000</td><td>1.000</td><td>0.956</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.989</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.963</td><td>1.000</td><td>0.995</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.977</td><td>1.000</td><td>0.999</td><td>1.000</td><td>0.974</td><td>0.999</td><td>0.997</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr></table>",
"bbox": [
98,
143,
897,
506
],
"page_idx": 4
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- $x$ denotes the coordinates in the QR code domain.",
"- $\\tilde{x}$ represents the corresponding coordinates in the embedding region of the Fourier domain. Since the QR code is split into two halves, each half is mapped to either the real or imaginary component.",
"- $c \\in \\{\\mathrm{Re}, \\mathrm{Im}\\}$ indicates whether the embedding occurs in the real or imaginary part.",
"- $F(\\tilde{x}, c)$ is the real-valued amplitude of the complex Gaussian noise in the Fourier domain, where a complex number is expressed as $F_{\\mathrm{Re}} + iF_{\\mathrm{Im}}$ , making both real and imaginary amplitudes individually real-valued."
],
"bbox": [
89,
531,
482,
696
],
"page_idx": 4
},
{
"type": "text",
"text": "To maintain symmetry while avoiding numerical instability, the embedding region is positioned one pixel to the right of the vertical DC axis. Additionally, to increase the amount of embedded information and statistically reduce errors, each QR code cell is represented by multiple pixels arranged in a square pattern, e.g., $2 \\times 2$ pixels.",
"bbox": [
89,
698,
482,
789
],
"page_idx": 4
},
{
"type": "text",
"text": "Detection. The detection process for semantic watermarking is performed by computing the L1 distance. In HSQR, the ground truth QR binary pattern is converted into a signed Boolean representation as follows:",
"bbox": [
89,
790,
482,
849
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathrm{QR}^{*}(\\tilde{x}, c) = \\left\\{ \\begin{array}{ll} +\\Lambda, & \\text{if } \\mathrm{QR}(x) = 1 \\\\ -\\Lambda, & \\text{if } \\mathrm{QR}(x) = 0, \\end{array} \\right. \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [
158,
863,
482,
904
],
"page_idx": 4
},
{
"type": "text",
"text": "where $\\Lambda$ is a fixed amplitude for binary encoding.",
"bbox": [
511,
531,
841,
546
],
"page_idx": 4
},
{
"type": "text",
"text": "5. Experiments",
"text_level": 1,
"bbox": [
511,
564,
643,
580
],
"page_idx": 4
},
{
"type": "text",
"text": "5.1. Experimental Settings",
"text_level": 1,
"bbox": [
511,
590,
718,
606
],
"page_idx": 4
},
{
"type": "text",
"text": "Semantic Watermarks. The Tree-Ring [48] watermark is embedded into channel 3 with a radius of 14, and HSTR adopts the same watermark parameters as its baseline. For RingID [11], we follow the original settings, embedding a tree-ring pattern with a radius range of 3-14 in channel 3, along with a Gaussian noise key in channel 0. Both SFW methods follow RingID in utilizing a Gaussian noise key in channel 0. For HSQR, we use a version 1 QR code ($21 \\times 21$ cells) with a cell size of 2 pixels and error correction level H, capable of encoding up to 72 bits. The QR code is embedded in channel 3, and the encoding amplitude $\\Lambda$ is set to 45, corresponding to the standard deviation of the real and imaginary components in the Fourier domain of a $64 \\times 64$ normal Gaussian latent vector ($\\approx \\sqrt{64^2/2}$). Note that these experiments are conducted on $512 \\times 512$ images. At higher resolutions, a larger embedding space allows for increased capacity through higher QR code versions that encode more bits, as well as the use of stronger watermarking parameters, enabling greater robustness and adaptability. Additionally,",
"bbox": [
511,
613,
906,
902
],
"page_idx": 4
},
{
"type": "page_number",
"text": "18763",
"bbox": [
480,
944,
517,
955
],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/f11de02277708da5e7df5b4593c814afaf2b14a2eb23cbe4fbec9d918a1d5525.jpg",
"table_caption": [
"Table 2. Identification accuracy for different watermarking methods, evaluated across multiple attack conditions. Perfect Match Rate is used for bitstream-based methods. The best performance for each item is highlighted with shading. HSQR achieves state-of-the-art performance, while HSTR consistently outperforms other Gaussian radius-based semantic baselines, particularly under cropping attacks."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Datasets</td><td rowspan=\"2\">Methods</td><td colspan=\"2\">No Attack</td><td colspan=\"6\">Signal Processing Attack</td><td colspan=\"3\">Regeneration Attack</td><td colspan=\"2\">Cropping Attack</td><td>Avg</td></tr><tr><td>Clean</td><td>Bright.</td><td>Cont.</td><td>JPEG</td><td>Blur</td><td>Noise</td><td>BM3D</td><td>VAE-B</td><td>VAE-C</td><td>Diff.</td><td>C.C.</td><td>R.C.</td><td></td><td></td></tr><tr><td rowspan=\"9\">MS-COCO</td><td>DwtDct</td><td>0.466</td><td>0.044</td><td>0.000</td><td>0.000</td><td>0.038</td><td>0.442</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.083</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.044</td><td>0.019</td><td>0.000</td><td>0.999</td><td>0.998</td><td>0.037</td><td>0.000</td><td>0.000</td><td>0.003</td><td>0.000</td><td>0.000</td><td>0.258</td><td></td></tr><tr><td>RivaGAN</td><td>0.974</td><td>0.260</td><td>0.772</td><td>0.023</td><td>0.961</td><td>0.686</td><td>0.348</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.852</td><td>0.909</td><td>0.482</td><td></td></tr><tr><td>S.Sign.</td><td>0.873</td><td>0.177</td><td>0.563</td><td>0.000</td><td>0.036</td><td>0.010</td><td>0.007</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.709</td><td>0.802</td><td>0.265</td><td></td></tr><tr><td>Tree-Ring</td><td>0.303</td><td>0.087</td><td>0.207</td><td>0.072</td><td>0.256</td><td>0.030</td><td>0.162</td><td>0.083</td><td>0.072</td><td>0.054</td><td>0.009</td><td>0.033</td><td>0.114</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td>0.714</td><td>0.999</td><td>0.886</td><td>0.998</td><td>0.460</td><td>0.972</td><td>0.833</td><td>0.831</td><td>0.971</td><td>1.000</td><td>1.000</td><td>0.889</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.875</td><td>1.000</td><td>0.975</td><td>1.000</td><td>0.919</td><td>0.996</td><td>0.978</td><td>0.970</td><td>0.998</td><td>0.874</td><td>0.978</td><td>0.964</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.958</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.901</td><td>0.999</td><td>0.980</td><td>0.987</td><td>0.999</td><td>1.000</td><td>1.000</td><td>0.985</td><td></td></tr><tr><td rowspan=\"9\">SD-Prompts</td><td>DwtDct</td><td>0.285</td><td>0.024</td><td>0.000</td><td>0.000</td><td>0.017</td><td>0.276</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.050</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.993</td><td>0.028</td><td>0.011</td><td>0.000</td><td>0.982</td><td>0.979</td><td>0.085</td><td>0.000</td><td>0.000</td><td>0.007</td><td>0.000</td><td>0.000</td><td>0.257</td><td></td></tr><tr><td>RivaGAN</td><td>0.878</td><td>0.213</td><td>0.613</td><td>0.009</td><td>0.857</td><td>0.657</td><td>0.304</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.736</td><td>0.768</td><td>0.420</td><td></td></tr><tr><td>S.Sign.</td><td>0.813</td><td>0.263</td><td>0.420</td><td>0.000</td><td>0.021</td><td>0.015</td><td>0.006</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.576</td><td>0.716</td><td>0.236</td><td></td></tr><tr><td>Tree-Ring</td><td>0.288</td><td>0.094</td><td>0.189</td><td>0.051</td><td>0.235</td><td>0.034</td><td>0.159</td><td>0.079</td><td>0.076</td><td>0.056</td><td>0.012</td><td>0.041</td><td>0.110</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.885</td><td>1.000</td><td>0.976</td><td>0.998</td><td>0.886</td><td>0.993</td><td>0.980</td><td>0.973</td><td>0.995</td><td>0.876</td><td>0.981</td><td>0.962</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.930</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.942</td><td>0.999</td><td>0.991</td><td>0.997</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.988</td><td></td></tr><tr><td rowspan=\"9\">DiffusionDB</td><td>DwtDct</td><td>0.357</td><td>0.037</td><td>0.000</td><td>0.000</td><td>0.034</td><td>0.320</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.062</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.990</td><td>0.036</td><td>0.019</td><td>0.000</td><td>0.975</td><td>0.959</td><td>0.081</td><td>0.000</td><td>0.000</td><td>0.001</td><td>0.000</td><td>0.000</td><td>0.255</td><td></td></tr><tr><td>RivaGAN</td><td>0.858</td><td>0.213</td><td>0.625</td><td>0.020</td><td>0.848</td><td>0.615</td><td>0.221</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.756</td><td>0.780</td><td>0.411</td><td></td></tr><tr><td>S.Sign.</td><td>0.798</td><td>0.207</td><td>0.472</td><td>0.000</td><td>0.027</td><td>0.005</td><td>0.005</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.643</td><td>0.738</td><td>0.241</td><td></td></tr><tr><td>Tree-Ring</td><td>0.280</td><td>0.095</td><td>0.190</td><td>0.059</td><td>0.233</td><td>0.037</td><td>0.145</td><td>0.081</td><td>0.072</td><td>0.050</td><td>0.013</td><td>0.039</td><td>0.108</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>0.996</td><td>0.721</td><td>0.992</td><td>0.854</td><td>0.989</td><td>0.563</td><td>0.958</td><td>0.830</td><td>0.821</td><td>0.952</td><td>0.996</td><td>0.996</td><td>0.889</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.895</td><td>1.000</td><td>0.947</td><td>0.996</td><td>0.871</td><td>0.992</td><td>0.968</td><td>0.958</td><td>0.990</td><td>0.875</td><td>0.984</td><td>0.956</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.954</td><td>1.000</td><td>0.988</td><td>1.000</td><td>0.906</td><td>0.998</td><td>0.982</td><td>0.991</td><td>0.994</td><td>1.000</td><td>1.000</td><td>0.984</td><td></td></tr></table>",
"bbox": [
98,
143,
897,
506
],
"page_idx": 5
},
{
"type": "text",
"text": "the post-hoc semantic watermarking method, Zodiac [53], is included in our comparative experiments.",
"bbox": [
89,
531,
482,
561
],
"page_idx": 5
},
{
"type": "text",
"text": "Model Setup and Datasets. The diffusion model used for the experiments is Stable Diffusion v2-1-base [37], configured to generate images at a resolution of $512 \\times 512$ with a CFG scale of 7.5. Both generation and inversion steps of the DDIM scheduler [41] are set to 50. The datasets used for image generation include 5,000 captions from the MS-COCO-2017 training set [29], 1,001 sampled prompts from the 2M subset of DiffusionDB [46], and 8,192 prompts from the Stable-Diffusion-Prompts test set [40]. For watermark detection, the number of watermarked images used for each dataset is set to 1,000. Specifically, verification is conducted on 1,000 pairs of watermarked and unwatermarked images, while identification is performed on 1,000 watermarked images.",
"bbox": [
88,
564,
482,
775
],
"page_idx": 5
},
{
"type": "text",
"text": "Attack Methods. To evaluate the robustness of our method against various distortions, we apply 11 different attacks, categorized into three major types.",
"bbox": [
89,
777,
482,
824
],
"page_idx": 5
},
{
"type": "text",
"text": "- Signal Processing Attacks: These include brightness adjustment (ranging from 0 to 7), contrast adjustment (factor of 0.5), JPEG compression (quality factor of 25), Gaussian blur (radius of 5), Gaussian noise ( $\\sigma = 0.05$ ), and BM3D denoising ( $\\sigma = 0.1$ ).",
"bbox": [
89,
825,
482,
901
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Regeneration Attacks: We apply two VAE-based image compression models, Bmshj18 [4] (VAE-B) and Cheng20 [8] (VAE-C), both at quality level 3, as well as a diffusion-based regeneration attack by Zhao et al. [56] (referred to as Diff.) using 60 denoising steps.",
"- Cropping Attacks: We apply center crop (C.C.) with a crop scale of 0.5 and random crop (R.C.) with a crop scale of 0.7, where the crop scale represents the ratio of the cropped image area to the original image."
],
"bbox": [
511,
531,
906,
667
],
"page_idx": 5
},
{
"type": "text",
"text": "Evaluation Metrics. The $L_{1}$ distance is computed only within the key region of the complex Fourier domain of latent noise, and unless otherwise specified, 2,048 keys are used for identification accuracy. For verification, semantic watermarking methods use True Positive Rate at $1\\%$ False Positive Rate (TPR@1%FPR) as the evaluation metric. To ensure a fair comparison with bitstream-based approaches such as DwtDct, DwtDctSvd [12], RivaGAN [52], and Stable Signature (S.Sign.) [22], we measure Bit Accuracy for these methods. For identification, where the goal is to match the exact message, Perfect Match Rate is used as the metric for bitstream-based approaches. To evaluate the generation quality, FID [24] is measured between watermarked images and 5,000 MS-COCO ground truth images. We also calculate the CLIP score [34] using the OpenCLIP-ViT/G",
"bbox": [
511,
674,
906,
900
],
"page_idx": 5
},
{
"type": "page_number",
"text": "18764",
"bbox": [
480,
944,
517,
955
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/4074b3b3a96b3c74136a639019085752fdaf96fe7d52348bdf469659caa65a0b.jpg",
"table_caption": [
"Table 3. Generative quality evaluation of watermarking methods based on FID (MS-COCO ground truth) and CLIP score. The best performance for each item is highlighted with shading, while bold text specifically marks the low CLIP score in RingID. Our proposed methods preserve frequency integrity, achieving the best balance between watermark robustness and generative performance, whereas RingID introduces visible artifacts, compromising perceptual quality. Vrf. and Idf. denote the average detection performance in verification and identification tasks, respectively."
],
"table_footnote": [],
"table_body": "<table><tr><td colspan=\"2\">Semantic Methods</td><td>FID ↓</td><td>CLIP ↑</td><td>Vrf.</td><td>Idf.</td></tr><tr><td rowspan=\"4\">Merged in Generation</td><td>Tree-Ring</td><td>26.418</td><td>0.326</td><td>0.655</td><td>0.114</td></tr><tr><td>RingID</td><td>27.052</td><td>0.324</td><td>0.997</td><td>0.964</td></tr><tr><td>HSTR (ours)</td><td>25.062</td><td>0.329</td><td>0.971</td><td>0.889</td></tr><tr><td>HSQR (ours)</td><td>24.895</td><td>0.330</td><td>0.997</td><td>0.985</td></tr></table>",
"bbox": [
109,
224,
460,
299
],
"page_idx": 6
},
{
"type": "text",
"text": "model [9] to determine how closely the generated image aligns with the input prompt.",
"bbox": [
89,
324,
482,
354
],
"page_idx": 6
},
{
"type": "text",
"text": "5.2. Comparison with Baselines",
"text_level": 1,
"bbox": [
89,
364,
334,
381
],
"page_idx": 6
},
{
"type": "text",
"text": "In this section, we compare the detection performance in both verification and identification tasks, as well as the generation quality of different watermarking methods.",
"bbox": [
89,
386,
482,
431
],
"page_idx": 6
},
{
"type": "text",
"text": "Enhanced Detection Robustness. According to Tab. 1, our proposed methods consistently achieve top-tier detection performance across various attacks. A key observation is that non-semantic watermarking methods [12, 22, 52] exhibit significant degradation under regeneration attacks, highlighting their vulnerability. Among Gaussian radius-based pattern methods (Tree-Ring, Zodiac, and HSTR), HSTR outperforms corresponding baselines in most cases, demonstrating its effectiveness in enhancing detection robustness while maintaining the fundamental tree-ring structure. In contrast, RingID leverages high-energy signed constant ring patterns, securing strong detection performance. However, as acknowledged by the authors, this method introduces noticeable ring-like artifacts in generated images, disrupting the balance between watermark robustness and generative quality. A detailed quantitative evaluation of this trade-off is presented in the following section. Meanwhile, Zodiac suffers from time-consuming processing, taking several minutes to apply the watermark—requiring 7.36 minutes per image on MS-COCO dataset—which further limits its practicality in real-world applications. In contrast, our methods following the merged-in-generation scheme introduce no additional processing time, ensuring seamless watermark embedding.",
"bbox": [
89,
431,
482,
792
],
"page_idx": 6
},
{
"type": "text",
"text": "In Tab. 2, we compare identification results, which reveal an even greater disparity in detection performance across different watermarking methods. Gaussian radius-based pattern methods, except for HSTR, perform poorly in identification tasks, with Zodiac—designed solely for verification—failing entirely across all scenarios (achieving zero identification accuracy). Further analysis on scalability",
"bbox": [
89,
795,
482,
901
],
"page_idx": 6
},
{
"type": "text",
"text": "under different message capacities is provided in Sec. 5.3.3. Notably, our methods exhibit strong resilience against cropping attacks in both tasks, reinforcing the robustness of center-aware embedding. HSQR achieves state-of-the-art identification accuracy across all datasets, further establishing its dominance in watermark retrieval.",
"bbox": [
511,
90,
903,
180
],
"page_idx": 6
},
{
"type": "text",
"text": "Balance with Generative Quality. Tab. 3 presents the generative quality of different semantic watermarking methods following the merged-in-generation scheme, evaluated using FID and CLIP score. RingID, which deviates from a Gaussian distribution by embedding high-energy perturbations, exhibits the worst generative performance, particularly reflected in its low CLIP score. This decline is attributed to the noticeable ring-like artifacts, as discussed earlier. In contrast, our proposed methods, which preserve frequency integrity via SFW, achieve the top two FID scores, demonstrating a better balance between watermark robustness and generative quality.",
"bbox": [
511,
181,
903,
363
],
"page_idx": 6
},
{
"type": "text",
"text": "Expanding Frequency Utilization in Latent Watermarking. Traditional image-domain watermarking methods prioritize low-mid frequency embedding to resist compression and filtering attacks. This approach has also been adopted in latent diffusion-based semantic watermarking methods [11, 48, 53] without explicitly considering the differences between pixel-space and latent-space transformations. However, our findings suggest that such frequency constraints may not be necessary in the latent space, where watermark retrieval involves latent encoding and DDIM inversion, altering the impact of frequency perturbations. This distinction is evident in HSQR, which achieves state-of-the-art detection performance while utilizing nearly the entire frequency spectrum. Instead of relying on specific frequency bands, HSQR leverages structured statistical encoding, distributing watermark information across multiple pixels ( $2 \\times 2$ per QR cell). This redundancy enhances robustness against distortions, enabling effective retrieval even with broader frequency usage. These results indicate that latent-space watermarking is not bounded by traditional low-mid frequency constraints. Future semantic watermarking strategies can benefit from statistically robust encoding methods, allowing for effective watermark retrieval across a wider frequency range without compromising detection accuracy.",
"bbox": [
511,
364,
903,
741
],
"page_idx": 6
},
{
"type": "text",
"text": "5.3. Ablation Study",
"text_level": 1,
"bbox": [
511,
753,
666,
768
],
"page_idx": 6
},
{
"type": "text",
"text": "5.3.1. Impact of SFW on Detection Performance",
"text_level": 1,
"bbox": [
511,
775,
851,
789
],
"page_idx": 6
},
{
"type": "text",
"text": "We evaluate the impact of Hermitian SFW by comparing detection performance across four cases (Tab. 4). Without SFW, using both real and imaginary components for detection (Case A) results in the lowest performance due to frequency degradation, while restricting detection to the real component (Case B) improves accuracy. In contrast, applying SFW (Cases C and D) preserves frequency integrity,",
"bbox": [
511,
794,
903,
902
],
"page_idx": 6
},
{
"type": "page_number",
"text": "18765",
"bbox": [
480,
944,
517,
955
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/108a8b14af9ed889422ea4a061157f4d42af96d64a4cc11791fc617421af6b56.jpg",
"table_caption": [
"Table 4. Ablation study on detection performance based on frequency integrity ( $\\sqrt{}$ or $\\times$ ) and the number of detection region usage (1: real only, 2: real & imaginary). Vrf. and Idf. represent average detection performance in verification (TPR@1% FPR) and identification (accuracy), respectively. $\\Delta L_1^*$ denotes the normalized $\\Delta L_1$ metric, indicating detection effectiveness."
],
"table_footnote": [],
"table_body": "<table><tr><td>Case</td><td>Methods</td><td>Freq. Int.</td><td># Det.</td><td>Vrf.</td><td>Idf.</td><td>ΔL1*↑</td></tr><tr><td>A</td><td>Tree-Ring</td><td>×</td><td>2</td><td>0.653</td><td>0.114</td><td>0.232</td></tr><tr><td>B</td><td>Tree-Ring</td><td>×</td><td>1</td><td>0.805</td><td>0.416</td><td>0.368</td></tr><tr><td>C</td><td>HSTR (ours)</td><td>✓</td><td>1</td><td>0.936</td><td>0.775</td><td>0.471</td></tr><tr><td>D</td><td>HSTR (ours)</td><td>✓</td><td>2</td><td>0.971</td><td>0.889</td><td>0.476</td></tr></table>",
"bbox": [
93,
183,
478,
252
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/56d75d67a6ba238f9798ac25f3ba8dde7f8e62510bcad250ae2eb3872e8feb0a.jpg",
"image_caption": [
"Figure 5. Identification accuracy under center crop and random crop attacks at different crop scales. HSTR and HSQR maintain higher accuracy compared to RingID, demonstrating improved robustness against cropping."
],
"image_footnote": [],
"bbox": [
91,
268,
287,
415
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/3d4e78eb6f8b84fdb847e3363e8f009b0c427e3a6510d7addd8a6827ca628aae.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
287,
268,
480,
414
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/bf4440f4305343d37f581f8f612d9cbd603c4ac20fff19d24a148a5be3abc9c2.jpg",
"image_caption": [
"Figure 6. Identification accuracy across watermark message capacities. HSQR remains nearly perfect, while RingID and HSTR degrade at higher capacities. Tree-Ring and Zodiac fail to scale effectively."
],
"image_footnote": [],
"bbox": [
91,
500,
480,
696
],
"page_idx": 7
},
{
"type": "text",
"text": "leading to a significant boost in detection performance. Notably, HSTR with SFW and full complex detection (Case D) achieves the highest accuracy, demonstrating that leveraging both frequency components maximizes retrieval effectiveness. These results confirm that SFW enables robust semantic watermarking by maintaining frequency integrity and fully utilizing the Fourier domain for detection.",
"bbox": [
89,
794,
483,
902
],
"page_idx": 7
},
{
"type": "text",
"text": "5.3.2. Robustness to Cropping Attacks",
"text_level": 1,
"bbox": [
511,
90,
785,
107
],
"page_idx": 7
},
{
"type": "text",
"text": "We assess the robustness of center-aware embedding against center crop and random crop attacks, measuring identification accuracy across different crop scales (Fig. 5). As crop scale decreases, accuracy declines for all methods. However, RingID exhibits a steeper drop, indicating higher vulnerability to cropping. In contrast, HSTR and HSQR degrade more gradually, demonstrating improved resilience. While extreme cropping (scale 0.2) significantly affects all methods, our center-aware design consistently outperforms RingID, confirming its effectiveness in preserving watermark information under cropping distortions.",
"bbox": [
511,
109,
906,
276
],
"page_idx": 7
},
{
"type": "text",
"text": "5.3.3. Impact of Capacity on Identification",
"text_level": 1,
"bbox": [
511,
282,
812,
299
],
"page_idx": 7
},
{
"type": "text",
"text": "We evaluate identification accuracy across different watermark message capacities (64 to 8,192) for five methods (Fig. 6). Zodiac's performance collapses, while Tree-Ring exhibits rapid performance degradation, becoming nearly unusable at higher capacities. RingID, HSTR, and HSQR remain robust, maintaining $80\\%+$ accuracy, though HSTR declines faster, and RingID starts degrading from 2,048 onward. HSQR demonstrates the highest scalability, retaining near-perfect accuracy even at the largest capacity, confirming its superior robustness in high-capacity scenarios.",
"bbox": [
511,
301,
906,
455
],
"page_idx": 7
},
{
"type": "text",
"text": "6. Conclusion",
"text_level": 1,
"bbox": [
511,
467,
633,
483
],
"page_idx": 7
},
{
"type": "text",
"text": "We have introduced Hermitian SFW, a novel approach to semantic watermarking in the latent diffusion model framework. Unlike existing methods that fail to preserve frequency integrity, our approach ensures that watermark embeddings maintain a consistent statistical structure in the latent noise distribution. This is achieved through Hermitian symmetry enforcement, which preserves frequency components and enhances detection and generation quality. Additionally, we have proposed center-aware embedding, which significantly improves robustness against cropping attacks by strategically placing watermarks in a spatially resilient region of the latent representation. Through comprehensive experiments, we demonstrated that our method achieves state-of-the-art detection accuracy in both verification and identification tasks while also maintaining superior image generation quality, as shown by FID and CLIP scores. Our study highlights the importance of frequency integrity in Fourier-based watermarking and challenges the assumption that semantic watermarking must be confined to low-mid frequency bands. Experimental results confirm that a properly structured frequency-domain watermark can be effectively embedded and retrieved across the entire frequency spectrum without compromising generative quality. Future work includes exploring adaptive embedding strategies to further enhance robustness against adversarial attacks and extreme distortions, as well as extending our method to more diverse generative architectures beyond LDMs.",
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
511,
|
| 1239 |
+
492,
|
| 1240 |
+
908,
|
| 1241 |
+
901
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 7
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "page_number",
|
| 1247 |
+
"text": "18766",
|
| 1248 |
+
"bbox": [
|
| 1249 |
+
480,
|
| 1250 |
+
944,
|
| 1251 |
+
519,
|
| 1252 |
+
957
|
| 1253 |
+
],
|
| 1254 |
+
"page_idx": 7
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "text",
|
| 1258 |
+
"text": "Acknowledgements",
|
| 1259 |
+
"text_level": 1,
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
91,
|
| 1262 |
+
90,
|
| 1263 |
+
258,
|
| 1264 |
+
107
|
| 1265 |
+
],
|
| 1266 |
+
"page_idx": 8
|
| 1267 |
+
},
|
| 1268 |
+
{
|
| 1269 |
+
"type": "text",
|
| 1270 |
+
"text": "This research was supported by the 2024 AI Semiconductor Application/Demonstration Support Program of the Ministry of Science and ICT and the National IT Industry Promotion Agency (NIPA) with Markany Co., Ltd. as the lead organization, and in part by the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2025.",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
89,
|
| 1273 |
+
113,
|
| 1274 |
+
485,
|
| 1275 |
+
212
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 8
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "text",
|
| 1281 |
+
"text": "References",
|
| 1282 |
+
"text_level": 1,
|
| 1283 |
+
"bbox": [
|
| 1284 |
+
91,
|
| 1285 |
+
237,
|
| 1286 |
+
187,
|
| 1287 |
+
252
|
| 1288 |
+
],
|
| 1289 |
+
"page_idx": 8
|
| 1290 |
+
},
|
| 1291 |
+
{
|
| 1292 |
+
"type": "list",
|
| 1293 |
+
"sub_type": "ref_text",
|
| 1294 |
+
"list_items": [
|
| 1295 |
+
"[1] Mahdi Ahmadi, Alireza Norouzi, Nader Karimi, Shadrokh Samavi, and Ali Emami. Redmark: Framework for residual diffusion watermarking based on deep networks. Expert Systems with Applications, 146:113157, 2020. 1",
|
| 1296 |
+
"[2] Dhruv Arya. A survey of frequency and wavelet domain digital watermarking techniques. 2",
|
| 1297 |
+
"[3] Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, and Shruti Agarwal. Promark: Proactive diffusion watermarking for causal attribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10802-10811, 2024. 1, 2",
|
| 1298 |
+
"[4] Johannes Balle, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018. 6",
|
| 1299 |
+
"[5] Mauro Barni, Franco Bartolini, Vito Cappellini, and Alessandro Piva. A dct-domain system for robust image watermarking. Signal processing, 66(3):357-372, 1998. 1, 2",
|
| 1300 |
+
"[6] Mauro Barni, Franco Bartolini, and Alessandro Piva. Improved wavelet-based watermarking through pixel-wise masking. IEEE transactions on image processing, 10(5): 783-791, 2001. 1, 2",
|
| 1301 |
+
"[7] Tu Bui, Shruti Agarwal, Ning Yu, and John Collomosse. Rosteals: Robust steganography using autoencoder latent space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 933-942, 2023. 1, 2",
|
| 1302 |
+
"[8] Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto. Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7939-7948, 2020. 6",
|
| 1303 |
+
"[9] Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2818-2829, 2023. 7",
|
| 1304 |
+
"[10] Hai Ci, Yiren Song, Pei Yang, Jinheng Xie, and Mike Zheng Shou. Wmadapter: Adding watermark control to latent diffusion models. arXiv preprint arXiv:2406.08337, 2024. 1, 2",
|
| 1305 |
+
"[11] Hai Ci, Pei Yang, Yiren Song, and Mike Zheng Shou. Ringid: Rethinking tree-ring watermarking for enhanced multi-key identification. arXiv preprint arXiv:2404.14055, 2024. 2, 3, 4, 5, 7"
|
| 1306 |
+
],
|
| 1307 |
+
"bbox": [
|
| 1308 |
+
93,
|
| 1309 |
+
261,
|
| 1310 |
+
483,
|
| 1311 |
+
898
|
| 1312 |
+
],
|
| 1313 |
+
"page_idx": 8
|
| 1314 |
+
},
|
| 1315 |
+
{
|
| 1316 |
+
"type": "list",
|
| 1317 |
+
"sub_type": "ref_text",
|
| 1318 |
+
"list_items": [
|
| 1319 |
+
"[12] Ingemar Cox, Matthew Miller, Jeffrey Bloom, Jessica Fridrich, and Ton Kalker. Digital watermarking and steganography. Morgan Kaufmann, 2007. 1, 2, 6, 7",
|
| 1320 |
+
"[13] Ingemar J Cox, Joe Kilian, F Thomson Leighton, and Talal Shamoon. Secure spread spectrum watermarking for multimedia. IEEE TIP, 6(12):1673-1687, 1997. 1, 2",
|
| 1321 |
+
"[14] Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, Yue Xing, and Jiliang Tang. Diffusionshield: A watermark for data copyright protection against generative diffusion models. ACM SIGKDD Explorations Newsletter, 26(2): 60-75, 2025. 2",
|
| 1322 |
+
"[15] DEJEY and RS Rajesh. An improved wavelet domain digital watermarking for image protection. International journal of wavelets, multiresolution and information processing, 8(01): 19-31, 2010. 1, 2",
|
| 1323 |
+
"[16] Denso Wave Incorporated. QR Code Essentials. Denso Wave, 2011. Available at: https://www.qrcode.com/en/about/standards.html.4",
|
| 1324 |
+
"[17] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1",
|
| 1325 |
+
"[18] Han Fang, Dongdong Chen, Qidong Huang, Jie Zhang, Zehua Ma, Weiming Zhang, and Nenghai Yu. Deep template-based watermarking. IEEE Transactions on Circuits and Systems for Video Technology, 31(4):1436-1451, 2020. 1",
|
| 1326 |
+
"[19] Han Fang, Zhaoyang Jia, Yupeng Qiu, Jiyi Zhang, Weiming Zhang, and Ee-Chien Chang. De-end: decoder-driven watermarking network. IEEE Transactions on Multimedia, 25: 7571-7581, 2022. 1",
|
| 1327 |
+
"[20] Weitao Feng, Wenbo Zhou, Jiyan He, Jie Zhang, Tianyi Wei, Guanlin Li, Tianwei Zhang, Weiming Zhang, and Nenghai Yu. Aqualora: Toward white-box protection for customized stable diffusion models via watermark lora. arXiv preprint arXiv:2405.11135, 2024. 1, 2",
|
| 1328 |
+
"[21] Pierre Fernandez, Alexandre Sablayrolles, Teddy Furon, Hervé Jégou, and Matthijs Douze. Watermarking images in self-supervised latent spaces. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3054-3058. IEEE, 2022. 1",
|
| 1329 |
+
"[22] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023. 1, 2, 6, 7",
|
| 1330 |
+
"[23] Yiyang Guo, Ruizhe Li, Mude Hui, Hanzhong Guo, Chen Zhang, Chuangjian Cai, Le Wan, and Shangfei Wang. Freqmark: Invisible image watermarking via frequency based optimization in latent space. arXiv preprint arXiv:2410.20824, 2024. 2",
|
| 1331 |
+
"[24] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6",
|
| 1332 |
+
"[25] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 1"
|
| 1333 |
+
],
|
| 1334 |
+
"bbox": [
|
| 1335 |
+
516,
|
| 1336 |
+
92,
|
| 1337 |
+
906,
|
| 1338 |
+
901
|
| 1339 |
+
],
|
| 1340 |
+
"page_idx": 8
|
| 1341 |
+
},
|
| 1342 |
+
{
|
| 1343 |
+
"type": "page_number",
|
| 1344 |
+
"text": "18767",
|
| 1345 |
+
"bbox": [
|
| 1346 |
+
480,
|
| 1347 |
+
944,
|
| 1348 |
+
517,
|
| 1349 |
+
955
|
| 1350 |
+
],
|
| 1351 |
+
"page_idx": 8
|
| 1352 |
+
},
|
| 1353 |
+
{
|
| 1354 |
+
"type": "list",
|
| 1355 |
+
"sub_type": "ref_text",
|
| 1356 |
+
"list_items": [
|
| 1357 |
+
"[26] Jae-Eun Lee, Young-Ho Seo, and Dong-Wook Kim. Convolutional neural network-based digital image watermarking adaptive to the resolution of image and watermark. Applied Sciences, 10(19):6854, 2020. 1",
|
| 1358 |
+
"[27] Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu. Diffusetrace: A transparent and flexible watermarking scheme for latent diffusion model. arXiv preprint arXiv:2405.02696, 2024. 1, 2",
|
| 1359 |
+
"[28] Chunlei Li, Zhaoxiang Zhang, Yunhong Wang, Bin Ma, and Di Huang. Dither modulation of significant amplitude difference for wavelet based robust watermarking. Neurocomputing, 166:404-415, 2015. 1, 2",
|
| 1360 |
+
"[29] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dálár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 6",
|
| 1361 |
+
"[30] Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, and Peyman Milanfar. Distortion agnostic deep watermarking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13548-13557, 2020. 1",
|
| 1362 |
+
"[31] Zheling Meng, Bo Peng, and Jing Dong. Latent watermark: Inject and detect watermarks in latent diffusion space. arXiv preprint arXiv:2404.00230, 2024. 1, 2",
|
| 1363 |
+
"[32] A Miyazaki and A Okamoto. Analysis of watermarking systems in the frequency domain and its application to design of robust watermarking systems. In Proceedings 2001 International Conference on Image Processing (Cat. No. 01CH37205), pages 506-509. IEEE, 2001. 2",
|
| 1364 |
+
"[33] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021. 1",
|
| 1365 |
+
"[34] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6",
|
| 1366 |
+
"[35] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 1",
|
| 1367 |
+
"[36] Ahmad Rezaei, Mohammad Akbari, Saeed Ranjbar Alvar, Arezou Fatemi, and Yong Zhang. Lawa: Using latent space for in-generation image watermarking. arXiv preprint arXiv:2408.05868, 2024. 1, 2",
|
| 1368 |
+
"[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 6",
|
| 1369 |
+
"[38] Joseph JK Ō Ruanaidh and Thierry Pun. Rotation, scale and translation invariant spread spectrum digital image watermarking. Signal processing, 66(3):303-317, 1998. 1, 2"
|
| 1370 |
+
],
|
| 1371 |
+
"bbox": [
|
| 1372 |
+
91,
|
| 1373 |
+
90,
|
| 1374 |
+
485,
|
| 1375 |
+
900
|
| 1376 |
+
],
|
| 1377 |
+
"page_idx": 9
|
| 1378 |
+
},
|
| 1379 |
+
{
|
| 1380 |
+
"type": "list",
|
| 1381 |
+
"sub_type": "ref_text",
|
| 1382 |
+
"list_items": [
|
| 1383 |
+
"[39] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022. 1",
|
| 1384 |
+
"[40] Gustavo Santana. Gustavosta: Stable-diffusion-prompts. https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts, 2022.6",
|
| 1385 |
+
"[41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 1, 3, 6",
|
| 1386 |
+
"[42] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2117-2126, 2020. 1",
|
| 1387 |
+
"[43] Jun Tian. Reversible data embedding using a difference expansion. IEEE transactions on circuits and systems for video technology, 13(8):890-896, 2003. 1, 2",
|
| 1388 |
+
"[44] Ron G Van Schyndel, Andrew Z Tirkel, and Charles F Osborne. A digital watermark. In Proceedings of 1st international conference on image processing, pages 86-90. IEEE, 1994. 1, 2",
|
| 1389 |
+
"[45] Wenbo Wan, Jun Wang, Yunming Zhang, Jing Li, Hui Yu, and Jiande Sun. A comprehensive survey on robust image watermarking. Neurocomputing, 488:226-247, 2022. 2",
|
| 1390 |
+
"[46] Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusional: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896, 2022. 6",
|
| 1391 |
+
"[47] Xiu-mei Wen, Wei Zhao, and Fan-xing Meng. Research of a digital image watermarking algorithm resisting geometrical attacks in fourier domain. In 2009 International Conference on Computational Intelligence and Security, pages 265-268. IEEE, 2009. 2",
|
| 1392 |
+
"[48] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-rings watermarks: Invisible fingerprints for diffusion images. Advances in Neural Information Processing Systems, 36, 2024. 2, 3, 4, 5, 7",
|
| 1393 |
+
"[49] Raymond B Wolfgang, Christine I Podilchuk, and Edward J Delp. Perceptual watermarks for digital images and video. Proceedings of the IEEE, 87(7):1108-1126, 1999. 1, 2",
|
| 1394 |
+
"[50] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12162-12171, 2024. 2",
|
| 1395 |
+
"[51] Guokai Zhang, Lanjun Wang, Yuting Su, and An-An Liu. A training-free plug-and-play watermark framework for stable diffusion. arXiv preprint arXiv:2404.05607, 2024. 1, 2",
|
| 1396 |
+
"[52] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019. 1, 6, 7"
|
| 1397 |
+
],
|
| 1398 |
+
"bbox": [
|
| 1399 |
+
516,
|
| 1400 |
+
90,
|
| 1401 |
+
905,
|
| 1402 |
+
898
|
| 1403 |
+
],
|
| 1404 |
+
"page_idx": 9
|
| 1405 |
+
},
|
| 1406 |
+
{
|
| 1407 |
+
"type": "page_number",
|
| 1408 |
+
"text": "18768",
|
| 1409 |
+
"bbox": [
|
| 1410 |
+
480,
|
| 1411 |
+
944,
|
| 1412 |
+
519,
|
| 1413 |
+
955
|
| 1414 |
+
],
|
| 1415 |
+
"page_idx": 9
|
| 1416 |
+
},
|
| 1417 |
+
{
|
| 1418 |
+
"type": "list",
|
| 1419 |
+
"sub_type": "ref_text",
|
| 1420 |
+
"list_items": [
|
| 1421 |
+
"[53] Lijun Zhang, Xiao Liu, Antoni Martin, Cindy Bearfield, Yuriy Brun, and Hui Guan. Attack-resilient image watermarking using stable diffusion. Advances in Neural Information Processing Systems, 37:38480-38507, 2025. 2, 3, 4, 6, 7",
|
| 1422 |
+
"[54] Xuanyu Zhang, Runyi Li, Jiwen Yu, Youmin Xu, Weiqi Li, and Jian Zhang. Editguard: Versatile image watermarking for tamper localization and copyright protection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11964-11974, 2024. 1, 2",
|
| 1423 |
+
"[55] Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. 2023. 2",
|
| 1424 |
+
"[56] Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. Invisible image watermarks are provably removable using generative ai. Advances in Neural Information Processing Systems, 37:8643-8672, 2025. 2, 6",
|
| 1425 |
+
"[57] Yimeng Zhao, Chengyou Wang, Xiao Zhou, and Zhiliang Qin. Dari-mark: Deep learning and attention network for robust image watermarking. Mathematics, 11(1):209, 2022. 1",
|
| 1426 |
+
"[58] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657–672, 2018. 1"
|
| 1427 |
+
],
|
| 1428 |
+
"bbox": [
|
| 1429 |
+
91,
|
| 1430 |
+
90,
|
| 1431 |
+
480,
|
| 1432 |
+
457
|
| 1433 |
+
],
|
| 1434 |
+
"page_idx": 10
|
| 1435 |
+
},
|
| 1436 |
+
{
|
| 1437 |
+
"type": "page_number",
|
| 1438 |
+
"text": "18769",
|
| 1439 |
+
"bbox": [
|
| 1440 |
+
480,
|
| 1441 |
+
944,
|
| 1442 |
+
517,
|
| 1443 |
+
955
|
| 1444 |
+
],
|
| 1445 |
+
"page_idx": 10
|
| 1446 |
+
}
|
| 1447 |
+
]
|
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/4150a989-5307-4ff1-8e86-ca9b050e7e76_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:295e3144d32e528fcfacab627682f7aa04278d5cc748edd8c2dc3b19c5abaf6f
size 3168638
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/full.md
ADDED
@@ -0,0 +1,288 @@
# Semantic Watermarking Reinvented: Enhancing Robustness and Generation Quality with Fourier Integrity

Sung Ju Lee, Nam Ik Cho

Dept. of ECE & INMC, Seoul National University, Korea

thomas11809@snu.ac.kr, nicho@snu.ac.kr

# Abstract

Semantic watermarking techniques for latent diffusion models (LDMs) are robust against regeneration attacks but often suffer degraded detection performance due to the loss of frequency integrity. To tackle this problem, we propose a novel embedding method called Hermitian Symmetric Fourier Watermarking (SFW), which maintains frequency integrity by enforcing Hermitian symmetry. Additionally, we introduce a center-aware embedding strategy that reduces the vulnerability of semantic watermarking to cropping attacks by ensuring robust information retention. To validate our approach, we apply these techniques to existing semantic watermarking schemes, enhancing their frequency-domain structures for better robustness and retrieval accuracy. Extensive experiments demonstrate that our methods achieve state-of-the-art verification and identification performance, surpassing previous approaches across various attack scenarios. Ablation studies confirm the impact of SFW on detection capabilities, the effectiveness of center-aware embedding against cropping, and how message capacity influences identification accuracy. Notably, our method achieves the highest detection accuracy while maintaining superior image fidelity, as evidenced by FID and CLIP scores. In conclusion, the proposed SFW is shown to be an effective framework for balancing robustness and image fidelity, addressing the inherent trade-offs in semantic watermarking. Code is available at github.com/thomas11809/SFWMark.

# 1. Introduction

With the advancement of diffusion generative models [17, 25, 33, 35, 39, 41], their generated images are being increasingly used across various fields, including creative works, entertainment, and advertisement. In particular, the open-source release of large-scale language-image (LLI) models like Stable Diffusion [37] has led to an exponential growth in script-based image generation and editing technologies, resulting in a surge of generative content. This has raised new concerns, such as the copyright of AI-generated content and the tracking of images created with malicious intent. As a solution to these problems, techniques for embedding invisible watermarks into content have been proposed.

Figure 1. Summary of watermarking performance across different semantic watermarking methods, as detailed in Sec. 5.2. All methods follow the merged-in-generation scheme with no additional processing time. Verification is evaluated using TPR@1%FPR (True Positive Rate at $1\%$ False Positive Rate), while identification is assessed by Identification Accuracy (Perfect Match Rate). The proposed approaches achieve the best balance between detection robustness and image fidelity.

Digital content watermarking has been extensively studied through both classical signal processing techniques [5, 6, 12, 13, 15, 28, 38, 43, 44, 49] and deep learning-based approaches [1, 18, 19, 21, 26, 30, 42, 52, 57, 58]. Recently, watermarking techniques that directly intervene in the generation process of diffusion models [3, 7, 10, 20, 22, 27, 31, 36, 51, 54] have been explored; however, these approaches require additional setup or configuration, adding complexity.

Several studies have explored embedding watermarks directly into the latent representation, eliminating the need for external models. These include a steganographic approach that constructs latent noise with stream cipher-randomized bits [50] and semantic watermarking methods that embed geometric patterns in the Fourier frequency domain of the latent representation [11, 48, 53]. Meanwhile, research on pixel-level perturbation-based watermarking has highlighted its vulnerability to regeneration attacks [55, 56], demonstrating that semantic watermarking serves as a more robust alternative against such attacks. However, the aforementioned semantic watermarking methods in the latent Fourier domain, which are the focus of this paper, suffer from degraded detection accuracy and generative quality because they do not preserve frequency integrity.

In this context, we propose a novel semantic watermarking framework that enhances both detection robustness and image quality by preserving frequency integrity in the latent Fourier domain. Our method, named Hermitian Symmetric Fourier Watermarking (SFW), ensures that watermark embeddings maintain statistical consistency with the latent noise distribution, leading to improved retrievability and stability in generative models. Additionally, we incorporate a center-aware embedding strategy that enhances robustness against cropping attacks by embedding watermarks in a spatially resilient region of the latent representation. We comprehensively evaluate our method across various attack scenarios, including signal processing distortions, regeneration attacks, and cropping attacks. Experimental results, as illustrated in Fig. 1, demonstrate that our method achieves state-of-the-art detection accuracy in both verification and identification tasks while simultaneously maintaining superior generative quality, as evidenced by FID and CLIP score evaluations. Our contributions can be summarized as follows:

- We propose Hermitian SFW to ensure frequency integrity, leading to improved watermark detection and generative quality.
- We introduce center-aware embedding, which significantly enhances robustness against cropping attacks.
- We present extensive evaluations demonstrating that our approach outperforms existing baselines in detection accuracy and generative quality.

# 2. Related Works

Digital Watermarking. Digital watermarking aims to achieve a balance between high embedding capacity and visual quality when inserting invisible watermarks, while also ensuring robustness against various attacks [45]. This field originally began with the adoption of classical signal processing techniques. Embedding the watermark in the spatial domain is straightforward and intuitive [43, 44], but it tends to be less robust against attacks such as filtering or compression. On the other hand, studies utilizing the frequency domain have demonstrated resilience against JPEG compression by embedding watermarks in the low and middle frequency bands [2, 5, 6, 12, 13, 15, 28, 32, 38, 47, 49].

Figure 2. Overview of the semantic watermarking process in the latent Fourier domain using the merged-in-generation scheme.

Watermarks for Latent Diffusion Models. Latent diffusion models (LDMs), such as Stable Diffusion [37], conduct the diffusion process in a lower-dimensional latent space rather than directly on high-resolution images. Recent research has focused on techniques for integrating watermarks into the latent diffusion process to ensure traceability and robustness. However, many of these approaches rely on model fine-tuning [3, 10, 20, 22, 27] or require separately trained encoders and decoders [7, 10, 14, 20, 22, 23, 27, 31, 36, 51, 54] to facilitate watermark embedding and detection. These dependencies constrain the flexibility of watermarking, limiting their applicability across diverse models.

Several studies have explored embedding semantic watermarks in the Fourier domain of latent vectors using a merged-in-generation scheme. Wen et al. [48] introduced a tree-ring-shaped watermark constructed with Gaussian-distributed radii, while Ci et al. [11] proposed a modified pattern with high-energy signed constant rings for watermark identification, along with an additional random noise key embedded in a separate channel. On the other hand, Zodiac [53] followed a post-hoc approach, embedding a tree-ring pattern into generated images through multiple iterations of latent vector optimization and diffusion-based synthesis. However, in addition to its high computational cost, this method suffers from low practicality, as it relies on extensive linear interpolation between the original and optimized images to artificially improve visual quality.

In these methods, when performing the inverse Fourier transform, the imaginary component in the spatial domain is discarded, leading to a distorted frequency representation. As a result, the real component of the key region retains only partial information, while the imaginary component is almost entirely lost, creating an empty key region in the frequency domain. Since detection relies on analyzing this incomplete key region, the process is inherently limited, resulting in suboptimal retrieval performance.

Figure 3. Examples of various semantic watermarking patterns.

Fig. 2 illustrates the overall pipeline of semantic watermarking with the merged-in-generation scheme. The embedding process begins with the Fourier transform applied to the latent noise, generating the latent Fourier representation. A watermark key is then embedded into a designated key region, followed by the inverse Fourier transform, producing the watermarked latent vector. Finally, text-guided image generation synthesizes the watermarked image. For detection, the process starts with a clean or attacked image, from which the latent query is obtained via DDIM inversion [41]. The presence of a watermark is determined by analyzing the query key region in the latent representation. Further details on detection tasks and evaluation metrics are provided in Sec. 3.1.

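The embedding half of this pipeline can be sketched in a few lines of NumPy. The circular key region, constant key value, and single-channel $64\times 64$ latent below are illustrative stand-ins rather than the actual watermark patterns or latent shapes; taking only the real part after the inverse transform mirrors what the baseline methods do:

```python
import numpy as np

def embed_in_fourier(latent, key_value, radius=10):
    """Write a constant key into a circular region around the DC center of
    the latent channel's Fourier spectrum, then invert back to the spatial
    domain. Returning only the real part reproduces the baselines' behavior
    of discarding the imaginary residue."""
    M, N = latent.shape
    spectrum = np.fft.fftshift(np.fft.fft2(latent))  # move DC to the center

    # Circular key region around the DC center (tree-ring-style placement).
    yy, xx = np.ogrid[:M, :N]
    key_region = (yy - M // 2) ** 2 + (xx - N // 2) ** 2 <= radius ** 2
    spectrum[key_region] = key_value

    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))   # one channel of initial latent noise
marked = embed_in_fourier(noise, key_value=50.0)
```

Detection would reverse this path: recover the latent query via DDIM inversion, re-apply `fftshift(fft2(...))`, and compare the key region against the reference key.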
# 3. Preliminaries

# 3.1. Task Formulation

The performance of watermark detection is evaluated following RingID's formulation for watermark verification and identification tasks [11]. The metric used to measure the distance $d$ is the $L_{1}$ distance, calculated specifically for the key region where the watermark is embedded.

Verification. The objective of verification is to determine whether a watermark is present in an image by analyzing the distance between the reference key and the key region of the query in the latent Fourier domain. Let $\hat{w}$ denote the watermarked latent Fourier key, and $\hat{u}$ denote the unwatermarked latent Fourier null key. Verification is based on comparing the distances between the reference key $w$ and the watermarked/unwatermarked keys, i.e., $d(\hat{w}, w) \neq d(\hat{u}, w)$ . Performance is assessed using statistical metrics derived from the ROC curve, considering different distance thresholds.
Identification. In identification, given that a watermark is already embedded, the task is to accurately determine the embedded information. This is achieved by computing the distance between the watermarked key $\hat{w}$ and multiple reference keys $w_{i}$. Performance is evaluated based on the accuracy of the estimated message index, defined as $\hat{i} = \arg \min_{i} d(\hat{w}, w_{i})$.
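A minimal sketch of this identification rule, assuming toy Gaussian reference keys (shapes and noise level are illustrative):

```python
import numpy as np

def identify(query_key, ref_keys):
    # i_hat = argmin_i d(w_hat, w_i), with d the L1 distance over the key region.
    dists = [np.abs(query_key - w).sum() for w in ref_keys]
    return int(np.argmin(dists))

rng = np.random.default_rng(7)
refs = [rng.standard_normal((16, 16)) for _ in range(32)]   # 32 candidate keys
query = refs[12] + 0.1 * rng.standard_normal((16, 16))      # mildly perturbed copy
print(identify(query, refs))   # recovers index 12
```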
# 3.2. Fourier Considerations for Real Latent Noise
This section introduces the mathematical properties of the Fourier domain and highlights important considerations when embedding semantic watermarks.
Hermitian Symmetry for Real Signals. To obtain a real-valued signal after inverse Fourier transform, modifications in the frequency domain must maintain Hermitian symmetry about the DC center:
$$
F[M-k, N-l] = \overline{F[k, l]}, \tag{1}
$$
where $F$ denotes the discrete Fourier transform, and $M$ and $N$ denote the dimensions of the signal in the spatial domain along the row and column axes, respectively. Failure to preserve this symmetry introduces undesired imaginary components in the spatial domain, which are incompatible with the real-valued latent noise required for diffusion models.
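Eq. (1) can be checked numerically with NumPy's FFT (indices are taken modulo the signal dimensions, following the `numpy.fft` convention):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))   # real-valued spatial signal
F = np.fft.fft2(f)
M, N = F.shape

# The DFT of a real signal satisfies F[M-k, N-l] == conj(F[k, l])
# (indices modulo M and N).
k, l = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
assert np.allclose(F[(M - k) % M, (N - l) % N], np.conj(F))

# Editing a single bin without its conjugate mirror breaks the symmetry,
# and the inverse transform is no longer real-valued.
F_broken = F.copy()
F_broken[1, 2] += 10.0j
f_back = np.fft.ifft2(F_broken)
print(np.abs(f_back.imag).max() > 1e-6)   # True
```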
Impact of Ignoring Hermitian Symmetry. In existing baselines [11, 48, 53], this issue is simply addressed by discarding the imaginary component, resulting in a loss of frequency integrity. As illustrated in Fig. 3a, this disruption affects both real and imaginary components of the original watermark pattern. While the real component deviates from its intended structure due to frequency distortion, the imaginary component suffers even greater degradation, often becoming entirely empty. This significantly limits detection performance by preventing the effective use of frequency information in the imaginary domain. The impact of the frequency loss is further examined in Sec. 5.3.1.
Preservation of Gaussianity. A real Gaussian noise signal in the spatial domain transforms into a complex Gaussian noise signal in the frequency domain while maintaining its statistical properties:
$$
f[m, n] \sim \mathcal{N}(0, \sigma^{2}) \;\Rightarrow\; F[k, l] \sim \mathcal{CN}(0, MN\sigma^{2}), \tag{2}
$$
Figure 4. Overview of the proposed framework and qualitative results. (Left) The key components of our approach: Symmetric Fourier Watermarking (SFW) (blue region) and Center-Aware Embedding Strategy (green region). (Right) Qualitative results of Tree-Ring, RingID, HSTR, and HSQR. Notably, RingID exhibits visible ring-like artifacts, highlighting that its high-energy pattern disrupts generative quality, unlike other merged-in-generation semantic watermarking methods that achieve a better balance between robustness and image fidelity.

where $f$ represents the real-valued Gaussian noise signal in the spatial domain. When embedding watermarks in the frequency domain, some perturbation is inevitably introduced, altering the original distribution of complex Gaussian noise. However, if Hermitian symmetry is not preserved, the inverse Fourier transform produces incomplete noise characteristics in the spatial domain, as imaginary components must be discarded. This disrupts the statistical consistency of the latent noise, leading to a deviation from the expected real Gaussian distribution and potential degradation in generative quality. In contrast, embedding with Hermitian symmetry better retains the statistical structure of the latent noise, ensuring that the transformed spatial-domain signal remains closer to a real Gaussian distribution. By preserving these statistical properties, the initialization process in diffusion models becomes more stable, ultimately enhancing generative performance, as observed in Sec. 5.2.
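A quick numerical check of Eq. (2), under the assumption of a unit-variance $64 \times 64$ latent:

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 64
sigma = 1.0
f = sigma * rng.standard_normal((M, N))   # real Gaussian latent noise
F = np.fft.fft2(f)

# Eq. (2): the spectrum is complex Gaussian with variance M*N*sigma^2 = 4096;
# real and imaginary parts each carry roughly half of it, so their standard
# deviation is about sqrt(64^2 / 2) ~ 45.25 -- the amplitude used for
# the encoding constant in Sec. 5.1.
print(F.var())        # empirically close to 4096
print(F.real.std())   # empirically close to 45.25
```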
# 4. Methods
# 4.1. Hermitian Symmetric Fourier Watermark
The proposed Hermitian SFW refers to a watermark designed to satisfy the Hermitian symmetry introduced in the previous section. As shown in the left side of Fig. 4, SFW allows free usage of the half-region in the frequency domain, while the remaining half is restricted by the symmetry condition. Certain constraints must be satisfied when embedding a pattern in the free half-region under a semantic watermarking scheme: the imaginary part at the DC center must be zero, and if the signal dimensions are even, the imaginary components of the points corresponding to the Nyquist frequency must also be zero. In cases where both frequency axes have even dimensions as in this study, the points where the imaginary part must be zero are $(0,0)$, $\left(\frac{M}{2},0\right)$, $\left(0,\frac{N}{2}\right)$, and $\left(\frac{M}{2},\frac{N}{2}\right)$. Instead of directly modifying the watermark pattern to comply with locations where the imaginary part must be zero, a more effective strategy is to embed it while avoiding the DC axis as much as possible, as will be introduced in Sec. 4.3.2.
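One way to obtain a spectrum satisfying these constraints is to project onto the Hermitian-symmetric subspace; the following NumPy sketch is illustrative and not the paper's exact embedding routine:

```python
import numpy as np

def hermitian_project(F):
    # Average each bin with the conjugate of its mirrored bin; the result
    # satisfies F[M-k, N-l] == conj(F[k, l]) exactly.
    M, N = F.shape
    k, l = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return 0.5 * (F + np.conj(F[(M - k) % M, (N - l) % N]))

rng = np.random.default_rng(2)
F = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
F_sym = hermitian_project(F)

# The inverse transform is now (numerically) real: nothing is discarded.
assert np.abs(np.fft.ifft2(F_sym).imag).max() < 1e-12
# The DC and Nyquist bins (0,0), (M/2,0), (0,N/2), (M/2,N/2) are purely real.
for b in [(0, 0), (4, 0), (0, 4), (4, 4)]:
    assert abs(F_sym[b].imag) < 1e-12
```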
# 4.2. Center-Aware Embedding Strategy
Existing semantic watermarking baselines [11, 48, 53] embed patterns by applying the Fourier transform to the full spatial matrix of the latent vector. However, such methods are vulnerable to spatial attacks, particularly cropping, which can lead to the loss of watermark information and reduced detection performance. To address this issue, we propose a center-aware embedding strategy, which applies the Fourier transform only to the central area of the spatial domain before embedding. Specifically, for a latent vector with a signal dimension of 64, we utilize the central $44 \times 44$ space. As demonstrated in Sec. 5.3.2, this design significantly improves robustness against cropping attacks at various scales.
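The strategy can be sketched as follows; `center_aware_embed`, `wm_fn`, and the toy spectral edit are hypothetical names introduced here for illustration:

```python
import numpy as np

def center_aware_embed(latent, wm_fn, core=44):
    # Transform only the central core-by-core patch, apply a frequency-domain
    # edit `wm_fn` there, and write the result back into the latent.
    M, N = latent.shape
    r0, c0 = (M - core) // 2, (N - core) // 2
    patch = latent[r0:r0 + core, c0:c0 + core]
    F = np.fft.fftshift(np.fft.fft2(patch))   # centered spectrum of the core
    F = wm_fn(F)                              # watermark embedding goes here
    out = latent.copy()
    out[r0:r0 + core, c0:c0 + core] = np.fft.ifft2(np.fft.ifftshift(F)).real
    return out

rng = np.random.default_rng(3)
z = rng.standard_normal((64, 64))
z_wm = center_aware_embed(z, lambda F: 1.1 * F)   # toy spectral edit
# Everything outside the central 44x44 core is untouched by construction,
# which is what survives a center crop of the generated image.
assert np.allclose(z_wm[:10, :], z[:10, :])
assert not np.allclose(z_wm[10:54, 10:54], z[10:54, 10:54])
```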
# 4.3. Integrating SFW and Center-Aware Design
# 4.3.1. Application to the Baseline
As shown in Fig. 3a, the Tree-Ring [48] baseline suffers from frequency loss. To mitigate this, we impose the Hermitian symmetry condition directly on the watermark patterns. By integrating SFW with the center-aware embedding strategy, we refine the baseline into Hermitian Symmetric Tree-Ring (HSTR). HSTR not only enhances detection performance by fully utilizing the imaginary components of the latent Fourier domain but also improves image generation quality by restoring frequency integrity.
# 4.3.2. Hermitian Symmetric QR code
In this section, we introduce a novel approach that extends SFW beyond baselines to QR codes, which are widely recognized for their high versatility and robust error correction capabilities [16].
Embedding. As illustrated in Fig. 3b, the Hermitian Symmetric QR Code (HSQR) watermark is constructed by splitting the QR code in half and embedding each part separately into the real and imaginary components of the free half-region in the Fourier domain. The binary pattern of the QR code is embedded using the following formulation:
$$
\operatorname{HSQR}(\tilde{x}, c) = \begin{cases} +|F(\tilde{x}, c)|, & \text{if } \operatorname{QR}(x) = 1 \\ -|F(\tilde{x}, c)|, & \text{if } \operatorname{QR}(x) = 0, \end{cases} \tag{3}
$$
where:
Table 1. Verification performance of different watermarking methods under various attacks. Bit Accuracy is used for bitstream-based methods (DwtDct, DwtDctSvd, RivaGAN, S.Sign), while TPR@1%FPR is used for semantic methods. Best performances are highlighted. Our methods show superior detection accuracy and robustness against signal processing distortions, regeneration, and cropping attacks.
<table><tr><td rowspan="2">Datasets</td><td rowspan="2">Methods</td><td colspan="2">No Attack</td><td colspan="6">Signal Processing Attack</td><td colspan="3">Regeneration Attack</td><td colspan="2">Cropping Attack</td><td>Avg</td></tr><tr><td>Clean</td><td>Bright.</td><td>Cont.</td><td>JPEG</td><td>Blur</td><td>Noise</td><td>BM3D</td><td>VAE-B</td><td>VAE-C</td><td>Diff.</td><td>C.C.</td><td>R.C.</td><td></td><td></td></tr><tr><td rowspan="9">MS-COCO</td><td>DwtDct</td><td>0.863</td><td>0.572</td><td>0.522</td><td>0.516</td><td>0.677</td><td>0.859</td><td>0.532</td><td>0.523</td><td>0.521</td><td>0.519</td><td>0.729</td><td>0.810</td><td>0.637</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.555</td><td>0.473</td><td>0.602</td><td>1.000</td><td>1.000</td><td>0.784</td><td>0.648</td><td>0.596</td><td>0.644</td><td>0.744</td><td>0.861</td><td>0.742</td><td></td></tr><tr><td>RivaGAN</td><td>0.999</td><td>0.862</td><td>0.986</td><td>0.821</td><td>0.998</td><td>0.969</td><td>0.934</td><td>0.570</td><td>0.552</td><td>0.608</td><td>0.991</td><td>0.995</td><td>0.857</td><td></td></tr><tr><td>S.Sign.</td><td>0.995</td><td>0.894</td><td>0.978</td><td>0.806</td><td>0.911</td><td>0.721</td><td>0.838</td><td>0.717</td><td>0.715</td><td>0.478</td><td>0.987</td><td>0.991</td><td>0.836</td><td></td></tr><tr><td>Tree-Ring</td><td>0.957</td><td>0.463</td><td>0.900</td><td>0.548</td><td>0.934</td><td>0.412</td><td>0.815</td><td>0.509</td><td>0.536</td><td>0.543</td><td>0.509</td><td>0.734</td><td>0.655</td><td></td></tr><tr><td>Zodiac</td><td>0.998</td><td>0.843</td><td>0.998</td><td>0.973</td><td>0.998</td><td>0.880</td><td>0.997</td><td>0.944</td><td>0.958</td><td>0.972</td><td>0.989</td><td>0.995</td><td>0.962</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td>0.899</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.806</td><td>0.999</td><td>0.973</td><td>0.982</td><td>0.997</td><td>1.000</td><td>1.000</td><td>0.971</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.988</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.987</td><td>1.000</td><td>0.992</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.997</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.991</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.983</td><td>1.000</td><td>0.992</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.997</td><td></td></tr><tr><td rowspan="9">SD-Prompts</td><td>DwtDct</td><td>0.819</td><td>0.557</td><td>0.516</td><td>0.506</td><td>0.685</td><td>0.822</td><td>0.530</td><td>0.513</td><td>0.512</td><td>0.509</td><td>0.723</td><td>0.794</td><td>0.624</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.537</td><td>0.459</td><td>0.610</td><td>0.999</td><td>0.998</td><td>0.859</td><td>0.659</td><td>0.620</td><td>0.623</td><td>0.743</td><td>0.860</td><td>0.747</td><td></td></tr><tr><td>RivaGAN</td><td>0.991</td><td>0.823</td><td>0.963</td><td>0.810</td><td>0.988</td><td>0.961</td><td>0.915</td><td>0.572</td><td>0.535</td><td>0.567</td><td>0.980</td><td>0.983</td><td>0.841</td><td></td></tr><tr><td>S.Sign.</td><td>0.994</td><td>0.899</td><td>0.967</td><td>0.769</td><td>0.888</td><td>0.742</td><td>0.809</td><td>0.677</td><td>0.671</td><td>0.493</td><td>0.983</td><td>0.990</td><td>0.824</td><td></td></tr><tr><td>Tree-Ring</td><td>0.944</td><td>0.471</td><td>0.894</td><td>0.466</td><td>0.912</td><td>0.423</td><td>0.802</td><td>0.509</td><td>0.514</td><td>0.543</td><td>0.469</td><td>0.749</td><td>0.641</td><td></td></tr><tr><td>Zodiac</td><td>0.998</td><td>0.748</td><td>0.999</td><td>0.979</td><td>0.999</td><td>0.903</td><td>1.000</td><td>0.940</td><td>0.975</td><td>0.958</td><td>0.994</td><td>0.996</td><td>0.957</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>1.000</td><td>0.742</td><td>1.000</td><td>0.990</td><td>1.000</td><td>0.850</td><td>1.000</td><td>0.983</td><td>0.987</td><td>0.999</td><td>1.000</td><td>1.000</td><td>0.963</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.972</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.988</td><td>1.000</td><td>0.996</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.996</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.955</td><td>1.000</td><td>0.999</td><td>1.000</td><td>0.992</td><td>1.000</td><td>0.996</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr><tr><td rowspan="9">DiffusionDB</td><td>DwtDct</td><td>0.842</td><td>0.563</td><td>0.515</td><td>0.509</td><td>0.672</td><td>0.829</td><td>0.526</td><td>0.513</td><td>0.514</td><td>0.512</td><td>0.723</td><td>0.801</td><td>0.627</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.998</td><td>0.558</td><td>0.463</td><td>0.593</td><td>0.997</td><td>0.995</td><td>0.830</td><td>0.658</td><td>0.608</td><td>0.621</td><td>0.742</td><td>0.860</td><td>0.744</td><td></td></tr><tr><td>RivaGAN</td><td>0.987</td><td>0.839</td><td>0.960</td><td>0.790</td><td>0.985</td><td>0.937</td><td>0.893</td><td>0.553</td><td>0.518</td><td>0.556</td><td>0.974</td><td>0.979</td><td>0.831</td><td></td></tr><tr><td>S.Sign.</td><td>0.990</td><td>0.890</td><td>0.967</td><td>0.787</td><td>0.889</td><td>0.726</td><td>0.819</td><td>0.690</td><td>0.687</td><td>0.496</td><td>0.981</td><td>0.986</td><td>0.826</td><td></td></tr><tr><td>Tree-Ring</td><td>0.940</td><td>0.487</td><td>0.889</td><td>0.434</td><td>0.904</td><td>0.392</td><td>0.799</td><td>0.454</td><td>0.503</td><td>0.454</td><td>0.499</td><td>0.715</td><td>0.622</td><td></td></tr><tr><td>Zodiac</td><td>0.992</td><td>0.752</td><td>0.988</td><td>0.933</td><td>0.988</td><td>0.834</td><td>0.984</td><td>0.911</td><td>0.926</td><td>0.903</td><td>0.971</td><td>0.985</td><td>0.931</td><td></td></tr><tr><td>HSTR 
(ours)</td><td>0.999</td><td>0.792</td><td>0.996</td><td>0.981</td><td>0.996</td><td>0.792</td><td>0.991</td><td>0.968</td><td>0.969</td><td>0.989</td><td>1.000</td><td>1.000</td><td>0.956</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.989</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.963</td><td>1.000</td><td>0.995</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.977</td><td>1.000</td><td>0.999</td><td>1.000</td><td>0.974</td><td>0.999</td><td>0.997</td><td>0.999</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.995</td><td></td></tr></table>
- $x$ denotes the coordinates in the QR code domain.
- $\tilde{x}$ represents the corresponding coordinates in the embedding region of the Fourier domain. Since the QR code is split into two halves, each half is mapped to either the real or imaginary component.
- $c \in \{\mathrm{Re}, \mathrm{Im}\}$ indicates whether the embedding occurs in the real or imaginary part.
- $F(\tilde{x}, c)$ is the real-valued amplitude of the complex Gaussian noise in the Fourier domain, where a complex number is expressed as $F_{\mathrm{Re}} + iF_{\mathrm{Im}}$ , making both real and imaginary amplitudes individually real-valued.
To maintain symmetry while avoiding numerical instability, the embedding region is positioned one pixel to the right of the vertical DC axis. Additionally, to increase the amount of embedded information and statistically reduce errors, each QR code cell is represented by multiple pixels arranged in a square pattern, e.g., $2 \times 2$ pixels.
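Eq. (3) then reduces to a sign assignment on Fourier amplitudes. A sketch with illustrative shapes (a half of a version 1 QR code is taken here as $21 \times 11$, and cell replication is shown only in a comment):

```python
import numpy as np

def hsqr_embed(amplitudes, qr_bits):
    # Eq. (3): keep each coefficient's magnitude, set its sign from the bit.
    return np.where(qr_bits == 1, 1.0, -1.0) * np.abs(amplitudes)

rng = np.random.default_rng(4)
amps = 45.0 * rng.standard_normal((21, 11))   # one half of the free region
bits = rng.integers(0, 2, size=(21, 11))      # one half of the QR code
emb = hsqr_embed(amps, bits)
assert np.allclose(np.abs(emb), np.abs(amps))        # magnitudes preserved
assert np.array_equal((emb > 0).astype(int), bits)   # signs carry the bits
# With a cell size of 2 pixels, each QR cell would additionally be
# replicated over a 2x2 block, e.g. via np.kron(bits, np.ones((2, 2))).
```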
Detection. The detection process for semantic watermarking is performed by computing the $L_{1}$ distance. In HSQR, the ground truth QR binary pattern is converted into a signed Boolean representation as follows:
$$
\mathrm{QR}^{*}(\tilde{x}, c) = \begin{cases} +\Lambda, & \text{if } \mathrm{QR}(x) = 1 \\ -\Lambda, & \text{if } \mathrm{QR}(x) = 0, \end{cases} \tag{4}
$$
where $\Lambda$ is a fixed amplitude for binary encoding.
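Detection against the signed template of Eq. (4) can be sketched as a toy $L_1$ comparison (the matching and null keys below are synthetic):

```python
import numpy as np

Lam = 45.0
rng = np.random.default_rng(5)
bits = rng.integers(0, 2, (21, 21))
qr_star = np.where(bits == 1, Lam, -Lam)   # Eq. (4): signed template

# A key whose signs follow the bits sits much closer to the template in L1
# than an unwatermarked spectrum of comparable scale.
embedded = np.sign(qr_star) * np.abs(Lam * rng.standard_normal((21, 21)))
null = Lam * rng.standard_normal((21, 21))
d_match = np.abs(embedded - qr_star).sum()
d_null = np.abs(null - qr_star).sum()
print(d_match < d_null)   # True
```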
# 5. Experiments
# 5.1. Experimental Settings
Semantic Watermarks. The Tree-Ring [48] watermark is embedded into channel 3 with a radius of 14, and HSTR adopts the same watermark parameters as its baseline. For RingID [11], we follow the original settings, embedding a tree-ring pattern with a radius range of 3-14 in channel 3, along with a Gaussian noise key in channel 0. Both SFW methods follow RingID in utilizing a Gaussian noise key in channel 0. For HSQR, we use a version 1 QR code ($21 \times 21$ cells) with a cell size of 2 pixels and error correction level H, capable of encoding up to 72 bits. The QR code is embedded in channel 3, and the encoding amplitude $\Lambda$ is set to 45, corresponding to the standard deviation of the real and imaginary components in the Fourier domain of a $64 \times 64$ normal Gaussian latent vector $(\approx \sqrt{64^2 / 2})$. Note that these experiments are conducted on $512 \times 512$ images. At higher resolutions, a larger embedding space allows for increased capacity through higher QR code versions that encode more bits, as well as the use of stronger watermarking parameters, enabling greater robustness and adaptability. Additionally, the post-hoc semantic watermarking method, Zodiac [53], is included in our comparative experiments.

Table 2. Identification accuracy for different watermarking methods, evaluated across multiple attack conditions. Perfect Match Rate is used for bitstream-based methods. The best performance for each item is highlighted with shading. HSQR achieves state-of-the-art performance, while HSTR consistently outperforms other Gaussian radius-based semantic baselines, particularly under cropping attacks.

<table><tr><td rowspan="2">Datasets</td><td rowspan="2">Methods</td><td colspan="2">No Attack</td><td colspan="6">Signal Processing Attack</td><td colspan="3">Regeneration Attack</td><td colspan="2">Cropping Attack</td><td>Avg</td></tr><tr><td>Clean</td><td>Bright.</td><td>Cont.</td><td>JPEG</td><td>Blur</td><td>Noise</td><td>BM3D</td><td>VAE-B</td><td>VAE-C</td><td>Diff.</td><td>C.C.</td><td>R.C.</td><td></td><td></td></tr><tr><td rowspan="9">MS-COCO</td><td>DwtDct</td><td>0.466</td><td>0.044</td><td>0.000</td><td>0.000</td><td>0.038</td><td>0.442</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.083</td><td></td></tr><tr><td>DwtDctSvd</td><td>1.000</td><td>0.044</td><td>0.019</td><td>0.000</td><td>0.999</td><td>0.998</td><td>0.037</td><td>0.000</td><td>0.000</td><td>0.003</td><td>0.000</td><td>0.000</td><td>0.258</td><td></td></tr><tr><td>RivaGAN</td><td>0.974</td><td>0.260</td><td>0.772</td><td>0.023</td><td>0.961</td><td>0.686</td><td>0.348</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.852</td><td>0.909</td><td>0.482</td><td></td></tr><tr><td>S.Sign.</td><td>0.873</td><td>0.177</td><td>0.563</td><td>0.000</td><td>0.036</td><td>0.010</td><td>0.007</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.709</td><td>0.802</td><td>0.265</td><td></td></tr><tr><td>Tree-Ring</td><td>0.303</td><td>0.087</td><td>0.207</td><td>0.072</td><td>0.256</td><td>0.030</td><td>0.162</td><td>0.083</td><td>0.072</td><td>0.054</td><td>0.009</td><td>0.033</td><td>0.114</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR (ours)</td><td>1.000</td><td>0.714</td><td>0.999</td><td>0.886</td><td>0.998</td><td>0.460</td><td>0.972</td><td>0.833</td><td>0.831</td><td>0.971</td><td>1.000</td><td>1.000</td><td>0.889</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.875</td><td>1.000</td><td>0.975</td><td>1.000</td><td>0.919</td><td>0.996</td><td>0.978</td><td>0.970</td><td>0.998</td><td>0.874</td><td>0.978</td><td>0.964</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.958</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.901</td><td>0.999</td><td>0.980</td><td>0.987</td><td>0.999</td><td>1.000</td><td>1.000</td><td>0.985</td><td></td></tr><tr><td rowspan="9">SD-Prompts</td><td>DwtDct</td><td>0.285</td><td>0.024</td><td>0.000</td><td>0.000</td><td>0.017</td><td>0.276</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.050</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.993</td><td>0.028</td><td>0.011</td><td>0.000</td><td>0.982</td><td>0.979</td><td>0.085</td><td>0.000</td><td>0.000</td><td>0.007</td><td>0.000</td><td>0.000</td><td>0.257</td><td></td></tr><tr><td>RivaGAN</td><td>0.878</td><td>0.213</td><td>0.613</td><td>0.009</td><td>0.857</td><td>0.657</td><td>0.304</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.736</td><td>0.768</td><td>0.420</td><td></td></tr><tr><td>S.Sign.</td><td>0.813</td><td>0.263</td><td>0.420</td><td>0.000</td><td>0.021</td><td>0.015</td><td>0.006</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.576</td><td>0.716</td><td>0.236</td><td></td></tr><tr><td>Tree-Ring</td><td>0.288</td><td>0.094</td><td>0.189</td><td>0.051</td><td>0.235</td><td>0.034</td><td>0.159</td><td>0.079</td><td>0.076</td><td>0.056</td><td>0.012</td><td>0.041</td><td>0.110</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR (ours)</td><td>1.000</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.885</td><td>1.000</td><td>0.976</td><td>0.998</td><td>0.886</td><td>0.993</td><td>0.980</td><td>0.973</td><td>0.995</td><td>0.876</td><td>0.981</td><td>0.962</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.930</td><td>1.000</td><td>0.994</td><td>1.000</td><td>0.942</td><td>0.999</td><td>0.991</td><td>0.997</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.988</td><td></td></tr><tr><td rowspan="9">DiffusionDB</td><td>DwtDct</td><td>0.357</td><td>0.037</td><td>0.000</td><td>0.000</td><td>0.034</td><td>0.320</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.062</td><td></td></tr><tr><td>DwtDctSvd</td><td>0.990</td><td>0.036</td><td>0.019</td><td>0.000</td><td>0.975</td><td>0.959</td><td>0.081</td><td>0.000</td><td>0.000</td><td>0.001</td><td>0.000</td><td>0.000</td><td>0.255</td><td></td></tr><tr><td>RivaGAN</td><td>0.858</td><td>0.213</td><td>0.625</td><td>0.020</td><td>0.848</td><td>0.615</td><td>0.221</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.756</td><td>0.780</td><td>0.411</td><td></td></tr><tr><td>S.Sign.</td><td>0.798</td><td>0.207</td><td>0.472</td><td>0.000</td><td>0.027</td><td>0.005</td><td>0.005</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.643</td><td>0.738</td><td>0.241</td><td></td></tr><tr><td>Tree-Ring</td><td>0.280</td><td>0.095</td><td>0.190</td><td>0.059</td><td>0.233</td><td>0.037</td><td>0.145</td><td>0.081</td><td>0.072</td><td>0.050</td><td>0.013</td><td>0.039</td><td>0.108</td><td></td></tr><tr><td>Zodiac</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td></td></tr><tr><td>HSTR (ours)</td><td>0.996</td><td>0.721</td><td>0.992</td><td>0.854</td><td>0.989</td><td>0.563</td><td>0.958</td><td>0.830</td><td>0.821</td><td>0.952</td><td>0.996</td><td>0.996</td><td>0.889</td><td></td></tr><tr><td>RingID</td><td>1.000</td><td>0.895</td><td>1.000</td><td>0.947</td><td>0.996</td><td>0.871</td><td>0.992</td><td>0.968</td><td>0.958</td><td>0.990</td><td>0.875</td><td>0.984</td><td>0.956</td><td></td></tr><tr><td>HSQR (ours)</td><td>1.000</td><td>0.954</td><td>1.000</td><td>0.988</td><td>1.000</td><td>0.906</td><td>0.998</td><td>0.982</td><td>0.991</td><td>0.994</td><td>1.000</td><td>1.000</td><td>0.984</td><td></td></tr></table>
Model Setup and Datasets. The diffusion model used for the experiments is Stable Diffusion v2-1-base [37], configured to generate images at a resolution of $512 \times 512$ with a CFG scale of 7.5. Both generation and inversion steps of the DDIM scheduler [41] are set to 50. The datasets used for image generation include 5,000 captions from the MS-COCO-2017 training set [29], 1,001 sampled prompts from the 2M subset of DiffusionDB [46], and 8,192 prompts from the Stable-Diffusion-Prompts test set [40]. For watermark detection, the number of watermarked images used for each dataset is set to 1,000. Specifically, verification is conducted on 1,000 pairs of watermarked and unwatermarked images, while identification is performed on 1,000 watermarked images.
Attack Methods. To evaluate the robustness of our method against various distortions, we apply 11 different attacks, categorized into three major types.
- Signal Processing Attacks: These include brightness adjustment (ranging from 0 to 7), contrast adjustment (factor of 0.5), JPEG compression (quality factor of 25), Gaussian blur (radius of 5), Gaussian noise ( $\sigma = 0.05$ ), and BM3D denoising ( $\sigma = 0.1$ ).
- Regeneration Attacks: We apply two VAE-based image compression models, Bmshj18 [4] (VAE-B) and Cheng20 [8] (VAE-C), both at quality level 3, as well as a diffusion-based regeneration attack by Zhao et al. [56] (referred to as Diff.) using 60 denoising steps.
- Cropping Attacks: We apply center crop (C.C.) with a crop scale of 0.5 and random crop (R.C.) with a crop scale of 0.7, where the crop scale represents the ratio of the cropped image area to the original image.
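Two of the attacks above, sketched in NumPy under the assumption of images normalized to $[0, 1]$ (function names are ours):

```python
import numpy as np

def gaussian_noise(img, sigma=0.05, rng=None):
    # Additive Gaussian noise at the paper's sigma, clipped back to [0, 1].
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def center_crop(img, scale=0.5):
    # `scale` is the ratio of the cropped AREA to the original image area,
    # so each side shrinks by sqrt(scale).
    h, w = img.shape[:2]
    ch, cw = int(h * scale ** 0.5), int(w * scale ** 0.5)
    r0, c0 = (h - ch) // 2, (w - cw) // 2
    return img[r0:r0 + ch, c0:c0 + cw]

img = np.full((512, 512), 0.5)
assert center_crop(img, 0.5).shape == (362, 362)
assert gaussian_noise(img, 0.05, np.random.default_rng(0)).shape == img.shape
```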
Evaluation Metrics. The $L_{1}$ distance is computed only within the key region of the complex Fourier domain of latent noise, and unless otherwise specified, 2,048 keys are used for identification accuracy. For verification, semantic watermarking methods use True Positive Rate at $1\%$ False Positive Rate (TPR@1%FPR) as the evaluation metric. To ensure a fair comparison with bitstream-based approaches such as DwtDct, DwtDctSvd [12], RivaGAN [52], and Stable Signature (S.Sign.) [22], we measure Bit Accuracy for these methods. For identification, where the goal is to match the exact message, Perfect Match Rate is used as the metric for bitstream-based approaches. To evaluate the generation quality, FID [24] is measured between watermarked images and 5,000 MS-COCO ground truth images. We also calculate the CLIP score [34] using the OpenCLIP-ViT/G model [9] to determine how closely the generated image aligns with the input prompt.

Table 3. Generative quality evaluation of watermarking methods based on FID (MS-COCO ground truth) and CLIP score. The best performance for each item is highlighted with shading, while bold text specifically marks the low CLIP score in RingID. Our proposed methods preserve frequency integrity, achieving the best balance between watermark robustness and generative performance, whereas RingID introduces visible artifacts, compromising perceptual quality. Vrf. and Idf. denote the average detection performance in verification and identification tasks, respectively.

<table><tr><td colspan="2">Semantic Methods</td><td>FID ↓</td><td>CLIP ↑</td><td>Vrf.</td><td>Idf.</td></tr><tr><td rowspan="4">Merged in Generation</td><td>Tree-Ring</td><td>26.418</td><td>0.326</td><td>0.655</td><td>0.114</td></tr><tr><td>RingID</td><td>27.052</td><td>0.324</td><td>0.997</td><td>0.964</td></tr><tr><td>HSTR (ours)</td><td>25.062</td><td>0.329</td><td>0.971</td><td>0.889</td></tr><tr><td>HSQR (ours)</td><td>24.895</td><td>0.330</td><td>0.997</td><td>0.985</td></tr></table>
# 5.2. Comparison with Baselines
In this section, we compare the detection performance in both verification and identification tasks, as well as the generation quality of different watermarking methods.
Enhanced Detection Robustness. According to Tab. 1, our proposed methods consistently achieve top-tier detection performance across various attacks. A key observation is that non-semantic watermarking methods [12, 22, 52] exhibit significant degradation under regeneration attacks, highlighting their vulnerability. Among Gaussian radius-based pattern methods (Tree-Ring, Zodiac, and HSTR), HSTR outperforms the corresponding baselines in most cases, demonstrating its effectiveness in enhancing detection robustness while maintaining the fundamental tree-ring structure. In contrast, RingID leverages high-energy signed constant ring patterns, securing strong detection performance. However, as acknowledged by the authors, this method introduces noticeable ring-like artifacts in generated images, disrupting the balance between watermark robustness and generative quality. A detailed quantitative evaluation of this trade-off is presented in the following section. Meanwhile, Zodiac suffers from time-consuming processing, taking several minutes to apply the watermark (7.36 minutes per image on the MS-COCO dataset), which further limits its practicality in real-world applications. In contrast, our methods following the merged-in-generation scheme introduce no additional processing time, ensuring seamless watermark embedding.
In Tab. 2, we compare identification results, which reveal an even greater disparity in detection performance across different watermarking methods. Gaussian radius-based pattern methods, except for HSTR, perform poorly in identification tasks, with Zodiac (designed solely for verification) failing entirely across all scenarios, achieving zero identification accuracy. Further analysis on scalability under different message capacities is provided in Sec. 5.3.3. Notably, our methods exhibit strong resilience against cropping attacks in both tasks, reinforcing the robustness of center-aware embedding. HSQR achieves state-of-the-art identification accuracy across all datasets, further establishing its dominance in watermark retrieval.
Balance with Generative Quality. Tab. 3 presents the generative quality of different semantic watermarking methods following the merged-in-generation scheme, evaluated using FID and CLIP score. RingID, which deviates from a Gaussian distribution by embedding high-energy perturbations, exhibits the worst generative performance, particularly reflected in its low CLIP score. This decline is attributed to the noticeable ring-like artifacts, as discussed earlier. In contrast, our proposed methods, which preserve frequency integrity via SFW, achieve the top two FID scores, demonstrating a better balance between watermark robustness and generative quality.
Expanding Frequency Utilization in Latent Watermarking. Traditional image-domain watermarking methods prioritize low-mid frequency embedding to resist compression and filtering attacks. This approach has also been adopted in latent diffusion-based semantic watermarking methods [11, 48, 53] without explicitly considering the differences between pixel-space and latent-space transformations. However, our findings suggest that such frequency constraints may not be necessary in the latent space, where watermark retrieval involves latent encoding and DDIM inversion, altering the impact of frequency perturbations. This distinction is evident in HSQR, which achieves state-of-the-art detection performance while utilizing nearly the entire frequency spectrum. Instead of relying on specific frequency bands, HSQR leverages structured statistical encoding, distributing watermark information across multiple pixels ( $2 \times 2$ per QR cell). This redundancy enhances robustness against distortions, enabling effective retrieval even with broader frequency usage. These results indicate that latent-space watermarking is not bounded by traditional low-mid frequency constraints. Future semantic watermarking strategies can benefit from statistically robust encoding methods, allowing for effective watermark retrieval across a wider frequency range without compromising detection accuracy.
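The statistical-redundancy argument can be illustrated with a toy majority-vote decode over $2 \times 2$ replicated cells (the noise level and threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
bits = rng.integers(0, 2, (21, 21))
grid = np.kron(bits, np.ones((2, 2)))          # replicate each cell 2x2
noisy = grid + rng.normal(0.0, 0.4, grid.shape)

# Decode by averaging each 2x2 block and thresholding: the replication
# halves the effective noise standard deviation before the decision.
blocks = noisy.reshape(21, 2, 21, 2).mean(axis=(1, 3))
decoded = (blocks > 0.5).astype(int)
print((decoded == bits).mean())   # close to 1.0
```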
# 5.3. Ablation Study
# 5.3.1. Impact of SFW on Detection Performance
We evaluate the impact of Hermitian SFW by comparing detection performance across four cases (Tab. 4). Without SFW, using both real and imaginary components for detection (Case A) results in the lowest performance due to frequency degradation, while restricting detection to the real component (Case B) improves accuracy. In contrast, applying SFW (Cases C and D) preserves frequency integrity,
Table 4. Ablation study on detection performance as a function of frequency integrity (✓ or ×) and the number of detection regions used (1: real only; 2: real & imaginary). Vrf. and Idf. denote average detection performance in verification (TPR@1% FPR) and identification (accuracy), respectively. $\Delta L_1^*$ denotes the normalized $\Delta L_1$ metric, indicating detection effectiveness.
<table><tr><td>Case</td><td>Methods</td><td>Freq. Int.</td><td># Det.</td><td>Vrf.</td><td>Idf.</td><td>ΔL1*↑</td></tr><tr><td>A</td><td>Tree-Ring</td><td>×</td><td>2</td><td>0.653</td><td>0.114</td><td>0.232</td></tr><tr><td>B</td><td>Tree-Ring</td><td>×</td><td>1</td><td>0.805</td><td>0.416</td><td>0.368</td></tr><tr><td>C</td><td>HSTR (ours)</td><td>✓</td><td>1</td><td>0.936</td><td>0.775</td><td>0.471</td></tr><tr><td>D</td><td>HSTR (ours)</td><td>✓</td><td>2</td><td>0.971</td><td>0.889</td><td>0.476</td></tr></table>

Figure 5. Identification accuracy under center crop and random crop attacks at different crop scales. HSTR and HSQR maintain higher accuracy compared to RingID, demonstrating improved robustness against cropping.


Figure 6. Identification accuracy across watermark message capacities. HSQR remains nearly perfect, while RingID and HSTR degrade at higher capacities. Tree-Ring and Zodiac fail to scale effectively.
leading to a significant boost in detection performance. Notably, HSTR with SFW and full complex detection (Case D) achieves the highest accuracy, demonstrating that leveraging both frequency components maximizes retrieval effectiveness. These results confirm that SFW enables robust semantic watermarking by maintaining frequency integrity and fully utilizing the Fourier domain for detection.
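
The mechanism can be sketched in a few lines of numpy (illustrative only; `hermitian_symmetrize` is an invented helper, not the authors' code): projecting a watermarked spectrum onto the Hermitian-symmetric subspace guarantees a real-valued inverse FFT, which is why both the real and imaginary Fourier components remain meaningful at detection time instead of the imaginary part degenerating into noise.

```python
import numpy as np

def point_reflect(F):
    # G[u, v] = F[(-u) % N, (-v) % N]: frequency-index negation on the grid
    return np.roll(F[::-1, ::-1], 1, axis=(0, 1))

def hermitian_symmetrize(F):
    # Projection onto Hermitian-symmetric spectra: ifft2 of the result is real
    return 0.5 * (F + np.conj(point_reflect(F)))

rng = np.random.default_rng(0)
N = 64
# An arbitrary complex "watermarked" spectrum using the full frequency range
F = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
z = np.fft.ifft2(hermitian_symmetrize(F))
print(np.abs(z.imag).max() < 1e-9)  # True: the spatial signal is purely real
```

The symmetrization is idempotent, so applying it once at embedding time is enough to keep the Fourier representation internally consistent through the inverse transform.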
# 5.3.2. Robustness to Cropping Attacks
We assess the robustness of center-aware embedding against center crop and random crop attacks, measuring identification accuracy across different crop scales (Fig. 5). As crop scale decreases, accuracy declines for all methods. However, RingID exhibits a steeper drop, indicating higher vulnerability to cropping. In contrast, HSTR and HSQR degrade more gradually, demonstrating improved resilience. While extreme cropping (scale 0.2) significantly affects all methods, our center-aware design consistently outperforms RingID, confirming its effectiveness in preserving watermark information under cropping distortions.
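
The geometric intuition behind center-aware embedding can be shown with a spatial toy example (this deliberately ignores the latent Fourier embedding and the VAE; it only illustrates why centrally placed information survives the crop attack while peripheral information does not):

```python
import numpy as np

def center_crop(x, scale):
    """Keep the central (scale*H, scale*W) window, as in the crop attack."""
    h, w = x.shape
    ch, cw = int(h * scale), int(w * scale)
    top, left = (h - ch) // 2, (w - cw) // 2
    return x[top:top + ch, left:left + cw]

canvas = np.zeros((64, 64))
canvas[28:36, 28:36] = 1.0   # center-aware placement
canvas[0:8, 0:8] = 2.0       # peripheral placement

cropped = center_crop(canvas, 0.5)  # crop scale 0.5 keeps the central 32x32
print(int((cropped == 1.0).sum()))  # 64: the central mark survives intact
print(int((cropped == 2.0).sum()))  # 0: the peripheral mark is cut away
```

At crop scale 0.5, everything outside the central quarter of the area is discarded, so only spatially centered structure has a chance of being recovered after the attack.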
# 5.3.3. Impact of Capacity on Identification
We evaluate identification accuracy across different watermark message capacities (64 to 8,192) for five methods (Fig. 6). Zodiac's performance collapses, while Tree-Ring exhibits rapid performance degradation, becoming nearly unusable at higher capacities. RingID, HSTR, and HSQR remain robust, maintaining $80\%+$ accuracy, though HSTR declines faster, and RingID starts degrading from 2,048 onward. HSQR demonstrates the highest scalability, retaining near-perfect accuracy even at the largest capacity, confirming its superior robustness in high-capacity scenarios.
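
Identification at a given capacity can be sketched as nearest-key matching under an L1 distance (a hypothetical toy, not the paper's exact decoder): the recovered pattern is assigned to the closest of $K$ candidate keys, and as $K$ grows the keys crowd the space, which is where weaker encodings start to lose accuracy.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, K = 256, 1024                    # pattern size, number of identities
keys = rng.standard_normal((K, dim))  # one reference key per message

def identify(recovered):
    # Identification: pick the key with the smallest L1 distance
    return int(np.abs(keys - recovered).sum(axis=1).argmin())

true_id = 37
recovered = keys[true_id] + 0.5 * rng.standard_normal(dim)  # noisy retrieval
print(identify(recovered) == true_id)  # True at this noise level and capacity
```

With random Gaussian keys the expected inter-key L1 distance stays far above the noise-induced distance to the true key, so identification remains reliable until the noise or the capacity erodes that margin.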
# 6. Conclusion
We have introduced Hermitian SFW, a novel approach to semantic watermarking in the latent diffusion model framework. Unlike existing methods that fail to preserve frequency integrity, our approach ensures that watermark embeddings maintain a consistent statistical structure in the latent noise distribution. This is achieved through Hermitian symmetry enforcement, which preserves frequency components and enhances detection and generation quality. Additionally, we have proposed center-aware embedding, which significantly improves robustness against cropping attacks by strategically placing watermarks in a spatially resilient region of the latent representation.

Through comprehensive experiments, we demonstrated that our method achieves state-of-the-art detection accuracy in both verification and identification tasks while also maintaining superior image generation quality, as shown by FID and CLIP scores. Our study highlights the importance of frequency integrity in Fourier-based watermarking and challenges the assumption that semantic watermarking must be confined to low-mid frequency bands. Experimental results confirm that a properly structured frequency-domain watermark can be effectively embedded and retrieved across the entire frequency spectrum without compromising generative quality.

Future work includes exploring adaptive embedding strategies to further enhance robustness against adversarial attacks and extreme distortions, as well as extending our method to more diverse generative architectures beyond LDMs.
# Acknowledgements
This research was supported by the 2024 AI Semiconductor Application/Demonstration Support Program of the Ministry of Science and ICT and the National IT Industry Promotion Agency (NIPA) with Markany Co., Ltd. as the lead organization, and in part by the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2025.
# References
[1] Mahdi Ahmadi, Alireza Norouzi, Nader Karimi, Shadrokh Samavi, and Ali Emami. Redmark: Framework for residual diffusion watermarking based on deep networks. Expert Systems with Applications, 146:113157, 2020. 1
[2] Dhruv Arya. A survey of frequency and wavelet domain digital watermarking techniques. 2
[3] Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, and Shruti Agarwal. Promark: Proactive diffusion watermarking for causal attribution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10802-10811, 2024. 1, 2
[4] Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018. 6
[5] Mauro Barni, Franco Bartolini, Vito Cappellini, and Alessandro Piva. A dct-domain system for robust image watermarking. Signal processing, 66(3):357-372, 1998. 1, 2
[6] Mauro Barni, Franco Bartolini, and Alessandro Piva. Improved wavelet-based watermarking through pixel-wise masking. IEEE transactions on image processing, 10(5): 783-791, 2001. 1, 2
[7] Tu Bui, Shruti Agarwal, Ning Yu, and John Collomosse. Rosteals: Robust steganography using autoencoder latent space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 933-942, 2023. 1, 2
[8] Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto. Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7939-7948, 2020. 6
[9] Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2818-2829, 2023. 7
[10] Hai Ci, Yiren Song, Pei Yang, Jinheng Xie, and Mike Zheng Shou. Wmadapter: Adding watermark control to latent diffusion models. arXiv preprint arXiv:2406.08337, 2024. 1, 2
[11] Hai Ci, Pei Yang, Yiren Song, and Mike Zheng Shou. Ringid: Rethinking tree-ring watermarking for enhanced multi-key identification. arXiv preprint arXiv:2404.14055, 2024. 2, 3, 4, 5, 7
[12] Ingemar Cox, Matthew Miller, Jeffrey Bloom, Jessica Fridrich, and Ton Kalker. Digital watermarking and steganography. Morgan Kaufmann, 2007. 1, 2, 6, 7
[13] Ingemar J Cox, Joe Kilian, F Thomson Leighton, and Talal Shamoon. Secure spread spectrum watermarking for multimedia. IEEE Transactions on Image Processing, 6(12):1673-1687, 1997. 1, 2
[14] Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, Yue Xing, and Jiliang Tang. Diffusionshield: A watermark for data copyright protection against generative diffusion models. ACM SIGKDD Explorations Newsletter, 26(2): 60-75, 2025. 2
[15] DEJEY and RS Rajesh. An improved wavelet domain digital watermarking for image protection. International journal of wavelets, multiresolution and information processing, 8(01): 19-31, 2010. 1, 2
[16] Denso Wave Incorporated. QR Code Essentials. Denso Wave, 2011. Available at: https://www.qrcode.com/en/about/standards.html. 4
[17] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 1
[18] Han Fang, Dongdong Chen, Qidong Huang, Jie Zhang, Zehua Ma, Weiming Zhang, and Nenghai Yu. Deep template-based watermarking. IEEE Transactions on Circuits and Systems for Video Technology, 31(4):1436-1451, 2020. 1
[19] Han Fang, Zhaoyang Jia, Yupeng Qiu, Jiyi Zhang, Weiming Zhang, and Ee-Chien Chang. De-end: decoder-driven watermarking network. IEEE Transactions on Multimedia, 25: 7571-7581, 2022. 1
[20] Weitao Feng, Wenbo Zhou, Jiyan He, Jie Zhang, Tianyi Wei, Guanlin Li, Tianwei Zhang, Weiming Zhang, and Nenghai Yu. Aqualora: Toward white-box protection for customized stable diffusion models via watermark lora. arXiv preprint arXiv:2405.11135, 2024. 1, 2
[21] Pierre Fernandez, Alexandre Sablayrolles, Teddy Furon, Hervé Jégou, and Matthijs Douze. Watermarking images in self-supervised latent spaces. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3054-3058. IEEE, 2022. 1
[22] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023. 1, 2, 6, 7
[23] Yiyang Guo, Ruizhe Li, Mude Hui, Hanzhong Guo, Chen Zhang, Chuangjian Cai, Le Wan, and Shangfei Wang. Freqmark: Invisible image watermarking via frequency based optimization in latent space. arXiv preprint arXiv:2410.20824, 2024. 2
[24] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6
[25] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 1
[26] Jae-Eun Lee, Young-Ho Seo, and Dong-Wook Kim. Convolutional neural network-based digital image watermarking adaptive to the resolution of image and watermark. Applied Sciences, 10(19):6854, 2020. 1
[27] Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu. Diffusetrace: A transparent and flexible watermarking scheme for latent diffusion model. arXiv preprint arXiv:2405.02696, 2024. 1, 2
[28] Chunlei Li, Zhaoxiang Zhang, Yunhong Wang, Bin Ma, and Di Huang. Dither modulation of significant amplitude difference for wavelet based robust watermarking. Neurocomputing, 166:404-415, 2015. 1, 2
[29] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 6
[30] Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, and Peyman Milanfar. Distortion agnostic deep watermarking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13548-13557, 2020. 1
[31] Zheling Meng, Bo Peng, and Jing Dong. Latent watermark: Inject and detect watermarks in latent diffusion space. arXiv preprint arXiv:2404.00230, 2024. 1, 2
[32] A Miyazaki and A Okamoto. Analysis of watermarking systems in the frequency domain and its application to design of robust watermarking systems. In Proceedings 2001 International Conference on Image Processing (Cat. No. 01CH37205), pages 506-509. IEEE, 2001. 2
[33] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021. 1
[34] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6
[35] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1 (2):3, 2022. 1
[36] Ahmad Rezaei, Mohammad Akbari, Saeed Ranjbar Alvar, Arezou Fatemi, and Yong Zhang. Lawa: Using latent space for in-generation image watermarking. arXiv preprint arXiv:2408.05868, 2024. 1, 2
[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 2, 6
[38] Joseph JK Ó Ruanaidh and Thierry Pun. Rotation, scale and translation invariant spread spectrum digital image watermarking. Signal processing, 66(3):303-317, 1998. 1, 2
[39] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494, 2022. 1
[40] Gustavo Santana. Gustavosta: Stable-diffusion-prompts. https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts, 2022. 6
[41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 1, 3, 6
[42] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2117-2126, 2020. 1
[43] Jun Tian. Reversible data embedding using a difference expansion. IEEE transactions on circuits and systems for video technology, 13(8):890-896, 2003. 1, 2
[44] Ron G Van Schyndel, Andrew Z Tirkel, and Charles F Osborne. A digital watermark. In Proceedings of 1st international conference on image processing, pages 86-90. IEEE, 1994. 1, 2
[45] Wenbo Wan, Jun Wang, Yunming Zhang, Jing Li, Hui Yu, and Jiande Sun. A comprehensive survey on robust image watermarking. Neurocomputing, 488:226-247, 2022. 2
[46] Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896, 2022. 6
[47] Xiu-mei Wen, Wei Zhao, and Fan-xing Meng. Research of a digital image watermarking algorithm resisting geometrical attacks in fourier domain. In 2009 International Conference on Computational Intelligence and Security, pages 265-268. IEEE, 2009. 2
[48] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-rings watermarks: Invisible fingerprints for diffusion images. Advances in Neural Information Processing Systems, 36, 2024. 2, 3, 4, 5, 7
[49] Raymond B Wolfgang, Christine I Podilchuk, and Edward J Delp. Perceptual watermarks for digital images and video. Proceedings of the IEEE, 87(7):1108-1126, 1999. 1, 2
[50] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12162-12171, 2024. 2
[51] Guokai Zhang, Lanjun Wang, Yuting Su, and An-An Liu. A training-free plug-and-play watermark framework for stable diffusion. arXiv preprint arXiv:2404.05607, 2024. 1, 2
[52] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019. 1, 6, 7
[53] Lijun Zhang, Xiao Liu, Antoni Martin, Cindy Bearfield, Yuriy Brun, and Hui Guan. Attack-resilient image watermarking using stable diffusion. Advances in Neural Information Processing Systems, 37:38480-38507, 2025. 2, 3, 4, 6, 7
[54] Xuanyu Zhang, Runyi Li, Jiwen Yu, Youmin Xu, Weiqi Li, and Jian Zhang. Editguard: Versatile image watermarking for tamper localization and copyright protection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11964-11974, 2024. 1, 2
[55] Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. 2023. 2
[56] Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. Invisible image watermarks are provably removable using generative ai. Advances in Neural Information Processing Systems, 37:8643-8672, 2025. 2, 6
[57] Yimeng Zhao, Chengyou Wang, Xiao Zhou, and Zhiliang Qin. Dari-mark: Deep learning and attention network for robust image watermarking. Mathematics, 11(1):209, 2022. 1
[58] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657–672, 2018. 1
|
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8abe10af38ca26403bf3f2e4a3771f7f9ed3c26eedc8137a8d623c4320592d12
|
| 3 |
+
size 813323
|
2025/Semantic Watermarking Reinvented_ Enhancing Robustness and Generation Quality with Fourier Integrity/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Semantic versus Identity_ A Divide-and-Conquer Approach towards Adjustable Medical Image De-Identification/2b39ad1e-d18f-4579-85b8-6d56df6a82db_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|