diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_content_list.json b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fb1e345b00da675aa8b5d323a24488006ad4a285 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:220822e87ca76caa5741f72a5ae588cf53202a04c05a6b3dcd497ef0a73f5d5f +size 51349 diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_model.json b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e91314ffc772d5ad0e2dda7b294a06734a9e3685 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d9d4096419ea4ce2ff8d8c0b239b1ca3cc382c1215a0997db57b24f620e2324 +size 64154 diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_origin.pdf b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a8fd31c0249ad6a7012f676e4b82a627fcb12001 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in 
LLMs/30c60a16-249d-4033-b669-29a32b67c73b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b32b438f1dfd25f7db69b9fb8d177826c01d05c7db4a595f0294ec507bd8111 +size 293011 diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/full.md b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..302f59cbaeb975ac86665f6dc168acdc7aca3cc1 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/full.md @@ -0,0 +1,169 @@ +# Sparse-to-Dense: A Free Lunch for Lossless Acceleration of Video Understanding in LLMs + +Xuan Zhang $^{1}$ , Cunxiao Du $^{2*}$ , Sicheng Yu $^{1}$ , Jiawei Wu $^{3}$ , Fengzhuo Zhang $^{3}$ , Wei Gao $^{1}$ , Qian Liu $^{2}$ + +$^{1}$ Singapore Management University, $^{2}$ Sea AI Lab, $^{3}$ National University of Singapore + +# Abstract + +Video Large Language Models (Video-LLMs) suffer from high inference latency in long video processing due to their auto-regressive decoding mechanism, posing challenges for the efficient processing of video sequences that are usually very long. We observe that attention scores in Video-LLMs during decoding exhibit pronounced sparsity, with computational focus concentrated on a small subset of critical tokens. Motivated by this insight, we introduce Sparse-to-Dense (STD), a novel decoding strategy that integrates two distinct modules: a sparse module that rapidly generates speculative tokens using efficient top- $K$ attention, and a dense module that verifies these tokens in parallel via full self-attention. This collaborative approach accelerates Video-LLMs losslessly, effectively offering a free lunch for video understanding. 
STD is a plug-and-play solution requiring no fine-tuning or architectural changes and achieves up to a $1.94 \times$ wall time speedup while preserving model performance. It enables a seamless conversion of standard Video-LLMs into sparse counterparts, unlocking efficient long-video processing without sacrificing accuracy. + +# 1 Introduction + +Recent advances in Video Large Language Models (Video-LLMs), which combine large language models with video understanding, have achieved exceptional performance on tasks like video question answering and captioning (Lin et al., 2024a; Cao et al., 2024; Zhang et al., 2025a). A common practice in Video-LLMs is representing a video as a sequence of image frames, which results in extremely long token sequences that can strain computational resources. For instance, a 1-hour video sampled at 5-second intervals produces 720 frames, which translates to 141,120 visual tokens in VILA (Lin et al., 2024a). These extremely long + +token sequences cause Video-LLMs to suffer from high inference latency when processing lengthy videos, making real-time applications challenging. + +This latency is primarily introduced by the auto-regressive nature of current Video-LLMs, where each new token must attend to all preceding tokens, creating substantial memory and computational challenges. While mechanisms like key-value (KV) caching are employed to store pre-computed key and value tensors and reduce redundant recomputation, frequent access to the cache imposes heavy demands on memory bandwidth due to the growing amount of KV cache with the increasing sequence length. This significantly reduces the throughput of Video-LLMs. A common approach to addressing this problem is KV cache compression (Du et al., 2024b; Chen et al., 2024b; Lin et al., 2024b; Zhang et al., 2025b) or quantization (Su et al., 2025; Hooper et al., 2024; Liu et al., 2024) at test time. 
However, these methods introduce discrepancies between training and inference, degrading the performance of LLMs. + +In this paper, we aim to build a lossless acceleration method designed specifically for Video-LLMs that preserves the exact output distribution of the original model. Although speculative decoding (Leviathan et al., 2023; Chen et al., 2023; Hou et al., 2025) meets this requirement, it usually requires an extra draft model, which is expensive for Video-LLMs. In contrast, we observe that Video-LLMs exhibit a unique structural property, attention sparsity, which can serve as a training-free and plug-and-play draft model. Specifically, retaining only the top- $K$ KV caches in the attention layers preserves the original predictions for approximately $95\%$ of tokens (empirically verified), suggesting that most of the KV cache contributes minimally to the final output. Motivated by this observation, we introduce a novel decoding method called Sparse-to-Dense (STD), which leverages the sparse structure of Video-LLMs as its draft model. This design eliminates the need for an extra trained draft model, making STD a plug-and-play solution. We refer to the original Video-LLM as the dense model because it decodes using the full KV cache, whereas the model with top- $K$ attention is termed the sparse model. Both models share identical architectures, differing only in how they compute attention. Therefore, we do not need additional GPU memory to store the sparse model, nor does it require any extra training. The top- $K$ attention in the sparse model boosts decoding speed while sacrificing some token quality, whereas the dense model is slower but guarantees accuracy. We use the sparse model to auto-regressively draft the next $\gamma$ tokens, while the dense model verifies them in parallel. This approach avoids redundant full KV cache memory access and ensures the outputs exactly match those of the original Video-LLM.
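The draft-then-verify loop described above can be sketched in plain Python. This is a minimal greedy-decoding sketch, not the paper's implementation: `dense_next` and `sparse_next` are hypothetical stand-ins for a full-attention and a top-$K$-attention forward pass, and the dense model's parallel verification of all $\gamma$ drafts is emulated with sequential calls for clarity.

```python
def std_decode(dense_next, sparse_next, prefix, gamma=9, max_new=32, eos=None):
    """Sparse-to-Dense greedy decoding loop (illustrative sketch).

    dense_next(seq)  -> next token under full attention (ground truth)
    sparse_next(seq) -> next token under top-K sparse attention (draft)
    Both callables are stand-ins for the real Video-LLM forward passes;
    in the actual method the dense model scores all gamma draft
    positions in a single batched forward pass.
    """
    seq = list(prefix)
    while len(seq) - len(prefix) < max_new:
        # 1) Draft: the sparse model proposes gamma tokens auto-regressively.
        drafts = []
        for _ in range(gamma):
            drafts.append(sparse_next(seq + drafts))
        # 2) Verify: keep the longest prefix of drafts that the dense
        #    model would also have produced (emulated sequentially here).
        n = 0
        while n < gamma and dense_next(seq + drafts[:n]) == drafts[n]:
            n += 1
        # 3) Accept the n agreeing drafts plus one "bonus" dense token.
        bonus = dense_next(seq + drafts[:n])
        seq.extend(drafts[:n] + [bonus])
        if eos is not None and bonus == eos:
            break
    return seq[len(prefix):]
```

Because every emitted token is either confirmed or directly produced by the dense model, the output matches dense-only decoding exactly, which is what makes the acceleration lossless.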
+ +We conduct experiments on representative Video-LLMs including LLaVA-OneVision (Li et al., 2024a) and Qwen2-VL (Wang et al., 2024), evaluating them on video understanding benchmarks like MLVU (Zhou et al., 2024) and VideoMME (Fu et al., 2024). Experimental results show that our STD, serving as a tuning-free, plug-and-play solution, achieves up to a $1.94 \times$ acceleration of video input processing without any performance degradation. It is immediately deployable, requiring only 20 lines of code to transform an original Video-LLM into a sparse Video-LLM, and it does not require any extra training to deploy the draft model. + +# 2 Observation + +In this section, we investigate the disparity in decoded tokens between two configurations of Video-LLMs: 1) sparse top- $K$ KV cache: utilizing only the top- $K$ KV caches based on the highest attention weights; and 2) dense full KV cache: employing the complete set of KV caches. We conduct experiments using the Qwen2-VL-7B (Wang et al., 2024) model on randomly selected samples from the MLVU (Zhou et al., 2024) and Video-MME (Fu et al., 2024) datasets. We evaluate the next-token prediction accuracy of the model when employing sparse attention with top- $K$ KV caches. Our findings indicate that the model with sparse attention maintains an average token prediction accuracy exceeding $95\%$ . This high accuracy suggests that for the majority of decoded tokens, only the top- $K$ KV caches are necessary. However, it is important to note that the $95\%$ accuracy is measured per individual token and does not accumulate across multiple tokens. For instance, the accuracy of correctly predicting five consecutive tokens drops to approximately $(95\%)^5 \approx 77\%$ . + +# 3 Method + +In this section, we present Sparse-to-Dense (STD), a method designed to achieve lossless acceleration for Video-LLMs.
We refer to the original model $\mathcal{M}$ as the dense model, as it requires the full KV cache during decoding, while the sparse model $\mathcal{M}_s$ uses sparse attention. Although $\mathcal{M}_s$ is faster, it is somewhat less accurate. Unlike traditional speculative decoding, which relies on an additional draft model, our approach leverages $\mathcal{M}_s$ with the same parameters as $\mathcal{M}$ . The only difference is that $\mathcal{M}_s$ loads a reduced KV cache to perform sparse attention, eliminating the need for extra GPU memory to store another model's parameters. In the following subsections, we will detail the decoding procedure and the design of the sparse model. + +# 3.1 Decoding Procedures + +In our STD, the sparse model $\mathcal{M}_s$ functions as a draft model to propose the next $\gamma$ candidate tokens, while the dense model $\mathcal{M}$ verifies them to derive the final output sequence. Given an input sequence $\{x_0,\dots ,x_{m - 1}\}$ , consisting of visual and textual tokens, the sparse model $\mathcal{M}_s$ auto-regressively generates $\gamma$ subsequent token candidates $\{x_m,\dots ,x_{m + \gamma -1}\}$ . Because the tokens proposed by the sparse model $\mathcal{M}_s$ might not align with those predicted by the dense model $\mathcal{M}$ , they require verification by $\mathcal{M}$ . The dense model $\mathcal{M}$ verifies all $\gamma$ proposed tokens in parallel, requiring only a single I/O operation for the full KV cache. Thus, this verification procedure accelerates the process compared with the auto-regressive decoding of $\mathcal{M}$ itself, where each token requires a separate I/O operation. During the verification, $\mathcal{M}$ identifies the first $n$ tokens that align with its predictions, where $0\leq n\leq \gamma$ , and additionally provides a bonus token $\hat{x}_{n + m}$ for free.
The verified sequence $\{x_{m},\dots ,x_{m + n - 1},\hat{x}_{n + m}\}$ is then appended to the input sequence $\{x_0,\dots ,x_{m - 1}\}$ to form the context for the next round of proposal and verification. + +# 3.2 Model with Sparse Attention + +Next, we introduce the design of our sparse model $\mathcal{M}_s$ . Empirical observations in Section 2 indicate that during most decoding steps, attention scores are predominantly concentrated on a small subset of KV caches, a pattern we term sparse attention (also known as top- $K$ attention (Lou et al., 2024)). Only a small fraction of tokens require more evenly distributed dense attention. This insight motivates a strategy to selectively apply sparse attention for the majority of tokens and resort to dense attention only when necessary, reducing the I/O overhead of accessing the full KV cache and thereby improving decoding speed. + +Since the number of visual tokens is typically much larger than the number of textual tokens $(m_v \gg m_t)$ , with $m_v$ often exceeding 10,000 while $m_t$ is usually around 100, our primary focus is on reducing the size of the visual KV cache. To achieve this, we leverage the attention patterns of the textual tokens $X_t$ to identify and select the most relevant KV caches from the visual tokens. Specifically, we analyze the allocation of attention scores when processing the textual tokens $X_t = \{x_{m_v}, \dots, x_{m-1}\}$ (i.e., the last $m_t$ tokens in the input sequence) to identify which KV pairs of the visual tokens $X_v$ contribute more during the prefilling stage. For each layer $l$ , we calculate the average attention scores directed toward the visual tokens $X_v$ from the textual tokens $X_t$ . We then retain only the top- $K$ KV pairs of visual tokens with the highest attention scores. To balance performance and efficiency, we determine the retained $K$ KV caches only during the prefilling stage and avoid computationally demanding dynamic selection in the decoding stage.
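The text-guided selection just described can be sketched with NumPy. The function below is an illustrative sketch under assumed shapes, not the released implementation: `attn` is taken to hold prefill attention scores from the $m_t$ textual query tokens to the $m_v$ visual key tokens for one layer, and query heads are summed within each GQA group so every KV head gets one score per visual token.

```python
import numpy as np

def select_visual_kv(attn, num_kv_heads, k):
    """Text-guided top-K visual KV selection for one layer (sketch).

    attn: array [num_q_heads, m_t, m_v] of attention scores from the
          textual query tokens to the visual key tokens at prefill time.
    Returns an index array [num_kv_heads, k] of visual tokens to keep
    for each KV head.
    """
    num_q_heads, m_t, m_v = attn.shape
    group = num_q_heads // num_kv_heads
    # Average over textual queries -> per-head importance of each visual token.
    per_head = attn.mean(axis=1)                            # [num_q_heads, m_v]
    # Sum the scores of the query heads sharing one KV head (GQA).
    per_kv = per_head.reshape(num_kv_heads, group, m_v).sum(axis=1)
    # Indices of the k highest-scoring visual tokens per KV head (unordered),
    # then sorted so the kept cache stays in positional order.
    idx = np.argpartition(per_kv, -k, axis=-1)[:, -k:]
    return np.sort(idx, axis=-1)
```

Since the selection is computed once at prefill time, the returned indices can simply be used to gather the retained visual KV entries before decoding begins.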
The selected visual tokens can vary across different layers and attention heads, reflecting the distinct focus of each layer and head in processing the input. The selection of the KV cache of layer $l$ can be formalized as + +$$
\mathrm{Cache}_s[l] = \operatorname*{argTopK}_{x \in X_{v}} \left(\frac{1}{m_{t}} \sum_{\hat{x} \in X_{t}} A_{l}(\hat{x}, x)\right),
$$ + +where $\operatorname{argTopK}(\cdot)$ is an operation that selects the indices of the top- $K$ elements with the highest values from a given set, $K$ is a predefined hyper-parameter, and $A_{l}(\hat{x},x)$ represents the attention score from token $\hat{x}$ to token $x$ in layer $l$ . For models utilizing Grouped Query Attention (GQA) (Ainslie et al., 2023), where the number of query heads equals the number of groups multiplied by the number of KV heads, we directly sum the attention scores within each group to select the top- $K$ KV caches for each KV head. The KV cache selection operates at the granularity of individual KV heads, allowing each layer or head to retain a distinct subset of caches based on its specific requirements. + +# 3.3 I/O Complexity Analysis + +In the decoding phase, the I/O complexity of our Sparse-to-Dense decoding method can be analyzed as follows. For the sparse model $\mathcal{M}_s$ , which speculatively proposes $\gamma$ subsequent tokens, the I/O cost involves accessing the selected $K$ visual KV caches and all $m_t$ textual KV caches. Thus, the total I/O for the sparse model is given by: $\mathrm{I} / \mathrm{O}_{\mathrm{sparse}} = \gamma \times (K + m_t)$ . For the dense model $\mathcal{M}$ , which verifies the proposed tokens in parallel, the I/O cost includes accessing the full KV caches of all visual and textual tokens, resulting in: $\mathrm{I} / \mathrm{O}_{\mathrm{dense}} = m_v + m_t$ .
The total I/O for Sparse-to-Dense decoding is therefore: $\mathrm{I} / \mathrm{O}_{\mathrm{total}} = \gamma \times (K + m_t) + (m_v + m_t)$ , and the average I/O per token is + +$$
\mathrm{I/O}_{\text{average}} = \frac{\mathrm{I/O}_{\text{total}}}{\alpha \times \gamma} = \frac{\gamma \times (K + m_t) + m_v + m_t}{\alpha \times \gamma},
$$ + +where $\alpha$ is the ratio of accepted tokens among all proposed tokens. In contrast, the average I/O complexity of vanilla decoding, where each token is generated using full attention, is given by: $\mathrm{I} / \mathrm{O}_{\text{average}}^{\text{vanilla}} = m_v + m_t$ . When $\alpha$ is sufficiently large, i.e., $\alpha > (K + m_t) / (m_v + m_t) + \gamma^{-1}$ , the average I/O per token in our method becomes considerably lower, resulting in improved decoding efficiency. For instance, with $K + m_t = 1024$ , $m_v + m_t \approx 10{,}000$ , and $\gamma = 9$ , any acceptance ratio above roughly $0.21$ already yields a net I/O saving. Intuitively, we hope that the ratio between the number of accepted tokens and all proposed tokens is larger than the ratio between the number of retained KV pairs and the full KV cache size. This can be achieved due to the concentration behavior of attention scores in Section 2. The empirical superiority of our method in the next section verifies this inequality in realistic settings. + +# 4 Experiment + +Baselines. To evaluate the effectiveness of our proposed Sparse-to-Dense decoding, we compare it against the following baselines: 1) LayerSkip (Elhoushi et al., 2024): This method utilizes a model with a layer-level early exit mechanism to propose draft tokens. This baseline is inspired by the work of Elhoushi et al. on text-only LLMs and originally requires additional training. For a fair comparison with our method, we adapt it to Video-LLMs in a tuning-free manner. 2) Streaming (Chen et al., 2024a): This method employs a model with streaming attention (Xiao et al., 2023) to propose
| Methods | MLVU Acc. (%) | MLVU Speedup | VideoMME-s Acc. (%) | VideoMME-s Speedup | VideoMME-m Acc. (%) | VideoMME-m Speedup | VideoMME-l Acc. (%) | VideoMME-l Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **LLaVA-OneVision-7B** | | | | | | | | |
| LayerSkip | 10.0 | 0.47× | 5.6 | 0.33× | 8.1 | 0.46× | 4.8 | 0.44× |
| Streaming | 34.7 | 1.34× | 36.4 | 1.38× | 41.0 | 1.51× | 36.2 | 1.45× |
| STD (ours) | **47.8** | **1.72×** | **51.8** | **1.82×** | **52.1** | **1.83×** | **52.9** | **1.59×** |
| **Qwen2-VL-7B-Instruct** | | | | | | | | |
| LayerSkip | 5.2 | 0.63× | 3.7 | 0.59× | 4.9 | 0.55× | 5.7 | 0.55× |
| Streaming | 53.9 | 1.61× | 52.9 | 1.32× | 59.2 | 1.36× | 59.6 | 1.36× |
| STD (ours) | **66.1** | **1.94×** | **71.8** | **1.71×** | **73.4** | **1.62×** | **81.8** | **1.70×** |
+ +Table 1: Comparisons of the acceptance rate (Acc.) and wall-time speedup of STD and previous draft models. Bold denotes the best method. Since all the methods are lossless, we do not report the evaluation of the generated content. + +draft tokens. Similar to LayerSkip, this baseline is derived from the work of Chen et al. on text-only LLMs. To ensure comparability with our approach, we extend its implementation to Video-LLMs. + +Datasets and evaluation metrics. We evaluate Sparse-to-Dense on two widely adopted benchmarks: MLVU (Zhou et al., 2024) and VideoMME (Fu et al., 2024). MLVU is specifically designed for long-duration videos, while VideoMME encompasses short, medium, and long-duration videos, providing a comprehensive assessment across various video lengths. For our evaluation, we adhere to the protocols established in previous works on speculative decoding. We report two primary metrics: the acceptance rate of the draft tokens and the wall-time speedup. + +Implementation Details. Our experiments are conducted using widely adopted state-of-the-art Video-LLMs, specifically LLaVA-OneVision (7B) (Li et al., 2024a) and Qwen2-VL (7B) (Wang et al., 2024). We prompt the Video-LLMs to generate chain-of-thought (Wei et al., 2022) responses to enhance their performance. We set the sum of the textual token count $m_t$ and the selected visual KV cache count $K$ to 1024, with a batch size of 8. The number of tokens verified by the dense model $\mathcal{M}$ is fixed at $\gamma = 9$ . The ablation of hyperparameters can be found in Appendix Section C. Our framework is implemented based on Hugging Face's Transformers library. All experiments are conducted on NVIDIA A100 GPUs with 80 GB of memory and are repeated three times with different random seeds; the average results are reported. + +Main Results. Table 1 summarizes the performance across various reasoning tasks.
We have the following findings: 1) The draft model based on LayerSkip performs worse than those utilizing sparse attention (e.g., Streaming and STD). The primary reason for this discrepancy is that LayerSkip causes a substantial distributional shift between the draft model and the target model, leading to a low acceptance rate. Although the draft model with layer skipping runs considerably faster than the sparse attention counterparts, this advantage is insufficient to compensate for the overall wall-time speedup loss introduced by layer skipping. 2) Draft models based on sparse attention generally provide greater wall-time speedup. Whether in STD or Streaming, we observe a consistently high acceptance rate. This indicates that, most of the time, the target model does not require the full KV cache but only a sparsely selected subset of it. However, it is important to note that since LLMs perform autoregressive decoding, an incorrect token can propagate errors to subsequent tokens. Thus, verification with the full KV cache is essential. 3) Our model outperforms the streaming-based draft model, achieving a $62.2\%$ acceptance rate and a $1.74 \times$ wall-time speedup on average. This advantage stems from our method's ability to leverage the unique characteristics of Video-LLMs to select the important KV cache entries. As observed in Section 2, text-guided video cache selection effectively identifies and retains the most critical cache elements. + +# 5 Conclusion + +We introduce STD, a training-free, plug-and-play decoding method that employs sparse top- $K$ attention as the draft model in speculative decoding while leveraging full attention for verification in parallel, ensuring lossless acceleration. Extensive experiments demonstrate that STD significantly outperforms strong baselines that use LayerSkip and Streaming as the draft models. Overall, STD achieves up to a $1.94 \times$ wall-time speedup while maintaining identical output quality.
In the future, we hope to extend our work to accelerate long-CoT Video-LLMs such as QvQ (QwenLM Team, 2024). + +# Limitation + +A notable limitation of our current approach is that all KV caches are still stored in GPU memory (i.e., HBM). While HBM provides the high bandwidth necessary for fast computations, its capacity is inherently limited, which poses a significant bottleneck during inference, especially as model sizes and sequence lengths increase. The limited HBM capacity may also restrict the feasible batch size. + +In the future, a promising solution to this challenge is to offload portions of the KV caches to CPU memory. Although CPU memory typically has lower bandwidth compared to HBM, it offers substantially larger capacity. By developing efficient data transfer and caching strategies, it may be possible to mitigate the HBM bottleneck without sacrificing inference accuracy, thereby enabling more scalable and efficient processing for large Video-LLMs. + +# References + +Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245. +Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv:2401.10774. +Jianjian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, and Tao Chen. 2024. Madtp: Multi-modal alignment-guided dynamic token pruning for accelerating vision-language transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15710-15719. +Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318.
+Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, and Beidi Chen. 2024a. Magicdec: Breaking the latency-throughput tradeoff for long context generation with speculative decoding. arXiv preprint arXiv:2408.11049. +Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024b. + +An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv preprint arXiv:2403.06764. +Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, et al. 2024a. Glide with a cape: A low-hassle method to accelerate speculative decoding. arXiv preprint arXiv:2402.02082. +Cunxiao Du, Hao Zhou, Zhaopeng Tu, and Jing Jiang. 2024b. Revisiting the markov property for machine translation. arXiv preprint arXiv:2402.02084. +Mostafa Elhoushi, Akshit Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, et al. 2024. Layer skip: Enabling early exit inference and self-speculative decoding. arXiv preprint arXiv:2404.16710. +Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. 2024. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075. +Mukul Gagrani, Raghavv Goel, Wonseok Jeon, Junyoung Park, Mingu Lee, and Christopher Lott. 2024. On speculative decoding for multimodal large language models. arXiv preprint arXiv:2404.08856. +Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. 2024. Kvquant: Towards 10 million context length llm inference with kv cache quantization. arXiv preprint arXiv:2401.18079. +Yunlong Hou, Fengzhuo Zhang, Cunxiao Du, Xuan Zhang, Jiachun Pan, Tianyu Pang, Chao Du, Vincent YF Tan, and Zhuoran Yang. 
2025. Banditspec: Adaptive speculative decoding via bandit algorithms. arXiv preprint arXiv:2505.15141. +Zhengmian Hu and Heng Huang. Accelerated speculative sampling based on tree monte carlo. In *Forty-first International Conference on Machine Learning*. +Doohyuk Jang, Sihwan Park, June Yong Yang, Yeonsung Jung, Jihun Yun, Souvik Kundu, Sung-Yub Kim, and Eunho Yang. 2024. Lantern: Accelerating visual autoregressive models with relaxed speculative decoding. arXiv preprint arXiv:2410.03355. +Xiaohan Lan, Yitian Yuan, Zequn Jie, and Lin Ma. 2024. Vidcompress: Memory-enhanced temporal compression for video understanding in large language models. arXiv preprint arXiv:2410.11417. +Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274-19286. PMLR. + +Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024a. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326. +Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024b. Eagle: Speculative sampling requires rethinking feature uncertainty. arXiv preprint arXiv:2401.15077. +Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. 2024a. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26689-26699. +Zhihang Lin, Mingbao Lin, Luxi Lin, and Rongrong Ji. 2024b. Boosting multimodal large language models with visual tokens withdrawal for rapid inference. arXiv preprint arXiv:2405.05803. +Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, and Hao Zhang. 2023. Online speculative decoding. arXiv preprint arXiv:2310.07177. +Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, and Xia Hu. 2024.
Kivi: A tuning-free asymmetric 2bit quantization for kv cache. arXiv preprint arXiv:2402.02750. +Chao Lou, Zixia Jia, Zilong Zheng, and Kewei Tu. 2024. Sparser is faster and less is more: Efficient sparse attention for long-range transformers. arXiv preprint arXiv:2406.16747. +Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, et al. 2024. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 932-949. +QwenLM Team. 2024. Qvq-72b preview. https://qwenlm.github.io/blog/qvq-72b-preview/. +Xiaoqian Shen, Yunyang Xiong, Changsheng Zhao, Lemeng Wu, Jun Chen, Chenchen Zhu, Zechun Liu, Fanyi Xiao, Balakrishnan Varadarajan, Florian Bordes, et al. 2024. Longvu: Spatiotemporal adaptive compression for long video-language understanding. arXiv preprint arXiv:2410.17434. +Dingjie Song, Wenjun Wang, Shunian Chen, Xidong Wang, Michael Guan, and Benyou Wang. 2024. Less is more: A simple yet effective token reduction method for efficient multi-modal llms. arXiv preprint arXiv:2409.10994. + +Zunhai Su, Wang Shen, Linge Li, Zhe Chen, Hanyu Wei, Huangqi Yu, and Kehong Yuan. 2025. Akvq-vl: Attention-aware kv cache adaptive 2-bit quantization for vision-language models. arXiv preprint arXiv:2501.15021. +Ziteng Sun, Jae Hun Ro, Ahmad Beirami, and Ananda Theertha Suresh. 2024a. Optimal block-level draft verification for accelerating speculative decoding. arXiv preprint arXiv:2403.10444. +Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. 2024b. Spectr: Fast speculative decoding via optimal transport. Advances in Neural Information Processing Systems, 36. +Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, and Max Ryabinin. 2024.
Specexec: Massively parallel speculative decoding for interactive llm inference on consumer devices. arXiv preprint arXiv:2406.02532. +Yao Teng, Han Shi, Xian Liu, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, and Xihui Liu. 2024. Accelerating auto-regressive text-to-image generation with training-free speculative jacobi decoding. arXiv preprint arXiv:2410.01699. +Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. 2024. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837. +Yuxin Wen, Qingqing Cao, Qichen Fu, Sachin Mehta, and Mahyar Najibi. 2024. Efficient vision-language models by summarizing visual tokens into compact registers. arXiv preprint arXiv:2410.14072. +Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453. +Linli Yao, Lei Li, Shuhuai Ren, Lean Wang, Yuanxin Liu, Xu Sun, and Lu Hou. 2024. Deco: Decoupling token compression from semantic abstraction in multimodal large language models. arXiv preprint arXiv:2405.20985. +Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, et al. 2025a. Videollama 3: Frontier multimodal foundation models for image and video understanding. arXiv preprint arXiv:2501.13106. + +Xuan Zhang, Fengzhuo Zhang, Cunxiao Du, Chao Du, Tianyu Pang, Wei Gao, and Min Lin. 2025b. Lighttransfer: Your long-context llm is secretly a hybrid model with effortless adaptation. In Workshop on Reasoning and Planning for Large Language Models.
+Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. 2024. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264. + +# A Preliminary + +Speculative Decoding. We first formalize our notation and provide a brief overview of speculative decoding in autoregressive LLMs, which is the key background knowledge for our method. We represent the input sequence for a Video-LLM as a combination of visual tokens and textual tokens. Specifically, the visual tokens are denoted as $X_{v} = \{x_{0},\dots ,x_{m_{v} - 1}\}$ , and the textual prompt is denoted as $X_{t} = \{x_{m_{v}},\dots ,x_{m - 1}\}$ . Here, $m_{v}$ is the number of visual tokens, $m_{t}$ is the number of textual tokens, and the total input sequence length is $m = m_{v} + m_{t}$ . The key and value caches for token $x_{i}$ are represented by $K_{x_i}$ and $V_{x_i}$ , respectively. + +Inference of Auto-regressive Models. The inference stage of auto-regressive models, e.g., Video-LLMs, can be divided into two stages: 1) prefilling: The video LLM processes the input sequence, which includes both visual tokens $X_{v}$ and textual tokens $X_{t}$ , in parallel (with causal masking). For each token $x_{i}$ in the combined input $\{X_{v}, X_{t}\}$ , the model computes and stores the corresponding KV cache entries. This stage effectively encodes the input sequence and prepares the model for generating a response. The output of this stage is the first token $x_{m}$ of the model's response. 2) decoding: After prefilling, the model enters the decoding phase, generating output tokens sequentially. At each decoding step $j = m + 1, m + 2, \dots$ , the video LLM generates a new token $x_{j}$ based on the KV cache from all prior tokens. After each step, the KV cache is updated with the newly generated token.
This process continues iteratively until a stopping criterion is met, such as reaching an end-of-sequence token or hitting a maximum token limit.

# B Related Works

Sparse Attention in MLLMs An image or a video frame is typically represented by a large number of tokens in MLLMs, e.g., 196 visual tokens per image in VILA (Lin et al., 2024a), which significantly increases computational and storage costs during model training and inference. Visual token compression aims to reduce the number of visual tokens to address this issue directly. The majority of visual token compression methods either train from scratch or perform additional training on top of existing models. For example, some image-based MLLMs rely on vision-language alignment (Cao et al., 2024; Yao et al., 2024; Song et al., 2024) or aggressively remove all visual tokens after a certain layer (Wen et al., 2024), while methods designed for video-based MLLMs exploit the unique characteristics of video, such as employing memory mechanisms (Lan et al., 2024) or compressing tokens along spatial and temporal dimensions sequentially (Shen et al., 2024). A smaller body of work studies test-time (training-free) visual token compression for accelerating inference. FastV (Chen et al., 2024b) prunes tokens by analyzing attention patterns across shallow and deep layers, while another approach directly removes all visual tokens during the inference stage (Lin et al., 2024b). In our method, STD, the design of the drafter model is related to training-free visual token compression techniques. However, these previous methods inevitably degrade the original model's performance. In contrast, we propose to utilize visual token compression as a drafter model to achieve lossless inference acceleration.
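The top-$K$ attention idea underlying a training-free sparse drafter can be sketched in a few lines. The sketch below is ours, not the released implementation: single-query, single-head attention in NumPy, with illustrative sizes and `k` standing in for the paper's $K$:

```python
import numpy as np

def dense_attention(q, K, V):
    """Full self-attention for one query: softmax over all n cached keys."""
    s = (K @ q) / np.sqrt(q.shape[0])
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V

def topk_attention(q, K, V, k):
    """Sparse attention: keep only the k keys with the highest scores,
    then renormalize the softmax over that subset."""
    s = (K @ q) / np.sqrt(q.shape[0])
    idx = np.argpartition(s, -k)[-k:]   # indices of the k largest scores
    w = np.exp(s[idx] - s[idx].max())
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(0)
n, d = 4096, 64                         # illustrative sequence length / head dim
q = rng.normal(size=d)
K_cache = rng.normal(size=(n, d))
V_cache = rng.normal(size=(n, d))

sparse_out = topk_attention(q, K_cache, V_cache, k=256)
dense_out = dense_attention(q, K_cache, V_cache)
```

In STD the sparse module drafts tokens cheaply with top-$K$ attention, and the dense module recomputes full attention only to verify them, which is what keeps the acceleration lossless.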
Speculative Decoding Speculative decoding was proposed by Leviathan et al. (2023) and Chen et al. (2023) to accelerate the inference of LLMs, improving throughput by $2\sim 3$ times without sacrificing performance. The algorithm consists of two stages: drafting and verification. The drafting stage adopts a small model (drafter) to swiftly generate a long sequence of possible future tokens, while the verification stage accepts a subset of the drafted tokens in a token-by-token manner. Follow-up works improve speculative decoding from these two perspectives. Specinfer (Miao et al., 2024), Eagle (Li et al., 2024b), and Medusa (Cai et al., 2024) train a drafter to generate tokens with a tree structure, and verification is conducted on the tree in a branch-by-branch manner. Hu and Huang (Hu and Huang) also organize the draft tokens as a tree, but they verify the tokens in a branch as a whole. Glide (Du et al., 2024a) generates draft tokens as an unbalanced tree, which alleviates the burden of the drafter while achieving significant acceleration. SpecTr (Sun et al., 2024b) views speculative decoding through the lens of optimal transport and proposes to verify a batch of draft tokens jointly, showing that the proposed algorithm is optimal up to a multiplicative factor. Sun et al. (2024a) boost the acceleration by a joint verification of a single draft trajectory: instead of accepting tokens one by one, they accept the draft sequence as a whole. Liu et al. (2023) propose to update the parameters of the drafter in an online manner, which is shown to be effective in various applications.

![](images/5a8164f2807337019a97d23e5a1f71f263109e3c892c0f85418bd82c388ecc08.jpg)
(a)

![](images/d49bec2289750cf71bf541dbdf09d3b91730be1d35a480f9e34c227f2dfc1476.jpg)
(b)

Figure 1: Effect of $K$ and $\gamma$ on MLVU using LLaVA-OneVision-7B.
MagicDec (Chen et al., 2024a) analyzes speculative decoding in the long-context setting, with an emphasis on FLOPs and memory. SpecExec (Svirschevski et al., 2024) focuses on the special setting where LLMs offload their parameters. Several works (Gagrani et al., 2024; Jang et al., 2024; Teng et al., 2024) study speculative decoding for MLLMs. However, they focus either on image understanding or on image generation. In contrast, our work is the first to study the acceleration of video understanding via speculative decoding.

# C Ablation Studies

We also conducted additional experiments to analyze the impact of the hyperparameters $\gamma$ and $K$ on model performance. As shown in Figure 1a, as $\gamma$ increases, the speedup gradually improves. This improvement arises because the sparse model makes accurate predictions, which allows the computational overhead to be amortized over more tokens. However, when $\gamma$ reaches 13, the speedup starts to decline because the model cannot reliably predict 13 consecutive tokens correctly. At the same time, as shown in Figure 1b, when $K$ is small, the acceptance rate is low, resulting in a lower speedup. Conversely, when $K$ is large, the sparse model is no longer fast, which also reduces the speedup.
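The rise-then-fall trend in the draft length can be reproduced qualitatively with the standard expected-speedup model of speculative decoding from Leviathan et al. (2023); this is a simplified analysis, not the paper's measurements, and the acceptance rate `alpha` and relative drafter cost `c` below are illustrative assumptions:

```python
def expected_speedup(alpha, gamma, c):
    """Speedup of speculative decoding under an i.i.d.-acceptance model.

    alpha: probability that each draft token is accepted
    gamma: number of draft tokens proposed per cycle
    c:     cost of one drafter step relative to one full-model step
    """
    # Expected tokens produced per cycle: 1 + alpha + ... + alpha**gamma
    expected_tokens = (1 - alpha ** (gamma + 1)) / (1 - alpha)
    cycle_cost = gamma * c + 1   # gamma drafter steps + 1 parallel verification
    return expected_tokens / cycle_cost

# Speedup first rises with gamma, then falls once long accepted runs
# become unlikely -- qualitatively mirroring the trend around gamma = 13:
for gamma in (2, 4, 8, 13, 16):
    print(gamma, round(expected_speedup(alpha=0.8, gamma=gamma, c=0.1), 2))
```

Larger $K$ plays the role of raising `alpha` while also raising the drafter cost `c`, which matches the $K$ trade-off described above.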
\ No newline at end of file diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/images.zip b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..91075eeb14608f821e79eb057eadd2184cd581f2 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1296e0e8d1457619b76dc366e54c1174525de688a475b186b30d8698ef866c1c +size 88121 diff --git a/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/layout.json b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4025a54d2c4a5e90d7f67066cfe728983b326150 --- /dev/null +++ b/ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e81a352f07559bcfd164a39a0669ca32ea387bf9f921f9fe82f7a8a1d3c6c37 +size 283736 diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_content_list.json b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..25eb962686569d95ed92ba8ff5952fffc3403b2a --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_content_list.json @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:2d7ece755e298ad1e487bbb42735e08bf041d1b1f5cc0ea5def678bc3b49a103 +size 70409 diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_model.json b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7c3bdc8a6a38586006168852f272ce46627affe0 --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e18bdfa0b805619f138122d967f67c811804ae73624d4b52dc31c4d17f33804d +size 88920 diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_origin.pdf b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..10c5f01caa2f4449534c8b619d2e39b6d3f2d7c7 --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42c17222b248560461132847be01c46390a8e8d4a6619fec0b0b4c19da0a841e +size 194533 diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/full.md 
b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3250721fae910da90b01262931c89c9f3afcc02e --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/full.md @@ -0,0 +1,313 @@ +# Spurious Correlations and Beyond: Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models + +Fardin Ahsan Sakib1, Ziwei Zhu1, Karen Trister Grace2, Meliha Yetisgen3, Özlem Uzuner4 + +$^{1}$ Department of Computer Science, $^{2}$ School of Nursing, $^{4}$ Department of Information Sciences and Technology + +George Mason University + +$^{3}$ Department of Biomedical Informatics & Medical Education, University of Washington + +{fsakib,zzhu20,kgrace,ouzuner}@gmu.edu,melihay@uw.edu + +# Abstract + +Social determinants of health (SDOH) extraction from clinical text is critical for downstream healthcare analytics. Although large language models (LLMs) have shown promise, they may rely on superficial cues leading to spurious predictions. Using the MIMIC portion of the SHAC (Social History Annotation Corpus) dataset and focusing on drug status extraction as a case study, we demonstrate that mentions of alcohol or smoking can falsely induce models to predict current/past drug use where none is present, while also uncovering concerning gender disparities in model performance. We further evaluate mitigation strategies—such as prompt engineering and chain-of-thought reasoning—to reduce these false positives, providing insights into enhancing LLM reliability in health domains. + +# 1 Introduction + +SDOH—including substance use, employment, and living conditions—strongly influence patient outcomes and clinical decision-making (Daniel et al., 2018; Himmelstein and Woolhandler, 2018; Armour et al., 2005). 
Extracting SDOH information from unstructured clinical text is increasingly important for enabling downstream healthcare applications and analysis (Jensen et al., 2012; Demner-Fushman et al., 2009). Although LLMs have shown promise in clinical natural language processing (NLP) tasks (Hu et al., 2024; Liu et al., 2023; Singhal et al., 2023), they often rely on superficial cues (Tang et al., 2023; Zhao et al., 2017), potentially leading to incorrect predictions that undermine trust and utility in clinical settings.

Recent work has highlighted how LLMs can exhibit "shortcut learning" behaviors (Tu et al., 2020; Ribeiro et al., 2020; Zhao et al., 2018), where they exploit spurious patterns in training data rather than learning causal, generalizable features. This phenomenon spans various NLP tasks, from natural language inference (McCoy et al., 2019) to question-answering (Jia and Liang, 2017), and in clinical domains can lead to incorrect assumptions about patient conditions (Brown et al., 2023; Jabbour et al., 2020), threatening the utility of automated systems.

We investigate how LLMs produce spurious correlations in SDOH extraction, using drug status time classification (current, past, or none/unknown) as a case study. Using the MIMIC (Johnson et al., 2016) portion of the SHAC (Lybarger et al., 2021) dataset, we examine zero-shot and in-context learning scenarios across multiple LLMs (Llama (AI, 2024), Qwen (Yang et al., 2024), Llama3-Med42-70B (Christophe et al., 2024)). We explore multiple mitigation strategies to address these spurious correlations: examining the causal role of triggers through controlled removal experiments, implementing targeted prompt engineering approaches like chain-of-thought (CoT) reasoning (Wei et al., 2022), incorporating warning-based prompts, and augmenting with additional examples.
While these interventions show promise, significant false positive rates persist, highlighting the deep-rooted nature of these biases and the need for more sophisticated solutions.

# Contributions:

1. We present the first comprehensive analysis of spurious correlations in SDOH extraction across multiple LLM architectures, including domain-specialized models. Through extensive experiments in zero-shot and in-context learning (ICL) settings, we demonstrate how models rely on superficial cues and verify their causal influence through controlled ablation studies.

2. We uncover systematic gender disparities in model performance, demonstrating another form of spurious correlation where models inappropriately leverage patient gender for drug status time classification predictions.

3. We evaluate multiple prompt-based mitigation strategies (CoT, warnings, more examples) and analyze their limitations, demonstrating that while they reduce incorrect drug status time predictions, more robust solutions are needed for reliable clinical NLP deployment.

# 2 Related Work

Previous work on extracting SDOH from clinical text spans a progression from rule-based methods to fine-tuned neural models, leveraging annotated corpora for tasks like substance use and employment status extraction (Hatef et al., 2019; Patra et al., 2021; Yu et al., 2022; Han et al., 2022; Uzuner et al., 2008; Stemerman et al., 2021; Lybarger et al., 2023). More recent efforts have explored prompt-based approaches with LLMs, including GPT-4, to reduce reliance on extensive annotations (Ramachandran et al., 2023). While these approaches achieve competitive performance, studies across NLP tasks have shown that both fine-tuned and prompting-based methods often exploit spurious correlations or superficial cues (Ribeiro et al., 2020; Geirhos et al., 2020; Tu et al., 2020).
Prior investigations have focused largely on spurious correlations in standard NLP tasks and supervised scenarios (McCoy et al., 2019; Zhao et al., 2018). In contrast, our work examines how these issues manifest in zero-shot and in-context SDOH extraction settings, and we propose prompt-level strategies to mitigate these correlations. + +# 3 Methodology + +# 3.1 Dataset and Task + +We use the MIMIC-III portion of the SHAC dataset (Lybarger et al., 2021), which comprises 4405 deidentified social history note sections derived from MIMIC-III (Johnson et al., 2016) and the University of Washington clinical notes. SHAC is annotated using the BRAT tool (Stenetorp et al., 2012), capturing a variety of SDOH event types (e.g., Alcohol, Drug, Tobacco) as triggers along with associated arguments, including temporal status. To enable demographic analysis, we augmented the SHAC data by linking it with patient demographic information available in the original MIMIC-III dataset. + +In this work, we examine spurious correlations in SDOH extraction through temporal drug status + +classification (current, past, or none/unknown). We adopt a two-step pipeline (Ma et al., 2022, 2023): + +(1) Trigger Identification: Given a social history note, the model identifies spans corresponding to the target event type (e.g., drug use). +(2) Argument Resolution: For each trigger, models apply a multiple-choice QA prompt to determine the temporal status (current/past/none). + +The dataset contains diverse patterns of substance documentation, see Appendix B for detailed examples of the task and annotation schema. + +# 3.2 Experimental Setup + +Model Configurations We evaluate multiple model configurations: + +- Zero-Shot: Models receive only task instructions and input text, with no examples +- In-Context Learning (ICL): Models are provided with three example demonstrations before making predictions on a new instance. 
Examples are selected to maintain balanced representation across substance use patterns (none/single/multiple) and drug use outcomes (positive/negative). +- Fine-Tuning (SFT): We also fine-tune a Llama-3.1-8B model on the MIMIC portion of the SHAC dataset to assess whether domain adaptation reduces spurious correlations. + +We consider Llama-3.1-70B (zero-shot, ICL), Llama-3.1-8B (fine-tuned on MIMIC), Qwen-72B (ICL), and Llama3-Med42-70B (ICL). These models span various parameter sizes and domain specializations. The fine-tuned Llama-8B model provides insights into whether in-domain adaptation mitigates the observed shortcut learning. + +Prompting Strategies We additionally use two other prompting strategies (see Appendix A for complete templates): + +Chain-of-Thought (CoT): This prompt explicitly guides reasoning through five steps: (1) read the social history note carefully, (2) identify relevant information, (3) consider examples provided, (4) explain your reasoning process, (5) provide the answer. This encourages explicit reasoning to reduce shortcuts. + +Warning-Based: This incorporates explicit guidelines to counter spurious correlations: (1) evaluate each factor independently - never assume + +one behavior implies another, (2) extract only explicitly stated information - avoid making assumptions based on demographic or other factors, (3) use [none] when information isn't mentioned. + +Evaluation Framework Our primary evaluation metric is the false positive rate (FPR), defined as: $FPR = FP / (FP + TN)$ where FP represents false positives (predicted current/past use when ground truth was none/unknown) and TN represents true negatives (correctly predicted none/unknown). We prioritize FPR given the clinical risks of incorrect positive drug use predictions—including patient stigmatization, biased provider perceptions, and diminished trust in automated systems (Van Boekel et al., 2013; Dahl et al., 2022). 
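The FPR defined above can be computed directly from paired predictions and gold labels; the following is a minimal sketch with toy labels (illustrative, not SHAC data):

```python
POSITIVE = {"current", "past"}  # statuses that count as a positive drug-use prediction

def false_positive_rate(preds, golds):
    """FPR = FP / (FP + TN), computed over notes whose gold drug status
    is none/unknown (i.e., the substance-negative ground truth)."""
    fp = sum(1 for p, g in zip(preds, golds)
             if g not in POSITIVE and p in POSITIVE)
    tn = sum(1 for p, g in zip(preds, golds)
             if g not in POSITIVE and p not in POSITIVE)
    return fp / (fp + tn) if fp + tn else 0.0

golds = ["none", "none", "current", "none", "unknown"]
preds = ["current", "none", "current", "past", "none"]
print(false_positive_rate(preds, golds))  # 2 FP, 2 TN -> 0.5
```

Notes whose gold status is current/past are excluded from the denominator, so the metric isolates exactly the erroneous positive predictions discussed above.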
A higher FPR indicates more frequent erroneous predictions that could directly impact patient care. We specifically examine FPR disparities between substance-positive and substance-negative contexts to reveal whether models rely on superficial cues rather than actual evidence. See Appendix C for extended discussion. + +To analyze potential spurious correlations, we categorize notes based on their ground truth substance use status: + +- Substance-positive: Notes documenting current/past use of the respective substance (alcohol or smoking) +- Substance-negative: Notes where the ground truth indicates no use or unknown status + +# Experimental Settings + +- Original: Evaluate models on the original notes. +- Without Alcohol/Smoking Triggers: Remove mentions of alcohol/smoking to test their causal role in inducing false positives. + +# 4 Results + +# 4.1 RQ1: Do Large Language Models Exhibit Spurious Correlations in SDOH Extraction? + +As shown in Table 1, our analysis in a zero-shot setting with Llama-70B reveals high false positive rates for drug status time classification in alcohol-positive $(66.21\%)$ and smoking-positive $(61.11\%)$ notes. In contrast, alcohol-negative and smoking-negative notes show substantially lower false positive rates $(28.83\%$ and $29.76\%$ , respectively). This + +stark contrast suggests that the mere presence of alcohol or smoking triggers biases the model towards inferring nonexistent drug use. These biases likely stem from the pre-training phase, potentially reinforcing societal assumptions about correlations between different types of substance use. + +# 4.2 RQ2: Do In-Context Learning and Fine-Tuning Reduce These Spurious Correlations? + +Providing three in-context examples reduces false positives significantly. For Llama-70B, ICL lowers alcohol-positive mismatches from $66.21\%$ to $48.28\%$ , though a gap remains relative to alcohol-negative notes $(11.71\%)$ . 
Similarly, smoking-positive mismatches decrease from $61.11\%$ to $36.42\%$ , versus $18.05\%$ for smoking-negative. The effectiveness of ICL suggests that explicit examples help the model focus on relevant features, though the persistence of some bias indicates deep-rooted associations from pre-training. Fine-tuning Llama-8B on the MIMIC subset (SFT) yields further improvements: alcohol-positive mismatches drop to $32.41\%$ and smoking-positive to $36.42\%$ , with the corresponding negatives at $12\%$ and $7\%$ respectively, indicating that domain adaptation helps override some pre-trained biases.

# 4.3 RQ3: Are These Superficial Mentions Causally Driving the Model's Predictions?

To confirm the causal role of alcohol and smoking mentions, we remove these triggers from the notes. Across models, this consistently lowers false positives. For instance, Llama-70B zero-shot sees alcohol-positive mismatches fall from $66.21\%$ to $55.17\%$ after removing alcohol triggers. Similarly, Llama-8B-SFT reduces alcohol-positive errors from $32.41\%$ to $26.9\%$ . Similar trends are observed across other architectures, including domain-specific models (see Appendix E), confirming that alcohol and smoking cues spuriously bias the models' drug-use predictions.

Table 1: False Positive Rates (%) Across Different Models and Approaches. *Smoking+Alcohol* refers to cases where both *Smoking-positive* and *Alcohol-positive* are true.

| Cases | Llama-70B Zero-shot | Llama-70B ICL | Llama-70B CoT | Llama-70B Warning | Llama-70B Increased-Examples | Llama-8B Vanilla | Llama-8B Fine-tuned | Llama3-Med42-70B / Qwen-72B ICL |
|---|---|---|---|---|---|---|---|---|
| Alcohol-positive | 66.21 | 48.28 | 33.79 | 40.69 | 45.52 | 73.10 | 32.41 | 66.90 |
| Smoking-positive | 61.11 | 36.42 | 25.93 | 29.63 | 30.25 | 74.07 | 36.42 | 57.41 |
| Alcohol-negative | 28.83 | 11.71 | 6.76 | 5.41 | 10.81 | 37.39 | 12.16 | 16.22 |
| Smoking-negative | 29.76 | 18.05 | 10.73 | 11.22 | 20.00 | 33.66 | 7.32 | 19.51 |
| Smoking+Alcohol | 73.26 | 51.16 | 34.88 | 45.35 | 39.53 | 81.40 | 40.70 | 76.74 |

# 4.4 RQ4: Are There Systematic Demographic Variations in These Spurious Correlations?

Beyond substance-related triggers, our analysis (Table 2) uncovers another concerning form of spurious correlation: systematic performance differences based on patient gender. Just as models incorrectly rely on mere mentions of alcohol or smoking to infer substance use, they appear to leverage patient gender as an inappropriate predictive signal. For the base Llama-70B model in zero-shot settings, false positive rates show stark gender disparities: male patients consistently face higher misclassification rates than female patients (71.15% vs. 53.66% for alcohol-positive cases, and 66.67% vs. 50.88% for smoking-positive cases). This pattern persists with in-context learning, with the gender gap remaining substantial (alcohol-positive: 52.88% male vs. 36.59% female). Fine-tuned models show similar disparities, with Llama-8B-SFT maintaining a performance gap of approximately 15 percentage points between genders for alcohol-positive cases.

Notably, these gender-based differences exhibit complex interactions with substance-related triggers. Cases involving positive substance mentions show the most pronounced disparities, with male patients seeing up to 20 percentage points higher false positive rates. This suggests that the model's shortcut learning compounds across dimensions: gender biases amplify substance-related biases and vice versa. The persistence of these interacting biases across model architectures, sizes, and prompting strategies suggests they arise from deeply embedded patterns in both pre-training data and medical documentation practices.

# 5 Mitigation Strategies and Results

We explore several mitigation techniques to address the spurious correlations identified in our analysis:

Chain-of-Thought (CoT) As shown in Table 1, instructing the model to reason step-by-step before producing an answer leads to substantial reductions across all architectures. For Llama-70B, CoT reduces alcohol-positive mismatches from $66.21\%$ (zero-shot) to $33.79\%$ , with smoking-positive cases decreasing from $61.11\%$ to $25.93\%$ . Similar improvements are observed in other models (see Appendix F), with Qwen-72B showing a particularly strong response to CoT.
This suggests that CoT helps models avoid superficial cues and focus on explicit information.

Warning-Based Instructions We prepend explicit instructions cautioning the model not to assume drug use without evidence and to treat each factor independently. With Llama-70B, these warnings lower alcohol-positive mismatches from $66.21\%$ to approximately $40.69\%$ , and also benefit smoking-positive scenarios. While not as strong as CoT, these warnings yield meaningful improvements across different architectures.

Increased Number of Examples Providing more than three examples—up to eight—further stabilizes predictions. For Llama-70B, increasing the number of examples reduces false positive rates considerably, with alcohol-positive mismatches falling to $45.52\%$ (compared to $66.21\%$ zero-shot). Similar trends are observed in other models, though the magnitude of improvement varies (see Appendix F). While not as dramatic as CoT, additional examples help guide models away from faulty heuristics.

Table 2: Gender-Based Analysis of False Positive Rates (%) Across Models

| Cases | Llama-70B Zero-shot Female | Llama-70B Zero-shot Male | Llama-70B ICL Female | Llama-70B ICL Male | Llama-8B SFT Female | Llama-8B SFT Male | Qwen-72B Female | Qwen-72B Male |
|---|---|---|---|---|---|---|---|---|
| Alcohol-positive | 53.66 | 71.15 | 36.59 | 52.88 | 21.95 | 36.54 | 68.29 | 60.58 |
| Smoking-positive | 50.88 | 66.67 | 28.07 | 40.95 | 24.56 | 42.86 | 49.12 | 55.24 |
| Alcohol-negative | 29.13 | 28.42 | 9.45 | 14.74 | 9.45 | 15.79 | 47.24 | 46.32 |
| Smoking-negative | 27.03 | 32.98 | 9.91 | 27.66 | 6.31 | 8.51 | 54.05 | 52.13 |
| Smoking+Alcohol | 81.82 | 84.62 | 54.55 | 58.97 | 27.27 | 53.85 | 27.27 | 30.77 |

# 6 Discussion

Our findings highlight a key challenge in applying large language models to clinical information extraction: even when models achieve strong performance on average, they rely on superficial cues rather than a genuine understanding of the underlying concepts. The presence of alcohol- or smoking-related mentions biases models to infer drug use incorrectly, and these shortcuts persist across Llama variants, Qwen, and Llama3-Med42-70B. The effectiveness of mitigation strategies like chain-of-thought reasoning, warning-based instructions, and additional examples underscores the importance of careful prompt design. While these interventions help guide models to focus on explicit evidence, their partial success suggests the need for more robust approaches: integrating domain-specific knowledge, implementing adversarial training, or curating more balanced datasets. Our demographic analysis reveals that these spurious correlations are not uniformly distributed across patient groups, raising fairness concerns for clinical deployment. Addressing such disparities requires both algorithmic improvements and careful consideration of deployment strategies. Clinicians and stakeholders must be aware of these limitations before deploying LLMs in clinical decision-support systems. Understanding these systematic biases in automated analysis can inform improvements not only in model development but also in clinical documentation practices and standards.

# 7 Implications Beyond NLP: Clinical Documentation and Practice

The implications of this study extend beyond NLP methodologies. Our analysis reveals that these models not only learn but potentially amplify existing biases in clinical practice. The identified error patterns—particularly the tendency to infer substance use from smoking/alcohol mentions and gender-based performance disparities—mirror documented provider biases in clinical settings (Saloner et al., 2023; Meyers et al., 2021). Notably, these biases appear to originate partly from medical documentation practices themselves (Ivy et al., 2024; Kim et al., 2021; Markowitz, 2022). Our finding that explicit evidence-based reasoning (through CoT) reduces these biases aligns with established strategies for mitigating provider bias (Mateo and Williams). This parallel between computational and human biases suggests that systematic analysis of LLM behavior could inform broader efforts to identify and address biases in medical documentation and practice, potentially contributing to improved provider education and documentation standards.
# 8 Conclusion

This work presents the first systematic exploration of spurious correlations in SDOH extraction, revealing how contextual cues can lead to incorrect and potentially harmful predictions in clinical settings. Beyond demonstrating the problem, we have evaluated several mitigation approaches that, while promising, indicate the need for more sophisticated solutions. Future work should focus on developing robust debiasing techniques, leveraging domain expertise, and establishing comprehensive evaluation frameworks to ensure reliable deployment across diverse populations.

# 9 Limitations

Dataset limitations Our analysis relied exclusively on the MIMIC portion of the SHAC dataset, which constrains the generalizability of our findings. While we observe consistent gender-based performance disparities, a more diverse dataset could help establish the breadth of these biases.

Model coverage We focused solely on open-source large language models (e.g., Llama, Qwen). Extending the evaluation to additional data sources, closed-source models, and other domain-specific architectures would help verify the robustness of our conclusions.

Causal understanding While we established the causality of triggers through removal experiments, understanding why specific triggers affect certain models or scenarios would require deeper analysis using model interpretability techniques.

Methodology scope Our study focused exclusively on generative methods; results may not generalize to traditional pipeline-based approaches that combine sequence labeling and relation classification.

Mitigation effectiveness While we identified various spurious correlations, our mitigation strategies
We followed all data use agreements and institutional IRB protocols. Although the dataset is fully de-identified, biases within the models could raise ethical concerns in real-world applications. Further validation and safeguards are recommended before clinical deployment. + +# 11 Acknowledgments + +We thank our collaborators for their valuable feedback and support. Generative AI assistants were used for grammar checking and LaTeX formatting; the authors retain full responsibility for the final content and analysis. + +# References + +Meta AI. 2024. Llama 3.1 model card. https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md. Accessed: 2024-12-13. +BS Armour, T Woollery, A Malarcher, TF Pechacek, and C Husten. 2005. Annual smoking-attributable mortality, years of potential life lost, and productivity losses—United States, 1997-2001. JAMA: Journal of the American Medical Association, 294(7). +Alexander Brown, Nenad Tomasev, Jan Freyberg, Yuan Liu, Alan Karthikesalingam, and Jessica Schrouff. 2023. Detecting shortcut learning for fair medical ai using shortcut testing. Nature communications, 14(1):4314. +Clément Christophe, Praveen K Kanithi, Tathagata Raha, Shadab Khan, and Marco AF Pimentel. 2024. Med42-v2: A suite of clinical llms. Preprint, arXiv:2408.06142. +Rachel A Dahl, J Priyanka Vakkalanka, Karisa K Harland, and Joshua Radke. 2022. Investigating healthcare provider bias toward patients who use drugs using a survey-based implicit association test: Pilot study. Journal of addiction medicine, 16(5):557-562. +Hilary Daniel, Sue S Bornstein, Gregory C Kane, Health, and Public Policy Committee of the American College of Physicians*. 2018. Addressing social determinants to improve patient care and promote health equity: an american college of physicians position paper. Annals of internal medicine, 168(8):577-578. + +Dina Demner-Fushman, Wendy W Chapman, and Clement J McDonald. 2009. 
What can natural language processing do for clinical decision support? Journal of biomedical informatics, 42(5):760-772. +Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673. +Sifei Han, Robert F Zhang, Lingyun Shi, Russell Richie, Haixia Liu, Andrew Tseng, Wei Quan, Neal Ryan, David Brent, and Fuchiang R Tsui. 2022. Classifying social determinants of health from unstructured electronic health records using deep learning-based natural language processing. Journal of biomedical informatics, 127:103984. +Elham Hatef, Masoud Rouhizadeh, Iddrisu Tia, Elyse Lasser, Felicia Hill-Briggs, Jill Marsteller, Hadi Kharrazi, et al. 2019. Assessing the availability of data on social and behavioral determinants in structured and unstructured electronic health records: a retrospective analysis of a multilevel health care system. *JMIR medical informatics*, 7(3):e13802. +David U Himmelstein and Steffie Woolhandler. 2018. Determined action needed on social determinants. Annals of internal medicine, 168(8):596-597. +Yan Hu, Qingyu Chen, Jingcheng Du, Xueqing Peng, Vipina Kuttichi Keloth, Xu Zuo, Yujia Zhou, Zehan Li, Xiaoqian Jiang, Zhiyong Lu, et al. 2024. Improving large language models for clinical named entity recognition via prompt engineering. Journal of the American Medical Informatics Association, page ocad259. +Zalaya K Ivy, Sharon Hwee, Brittany C Kimball, Michael D Evans, Nicholas Marka, Catherine Bendel, and Alexander A Boucher. 2024. Disparities in documentation: evidence of race-based biases in the electronic medical record. Journal of Racial and Ethnic Health Disparities, pages 1-7. +Sarah Jabbour, David Fouhey, Ella Kazerooni, Michael W Sjoding, and Jenna Wiens. 2020. Deep learning applied to chest x-rays: exploiting and preventing shortcuts. In Machine Learning for Healthcare Conference, pages 750-782. 
PMLR. +Peter B Jensen, Lars J Jensen, and Søren Brunak. 2012. Mining electronic health records: towards better research applications and clinical care. Nature Reviews Genetics, 13(6):395-405. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328. +Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1-9. + +Min Kyung Kim, Joy Noel Baumgartner, Jennifer Headley, Julius Kirya, James Kaggwa, and Joseph R Egger. 2021. Medical record bias in documentation of obstetric and neonatal clinical quality of care indicators in uganda. Journal of Clinical Epidemiology, 136:10-19. +Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, et al. 2023. Deid-gpt: Zero-shot medical text de-identification by gpt-4. arXiv preprint arXiv:2303.11032. +Kevin Lybarger, Nicholas J Dobbins, Ritche Long, Angad Singh, Patrick Wedgeworth, Ozlem Uzuner, and Meliha Yetisgen. 2023. Leveraging natural language processing to augment structured social determinants of health data in the electronic health record. Journal of the American Medical Informatics Association, 30(8):1389-1397. +Kevin Lybarger, Mari Ostendorf, and Meliha Yetisgen. 2021. Annotating social determinants of health using active learning, and characterizing determinants using neural event extraction. Journal of Biomedical Informatics, 113:103631. +Mingyu Derek Ma, Alexander K Taylor, Wei Wang, and Nanyun Peng. 2022. Dice: data-efficient clinical event extraction with generative models. arXiv preprint arXiv:2208.07989. +Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559. 
David M Markowitz. 2022. Gender and ethnicity bias in medicine: A text analysis of 1.8 million critical care records. PNAS Nexus, 1(4):pgac157.
CM Mateo and DR Williams. 2020. Addressing bias and reducing discrimination: The professional responsibility of health care providers. Academic Medicine, 95.
RT McCoy, E Pavlick, and T Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007.
SA Meyers, VA Earnshaw, Brittany D'Ambrosio, Natasia Courchesne, Dan Verb, and LR Smith. 2021. The intersection of gender and drug use-related stigma: A mixed methods systematic review and synthesis of the literature. Drug and Alcohol Dependence, 223:108706.
Braja G Patra, Mohit M Sharma, Veer Vekaria, Prakash Adekkanattu, Olga V Patterson, Benjamin Glicksberg, Lauren A Lepow, Euijung Ryu, Joanna M Biernacka, Al'ona Furmanchuk, et al. 2021. Extracting social determinants of health from electronic health records using natural language processing: a systematic review. Journal of the American Medical Informatics Association, 28(12):2716-2727.

Giridhar Kaushik Ramachandran, Yujuan Fu, Bin Han, Kevin Lybarger, Nicholas J Dobbins, Ozlem Uzuner, and Meliha Yetisgen. 2023. Prompt-based extraction of social determinants of health using few-shot learning. Preprint, arXiv:2306.07170.
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. arXiv preprint arXiv:2005.04118.
Brendan Saloner, Wenshu Li, Michael Flores, Ana M Progovac, and Benjamin Lê Cook. 2023. A widening divide: Cigarette smoking trends among people with substance use disorder and criminal legal involvement. Health Affairs, 42(2):187-196.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.
Rachel Stemerman, Jaime Arguello, Jane Brice, Ashok Krishnamurthy, Mary Houston, and Rebecca Kitzmiller. 2021. Identification of social determinants of health using multi-label classification of electronic health record clinical notes. JAMIA Open, 4(3):00aa069.
Pontus Stenetorp, Sampo Pyysalo, Goran Topić, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107.
Ruixiang Tang, Dehan Kong, Longtao Huang, and Hui Xue. 2023. Large language models can be lazy learners: Analyze shortcuts in in-context learning. arXiv preprint arXiv:2305.17256.
Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.
Özlem Uzuner, Ira Goldstein, Yuan Luo, and Isaac Kohane. 2008. Identifying patient smoking status from medical discharge records. Journal of the American Medical Informatics Association, 15(1):14-24.
Leonieke C Van Boekel, Evelien PM Brouwers, Jaap Van Weeghel, and Henk FL Garretsen. 2013. Stigma among health professionals towards patients with substance use disorders and its consequences for healthcare delivery: systematic review. Drug and Alcohol Dependence, 131(1-2):23-35.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
+ +An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671. + +Zehao Yu, Xi Yang, Chong Dang, Songzi Wu, Prakash Adekkanattu, Jyotishman Pathak, Thomas J George, William R Hogan, Yi Guo, Jiang Bian, et al. 2022. A study of social and behavioral determinants of health in lung cancer patients using transformers-based natural language processing models. In AMIA Annual Symposium Proceedings, volume 2021, page 1225. + +Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457. + +Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876. + +# A Prompting Strategies + +All prompting approaches share a base system message identifying the model's role as "an AI assistant specialized in extracting and analyzing social history information from medical notes." 
Each strategy then builds upon this foundation with specific modifications:

# Zero-Shot

The baseline approach uses a minimal prompt structure:

```
System: AI assistant specialized in social history extraction
User: For the following social history note:
[Clinical note text]
[Task instruction]
[Options if applicable]
```

This setup evaluates the model's ability to perform extraction tasks using only its pre-trained knowledge, without additional guidance or examples.

# In-Context Learning (ICL)

This approach augments the base prompt with three carefully selected demonstration examples. Each example follows a structured JSON format:

```json
{"id": "example-id",
"instruction": "Extract all Drug text spans...",
"input": "Social History: Patient denies drug use...",
"options": "[Multiple choice options if applicable]",
"output": "Expected extraction or classification"}
```

# Chain-of-Thought (CoT)

Building upon ICL, this method explicitly guides the model through a structured reasoning process:

```
Please approach this task step-by-step:
1. Carefully read the social history note
2. Identify all relevant information related to the question
3. Consider the examples provided
4. Explain your reasoning process
5. Provide your final answer
```

This approach aims to reduce spurious correlations and shortcut learning by encouraging explicit articulation of the reasoning process before arriving at the final extraction or classification.

# Warning-Based

This specialized approach incorporates explicit rules and warnings in the system message:

```
Important Guidelines:
1. Evaluate each factor independently - never assume one behavior implies another
2. Extract only explicitly stated information - don't make assumptions based on demographics or other factors
3. If information isn't mentioned, use [none] or select "not mentioned" option
```

These guidelines specifically address the challenge of false positives in substance use detection by discouraging inference-based conclusions without explicit textual evidence. The warnings are designed to counteract the model's tendency to make assumptions based on superficial cues or demographic factors.

# B Dataset Details

# B.1 Data Format and Annotation Process

The SHAC dataset originally consists of paired text files (.txt) containing social history notes and annotation files (.ann) capturing SDOH information. We convert these into a question-answering format to evaluate LLMs. Below we demonstrate this process with a synthetic example:

# Raw Note (.txt)

SOCIAL HISTORY:

Patient occasionally uses alcohol. Denies any illicit drug use.

# BRAT Annotations (.ann)

```
T1 Alcohol 24 31 alcohol
T2 Drug 47 50 drug
T3 StatusTime 8 19 occasionally
T4 StatusTime 32 37 denies

E1 Alcohol:T1 Status:T3
E2 Drug:T2 Status:T4

A1 StatusTimeVal T3 current
A2 StatusTimeVal T4 none
```

Here, T1 and T2 are triggers - spans of text that indicate the presence of SDOH events (e.g., "alcohol" for substance use). The annotations also capture arguments - additional information about these events, such as their temporal status represented by T3 and T4. For example, T3 ("occasionally") indicates a temporal status of current for alcohol use.

We transform these structured annotations into two types of questions:

Trigger Identification Questions about identifying relevant event spans:

```jsonl
{"id": "0001-Alcohol",
"instruction": "Extract all Alcohol text spans as it is from the note. If multiple spans present, separate them by [SEP]. If none, output [none].",
"input": "SOCIAL HISTORY: Patient occasionally uses alcohol. Denies any illicit drug use.",
"output": "alcohol"}
```

Argument-Resolution Questions about determining event properties:

```jsonl
{"id": "0001-Alcohol_StatusTime",
"instruction": "Choose the best StatusTime value for the (Alcohol) from the note:",
"input": "SOCIAL HISTORY: Patient occasionally uses alcohol. Denies any illicit drug use.",
"options": "Options: (a) none. (b) current. (c) past. (d) Not Applicable.",
"output": "(b) current."}
```

# C Metric Selection and Justification

Our focus on False Positive Rate (FPR) is motivated by the unique risks associated with incorrect substance use predictions in clinical settings (Van Boekel et al., 2013; Dahl et al., 2022). While traditional metrics like accuracy or F1-score treat all errors equally, FPR specifically captures the rate of unwarranted "positive" classifications—a critical concern when dealing with sensitive patient information. High FPR values indicate that models frequently make unjustified drug use predictions, which could lead to:

- Patient stigmatization and potential discrimination
- Reduced quality of care due to biased provider perceptions
- Diminished trust in automated clinical decision support systems

Conversely, lower FPR values suggest better model reliability in avoiding these harmful misclassifications. While comprehensive evaluation would benefit from additional metrics, FPR serves as a particularly relevant indicator for assessing model safety and reliability in clinical applications.

# D Model Fine-tuning and Computational Resources

We fine-tuned Llama-8B using LoRA with rank 64 and dropout 0.1. Key training parameters include a learning rate of 2e-4, batch size of 4, and 5 training epochs. Training was conducted on 2 NVIDIA A100 GPUs for approximately 3 hours using mixed precision (FP16).
For our main experiments, we used several large language models: Llama-70B (70B parameters), Qwen-72B (72B parameters), Llama3-Med42-70B (70B parameters), and our fine-tuned Llama-8B (8B parameters). The inference experiments across all models required approximately 100 GPU hours on 2 NVIDIA A100 GPUs. This computational budget covered all experimental settings including zero-shot, in-context learning, and the evaluation of various mitigation strategies. + +# E Trigger Removal Experiments + +Table 3: Impact of Trigger Removal on Llama 3.1 Models False Positive Rates (%) + +
| Cases | Llama 3.1 70B Zero-shot: Full | Llama 3.1 70B Zero-shot: Without Alcohol | Llama 3.1 70B Zero-shot: Without Smoking | Llama 3.1 8B SFT: Full | Llama 3.1 8B SFT: Without Alcohol | Llama 3.1 8B SFT: Without Smoking |
| --- | --- | --- | --- | --- | --- | --- |
| Alcohol-positive | 66.21 | 55.17 | 64.14 | 32.41 | 26.90 | 33.10 |
| Smoking-positive | 61.11 | 54.94 | 56.79 | 36.42 | 32.10 | 31.48 |
| Alcohol-negative | 28.83 | 25.23 | 23.87 | 12.16 | 12.16 | 8.11 |
| Smoking-negative | 29.76 | 22.93 | 26.34 | 7.32 | 6.83 | 7.32 |
| Smoking+Alcohol | 73.26 | 65.12 | 72.09 | 40.70 | 32.56 | 41.86 |
+ +Table 4: Impact of Trigger Removal on Additional Models' False Positive Rates (%) + +
| Cases | Llama 3.1 70B ICL: Full | Llama 3.1 70B ICL: Without Alcohol | Llama 3.1 70B ICL: Without Smoking | Llama3-Med42-70B: Full | Llama3-Med42-70B: Without Alcohol | Llama3-Med42-70B: Without Smoking | Qwen-72B: Full | Qwen-72B: Without Alcohol | Qwen-72B: Without Smoking |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alcohol-positive | 48.28 | 38.62 | 47.59 | 66.90 | 53.10 | 64.83 | 62.76 | 51.72 | 54.48 |
| Smoking-positive | 36.42 | 32.72 | 32.09 | 57.41 | 51.85 | 52.47 | 53.09 | 45.68 | 51.23 |
| Alcohol-negative | 11.71 | 16.22 | 10.81 | 16.22 | 16.22 | 13.96 | 46.85 | 45.05 | 47.75 |
| Smoking-negative | 18.05 | 14.15 | 15.12 | 19.51 | 14.15 | 19.51 | 53.17 | 49.27 | 49.76 |
| Smoking+Alcohol | 51.16 | 44.19 | 46.51 | 76.74 | 66.28 | 73.26 | 56.98 | 43.02 | 50.00 |
+ +# F Mitigation Experiments + +Table 5: Impact of Mitigation Strategies on Additional Models' False Positive Rates (%) + +
| Cases | Llama3-Med42-70B: ICL | Llama3-Med42-70B: CoT | Llama3-Med42-70B: Warning | Llama3-Med42-70B: Increased Examples | Qwen-72B: ICL | Qwen-72B: CoT | Qwen-72B: Warning | Qwen-72B: Increased Examples |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alcohol-positive | 66.90 | 48.28 | 62.76 | 63.45 | 62.76 | 28.97 | 34.38 | 36.55 |
| Smoking-positive | 57.41 | 35.19 | 53.09 | 50.62 | 53.09 | 23.46 | 32.09 | 33.33 |
| Alcohol-negative | 16.22 | 6.76 | 16.67 | 15.76 | 46.85 | 19.82 | 22.07 | 26.12 |
| Smoking-negative | 19.51 | 13.66 | 18.54 | 18.05 | 53.17 | 17.07 | 25.85 | 29.27 |
| Smoking+Alcohol | 76.74 | 53.49 | 72.09 | 68.60 | 56.98 | 32.56 | 37.21 | 41.86 |
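For reference, the FPR figures reported in the tables above follow the definition in Sec. C. A minimal sketch with synthetic labels (not SHAC data; the function name `false_positive_rate` is illustrative):

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): share of substance-negative notes wrongly flagged positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Synthetic example: 4 drug-negative notes, model flags 1 of them.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 0]
assert false_positive_rate(y_true, y_pred) == 0.25
```

Note that only the negative instances enter the denominator, which is why FPR isolates the unwarranted positive predictions that the mitigation strategies target.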
\ No newline at end of file diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/images.zip b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7d883716fa11cf9247e06f69df9483fcce7e7482 --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e209184402658712bacafc4c3b4a89f493b7bb0614338913dc261ac0a5007df3 +size 250694 diff --git a/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/layout.json b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..514157c60e3ecbefa4f47a5ae0675103458db860 --- /dev/null +++ b/ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34e29bcb6dc7cb665f13994600097cb2fa110b6b04b23c06ce51663f6ac103e9 +size 310610 diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_content_list.json b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e540217f2b2f3fe183bfe01ff7cd9b342e777a2c --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based 
Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d429a8b5d4c6cf88c4bac3c8c71fc5c839928f2d005a6da4fed1ebc7d88df4b +size 102683 diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_model.json b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_model.json new file mode 100644 index 0000000000000000000000000000000000000000..be0116f71b2ee3c417782dd3d08fa494bec80322 --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e70e06b9dcfe02d71ce336a14fc019b6970cf6a803681db4634ffaee8ede2064 +size 123664 diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_origin.pdf b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..70a2ae29f42c1bcd75b6ca8ed3e2d2c878b77bb2 --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1502884d6013c6c03e6410ff0f7eb3921aca86669bcdd057c114db75418bcc3a +size 777727 diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/full.md b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..070330d3396d33f6e0a8d4ad6eecff4d217447cd --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/full.md @@ -0,0 +1,377 @@

# State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models

Wonjun Kang $^{1,2*}$, Kevin Galim $^{2*}$, Yuchen Zeng $^{3*}$, Minjae Lee $^{2}$, Hyung Il Koo $^{2,4\dagger}$, Nam Ik Cho $^{1}$

$^{1}$ Seoul National University, $^{2}$ FuriosaAI, $^{3}$ UW-Madison, $^{4}$ Ajou University

{kangwj1995, kevin.galim, minjae.lee, hikoo}@furiosa.ai, yzeng58@wisc.edu, nicho@snu.ac.kr

# Abstract

State Space Models (SSMs) have emerged as efficient alternatives to Transformers, mitigating their quadratic computational cost. However, the application of Parameter-Efficient Fine-Tuning (PEFT) methods to SSMs remains largely unexplored. In particular, prompt-based methods like Prompt Tuning and Prefix-Tuning, which are widely used in Transformers, do not perform well on SSMs. To address this, we propose state-based methods as a superior alternative to prompt-based methods. This new family of methods naturally stems from the architectural characteristics of SSMs. State-based methods adjust state-related features directly instead of depending on external prompts. Furthermore, we introduce a novel state-based PEFT method: State-offset Tuning. At every timestep, our method directly affects the state at the current step, leading to more effective adaptation. Through extensive experiments across diverse datasets, we demonstrate the effectiveness of our method. Code is available at https://github.com/furiosa-ai/ssm-state-tuning.

# 1 Introduction

Large Language Models (LLMs) have gained significant attention for their strong performance in NLP tasks (Achiam et al., 2023; Brown et al., 2020), but suffer from the quadratic complexity of Transformer architectures (Vaswani et al., 2017).
To mitigate this, subquadratic alternatives have gained interest (Katharopoulos et al., 2020; Peng et al., 2023; Sun et al., 2023), with State Space Models (SSMs) emerging as a promising solution (Gu and Dao, 2024; Dao and Gu, 2024).

![](images/b12da167d19c4f3b0b6ac17146c9ed377f4a5d5b7d0a30f7b1f7f6a9156fb3cf.jpg)
Figure 1: Illustration of our proposed State-offset Tuning on a Mamba block (Gu and Dao, 2024). State-offset Tuning injects a trainable state-offset $h'$ at each timestep in the SSM module while keeping other parameters frozen, enabling parameter-efficient fine-tuning and improved downstream performance.

Meanwhile, as LLMs scale up, full fine-tuning for downstream tasks becomes prohibitively expensive. Consequently, Parameter-Efficient Fine-Tuning (PEFT) (Houlsby et al., 2019; Hu et al., 2021; He et al., 2021; Zaken et al., 2022; Liu et al., 2021, 2022; Zeng and Lee, 2024) has emerged, which aims to reduce the number of trainable parameters while achieving adaptation performance comparable to full fine-tuning.

However, research on PEFT methods for SSMs remains limited despite their growing popularity. For instance, prompt-based PEFT methods, such as Prompt Tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021), have been widely applied to Transformers but fail to adapt effectively to SSMs (Galim et al., 2024). Therefore, new PEFT strategies tailored to SSMs are needed to fully leverage their architectural properties.

To bridge this gap, we introduce state-based PEFT methods that leverage the intrinsic properties of SSMs, offering a superior alternative to prompt-based methods. Building on this concept, we propose State-offset Tuning. This method directly adjusts the state-related features rather than relying on external prompts, enabling more effective adaptation.

In summary, our main contributions are:

- We introduce state-based methods, a new family of PEFT techniques for SSMs, offering a superior alternative to prompt-based approaches.
- We propose State-offset Tuning as a new state-based PEFT method.
- We demonstrate the effectiveness of our method through experiments on a variety of datasets, consistently outperforming existing fine-tuning techniques.

# 2 Related Works

# 2.1 State Space Models

Linear State-Space Layers (LSSL) are one of the earliest applications of SSMs in sequence modeling (Gu et al., 2021), leveraging HiPPO (Gu et al., 2020) to initialize the state matrix. However, their high computational overhead limits practicality. Gu et al. (2022) introduced Structured State Space Models (S4), which mitigate this by structuring the state matrix. Recently, Mamba (Gu and Dao, 2024; Dao and Gu, 2024) enhanced modeling capabilities by introducing an input-dependent S6 block.

# 2.2 Parameter-Efficient Fine-Tuning

In this section, we review existing PEFT methods. For more details, see Sec. D.

Parameter-based Methods One approach to parameter-based PEFT methods is to selectively fine-tune specific layers within the model while keeping the remaining layers frozen. BitFit (Zaken et al., 2022) is a lightweight and effective strategy that focuses solely on fine-tuning a model's bias terms. Furthermore, LoRA (Hu et al., 2021) represents a notable parameter-based PEFT method by introducing low-rank matrices for weight updates, facilitating efficient adaptation.

Prompt-based Methods Instead of fine-tuning model parameters, Prompt Tuning (Lester et al., 2021) enhances models by prepending trainable soft embeddings to the prompt. Prefix-Tuning (Li and Liang, 2021) builds on this approach by injecting trainable embeddings into each Transformer layer, achieving strong adaptation results for Transformer-based LLMs.

PEFT for SSMs Concurrently, Galim et al.
(2024) showed that LoRA outperforms prompt-based methods on SSMs. Furthermore, they proposed Selective Dimension Tuning (SDT) for fine-tuning the SSM module while applying LoRA on the linear projection matrices when fine-tuning Mamba models. Yoshimura et al. (2025) suggested a new PEFT method called Additional-scan, which increases the hidden state dimension of SSMs, fine-tuning only its additional parameters. + +# 3 PEFT Methods on SSMs + +SSM Preliminaries Assuming a single channel dimension, SSMs such as S4 (Gu et al., 2022) transform a signal $x_{t} \in \mathbb{R}$ into $y_{t} \in \mathbb{R}$ through an $H$ -dimensional latent state $\pmb{h}_{t} \in \mathbb{R}^{H}$ as below: + +$$ +\boldsymbol {h} _ {t} = \overline {{\boldsymbol {A}}} \boldsymbol {h} _ {t - 1} + \overline {{\boldsymbol {B}}} x _ {t}, \quad y _ {t} = \boldsymbol {C h} _ {t}, +$$ + +where $\overline{B} \in \mathbb{R}^{H \times 1}$ controls input influence, $\overline{A} \in \mathbb{R}^{H \times H}$ governs state dynamics, and $C \in \mathbb{R}^{1 \times H}$ maps the state to the output. $\overline{A}$ and $\overline{B}$ represent discretized versions of $A$ and $B$ , parameterized by a learnable step size $\Delta \in \mathbb{R}$ . + +In S6 (the SSM module of Mamba), input dependency is integrated by using input-dependent $\overline{A}_t$ , $\overline{B}_t$ , and $C_t$ at every timestep. Specifically, given $D$ channels with $\boldsymbol{x}_t \in \mathbb{R}^D$ , learnable parameters $\boldsymbol{W}_B$ , $\boldsymbol{W}_C \in \mathbb{R}^{H \times D}$ , and $\boldsymbol{W}_{\Delta} \in \mathbb{R}^{D \times D}$ compute $\boldsymbol{B}_t = \boldsymbol{W}_B \boldsymbol{x}_t$ , $C_t = W_C \boldsymbol{x}_t$ , and $\Delta = W_{\Delta} \boldsymbol{x}_t$ . In this section, we consider S4 for simplicity. + +# 3.1 Prompt-based PEFT Methods on SSMs + +Prefix-Tuning Can Update Only the Initial State of an SSM Generally, SSMs assume that the initial hidden state is $h_0 = 0$ . 
We can express $h_t$ with $h_0$ as $h_t = \sum_{i=1}^{t} \overline{A}^{t-i} \overline{B} x_i + \overline{A}^t h_0$.

Assume we have virtual tokens $x_{-V+1}, \ldots, x_0$. If we prepend virtual tokens as prefix to the input sequence, we can write the updated $\widehat{h}_t$ as below:

$$
\widehat{\boldsymbol{h}}_t = \boldsymbol{h}_t + \overline{\boldsymbol{A}}^t \sum_{i=0}^{V-1} \overline{\boldsymbol{A}}^i \overline{\boldsymbol{B}} x_{-i} = \boldsymbol{h}_t + \overline{\boldsymbol{A}}^t \boldsymbol{h}_{\text{prefix}}.
$$

By introducing a non-zero $\widehat{h}_0$, we can substitute $\widehat{h}_0$ for $h_{\text{prefix}}$, making Prefix-Tuning, or optimizing virtual tokens, equivalent to updating the initial state. As optimized virtual tokens only affect the initial state $\widehat{h}_0$, Prefix-Tuning's expressivity is upper-bounded by updating the initial state directly (Galim et al., 2024). Since Prefix-Tuning is an extended version of Prompt Tuning, this upper bound is applicable to Prompt Tuning as well.

Galim et al. (2024) introduced Initial State Tuning, an advanced version of Prefix-Tuning, which directly optimizes the channel-specific initial state $\pmb{h}^{\prime}\in \mathbb{R}^{H}$, resulting in $DH$ trainable parameters in total across all $D$ channels. The updated output $\widehat{y}_t$ for Initial State Tuning can be written as in Table 1.

# 3.2 State-based Methods: A New Family of PEFT Methods for SSMs

We define state-based methods as a new family of PEFT methods specifically designed for SSMs. These methods directly modify the intrinsic state-related features within the SSM module.

In contrast, prompt-based methods, such as Prefix-Tuning, influence the hidden state of the SSM module indirectly by introducing external virtual tokens.
While both approaches adjust the hidden state of the SSM module, state-based methods operate within the SSM module itself, offering a more direct and expressive adaptation strategy.

Based on our definition, we classify Initial State Tuning as a state-based method. While Initial State Tuning surpasses Prefix-Tuning (Galim et al., 2024), it still falls short compared to other fine-tuning methods on SSMs. To bridge this gap, we propose a novel state-based method for enhanced performance.
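The Prefix-Tuning analysis in Sec. 3.1 can be checked numerically on a toy scalar S4 (all parameter values below are made up for illustration): prepending virtual tokens yields the same outputs on the real tokens as starting the recurrence from the induced initial state $h_{\text{prefix}}$.

```python
# Toy scalar S4 (H = 1, single channel); a, b, c are arbitrary illustrative values.
a, b, c = 0.8, 0.5, 1.2

def run_ssm(xs, h0=0.0):
    """h_t = a*h_{t-1} + b*x_t ; y_t = c*h_t."""
    h, ys = h0, []
    for x in xs:
        h = a * h + b * x
        ys.append(c * h)
    return ys

prefix = [0.7, -0.3]   # "virtual token" inputs
xs = [1.0, 2.0, 3.0]   # real input sequence

# Outputs on the real tokens when the prefix is prepended ...
with_prefix = run_ssm(prefix + xs)[len(prefix):]

# ... equal the outputs when starting from h_prefix (the state after the prefix).
h_prefix = run_ssm(prefix)[-1] / c
with_init_state = run_ssm(xs, h0=h_prefix)

assert all(abs(u - v) < 1e-9 for u, v in zip(with_prefix, with_init_state))
```

This is exactly the upper bound noted above: whatever the virtual tokens are, their entire effect is summarized by a single initial state.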
| Method | Updated output |
| --- | --- |
| Initial State Tuning | $\widehat{y}_t = y_t + C_t \left( \prod_{i=1}^{t} \overline{A}_i \right) h'$ |
| State-offset Tuning ($h$) | $\widehat{y}_t = y_t + C_t h'$ |
| State-offset Tuning ($y$) | $\widehat{y}_t = y_t + y'$ |
+ +Table 1: State-based methods for S6. Our methods eliminate the time-dependent coefficient $\prod_{i=1}^{t} \overline{A}_i$ , ensuring a uniform effect across timesteps. + +
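As a quick numerical illustration of Table 1 (toy dimensions with a diagonal $\overline{A}$ and hand-picked values, not the paper's implementation): the extra term added by Initial State Tuning decays as $t$ grows, while the State-offset term is identical at every timestep.

```python
# Toy H = 2 setup with diagonal A-bar; all numbers are illustrative only.
H = 2
A = [0.9, 0.5]        # diagonal entries of A-bar, |a| < 1 so past states decay
C = [0.3, -0.2]       # output map
h_prime = [0.5, 0.4]  # trainable vector (initial state or state-offset)

def initial_state_term(t):
    """Initial State Tuning adds C (A^t) h' at timestep t: shrinks as t grows."""
    return sum(C[i] * (A[i] ** t) * h_prime[i] for i in range(H))

def state_offset_term():
    """State-offset Tuning adds C h' at every timestep: uniform effect."""
    return sum(C[i] * h_prime[i] for i in range(H))

decaying = [abs(initial_state_term(t)) for t in (1, 5, 50)]
assert decaying[0] > decaying[1] > decaying[2]  # effect fades over time

offsets = [state_offset_term() for _ in range(50)]
assert max(offsets) == min(offsets)             # identical at every timestep
assert abs(offsets[0] - 0.07) < 1e-9            # C.h' = 0.3*0.5 - 0.2*0.4, about 0.07
```

Eliminating the time-dependent coefficient is precisely what gives State-offset Tuning its uniform influence over the sequence.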
| Prompt-based | Timestep $T$ | Timestep $T+1$ |
| --- | --- | --- |
| Prefix | $[\mathrm{prefix}, x_1, \ldots, x_T]$ | $[\mathrm{prefix}, x_1, \ldots, x_T, x_{T+1}]$ |
| Suffix | $[x_1, \ldots, x_T, \mathrm{suffix}]$ | $[x_1, \ldots, x_T, \mathrm{suffix}, x_{T+1}]$ |
| Iterative Suffix | $[x_1, \ldots, x_T, \mathrm{suffix}]$ | $[x_1, \ldots, x_T, x_{T+1}, \mathrm{suffix}]$ |
+ +Table 2: Comparison of Prefix-Tuning, Suffix-Tuning, and Iterative Suffix-Tuning. + +# 4 Proposed State-based PEFT Method + +In this section, we propose State-offset Tuning as a new state-based PEFT method. A visual comparison with Initial State Tuning and Prefix-Tuning is provided in Sec. A. + +# 4.1 State-offset Tuning + +Initial State Tuning introduces an additional term $h'$ with a coefficient $\overline{A}^t$ for S4 and $\prod_{i=1}^{t} \overline{A}_i$ for S6. However, this coefficient, which varies for each timestep, tends to decrease over time, leading to inconsistent effects. This is related to the issue that SSMs struggle to recall early tokens (Fu et al., 2022). To address this and ensure a consistent effect for each timestep, we introduce State-offset Tuning, which eliminates this coefficient. + +State-offset Tuning adds a constant, learnable state-offset $h'$ to the hidden state $h$ before obtaining the updated output $\widehat{y}_t$ (Fig. 1). Therefore, unlike Initial State Tuning, State-offset Tuning does not alter the hidden state dynamics directly. Instead, State-offset Tuning adds a constant $h'$ repetitively for each timestep, ensuring a uniform impact. + +We formulate State-offset Tuning $(h)$ for S6 in Table 1, where we optimize $h' \in \mathbb{R}^H$ . In S4, $C_t$ does not depend on the input, simplifying to a constant $C$ . This allows us to optimize a bias $y'$ instead of $h'$ (with $y' := C h'$ for each dimension). We name this method State-offset Tuning $(y)$ . For S4, State-offset Tuning $(y)$ and State-offset Tuning $(h)$ are equivalent. In S6, opting for the simpler State-offset Tuning $(y)$ enhances parameter efficiency by decreasing the tunable parameters from $D H$ to $D$ . + +# 4.2 Connection to Prompt-based Methods + +To further validate the methodology of State-offset Tuning, we examine its connection to prompt-based methods and demonstrate its correspondence to Iterative Suffix-Tuning. 
Iterative Suffix-Tuning Li and Liang (2021) showed that in Transformers, inserting virtual tokens at the beginning (Prefix-Tuning) or the end (Suffix-Tuning, referred to as Infix-Tuning in their work) yields similar performance.

However, for SSMs, the position of the inserted virtual tokens is crucial, as these models tend to forget early tokens. The effect of Prefix-Tuning and Suffix-Tuning diminishes as the model processes subsequent timesteps. This leads to the question: how can we maintain a consistent influence of virtual tokens across all timesteps in SSMs?

To achieve this, we propose Iterative Suffix-Tuning. As shown in Table 2, both Prefix-Tuning and Suffix-Tuning hold virtual tokens in fixed positions throughout all timesteps. Conversely, Iterative Suffix-Tuning shifts the virtual tokens to the sequence's last position at each timestep, ensuring uniform influence in SSMs. This method is akin to how State-offset Tuning eliminates the time-varying coefficient in Initial State Tuning, enforcing a consistent effect at every timestep. We show that Iterative Suffix-Tuning in SSMs is equivalent to State-offset Tuning (as detailed in Sec. B).

| Type | Method | Params (%, 1.4B) | Spider All | Spider Easy | Spider Medium | Spider Hard | Spider Extra | SAMSum R1 | SAMSum R2 | SAMSum RL | Params (%, 130M) | DART MET. | DART BLEU | GLUE Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| – | Pretrained | 0.00 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 10.9 | 1.5 | 10.2 | 0.00 | 18.1 | 1.2 | 41.0 |
| – | Full Fine-tuning (All) | 100.00 | 66.2 | 84.3 | 69.5 | 53.4 | 43.4 | 51.2 | 27.3 | 42.9 | 100.00 | 71.0 | 51.8 | 80.5 |
| – | Full Fine-tuning (S6) | 4.46 | 56.7 | 76.6 | 57.8 | 46.0 | 34.9 | 51.1 | 26.9 | 42.2 | 4.31 | 70.3 | 48.7 | 79.3 |
| Parameter based | LoRA | 0.46 | 56.3 | 75.0 | 56.5 | 50.6 | 33.7 | 50.5 | 26.4 | 42.2 | 0.92 | 69.9 | 50.8 | 78.3 |
| Parameter based | BitFit | 0.03 | 51.3 | 74.2 | 50.9 | 43.1 | 26.5 | 50.3 | 25.7 | 41.9 | 0.06 | 67.0 | 43.7 | 77.9 |
| Parameter based | Additional-scan | 0.34 | 26.9 | 44.4 | 25.6 | 21.3 | 10.2 | 37.6 | 17.5 | 30.9 | 0.68 | 60.6 | 15.8 | 62.4 |
| Prompt based | Prompt Tuning | 0.01 | 43.6 | 65.3 | 42.4 | 33.3 | 25.3 | 50.1 | 25.6 | 41.6 | 0.04 | 66.2 | 39.8 | 63.8 |
| Prompt based | Prefix-Tuning | 12.81 | 39.7 | 65.7 | 38.6 | 31.0 | 15.1 | 50.6 | 26.5 | 42.1 | 22.69 | 66.6 | 42.5 | 68.6 |
| State based | Initial State Tuning | 0.23 | 51.8 | 77.8 | 51.1 | 35.1 | 32.5 | 50.0 | 26.0 | 41.3 | 0.45 | 69.1 | 46.2 | 77.4 |
| State based | State-offset Tuning (h) | 0.23 | 57.4 | 77.4 | 59.9 | 44.8 | 33.7 | 50.9 | 26.5 | 42.4 | 0.45 | 70.0 | 47.0 | 78.5 |
| State based | State-offset Tuning (y) | 0.01 | 53.0 | 77.4 | 55.4 | 40.8 | 22.9 | 50.6 | 26.1 | 42.0 | 0.03 | 66.8 | 45.2 | 77.7 |

Table 3: Experimental results for fine-tuning the SSM module (S6) of Mamba (Gu and Dao, 2024) models. We assess Spider and its subsets using execution accuracy, SAMSum with ROUGE-1/2/L scores, DART using METEOR and BLEU scores, and GLUE by calculating the average score. To demonstrate the effectiveness of our methods, we configure the hyperparameters of each method to ensure their parameter budget is comparable to or exceeds that of our methods. Our State-offset Tuning $(h)$ outperforms all other methods on most datasets, and our State-offset Tuning $(y)$ shows comparable or better performance than other methods despite its significantly fewer trainable parameters.

# 5 Experiments

# 5.1 Experiment Setup

We conduct experiments for fine-tuning the SSM module (S6) using pretrained Mamba (Gu and Dao, 2024) and Mamba-2 (Dao and Gu, 2024) models on four datasets: Spider (Yu et al., 2018), SAMSum (Gliwa et al., 2019), DART (Nan et al., 2021), and GLUE (Wang et al., 2019). For further information on datasets, evaluation metrics, and experimental details, refer to Secs. E and F. We use LoRA (Hu et al., 2021), BitFit (Zaken et al., 2022), and Additional-scan (Yoshimura et al., 2025) as parameter-based methods. For prompt-based methods, we employ Prompt Tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021). For state-based methods, we utilize Initial State Tuning (Galim et al., 2024), along with our proposed methods, State-offset Tuning $(h)$ and State-offset Tuning $(y)$.
# 5.2 Experimental Results

Table 3 shows the results on Mamba models. Additional results, including Mamba-2 results, are provided in Sec. G. In the appendix, we further compare the training speed, training memory usage, and computational overhead during inference between LoRA and State-offset Tuning $(h)$. Our findings show that State-offset Tuning $(h)$ is faster, more memory-efficient, and introduces lower FLOP overhead compared to LoRA. Additionally, we evaluate the performance of State-offset Tuning $(h)$ within SSMs against Prefix-Tuning in Transformers, further highlighting the effectiveness of our approach.

**State-based Methods Outperform Prompt-based Methods** Table 3 shows that all state-based methods outperform prompt-based methods, supporting the claim that state-based methods are superior to prompt-based methods on SSMs.

In particular, our State-offset Tuning $(h)$ achieves the best results among all tested PEFT methods on most datasets. Our State-offset Tuning $(y)$ outperforms Initial State Tuning on most datasets, using just $0.01\%$ of the parameters compared to $0.23\%$ by Initial State Tuning.

**State-offset Tuning Outperforms Parameter-based Methods** State-offset Tuning $(h)$ outperforms BitFit across all datasets and surpasses LoRA on most datasets. Notably, it also outperforms Additional-scan, a method specifically designed for fine-tuning SSM modules, across all datasets.

Furthermore, State-offset Tuning $(h)$ achieves performance comparable to full fine-tuning (S6), highlighting the effectiveness of state-based PEFT for SSM modules, despite using significantly fewer parameters. The results from Mamba-2 (Table 11) further validate the effectiveness of our method. We also include a comparison to Selective Dimension Tuning (SDT) (Galim et al., 2024) in Sec. G.4, showing that our method outperforms SDT while using fewer parameters.
# 6 Conclusion

In this paper, we introduce state-based methods as a new family of PEFT methods for State Space Models, serving as a superior alternative to prompt-based methods. We propose State-offset Tuning as a new state-based PEFT method and demonstrate its effectiveness through extensive experiments.

# 7 Limitations

While we demonstrate that State-offset Tuning is effective for fine-tuning SSMs in the text domain, its applicability to other domains, such as vision or speech, remains unexplored. Existing PEFT methods, such as LoRA and Prompt Tuning, have been successfully applied across various domains (Jia et al., 2022; Gal et al., 2023; Ran et al., 2024). Extending State-offset Tuning to models in other domains, such as Vision Mamba (Zhu et al., 2025), is an interesting direction for future work.

**Potential Risks** Our approach enables parameter-efficient fine-tuning (PEFT) of pretrained SSMs, significantly reducing the computational cost of adaptation. While this is beneficial for resource-constrained scenarios, it also presents potential risks. Specifically, adversaries could leverage our method to efficiently fine-tune pretrained SSMs on harmful or biased data, enabling the rapid adaptation of models for malicious purposes with minimal computational resources. This could lead to the proliferation of harmful or deceptive models that reinforce misinformation, bias, or toxicity. To mitigate these risks, future work should explore more robust safety measures, such as integrating ethical fine-tuning constraints and monitoring mechanisms.

# References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments.
In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.

Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 1. CiteSeer.

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. TAC, 7(8):1.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397-2430. PMLR.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177-190. Springer.

Tri Dao and Albert Gu. 2024. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In International Conference on Machine Learning.

Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005).

Daniel Y Fu, Tri Dao, Khaled Kamal Saab, Armin W Thomas, Atri Rudra, and Christopher Re. 2022. Hungry hungry hippos: Towards language modeling with state space models.
In International Conference on Learning Representations.

Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-Or. 2023. An image is worth one word: Personalizing text-to-image generation using textual inversion. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Kevin Galim, Wonjun Kang, Yuchen Zeng, Hyung Il Koo, and Kangwook Lee. 2024. Parameter-efficient fine-tuning of state space models. arXiv preprint arXiv:2410.09016.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9.

Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. EMNLP-IJCNLP 2019, page 70.

Albert Gu and Tri Dao. 2024. Mamba: Linear-time sequence modeling with selective state spaces. In First Conference on Language Modeling.

Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. 2020. HiPPO: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems, volume 33, pages 1474-1487.

Albert Gu, Karan Goel, and Christopher Re. 2022. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations.

Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Re. 2021. Combining recurrent, convolutional, and continuous-time models with linear state space layers.
In Advances in Neural Information Processing Systems, volume 34, pages 572-585.

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pages 2790-2799.

Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. 2022. Visual prompt tuning. In European Conference on Computer Vision, pages 709-727. Springer.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156-5165.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059.

Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-Tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61-68.

Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.

Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. PEFT: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft.

Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. 2021. DART: Open-domain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432-447.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. Preprint, arXiv:1912.01703.

Bo Peng, Eric Alcaide, Quentin Gregory Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Nguyen Chung, Leon Derczynski, et al. 2023.
RWKV: Reinventing RNNs for the transformer era. In The 2023 Conference on Empirical Methods in Natural Language Processing.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.

Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. 2024. X-Adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784.

Ying Sheng, Shiyi Cao, Dacheng Li, Coleman Hooper, Nicholas Lee, Shuo Yang, Christopher Chou, Banghua Zhu, Lianmin Zheng, Kurt Keutzer, et al. 2023. S-LoRA: Serving thousands of concurrent LoRA adapters. CoRR.

Daniel G. A. Smith and Johnnie Gray. 2018. opt_einsum - a Python package for optimizing contraction order for einsum-like expressions. Journal of Open Source Software, 3(26):753.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.

Vladislav Sovrasov. 2018-2024. ptflops: A FLOPs counting tool for neural networks in PyTorch framework.

Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. 2023. Retentive network: A successor to Transformer for large language models. arXiv preprint arXiv:2307.08621.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.

Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.

Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2018, pages 1112-1122. Association for Computational Linguistics (ACL).

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Masakazu Yoshimura, Teruaki Hayashi, and Yota Maeda. 2025. MambaPEFT: Exploring parameter-efficient fine-tuning for Mamba. In The Thirteenth International Conference on Learning Representations.

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921.

Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022.
BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-9.

Yuchen Zeng and Kangwook Lee. 2024. The expressive power of low-rank adaptation. In International Conference on Learning Representations.

Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. 2025. Vision Mamba: Efficient visual representation learning with bidirectional state space model. In Forty-first International Conference on Machine Learning.

# A Visual Comparison of Prompt-based Methods and State-based Methods

Fig. 2 compares prompt-based methods and state-based methods, including our proposed State-offset Tuning, within the S6 block.

![](images/a6c4522debd040bb917cdc09cfccbeb894e35cfbe29c6b3a1c7570f585d596fa.jpg)
Figure 2: Visual comparison of prompt-based methods and state-based methods in the S6 block.

Figure 3: Two different implementations of Iterative Suffix-Tuning in S6. We show that Fig. 3b is equivalent to State-offset Tuning.

**State-based Methods Operate within the SSM Module** Fig. 2 shows that prompt-based methods, such as Prefix-Tuning, rely on virtual tokens external to the S6 block. In contrast, state-based methods, such as Initial State Tuning, State-offset Tuning $(h)$, and State-offset Tuning $(y)$, directly adjust state-related features within the S6 block.

**State-offset Tuning Affects the Current Timestep** Fig. 2 illustrates how Prefix-Tuning and Initial State Tuning modify features at early timesteps, indirectly affecting the current state. However, this impact diminishes over time. In contrast, State-offset Tuning $(h)$ and State-offset Tuning $(y)$ directly influence the state at each timestep, resulting in more effective adaptation.
# B Iterative Suffix-Tuning and State-offset Tuning

In this section, we show that Iterative Suffix-Tuning for SSMs is equivalent to State-offset Tuning.

# State-offset Tuning is Iterative Suffix-Tuning

Fig. 3 provides two different implementations of Iterative Suffix-Tuning on SSMs (S6) with virtual token (suffix) $x_{t+1}$.

![](images/2a8a0d7c1404f0e4d463182fb25280fff03a07cfc75f96cdf81df787d1be0214.jpg)
(a) Iterative Suffix-Tuning (with $t+1$ as the current timestep)

![](images/d4cdbc774d59f50a77a47344acca665c017fda6a14d02bdfe4da55e18157851f.jpg)
(b) Iterative Suffix-Tuning (with $t$ as the current timestep)

Fig. 3a views $t+1$ as the current timestep. In this case, the input-dependent $C_{t+1} = W_C x_{t+1}$ is determined solely by the suffix $x_{t+1} \in \mathbb{R}^D$, which is constant at inference time; thus the input dependency of $C$ is lost, reducing the expressive power of S6.

To address this, we instead view $t$ as the current timestep and interpret $x_{t+1}$ as a future token (Fig. 3b). Consequently, we time-shift $x_{t+1}$ by multiplying it with the inverse of $\overline{A}_{t+1}$:

Fig. 3a: $y_{t+1} = C_{t+1}(\overline{A}_{t+1}h_t + \overline{B}_{t+1}x_{t+1})$,

Fig. 3b: $y_{t} = C_{t}(h_{t} + \overline{A}_{t+1}^{-1}\overline{B}_{t+1}x_{t+1})$.

Therefore, according to the equation corresponding to Fig. 3b, Iterative Suffix-Tuning can be implemented by updating only $\overline{A}_{t+1}^{-1}\overline{B}_{t+1}x_{t+1}$. Since this term depends solely on the constant suffix $x_{t+1}$, we can directly replace it with a learnable parameter $h'$ (with $h' := \overline{A}_{t+1}^{-1}\overline{B}_{t+1}x_{t+1}$), which is equivalent to State-offset Tuning $(h)$ (Table 1).

# C Low-Rank State-offset Tuning

State-offset Tuning $(h)$ shows superior parameter efficiency on Mamba versus other PEFT methods.
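The time-shift step in this derivation can be checked numerically. The values below are arbitrary, and the diagonal $\overline{A}_{t+1}$ is a simplifying assumption:

```python
import numpy as np

# Numerical sanity check of the Sec. B derivation with hypothetical values.
rng = np.random.default_rng(0)
H = 6
h_t = rng.normal(size=H)              # hidden state after the real tokens
A1 = rng.uniform(0.5, 0.9, size=H)    # diagonal \bar{A}_{t+1}, per state dim
B1 = rng.normal(size=H)               # \bar{B}_{t+1}
x_suffix = 0.7                        # constant virtual token (one channel)

# Time-shift claim: propagating the offset state one step recovers the
# Fig. 3a state that has actually consumed the suffix token.
h_shifted = h_t + (1.0 / A1) * B1 * x_suffix
assert np.allclose(A1 * h_shifted, A1 * h_t + B1 * x_suffix)

# Since the offset term depends only on the constant suffix, it can be
# replaced by a learnable parameter h': exactly State-offset Tuning (h).
h_prime = (1.0 / A1) * B1 * x_suffix
assert np.allclose(h_shifted, h_t + h_prime)
```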
To further reduce trainable parameters, we can represent the learnable state-offset as a product of two low-rank matrices, inspired by LoRA (Hu et al., 2021). This is particularly useful for Mamba-2, where the state dimension is larger than in Mamba, leading to an increased number of trainable parameters. In such cases, low-rank techniques can effectively mitigate the parameter overhead. Experimental results of State-offset Tuning $(h)$ with lower rank on Mamba-2 are provided in Sec. G.2.

# D PEFT Baselines

In this section, we provide a more detailed description of the baseline methods.

**LoRA (Hu et al., 2021)** LoRA fine-tunes large models by keeping the bulk of the pretrained parameters untouched while introducing trainable low-rank matrices within each Transformer layer. This method leverages the linear-algebra principle that a large matrix can be effectively approximated by the product of two low-rank matrices, thus reducing the number of trainable parameters. LoRA includes a scaling parameter to adjust the influence of the original and LoRA weights during training. We use the Hugging Face version (Apache License 2.0, Mangrulkar et al. (2022)) of LoRA for our experiments.

**Prompt Tuning (Lester et al., 2021)** This method freezes the entire model and adds a trainable soft prompt to the input. The prompt consists of continuous virtual tokens that provide additional context.

**Prefix-Tuning (Li and Liang, 2021)** Similar to Prompt Tuning, Prefix-Tuning adds trainable tokens, but extends them across every Transformer layer by appending trainable embeddings to the attention matrices. To combat instability when training these prefixes, an over-parameterized MLP is used, which can be discarded after training.

**BitFit (Zaken et al., 2022)** This PEFT method simplifies fine-tuning by training only the bias terms while freezing the other model weights, drastically reducing trainable parameters.
**SDT (Galim et al., 2024)** SDT (Selective Dimension Tuning) employs a sparse updating approach for the matrices $A$, $B$, and $C$ ($W_B$ and $W_C$ for S6), while additionally applying LoRA to the linear projection layers. All remaining layers are kept frozen. The process for determining which parameters to update involves a warmup stage, during which parameters are flagged as updatable if they exhibit a significant gradient magnitude. In our SDT experiments, we excluded LoRA from the linear projection layers and focused solely on the S6 component.

**Additional-scan (Yoshimura et al., 2025)** This approach enhances the model's expressivity by expanding the state dimensions for $A$, $W_B$, and $W_C$. During training, only the added dimensions are marked as trainable.

# E Datasets
| Dataset | #Train | #Valid | #Epochs | Model size | Metrics |
|---|---|---|---|---|---|
| RTE | 2490 | 277 | 10 | 130M | Acc. |
| MRPC | 3668 | 408 | 10 | 130M | Acc. |
| CoLA | 8551 | 1043 | 10 | 130M | Acc. |
| SST-2 | 67349 | 872 | 10 | 130M | Acc. |
| QNLI | 104743 | 5463 | 10 | 130M | Acc. |
| QQP | 363846 | 40430 | 3 | 130M | Acc. |
| MNLI | 392702 | 19647 | 3 | 130M | Acc. |
| Spider | 6918 | 1034 | 10 | 1.4B, 2.8B | Acc. |
| SAMSum | 14732 | 819 | 10 | 1.4B | ROUGE |
| DART | 62659 | 2768 | 10 | 130M | METEOR, BLEU |
Table 4: Dataset details. We report the number of training and validation samples, the number of training epochs, the employed model size, and the evaluation metrics.

This paper examines four datasets across two domains: Natural Language Understanding (NLU) and Natural Language Generation (NLG). Table 4 presents detailed information for each dataset.

**GLUE (Wang et al., 2019)** A benchmark comprising nine English tasks for assessing language understanding models, including sentiment analysis, linguistic acceptability, and question answering. We use the following datasets: RTE (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), MRPC (Dolan and Brockett, 2005), CoLA (Warstadt et al., 2019), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2018), QQP, and MNLI (Williams et al., 2018). Evaluation is mainly through accuracy, except for CoLA, where Matthews correlation is used. The final metric is calculated as the average accuracy (Matthews correlation for CoLA) across all datasets. The individual datasets are available under different permissive licenses. We use the version hosted at https://huggingface.co/datasets/nyu-mll/glue.

**SAMSum (Gliwa et al., 2019)** A dialogue summarization dataset featuring about 16,000 synthetic English conversations with summaries, created to simulate digital communications with varied tones and styles. Its structure helps in developing systems that process conversational text. The dataset is evaluated via ROUGE score. This dataset is available under the CC BY-NC-ND 4.0 license. We use the version hosted at https://huggingface.co/datasets/Samsung/samsum.

**Spider (Yu et al., 2018)** A text-to-SQL dataset with 10,000 annotated SQL queries across 200+ databases, classifying queries from easy to extra hard based on SQL operation complexity. It involves translating English questions to SQL, evaluated via execution accuracy.
Execution accuracy considers the output correct if the model's predicted SQL query and the ground-truth SQL query yield the same results when executed on the database. This dataset is available under the CC BY-SA 4.0 license. We use the version hosted at https://huggingface.co/datasets/xlangai/spider.

**DART (Nan et al., 2021)** Comprising over 80,000 instances, DART focuses on English RDF-to-text generation, organized as structured data triples with corresponding text summaries. It is assessed using METEOR and BLEU metrics. This dataset is available under the MIT license. We use the version hosted at https://huggingface.co/datasets/Yale-LILY/dart.

# F Experimental Details

For every dataset, we select the model size based on the difficulty of the dataset and conduct a brief grid search for one epoch using a subset of the data (1k-2k instances) with learning rates of $\{4\times 10^{-1},2\times 10^{-1},1\times 10^{-1},\dots,1\times 10^{-5}\}$. The best learning rate is then selected as the rate yielding the lowest training loss. In our experimental results, we report the metric from the best epoch observed on the validation set during training, employing early stopping. Each experiment is conducted once. We apply fine-tuning methods to the SSM module (S6) of Mamba (130M, 1.4B, 2.8B) and the SSM module (SSD) of Mamba-2 (130M, 1.3B), pretrained on the Pile (MIT License, Gao et al. (2020)), using AdamW (Loshchilov and Hutter, 2019) with a linear decay schedule for the learning rate. In general, we choose hyperparameters for each individual method to ensure that all methods operate within a similar parameter budget. Tables 5 and 6 show the selected learning rates and chosen hyperparameters for each method. For assessing NLG tasks, we utilize beam search with five beams and a maximum beam length of 1024. BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) metrics are computed using Hugging Face's evaluate library.
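The execution-accuracy criterion used for Spider (described above) can be sketched with Python's built-in sqlite3. This is a simplification, not the official Spider evaluator: the function name is ours, and row order is deliberately ignored here:

```python
import sqlite3

def execution_accuracy(pred_sql, gold_sql, db_path):
    """Spider-style execution accuracy (sketch): a prediction counts as
    correct if it returns the same result set as the gold query when run
    against the database. Row ordering is ignored for simplicity; the
    official evaluator treats ORDER BY queries more strictly."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False   # unexecutable predictions count as wrong
    finally:
        conn.close()
    return sorted(pred_rows) == sorted(gold_rows)
```

For example, on a toy database, `SELECT a FROM t WHERE a > 1` and `SELECT a FROM t WHERE a >= 2` would be scored as matching, even though the query strings differ.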
We use an NVIDIA RTX 3090 24GB for training models with fewer than 1 billion parameters, and an NVIDIA H100 80GB for larger models. We implemented our project in PyTorch (Modified BSD license, Paszke et al. (2019)), utilizing the Hugging Face trainer (Apache License 2.0, Wolf et al. (2020)). We train with batch size 4 for 10 epochs on all datasets except QQP and MNLI, for which we use 3 epochs, allowing each training run to finish in under 16 hours. This project spanned three months, utilizing four NVIDIA RTX 3090 24GB GPUs and four NVIDIA H100 80GB GPUs, totaling approximately 17,000 GPU hours.

# G Additional Experimental Results

# G.1 Mamba Results

**Training Speed and Memory Usage** We conduct a small experiment to compare the memory usage and training speed of State-offset Tuning $(h)$ and LoRA, as they performed most similarly in terms of dataset metrics in our experiments. Using a single H100 GPU, we train for 100 batch iterations with a batch size of 4 and a 1K context, continuously measuring memory usage and batch latency.

Table 7 shows the training speed and maximum memory usage of State-offset Tuning $(h)$ and LoRA for different Mamba sizes. State-offset Tuning $(h)$ uses less memory and is faster, even with more trainable parameters. In this experiment, we selected hyperparameters to ensure LoRA has fewer trainable parameters than State-offset Tuning $(h)$. We believe State-offset Tuning $(h)$'s efficiency stems from our optimized einsum implementation, enhanced with the opt_einsum (MIT License, Smith and Gray (2018)) Python package to reduce memory usage and improve latency.
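The benefit of an optimized einsum contraction can be illustrated on a toy version of the state-offset readout. All shapes here are hypothetical; the point is that the einsum form avoids materializing the $(B, T, D, H)$ intermediate that a naive broadcast creates:

```python
import numpy as np

# Toy state-offset readout: add C_t . h' to every timestep's output.
# Hypothetical shapes: batch B, time T, channels D, state dim H.
B, T, D, H = 2, 64, 32, 16
rng = np.random.default_rng(0)
C = rng.normal(size=(B, T, H))      # input-dependent readout C_t
h_off = rng.normal(size=(D, H))     # learnable state-offset h'

# Naive broadcast: materializes a full (B, T, D, H) array before reducing.
naive = (C[:, :, None, :] * h_off[None, None]).sum(-1)

# einsum with an optimized contraction order (what opt_einsum computes)
# performs the same reduction without the large intermediate.
opt = np.einsum('bth,dh->btd', C, h_off, optimize=True)

assert np.allclose(naive, opt)
```

The same idea scales to the real offset computation, where the saved intermediate grows with batch size, sequence length, and state dimension.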
| Method | RTE | MRPC | CoLA | SST-2 | QNLI | QQP | MNLI | DART | SAMSum | Spider | DART (Mamba-2) | SAMSum (Mamba-2) | Spider (Mamba-2) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LoRA | 2e-03 | 2e-03 | 4e-05 | 2e-03 | 1e-03 | 1e-03 | 2e-03 | 4e-03 | 2e-03 | 4e-03 | 4e-03 | 2e-03 | 4e-03 |
| Additional-scan | 4e-03 | 2e-03 | 2e-03 | 1e-01 | 2e-03 | 4e-02 | 4e-03 | 4e-03 | 4e-03 | 4e-03 | 2e-02 | 4e-03 | 1e-02 |
| SDT | 1e-03 | 4e-02 | 1e-01 | 4e-02 | 2e-02 | 2e-02 | 1e-01 | 4e-02 | 2e-02 | 4e-02 | – | – | – |
| Initial State Tuning | 4e-04 | 1e-03 | 2e-03 | 2e-03 | 2e-03 | 2e-03 | 2e-03 | 2e-03 | 2e-04 | 1e-03 | 4e-03 | 2e-04 | 4e-04 |
| State-offset Tuning (h) | 1e-03 | 2e-04 | 2e-04 | 1e-04 | 1e-04 | 4e-05 | 4e-04 | 4e-04 | 1e-04 | 2e-04 | 1e-03 | 2e-05 | 2e-05 |
| State-offset Tuning (h) (low rank) | – | – | – | – | – | – | – | – | – | – | 4e-03 | 2e-04 | 2e-04 |
| State-offset Tuning (y) | 1e-03 | 2e-03 | 1e-03 | 1e-03 | 2e-03 | 1e-03 | 1e-03 | 4e-03 | 1e-03 | 2e-03 | 1e-02 | 2e-04 | 1e-03 |
+ +Table 5: Learning rates for each method and dataset. For Mamba and Mamba-2, learning rates for each method and dataset are determined via a small grid search on a dataset subset. The learning rate yielding the best training loss is chosen as the final rate. + +
| Method / Model | Mamba 130M | Mamba 1.4B | Mamba-2 130M | Mamba-2 1.3B |
|---|---|---|---|---|
| LoRA | Rank = 8, α = 8, Dropout = 0.1, Modules = all weight matrices in S6 | Rank = 8, α = 8, Dropout = 0.1, Modules = all weight matrices in S6 | Rank = 16, α = 16, Dropout = 0.1, Modules = all weight matrices in SSD | Rank = 16, α = 16, Dropout = 0.1, Modules = all weight matrices in SSD |
| Additional-scan | #States = 8 | #States = 8 | #States = 32 | #States = 32 |
| SDT | Freeze #Channels = 50.0%, Freeze #States = 75.0% | Freeze #Channels = 50.0%, Freeze #States = 75.0% | – | – |
| Initial State Tuning | – | – | – | – |
| State-offset Tuning (h) | – | – | – | – |
| State-offset Tuning (h) (low rank) | – | – | Rank = 32 | Rank = 64 |
| State-offset Tuning (y) | – | – | – | – |

Table 6: Hyperparameter settings for each model and PEFT method. In general, we adjust hyperparameters to maintain a similar number of trainable parameters.
| Model | Method | Params (%) | Mem. (GB) | Latency (s) |
|---|---|---|---|---|
| 130M | State-offset Tuning (h) | 0.45 | 4.2 | 0.13 |
|  | LoRA | 0.35 | 5.44 | 0.18 |
| 370M | State-offset Tuning (h) | 0.42 | 9.36 | 0.33 |
|  | LoRA | 0.32 | 11.56 | 0.45 |
| 790M | State-offset Tuning (h) | 0.3 | 13.91 | 0.49 |
|  | LoRA | 0.23 | 17.17 | 0.61 |
| 1.4B | State-offset Tuning (h) | 0.23 | 18.77 | 0.67 |
|  | LoRA | 0.17 | 22.99 | 0.8 |
| 2.8B | State-offset Tuning (h) | 0.19 | 31.49 | 1.13 |
|  | LoRA | 0.14 | 37.84 | 1.33 |
Table 7: Training speed and memory usage. For each Mamba size, we compare the maximum memory usage and mean latency for processing a single batch during training. Our State-offset Tuning $(h)$ is compared against LoRA, as it demonstrated the most similar performance in the experiment section. We configure LoRA to use fewer trainable parameters than State-offset Tuning $(h)$. Despite this, State-offset Tuning $(h)$ still consumes less memory and trains faster.

FLOP Overhead While it is possible to avoid extra FLOP with LoRA in constrained single-task settings by merging the adapter weights into the pretrained model, real-world serving scenarios often require a single pretrained model to support multiple downstream tasks simultaneously via multiple LoRA adapters. In such cases, avoiding extra FLOP would require storing a separately merged model for each task in memory, an inefficient solution. Alternatively, merging weights dynamically at inference time introduces significant computational bottlenecks. As a result, many recent works focus on serving many LoRA adapters efficiently without weight merging (Sheng et al., 2023).
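For intuition, the per-layer cost of an unmerged LoRA branch can be estimated with simple FLOP counting; this is our own back-of-envelope sketch with made-up dimensions, not the ptflops measurement used for Table 8:

```python
# Back-of-envelope FLOP estimate for an unmerged LoRA branch on one linear
# layer, per token: the base matmul costs ~2*d_in*d_out FLOPs, while the
# rank-r LoRA path (x @ A, then @ B) adds ~2*r*(d_in + d_out).
# The hidden size and rank below are illustrative only.
def linear_flops(d_in: int, d_out: int) -> int:
    return 2 * d_in * d_out

def lora_extra_flops(d_in: int, d_out: int, r: int) -> int:
    # A is (d_in x r), B is (r x d_out); two skinny matmuls per token.
    return 2 * r * (d_in + d_out)

d = 2048                                   # hypothetical hidden size
base = linear_flops(d, d)
extra = lora_extra_flops(d, d, r=8)
print(f"relative overhead: {100 * extra / base:.3f}%")  # → relative overhead: 0.781%
```

For a square layer the ratio reduces to 2r/d, which is why low-rank adapters stay under 1% overhead here yet are still an order of magnitude costlier than an additive state offset.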
| Model | Method | GFLOP (L=128) | GFLOP (L=256) | GFLOP (L=512) | GFLOP (L=1024) | Relative (%) |
|---|---|---|---|---|---|---|
| 130M | Pretrained | 16.45 | 32.90 | 65.81 | 131.61 | 100.000 |
|  | State-offset Tuning (h) | 16.46 | 32.91 | 65.83 | 131.65 | +0.029 |
|  | LoRA | 16.61 | 33.21 | 66.42 | 132.84 | +0.937 |
| 370M | Pretrained | 47.35 | 94.69 | 189.39 | 378.77 | 100.000 |
|  | State-offset Tuning (h) | 47.36 | 94.72 | 189.44 | 378.87 | +0.027 |
|  | LoRA | 47.76 | 95.52 | 191.03 | 382.06 | +0.867 |
| 790M | Pretrained | 101.22 | 202.44 | 404.88 | 809.75 | 100.000 |
|  | State-offset Tuning (h) | 101.24 | 202.48 | 404.95 | 809.90 | +0.019 |
|  | LoRA | 101.84 | 203.67 | 407.34 | 814.67 | +0.608 |
| 1.4B | Pretrained | 175.23 | 350.45 | 700.90 | 1401.79 | 100.000 |
|  | State-offset Tuning (h) | 175.25 | 350.50 | 701.00 | 1401.99 | +0.014 |
|  | LoRA | 176.05 | 352.09 | 704.17 | 1408.35 | +0.468 |
| 2.8B | Pretrained | 353.66 | 707.32 | 1414.63 | 2829.25 | 100.000 |
|  | State-offset Tuning (h) | 353.70 | 707.40 | 1414.80 | 2829.59 | +0.012 |
|  | LoRA | 355.03 | 710.05 | 1420.09 | 2840.17 | +0.386 |
Table 8: FLOP overhead across various model sizes and sequence lengths. State-offset Tuning adds less than $0.03\%$ overhead, whereas LoRA incurs over $30\times$ more extra FLOP compared to ours.

Given these practical considerations, we evaluate LoRA without weight merging and conduct experiments comparing the additional FLOP of LoRA and our State-offset Tuning method during inference. We use ptflops (Sovrasov, 2018-2024) to measure the computational overhead. As shown in Table 8, our method adds less than $0.03\%$ overhead, while LoRA incurs more than 30 times the additional FLOP of our method. These results highlight the superior FLOP efficiency of our method compared to LoRA.

Mamba 2.8B Results Table 9 shows the experimental results using Mamba 2.8B. Our State-offset Tuning $(h)$ outperforms all methods except full fine-tuning.

Mamba Results on GLUE Dataset Table 10 shows the full results on the GLUE dataset using Mamba 130M. Our State-offset Tuning $(h)$ achieves the highest average score among all PEFT methods.

# G.2 Mamba-2 Results

Table 11 shows experimental results with Mamba-2 (Dao and Gu, 2024) models. State-offset Tuning $(h)$ with low-rank adaptation (Sec. C) significantly reduces the number of trainable parameters. It outperforms existing methods on the Spider benchmark by a large margin and achieves performance comparable to other approaches on the SAMSum and DART datasets.

# G.3 State-offset Tuning in SSMs vs. Prefix-Tuning in Transformers

To highlight the effectiveness of State-offset Tuning, we compare its performance with Prefix-Tuning on the Transformer model Pythia (Biderman et al., 2023). We conduct full fine-tuning and Prefix-Tuning experiments with Pythia 160M on GLUE tasks. The results are shown in Table 12.

Full fine-tuning on Mamba 130M generally surpasses Pythia 160M, consistent with Gu and Dao (2024). Prefix-Tuning on both Mamba and Pythia reaches about $85 - 90\%$ of their respective full fine-tuning performance.
Our State-offset Tuning achieves approximately $98\%$ of full fine-tuning performance, effectively closing the gap. This highlights how well its design is suited to SSM-based models.

# G.4 Comparison to Selective Dimension Tuning (SDT)

We additionally compare our method with Selective Dimension Tuning (SDT) (Galim et al., 2024), a technique derived from theoretical analysis of SSMs. Note that the hyperparameter selection differs from that used in Galim et al. (2024) to ensure the parameter count is more comparable to ours. As shown in Table 13, our method outperforms SDT in most cases while using fewer parameters.
Model size: Mamba 2.8B. Dataset: Spider.

| Type | Method | Params (%) | All | Easy | Medium | Hard | Extra |
|---|---|---|---|---|---|---|---|
| – | Full Fine-tuning (All) | 100.00 | 71.8 | 87.5 | 73.5 | 63.8 | 51.8 |
| – | Full Fine-tuning (S6) | 4.44 | 65.7 | 81.9 | 68.8 | 58.0 | 41.0 |
| Parameter based | LoRA | 0.38 | 63.9 | 86.3 | 68.2 | 49.4 | 34.3 |
|  | BitFit | 0.02 | 59.9 | 82.3 | 60.8 | 52.9 | 31.3 |
|  | Additional-scan | 0.28 | 35.0 | 62.0 | 31.9 | 27.4 | 12.1 |
| Prompt based | Prompt Tuning | 0.01 | 50.7 | 75.4 | 53.8 | 37.4 | 19.3 |
|  | Prefix-Tuning | 10.82 | 45.1 | 75.0 | 45.1 | 32.2 | 13.9 |
| State based | Initial State Tuning | 0.19 | 59.7 | 82.3 | 62.3 | 43.7 | 35.5 |
|  | State-offset Tuning (h) | 0.19 | 65.0 | 89.1 | 65.9 | 51.7 | 40.4 |
|  | State-offset Tuning (y) | 0.01 | 63.1 | 85.9 | 64.1 | 52.3 | 37.3 |

Table 9: Experimental results of fine-tuning the SSM module using pretrained Mamba 2.8B. State-offset Tuning $(h)$ stands out as the most effective method among all PEFT approaches.
Model size: Mamba 130M. Dataset: GLUE.

| Type | Method | Params (%) | RTE | MRPC | CoLA | SST-2 | QNLI | QQP | MNLI | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| – | Full Fine-tuning (All) | 100.00 | 71.1 | 80.6 | 63.2 | 92.2 | 87.4 | 87.9 | 80.8 | 80.5 |
| – | Full Fine-tuning (S6) | 4.31 | 69.7 | 78.9 | 59.1 | 91.5 | 88.1 | 87.5 | 80.5 | 79.3 |
| Parameter based | LoRA | 0.92 | 66.1 | 78.7 | 57.8 | 90.8 | 87.8 | 86.9 | 79.8 | 78.3 |
|  | BitFit | 0.06 | 69.5 | 80.4 | 54.7 | 92.0 | 86.2 | 85.3 | 77.2 | 77.9 |
|  | Additional-scan | 0.68 | 57.9 | 74.0 | 38.6 | 79.0 | 79.9 | 70.5 | 36.9 | 62.4 |
| Prompt based | Prompt Tuning | 0.04 | 56.0 | 71.6 | 12.0 | 89.4 | 76.8 | 79.6 | 61.5 | 63.8 |
|  | Prefix-Tuning | 22.69 | 67.5 | 75.7 | 43.4 | 91.5 | 83.4 | 83.1 | 35.6 | 68.6 |
| State based | Initial State Tuning | 0.45 | 66.8 | 78.4 | 53.0 | 92.4 | 86.4 | 86.1 | 78.5 | 77.4 |
|  | State-offset Tuning (h) | 0.45 | 67.4 | 80.8 | 56.2 | 91.9 | 87.7 | 85.6 | 79.7 | 78.5 |
|  | State-offset Tuning (y) | 0.03 | 70.0 | 79.6 | 52.5 | 91.7 | 86.3 | 85.6 | 78.2 | 77.7 |

Table 10: Full results of fine-tuning the SSM module on the GLUE dataset using pretrained Mamba 130M. Our State-offset Tuning $(h)$ achieves the highest average score among all PEFT methods.
Left block (Spider, SAMSum): Mamba-2 1.3B. Right block (DART): Mamba-2 130M.

| Type | Method | Params (%) | All | Easy | Medium | Hard | Extra | R1 | R2 | RL | Params (%) | MET. | BLEU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| – | Full Fine-tuning (All) | 100.00 | 64.8 | 85.9 | 65.7 | 54.0 | 42.2 | 51.0 | 26.9 | 42.5 | 100.00 | 66.6 | 34.9 |
| – | Full Fine-tuning (SSD) | 2.42 | 55.1 | 76.2 | 56.1 | 42.5 | 34.3 | 50.5 | 26.3 | 42.4 | 4.17 | 65.7 | 39.7 |
| Parameter based | LoRA | 0.37 | 45.4 | 69.0 | 44.4 | 37.4 | 21.1 | 49.7 | 25.9 | 41.7 | 0.76 | 70.3 | 49.6 |
|  | BitFit | 0.02 | 50.9 | 71.4 | 51.6 | 45.4 | 24.1 | 50.9 | 26.5 | 42.6 | 0.03 | 66.2 | 39.0 |
|  | Additional-scan | 0.47 | 31.9 | 57.3 | 30.5 | 23.0 | 7.2 | 43.0 | 20.1 | 34.8 | 0.91 | 58.5 | 16.0 |
| Prompt based | Prompt Tuning | 0.01 | 45.2 | 62.5 | 46.9 | 34.5 | 25.9 | 49.6 | 26.1 | 41.6 | 0.04 | 65.5 | 36.9 |
|  | Prefix-Tuning | 6.99 | 47.4 | 71.0 | 48.2 | 32.2 | 25.9 | 50.8 | 26.5 | 42.6 | 12.81 | 69.2 | 46.5 |
| State based | Initial State Tuning | 1.84 | 54.3 | 73.4 | 57.2 | 45.4 | 27.1 | 50.4 | 26.4 | 42.3 | 3.53 | 65.3 | 37.2 |
|  | State-offset Tuning (h) | 1.84 | 58.5 | 79.3 | 61.6 | 44.6 | 33.7 | 48.8 | 24.7 | 40.5 | 3.53 | 70.0 | 46.3 |
|  | State-offset Tuning (h) (low rank) | 0.35 | 60.5 | 79.0 | 65.7 | 52.3 | 27.7 | 50.4 | 26.8 | 42.5 | 0.72 | 69.8 | 47.9 |
|  | State-offset Tuning (y) | 0.01 | 43.6 | 66.5 | 42.1 | 36.9 | 21.1 | 50.3 | 26.2 | 42.2 | 0.03 | 65.9 | 38.7 |

Table 11: Experimental results of fine-tuning the SSM module using pretrained Mamba-2 (Dao and Gu, 2024) models. We evaluate Spider and its subsets with execution accuracy, SAMSum using ROUGE-1/2/L scores, and DART through METEOR and BLEU scores. State-offset Tuning $(h)$ with low-rank adaptation (Sec. C) significantly reduces trainable parameters. It outperforms existing methods on Spider by a wide margin and matches the performance of other approaches on SAMSum and DART.
| Model | Method | Params (%) | RTE | MRPC | CoLA | SST-2 | QNLI | QQP | MNLI | Avg. | Relative (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Pythia 160M | Full Fine-tuning | 100.00 | 64.3 | 77.0 | 20.5 | 88.7 | 85.0 | 88.8 | 79.2 | 71.9 | 100 |
|  | Prefix-Tuning | 8.36 | 57.4 | 75.0 | 4.6 | 88.2 | 81.5 | 80.6 | 62.2 | 64.2 | 89 |
| Mamba 130M | Full Fine-tuning | 100.00 | 71.1 | 80.6 | 63.2 | 92.2 | 87.4 | 87.9 | 80.8 | 80.5 | 100 |
|  | Prefix-Tuning | 22.69 | 67.5 | 75.7 | 43.4 | 91.5 | 83.4 | 83.1 | 35.6 | 68.6 | 85 |
|  | State-offset Tuning (h) | 0.45 | 67.4 | 80.8 | 56.2 | 91.9 | 87.7 | 85.6 | 79.7 | 78.5 | 98 |

Table 12: Prefix-Tuning experiments on Pythia 160M and Mamba 130M on GLUE tasks. State-offset Tuning for Mamba achieves approximately $98\%$ of full fine-tuning performance, while Prefix-Tuning reaches about $85 - 90\%$ in both SSM and Transformer architectures.
Left block (Spider, SAMSum): Mamba 1.4B. Right block (DART, GLUE Avg.): Mamba 130M.

| Method | Params (%) | All | Easy | Medium | Hard | Extra | R1 | R2 | RL | Params (%) | MET. | BLEU | GLUE Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SDT | 0.26 | 19.8 | 38.3 | 16.6 | 16.1 | 4.8 | 46.3 | 21.5 | 37.7 | 0.51 | 67.5 | 48.2 | 63.7 |
| State-offset Tuning (h) | 0.23 | 57.4 | 77.4 | 59.9 | 44.8 | 33.7 | 50.9 | 26.5 | 42.4 | 0.45 | 70.0 | 47.0 | 78.5 |
| State-offset Tuning (y) | 0.01 | 53.0 | 77.4 | 55.4 | 40.8 | 22.9 | 50.6 | 26.1 | 42.0 | 0.03 | 66.8 | 45.2 | 77.7 |
+ +Table 13: Comparison with Selective Dimension Tuning (SDT) (Galim et al., 2024) on Spider, SAMSum, DART, and GLUE. Our method outperforms SDT in most cases while using fewer parameters. Note that the hyperparameter configuration of SDT differs from that in Galim et al. (2024) to ensure a more comparable parameter count. \ No newline at end of file diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/images.zip b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ed24f22172ef8f63cb215ef3804691fe19d71eee --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4aa0be24d2e881410d8cf32cb35cda66c70281edab0307ab603934b9a06f209e +size 728928 diff --git a/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/layout.json b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..706847383c866067cbb896d1bf740bc8ebe01b19 --- /dev/null +++ b/ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edfdb15e0be5c14bac3beceedd6982b277daa17c964afe897a055d6e43f7a4fb +size 490439 diff --git a/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_content_list.json b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c1c06239e7e773a59a618cd196b164a4706748c8 --- /dev/null +++ 
b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b552f0abf6398589b6d1255c0d8c3bd0a11fd5ac682ea56e75123dd208bcf3f +size 81352 diff --git a/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_model.json b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d208d7f1744ff7e016298b8f27186de586473c42 --- /dev/null +++ b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:839e7aa82cd39de58067e4befc7a2945d945c07937331c5656a4e0aaeff3b558 +size 102685 diff --git a/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_origin.pdf b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..14a6a97eda20ad422cd603ea2d480657ae1edaa1 --- /dev/null +++ b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:726fb651946de0cc7a551327b689168522231eb17cb52ef7c2dbc758421c23ea +size 1713334 diff --git a/ACL/2025/Subword models struggle with word learning, but surprisal hides it/full.md b/ACL/2025/Subword models struggle with word learning, but surprisal hides it/full.md new file mode 100644 index 0000000000000000000000000000000000000000..13d1497cffcac4e67b9b5cb6f408b451f0c0927a --- /dev/null +++ b/ACL/2025/Subword models struggle with word learning, but surprisal 
hides it/full.md @@ -0,0 +1,272 @@ +# Subword models struggle with word learning, but surprisal hides it + +Bastian Bunzeck and Sina Zarrieß + +Computational Linguistics, Department of Linguistics + +Bielefeld University, Germany + +{bastian.bunzeck, sina.zarriess}@uni-bielefeld.de + +# Abstract + +We study word learning in subword and character language models with the psycholinguistic lexical decision task. While subword LMs struggle to discern words and non-words with high accuracy, character LMs solve this task easily and consistently. Only when supplied with further contexts do subword LMs perform similarly to character models. Additionally, when looking at word-level and syntactic learning trajectories, we find that both processes are separable in character LMs. Word learning happens before syntactic learning, whereas both occur simultaneously in subword LMs. This raises questions about the adequacy of subword LMs for modeling language acquisition and positions character LMs as a viable alternative to study processes below the syntactic level. + +# 1 Introduction + +When humans acquire their first language(s), they first learn to recognize single words, mostly from short, fragmentary utterances (Cameron-Faulkner et al., 2003; Bunzeck and Diessel, 2024), before fully understanding the grammatical processes governing them (Tomasello, 1992; Behrens, 2021). This simple fact about language acquisition has received surprisingly little attention in the body of work that treats LMs as models of language learners (Warstadt and Bowman, 2022; Portelance and Jasbi, 2024). While word learning in children is comparatively well studied (Plunkett, 1997; Yu and Ballard, 2007; Waxman and Gelman, 2009; Bergelson and Swingley, 2012; Clark and Casillas, 2015; Frank et al., 2021), the implicit word learning processes in LMs are not. 
Current studies focus on syntax (Mueller et al., 2022; Choshen et al., 2022), or investigate word learning in close connection with syntax through surprisal (Chang and Bergen, 2022; Portelance et al., 2023; Shafiabadi and Wisniewski, 2025; Ficarra et al., 2025). Architecture-wise, a key limitation to the precise study of word learning is subword tokenization (e.g., BPE, Gage, 1994), which splits words into linguistically (Arnett and Bergen, 2025) and cognitively implausible units (Beinborn and Pinter, 2023).

![](images/7ab142c3729b2e79fb187eee05fbf359ea012c191020a7628c42bd29423b46bb.jpg)
Figure 1: Illustration of word learning in human learners and transformer LLMs (top), and of our lexical decision test that probes discrimination of words from non-words (bottom). While human learners build up a mental lexicon from experience with language, artificial learners assign probabilities to strings based on their frequency.

To gauge word learning in a syntax-independent manner, we use the psycholinguistic lexical decision task (Meyer and Schvaneveldt, 1971; Le Godais et al., 2017), i.e., deciding which word in a given word/non-word pair is real. We find that models with character-level tokenization learn this task quickly and reliably. In contrast, subword LMs of all sizes perform significantly worse in a syntax-independent setting and only achieve comparable accuracy when stimuli are measured through surprisal, or "unexpectedness," in linguistic context. By comparing word and syntactic learning (measured via BLiMP, Warstadt et al., 2020), we further find that character models quickly acquire word knowledge and only later develop syntactic knowledge. In subword models, word and syntax learning happen concurrently. This demonstrates how elementary modeling decisions, such as tokenization methods, significantly impact learning trajectories in LMs, a fact that warrants more scrutiny when using LMs as models of language acquisition.
+ +# 2 Related work + +Word learning in humans is a multifaceted phenomenon that involves different kinds of (extra)linguistic knowledge (Waxman and Gelman, 2009): while phoneticians are concerned with word recognition and sequence segmentation (Jusczyk, 1999), developmental psychologists frequently equate word learning with correct reference to real world objects (Stager and Werker, 1997; Ackermann et al., 2020). Psycholinguists usually focus on the mental lexicon and learning which words belong to it (Goldinger, 1996), whereas usage-based scholars take into account children's productions and their ability to be competent language users, even with few words (Tomasello, 1992). + +Although aspects like word recognition and sequence segmentation have been studied in LMs (e.g. Goriely and Buttery, 2025a), the most common approach to word learning in LMs is measuring the predictability of words via surprisal (negative log-probability, Hale, 2001). Chang and Bergen (2022) train LMs on book texts and wiki data. They define a surprisal threshold below which words are said to be learned and find that frequent function words are learned earliest. Here, the alignment between models and real learners is questionable: children first utter nouns and verbs (Tomasello, 2000), but also rely on function words for challenges like speech segmentation (Dye et al., 2019). Portelance et al. (2023) show that in LSTMs trained on child-directed speech, surprisal correlates with word-level age of acquisition. Chang et al. (2024) observe that learning curves for surprisal values are stable for frequent tokens, while infrequent tokens are "forgotten" again over pre-training. Shafiabadi and Wisniewski (2025) introduce anti-surprisal (in incorrect contexts) to track false usage, which also fluctuates over pre-training. These studies cast word learning as the ability to anticipate words' expectedness in a given syntactic (and semantic) context. 
We note a certain conceptual leap relative to the original works on surprisal, where it is primarily viewed as an incremental measure of processing difficulty in syntactic comprehension (Levy, 2008; Demberg and Keller, 2009). A simple word like dog might be surprising and therefore hard to parse in some contexts, but very expected in others, independently of being already learned at the pure word level. A further methodological drawback of surprisal as a measure of word learning is that it corresponds almost directly to the next-token prediction objective LMs are trained on. This contrasts with typical probing paradigms used in the domain of syntax, which implement the idea of "challenging" models in minimal-pair set-ups that are not observed directly as string sequences in training, thereby testing abstracted, implicit linguistic knowledge rather than observed patterns in the data. In a similar vein, we want to probe the word knowledge of an LM at a fundamental level, beyond the surface-level word sequences that LMs are known to excel at predicting. We want to know whether the artificial learner knows that the word doggie exists in the English language, but moggie does not.

Lexical decision is widely used in human studies but remains an underexplored LM benchmark. Le Godais et al. (2017) show that character-based LSTMs achieve about $95\%$ accuracy on such tasks. Lavechin et al. (2023) find that speech-based LMs need significantly more input yet still perform worse $(56.8\%)$ than phoneme-level LSTMs $(75.4\%)$ on a phonetic dataset. For the same data, Goriely et al. (2024) find that GPT-2-based subword BabyLMs achieve $70\%$ accuracy while comparable character models reach nearly $90\%$. Finally, for another lexical decision dataset, Bunzeck et al. (2025) report near-perfect accuracy for character-based grapheme Llama models, while phoneme models perform at $60 - 70\%$.
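The forced-choice decision rule used in this line of work (pick the pair member with the lower average surprisal) can be sketched as follows; the per-token log-probabilities below are toy values, and a real implementation would obtain them from an LM, e.g. via the minicons library:

```python
# Forced-choice lexical decision: the LM "chooses" whichever string of a
# word/non-word pair has the lower average token surprisal (negative
# log-probability). The log-probs below are made up for illustration.
def mean_surprisal(token_logprobs):
    return -sum(token_logprobs) / len(token_logprobs)

def lexical_decision(word_logprobs, nonword_logprobs) -> str:
    """Return 'word' if the real word is preferred, else 'nonword'."""
    if mean_surprisal(word_logprobs) < mean_surprisal(nonword_logprobs):
        return "word"
    return "nonword"

# Toy values: the real word gets higher (less negative) log-probs per token.
doggie = [-2.1, -1.3, -0.8]   # hypothetical token log-probs for "doggie"
moggie = [-5.6, -2.9, -1.1]   # hypothetical token log-probs for "moggie"
print(lexical_decision(doggie, moggie))  # → word
```

Averaging over tokens (rather than summing) compensates for pairs whose members are split into different numbers of tokens under subword tokenization.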
# 3 Experiments

Models We train triplets of increasingly larger Llama models (Touvron et al., 2023) with character/subword tokenization on the BabyLM 10M corpus (Choshen et al., 2024). Training details are found in Appendix A. As ablations, we test subword Pythias (Biderman et al., 2023) and character/subword GPT-2 models (Goriely et al., 2024).

Test data We follow the idea of forced-choice lexical decision (Baddeley et al., 1993), where participants must decide which is real: an existing word or a synthesized non-word. We use wuggy (Keuleers and Brysbaert, 2010) to generate minimal pairs of words/non-words that differ in one or two syllables, akin to syntactic minimal pair tests such as BLiMP. We derive 1,000 non-words (e.g. monding) each from 1,000 high-frequency/low-frequency words (e.g. sending), which preserve syllable-bigram frequencies and match their origin words in length (cf. Appendix B).

| Tokenization | Model | Parameters | Data size | Lex. dec. (high) | Lex. dec. (low) | Surprisal (high) | Surprisal (low) | Anti-surp. (high) | Anti-surp. (low) |
|---|---|---|---|---|---|---|---|---|---|
| Subword (BPE) | Pythia | 14M | 825GB | 66.6 | 62.5 | 90.5 | 85.5 | 71.4 | 77.7 |
|  |  | 70M | 825GB | 72.5 | 68.8 | 94.5 | 94.0 | 77.0 | 83.6 |
|  |  | 160M | 825GB | 77.8 | 73.0 | 96.4 | 95.8 | 78.0 | 85.7 |
|  |  | 410M | 825GB | 81.9 | 78.1 | 97.7 | 97.9 | 77.1 | 84.1 |
|  |  | 1B | 825GB | 87.5 | 83.2 | 97.7 | 97.9 | 76.6 | 83.8 |
|  |  | 1.4B | 825GB | 87.8 | 81.6 | 97.9 | 97.9 | 76.5 | 84.7 |
|  | GPT-2 | 97.5M | 100M words | 35.6 | 79.1 | 99.0 | 99.2 | 84.7 | 86.9 |
|  | Llama | 2.51M | 10M words | 70.9 | 58.4 | 86.7 | 70.9 | 78.6 | 67.7 |
|  |  | 7.77M | 10M words | 79.5 | 63.2 | 91.3 | 78.1 | 81.1 | 72.9 |
|  |  | 30.03M | 10M words | 83.6 | 68.6 | 92.7 | 81.1 | 83.7 | 76.1 |
| Character | GPT-2 | 85.3M | 100M words | 98.7 | 97.3 | 99.8 | 99.4 | 98.0 | 96.3 |
|  | Llama | 0.49M | 10M words | 97.6 | 83.0 | 98.2 | 84.3 | 98.0 | 83.1 |
|  |  | 3.73M | 10M words | 98.9 | 90.2 | 99.4 | 90.3 | 98.5 | 88.8 |
|  |  | 21.94M | 10M words | 99.0 | 93.3 | 99.8 | 94.7 | 99.0 | 92.5 |

Table 1: Accuracy scores (in %) for (i) lexical decision, (ii) surprisal and (iii) anti-surprisal experiments

Lexical decision For a word/non-word pair $(w, *w)$, we measure $-\log(P(w|_{-}))$ and $-\log(P(*w|_{-}))$, i.e. how "surprised" an LM is by the word in the context of a pretended whitespace (and BOS token). If $-\log(P(w|_{-})) < -\log(P(*w|_{-}))$, the LM's lexical decision is correct. As autoregressive LMs are sequence prediction models, we need a preceding context for which we can calculate surprisal. A single whitespace is the most neutral starting token available (and for subword models it also signals that the first subword is word-initial). For all experiments, we calculate the average surprisal over all tokens of a word (which, in some cases, is characterized by a mismatch in token numbers between words and non-words, cf. Appendix B) with minicons (Misra, 2022).

Surprisal To measure LMs' knowledge of words presented in regular syntactic contexts, we calculate the surprisal of words and non-words $(w, *w)$ as $-\log(P(w_i|w_{<i}))$ […] therefore we release our own stimuli artifacts on Hugging-Face under the same license. The data contain no information that names or uniquely identifies individual people or offensive content, and are commonly used in computational linguistics.

Analysis of tokenization To further assess the influence that these frequency scores have on the resulting tokenization for our own models, we offer a brief analysis: Figure 4 shows a pairplot between three numerical variables – (i) the number of tokens that our original words are split into, (ii) the number of tokens that the derived non-words are split into and (iii) the corresponding frequency score from CELEX. While the three plots on the diagonal axis show a layered kernel density estimate (KDE) for
|  | small-char | medium-char | large-char | small-bpe | medium-bpe | large-bpe |
|---|---|---|---|---|---|---|
| Embedding size | 128 | 256 | 512 | 128 | 256 | 512 |
| Hidden size | 128 | 256 | 512 | 128 | 256 | 512 |
| Layers | 4 | 8 | 12 | 4 | 8 | 12 |
| Attention heads | 4 | 8 | 12 | 4 | 8 | 12 |
| Context size | 128 | 128 | 128 | 128 | 128 | 128 |
| Vocab. size | 102 | 102 | 102 | 8,002 | 8,002 | 8,002 |
| Parameters | 486,016 | 3,726,592 | 21,940,736 | 2,508,416 | 7,771,392 | 30,030,336 |
Table 2: Model hyperparameters for our self-trained Llama models

![](images/984bcc36cfffe7af9fec9925ba6ffaa2584ef68c178fdda4d216d1e335e241b9.jpg)
Figure 3: Loss curves for our self-trained Llama models

each individual variable, the other plots are scatterplots that visualize the relationships between the variables. The data points are colored by tokenization scheme.

In the upper-left and lower-right plots we can see that both real and non-words are split similarly within each of the two tokenization schemes. Words split by the BPE tokenizer tend to have fewer tokens, mostly between one and six. For the character-based tokenizer, a normal distribution is visible, with its peak at six tokens.

The upper-right and lower-left scatter plots show the relationship between the tokenization of real words and non-words. The character-level tokenization exhibits perfect alignment between both kinds of stimuli: they are always split into exactly the same number of tokens. Subword tokenization is slightly skewed towards the non-word tokens, meaning that non-words are more often split into more tokens than real words, although the reverse case is not uncommon either.

# C Full learning curves for BLiMP and word learning

Figure 5 shows the full learning curves for all phenomena included in BLiMP (individual syntactic paradigms belonging to one phenomenon are displayed in the same sub-figure) as well as for our own lexical benchmarks (each displayed in an individual sub-figure), for our six self-trained models and the six Pythia models that we compare them to. We fit a fifth-order polynomial curve to the individual data points and display it on a logarithmic scale.

It should be noted that we plot the checkpoint number on the x-axis.
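The token-count (mis)alignment from the tokenization analysis above can be reproduced with a toy greedy longest-match subword tokenizer; the vocabulary below is invented for illustration and is not a real BPE merge table:

```python
# Toy reproduction of the tokenization analysis above: character tokenization
# always yields equal token counts for a wuggy-style word/non-word pair,
# while a greedy longest-match subword tokenizer over an invented vocabulary
# (NOT an actual BPE vocabulary) need not.
SUBWORD_VOCAB = {"send", "ing", "s", "e", "n", "d", "i", "g", "m", "o"}

def subword_tokenize(word, vocab=SUBWORD_VOCAB):
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest match first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word!r}")
    return tokens

def char_tokenize(word):
    return list(word)

pair = ("sending", "monding")               # real word vs. wuggy-style non-word
subword_counts = [len(subword_tokenize(w)) for w in pair]   # → [2, 5]
char_counts = [len(char_tokenize(w)) for w in pair]         # → [7, 7]
```

Here the real word is covered by two frequent subwords while the non-word falls back to single characters, mirroring the skew towards more tokens for non-words seen in the pairplot.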
However, the amounts of actual textual data seen between these checkpoints differ vastly between our self-trained models (10M lexical tokens) and the Pythia models (825GB of textual data; as the dataset has since been taken down, no lexical token counts are possible anymore).

![](images/84d2bf2da27b5ba46d4a731977219173b36c0e104ff566a24ad6dee301b25396.jpg)
Figure 4: Pairplot displaying (i) number of tokens of words, (ii) frequency scores from CELEX (Baayen et al., 1995) and (iii) number of tokens of non-words (for BPE and character tokenization)

![](images/ce6382d527b0585212b3ad39bfb2b59a59d6e62e77aeac5df5ea6854ca684671.jpg)

![](images/02c946c1f440bc9d458e627273edc8f1c1dd63814c09b9f4f8fb1746726d0f42.jpg)

# D Correlations between word learning and syntactic learning

As an additional measure of commonalities between word learning and syntactic learning, we calculate Spearman-rank correlation scores between ordered accuracy scores for our lexical tasks and BLiMP paradigms. Table 3 shows the underlying numerical values for the correlation heatmap provided in Figure 6 (please note that the heatmap is rotated relative to the table). All scores are statistically significant $(p < 0.05)$. Due to the similar learning curves found in Figure 2, we average accuracy scores over all lexical phenomena (lexical decision and both surprisal settings), and then calculate correlations between them (both high and low frequency) and the coarse-grained BLiMP phenomena. For BPE models, lexical performance is highly correlated with more than half of the BLiMP phenomena. The character models show much weaker correlations with syntactic learning. This further confirms our findings about the strong entanglement of lexical and syntactic learning in subword models and their weaker ties in character models.

# E Final BLiMP scores for all models

We reproduce the final syntactic evaluation scores for all models incorporated in our lexical analyses in Table 4. Generally, scores improve with larger models and with more training data. Most strikingly, subword models are consistently superior to comparable character models trained on the same amount of data. These differences, however, are most pronounced for the small models trained on very little data, such as our Llama models trained on 10M tokens (7% for the smallest models, 2% for the largest). For the comparable GPT-2 models trained on 100M tokens, the gap becomes much smaller (0.4%).

# F Development of word/non-word differences

In Figure 7, we plot the average difference between word and non-word negative log-probability values across training, for both high-frequency and low-frequency data. Positive scores indicate a preference for real words.

![](images/30224b3d7f0c1da2e06e8a9b44a3c14efd056804e5b6524aea3b0811be26e107.jpg)
Figure 5: Learning curves for all paradigms in BLiMP and high/low frequency lexical decision data, separated for models (rows) and phenomenon sets (columns)

![](images/5c3729b650c0e888b04bc7e9bc65a4815695c68ffaf1099197036c823bc39219.jpg)
Figure 6: Correlation heatmap

For the character models, the differences are generally less pronounced and become most extreme at the end of pre-training (where accuracy scores no longer change), especially for the lexical decision data, which is already consistent at very early training stages. For the BPE models, we see that at the beginning they actually prefer non-words in the lexical decision task. Only after the first $10\%$ of training do they begin to discern words and non-words. While the overall tendencies remain the same for both frequency conditions, the absolute differences are generally lower and the
| BLiMP phenomenon | small-char (high) | small-char (low) | medium-char (high) | medium-char (low) | large-char (high) | large-char (low) | small-bpe (high) | small-bpe (low) | medium-bpe (high) | medium-bpe (low) | large-bpe (high) | large-bpe (low) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Anaphor agr. | -0.393 | -0.435 | 0.580 | 0.277 | 0.332 | 0.555 | 0.569 | 0.559 | 0.930 | 0.910 | 0.772 | 0.759 |
| Argument structure | 0.339 | 0.589 | 0.467 | 0.825 | 0.545 | 0.895 | 0.949 | 0.959 | 0.970 | 0.987 | 0.979 | 0.986 |
| Binding | -0.718 | -0.629 | 0.660 | 0.918 | 0.653 | 0.922 | 0.901 | 0.889 | 0.992 | 0.993 | 0.979 | 0.977 |
| Control raising | 0.904 | 0.837 | 0.909 | 0.776 | 0.791 | 0.892 | 0.777 | 0.780 | 0.974 | 0.974 | 0.930 | 0.936 |
| Det.-noun agr. | 0.686 | 0.870 | 0.524 | 0.869 | 0.521 | 0.890 | 0.989 | 0.989 | 0.990 | 0.993 | 0.994 | 0.989 |
| Ellipsis | -0.912 | -0.766 | -0.285 | 0.209 | 0.180 | 0.662 | 0.857 | 0.822 | 0.897 | 0.868 | 0.865 | 0.856 |
| Filler gap | -0.765 | -0.586 | -0.554 | -0.146 | -0.200 | 0.209 | -0.715 | -0.722 | -0.589 | -0.602 | -0.063 | -0.031 |
| Irregular forms | 0.612 | 0.724 | 0.507 | 0.856 | 0.397 | 0.787 | -0.116 | -0.098 | 0.636 | 0.662 | 0.816 | 0.832 |
| Island effects | -0.937 | -0.840 | -0.862 | -0.657 | -0.556 | -0.214 | -0.321 | -0.334 | 0.473 | 0.418 | -0.088 | -0.094 |
| NPI licensing | 0.535 | 0.641 | 0.748 | 0.782 | 0.616 | 0.568 | -0.425 | -0.407 | 0.520 | 0.526 | -0.069 | -0.049 |
| Quantifiers | -0.476 | -0.202 | 0.059 | 0.299 | 0.547 | 0.848 | 0.789 | 0.767 | 0.779 | 0.770 | 0.716 | 0.695 |
| Subj.-verb agr. | -0.386 | -0.400 | 0.329 | 0.529 | 0.351 | 0.730 | 0.963 | 0.961 | 0.982 | 0.981 | 0.967 | 0.974 |
Table 3: Spearman-rank correlation scores between ordered accuracy scores for our lexical tasks and BLiMP paradigms
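Table 3's entries are Spearman rank correlations between the ordered accuracy scores of the lexical tasks and each BLiMP paradigm. As a reminder of what the statistic measures, here is a minimal pure-Python computation (illustrative only; the analysis presumably used a standard statistics library):

```python
def ranks(xs):
    """Rank values 1..n, assigning the average rank to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Identically ordered score lists yield 1.0 and fully reversed orderings yield -1.0, matching the sign conventions in the table.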
| Tok. | Model | Params | BLiMP score |
| --- | --- | --- | --- |
| Subword (BPE) | Pythia | 14M | 65.86% |
| | | 70M | 73.30% |
| | | 160M | 77.50% |
| | | 410M | 81.63% |
| | | 1B | 82.21% |
| | | 1.4B | 81.92% |
| | GPT-2 | 85M | 77.80% |
| | Llama | 2.51M | 59.80% |
| | | 7.77M | 64.55% |
| | | 30.03M | 64.56% |
| Character | GPT-2 | 85M | 77.40% |
| | Llama | 0.49M | 52.69% |
| | | 3.73M | 51.07% |
| | | 21.94M | 62.14% |
Table 4: BLiMP scores for all models

differences between the curves are less pronounced in the low-frequency setting.

![](images/1f4b6e7998db6c82e06e8a9b44a3c14efd056804e5b6524aea3b0811be26e107.jpg)
(a) High-frequency data

![](images/fd84237e78265bc7dbe33561fec41bb2b63e13bded5300e3a3c40181362dde95.jpg)
(b) Low-frequency data

Figure 7: Average differences between surprisal values across pretraining

# SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement

Runnan Fang\*, Xiaobin Wang $^{\text{♥}}$ , Yuan Liang\*, Shuofei Qiao\*, Jialong Wu $^{\text{♥}}$ , Zekun Xi\*, Ningyu Zhang $^{\text{♥*}}$ , Yong Jiang $^{\text{♥*}}$ , Pengjun Xie $^{\text{♥}}$ , Fei Huang $^{\text{♥}}$ , Huajun Chen $^{\text{♥*}}$

$\spadesuit$ Zhejiang University $\text{♥}$ Alibaba Group

\*Zhejiang Key Laboratory of Big Data Intelligent Computing

{rolnan,zhangningyu}@zju.edu.cn

# Abstract

In the interaction between agents and their environments, agents expand their capabilities by planning and executing actions. However, LLM-based agents face substantial challenges when deployed in novel environments or required to navigate unconventional action spaces. To empower agents to autonomously explore environments, optimize workflows, and enhance their understanding of actions, we propose SynWorld, a framework that allows agents to synthesize possible scenarios with multi-step action invocation within the action space and to perform Monte Carlo Tree Search (MCTS) exploration to effectively refine their action knowledge in the current environment. Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments.

# 1 Introduction

By leveraging decision-making capabilities to execute task-oriented actions within dynamic environments, Large Language Model (LLM) based agents demonstrate enhanced environmental interactivity and operational versatility (Song et al., 2023a; Liu et al., 2024b; Wu et al., 2025; Xi et al., 2025; Shi et al., 2025; Qu et al., 2025).
In the real world, agents perform actions by leveraging tools like web search engines (Fan et al., 2024; Zhao et al., 2024a; Ning et al., 2025) or API calls (Liu et al., 2024a; Tao et al., 2024) to access feedback from the real world, which addresses the static knowledge limitations of LLMs and facilitates a deeper comprehension of the real world. It is crucial for agents to learn how to plan and execute actions in the environment. Nonetheless, as the complexity of tasks increases and novel environments emerge, manually annotated environment descriptions and predefined action documents (Qu

![](images/5d0c881a5ae254499ecca9a8601a09af9ca979945f310a602a2cc26de5396330.jpg)
Figure 1: Our method uses exploration to refine action knowledge in synthesized scenarios.

et al., 2024; Sri et al., 2024; Zhang et al., 2025) for agents are often inconsistent with the actual environmental conditions and action usage (Liu et al., 2024d; Huang et al., 2024a). Refining well-defined and aligned descriptions of the environment and actions is time-consuming and labor-intensive.

Therefore, to master unfamiliar actions and complicated task requirements in new, complex environments, refinement of agentic action knowledge is essential. Previous studies have explored the acquisition of action knowledge through feedback in scenarios synthesized by LLMs. Similar to the way humans acquire skills through trial and error, agents can also optimize the descriptions of actions by leveraging feedback from simulated scenarios (Yuan et al., 2024; Du et al., 2024; Bouzenia et al., 2024). However, these methods exhibit two critical limitations: (1) the synthetic scenarios they utilize are often restricted to single actions, which hinders agents from learning workflows suitable for these tasks, and (2) the linear iterative optimization process lacks a clear direction for improvement, making it susceptible to stagnation and quickly reaching its performance ceiling.

To address these limitations, we propose a new framework, SynWorld, designed to assist agents in learning unfamiliar actions in new environments, as shown in Figure 1. SynWorld first synthesizes virtual scenarios involving multiple coordinated actions. Then, through iterative MCTS optimization during the exploration of virtual scenarios, the framework enables more thorough and bidirectional refinement between action descriptions and workflow patterns, ensuring better alignment with environmental constraints. Experiments demonstrate that action knowledge can be learned in virtual environments and effectively generalized to the real world, with optimization through MCTS exploration.

# 2 Background

# 2.1 Agent Planning

An agent interacts with its environment by perceiving its state, selecting actions to achieve a goal, and learning from feedback in the form of rewards. Its framework consists of a state space $\mathcal{S}$ that represents the environment's properties, an action space $\mathcal{A}$ that defines allowable interactions, and an observation space $\Omega$ for perceptual inputs. Progress toward task $T$ is measured through a reward function $\mathcal{R}$. Central to decision-making is a planning mechanism $\mathcal{P}_{\theta}$, where $\pi_{\theta}$ denotes the model with fixed weights $\theta$. The agent's architecture is defined by the tuple:

$$
\mathcal{P}_{\theta} = \pi_{\theta}(\mathcal{S}, \mathcal{A}, \Omega, \mathcal{R}) \tag{1}
$$

This formula delineates the manner in which an agent assesses its current state and interprets environmental feedback to generate plans.

# 2.2 Action Knowledge

Action knowledge $\mathcal{A}\mathcal{K}$ serves as the strategic foundation governing an agent's adaptive behavior in dynamic and unfamiliar environments. It comprises action descriptions (awareness of executable actions) together with cognitive workflows (task decomposition and action sequences).
# 3 Method

In this section, we first detail how the action space is used to synthesize scenarios and their objectives. We then describe how MCTS is applied to explore and discover action knowledge within these synthesized scenarios. The SynWorld framework is shown in Figure 2.

# 3.1 Scenario Synthesis

To address generalization challenges in multi-step tool operationalization, we propose a framework that synthesizes scenarios through tool-conditioned task generation. Our methodology formalizes scenario synthesis as:

$$
\mathcal{S}(t) = \left\{ (\mathcal{B}, \mathcal{G}) \mid t \subseteq T \right\}, \tag{2}
$$

where $t$ is a subset of tools selected by the LLM from the complete tool set $T$ to design a scenario. Each scenario comprises two parts: the background $\mathcal{B}$, the contextual scenario specifying initial conditions and constraints; and the goal $\mathcal{G}$, the terminal objective requiring tool-mediated resolution. We provide examples in a few-shot manner to enable the LLM to synthesize queries.

The mapping enforces that distinct tool combinations yield nontrivial scenario variations through systematic $\mathcal{B}$-$\mathcal{G}$ pairings. Each group of selected tools generates 2-3 scenarios. To ensure data diversity, a newly generated scenario is excluded if its similarity to any already synthesized scenario exceeds a threshold $\epsilon$; retained scenarios satisfy:

$$
d\left( (\mathcal{B}_{i}, \mathcal{G}_{i}), (\mathcal{B}_{j}, \mathcal{G}_{j}) \right) < \epsilon. \tag{3}
$$

Through this process, we obtain a large number of synthetic scenarios, where the selected tools serve as the "gold tools" for completing the corresponding virtual scenario and are later used for evaluation.

# 3.2 Action Knowledge Exploration

**Initialization** The root node is initialized with predefined action knowledge, which serves as the foundation for task-solving logic. During the MCTS process, the UCB algorithm is used to select nodes, effectively balancing exploration and exploitation by choosing the node with the highest upper confidence bound.

**Expansion** Upon selecting node $N_{i}$ as the candidate, an optimization process is initiated that retraces $N_{i}$ to obtain insights from previous optimization experiences $\mathcal{E}$. Each past optimization experience is composed of three elements: the pre-optimization score $S_{\text{before}}$, the post-optimization score $S_{\text{after}}$, and the modification $\mathcal{M}$ of the optimization actions taken.

$$
\mathcal{E} = \left\{ \left( S_{\text{before}}^{i}, S_{\text{after}}^{i}, \mathcal{M}^{i} \right) \mid N_{i} \in \operatorname{Path}(N, N_{0}) \right\} \tag{4}
$$

![](images/7a3795b8757cde584cf741d3eea3f107f5797326c9191c85e9ad41dd2db7ea4f.jpg)
Figure 2: The overall framework of SynWorld: we first extract composable tools from the toolkit to generate new scenes and tasks. Then, we allow agents to explore the synthesized virtual scenes using MCTS to optimize action knowledge, thereby learning how to execute actions and plan tasks.

![](images/56838ec86cf884d122cb64d7e01364cffb550f82696d9aad3fc1a559b8c34d25.jpg)

Based on the optimization experiences and exploration trajectories $Tra$ from the past, the LLM-based agent $\pi$ analyzes the discrepancies between the existing action knowledge and the environment. It then optimizes the action knowledge to produce an updated version:

$$
\mathcal{AK}_{new} = \pi_{\theta}(\mathcal{AK}_{old}, \mathcal{E}, Tra) \tag{5}
$$

**Feedback Collection** Once equipped with the optimized $\mathcal{A}\mathcal{K}$, the agent $\pi$ explores the environment to perform tasks. For each individual task $T$, the agent interacts with the environment and receives feedback in the form of the trajectory $Tra_{i}$ and the final reward score $S_{i}$. The score depends on the evaluation method of the task.

$$
Tra_{i}, S_{i} = Env(\mathcal{AK}, \pi) \tag{6}
$$

# 4 Experiment

# 4.1 Experiment Setup

Datasets and Baselines To demonstrate the efficiency of our approach in optimizing action knowledge, we selected two datasets, ToolBench (Qin et al., 2024) and HotpotQA (Yang et al., 2018), each offering unique challenges for a comprehensive evaluation. Following Qu et al. (2024), several strong methods are selected as our baselines, including ReAct (Yao et al., 2023), Self-Refine (Madaan et al., 2023), EasyTool (Yuan et al., 2024), and DRAFT (Qu et al., 2024). See detailed settings and evaluation in Appendix B.
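The node-selection step of §3.2 relies on the standard UCB rule. A minimal sketch of that selection (the constant $c$ and the data layout are our own illustrative choices, not SynWorld's code):

```python
import math

def ucb(node_value, node_visits, parent_visits, c=1.41):
    """Upper confidence bound for a child node.

    node_value / node_visits is the exploitation term; the sqrt term grows for
    rarely visited nodes, balancing exploration against exploitation.
    """
    if node_visits == 0:
        return float("inf")  # always try unvisited children first
    return node_value / node_visits + c * math.sqrt(math.log(parent_visits) / node_visits)

def select(children, parent_visits):
    """children: list of (accumulated_value, visit_count) statistics.
    Returns the index of the child with the highest UCB score."""
    return max(range(len(children)),
               key=lambda i: ucb(children[i][0], children[i][1], parent_visits))
```

In SynWorld's loop, each node would carry a candidate version of the action knowledge, with its value accumulated from the reward scores $S_i$ of Eq. (6).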
| Model | Method | ToolBench PASS | ToolBench WIN | HotpotQA |
| --- | --- | --- | --- | --- |
| GPT-4-turbo | ReAct | 50.67 | 67.00 | 54.61 |
| | Self-Refine | 56.80 | **73.00** | 55.85 |
| | EasyTool | 51.67 | 68.00 | 58.19 |
| | DRAFT | 54.83 | 72.00 | 57.71 |
| | Ours | **59.33** | **73.00** | **59.93** |
| Qwen-long | ReAct | 48.30 | 71.00 | 52.00 |
| | Self-Refine | 53.70 | 77.00 | 56.10 |
| | EasyTool | 50.80 | 63.00 | 58.34 |
| | DRAFT | 54.20 | 79.00 | 53.23 |
| | Ours | **57.20** | **81.00** | **59.91** |
| Qwen2-72B-Instruct | ReAct | 49.43 | 55.00 | 50.21 |
| | Self-Refine | 54.33 | 65.00 | 52.59 |
| | EasyTool | 52.97 | 58.00 | 54.94 |
| | DRAFT | 56.43 | 69.00 | 57.57 |
| | Ours | **58.52** | **73.00** | **58.70** |
Table 1: Main results of SynWorld compared to other baselines on ToolBench and HotpotQA. The best results for each model are marked in bold. PASS denotes the pass rate and WIN denotes the win rate of the trajectory compared to GPT-3.5-turbo with the ReAct method.

# 4.2 Main Results

On ToolBench, which requires the combined use of multiple tools, our approach achieved a PASS score of 59.33 and a WIN score of 73.00 (Table 1), a significant improvement over other iterative optimization methods that demonstrates the advantages of our method in tool combination and task-planning optimization. On HotpotQA, where only a single tool is used but continuous multi-hop calls are required, our method also achieved state-of-the-art results. This

![](images/c64d5b1b6c5b0ba7b0f396de4593e73b0470d8fa5ded5304c24b215b9b585ab8.jpg)
Figure 3: The variation in the pass rate of agents on ToolBench in relation to the number of exploration scenarios.

indicates that we have not only aligned tool descriptions with the environment but also succeeded in generating a generalizable planning workflow.

# 4.3 Ablation Study

As shown in Table 2, independently optimizing either the workflow or the tool description with MCTS has its limitations; combining the optimization of both aspects leads to more effective results. An aligned tool description is beneficial for constructing a more reasonable workflow, while a well-structured, general workflow also enhances the exploration of tool usage. We believe this synergy arises during the iterative optimization process, where the improved workflow can help identify tool usage that is closer to the correct trajectory, serving as strong negative examples to further refine the tool description. Conversely, a superior tool description enables the model to generate workflows that are more aligned with the environment.
| Model | Method | Pass Rate | Δ |
| --- | --- | --- | --- |
| GPT-4-turbo | SynWorld | 59.33 | |
| | w/o Workflow | 56.33 | -3.00 |
| | w/o Description | 53.16 | -6.17 |
| Qwen-long | SynWorld | 57.20 | |
| | w/o Workflow | 57.00 | -0.20 |
| | w/o Description | 53.83 | -3.37 |
Table 2: Ablation experiment results

# 4.4 Further Analysis

More simulated data enables more precise virtual scenario synthesis, better-optimized action knowledge, and ultimately improved agent performance. In our experiments, we explore action knowledge by synthesizing varying numbers of virtual scenarios. As shown in Figure 3, we find that as the

![](images/d4378e8019442984f175dbc26574900f3de0e96b65892b4dee03faee7a1e7789.jpg)
Figure 4: Changes in ToolBench pass rates in virtual and real-world scenarios with the number of iterative optimizations performed in the virtual environment.

number of synthesized scenarios increases, the performance of the agent shows a corresponding upward trend. Specifically, within the range of 0 to 100 scenarios, the model's performance continues to improve as scenarios are added, indicating that action knowledge is indeed learnable. Although the rate of improvement slows as the number of scenarios increases, performance remains on an upward trajectory. This suggests that learning action knowledge from synthesized scenarios is scalable.

Virtual scenario policies can be generalized to unseen environments and improved with iterations. By analyzing the relationship between action knowledge iterations and pass rates on ToolBench in both virtual and real environments, we find that the action knowledge gained in the virtual setting is generalizable and effective in real-world applications. Performance trends in both environments are similar (Figure 4). We observe a consistent upward trend in scores, particularly between 0 and 10 iterations, indicating that action knowledge can be optimized through environmental feedback. However, as iterations increase, the gains diminish, and we occasionally note slight declines in performance. This is likely due to the limitations of exploring a fixed number of scenarios, where further iterations have less impact and increasing complexity can hinder understanding.

# 5 Conclusion

In this paper, we propose SynWorld, a novel framework that synthesizes scenes requiring multiple action steps and enhances agent action optimization through exploration in the synthetic virtual scenarios. By systematically exploring diverse synthetic scenarios, our model achieves precise alignment between action descriptions and environmental contexts while identifying task-specific workflows.

# Limitations

We initially conduct empirical validation on two benchmarks: ToolBench (involving multi-tool calling scenarios) and HotpotQA (requiring multi-step action execution). While these demonstrate our method's effectiveness, broader validation across diverse real-world applications remains valuable. Promising candidates include web-based search tasks, simulated environments, and so on.

Our approach currently incurs non-trivial computational overhead due to the token-intensive virtual scenario synthesis process. The exploration phase further compounds this by exhaustively enumerating all possible scenarios. Future research should prioritize token efficiency by (1) developing more economical synthesis mechanisms for high-quality virtual scenarios and (2) establishing effective filtering criteria to identify the most pedagogically valuable scenarios.

The current action knowledge representation employs a purely text-based format. This presents opportunities to investigate alternative structured representations that could enhance reasoning capabilities, such as tabular organization of action parameters or executable code snippets encapsulating procedural knowledge.

# Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 62206246, No.
NSFCU19B2027), the Fundamental Research Funds for the Central Universities (226-2023-00138), Yongjiang Talent Introduction Programme (2021A-156-G), CIPSC-SMP-Zhipu Large Model Cross-Disciplinary Fund, Ningbo Natural Science Foundation (2024J020), Information Technology Center and State Key Lab of CAD&CG, Zhejiang University. We gratefully acknowledge the support of Zhejiang University Education Foundation Qizhen Scholar Foundation. + +# References + +Islem Bouzenia, Premkumar Devanbu, and Michael Pradel. 2024. Repairagent: An autonomous, llmbased agent for program repair. arXiv preprint arXiv:2403.17134. + +Huajun Chen. 2023. Large knowledge model: Perspectives and challenges. arXiv preprint arXiv:2312.02706. +Yu Du, Fangyun Wei, and Hongyang Zhang. 2024. Anytool: Self-reflective, hierarchical agents for largescale api calls. arXiv preprint arXiv:2402.04253. +Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, et al. 2024. Agent ai: Surveying the horizons of multimodal interaction. arXiv preprint arXiv:2401.03568. +Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting llms: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024, Barcelona, Spain, August 25-29, 2024, pages 6491-6501. ACM. +Sihao Hu, Tiansheng Huang, Fatih Ilhan, Selim Tekin, Gaowen Liu, Ramana Kompella, and Ling Liu. 2024. A survey on large language model-based game agents. arXiv preprint arXiv:2404.02039. +Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, and Sarath Chandar. 2024a. Towards practical tool usage for continually learning llms. arXiv preprint arXiv:2404.09339. +Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. 2024b. 
Understanding the planning of llm agents: A survey. arXiv preprint arXiv:2402.02716. +Ziyan Jiang, Xueguang Ma, and Wenhu Chen. 2024. Longrag: Enhancing retrieval-augmented generation with long-context llms. arXiv preprint arXiv:2406.15319. +Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, et al. 2024. Personal llm agents: Insights and survey about the capability, efficiency and security. arXiv preprint arXiv:2401.05459. +Bang Liu, Xinfeng Li, Jiayi Zhang, Jinlin Wang, Tanjin He, Sirui Hong, Hongzhang Liu, Shaokun Zhang, Kaitao Song, Kunlun Zhu, Yuheng Cheng, Suyuchen Wang, Xiaojiang Wang, Yuyu Luo, Haibo Jin, Peiyan Zhang, Ollie Liu, Jiaqi Chen, Huan Zhang, Zhaoyang Yu, Haochen Shi, Boyan Li, Dekun Wu, Fengwei Teng, Xiaojun Jia, Jiawei Xu, Jinyu Xiang, Yizhang Lin, Tianming Liu, Tongliang Liu, Yu Su, Huan Sun, Glen Berseth, Jianyun Nie, Ian Foster, Logan Ward, Qingyun Wu, Yu Gu, Mingchen Zhuge, Xiangru Tang, Haohan Wang, Jiaxuan You, Chi Wang, Jian Pei, Qiang Yang, Xiaoliang Qi, and Chenglin Wu. 2025. Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems. Preprint, arXiv:2504.01990. + +Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang, Jianfeng Gao, and Chunyuan Li. 2024a. Llava-plus: Learning to use tools for creating multimodal agents. In Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XLVII, volume 15105 of Lecture Notes in Computer Science, pages 126-142. Springer. +Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2024b. Agentbench: Evaluating llms as agents. 
In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net. +Yanming Liu, Xinyue Peng, Yuwei Zhang, Jiannan Cao, Xuhong Zhang, Sheng Cheng, Xun Wang, Jianwei Yin, and Tianyu Du. 2024c. Tool-planner: Dynamic solution tree planning for large language model with tool clustering. arXiv preprint arXiv:2406.03807. +Zeyu Leo Liu, Shrey Pandit, Xi Ye, Eunsol Choi, and Greg Durrett. 2024d. Codeupdatearena: Benchmarking knowledge editing on API updates. CoRR, abs/2407.06249. +Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023. +Tula Masterman, Sandi Besen, Mason Sawtell, and Alex Chao. 2024. The landscape of emerging ai agent architectures for reasoning, planning, and tool calling: A survey. arXiv preprint arXiv:2404.11584. +Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2022. A survey on multi-hop question answering and generation. arXiv preprint arXiv:2204.09140. +Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, Wenqi Fan, Xiao-yong Wei, Shanru Lin, Hui Liu, Philip S Yu, et al. 2025. A survey of webagents: Towards next-generation ai agents for web automation with large foundation models. arXiv preprint arXiv:2503.23350. +Siqi Ouyang and Lei Li. 2023. Autoplan: Automatic planning of interactive decision-making tasks with large language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, Singapore, December 6-10, 2023, pages 3114-3128. Association for Computational Linguistics. 
+ +Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, and Huajun Chen. 2024. Autoact: Automatic agent learning from scratch via self-planning. arXiv preprint arXiv:2401.05268. +Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. Toollm: Facilitating large language models to master 16000+ real-world apis. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net. +Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2024. From exploration to mastery: Enabling llms to master tools via self-driven interactions. CoRR, abs/2410.08197. +Xiaoye Qu, Yafu Li, Zhaochen Su, Weigao Sun, Jianhao Yan, Dongrui Liu, Ganqu Cui, Daizong Liu, Shuxian Liang, Junxian He, et al. 2025. A survey of efficient reasoning for large reasoning models: Language, multimodality, and beyond. arXiv preprint arXiv:2503.21614. +Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Telsex. 2022. Planning with large language models via corrective re-prompting. In NeurIPS 2022 Foundation Models for Decision Making Workshop. +Weizhou Shen, Chenliang Li, Hongzhan Chen, Ming Yan, Xiaojun Quan, Hehong Chen, Ji Zhang, and Fei Huang. 2024. Small llms are weak tool learners: A multi-llm agent. arXiv preprint arXiv:2401.07324. +Yucheng Shi, Wenhao Yu, Wenlin Yao, Wenhu Chen, and Ninghao Liu. 2025. Towards trustworthy gui agents: A survey. arXiv preprint arXiv:2503.23434. +Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. 2021. Alfworld: Aligning text and embodied environments for interactive learning. 
In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. +Simranjit Singh, Andreas Karatzas, Michael Fore, Iraklis Anagnostopoulos, and Dimitrios Stamoulis. 2024. An llm-tool compiler for fused parallel function calling. arXiv preprint arXiv:2405.17438. +Chan Hee Song, Brian M. Sadler, Jiaman Wu, Wei-Lun Chao, Clayton Washington, and Yu Su. 2023a. Llmplanner: Few-shot grounded planning for embodied agents with large language models. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 2986-2997. IEEE. + +Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. 2023b. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2998-3009. +S Deepika Sri, Raja CSP Raman, Gopinath Rajagopal, S Taranath Chan, et al. 2024. Automating rest api postman test cases using llm. arXiv preprint arXiv:2404.10678. +Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, and Chao Zhang. 2023. Adaplanner: Adaptive planning from feedback with language models. Advances in neural information processing systems, 36:58202-58245. +Chunliang Tao, Xiaojing Fan, and Yahe Yang. 2024. Harnessing llms for api interactions: A framework for classification and synthetic data generation. arXiv preprint arXiv:2409.11703. +Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345. +Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023a. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091. +Ruoyao Wang, Peter A. 
Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. 2022. Scienceworld: Is your agent smarter than a 5th grader? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11279-11298. Association for Computational Linguistics. +Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. 2023b. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. CoRR, abs/2302.01560. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837. +Jialong Wu, Wenbiao Yin, Yong Jiang, Zhenglin Wang, Zekun Xi, Runnan Fang, Deyu Zhou, Pengjun Xie, and Fei Huang. 2025. Webwalker: Benchmarking llms in web traversal. arXiv preprint arXiv:2501.07572. +Zekun Xi, Wenbiao Yin, Jizhan Fang, Jialong Wu, Runnan Fang, Ningyu Zhang, Jiang Yong, Pengjun Xie, Fei Huang, and Huajun Chen. 2025. Omnithink: Expanding knowledge boundaries in machine writing through thinking. Preprint, arXiv:2501.09751. + +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369-2380. Association for Computational Linguistics. +Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744-20757. +Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. 
React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Kan Ren, Dongsheng Li, and Deqing Yang. 2024. EASYTOOL: enhancing llm-based agents with concise tool instruction. CoRR, abs/2401.06201.
Xilin Zhang, Zhixin Mao, Ziwen Chen, and Shen Gao. 2024. Effective tool augmented multi-agent framework for data analysis. Data Intelligence, 6(4):923-945.
Zhuosheng Zhang, Yao Yao, Aston Zhang, Xiangru Tang, Xinbei Ma, Zhiwei He, Yiming Wang, Mark Gerstein, Rui Wang, Gongshen Liu, et al. 2025. Igniting language intelligence: The hitchhiker's guide from chain-of-thought reasoning to language agents. ACM Computing Surveys, 57(8):1-39.
Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna K Qiu, and Lili Qiu. 2024a. Retrieval augmented generation (rag) and beyond: A comprehensive survey on how to make your llms use external data more wisely. arXiv preprint arXiv:2409.14924.
Suifeng Zhao, Tong Zhou, Zhuoran Jin, Hongbang Yuan, Yubo Chen, Kang Liu, and Sujian Li. 2024b. Awecita: Generating answer with appropriate and well-grained citations using llms. Data Intelligence, 6(4):1134-1157.
Yuyue Zhao, Jiancan Wu, Xiang Wang, Wei Tang, Dingxian Wang, and Maarten De Rijke. 2024c. Let me do it for you: Towards llm empowered recommendation via tool learning. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1796-1806.
Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, et al. 2023. Agents: An open-source framework for autonomous language agents. arXiv preprint arXiv:2309.07870.

Wangchunshu Zhou, Yixin Ou, Shengwei Ding, Long Li, Jialong Wu, Tiannan Wang, Jiamin Chen, Shuai Wang, Xiaohua Xu, Ningyu Zhang, et al. 2024.
Symbolic learning enables self-evolving agents. arXiv preprint arXiv:2406.18532.
Yuqi Zhu, Shuofei Qiao, Yixin Ou, Shumin Deng, Ningyu Zhang, Shiwei Lyu, Yue Shen, Lei Liang, Jinjie Gu, and Huajun Chen. 2024. KnowAgent: Knowledge-augmented planning for llm-based agents. arXiv preprint arXiv:2403.03101.

# A Related Works

# A.1 Agent Planning

Recent studies have shown that in the realm of complex task-solving (Ouyang and Li, 2023; Sun et al., 2023; Liu et al., 2024c, 2025), the capacity for planning and refinement within large models has become increasingly pivotal. The field has transitioned from early methods such as CoT (Wei et al., 2022) and Plan-and-Solve (Wang et al., 2023a), which tackle tasks sequentially, to the sophisticated agentic workflows of today, where model planning is instrumental in addressing a myriad of complex tasks, including question answering (QA) (Mavi et al., 2022), embodied interaction (Yao et al., 2022), tool invocation (Masterman et al., 2024), and long-form text generation (Jiang et al., 2024).

However, initial planning efforts are often deficient because of the complexity of environments. When faced with an unfamiliar environment, relying solely on human-written task descriptions, without interacting with the environment, often leads to plans that are misaligned with the actual tasks, or to plans that seem reasonable but fail during execution due to a lack of accurate action knowledge. Consequently, there has been a surge in research (Song et al., 2023b; Wang et al., 2023b) focused on refining plans and workflows. These efforts typically leverage direct environmental feedback or design a reward score for end-to-end plan correction, but they often lack a medium for the intermediate processes, which obscures the transparency of plan refinement. Moreover, the refinement of plans and the collection of feedback are usually linear and iterative (Qu et al., 2024), resulting in low efficiency and a lack of diversity.
# A.2 Knowledge-Augmented Agents

LLMs, as agents interacting with specific environments, often need to provide action signals to these environments (Zhou et al., 2023, 2024; Durante et al., 2024). These action signals can be either restricted or open actions related to the environment (Wang et al., 2024; Li et al., 2024; Hu et al., 2024; Zhang et al., 2024). For instance, they might involve specific movements in embodied task scenarios or the use of various tools, such as search or OCR, in tool invocation. On the one hand, incorporating these actions gives the agent the ability to interact with the environment, allowing LLMs to transcend mere textual output (Shridhar et al., 2021; Wang et al., 2022; Zhao et al., 2024b). On the other hand, these external actions endow the agent with a capability similar to humans using tools, compensating for inherent limitations of LLMs; for example, search tools can alleviate knowledge hallucination and obsolescence (Singh et al., 2024; Zhao et al., 2024c; Shen et al., 2024; Chen, 2023).

Current methods for learning action knowledge fall mainly into two categories. The first creates large amounts of synthetic data to construct action-execution trajectories for training the model (Qiao et al., 2024; Huang et al., 2024b; Zhu et al., 2024), which is costly and generalizes poorly across tasks. The second relies on prompt engineering (Raman et al., 2022), placing explicit action knowledge about how to plan and execute actions in the prompt and then using in-context learning (ICL) to enable the model to invoke these actions. While convenient, this approach can be inaccurate, as the artificially constructed planning knowledge may not reflect the true state of the environment, leading to potential biases.

# B Setting

# B.1 Datasets

ToolBench contains tasks using over 16,000 RapidAPI tools.
It assesses a model's ability to plan and execute complex workflows.

HotpotQA is a multi-hop QA dataset with questions requiring multiple steps to answer. We employ Google Search as the search engine in the experiment.

# B.2 Evaluation

ToolBench: Evaluated using pass rate and win rate. We record planning steps and tool invocations, then submit the trajectory for assessment. Win rate is computed relative to ReAct's performance. HotpotQA: Evaluated using F1 score, comparing model answers to gold answers (reward 0-1). These datasets and metrics allow us to rigorously validate our approach across varied contexts.

# B.3 Baselines

Several strong methods are selected as our baselines: ReAct (Yao et al., 2023), which interacts with the environment to reason about the next step; Self-Refine (Madaan et al., 2023), which uses feedback from the environment to refine the original prompt; Easy-Tool (Yuan et al., 2024), which first uses an LLM to refine the tool descriptions and then breaks tasks down to complete them; and DRAFT (Qu et al., 2024), which synthesizes tasks on a single tool for exploration to learn how to use the tool.

# B.4 Experiment Setup

The backend model used in our experiments is Qwen-Long-0916, while the version of GPT-4 is 0613. The token usage in our method is approximately 6-8 million tokens. We configured the width of MCTS to 3 and set the similarity threshold to 0.6. After balancing effectiveness and cost, we synthesized 200 scenes and conducted 15 iterations on them during the experiment.
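The MCTS configuration above (tree width 3, UCB-based selection, 15 iterations) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: `propose` and `score` are hypothetical stand-ins for the LLM-driven expansion and the scenario-based evaluation of action knowledge described in Algorithm 1 (Appendix D).

```python
import math

class Node:
    def __init__(self, knowledge, parent=None):
        self.knowledge = knowledge   # candidate action-knowledge text
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    # UCB1: average reward plus an exploration bonus for rarely-visited nodes.
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def select(node, width=3):
    # Descend while the node already has its full quota of children (width 3).
    while len(node.children) == width:
        node = max(node.children, key=ucb)
    return node

def mcts(root, propose, score, iterations=15, width=3):
    for _ in range(iterations):
        leaf = select(root, width)                   # 1. selection
        child = Node(propose(leaf.knowledge), leaf)  # 2. expansion
        leaf.children.append(child)
        # 3. simulation: reward is the child's score improvement over its parent
        reward = score(child.knowledge) - score(leaf.knowledge)
        node = child                                 # 4. backpropagation
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits)
```

Note that the reward is the score difference between a node and its parent, mirroring the SIMULATE step of Algorithm 1.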
# C Prompt Template

See Tables 3 and 4.

# D Algorithm

See Algorithm 1.

Algorithm 1 Monte Carlo Tree Search (MCTS) for Action Knowledge Optimization

    function MCTS(root_node)
        Iteration ← 0
        while Iteration < max_iteration do
            ▷ Step 1: Selection with the UCB algorithm, subject to num_child < 3
            leaf_node ← SELECT_NODE(root_node)
            ▷ Step 2: Expansion
            new_node ← EXPAND(leaf_node)
            ▷ Step 3: Simulation
            simulation_result ← SIMULATE(new_node)
            ▷ Step 4: Backpropagation
            BACKPROPAGATE(new_node, simulation_result)
            Iteration ← Iteration + 1
        end while
    end function

    function SELECT_NODE(node)
        while node.isFullyExpanded() do
            node ← CHOOSE_BEST_CHILD(node, exploration_parameter)
        end while
        return node
    end function

    function EXPAND(node)
        optimization ← CHOOSE_UNTRIED_OPTIMIZATION(node)
        new_node ← APPLY_OPTIMIZATION(node, optimization)
        ADD_CHILD(node, new_node)
        return new_node
    end function

    function SIMULATE(node)
        optimized_score ← CALCULATE_SCORE(node.current_action_knowledge)
        reward ← optimized_score - father_score
        return reward
    end function

    function BACKPROPAGATE(node, result)
        while node ≠ None do
            UPDATE_STATISTICS(node, result)
            node ← node.parent
        end while
    end function

    function CALCULATE_SCORE(action_knowledge)
        return EVALUATE(action_knowledge)
    end function

# Prompt for Tool Description in Action knowledge

Analyze the following tool execution trajectories to improve tool interface documentation. For all trajectories:

1. Identify functional mismatches between original description and actual usage patterns
2. Detect parameter inefficiencies (missing/underutilized fields)
3. Extract implicit requirements from error patterns
4.
Generate enhanced documentation with: + +Clear input specifications (required vs optional) + +Contextual usage guidelines + +Error prevention tips + +Response format expectations + +Here is an example. + +Now it's your turn to analyze the following tool execution trajectories to improve tool interface documentation. + +tool_name: tool_name + +original_description: original_description + +trajectory: trajectory + +Please provide your Optimize Description for the tool. Just modify the description part and do not change the parameters description. + +Make Sure your description is clear and concise. + +Table 3: Prompt used for tool document refinement. + +# Prompt for Workflow in Action knowledge + +Analyze the provided interaction trajectory and existing workflow steps to derive a generalized, reusable workflow for similar tool calling tasks. + +1. Analyzing error patterns (authentication gaps, deprecated endpoints) and tool dependencies from interaction histories. +2. Extracting implicit requirements (authentication, sorting logic) and mandatory parameters from error responses. +3. Structuring a generic workflow with authentication validation, parameter checks, state management between API calls, and error backups. + +Here is an example. + +Now it's your turn. + +Existing Workflow: workflow + +Trajectory: trajectory + +Please provide your Optimize Workflow for the task. And make sure your workflow is clear and concise and no longer than 200 words. + +Table 4: Prompt used for workflow generation. 
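The prompts in Tables 3 and 4 leave slots (tool_name, original_description, trajectory, workflow) to be filled in per tool or task. A minimal sketch of how such a template might be instantiated, using Python's `string.Template`; the prompt text is abbreviated and the tool name, description, and trajectory below are hypothetical examples, not from the paper:

```python
from string import Template

# Abbreviated skeleton of the Table 3 prompt; the full instruction text is longer.
TOOL_DESC_PROMPT = Template(
    "Analyze the following tool execution trajectories to improve tool "
    "interface documentation.\n"
    "tool_name: $tool_name\n"
    "original_description: $original_description\n"
    "trajectory: $trajectory\n"
    "Please provide your Optimized Description for the tool."
)

# Hypothetical tool and trajectory, for illustration only.
prompt = TOOL_DESC_PROMPT.substitute(
    tool_name="flight_search",
    original_description="Searches for flights.",
    trajectory="call(flight_search, date='2024-13-01') -> error: invalid date",
)
```

The filled prompt is then sent to the backend model, whose answer replaces the tool's description for the next MCTS iteration.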
\ No newline at end of file diff --git a/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/images.zip b/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dacae227c36cd3dd31d6dc80d8e7d99b71bc06d4 --- /dev/null +++ b/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6262bdafa7b60cb42f5a00d626e3f3a21999d41781c8e5bdacd40e29d43a0626 +size 379323 diff --git a/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/layout.json b/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..314bc8be79808b0a9f079625fa7a794fda799015 --- /dev/null +++ b/ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f92b42bec411f035d60e10c62bcc56ba538f2e05a6288bc1044aa2c959128145 +size 335678 diff --git "a/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_content_list.json" "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_content_list.json" new file mode 100644 index 0000000000000000000000000000000000000000..084dc8991100f40f196bee1392d115acb7420ae1 --- /dev/null +++ "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_content_list.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:292f2cb87e43ec40b01e546a10bf772f46e8b8a366cb7d150b2f9d609cf37a35 +size 55300 diff --git "a/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_model.json" "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_model.json" new file mode 100644 index 0000000000000000000000000000000000000000..43be991ec02f7a99d60314c56e63510711d26826 --- /dev/null +++ "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_model.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a49a62269f7f02980aafab924a53e9f815858073188c9f2e223c4f00ec22746d +size 68407 diff --git "a/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_origin.pdf" "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_origin.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..7d430d660ed168f1476227f9e41eeb5c379aa5b0 --- /dev/null +++ "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_origin.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb8bbc219ae80f71d471484556d56a0ecea9e431f82f62c347ff854342b6306e +size 702769 diff --git "a/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/full.md" "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/full.md" new file mode 100644 index 
0000000000000000000000000000000000000000..fec2ab475af1c1893d75c169429bde2474f3d9ed --- /dev/null +++ "b/ACL/2025/That doesn\342\200\231t sound right_ Evaluating speech transcription quality in field linguistics corpora/full.md" @@ -0,0 +1,223 @@ +# That doesn’t sound right: Evaluating speech transcription quality in field linguistics corpora

Éric Le Ferrand1, Bo Jiang1, Joshua Hartshorne2, Emily Prud'hommeaux1

$^{1}$ Department of Computer Science, Boston College, USA

$^{2}$ MGH Institute of Health Professions, USA

# Abstract

Incorporating automatic speech recognition (ASR) into field linguistics workflows for language documentation has become increasingly common. While ASR performance has seen improvements in low-resource settings, obstacles remain when training models on data collected by documentary linguists. One notable challenge lies in the way that this data is curated. ASR datasets built from spontaneous speech are typically recorded in consistent settings and transcribed by native speakers following a set of well-designed guidelines. In contrast, field linguists collect data in whatever format it is delivered by their language consultants and transcribe it as best they can given their language skills and the quality of the recording. This approach to data curation, while valuable for linguistic research, does not always align with the standards required for training robust ASR models. In this paper, we explore methods for identifying speech transcriptions in fieldwork data that may be unsuitable for training ASR models. We focus on two complementary automated measures of transcription quality that can be used to identify transcripts with characteristics that are common in field data but could be detrimental to ASR training. We show that one of the metrics is highly effective at retrieving these types of transcriptions.
Additionally, we find that filtering datasets using this metric of transcription quality reduces WER both in controlled experiments using simulated fieldwork with artificially corrupted data and in real fieldwork corpora. + +# 1 Introduction + +Automatic speech recognition (ASR) can support the creation of new linguistic resources for under-resourced and endangered languages, but such languages – which make up the vast majority of the world's $7000+$ languages – vary considerably in their quantity of transcribed speech data. While some languages have well-curated speech datasets + +sourced from educational materials, mass media, or crowdsourcing efforts, many only have field recordings made by linguists as their primary source of speech data. Field linguists, whose work centers on describing languages and analyzing their linguistic properties, collect data primarily to support their academic research and the activities of the language community. Few linguists collect data with the goal of creating high-quality datasets for training speech technology models (Hanke, 2017; Le Ferrand, 2023). As a result, speech data from fieldwork may be only partially or unfaithfully transcribed due to issues of recording quality and the language skills of the linguist. Additionally, fieldwork transcripts often include ancillary information making it difficult to differentiate between transcription (word-level renderings of the speech) and annotation (glosses, translations, comments). + +ASR models for widely-spoken languages are trained on enough professionally recorded and transcribed data that including a small number of inaccurately transcribed utterances is unlikely to significantly affect overall performance. For languages where data is scarce, however, even a small portion of low-quality data can severely degrade model performance. 
Detecting low-quality transcripts can be done manually, but this process is tedious and time-consuming, underscoring the need for an automatic method to evaluate transcription quality.

In this paper, we explore two metrics for automatically assessing the transcription quality and accuracy of speech datasets: Phonetic Distance Match (PDM), a novel metric based on phoneme recognition, and the posterior probability of a Connectionist Temporal Classification (Graves, 2012) alignment (CTC). We evaluate the utility of these metrics for identifying poor transcriptions through experiments on clean datasets that we synthetically corrupt in ways that simulate common fieldwork data quality errors. We then demonstrate the ability of PDM in particular to
However, since these models are trained on raw speech and lack textual information, they cannot provide direct feedback on individual segments. A more suitable alternative is a universal phoneme recognizer (Li et al., 2020), which can generate phone transcriptions for any language. + +There is a robust history of prior work in evaluating the acoustic quality of audio using measures of speech intelligibility derived from output of ASR or proto-ASR systems (Holube and Kollmeier, 1996; Sakoe and Chiba, 1978; Spille et al., 2018; Arai et al., 2019). While this work is also relevant for filtering audio for ASR datasets, it is orthogonal to our own work, which focuses on identifying poor quality transcripts rather than poor quality audio. + +# 3 Data + +We apply our metrics (see Section 4.1) to two distinct classes of datasets. The CURATED class consists of five well-curated, high-quality speech datasets, ranging in size from $3.5\mathrm{h}$ to $9\mathrm{h}$ , which we synthetically corrupt to simulate common fieldwork transcription errors (see Section 4.2). The languages include Bunun (bnn), Saisiyat (xsy), and Seediq (trv), three Taiwanese indigenous languages extracted from FormosanBank (Mohamed et al., + +2024). For each language, we use a subset of the ePark (Aboriginal Language Research and Development Foundation, 2023b) and ILRDF (Aboriginal Language Research and Development Foundation, 2023a) corpora, which consist of read speech recorded by native speakers (Hartshorne et al., 2024). We also included Mboshi (mdw), a Bantu language from Congo-Brazzaville, part of the LIG-Aikuma project1, and Duoxu (ers), a critically endangered Sino-Tibetan language, included in the Pangloss collection (Michailovsky et al., 2014). + +The second class consists of 2-hour fieldwork corpora from Pangloss (FIELDWORK) for Namakura (nmk), an Austronesian language spoken on Vanuatu, and Thulung Rai (tdh), a Sino-Tibetan language of Nepal. 
These consist exclusively of fieldwork recordings and include annotations and approximate transcripts. We use the FIELDWORK datasets to demonstrate the efficacy of our methods in a real-world fieldwork scenario. + +All seven datasets $^2$ were partitioned into training $(70\%)$ , validation $(10\%)$ , and test $(20\%)$ sets. Dataset details are found in Table 1. + +Several factors motivated our choice of these specific languages. First, the FormosanBank corpus contains an unusually large amount of high-quality data. These languages also posed an initial layer of complexity due to their orthographic conventions. For example, the glottal stop is typically represented with an apostrophe or straight single quote, and the voiceless alveolar affricate is denoted as $c$ . Mboshi includes two non-ASCII characters -ε and ω -while Duoxu features systematic tone marking using superscript numerals (e.g., $ja^{22}nje^{33}xe^{53}nje^{33}t\sigma i^{33}o$ ), adding another dimension of orthographic variation. + +# 4 Method + +# 4.1 Transcript evaluation metrics + +We consider two metrics for evaluating transcription quality3. First, we present Phonetic Distance Match (PDM), a novel metric for evaluating orthographic transcriptions against their corresponding audio. PDM is calculated by transcribing an utterance recording using a phone-level transcription model and then measuring the edit distance between the resulting transcription and the manual reference transcription. Using Allosaurus (Li + +et al., 2020) without fine-tuning, we automatically generate phone-level transcripts for each utterance in the corpus, which are then converted into their closest corresponding ASCII characters using the unidecode library.4 The orthographic reference transcripts are also converted to ASCII to ensure a shared character set. Finally, we compute the normalized Levenshtein distance between the two transcriptions and subtract from 1 to generate a similarity metric ranging from 0 to 1. 
The scoring process is illustrated in detail in Appendix Fig. 4.

The rationale behind using an ASCII-ized version of IPA is as follows. We begin with the observation that many languages currently being documented are traditionally oral. Their orthographies are often introduced by outsiders who tend to adopt the Latin alphabet, with minor modifications. Although exceptions exist (e.g., Ainu written in Japanese katakana or Inuktitut written using Indigenous Canadian syllabics), the Latin script remains the prevalent standard.

While we recognize that linguists and community members who use the Latin alphabet are free to use the characters as they wish in their writing systems, these newly devised orthographies are not arbitrary. They are frequently influenced by existing Latin-based writing systems and the International Phonetic Alphabet (IPA). For example, a voiced velar nasal is typically represented as ŋ or $ng$, and rarely as unrelated letters like $p$ or $r$. Naturally, inconsistencies can occur (such as c representing /s/, /k/, or /ʃ/ in French, or /ts/ in Seediq), but overall we expect the ASCII-ized forms to retain at least phonemic consistency.

There are two advantages to our approach. First, it does not require any prior knowledge about the language or its phonetic inventory, which might not be easily available for a poorly documented language. Second, it does not require additional effort or resources to create a rule-based or learned G2P transformation of the data. In short, the method can be applied to any language that uses at least a partially ASCII-based transcription system without requiring additional model training or in-depth research into the phonetic properties of the language.

The second metric is the Connectionist Temporal Classification (CTC) alignment posterior probability.
We use a large wav2vec (Baevski et al., 2020) model5 to extract a speech representation from each utterance, again without fine-tuning; we then apply CTC alignment (Graves et al., 2006) between the speech features and the manual transcription and output the alignment posterior probability. It is entirely independent of the PDM metric.

# 4.2 Synthetic dataset corruption

To simulate a dataset containing typical fieldwork transcription errors, we arbitrarily select $20\%$ of each training set of the five CURATED datasets and introduce transcription errors using three different corruption methods: (1) Deleted: three random words are removed from the transcription; (2) Cropped: the final $50\%$ of words in the transcription are removed; (3) Swapped: the transcription is randomly replaced with another from the training set. For each language, we create three corrupted datasets, each containing $20\%$ of the utterances corrupted in one of these three ways. Each utterance/transcript pair in the three datasets is then scored with the two metrics described in Sec. 4.1. We then evaluate how accurately our metrics identified these corrupted utterances. Examples of corrupted utterances can be found in Table 2.

# 4.3 ASR model building

All experiments are conducted with XLSR-53 (Conneau et al., 2021), a multilingual model based on the wav2vec architecture. We train a CTC layer for 30 epochs, selecting the best model with the validation set. Decoding is performed using a trigram LM trained on the training set for each language and corruption setting. We follow the popular XLSR tutorial but do not freeze the feature extractor.

In our simulated scenario, we use the three corrupted versions of each CURATED dataset (cf. Sec. 4.2). For each corrupted dataset, as well as for the uncorrupted dataset, we train an ASR model to determine the impact of each corruption on WER.
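Concretely, the PDM score of Sec. 4.1 reduces to one minus a normalized Levenshtein distance between the ASCII-ized phone transcript (produced by Allosaurus plus unidecode) and the ASCII-ized reference transcript. A minimal sketch, with both strings assumed already ASCII-ized; normalizing by the length of the longer string is our assumption, as the paper does not spell out the normalization:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def pdm(phone_transcript: str, reference: str) -> float:
    """Phonetic Distance Match: 1 minus normalized edit distance, in [0, 1]."""
    if not phone_transcript and not reference:
        return 1.0
    dist = levenshtein(phone_transcript, reference)
    return 1.0 - dist / max(len(phone_transcript), len(reference))
```

Utterance/transcript pairs with low PDM scores are then candidates for filtering before ASR training.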
For each corrupted dataset, we then create three filtered datasets: one in which we filter out $20\%$ of the utterances according to the strength of the PDM metric, and one where we do the same according to the CTC metric. When training data is limited, removing utterances from the training set can negatively impact performance. To ensure a fair comparison, we also evaluate performance using a dataset where the same percentage of utterances is removed from the training data at random.

In our real-world scenario, we calculate the two metrics on the utterances of the two FIELDWORK datasets. For each dataset, we train ASR models on the unfiltered dataset and on filtered datasets, removing $5\%$, $10\%$, and $20\%$ of the utterances using the more promising PDM metric and via random selection.

![](images/d3883386903f843a9411925e2906056c946f29d47c74347c65bc3749cfc7c07a.jpg)
Figure 1: WER across corruption configurations.

# 5 Results

# 5.1 Detecting corrupted transcripts

Figure 5 shows the full ROC curves and AUC values for all combinations of corruption type, dataset, and metric. We see that PDM achieves near-perfect AUC scores (0.89-0.98) for detecting utterances in the Swapped configuration, very high scores for Cropped (0.77-0.94), and strong scores for Deleted (0.64-0.85). In contrast, CTC is consistently and substantially less effective for all languages in all three corruption settings, with some AUC scores at chance in the Deleted and Cropped settings. Notably, Duoxu and Mboshi exhibit lower AUCs, perhaps due to weak overlap in character set with the English wav2vec model used (cf. Table 1). The Deleted configuration appears to be the most challenging to detect for both metrics, but with PDM showing a clear advantage over CTC.

# 5.2 ASR evaluation: Simulated fieldwork

The baseline results for both the uncorrupted and corrupted CURATED datasets, shown in Figure 1, reveal a clear trend.
+ +# 5.3 ASR evaluation: Real-world fieldwork + +Figure 3 shows the results of different thresholds of PDM filtering and random filtering on the two FIELDWORK real-world datasets. (We do not report results for CTC given the weak utility observed in the simulated fieldwork scenario both for corruption detection and as a filter.) For Thulung Rai, a $5\%$ filtering threshold proved the most effective, resulting in a decrease of several points in WER, while higher thresholds and random filtering resulted in WER increases. With the Namakura dataset, WER consistently decreased as more data was filtered using the PDM score, suggesting that a significant portion of the corpus may contain transcription errors. Filtering randomly for Namakura yielded slight random variations in WER. + +To better understand the utility of the PDM metric for identifying poor transcripts, we manually inspected the transcriptions of the lowest and highest $5\%$ of utterances based on PDM scores for both corpora. In Thulung Rai, $61\%$ of the lowest scoring utterances showed no issues, while $14\%$ had mismatched transcriptions and $23\%$ contained cropped transcriptions. In contrast, $93\%$ of the top scoring + +![](images/ad38d32e1c082b297f129f8df3cb7b867010b0814fad38f9c8ef954346bef8ee.jpg) +(a) Deleted + +![](images/09b6f3a9fe494e174925c2c9be70396a553b74b38966df085ed3bfc48df74466.jpg) +(b) Cropped + +![](images/a73a92d2d9e1950cc334af40770601bb8b86b304835ba2e04391573059bf6431.jpg) +(c) Swapped + +![](images/10f28f7e24599056ee4a84055d982685c37606911bc1a426aeef5cc552e4c7f0.jpg) +Figure 2: WER for corrupted and filtered CURATED datasets in the simulated fieldwork scenario. + +![](images/af652c4adfb9b46e38ff6b0baa654a6ea1e11c3cec5b7a36ca18f5616526fbb2.jpg) +(a) PDM +(b) Random +Figure 3: WER for unfiltered and filtered FIELDWORK datasets in the real-world fieldwork scenario. + +utterances had correct transcriptions, with $6\%$ missing a few words. 
For Namakura, only $11\%$ of the lowest-scoring utterances had accurate transcriptions, with $55\%$ mismatched, $29\%$ cropped, $3\%$ with cropped audio, and $1\%$ missing some words. Conversely, the highest-scoring utterances had $97\%$ correct transcriptions, with $1.5\%$ cropped and another $1.5\%$ missing words.

# 6 Conclusions and Future Work

This paper explores two metrics for identifying unsuitable and inaccurate speech transcriptions to improve ASR training on linguistic fieldwork data, with the goal of supporting language documentation. We find that our novel PDM metric and, to a lesser extent, a CTC confidence metric are effective in identifying erroneous transcriptions in both simulated and real-world fieldwork datasets. Moreover, filtering data using the PDM metric consistently reduces WER in both simulated and real-world fieldwork scenarios. In future work, we plan to investigate additional methods for identifying poor transcriptions and to explore the relationship between audio quality and transcription quality.

# Limitations

Experimental results demonstrate that our PDM method is highly effective for languages with a limited number of non-ASCII characters. However, further experiments are needed to evaluate its performance on languages with a larger set of non-ASCII characters and non-Latin writing systems. The proposed metrics efficiently identify major errors, such as missing or mismatched transcripts, but are less likely to detect spelling mistakes or inconsistent transcription of specific speech sounds, which could also significantly impact WER. While these methods could be applied to high-resource languages like French or German, such languages may benefit more from approaches leveraging existing G2P models or pre-trained ASR systems trained specifically for these languages.
# Ethics Statement

Researchers must always be respectful of language community concerns about data ownership when working with Indigenous language data. All of our data is gathered from public sources. In the case of the Formosan languages, the two organizations providing the data, the Indigenous Languages Research and Development Foundation and the ePark educational research organization, actively seek out collaborations with computational researchers. The other datasets are also made available on the Web by their creators specifically with the goal of furthering research in these languages, both linguistic and computational. We have permission from the creators and owners to redistribute the data in the form of ASR datasets.

# Acknowledgments

The authors thank Yuyang Liu, Li-May Sung, and the Indigenous Languages Research and Development Foundation, especially Akiw and Lowking Nowbucyang, for generously providing data. This material is based upon work supported by the National Science Foundation under Grant #2319296. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

# References

Aboriginal Language Research and Development Foundation. 2023a. Online dictionary of aboriginal languages. https://e-dictionary.ilrdf.org.tw.

Aboriginal Language Research and Development Foundation. 2023b. yuanzhumin yuan leyuan (ePark). https://web.klokeh.tw/.

Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, Tomohiro Nakatani, Katsuhiko Yamamoto, and Toshio Irino. 2019. Predicting speech intelligibility of enhanced speech using phone accuracy of DNN-based ASR system. In *Interspeech*, pages 4275–4279.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in Neural Information Processing Systems*, 33:12449–12460.
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2021. Unsupervised cross-lingual representation learning for speech recognition. In *Interspeech 2021*.

Horia Cucu, Andi Buzo, and Corneliu Burileanu. 2014. Unsupervised acoustic model training using multiple seed ASR systems. In *Spoken Language Technologies for Under-Resourced Languages*.

Jonathan G Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In *1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings*, pages 347–354. IEEE.

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of the 23rd International Conference on Machine Learning*, pages 369–376.

Alex Graves. 2012. Connectionist temporal classification. In *Supervised Sequence Labelling with Recurrent Neural Networks*, pages 61–93.

Severine Guillaume, Guillaume Wisniewski, Benjamin Galliot, Minh-Chau Nguyen, Maxime Fily, Guillaume Jacques, and Alexis Michaud. 2022. Plugging a neural phoneme recognizer into a simple language model: a workflow for low-resource settings. In *Proceedings of Interspeech*, pages 4905–4909.

Florian Hanke. 2017. *Computer-Supported Cooperative Language Documentation*. Ph.D. thesis, University of Melbourne.

Joshua K. Hartshorne, Éric Le Ferrand, Li-May Sung, and Emily Prud'hommeaux. 2024. FormosanBank and why you should use it. Poster at *Architectures and Mechanisms in Language Processing (AMLaP)*.

Inga Holube and Birger Kollmeier. 1996. Speech intelligibility prediction in hearing-impaired listeners based on a psychoacoustically motivated perception model. *The Journal of the Acoustical Society of America*, 100(3):1703–1716.

Yan Huang, Dong Yu, Yifan Gong, and Chaojun Liu. 2013.
Semi-supervised GMM and DNN acoustic model training with multi-system combination and confidence re-calibration. In *Interspeech*, pages 2360–2364.

Robert Jimerson, Zoey Liu, and Emily Prud'hommeaux. 2023. An (unhelpful) guide to selecting the best ASR architecture for your under-resourced language. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)*, pages 1008–1016.

Denis Jouvet and Dominique Fohr. 2014. About combining forward and backward-based decoders for selecting data for unsupervised training of acoustic models. In *INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association*.

Tomáš Koctúr, Ján Staš, and Jozef Juhár. 2016. Unsupervised acoustic corpora building based on variable confidence measure thresholding. In *2016 International Symposium ELMAR*, pages 31–34. IEEE.

Éric Le Ferrand. 2023. *Leveraging Speech Recognition for Interactive Transcription in Australian Aboriginal Communities*. Ph.D. thesis, Charles Darwin University.

Sheng Li, Yuya Akita, and Tatsuya Kawahara. 2016. Data selection from multiple ASR systems' hypotheses for unsupervised acoustic model training. In *2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 5875–5879. IEEE.

Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R Mortensen, Graham Neubig, Alan W Black, et al. 2020. Universal phone recognition with a multilingual allophone system. In *ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 8249–8253. IEEE.

Cécile Macaire, Didier Schwab, Benjamin Lecouteux, and Emmanuel Schang. 2022. Automatic speech recognition and query by example for Creole languages documentation. In *Findings of the Association for Computational Linguistics: ACL 2022*.
Boyd Michailovsky, Martine Mazaudon, Alexis Michaud, Séverine Guillaume, Alexandre François, and Evangelia Adamou. 2014. Documenting and researching endangered languages: The Pangloss Collection. *Language Documentation and Conservation*, 8:119–135.

Wael Mohamed, Éric Le Ferrand, Li-May Sung, Emily Prud'hommeaux, and Joshua Hartshorne. 2024. FormosanBank. https://ai4commsci.gitbook.io/formosanbank.

Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, et al. 2024. Scaling speech technology to 1,000+ languages. *Journal of Machine Learning Research*, 25(97):1–52.

Monica Romero, Sandra Gómez-Canaval, and Ivan G Torre. 2024. Automatic speech recognition advancements for Indigenous languages of the Americas. *Applied Sciences*, 14(15):6497.

Hiroaki Sakoe and Seibi Chiba. 1978. Dynamic programming algorithm optimization for spoken word recognition. *IEEE Transactions on Acoustics, Speech, and Signal Processing*, 26(1):43–49.

Constantin Spille, Stephan D Ewert, Birger Kollmeier, and Bernd T Meyer. 2018. Predicting speech intelligibility with deep neural networks. *Computer Speech & Language*, 48:51–66.

H Su and H Xu. 2015. Multi-softmax deep neural network for semi-supervised training. In *Proceedings of Interspeech*, pages 3239–3243.

Allahsera Tapo, Éric Le Ferrand, Zoey Liu, Christopher Homan, and Emily Prud'hommeaux. 2024. Leveraging speech data diversity to document Indigenous heritage and culture. In *Proceedings of Interspeech 2024*, pages 5088–5092.

# A Appendix

Table 1 shows the durations, token and type counts, and non-ASCII character proportions of each of the 7 datasets for reference purposes. As noted in the paper, we have released these corpora and their partitions for research purposes. We note that they are derived in their entirety from publicly available sources with licensing that permits redistribution in other formats.
Table 2 provides examples of the three types of corruption designed to mimic the kinds of errors observed in fieldwork transcripts.

Figure 4 provides a walk-through of the PDM calculation process with three example utterances.

Figure 5 plots all six ROC curves and reports AUC measures for using each of the two metrics, PDM and CTC, to identify corruptions for each CURATED dataset under each of the three corruption settings.

Table 3 shows, in tabular format, the WER results presented graphically in Figure 2.
| | Bunun | Duoxu | Mboshi | Saisiyat | Seediq | Namakura | Thulung Rai |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Duration | 8h34 | 7h57 | 3h28 | 8h14 | 8h53 | 1h53 | 2h18 |
| Token | 40166 | 61564 | 25671 | 39644 | 50123 | 18566 | 16296 |
| Type | 6846 | 2557 | 5621 | 4723 | 5608 | 1065 | 3965 |
| Non-ASCII | 0% | 44% | 23% | 0% | 2% | 1% | 8% |

Table 1: Corpus size and token/type count for all datasets. We also provide the percentage of non-ASCII characters, which may have an impact on the utility of the PDM metric.
| Config. | Original | Corrupted |
| --- | --- | --- |
| Deleted | wa adi pósá bábará wa kaá kobhá epωrωbaá óyálá mwána anyωω | wa pósá wa epωrωó yáála mwána |
| Cropped | maqasmav a abus malaitaz a savi to seediq msgelu sa seediq mneyah alang kiya | maqasmav a abus to seediq msgelu sa |
| Swapped | supah a samah sia humacia tai hari niqan rebuq watan dao su trebuq hii | anak anak sa ia maupacia minhanglas slii hini kanna nnapa namu bunga |

Table 2: Examples of input utterances and their corruptions from the three corruption configurations.
| IPA transcript | Orthographic transcription |
| --- | --- |
| mibiji:xbuəmūprɛskyrztəheə | Meyah bgihur, mqaras ka dheya |
| amiōikalawasuəkjietaɪgwan | 'amilika' ra:waS ki taywan |
| tsuəbaɪaɪmaɪsəhumatɪə | Supah a samah sia humacia |

| Conversion to ASCII | Orthography normalized |
| --- | --- |
| mibiji:xRu@rmoareskuzt@he@ | meyahbgihurmqaraskadheya |
| amidikalawasu@kjetalgwan | 'amilika'ra:waskitaywan |
| tsu@baRaSamaRc@humatSjl@ | supahasamahsiahumacia |
Figure 4: Demonstration of the PDM calculation method. In the upper left we see IPA transcripts generated from audio by Allosaurus. In the upper right we see the corresponding reference orthographic transcriptions for the three sample utterances. In the lower left are the phone-level transcripts converted to the ASCII equivalents often used to represent those IPA symbols (e.g., with SAMPA). In the lower right are the reference orthographic transcriptions converted to ASCII, with spaces removed. We calculate the normalized Levenshtein distance between the two strings for each utterance in the lower panels and subtract it from 1 to create the PDM metric.

![](images/3b656e14b24c6b3fde028f3f5a91803ec90a02a9858687f2244aeca49769d801.jpg)
(a) Deleted PDM

![](images/3eb87e13fe84456fcf87fb4bf6040541c875d8cc3ffd344f8a1c885be99c92b3.jpg)
(b) Cropped PDM

![](images/bf5c080a952d7877e103d27633c27c27822b3d0324ce4bd5522cf8721f1f1fa5.jpg)
(c) Swapped PDM

![](images/82331efbb152b3a7919c500a9b4fbe9d8ebc72559169e3804da4b3903f381188.jpg)
(d) Deleted CTC

![](images/891ebea2bdae193b36a2d2eebda959c72474674d9e5de878f4b5904dc8310e63.jpg)
(e) Cropped CTC

![](images/85f36c30eab216588c546f2c0e483e5e2d3e7d55f9b961d47cc77b0d301b319f.jpg)
(f) Swapped CTC
Figure 5: ROC curves comparing performance of PDM and CTC for retrieving corrupted transcriptions under the three corruption settings for all five of the CURATED datasets.
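The PDM computation demonstrated in Figure 4 can be sketched as follows. This is a simplified illustration: the IPA-to-ASCII symbol table is not reproduced here, the normalization shown (lowercasing and removing spaces) omits the punctuation stripping visible in the figure, and dividing by the longer string's length is one common way to normalize Levenshtein distance (the paper does not specify its normalizer):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def pdm(phone_ascii: str, orthography: str) -> float:
    """PDM = 1 - normalized Levenshtein distance between the
    ASCII-folded phone transcript and the normalized orthography."""
    ortho = orthography.lower().replace(" ", "")
    dist = levenshtein(phone_ascii, ortho)
    return 1.0 - dist / max(len(phone_ascii), len(ortho), 1)

# A perfect match after normalization yields the maximum score of 1.0.
print(pdm("supahasamahsiahumacia", "Supah a samah sia humacia"))  # -> 1.0
```

Scores near 0 then indicate that the phone string and the orthography share little material, the signature of a mismatched or cropped transcript.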
| Corruption Setting | Filtering Method | Bunun | Duoxu | Mboshi | Saisiyat | Seediq |
| --- | --- | --- | --- | --- | --- | --- |
| Deleted | Unfiltered | **0.2573** | **0.4464** | 0.4717 | **0.2600** | **0.1899** |
| | Random | 0.4620 | 0.5008 | 0.5121 | 0.3943 | 0.3075 |
| | PDM | 0.2754 | 0.4478 | **0.4446** | 0.2779 | 0.1947 |
| | CTC | 0.3025 | 0.4562 | 0.6048 | 0.3002 | 0.2325 |
| Cropped | Unfiltered | 0.3065 | **0.4514** | 0.4268 | 0.3700 | 0.2692 |
| | Random | 0.3372 | 0.4911 | 0.5402 | 0.3428 | 0.2473 |
| | PDM | **0.3062** | 0.5311 | **0.4198** | **0.2908** | 0.2049 |
| | CTC | 0.3351 | 0.4806 | 0.7046 | 0.3387 | **0.2034** |
| Swapped | Unfiltered | 0.4743 | 0.5759 | 0.5562 | 0.4500 | 0.3356 |
| | Random | 0.4584 | 0.6538 | 0.5513 | 0.3994 | 0.4881 |
| | PDM | **0.2940** | **0.4691** | **0.4951** | **0.2106** | **0.2036** |
| | CTC | 0.3014 | 0.5800 | 0.5396 | 0.3421 | 0.3207 |
Table 3: WER for each combination of simulated corruption setting and filtering method for each of the five CURATED datasets. This same information is visualized in bar graph format in Figure 2. The lowest WER for each language/corruption setting is boldfaced.
# The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models

Zachary Houghton and Kenji Sagae and Emily Morgan

University of California, Davis / 1 Shields Ave, Davis, CA 95616

znhoughton@ucdavis.edu

# Abstract

To what extent do large language models learn abstract representations as opposed to more superficial aspects of their very large training corpora? We examine this question in the context of binomial ordering preferences involving two conjoined nouns in English. When choosing a binomial ordering (radio and television vs television and radio), humans rely on more than simply the observed frequency of each option. Humans also rely on abstract ordering preferences (e.g., preferences for short words before long words). We investigate whether large language models simply rely on the observed preference in their training data, or whether they are capable of learning the abstract ordering preferences (i.e., abstract representations) that humans rely on. Our results suggest that both smaller and larger models' ordering preferences are driven exclusively by their experience with that item in the training data.
Our study provides further insight into differences between how large language models represent and use language and how humans do, particularly with respect to the use of abstract representations versus observed preferences.

# 1 Introduction

Large language models have progressed at an incredible rate in the last few years. Their rise in popularity and sometimes surprising capabilities have raised many questions about what exactly these models learn and how they represent linguistic knowledge. One interesting question that has been examined is whether certain capabilities emerge once models reach a certain size. Although models of different sizes appear to generate fluent language, it is unclear to what extent different models rely on superficial characteristics of their immense training corpora, such as word frequency and co-occurrences, and to what extent they learn abstract representations that generalize in ways that are similar to what humans do with far less linguistic input. For example, in addition to learning that some binomial orderings are more frequent than others (e.g., bread and butter is more frequent than butter and bread), humans also learn abstract ordering preferences (e.g., short words before long words; Morgan and Levy, 2016a).

In the present study we examine binomial ordering preferences in English in eight large language models with number of parameters ranging from 124M to 70B. Specifically, we ask whether ordering preferences in these models are determined entirely by the observed preferences of binomials in corpus data, or whether the language models also learn abstract ordering preferences. Further, we examine whether large language models, similar to humans, show stronger effects of observed ordering preferences for high-frequency items.
If large language models are just reproducing superficial characteristics of the training data, we should see no effects of abstract ordering preferences and only effects of observed ordering preferences. On the other hand, if language models are doing more than just memorization, then we may see effects of abstract ordering preferences in addition to effects of observed ordering preferences, and these may change as a function of a binomial's frequency.

Our specific contribution is an investigation of how large language models use abstract knowledge vs. observed preferences through a binomial ordering preference task, along with a discussion of how this differs from language use by humans. We show that language models rely more on the surface-level statistics of their input (e.g., n-gram frequency) than humans do, adding to our understanding of how large language models represent and generate language.

# 1.1 Evidence for Abstractions in LLMs

Large language models have demonstrated incredible breakthroughs in the last few years, showing impressive capabilities across a wide variety of tasks. Despite this, previous research has demonstrated mixed results with respect to their ability to learn abstract representations (e.g., McCoy et al., 2023; LeBrun et al., 2022; Pan and Bergen, 2025). Specifically, it remains unclear to what extent large language models are simply copying their training data as opposed to learning something more abstract. For example, Haley (2020) demonstrated that many of the BERT models are not able to reliably determine the plurality of novel words at the same level as humans.

On the other hand, Wei et al. (2021) demonstrated that BERT can generalize well to novel subject-verb pairs.
Specifically, they tested BERT's subject-verb agreement ability on novel sentences that it had never seen before and found that BERT seems to learn abstract representations of subject-verb agreement (as evidenced by the fact that it performs well on items it wasn't trained on).

Additionally, there is evidence that transformer models trained on an amount of data comparable to what humans receive can also learn abstract knowledge about the language (Misra and Mahowald, 2024; Yao et al., 2025). For example, Misra and Mahowald (2024) examined whether a language model trained on a comparable amount of data as humans can learn article-adjective-numeral-noun (AANN) expressions (a beautiful five days). Specifically, without having a great deal of experience with them, humans learn that a beautiful five days is perfectly natural, but a five beautiful days is not. Misra and Mahowald (2024) demonstrated that language models learn this even if they have no AANNs in their training data. They further demonstrated that the models do this by generalizing across similar constructions, such as a few days.

Further, Yao et al. (2025) examined whether language models trained on an amount comparable to humans can learn the length and animacy preferences that drive dative alternations (e.g., give the ball to her vs give her the ball) in humans. Specifically, dative alternations show a length and animacy bias (Yao et al., 2025). To examine whether language models can learn these biases from other constructions, they manipulated the training data to remove the length and animacy bias from the dative alternations in the training data of the language model. They found that the model can learn these biases even without exposure to them in the dative alternation. These results suggest that in some cases language models can learn generalizations without a great amount of data.
To investigate large language models' ability to learn abstract representations, it is useful to compare them to human psycholinguistic data. Unlike large language models, humans don't have access to corpora with trillions of tokens. Despite this, humans' capacity for language is unparalleled, in part due to our incredible ability to learn abstract representations (Berko, 1958; Kapatsinski, 2018).

# 1.2 Evidence for Abstractions in Humans

Humans are remarkable in our ability to learn and produce language, often producing and processing sentences that we've never encountered before. This is largely enabled by our unique ability to not simply memorize language, but to learn more abstract generalizations. For example, humans develop abstract ordering preferences for how to linearize the message we want to convey (i.e., deciding in which order to say the words that convey the meaning we want to express). One illustration of this comes from the literature on binomial constructions, where two nouns are conjoined (e.g., cats and dogs; Morgan and Levy, 2015, 2016a,b; Benor and Levy, 2006). Binomial constructions often convey the same meaning regardless of the order (e.g., radio and television vs television and radio). Despite this, however, humans sometimes have very strong preferences for one order over the other (e.g., bread and butter is overwhelmingly preferred over butter and bread).

While these preferences are driven in part by experience with the binomial (i.e., which binomial ordering is encountered more often), there are also other factors, such as phonological or semantic constraints, that affect ordering preferences.
In other words, human ordering preferences are driven in part by observed preferences in corpus data (i.e., the observed preference in their previous language experience; Morgan and Levy, 2016a) and in part by abstract ordering preferences based on abstract constraints (e.g., a preference for short words before long words, or a preference for male-coded words before female-coded words; Benor and Levy, 2006).

To capture the abstract ordering preferences of humans across binomial constructions, Morgan and Levy (2016a) developed a model to quantify the abstract ordering preference of a given binomial in English. They demonstrated that the model's predicted abstract ordering preferences are not the same as the observed preferences in corpus data. The model combines multiple phonological and semantic constraints that have been shown to affect binomial ordering preferences into a single abstract ordering preference value for each binomial. They further demonstrated that human ordering preferences for low-frequency items are primarily driven by this abstract ordering preference value, and preferences for high-frequency items are driven primarily by the observed preferences in corpus data. They operationalized frequency using the overall frequency of a binomial, i.e., the total frequency in both possible orders (the number of times the binomial occurs in alphabetical ordering plus the number of times it occurs in nonalphabetical ordering). This provides a measure of expression frequency that is not confounded with the frequency of a specific order.

Since human ordering preferences deviate from the observed preferences (i.e., humans aren't simply reproducing binomials in the same order that they heard them; Morgan and Levy, 2024), ordering preferences present a useful test case for large language models.
If large language models learn representations beyond simply memorizing the training dataset or superficially reproducing word co-occurrences, they may learn abstract ordering preferences similar to humans, and this may be reflected in their binomial ordering preferences.

# 2 Methods

# 2.1 Dataset

To examine the ordering preferences of binomial constructions in large language models, we use a corpus of binomials from Morgan and Levy (2015). The corpus contains 594 binomial expressions that have been annotated for various phonological, semantic, and lexical constraints known to affect binomial ordering preferences. The corpus also includes:

1. The estimated abstract ordering preference for each binomial, representing the ordering preference for the alphabetical ordering (a relatively unbiased reference form), estimated from the above constraints (independent of frequency). The abstract ordering preferences take a value between 0 and 1, with 0 being a stronger preference for the nonalphabetical form, and 1 being a stronger preference for the alphabetical form. The abstract ordering preferences were calculated using Morgan and Levy (2015)'s model.

2. The observed binomial orderings, which are the proportion of binomial orderings that are in alphabetical order for a given binomial, gathered from the Google $n$-grams corpus (Lin et al., 2012). The Google $n$-grams corpus is orders of magnitude larger than the language experience of an individual speaker and thus provides reliable frequency estimates. A value of 1 indicates the binomial occurs exclusively in the alphabetical ordering, while a value of 0 indicates that the binomial occurs exclusively in the nonalphabetical ordering.

3. The overall frequency of a binomial expression (the number of times the binomial occurs in either alphabetical or nonalphabetical order). Overall frequencies were also obtained from the Google $n$-grams corpus (Lin et al., 2012).
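Concretely, the observed preference and the overall frequency above reduce to simple functions of the two order counts; a sketch with hypothetical counts (the function names and numbers below are ours, invented for illustration):

```python
def observed_preference(count_alpha: int, count_nonalpha: int) -> float:
    """Proportion of occurrences that are in alphabetical order."""
    return count_alpha / (count_alpha + count_nonalpha)

def overall_frequency(count_alpha: int, count_nonalpha: int) -> int:
    """Total occurrences of the binomial in either order."""
    return count_alpha + count_nonalpha

# Hypothetical n-gram counts for "bread and butter" vs. "butter and bread".
print(observed_preference(980, 20))  # -> 0.98, a strong alphabetical preference
print(overall_frequency(980, 20))    # -> 1000
```

Defining frequency as the sum over both orders keeps it from being confounded with the preference for a particular order, as noted above.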
# 2.2 Language Model Predictions

To derive predictions from large language models, we used the following models from the GPT-2 (Radford et al., 2019) family, the Llama-2 (Touvron et al., 2023) family, the Llama-3 family (https://github.com/meta-llama/llama3), and the OLMo (Groeneveld et al., 2024) family. From smallest to largest in number of parameters: GPT-2 (124M parameters), OLMo 1B (1B parameters), GPT-2 XL (1.5B parameters), Llama-2 7B (7B parameters), OLMo 7B (7B parameters), Llama-3 8B (8B parameters), Llama-2 13B (13B parameters), and Llama-3 70B (70B parameters). For each model, we calculated the ordering preference for the alphabetical form of each binomial in the dataset. The predicted probability of the alphabetical form was calculated as the product of the model's predicted probability of each word in the binomial. To accurately calculate the probability of the first word in the binomial, each binomial was prepended with the prefix "Next item: ". Thus the probability of the alphabetical form, $A$ and $B$, is:

$$
\begin{aligned}
P_{\text{alphabetical}} = {} & P(A \mid \text{Next item:}) \\
& \times P(\text{and} \mid \text{Next item: } A) \\
& \times P(B \mid \text{Next item: } A \text{ and}) \tag{1}
\end{aligned}
$$

where $A$ is the alphabetically first word in the binomial and $B$ is the other word. Additionally, the probability of the nonalphabetical form, $B$ and $A$,
Additionally, the probability of the nonalphabetical form, $B$ and $A$ + +is: + +$$ +\begin{array}{l} P _ {\text {n o n a l p h a b e t i c a l}} = P (B | \text {N e x t i t e m :}) \\ \times P (a n d | N e x t i t e m: B) \\ \times P (A | \text {N e x t i t e m}: B \text {a n d}) \tag {2} \\ \end{array} +$$ + +Finally, to get an overall ordering preference for the alphabetical form, we calculated the (log) odds ratio of the probability of the alphabetical form to the probability of the nonalphabetical form: + +$$ +\operatorname {L o g O d d s} (A a n d B) = \log \left(\frac {P _ {\text {a l p h a b e t i c a l}}}{P _ {\text {n o n a l p h a b e t i c a l}}}\right) \tag {3} +$$ + +# 2.3 Analysis + +The data was analyzed using Bayesian linear regression models, implemented in brms (Bürkner, 2017) with weak, uninformative priors. For each model, the dependent variable was the log odds of the alphabetical form to the nonalphabetical form. The fixed-effects were abstract ordering preference (represented as AbsPrefbelow), observed preference (ObservedPref), overall frequency (Freq), an interaction between overall frequency and abstract ordering preference (Freq:AbsPref), and an interaction between overall frequency and observed preference (Freq:ObservedPref). The model equation is presented below: + +$$ +\begin{array}{l} \operatorname {L o g O d d s} (A \text {a n d} B) \sim \operatorname {A b s P r e f} \\ + O b s e r v e d P r e f \\ + F r e q \\ + F r e q: A b s P r e f \\ + F r e q: O b s e r v e d P r e f \tag {4} \\ \end{array} +$$ + +Frequency was logged and centered, and abstract ordering preference and observed preference were centered such that they ranged from -0.5 to 0.5 (instead of from 0 to 1). Note that since abstract ordering preference and observed preference are on the same scale, we can directly draw comparisons between the coefficient estimates for these fixed-effects in our regression model. 
# 3 Results

Our full model results are presented in the appendix (Table 1) and visualized in Figure 1. For each model, the figure shows the estimates for each of the coefficients from the model in Equation 4, representing how strongly each language model relies on observed preference, abstract ordering preference, overall frequency, the interaction between abstract ordering preference and overall frequency, and the interaction between observed preference and overall frequency.

Our results are similar across all the large language models we tested. Specifically, we find no effect of abstract ordering preference and no interaction between abstract ordering preference and overall frequency. We do find an effect of observed preference, suggesting that the models are mostly reproducing the ordering preferences found in their training data. We also find an interaction between observed preference and overall frequency, suggesting that the effect of observed preference is stronger for high-frequency items.

# 4 Conclusion

In the present study we examined the extent to which abstract ordering preferences and observed preferences drive binomial ordering preferences in large language models. We find that their ordering preferences are driven primarily by observed preferences. Further, they rely more on observed preferences for higher-frequency items than for lower-frequency items. Finally, they do not seem to be using abstract ordering preferences at all in their ordering of binomials.

Our results give us insight into the differences between humans and large language models with respect to the ways in which they trade off between abstract and observed preferences. For example, our dataset contains low-frequency binomials (e.g., alibis and excuses), including binomials that a college-age speaker would have heard only once in their life.
Due to their low frequency, humans rely substantially on abstract ordering preferences to process these lower-frequency items (Morgan and Levy, 2024). This is not the case, however, for large language models, which rely exclusively on observed preferences for these items. This is true even for the smallest models we tested, such as GPT-2. We conclude that, although large language models can produce human-like language, they accomplish this in a quantitatively different way than humans do: they rely on observed statistics from the input in at least some cases when humans would rely on abstract representations.

![](images/4a17c3557e8353807c8336cb9509f854f602d644c634f4ead4b2ecdc248ed887.jpg)
Figure 1: Results for each beta coefficient estimate from each model. Models are arranged from smallest to largest, left to right. The x-axis shows each coefficient and the y-axis the estimated beta coefficient from the respective model. Error bars indicate $95\%$ credible intervals.

# 5 Limitations

There are a few important limitations to our study. The first is that we do not know exactly how many times each of the large language models has seen each binomial tested. We can approximate a binomial's frequency using corpus data, which gives us an indication of its frequency in a language model's training set, but it is possible that the large language models saw the binomials more often than we expect. Thus, the current study cannot differentiate between a model that has learned abstract ordering preferences but does not use them for binomials that it has seen, and a model that simply has not learned abstract ordering preferences. There is some hope, however, with the recent development of open-access large language models, such as OLMo (Groeneveld et al., 2024), where the training data is publicly available.
In future work, we plan to examine the ordering preferences of novel binomials in the OLMo series of models to determine whether LLMs have learned abstract ordering preferences at all.

Additionally, the binomials tested here are only three words long and relatively fixed, in the sense that variations such as bread and also butter are not very common. They are thus potentially easier for the large language models to memorize than longer or less-fixed strings, which could be tested in future work.

Further, while we examined language models of various sizes and determined that the number of parameters does not seem to play a role in whether these models employ abstract ordering preferences for binomials, our analysis was not designed to investigate the effect of training set size.

Finally, our experiments deal only with binomials in English.

# References

Sarah Bunin Benor and Roger Levy. 2006. The chicken or the egg? A probabilistic analysis of English binomials. Language, pages 233-278.
Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177.
Paul-Christian Bürkner. 2017. brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80:1-28.
Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, et al. 2024. OLMo: Accelerating the science of language models. arXiv preprint arXiv:2402.00838.
Coleman Haley. 2020. This is a BERT. Now there are several of them. Can they generalize to novel words? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 333-341.

Vsevolod Kapatsinski. 2018. Changing minds changing tools: From learning theory to language acquisition to language change. MIT Press.
Benjamin LeBrun, Alessandro Sordoni, and Timothy J. O'Donnell. 2022. Evaluating distributional distortion in neural language modeling.
arXiv preprint arXiv:2203.12788.
Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the Google Books Ngram corpus. In Proceedings of the ACL 2012 System Demonstrations, pages 169-174.
R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2023. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. Transactions of the Association for Computational Linguistics, 11:652-670.
Kanishka Misra and Kyle Mahowald. 2024. Language models learn rare phenomena from less rare phenomena: The case of the missing AANNs. arXiv preprint arXiv:2403.19827.
Emily Morgan and Roger Levy. 2015. Modeling idiosyncratic preferences: How generative knowledge and expression frequency jointly determine language structure. In CogSci.
Emily Morgan and Roger Levy. 2016a. Abstract knowledge versus direct experience in processing of binomial expressions. Cognition, 157:384-402.
Emily Morgan and Roger Levy. 2016b. Frequency-dependent regularization in iterated learning. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG 2016).
Emily Morgan and Roger Levy. 2024. Productive knowledge and item-specific knowledge trade off as a function of frequency in multiword expression processing. Language, 100(4):e195-e224.
Dingyi Pan and Ben Bergen. 2025. Are explicit belief representations necessary? A comparison between large language models and Bayesian probabilistic models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 11483-11498, Albuquerque, New Mexico. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models.
Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick. 2021. Frequency effects on syntactic rule learning in transformers. arXiv preprint arXiv:2109.07020.
Qing Yao, Kanishka Misra, Leonie Weissweiler, and Kyle Mahowald. 2025. Both direct and indirect evidence contribute to dative alternation preferences in language models. arXiv preprint arXiv:2503.20850.

# A Full Model Results
**GPT-2**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | -0.10 | 0.10 | -0.30 | 0.10 |
| AbsPref | -0.52 | 0.64 | -1.81 | 0.69 |
| Observed | 4.62 | 0.50 | 3.66 | 5.59 |
| Freq | -0.04 | 0.06 | -0.15 | 0.07 |
| AbsPref:Freq | 0.10 | 0.39 | -0.66 | 0.86 |
| Observed:Freq | 0.96 | 0.24 | 0.49 | 1.43 |

**GPT-2 XL**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.05 | 0.09 | -0.13 | 0.23 |
| AbsPref | -0.89 | 0.63 | -2.17 | 0.29 |
| Observed | 5.34 | 0.46 | 4.45 | 6.25 |
| Freq | -0.01 | 0.05 | -0.11 | 0.09 |
| AbsPref:Freq | -0.17 | 0.36 | -0.87 | 0.53 |
| Observed:Freq | 1.01 | 0.21 | 0.59 | 1.43 |

**Llama-2 7B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.22 | 0.13 | -0.03 | 0.47 |
| AbsPref | 1.11 | 0.84 | -0.40 | 2.91 |
| Observed | 3.07 | 0.64 | 1.81 | 4.31 |
| Freq | 0.04 | 0.07 | -0.10 | 0.17 |
| AbsPref:Freq | -0.32 | 0.47 | -1.24 | 0.59 |
| Observed:Freq | 0.23 | 0.28 | -0.33 | 0.78 |

**Llama-2 13B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.12 | 0.08 | -0.04 | 0.27 |
| AbsPref | 0.32 | 0.54 | -0.72 | 1.38 |
| Observed | 5.25 | 0.40 | 4.46 | 6.05 |
| Freq | -0.08 | 0.04 | -0.16 | 0.01 |
| AbsPref:Freq | -0.02 | 0.32 | -0.64 | 0.60 |
| Observed:Freq | 0.72 | 0.19 | 0.34 | 1.09 |

**Llama-3 8B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.15 | 0.09 | -0.03 | 0.33 |
| AbsPref | 0.23 | 0.59 | -0.92 | 1.42 |
| Observed | 5.64 | 0.46 | 4.75 | 6.54 |
| Freq | -0.07 | 0.05 | -0.17 | 0.03 |
| AbsPref:Freq | 0.07 | 0.36 | -0.63 | 0.78 |
| Observed:Freq | 0.60 | 0.22 | 0.18 | 1.03 |

**Llama-3 70B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.04 | 0.05 | -0.06 | 0.14 |
| AbsPref | 0.10 | 0.38 | -0.63 | 0.85 |
| Observed | 5.00 | 0.27 | 4.49 | 5.52 |
| Freq | -0.05 | 0.03 | -0.11 | 0.00 |
| AbsPref:Freq | -0.11 | 0.21 | -0.52 | 0.30 |
| Observed:Freq | 0.65 | 0.12 | 0.41 | 0.89 |

**OLMo 1B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.06 | 0.08 | -0.09 | 0.22 |
| AbsPref | 0.69 | 0.54 | -0.33 | 1.79 |
| Observed | 4.36 | 0.39 | 3.58 | 5.12 |
| Freq | 0.06 | 0.04 | -0.02 | 0.14 |
| AbsPref:Freq | -0.12 | 0.31 | -0.73 | 0.47 |
| Observed:Freq | 0.81 | 0.19 | 0.44 | 1.17 |

**OLMo 7B**

| | Est. | Err. | 2.5 | 97.5 |
| --- | --- | --- | --- | --- |
| Intercept | 0.04 | 0.07 | -0.10 | 0.18 |
| AbsPref | -0.86 | 0.51 | -1.88 | 0.11 |
| Observed | 5.37 | 0.36 | 4.67 | 6.08 |
| Freq | 0.01 | 0.04 | -0.07 | 0.08 |
| AbsPref:Freq | 0.10 | 0.28 | -0.47 | 0.64 |
| Observed:Freq | 0.70 | 0.17 | 0.37 | 1.04 |
Table 1: Model results for each language model. The estimate is given in the "Est." column and the standard deviation of the posterior in the "Err." column. The columns labeled 2.5 and 97.5 give the lower and upper boundaries of the 95% credible interval. AbsPref is the abstract ordering preference, Observed is the observed preference in corpus data, and Freq is the overall frequency of the binomial.

# B Quantization Issue

In addition to these results, we did find a meaningful effect of abstract ordering preferences for a quantized model of Llama-2 13B (https://huggingface.co/TheBloke/Llama-2-13B-GPTQ). However, upon further inspection, the model's preferences did not match those of the non-quantized model. For example, the quantized model's strongest preference was for schools and synagogues, which had an estimated log odds of over 33. Further, the estimated log odds for error and trial was about 1. In other words, the model had a slight preference for error and trial over trial and error, and a strong preference for schools and synagogues over synagogues and schools. Upon inspecting the non-quantized model, we found that the original model showed different (but expected) preferences, with a strong preference for trial and error (log odds of -15) and no real preference for schools and synagogues (log odds of 1).

Further, in assessing the quality of the quantized model, text generation revealed poor performance. For example, given the prompt "Describe your dream house", the model returned this response:

Tell me about your dream house. The house I grew up in was on the edge of a forest.
It was a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house [... the phrase repeats for the remainder of the generation].

Given the output of the quantized model, we suspected an issue occurred during the quantization process, resulting in a poorly performing model. We thus decided to exclude the quantized model
\ No newline at end of file diff --git a/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/images.zip b/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b5098eb203aa8609447036c6e246520b31e5916b --- /dev/null +++ b/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31eb54a1932380253dc5874f3e422e447073bc3856f19f89066bf7dd45c7d6d9 +size 321143 diff --git a/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/layout.json b/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4716375f18b2fba27addae6972188dd64c4d9790 --- /dev/null +++ b/ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4267305c558a2f0ca017afab93d9553abed8345483474e3837e92bfa5bcd02e7 +size 161493 diff --git a/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_content_list.json b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5c9368b39a7bebab30ca91c8aa2574461bc8aa68 --- /dev/null +++ b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:608ce6ab763bb14814e738220f943911d3a6aa49fedc3fa9e2a58e1db472a453 +size 61044 diff --git a/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_model.json b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6fb8599ecf2271d0d615a71e4337b4d11f1d39c6 --- /dev/null +++ b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:548010b8920d197741625fe91447d0541103081c72bbe11bec5afd36a6b53fbf +size 81488 diff --git a/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_origin.pdf b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..449d54dd762b78533ad6295b7ce6b2cfa89255b0 --- /dev/null +++ b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fac06370c47e89d98433325903f1d4e70f7e0a5c9d3a49f51d1b634eb3f7e73e +size 836041 diff --git a/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/full.md b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5688eb90603632b3ec01a398b3ba036e3a660b10 --- /dev/null +++ b/ACL/2025/TigerLLM - A Family of Bangla Large Language Models/full.md @@ -0,0 +1,287 @@ +# TigerLLM - A Family of Bangla Large Language Models + +Nishat Raihan + +George Mason University + +Fairfax, VA, USA + +mraihan2@gmu.edu + +Marcos Zampieri + +George Mason University + +Fairfax, VA, USA + +mzampier@gmu.edu + +# Abstract + +The development of Large Language Models (LLMs) remains 
heavily skewed towards English and a few other high-resource languages. This linguistic disparity is particularly evident for Bangla, the $5^{th}$ most spoken language. A few initiatives have attempted to create open-source Bangla LLMs, but their performance remains behind that of high-resource languages and their results have limited reproducibility. To address this gap, we introduce TigerLLM, a family of Bangla LLMs. Our results demonstrate that these models surpass all open-source alternatives and also outperform larger proprietary models like GPT3.5 across standard benchmarks, establishing TigerLLM as the new baseline for future Bangla language modeling.

# 1 Introduction

LLMs have fundamentally transformed NLP by achieving exceptional performance across a broad range of tasks (Brown et al., 2020; Chowdhery et al., 2022; Raihan et al., 2025c). While these models exhibit unprecedented capabilities in language understanding, generation, reasoning, and specialized applications, their advancements predominantly benefit high-resource languages (Alam et al., 2024). This inequality is particularly noticeable for Bangla. Despite having about 237 million native speakers,$^{1}$ Bangla remains quite underserved in modern NLP advancements.

This under-representation stems primarily from the limited availability of high-quality training data. While proprietary models like GPT-4 (Brown et al., 2023) and Claude-3.5 (Bai et al., 2024) demonstrate reasonable Bangla capabilities, open-source alternatives consistently underperform. Recent multilingual models such as Gemma-2 (Gemma et al., 2024) and LLaMA 3.1 (Dubey et al., 2024), despite leveraging diverse training corpora and advanced tokenization systems like TikTokenizer (Corso et al., 2024), also fail to deliver satisfactory performance for Bangla.
# 1.1 Limitations of Bangla LLM Initiatives

Training Recent attempts at developing Bangla LLMs (see Table 1) through continual pretraining (titu-Gemma) and model distillation (Zehady et al., 2024) have yielded poor and non-reproducible results (see Table 2), often performing worse than their base models. The absence of technical documentation and academic publications further compounds this issue by making result reproduction impossible. Our investigation into these models' performance reveals the need to improve the training process. While the unavailability of the pretraining corpora limits our analysis of that phase, the finetuning approach exhibits consistently problematic patterns.

Data Most Bangla LLM initiatives rely on translated versions of synthetic datasets like Alpaca-Instruct (Taori et al., 2023) and OpenOrca (Mitra et al., 2023), which are generated through model distillation (Hinton et al., 2015). This approach suffers from two fundamental limitations: (1) the datasets were generated by early GPT-3.5 (Brown et al., 2020) releases, a model with limited Bangla support, resulting in suboptimal instruction quality; and (2) these English datasets were translated to Bangla using machine translation systems like Google Translate with limited quality checks, further degrading the training data quality. These cascading compromises in training data ultimately result in poor model performance.

# 1.2 Contributions

To address the recurring challenges in Bangla LLM development, we introduce three fundamental contributions:

1. The Bangla-TextBook corpus, comprising 10 million tokens of carefully curated educational
| Model | Base-LLM | Size | pt | corpora | ft | ft-dataset | Paper/Report? | Reproducibility? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| titu-Gemma | Gemma-2 | 2B | 4.4B | ✗ | ✗ | ✗ | ✗ | ✗ |
| titu-LLaMA | LLaMA-3.1 | 3B | 37B | ✗ | ✗ | ✗ | ✗ | ✗ |
| Bangla-LLaMA | LLaMA-3.2 | 3B | ✗ | ✗ | 172K | Orca-translated | ✓ | ✗ |
| G2B | Gemma-2 | 9B | ✗ | ✗ | 145K | Alpaca-translated | ✗ | ✗ |
| Bangla-LLaMA | LLaMA-2 | 13B | ✗ | ✗ | 145K | Alpaca-translated | ✗ | ✗ |
| TigerLLM | LLaMA-3.2 | 1B | 10M | Bangla-TextBook | 100K | Bangla-Instruct | ✓ | ✓ |
| TigerLLM | Gemma-2 | 9B | 10M | Bangla-TextBook | 100K | Bangla-Instruct | ✓ | ✓ |
Table 1: Comparative analysis of Bangla LLM initiatives and their methodological approaches. The pretraining $(pt)$ and finetuning $(ft)$ columns indicate corpus size in tokens and instruction count respectively.

content across multiple domains, prioritizing content quality over scale.

2. A high-quality Bangla-Instruct dataset of 100,000 instruction-response pairs, generated through self-instruct (Wang et al., 2023) and model distillation using state-of-the-art teacher models (GPT-4o and Claude-3.5-Sonnet).
3. The TigerLLM family (1B and 9B parameters), featuring models pretrained and finetuned on our high-quality datasets, achieving $30 - 55\%$ performance improvements on existing benchmarks.

All components are open-sourced to establish robust foundations for future Bangla language modeling research.2

# 2 Related Work

Early transformer-based encoder-only pre-trained language models such as BERT (Devlin et al., 2019) concentrate on high-resource languages like English. Subsequent work adapts them to mid- and low-resource contexts through continued pre-training and task-specific finetuning. In Bangla, for instance, Sami et al. (2022) present BANGLABERT, demonstrating that a dedicated monolingual encoder markedly improves downstream classification and QA relative to multilingual baselines.
GPT2-Bangla (Bhattacharjee et al., 2023) continues GPT-2 pre-training on a 4GB Bangla corpus, while Bong-LLAMA (Zehady et al., 2024) and the titu-Gemma3 checkpoint attempt instruction tuning on translated datasets. These efforts often lack rigorous evaluation protocols, transparent data curation, or reproducible training pipelines—as reflected in the inconsistent results summarized in Table 1. Consequently, a clear methodological gap persists in developing open, reproducible decoder-only LLMs that natively support Bangla and other low-resource languages. + +# 3 Bangla-TextBook Corpus + +Previous Bangla LLMs rely predominantly on corpora sourced from OSCAR (Ortiz Suárez et al.) and Common Crawl (Bhattacharjee et al., 2022; Zehady et al., 2024), despite quality control challenges. While alternative Bangla corpora have emerged (Bhattacharyya et al., 2023), the absence of curated educational content remains a critical gap. This emphasis on data quality is particularly significant given recent findings by Gunasekar et al. (2023) and Raihan et al. (2025b), which demonstrate that LLMs achieve superior performance through high-quality training data, even with reduced volume. + +To bridge this gap, we present the Bangla-TextBook corpus, constructed exclusively from high-quality open-source educational materials published by the National Curriculum and Textbook Board of Bangladesh. We collect texts from 163 textbooks for Grades 6-12, resulting in a total of 9,897,623 tokens and 697,903 sentences. + +![](images/0e9b386ff12a7b64110a77498a994db981465d32f622bd56bc54c58dee0e590d.jpg) +Figure 1: The Bangla-Instruct generation pipeline. With 500 seed tasks, we employ a multi-step process using GPT-4o and Claude-3.5-Sonnet as teacher models to generate instruction-response pairs in Bangla. 
# 4 Bangla-Instruct

To address the limitations described in Section 1.1, we introduce Bangla-Instruct, a collection of 100,000 native Bangla instruction-response pairs bootstrapped using self-instruct (Wang et al., 2023). While instruction datasets like Alpaca (Taori et al., 2023) and OpenOrca (Mitra et al., 2023) utilized GPT-3 and GPT-3.5 respectively, we significantly improve upon their approach by employing GPT-4o and Claude-3.5-Sonnet as our teacher models, leveraging their superior instruction-following capabilities.

Our dataset creation begins with 500 diverse seed tasks carefully curated by a team of 50 undergraduate and graduate students from leading Bangladeshi universities (Appendix A.1). These volunteers, spanning various academic disciplines and geographical regions of Bangladesh, ensure our seed tasks capture authentic linguistic patterns and cultural contexts. Each seed task undergoes multiple rounds of peer review to maintain quality and cultural sensitivity. Further information on quality control is presented in Appendix A.3.

Our generation pipeline consists of four primary steps, each designed to maintain data quality and cultural authenticity (see Figure 1).

(1) Seed & Instruction Generation: We begin with a human-curated seed pool $\mathcal{T}_s = \{t_1,\dots ,t_{500}\}$ drawn from 50 volunteers representing five academic disciplines across Bangladesh (see Appendix A.1). At every generation round $n$, we sample $k = 8$ seed tasks and prompt Claude to create a candidate batch of instructions $\mathcal{I}_n$, expanding coverage of the ten seed categories $c_{1\dots 10}$ listed in Appendix A.2 while preserving authentic linguistic patterns.

(2) Task Typing: Each instruction $i \in \mathcal{I}_n$ is classified by GPT-4o into $\tau(i) \in \{\text{open-ended, classification, generation}\}$, providing the expected answer style and the minimum-length threshold $l_{\min}(\tau)$ used in subsequent filtering.
(3) Response Drafting: Conditioned on $(i,\tau (i))$, Claude produces a comprehensive response $r_i$. We retain the highest-scoring draft according to an internal coherence metric $c(i,r)$.
(4) Multi-stage Filtering: GPT-4o applies the four-criteria filter $\mathcal{F}$ — Language $(\mathcal{L})$, Cultural $(\mathcal{C})$, Quality $(\mathcal{Q})$, and Novelty $(\mathcal{N})$ (see Appendix A.3). On average, $\sim 63\%$ of $(i,r)$ pairs pass $\mathcal{F}$, yielding a balanced complexity mix (40% basic, 40% intermediate, 20% advanced). Valid pairs are appended to $\mathcal{T}_s$, and the loop continues until 100K high-quality instruction-response pairs are reached.

By coupling two complementary LLMs with strict verification and a human-seeded, domain-balanced task pool, our pipeline mitigates error propagation and preserves cultural nuance, addressing shortcomings observed in earlier Bangla instruction datasets (see Appendix A for full statistics).

# 5 TigerLLM

As candidate base models, we consider three families of multilingual LLMs: LLaMA 3.2 (1B, 3B) (Dubey et al., 2024), Gemma-2 (2B, 9B) (Gemma et al., 2024), and Pangea (7B) (Yue et al., 2024).

![](images/1668a568ab4563c9b2165c9bea6c8d10cd9f23a70a74b7fb7ab8eda9c4a326f1.jpg)
Figure 2: Evolution of TigerLLM.

Evolution of TigerLLM Figure 2 depicts the final selection of the models and a high-level overview of the process. Upon the selection phase, we finalize two pretrained language models, LLaMA 3.2 (1B) and Gemma 2 (9B), chosen for their robust foundational capacities. These models then undergo continual pretraining (see Figure 3) on the specialized Bangla-TextBook corpus, which infuses them with a richer understanding of the Bangla language, including its context-specific nuances, stylistic variations, and domain-specific terminology.

Pretraining We utilize a computing cluster with 8 NVIDIA A100 GPUs (40GB each), 512GB RAM, and 2TB storage.
The distributed training setup enables efficient parallel processing, completing the pretraining in approximately 120 hours on this high-performance configuration with gradient checkpointing enabled.

Continual Pretraining We use the Bangla-TextBook corpus so that the models learn culture- and language-specific nuances and gather sufficient, reliable knowledge from a set of high-quality texts. The pretraining phase was carried out multiple times with empirical choices of hyper-parameters.

![](images/560d8464f5262f39f06b2b30865f8bf9fde32e3c0e23739e66943746be18bce7.jpg)
Figure 3: Continual Pretraining - Loss per Step.

Finetuning We conduct finetuning on a single NVIDIA A100 (40GB) through Google Colab4, supported by 80GB RAM and 256GB storage. The process completes in approximately 96 hours, proving sufficient for model adaptation and task-specific optimization with minimal computational overhead.

Model Distillation Following the continual pretraining step, the models are finetuned on the carefully curated Bangla-Instruct dataset (Figure 4). We do not use LoRA (Hu et al., 2021); instead, we implement full finetuning for better learning. To speed up training, we utilize Flash Attention (Dao et al., 2022) and set the key parameters as follows: 2048-token maximum sequence length, batch size of 8, 4 gradient accumulation steps, and 3 epochs. The learning rate $(5 \times 10^{-5})$, weight decay (0.02), and $10\%$ warm-up steps ensure stable convergence. Table 5 in Appendix B lists the complete hyperparameters.

![](images/8820e03b567750ba8343ae72c1c32f2f1bd933d07743966ed20a7558037d428c.jpg)
Figure 4: Finetuning - Loss per Step.

By blending the foundational strengths of LLaMA and Gemma with specialized Bangla corpora and instruction-oriented finetuning, the final TigerLLM models emerge as optimized solutions capable of delivering high-quality, instruction-following responses tailored to Bangla-language tasks.

| Model | MMLU-bn (understanding) | PangBench-bn (multitasking) | BanglaQuaD (question answering) | mHumanEval-bn (coding) | BEnQA (knowledge) | BanglaRQA (reasoning) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT3.5 | 0.55 | 0.55 | 0.50 | 0.56 | 0.50 | 0.49 |
| Gemini-Flash1.5 | 0.66 | 0.57 | 0.62 | 0.58 | 0.56 | 0.61 |
| GPT4o-mini | 0.67 | 0.62 | 0.65 | 0.56 | 0.60 | 0.60 |
| LLaMA3.2 (11B) | 0.22 | 0.19 | 0.21 | 0.15 | 0.18 | 0.20 |
| Gemma 2 (27B) | 0.35 | 0.51 | 0.43 | 0.64 | 0.50 | 0.56 |
| Pangea (7B) | 0.18 | 0.15 | 0.17 | 0.10 | 0.14 | 0.16 |
| Titu-LLM | 0.06 | 0.19 | 0.08 | 0.02 | 0.17 | 0.21 |
| Bong-LLaMA | 0.05 | 0.12 | 0.08 | 0.02 | 0.15 | 0.13 |
| Bangla-LLaMA | 0.02 | 0.08 | 0.05 | 0.10 | 0.11 | 0.09 |
| Bangla-Gemma | 0.18 | 0.15 | 0.12 | 0.10 | 0.22 | 0.19 |
| TigerLLM (1B) | 0.61 | 0.55 | 0.68 | 0.61 | 0.59 | 0.62 |
| TigerLLM (9B) | 0.72 | 0.68 | 0.70 | 0.63 | 0.65 | 0.68 |

Table 2: Performance comparison of TigerLLM with other models on various Bangla-specific benchmarks. All values are reported as Pass@1 scores, where higher scores indicate better performance.

# 6 Evaluation

Bangla LLM Benchmarks Although there has been limited research on Bangla LLMs, several benchmarks have been established to assess their performance. We focus on five benchmarks specifically curated to evaluate Bangla LLMs across a diverse set of tasks. For multitask understanding, we use the Bangla subsets of MMLU-Pro (Wang et al., 2024) and PangBench (Yue et al., 2024). For question answering, we consider BanglaQuaD (Rony et al., 2024), while for general knowledge, we use BEnQA (Shafayat et al., 2024). For reasoning tasks, we refer to BanglaRQA (Ekram et al., 2022).

As shown in the survey of Raihan et al. (2024), most coding benchmarks like HumanEval (Chen et al., 2021) do not support Bangla, so we utilize the Bangla subset of mHumanEval (Raihan et al., 2025a).

Results We present the results obtained by the two TigerLLM models compared to a variety of strong LLM baselines in Table 2. The performance comparison reveals a common trend: the existing finetuned Bangla models generally perform worse than their base counterparts across most tasks. In particular, the results reported by their authors are not reproducible, as mentioned in Section 1.1. TigerLLM, by contrast, is the only finetuned model that consistently outperforms both its base models and the other finetuned variants across all tasks. Even the 1B variant does better than most models, falling short only of its 9B counterpart, further validating our emphasis on high-quality data (Section 4).

Takeaways TigerLLM demonstrates that carefully curated, high-quality datasets can yield superior performance even with smaller model sizes.
Our results show that the 1B parameter model outperforms larger alternatives across multiple benchmarks, emphasizing the importance of data quality over quantity. The success of our Bangla-TextBook corpus and Bangla-Instruct dataset establishes a new paradigm for low-resource language model development. + +# 7 Conclusion and Future Work + +This paper introduces TigerLLM, a family of state-of-the-art Bangla language models that outperform existing alternatives across six benchmarks. TigerLLM's success stems from two key innovations: (1) the high-quality Bangla-TextBook corpus derived from educational materials and (2) the carefully curated Bangla-Instruct dataset generated using advanced teacher models. + +The three resources introduced here (corpus, instruction dataset, and models) establish a robust foundation for future Bangla language modeling research. Together, they will help accelerate advances in Bangla language modeling. + +In future work, we will conduct a deeper qualitative analysis of the model's behavior, broaden the corpus to cover a wider array of domains, scale the model to larger parameter counts without compromising quality, and devise richer evaluation metrics tailored specifically to Bangla tasks. + +# Limitations + +While TigerLLM delivers state-of-the-art performance, several limitations warrant acknowledgment. First, our Bangla-TextBook corpus, though carefully curated, is limited to educational materials from grades 6-12, potentially missing broader linguistic patterns present in other domains. The 10-million-token size, while sufficient for our current models, may constrain scaling to larger architectures. Additionally, our Bangla-Instruct dataset, despite its quality-focused generation process, covers only a subset of possible instruction types and may not fully capture the complexity of real-world Bangla language use cases.
+ +Furthermore, our models are currently limited to 1B and 9B parameters, primarily due to computational constraints and our emphasis on thorough experimentation with smaller, computationally efficient architectures. While this approach enabled rapid iteration and quality-focused development, it may not fully exploit the potential benefits of larger model scales. + +# Ethical Considerations + +Our work prioritizes ethical considerations throughout the development process. The Bangla-TextBook corpus uses open-source, publicly available educational materials from the National Curriculum and Textbook Board of Bangladesh. The volunteer-driven seed task creation process incorporated diverse perspectives while maintaining cultural sensitivity and avoiding harmful biases. + +We implemented rigorous filtering mechanisms to ensure cultural appropriateness, gender neutrality, and religious sensitivity in our instruction dataset. The multi-stage review process, involving both automated checks and human verification, helps prevent the propagation of harmful stereotypes or biases. Additionally, our open-source approach promotes transparency and enables community oversight of model behavior. + +We strongly recommend that users implement appropriate safeguards when deploying TigerLLM in production environments, particularly for applications involving sensitive information or critical decision-making. + +# References + +Firoj Alam, Shammur Absar Chowdhury, Sabri Boughorbel, and Maram Hasanain. 2024. LLMs for low-resource languages in multilingual, multimodal and dialectal settings. In Proceedings of EACL. +Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Tyler Conerly, et al. 2024. Claude 3.5 Sonnet technical report. +Abhik Bhattacharjee, Tahmid Hasan, Wasi Ahmad, Kazi Samin Mubasshir, and Md Saiful Islam. 2022. BanglaBERT: Language model pretraining and benchmarks for low-resource language understanding evaluation in Bangla. In Findings of the ACL (NAACL-2022).
+Abhik Bhattacharjee, Tahmid Hasan, and Md Saiful Islam. 2023. BanglaGPT: A GPT-2 language model continued pre-training for Bangla. +Pramit Bhattacharyya, Joydeep Mondal, Subhadip Maji, and Arnab Bhattacharya. 2023. Vacaspati: A diverse corpus of Bangla literature. In Proceedings of the 13th International Joint Conference on Natural Language Processing. +Tom Brown, Ben Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2023. GPT-4 technical report. +Tom Brown, Benjamin Mann, Nick Ryder, et al. 2020. Language models are few-shot learners. In Proceedings of NeurIPS. +Mark Chen, Jerry Tworek, Heewoo Jun, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. +Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. +Francesco Corso, Francesco Pierri, and Gianmarco De Francisci Morales. 2024. What we can learn from TikTok through its research API. In Proceedings of WebSci. +Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Proceedings of NeurIPS. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL. +Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783. +Syed Mohammed Sartaj Ekram, Adham Arik Rahman, Md Sajid Altaf, Mohammed Saidul Islam, Mehrab Mustafy Rahman, Md Mezbaur Rahman, Md Azam Hossain, and Abu Raihan Mostofa Kamal. 2022. BanglaRQA: A benchmark dataset for under-resourced Bangla language reading comprehension-based question answering with diverse question-answer types. In Findings of the ACL (EMNLP-2022). + +Gemma Team, Morgane Riviere, Shreya Pathak, et al. 2024. Gemma 2: Improving open language models at a practical size.
arXiv preprint arXiv:2408.00118. +Suriya Gunasekar, Yi Zhang, Jyoti Aneja, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644. +Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. +Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, et al. 2021. LoRA: Low-rank adaptation of large language models. In Proceedings of ICLR. +Yiqiao Jin, Mohit Chandra, Gaurav Verma, Yibo Hu, Munmun De Choudhury, and Srijan Kumar. 2024. Better to ask in English: Cross-lingual evaluation of large language models for healthcare queries. In Proceedings of the ACM Web Conference 2024. +Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, et al. 2022. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. +Arindam Mitra, Luciano Del Corro, Shweti Mahajan, et al. 2023. Orca 2: Teaching small language models how to reason. arXiv preprint arXiv:2311.11045. +Pedro Javier Ortiz Suárez, Laurent Romary, and Benoit Sagot. 2020. A monolingual approach to contextualized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. +Nishat Raihan, Antonios Anastasopoulos, and Marcos Zampieri. 2025a. mHumanEval: A multilingual benchmark to evaluate large language models for code generation. In Proceedings of NAACL. +Nishat Raihan, Christian Newman, and Marcos Zampieri. 2024. Code LLMs: A taxonomy-based survey. In Proceedings of IEEE BigData. +Nishat Raihan, Joanna C. S. Santos, and Marcos Zampieri. 2025b. MojoBench: Language modeling and benchmarks for Mojo. In Findings of the ACL (NAACL-2025). +Nishat Raihan, Mohammed Latif Siddiq, Joanna C. S. Santos, and Marcos Zampieri. 2025c. Large language models in computer science education: A systematic literature review. In Proceedings of SIGCSE. +Md.
Rashad Al Hasan Rony, Sudipto Kumar Shaha, Rakib Al Hasan, Sumon Kanti Dey, Amzad Hossein Rafi, Ashraf Hasan Sirajee, and Jens Lehmann. 2024. BanglaQuaD: A Bengali open-domain question answering dataset. + +Abdullah As Sami, Nusrat Jahan Prottasha, Mohammad Shamsul Arefin, Pranab Kumar Dhar, and Takeshi Koshiba. 2022. Bangla-BERT: Transformer-based efficient model for transfer learning and language understanding. IEEE Access. +Sheikh Shafayat, H M Quamran Hasan, Minhajur Rahman Chowdhury Mahim, Rifki Afina Putri, James Thorne, and Alice Oh. 2024. BEnQA: A question answering and reasoning benchmark for Bengali and English. +Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, and Yann Dubois. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. +Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, and Noah A Smith. 2023. Self-Instruct: Aligning language models with self-generated instructions. In Proceedings of ACL. +Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, et al. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574. +Xiang Yue, Yueqi Song, Akari Asai, Seungone Kim, et al. 2024. Pangea: A fully open multilingual multimodal LLM for 39 languages. arXiv preprint arXiv:2410.16153. +Abdullah Khan Zehady, Safi Al Mamun, Naymul Islam, and Santu Karmaker. 2024. BongLLaMA: LLaMA for Bangla language. arXiv preprint arXiv:2410.21200. +Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, et al. 2024. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint arXiv:2402.07827. + +# A Bangla-Instruct Curation + +# A.1 Volunteer Information + +The seed tasks were created by 50 undergraduate and graduate students from various universities across Bangladesh, ensuring geographical and academic diversity: + +- 15 students from Computer Science and Engineering. +- 10 students from Bengali Literature.
+- 10 students from Business Administration. +- 8 students from Science and Engineering. +- 7 students from Social Sciences. + +Each volunteer contributed 10 diverse instructions, resulting in our initial pool of 500 seed tasks. The distribution ensured coverage across multiple domains while preserving authentic Bengali linguistic patterns and cultural contexts. + +# A.2 The Seed Dataset + +Our seed dataset comprises 10 distinct categories, carefully chosen to cover a broad spectrum of tasks relevant to Bengali language and culture: + +1. Cultural Knowledge and Heritage $(c_{1})$: Tasks focusing on Bengali traditions, festivals, folk tales, and historical events. These include explaining cultural practices, describing traditional ceremonies, and discussing the historical significance of various customs. +2. Academic Writing $(c_{2})$: Structured writing tasks ranging from essay outlines to full academic compositions. Topics cover various academic disciplines while maintaining Bengali writing conventions and scholarly standards. +3. Mathematical Problem Solving $(c_{3})$: Tasks involving mathematical concepts explained in Bengali, including algebra, geometry, and arithmetic. Special attention is given to Bengali mathematical terminology and local problem-solving contexts. +4. Programming and Technical $(c_{4})$: Programming problems described in Bengali with solutions in standard programming languages. Includes algorithm explanation, code documentation, and technical concept elaboration in Bengali. +5. Creative Writing $(c_{5})$: Open-ended creative tasks including story writing, poetry composition, and descriptive passages. Emphasizes Bengali literary devices, metaphors, and cultural storytelling elements. +6. Scientific Explanation $(c_{6})$: Tasks requiring clear explanation of scientific concepts in Bengali, focusing on making complex ideas accessible while maintaining technical accuracy. Covers physics, chemistry, biology, and environmental science. +7. Business and Economics $(c_{7})$: Professional writing tasks including business case analyses, market reports, and economic concept explanations. Incorporates local business contexts and Bengali business terminology. +8. Social Issues Analysis $(c_{8})$: Critical analysis tasks addressing contemporary social issues in Bangladesh and Bengali society. Includes problem identification, cause analysis, and solution proposition. +9. Data Analysis and Statistics $(c_{9})$: Tasks involving interpretation and analysis of data presented in Bengali, including statistical concepts explanation, data visualization description, and numerical analysis. +10. Language and Translation $(c_{10})$: Tasks focused on Bengali language mastery, including idiom explanation, translation between Bengali and English, and linguistic analysis of Bengali texts. + +Each category accounts for approximately $10\%$ of the seed dataset ($50 \pm 5$ tasks per category), ensuring balanced representation across domains. The tasks within each category vary in complexity level: $40\%$ basic, $40\%$ intermediate, and $20\%$ advanced, based on linguistic complexity and cognitive demand. + +# A.3 Filtering Methodology + +Our filtering process $\mathcal{F}:(\mathcal{I},\mathcal{R})\to \{0,1\}$ implements the following criteria: + +# 1. Language Adherence $(\mathcal{L})$ + +- Bengali Word Ratio: $\frac{|\text{Bengali words}|}{|\text{total words}|} \geq 0.95$ +- Unicode Consistency: $\forall c \in \text{text}, c \in \text{Bengali-UTF8}$ +- Grammar Check: Using GPT-4o's Bengali grammar scoring function $g(x) \geq 0.8$ + +# 2. Cultural Sensitivity $(\mathcal{C})$ + +- Religious Neutrality: $r(x) \in [-0.1, 0.1]$ on our bias scale +- Regional Inclusivity: No specific region/dialect preference +- Gender Representation: Balanced pronouns and roles +- Political Neutrality: Avoidance of partisan content + +# 3. Content Quality $(\mathcal{Q})$ + +- Minimum Length: $l(x) \geq l_{\min}(\tau)$ where $\tau$ is the task type +- Coherence Score: $c(i, r) \geq 0.8$ between instruction $i$ and response $r$ +- Factual Accuracy: Verified against Bengali Wikipedia +- Format Adherence: Proper paragraph breaks, lists, or code blocks + +# 4. Novelty Verification $(\mathcal{N})$ + +- Similarity Threshold: $\forall j \in \mathcal{D}, \operatorname{sim}(i, j) \leq 0.7$ +- Lexical Diversity: Minimum Token Ratio of 0.4 +- Response Uniqueness: No duplicate responses within the same category +- Task Format Variation: Ensure uniform distribution across formats + +A pair $(i,r)$ is accepted if and only if: + +$$ \mathcal{F}(i, r) = \mathbb{1}\left[\mathcal{L}(i, r) \wedge \mathcal{C}(i, r) \wedge \mathcal{Q}(i, r) \wedge \mathcal{N}(i, r)\right] = 1 $$ + +This rigorous filtering ensures the quality and diversity of our final dataset while maintaining Bengali linguistic and cultural authenticity. + +# B Experimentation Details + +# B.1 Pretraining Hyperparameters + +
| Hyperparameter | Value |
| --- | --- |
| Per device train batch size | 64 |
| Gradient accumulation steps | 16 |
| Number of training epochs | 4 |
| Learning rate | 5 × 10⁻⁶ |
| FP16 | False |
| BF16 | True |
| Dataloader num workers | 8 |
| Gradient checkpointing | True |
| Logging steps | 1000 |
| DDP find unused parameters | False |
| Max gradient norm | 1.0 |
| Warmup steps | 1000 |
| Evaluation strategy | steps |
| Evaluation steps | 1,000 |
| Save strategy | steps |
| Save steps | 1,000 |
| Save total limit | 3 |
| Load best model at end | True |
| Metric for best model | loss |
| Greater is better | False |
+ +Table 3: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for pretraining on the Bangla-TextBook corpus. + +# B.2 Finetuning Hyperparameters + +
| Parameter | Value |
| --- | --- |
| Max Sequence Length | 2048 |
| Batch Size (Train/Eval) | 16 |
| Gradient Accumulation Steps | 4 |
| Number of Epochs | 3 |
| Learning Rate | 1e-5 |
| Weight Decay | 0.02 |
| Warmup Steps | 10% |
| Optimizer | AdamW (8-bit) |
| LR Scheduler | Cosine |
| Precision | BF16 |
| Evaluation Strategy | Steps |
| Evaluation Steps | 50 |
| Save Strategy | Steps |
| Save Steps | Varies |
| Seed | 42 |
+ +Table 4: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for finetuning TigerLLM (1B). + +
| Parameter | Value |
| --- | --- |
| Max Sequence Length | 2048 |
| Batch Size (Train/Eval) | 32 |
| Gradient Accumulation Steps | 8 |
| Number of Epochs | 3 |
| Learning Rate | 1e-6 |
| Weight Decay | 0.04 |
| Warmup Steps | 15% |
| Optimizer | AdamW (8-bit) |
| LR Scheduler | Cosine |
| Precision | BF16 |
| Evaluation Strategy | Steps |
| Evaluation Steps | 250 |
| Save Strategy | Steps |
| Save Steps | Varies |
| Seed | 42 |
+ +Table 5: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for finetuning TigerLLM (9B).
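For concreteness, the finetuning setup in Table 5 can be written down as a plain configuration dictionary. This is an illustrative sketch only: the key names follow common Hugging Face-style conventions and are our assumptions, not the authors' actual training script, and the warmup helper for converting the percentage entry into absolute steps is ours as well.

```python
# Illustrative sketch of the TigerLLM (9B) finetuning configuration from Table 5.
# Key names are Hugging Face-style assumptions, not the authors' actual code.

finetune_9b = {
    "max_seq_length": 2048,
    "per_device_batch_size": 32,       # same for train and eval
    "gradient_accumulation_steps": 8,
    "num_train_epochs": 3,
    "learning_rate": 1e-6,
    "weight_decay": 0.04,
    "warmup_ratio": 0.15,              # "Warmup Steps: 15%" in Table 5
    "optim": "adamw_8bit",
    "lr_scheduler_type": "cosine",
    "bf16": True,
    "eval_steps": 250,
    "seed": 42,
}

def warmup_steps(total_steps: int, warmup_ratio: float) -> int:
    """Convert the percentage-style warmup entry into an absolute step count."""
    return int(total_steps * warmup_ratio)

# Effective batch size seen by the optimizer (assuming a single device):
effective_batch = (finetune_9b["per_device_batch_size"]
                   * finetune_9b["gradient_accumulation_steps"])
print(effective_batch)  # 256
```

Note that the 1B variant (Table 4) differs only in the batch size (16), accumulation steps (4), learning rate (1e-5), weight decay (0.02), warmup (10%), and evaluation interval (50).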
# Towards Geo-Culturally Grounded LLM Generations + +Piyawat Lertvittayakumjorn†, David Kinney‡, Vinodkumar Prabhakaran†, Donald Martin, Jr.†, Sunipa Dev† + +†Google ‡Washington University in St. Louis + +{piyawat,vinodkpg,dxm,sunipadev}@google.com, kinney@wustl.edu + +# Abstract + +Generative large language models (LLMs) have demonstrated gaps in diverse cultural awareness across the globe. We investigate the effect of retrieval augmented generation and search-grounding techniques on LLMs' ability to display familiarity with various national cultures.
Specifically, we compare the performance of standard LLMs, LLMs augmented with retrievals from a bespoke knowledge base (i.e., KB grounding), and LLMs augmented with retrievals from a web search (i.e., search grounding) on multiple cultural awareness benchmarks. We find that search grounding significantly improves the LLM performance on multiple-choice benchmarks that test propositional knowledge (e.g., cultural norms, artifacts, and institutions), while KB grounding's effectiveness is limited by inadequate knowledge base coverage and a suboptimal retriever. However, search grounding also increases the risk of stereotypical judgments by language models and fails to improve evaluators' judgments of cultural familiarity in a human evaluation with adequate statistical power. These results highlight the distinction between propositional cultural knowledge and open-ended cultural fluency when it comes to evaluating LLMs' cultural awareness. + +# 1 Introduction + +Contemporary large language models (LLMs) are pretrained on huge corpora of natural language text (Radford et al., 2019) and then fine-tuned using human feedback to improve their quality (Bai et al., 2022). During both processes, it is possible for text from a particular culture or cultures to be over-represented in the training data (Dodge et al., 2021) and for the perspectives, norms, and mores of specific cultures to be over-represented in the feedback from human evaluators (Prabhakaran et al., 2022; Atari et al., 2023). Consequently, there is growing recognition of generative LLMs' shortcomings in representing and serving people from diverse geo-cultural backgrounds at the global scale (Adilazuarda et al., 2024; Pawar et al., 2024; Agarwal et al., 2025). Models tend to stereotype different cultures (Jha et al., 2023; Bhutani et al., 2024), erase and simplify their representation (Qadri et al., 2025), and provide very limited knowledge and context about artifacts and norms that are salient to them (Myung et al., 2024; Rao et al., 2024). Despite these gaps, strategies for eliciting culturally appropriate content from the models remain under-explored, with only some investigation of prompt engineering (Rao et al., 2023; Wang et al., 2024) and model finetuning (Chan et al., 2023; Li et al., 2024a,b), if not pretraining on more diverse non-English data. + +Here, we study two strategies to improve cultural awareness of LLM generation using external knowledge. In the first strategy, we construct a bespoke cultural knowledge base (KB) and apply a retrieval augmented generation (RAG) technique (Lewis et al., 2020; Gao et al., 2023) so that the input to the language model includes relevant cultural text from the knowledge base for better generation. In the second strategy, we use a commercially available search-grounding generation API which translates user prompts into a web search query, uses it to retrieve relevant pieces of text from the Internet, and grounds the LLM generation on the retrieved text. We call these two strategies KB-grounding and search-grounding, respectively. + +In our experiments, we highlight the necessity of a multi-pronged approach to evaluating cultural awareness in language model generations. Specifically, we leveraged multiple benchmarks (Myung et al., 2024; Rao et al., 2024; Bhutani et al., 2024) to evaluate cultural knowledge and the ability to avoid cultural stereotyping of the models equipped with the two strategies, compared with the vanilla generation baseline.
We also conducted a human evaluation of open-ended model responses to various prompts designed to test cultural fluency, wherein evaluators from a specific national culture rated how well the model's output reflected a culturally familiar perspective. The results from both experiments shed light on the pros and cons of the two strategies. Finally, we conclude the paper by discussing key findings and offering suggestions for future work. + +
| Source | Description | #Docs |
| --- | --- | --- |
| CultureAtlas | Wikipedia text | 239,376 |
| Cube | Artefact names | 198,896 |
| CultureBank | Situation-based practices | 22,990 |
| SeeGULL | Stereotypes | 6,871 |
+ +Table 1: Sources of documents in our cultural KB. + +# 2 Improving Cultural Awareness by Retrieving External Knowledge + +Retrieval augmented generation (RAG) is a technique for enhancing the quality of large language model generation. To implement RAG, a user prompt is first used to retrieve relevant information (e.g., text or documents) from a database, which is then added to the prompt before being passed to a generative LLM to produce a grounded response (Lewis et al., 2020; Gao et al., 2023). RAG has been shown to be effective in several applications, particularly those involving tasks that the base LLM was not well-trained for, such as fact verification (Asai et al., 2023; Singal et al., 2024; Khaliq et al., 2024) and domain-specific question answering (QA) (Seo et al., 2024; Xiong et al., 2024; Kim and Min, 2024). Some existing work also creates bespoke knowledge bases for use with RAG to tailor outputs to their specific applications (Sun et al., 2025; Li et al., 2024c). Meanwhile, instead of querying a KB, other techniques retrieve external information by searching the internet, which allows access to up-to-date knowledge with high-quality ranking outputs (Fan et al., 2024; Shuster et al., 2022; Lazaridou et al., 2022; Yao et al., 2022; Nakano et al., 2021; Komeili et al., 2022). In light of these successes, we study both KB-grounding and search-grounding as techniques for improving the cultural awareness of LLM generations. + +# 2.1 The Knowledge Base Grounding Strategy + +We compiled culturally salient data from four large sources—CultureAtlas (Fung et al., 2024), Cube (Kannen et al., 2024), CultureBank (Shi et al., 2024), and SeeGULL (Jha et al., 2023)—to serve as our knowledge base, as listed in Table 1. For each entry in each source, we converted it into text (if it was not already), embedded it into a vector, and added it to a vector store for querying. + +Fig. 1 (top) shows how the KB-grounding strategy works.
First, a query rewriter extracts the important parts of an incoming prompt to form a query. We can configure this step to test different KB queries (e.g., whether to include choices of a multiple-choice question prompt in the query). Next, we use the query to retrieve $n$ documents from the KB. We optionally check whether each of the documents is indeed relevant to the original prompt by prompting the base LLM and include only the $k$ relevant ones in the prompt. We call the process with the relevancy check step selective RAG, while non-selective RAG includes all $n$ documents in the prompt. Then, we feed the augmented prompt to the LLM to get a raw answer. For multiple-choice question answering (QA) tasks, the LLM sometimes does not strictly follow the formatting instruction in the prompt, resulting in various surface forms of the same answer. For example, the raw outputs for the answer "1) Yes" we found include, e.g., "1", "1)", "Yes", "1) Yes", "Answer: Yes", "Answer: 1) Yes", "**Answer**: Yes", etc., sometimes with additional explanations. Therefore, we create a method, called a manual verbalizer, to normalize raw answers and map them to one of the choices so we can compute the model accuracy (similar to verbalizers in LLM-based text classification (Schick and Schütze, 2021; Thaminkaew et al., 2024)). By contrast, we use the raw answer as the final output for an open-ended generation task. + +For implementation details, we created our vector store on Google Vertex AI using textembedding-gecko@003 as the embedding model. In the experiments, we retrieved the $n = 5$ most similar texts from the vector store; however, the number of texts actually used $(k)$ could be lower than 5 for the selective RAG approach. In Section 3, we applied KB-grounding to three LLMs: gemini-flash-1.5 (Gemini) (Team et al., 2024), gpt-4o-mini (GPT) (OpenAI et al., 2024) and olmo2-7b (OLMo) (OLMo et al., 2025), all with a temperature of 0.5.
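The loop above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not the paper's implementation: the embedding model, the LLM-based relevancy check, and the vector store are replaced by stand-in callables, and the verbalizer here handles only the surface forms listed above.

```python
import re

def retrieve(query, kb, embed, n=5):
    """Return the n KB documents most similar to the query by cosine similarity.
    `embed` stands in for the embedding model (a Vertex AI model in the paper)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    q = embed(query)
    return sorted(kb, key=lambda d: -cos(q, embed(d)))[:n]

def selective_rag_prompt(prompt, kb, embed, is_relevant, n=5):
    """Retrieve n docs, keep only the k judged relevant (the LLM relevancy
    check in the paper, a stand-in predicate here), and build the prompt."""
    docs = [d for d in retrieve(prompt, kb, embed, n) if is_relevant(prompt, d)]
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {prompt}"

def manual_verbalizer(raw_answer, choices):
    """Map raw outputs like '1', '1)', 'Answer: Yes', or '**Answer**: Yes'
    onto one of the given choices; return None if nothing matches."""
    raw = raw_answer.strip().splitlines()[0]
    raw = re.sub(r"^\**answer\**\s*[:\-]\s*", "", raw, flags=re.I)
    raw = raw.strip("*() .")
    for idx, choice in enumerate(choices, start=1):
        low = raw.lower()
        if low == str(idx) or low == choice.lower() or low.startswith(f"{idx}) {choice.lower()}"):
            return choice
    return None
```

Non-selective RAG corresponds to passing a predicate that always returns True, so all $n$ retrieved documents enter the prompt.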
See the Appendix for details about the KB, the queries, and the prompt templates for relevancy check and answer generation. + +# 2.2 The Search Grounding Strategy + +Fig. 1 (bottom) outlines how the search-grounding strategy works. We feed the original prompt to the search-grounding generation API, which converts the prompt into a search query, inputs it to a search engine, and obtains relevant text from the web pages returned by the search. Thus, the search-grounding strategy retrieves text that is relevant to the prompt by effectively exploiting proprietary page-ranking algorithms used in contemporary search engines to identify web pages that are relevant to the prompt, and then extracting and checking the relevancy of specific text from those pages using text extraction and relevancy-checking techniques that are not, currently, publicly available. The retrieved relevant text is then integrated into the prompt, at which point the LLM generates a response based on this augmented prompt. In effect, the search-grounding strategy replaces the bespoke KB in Section 2.1 with the entire web, and replaces the vector-based KB querying with retrieving prompt-relevant content with a powerful search engine. We implemented search grounding on Google Vertex AI. This enables retrieval of prompt-relevant content from the Google search engine, which is then integrated into the prompt that is given to the Gemini LM to generate a response. Search-grounding is not currently available on APIs for accessing GPT or OLMo, so we only implemented search-grounding for Gemini. + +![](images/3741c8f83a7bdaea7095d262fb87624bd766f1c83401d10518f94d324c946d53.jpg) + +![](images/d24f7d8e43db1d7830738fe7827d7c061f2f20b5001bf0d41768fe8f5a5fa2e7.jpg) +Figure 1: (Top) The knowledge base grounding strategy and (Bottom) The search grounding strategy. Dashed boxes indicate optional steps that are executed only under the annotated conditions.
For KB grounding, the same LLM is used for both relevancy check and answer generation. + +# 3 Cultural Competence Benchmarks + +We evaluated the KB-grounding and search-grounding strategies on multiple-choice cultural QA benchmarks. The key results are discussed in this section, while the full statistical analyses are reported in Appendix A.5. + +# 3.1 Cultural Knowledge + +**Setup.** We used two benchmark datasets to test the sensitivity of LLMs with respect to two facets of culture. The first, BLEnD, contains questions about everyday cultural knowledge (such as food, sports, family, and education) in different countries (Myung et al., 2024). We used the $\sim 24k$ English questions from BLEnD that are related to ten countries $^{2}$ represented in our bespoke KB. The second, NORMAD, focuses on cultural norms and values (Rao et al., 2024). Each question is a story in an everyday scenario, and asks whether a character's action in the story is socially acceptable within the given context (Yes, No, or Neither). We experimented with two types of contexts: Country and $Country+Value$. The former specified a country where the story takes place, while the latter additionally indicated the value paradigm the character should adhere to. NORMAD has $\sim 2.6k$ questions in total. + +We compared the vanilla generation with the KB-grounding and the search-grounding generations. For KB-grounding, we considered both the selective and non-selective RAG approaches. For BLEnD specifically, as answer choices are potentially relevant to the question, we considered KB query rewriting both "with choices" and "without choices" included in the query. However, we always included the choices in the prompts for relevancy check and answer generation. + +**Results.** Fig. 2 (left) presents the results of the BLEnD benchmark.
A repeated-measures ANOVA revealed a significant effect of strategy on accuracy across all three LLMs, meaning that the accuracy of LLM generations differed significantly between at least some of the generation strategies used (i.e., search-grounding, the varieties of KB-grounding, or vanilla). However, the optimal methods differed among the LLMs: search-grounding for Gemini, non-selective KB-grounding (without answer choices in KB queries) for GPT, and both selective KB-grounding (with choices in KB queries) and the vanilla approach for OLMo. Note that although the magnitude of the difference in performance between strategies was relatively small for both Gemini and GPT, the large sample size $(\approx 24k$ examples) enabled us to detect the statistical significance of these improvements. + +We also observed the effectiveness of both KB- and search-grounding in several examples. For instance, the vanilla Gemini incorrectly answered the question, "What is the most popular sport team in Ethiopia? (A) coffee (B) lg twins (C) persepolis (D) real madrid." It selected (D), possibly due to Real Madrid's high frequency in the training corpora concerning popular sport teams. In fact, the correct answer is (A), as 'coffee' refers to the Ethiopian Coffee Sport Club. This information is available on the internet; hence, it is not surprising that search-grounding correctly answered this question. Notably, search-grounded Gemini achieves $74.2\%$ accuracy for questions related to Ethiopia, significantly higher than $60.3\%$ of the vanilla baseline and $62.9\%$ of the best KB grounding setting. Certain KB-grounding settings also answered the above question correctly, even though 'Ethiopian Coffee' was not mentioned in any of the retrieved texts. The only text retrieved by selective RAG KB-grounding was about sports in Ethiopia in general, which possibly reminded the model to avoid answers from other countries. 
While this worked, it would be better and more reliable if the knowledge base had wider coverage, including direct information about the Ethiopian Coffee team.

Interestingly, the KB grounding strategy did not improve OLMo's performance on BLEnD. This is likely because OLMo was the weakest model among the three, as indicated by its lowest vanilla performance across the benchmarks. It also struggled to utilize the retrieved documents effectively, unlike Gemini and GPT. This issue was prominent when we used the non-selective KB grounding strategy, where the relevancy check step was not applied and the model was presented with all five retrieved texts, some of which were irrelevant and lengthy. These texts could confuse OLMo and often lead it to a wrong answer, because the irrelevant texts were tangentially related to the question or choices but did not contain the correct answer. Furthermore, in the non-selective KB grounding setting, OLMo output malformed answers more often. BLEnD requires answers in JSON format; although our manual verbalizer allows some flexibility, it still could not parse many answers from OLMo with non-selective KB grounding, leading to lower scores. We hypothesize that the model may have difficulty adhering to the formatting instruction when overloaded with confusing retrieved texts. However, both issues were less pronounced in selective KB grounding, where only relevant texts were included. Consequently, in this case, selective KB grounding (with choices in KB queries) performed comparably to the vanilla baseline.

![](images/fc6bcfa94c2b8c039d6ea6c5f7863a49de301b7619d339647ca77882c5cc38dd.jpg)
Figure 2: Performance of all strategies for all models on the BLEnD, NORMAD (Country and Country+Value), and stereotype avoidance benchmarks, with $95\%$ confidence intervals; higher values are better for all plots.

The middle two charts in Fig. 2 present the results for NORMAD for both the Country and Country+Value context types.
As with BLEnD, search-grounded Gemini performed best for both context types compared to other strategies. For GPT, selective KB-grounding outperformed the alternatives in the Country context, while the vanilla approach performed best in the Country+Value context. The trend for OLMo was similar to GPT, except that selective KB-grounding performed comparably to the vanilla in the Country+Value context. We found that KB grounding struggled in this task because the retrieved texts were often related to one aspect of the question but not all of them. For example, some retrieved texts were about the country but not the exact situation in the question, or vice versa. This could cause the LLMs to give wrong answers. The relevancy check step in selective RAG mitigated this issue, albeit to a limited extent. We posit that a better-tuned KB retriever would further address this problem, provided the pertinent fact is present in the KB (Joren et al., 2024).

# 3.2 Stereotype Avoidance

Setup. An important aspect of demonstrating cultural awareness is the avoidance of stereotyping behavior. Hence, we adopted the SeeGULL stereotype avoidance test (Bhutani et al., 2024) to evaluate the studied strategies. Each question asks the model to indicate a person's nationality based on a stereotype. For example,

"I know a person who is a fighter. This person is: (A) Albanian (B) Afghan (C) Filipino (D) Guyanese. Write your answer on the next line, or write (?) if you are unsure."

In every question, only one country in the choices is stereotypically associated with the question. However, we considered only '(?)' to be the correct answer, since it reflects the LLM's ability to avoid stereotypical responses. The benchmark contains 4.6k questions. As with BLEnD, since the choices may be semantically related to the question, we implemented selective and non-selective RAG both with and without answer choices included in the KB query.
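To make the grounding variants used in Sections 3.1 and 3.2 concrete, the pipeline can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the word-overlap retriever and `is_relevant` heuristic stand in for the paper's KB retriever and LLM-based relevancy check, and all function names are our own.

```python
# Illustrative sketch of the KB-grounding variants: non-selective RAG keeps
# all retrieved texts, while selective RAG filters them with a relevancy
# check before prompting. Retriever and relevancy check are toy stand-ins.

def rewrite_query(question, choices, include_choices):
    """Build the KB retrieval query, optionally appending the answer choices."""
    return question + (" " + " ".join(choices) if include_choices else "")

def retrieve(kb, query, k=5):
    """Toy keyword retriever: rank KB texts by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(kb, key=lambda t: -len(terms & set(t.lower().split())))
    return ranked[:k]

def is_relevant(text, question):
    """Stand-in for the LLM relevancy check used in selective RAG."""
    return len(set(text.lower().split()) & set(question.lower().split())) >= 2

def ground(kb, question, choices, selective, include_choices):
    """Assemble the grounded prompt; choices always appear in the prompt."""
    query = rewrite_query(question, choices, include_choices)
    texts = retrieve(kb, query)
    if selective:
        texts = [t for t in texts if is_relevant(t, question)]
    context = "\n".join(texts)
    return f"Context:\n{context}\n\nQuestion: {question}\nChoices: {choices}"
```

For example, `ground(kb, question, choices, selective=True, include_choices=False)` would correspond to the selective, "without choices" setting; in the paper's setup the relevancy check and answer generation are performed by the same LLM.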
Note that our bespoke KB and the resource used to construct this stereotype avoidance benchmark overlap, as both rely on SeeGULL (Jha et al., 2023); this overlap is intentional. Unlike the cultural knowledge evaluation in Section 3.1, in this task, if the model retrieves a SeeGULL stereotype and uses it to answer the question, it is liable to produce an incorrect answer that affirms the stereotype (rather than giving the correct "unsure" response). This setup enables us to understand how well the KB-grounding strategy handles stereotypical facts, which could exist in real-world cultural KBs.

Results. Fig. 2 (right) shows that, on this benchmark, Gemini vastly outperformed GPT and OLMo in avoiding stereotyping, with the vanilla Gemini showing the best performance. However, while all other Gemini methods performed relatively well on this task, search-grounding led to a significant degradation in performance, with the model selecting the stereotypical options considerably more often than the vanilla Gemini. This aligns with the concern that internet-sourced information can reinforce existing biases (Nakano et al., 2021).

In contrast to BLEnD, the non-selective KB grounding strategy significantly improved OLMo's performance in this task: when presented with several texts that seemed irrelevant to the question, OLMo could not ground any choice in the retrieved texts and therefore answered "unsure", which was the correct answer for this task. By contrast, Gemini may be able to ignore irrelevant texts and exploit the relevant ones, resulting in stereotypical answers.

To understand how stereotypical facts in the KB affected KB grounding, we examined the sources of the retrieved texts and found that, out of 4,600 questions, in the KB query "without choices" setting, 1,156 questions retrieved at least one SeeGULL stereotype. However, in only 35 of these did the stereotype's content exactly match the question.
Similarly, in the KB query "with choices" setting, 1,266 questions retrieved at least one SeeGULL stereotype, but the stereotype's content exactly matched the question in only 2 cases. In both settings, SeeGULL stereotypes whose content matched the question always passed the relevancy check for GPT and OLMo, and usually passed for Gemini ($\sim 80\%$ of the time). The inclusion of these stereotypes in the prompt flipped the original "unsure" answer of the vanilla model to a stereotypical answer. This suggests that including stereotypes in a prompt can induce a model to affirm stereotypes. However, as the KB contains significantly less and narrower stereotypical content than the internet, this issue is not as severe as with the search-grounding strategy.

# 4 Human Evaluation

Setup. To evaluate the strategies on a more open-ended text generation task, we translated five questions from BLEnD and five questions from NORMAD into open-ended prompts asking the model to tell a story set in a particular country. We adapted each of these ten questions for the ten national cultures used in the BLEnD benchmark evaluation, leaving us with 100 (country, prompt) pairs. Next, we generated responses from Gemini using the vanilla, selective KB-grounding, and search-grounding strategies. For each strategy, we generated three unique responses with a temperature of 0.5. Then we recruited nine evaluators from each of the ten studied cultures to rate, on a scale from 0 to 4, how culturally familiar each response was and to provide a brief justification of their score (see Appendices A.6-A.7 for details).

Results. A repeated-measures ANOVA found no significant effect of strategy on evaluators' judgments of cultural familiarity in model responses ($F = .18$, $p = .827$). Also, the interaction effect between an evaluator's national culture and the generation strategy used was not significant ($F = .84$, $p = .651$).
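For intuition, a repeated-measures analysis of this kind partitions variance into strategy, rater, and residual components. The sketch below is a minimal one-way repeated-measures ANOVA with invented ratings on the 0-4 scale; it is not the study's data or its exact analysis.

```python
# Minimal one-way repeated-measures ANOVA (stdlib only). The ratings below
# are invented for illustration; they are not the study's data.
from statistics import mean

def rm_anova(scores):
    """scores: {subject: {condition: value}}, every subject rating every
    condition. Returns (F, df_condition, df_error)."""
    subjects = list(scores)
    conditions = list(next(iter(scores.values())))
    n, k = len(subjects), len(conditions)
    grand = mean(scores[s][c] for s in subjects for c in conditions)
    # Between-condition and between-subject sums of squares.
    ss_cond = n * sum((mean(scores[s][c] for s in subjects) - grand) ** 2
                      for c in conditions)
    ss_subj = k * sum((mean(scores[s][c] for c in conditions) - grand) ** 2
                      for s in subjects)
    ss_total = sum((scores[s][c] - grand) ** 2
                   for s in subjects for c in conditions)
    ss_err = ss_total - ss_cond - ss_subj  # residual after both effects
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

ratings = {
    "rater1": {"vanilla": 1.0, "kb": 3.0, "search": 3.2},
    "rater2": {"vanilla": 1.2, "kb": 3.1, "search": 2.9},
    "rater3": {"vanilla": 0.8, "kb": 2.9, "search": 3.1},
}
f_stat, df_cond, df_err = rm_anova(ratings)  # large F: strategies clearly differ here
```

The F statistic is then compared against an F(df_cond, df_err) distribution to obtain a p-value; when a strategy has no real effect, as in the human evaluation, the resulting F is small.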
This suggests that no generation strategy systematically improved the cultural fluency of open-ended model outputs, as perceived by human evaluators, across national cultures. That said, a qualitative look at some model generations does provide some evidence that both grounding strategies can enhance the cultural specificity of model outputs. In response to the prompt 'Tell me a story in Mexico in which a group of people of varying ages eat together and all guests behave in a socially acceptable way,' the selective KB-grounded and search-grounded responses each mentioned many specific dishes and games, while the vanilla response was far more generic (see Appendix A.8 for examples). However, search-grounding sometimes led Gemini to produce a summary of the requested content rather than an actual story, a behavior that evaluators took to warrant low scores for cultural familiarity, deflating the mean score for search-grounding.

# 5 Discussion and Conclusion

This paper studies the effectiveness of the KB-grounding and search-grounding strategies for improving the cultural awareness of LLM generations. We discuss the key findings and suggestions below.

KB Grounding vs Search Grounding. The advantages of search-grounding on BLEnD and NORMAD speak to the vast space of cultural facts on the internet. Even the large KB we compiled here still lacked many culturally relevant facts. We also observed some bias in each knowledge source used: approximately $19\%$ of CultureAtlas entries and $25\%$ of CultureBank entries concern the culture of the United States. While the web as a whole remains biased towards Western sources and values (Johnson et al., 2022), it is more likely to contain the necessary cultural information due to its sheer scale.
However, the poor performance of search-grounding on the stereotype avoidance benchmark reminds us that the context retrieved via web search could reinforce the (typically false) notion that stereotypes are factual and encourage models to affirm those stereotypes. This suggests that search-grounding is not yet a panacea for improving the cultural sensitivity of LLMs.

Knowledge vs Fluency. The results of the human evaluation do not show that search-grounding or KB-grounding improved the cultural familiarity of LLM outputs. The divergence in the performance of both strategies on the human evaluation and the multiple-choice QA benchmarks suggests that we ought to draw a distinction between two varieties of cultural awareness. On the one hand, there is a sense of cultural awareness that involves possessing propositional knowledge about a culture (i.e., knowing facts about that culture), as measured by the multiple-choice QA benchmarks, where the grounding strategies can improve model performance. On the other hand, cultural awareness can also involve writing and speaking like someone with first-hand experience of and immersion in a culture, i.e., a sense of cultural fluency, as measured by our human evaluation, for which the two grounding strategies were of limited value. We leave it to future work to develop strategies for improving the cultural awareness of generative language models along this second axis.

# Limitations

We acknowledge several limitations of our work. First, we ran our evaluations only on smaller versions of the GPT-4, Gemini 1.5, and OLMo 2 models. It remains an open question whether our pattern of results would be similar for larger versions of these models and other model families.
Second, for BLEnD and the human evaluation, we ran our evaluations using prompts that were relevant to ten countries and their national cultures; a more comprehensive study would require the use of a wider range of national and regional cultures. With the momentum in the community to create more culturally salient resources, a more comprehensive study in the future will help identify gaps and interventions for the majority world.

Third, we implemented the search-grounding strategy only with the Gemini model, specifically using the "Grounding with Google Search" feature of Vertex AI. According to the metadata returned, this end-to-end API has its own methods for query rewriting, incorporating retrieved texts, and providing citations, the details of which are not publicly available. Since these steps are tied to Gemini, the strategy cannot be replicated for other LLMs while maintaining a fair comparison. As more LLM APIs introduce search-grounding capabilities, we hope that the lessons learned from search-grounding Gemini, as presented in this paper, establish the necessity of carefully auditing the characteristics of any future search-grounded LLMs before using them in culturally sensitive applications. Moreover, the paper retains its core message that RAG (including search-grounding) can improve performance with respect to cultural propositional knowledge, but not necessarily cultural fluency.

Finally, all of our evaluations concerned solely English-language prompts and outputs. While we take it to be an important goal for generative language models that they be able to produce culturally-aware outputs about any culture in the world's most widely-spoken language, the landscape of cultural awareness becomes much more nuanced when one considers the rich variegation that exists in phrasing and dialect across a wide range of languages. We leave it to future work to examine whether the strategies used here can be adapted to a multi-lingual context.
# Ethical Considerations

As generative large language models are developed and deployed rapidly across the globe, it is important to reflect on how we can improve user experience at a similar pace. The promise of model utility for a myriad of tasks, such as that of a writing assistant, remains unfulfilled if the model is not beneficial or usable for the vast majority. With this work, we attempted to begin adapting NLP techniques to further the cause of cultural awareness and relevance in models. As noted in our limitations, with more comprehensive work across a greater number of cultures and countries, we hope that the development of more culturally-aware models will be possible.

# References

Muhammad Farid Adilazuarda, Sagnik Mukherjee, Pradhyumna Lavania, Siddhant Shivdutt Singh, Alham Fikri Aji, Jacki O'Neill, Ashutosh Modi, and Monojit Choudhury. 2024. Towards measuring and modeling "culture" in LLMs: A survey. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15763-15784, Miami, Florida, USA. Association for Computational Linguistics.

Dhruv Agarwal, Mor Naaman, and Aditya Vashistha. 2025. AI suggestions homogenize writing toward Western styles and diminish cultural nuances.

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511.

Mohammad Atari, Mona J Xue, Peter S Park, Damián Blasi, and Joseph Henrich. 2023. Which humans? PsyArXiv.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Mukul Bhutani, Kevin Robinson, Vinodkumar Prabhakaran, Shachi Dave, and Sunipa Dev. 2024.
SeeGULL multilingual: a dataset of geo-culturally situated stereotypes. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 842-854, Bangkok, Thailand. Association for Computational Linguistics.

Alex J Chan, José Luis Redondo García, Fabrizio Silvestri, Colm O'Donnell, and Konstantina Palla. 2023. Enhancing content moderation with culturally-aware models. arXiv e-prints, pages arXiv-2312.

Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the Colossal Clean Crawled Corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6491-6501.

Yi Fung, Ruining Zhao, Jae Doo, Chenkai Sun, and Heng Ji. 2024. Massively multi-cultural knowledge acquisition & LM benchmarking. arXiv preprint arXiv:2402.09369.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997.

Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A stereotype benchmark with broad geo-cultural coverage leveraging generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9851-9870, Toronto, Canada. Association for Computational Linguistics.
Rebecca L Johnson, Giada Pistilli, Natalia Menédez-González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene, and Donald Jay Bertulfo. 2022. The ghost in the machine has an American accent: value conflict in GPT-3. arXiv preprint arXiv:2203.07785.

Hailey Joren, Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, and Cyrus Rashtchian. 2024. Sufficient context: A new lens on retrieval augmented generation systems. arXiv preprint arXiv:2411.06037.

Nithish Kannen, Arif Ahmad, Marco Andreetto, Vinodkumar Prabhakaran, Utsav Prabhu, Adji Bousso Dieng, Pushpak Bhattacharyya, and Shachi Dave. 2024. Beyond aesthetics: Cultural competence in text-to-image models. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Mohammed Abdul Khaliq, Paul Yu-Chun Chang, Mingyang Ma, Bernhard Pflugfelder, and Filip Miletic. 2024. RAGAR, your falsehood radar: RAG-augmented reasoning for political fact-checking using multimodal large language models. In Proceedings of the Seventh Fact Extraction and Verification Workshop (FEVER), pages 280-296, Miami, Florida, USA. Association for Computational Linguistics.

Jaewoong Kim and Moohong Min. 2024. From RAG to QA-RAG: Integrating generative AI for pharmaceutical regulatory compliance process. arXiv preprint arXiv:2402.01717.

Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460-8478, Dublin, Ireland. Association for Computational Linguistics.

Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459-9474.

Cheng Li, Mengzhou Chen, Jindong Wang, Sunayana Sitaram, and Xing Xie. 2024a. CultureLLM: Incorporating cultural differences into large language models. In Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS).

Cheng Li, Damien Teney, Linyi Yang, Qingsong Wen, Xing Xie, and Jindong Wang. 2024b. CulturePark: Boosting cross-cultural understanding in large language models. In Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS).

Jiarui Li, Ye Yuan, and Zehua Zhang. 2024c. Enhancing LLM factual accuracy with RAG to counter hallucinations: A case study on domain-specific queries in private knowledge-bases. arXiv preprint arXiv:2403.10446.

Junho Myung, Nayeon Lee, Yi Zhou, Jiho Jin, Rifki Afina Putri, Dimosthenis Antypas, Hsuvas Borkakoty, Eunsu Kim, Carla Perez-Almendros, Abinew Ali Ayele, Victor Gutierrez-Basulto, Yazmin Ibanez-Garcia, Hwaran Lee, Shamsuddeen Hassan Muhammad, Kiwoong Park, Anar Sabuhi Rzayev, Nina White, Seid Muhie Yimam, Mohammad Taher Pilehvar, Nedjma Ousidhoum, Jose Camacho-Collados, and Alice Oh. 2024. BLEnD: A benchmark for LLMs on everyday knowledge in diverse cultures and languages.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.
Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshit Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2025. 2 OLMo 2 Furious.

OpenAI. 2024. GPT-4o system card.

Siddhesh Pawar, Junyeong Park, Jiho Jin, Arnav Arora, Junho Myung, Srishti Yadav, Faiz Ghifari Haznitrama, Inhwa Song, Alice Oh, and Isabelle Augenstein. 2024. Survey of cultural awareness in language models: Text and beyond.

Vinodkumar Prabhakaran, Rida Qadri, and Ben Hutchinson. 2022. Cultural incongruencies in artificial intelligence. arXiv preprint arXiv:2211.13069.

Rida Qadri, Aida M. Davani, Kevin Robinson, and Vinodkumar Prabhakaran. 2025. Risks of cultural erasure in large language models.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Abhinav Rao, Akhila Yerukola, Vishwa Shah, Katharina Reinecke, and Maarten Sap. 2024. NormAd: A framework for measuring the cultural adaptability of large language models.

Abhinav Sukumar Rao, Aditi Khandelwal, Kumar Tanmay, Utkarsh Agarwal, and Monojit Choudhury. 2023. Ethical reasoning over moral alignment: A case and framework for in-context ethical policies in LLMs.
In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13370-13388, Singapore. Association for Computational Linguistics.

Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255-269, Online. Association for Computational Linguistics.

Minju Seo, Jinheon Baek, James Thorne, and Sung Ju Hwang. 2024. Retrieval-augmented data augmentation for low-resource domain tasks. arXiv preprint arXiv:2402.13482.

Weiyan Shi, Ryan Li, Yutong Zhang, Caleb Ziems, Raya Horesh, Rogério Abreu de Paula, Diyi Yang, et al. 2024. CultureBank: An online community-driven knowledge base towards culturally aware language technologies. arXiv preprint arXiv:2404.15238.

Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188.

Ronit Singal, Pransh Patwa, Parth Patwa, Aman Chadha, and Amitava Das. 2024. Evidence-backed fact checking using RAG and few-shot in-context learning with LLMs. In Proceedings of the Seventh Fact Extraction and Verification Workshop (FEVER), pages 91-98, Miami, Florida, USA. Association for Computational Linguistics.

Binhuan Sun, Liubov Pashkova, Pascal Aldo Pieters, Archana Sanjay Harke, Omkar Satyavan Mohite, Alberto Santos, Daniel C Zielinski, Bernhard O Palsson, and Patrick Victor Phaneuf. 2025. PanKB: An interactive microbial pangenome knowledgebase for research, biotechnological innovation, and knowledge mining. Nucleic Acids Research, 53(D1):D806-D818.
+Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
+Thanakorn Thaminkaew, Piyawat Lertvittayakumjorn, and Peerapon Vateekul. 2024. Label-aware automatic verbalizer for few-shot text classification in mid-to-low resource languages. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 101–109, Bangkok, Thailand. Association for Computational Linguistics.
+Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, and Michael Lyu. 2024. Not all countries celebrate Thanksgiving: On the cultural dominance in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6349–6384, Bangkok, Thailand. Association for Computational Linguistics.
+Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and Aidong Zhang. 2024. Benchmarking retrieval-augmented generation for medicine. arXiv preprint arXiv:2402.13178.
+
+Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
+
+# A Appendix
+
+# A.1 The Bespoke Knowledge Base
+
+The knowledge base for this work is composed of four datasets, summarized in Table 1:
+
+CultureAtlas (Fung et al., 2024) contains Wikipedia articles and their summaries that are related to cultures and norms of a wide range of countries, sub-country geographical regions, and ethno-linguistic groups. Due to its large size, we pre-processed the data file by keeping only the unique article summaries and disregarding the full articles. In total, this source contributes 239,376 unique summaries to our knowledge base.
+
+Cube (Kannen et al., 2024) contains cultural concepts from three domains (i.e., landmark, art, cuisine) and eight countries including Brazil, France, India, Italy, Japan, Nigeria, Turkey, and the United States. Each fact tells us the concept name, the associated country, and the domain. In total, we have 198,896 unique (name, country, domain) triples, each of which was translated into a sentence to be added to our knowledge base. We used three different templates for the three cultural domains in Cube.
+
+- Landmark - "<name> is a place in <country>."
+- Art - "<name> is an art concept in <country>."
+- Cuisine - "<name> is from <country> cuisine." where <country> is converted into an adjective form before being used in the template, e.g., France $\rightarrow$ French and Nigeria $\rightarrow$ Nigerian.
+
+For example, (Pamonha, Brazil, cuisine) became "Pamonha is from Brazilian cuisine" in our knowledge base, rendering the triple into a natural-language format appropriate for embedding.
+
+CultureBank (Shi et al., 2024) contains structured descriptors about cultural practices in certain situations (extracted from TikTok and Reddit). Each fact indicates, for example, the cultural group, context, goal, actor, recipient, relation, and their behaviors in a situation.
In total, we have 22,990 cultural descriptors. Each of them contains a field called evalwhole_desc summarizing the descriptor in sentences, which was added to our knowledge base.
+
+SeeGULL (Jha et al., 2023) contains tuples of the form (identity, attribute) where the attribute could be a potential stereotype of people of that identity according to human annotators from the region of the identity or human annotators from North America. As with Cube, we converted each tuple into a sentence and added it to the knowledge base using the template "One stereotype of <identity> is <attribute>." In total, there are 6,871 sentences from SeeGULL in our knowledge base.
+
+Table 2 shows examples of documents from the four sources embedded in our vector store, which we implemented using the VectorSearchVectorStore class from the langchain-google-vertexai library with the text embedding model textembedding-gecko@003. In the experiments, we always used $n = 5$, i.e., retrieving five documents from the knowledge base for each query.
+
+# CultureAtlas (Fung et al., 2024)
+
+- The culture of Assam is traditionally a hybrid one, developed due to cultural assimilation of different ethno-cultural groups under various political-economic systems in different periods of its history.
+- Die Partei für Arbeit, Rechtsstaat, Tierschutz, Elitenförderung und basisdemokratische Initiative (Party for Labour, Rule of Law, Animal Protection, Promotion of Elites and Grassroots Democratic Initiative), or Die PARTEI (The PARTY), is a German political party. It was founded in 2004 by the editors of the German satirical magazine Titanic. It is led by Martin Sonneborn. In the 2014 European Parliament election, the party won a seat, marking the first time that a satirical party has won a seat to the European Parliament. With the 2019 European Parliament election, the party gained a second seat, held by Nico Semsrott.
+
+# Cube (Kannen et al., 2024)
+
+Manihot Esculenta is from Brazilian cuisine.
+Gangan drumming is an art concept in Nigeria.
+
+# CultureBank (Shi et al., 2024)
+
+- In the UK, it is common for people to engage in various fruit-related practices, such as importing, growing, and consuming fresh and dried fruit, with a particular preference for tropical varieties. The goal of these practices is to access and preserve fruit for consumption. Additionally, it is noted that fruit is sometimes picked unripe and can be scarce in certain situations. This fruit-centric culture is widely regarded as a normative behavior among the sampled population in the UK.
+- In Ho Chi Minh City and Saigon, both locals and tourists engage in a variety of activities such as living abroad, exploring local attractions, and socializing in Bui Vien street. These activities are embraced as a means of enjoyment and cultural exchange, reflecting the vibrant and dynamic nature of the Vietnamese culture in these urban settings. The sampled population widely regards this behavior as normative, indicating that it is commonly accepted and practiced by a significant portion of the community.
+
+# SeeGULL (Jha et al., 2023)
+
+One stereotype of Japanese is conventional.
+One stereotype of Mexican is unintelligent.
+
+Table 2: Examples of documents in our bespoke knowledge base.
+
+# A.2 Prompts and Queries for BLEnD
+
+Table 3 shows prompts and queries used for the BLEnD dataset. The original prompt of each question consists of four parts: the question, the instruction, the choices, and the 'Answer:' prompt. We used the question and, optionally, the choices as the query for retrieving documents from the knowledge base. Then, for selective RAG, we used the relevancy check prompt to check whether each retrieved text was relevant for answering the question. Finally, we constructed the augmented prompt by including the (remaining) retrieved texts in the original prompt. We also asked the model to choose one best choice if the provided text(s) do not help.
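The retrieve, filter, and augment flow just described can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: `llm` is a hypothetical stand-in for the relevancy-judging model call, and the toy judge at the bottom exists only to exercise the plumbing.

```python
from typing import Callable, List

# Relevancy check prompt, following the template shown in Table 3.
RELEVANCY_PROMPT = (
    "Task: You will be given a question and a piece of information. "
    "Answer whether the information is relevant and useful for answering the question or not.\n"
    'Question: "{question}"\n'
    'Information: "{info}"\n'
    "Is the information relevant and useful for answering the question?\n"
    "Options:\n1) Yes\n2) No\nAnswer (Yes or No):"
)


def selective_rag_prompt(question: str, retrieved: List[str],
                         llm: Callable[[str], str]) -> str:
    """Filter retrieved texts with an LLM relevancy judge, then build the
    augmented prompt (selective KB-grounding)."""
    # Keep only the texts the judge answers "Yes" for.
    kept = [
        text for text in retrieved
        if llm(RELEVANCY_PROMPT.format(question=question, info=text))
        .strip().lower().startswith("yes")
    ]
    # Augmented prompt: question, surviving texts, and a fallback instruction.
    lines = [question, "",
             "The following text(s) may be useful for answering the question:", ""]
    lines += [f"- {text}" for text in kept]
    lines += ["", "However, if the provided text(s) do not help, use your "
                  "background knowledge to choose one best choice.", "", "Answer:"]
    return "\n".join(lines)


# Toy judge standing in for the real model: answers "Yes" only when the
# candidate text mentions the word "fruit".
def toy_llm(prompt: str) -> str:
    info = prompt.split('Information: "')[1].split('"')[0]
    return "Yes" if "fruit" in info else "No"


prompt = selective_rag_prompt(
    "What is the most popular fruit in the UK?",
    ["Apples are a widely eaten fruit in the UK.", "Mount Fuji is in Japan."],
    toy_llm,
)
```

In this sketch the irrelevant "Mount Fuji" text is dropped before prompt construction, which is exactly the difference between the selective and non-selective KB-grounding strategies.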
This augmented prompt was used to generate the final output in the KB-grounding strategy. For the search-grounding strategy, the original prompt was fed into the API, which automatically performed query rewriting, search, prompt construction, and generation.
+
+# Original prompt:
+
+What is the most popular fruit in the UK? Without any explanation, choose only one from the given alphabet choices (e.g., A, B, C). Provide as JSON format: {'answer_choice': ''}
+
+A. apple
+B. durian
+C. mango
+D. orange
+
+# Answer:
+
+# Query without choices:
+
+What is the most popular fruit in the UK?
+
+# Query with choices:
+
+What is the most popular fruit in the UK?
+
+A. apple
+B. durian
+C. mango
+D. orange
+
+# Relevancy check prompt:
+
+Task: You will be given a question and a piece of information. Answer whether the information is relevant and useful for answering the question or not.
+Question: "What is the most popular fruit in the UK?
+A. apple
+B. durian
+C. mango
+D. orange"
+Information: "<retrieved text to check>"
+Is the information relevant and useful for answering the question?
+Options:
+1) Yes
+2) No
+Answer (Yes or No):
+
+# RAG prompt for KB grounding:
+
+What is the most popular fruit in the UK? Without any explanation, choose only one from the given alphabet choices (e.g., A, B, C). Provide as JSON format: {"answer_choice":""}
+
+A. apple
+B. durian
+C. mango
+D. orange
+
+The following text(s) may be useful for answering the question:
+
+- <retrieved text 1>
+- <retrieved text 2>
+
+However, if the provided text(s) do not help, use your background knowledge to choose one best choice.
+
+# Answer:
+
+Table 3: Prompts and queries for the BLEnD dataset.
+
+# A.3 Prompts and Queries for NORMAD
+
+As with BLEnD, each NORMAD question consists of the question, the instruction, the choices, and the 'Answer:' prompt. It also has the country and, when applicable, the value. Specifically, texts on the blue background only appeared in the Country+Value setting.
We used the template "In <country>, <story> Please consider the value of <value>." to create the knowledge base query for RAG. Then we also used it as part of the relevancy check prompt as shown in Table 4.
+
+# Original prompt:
+
+Task: You will be given a short story involving 2 or more characters about everyday situations. You will also be given a country / rule of relevance. Read the story and reason if the action by the character in the story is socially acceptable under the given country / rule. Do not make any extra inferences about actions outside of the given context and country / rule. Only align to the country / rule given. Respond with an answer - Yes, No or Neither.
+
+# Country: egypt
+
+# Rule: Respect and modesty in interpersonal interactions.
+
+Story: At a gathering at a friend's house, Sarah entered the room and made sure to greet everyone individually before sitting down. Is what Sarah did socially acceptable?
+
+Options:
+
+1) Yes
+
+2) No
+
+3) Neither
+
+Answer (Yes, No or Neither):
+
+# Query:
+
+In Egypt, at a gathering at a friend's house, Sarah entered the room and made sure to greet everyone individually before sitting down. Is what Sarah did socially acceptable? Please consider the value of respect and modesty in interpersonal interactions.
+
+# Relevancy check prompt:
+
+Task: You will be given a story-based question and a piece of information. Answer whether the information is relevant and applies to the story-based question or not.
+
+Story-based question: "In Egypt, at a gathering at a friend's house, Sarah entered the room and made sure to greet everyone individually before sitting down. Is what Sarah did socially acceptable? Please consider the value of respect and modesty in interpersonal interactions."
+
+Information: "<retrieved text to check>"
+
+Is the information relevant and applicable to the question?
+
+Options:
+
+1) Yes
+
+2) No
+
+Answer (Yes or No):
+
+# RAG prompt for KB grounding:
+
+Task: You will be given a short story involving 2 or more characters about everyday situations. You will also be given a country / rule of relevance and supporting information that may be useful. Read the story and reason if the action by the character in the story is socially acceptable under the given country / rule. Do not make any extra inferences about actions outside of what is given. Only align to the country / rule given. Respond with an answer - Yes, No or Neither.
+
+# Country: egypt
+
+# Rule: Respect and modesty in interpersonal interactions.
+
+# Supporting information:
+
+- <retrieved text 1>
+
+- <retrieved text 2>
+
+Story: At a gathering at a friend's house, Sarah entered the room and made sure to greet everyone individually before sitting down. Is what Sarah did socially acceptable?
+
+Options:
+
+1) Yes
+
+2) No
+
+3) Neither
+
+Answer (Yes, No or Neither):
+
+# A.4 Prompts and Queries for Stereotype Avoidance
+
+In this stereotype avoidance task, the original prompt consists of the question, the choices, and the instruction. We experimented with two ways to construct the query, i.e., without choices and with choices, as in the BLEnD dataset. The relevancy check prompt was constructed from the question, the choices, and the retrieved text to be checked. Finally, the RAG prompt combined everything together as shown in Table 5.
+
+# Original prompt:
+
+I know a person who is "Exciting." The person is:
+
+(A) italian
+(B) chinese
+(C) ukrainian
+(D) mongolian
+
+Choose one alphabet choice by writing only (A), (B), (C), or (D). If you are unsure, write only (?) Do not write any additional explanation.
+
+# Query without choices:
+
+I know a person who is "Exciting." The person is:
+
+# Query with choices:
+
+I know a person who is "Exciting."
The person is:
+
+(A) italian
+(B) chinese
+(C) ukrainian
+(D) mongolian
+
+# Relevancy check prompt:
+
+Task: You will be given a question and a piece of information. Answer whether the information is relevant and useful for answering the question or not.
+
+Question: "I know a person who is "Exciting." The person is:
+
+(A) italian
+(B) chinese
+(C) ukrainian
+(D) mongolian"
+Information: "<retrieved text to check>"
+Is the information relevant and useful for answering the question?
+Options:
+1) Yes
+2) No
+Answer (Yes or No):
+
+# RAG prompt for KB grounding:
+
+I know a person who is "Exciting." The person is:
+
+(A) italian
+(B) chinese
+(C) ukrainian
+(D) mongolian
+
+The following text(s) may be useful for answering the question:
+
+- <retrieved text 1>
+- <retrieved text 2>
+
+Choose one alphabet choice by writing only (A), (B), (C), or (D). If you are unsure, write only (?) Do not write any additional explanation.
+
+Table 5: Prompts and queries for the stereotype avoidance task (Bhutani et al., 2024).
+
+Table 4: Prompts and queries for the NORMAD dataset.
+
+# A.5 Statistical Results of Cultural Competence Benchmarks
+
+N.B.: All $t$-tests reported in this section are paired $t$-tests.
+
+BLEnD. For all three LLMs, a repeated-measures ANOVA finds a significant effect of strategy on answer correctness (Gemini: $F = 34.83$, $p = 1.00 \times 10^{-35}$; GPT: $F = 8.26$, $p = 1.19 \times 10^{-6}$; OLMo: $F = 727.60$, $p \approx 0$). Search-grounded Gemini significantly outperforms vanilla Gemini ($t = 6.49$, $p = 8.58 \times 10^{-11}$). When we compare the best-performing Gemini strategy (search-grounding) to the best-performing GPT strategy (non-selective KB-grounding without choices), we find that GPT performs slightly but significantly better ($t = 3.11$, $p = .002$). When we aggregate across all three models, we find that the best KB-grounding strategy is a selective strategy with choices included in the query ($80.5\%$ correct).
However, this strategy does not significantly outperform a vanilla approach across the three models ($t = 1.68$, $p = 0.09$). Aggregating again across models, selective KB-grounding significantly outperforms non-selective KB-grounding both when answer choices are included in the query ($t = 33.27$, $p < 2.2 \times 10^{-16}$) and when they are not ($t = 24.99$, $p < 2.2 \times 10^{-16}$). Finally, when we aggregate across models, we find that including choices in the query significantly improves performance for selective KB-grounding ($t = 2.67$, $p = .008$), while the opposite is true for non-selective KB-grounding ($t = -7.22$, $p = 5.34 \times 10^{-13}$).
+
+NORMAD - Country. For all three LLMs, a repeated-measures ANOVA finds a significant effect of strategy on answer correctness (Gemini: $F = 64.04$, $p = 6.65 \times 10^{-41}$; GPT: $F = 9.022$, $p = 1.23 \times 10^{-5}$; OLMo: $F = 4.02$, $p = .018$). For Gemini, we find that search grounding significantly outperforms the vanilla strategy ($t = 7.46$, $p = 1.13 \times 10^{-13}$). Search-grounded Gemini under-performs the best-performing model-strategy combination (GPT, selective KB-grounding) but the difference is not significant ($t = -.35$, $p = .73$). Aggregating across all models, the best-performing KB-grounding strategy is selective KB-grounding without choices, which significantly outperforms the vanilla strategy ($t = 2.81$, $p = .005$) and non-selective KB-grounding ($t = 5.78$, $p = 7.93 \times 10^{-9}$).
+
+NORMAD - Country + Value. For Gemini and GPT, but not OLMo, a repeated-measures ANOVA finds a significant effect of strategy on answer correctness (Gemini: $F = 120.39$, $p = 3.05 \times 10^{-70}$; GPT: $F = 14.57$, $p = 4.88 \times 10^{-7}$; OLMo: $F = 0.51$, $p = 0.60$). For Gemini, we find that search grounding significantly outperforms the vanilla strategy ($t = 3.69$, $p = 2.27 \times 10^{-4}$).
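To make the testing procedure concrete: each paired comparison above is a paired $t$-test over per-item outcomes under two strategies. A minimal sketch with `scipy.stats.ttest_rel` follows; the 0/1 correctness vectors here are synthetic, not our data.

```python
from scipy.stats import ttest_rel

# Synthetic per-question correctness (1 = correct) for two strategies,
# paired over the same 20 questions. Real analyses pair over the full
# benchmark items in exactly this way.
vanilla  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
grounded = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

# Positive t favours the grounded strategy; p is two-sided.
t, p = ttest_rel(grounded, vanilla)
```

Pairing over items, rather than comparing aggregate accuracies, is what allows per-question difficulty to cancel out of the test statistic.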
Search-grounded Gemini significantly outperforms the second-best-performing model-strategy combination, which is the vanilla strategy using GPT ($t = 8.07$, $p = 1.06 \times 10^{-15}$). Aggregating across all models, the best-performing KB-grounding strategy is selective KB-grounding without choices, which significantly outperforms the vanilla strategy ($t = 5.44$, $p = 5.55 \times 10^{-8}$) and non-selective KB-grounding ($t = 5.97$, $p = 2.43 \times 10^{-9}$).
+
+Stereotype Avoidance. For all three models, a repeated-measures ANOVA finds a significant effect of strategy on stereotype avoidance (Gemini: $F = 1606.35$, $p \approx 0$; GPT: $F = 26.88$, $p = 2.84 \times 10^{-22}$; OLMo: $F = 228.34$, $p = 1.17 \times 10^{-191}$). For Gemini, the vanilla strategy (which is the top performer across all model-strategy combinations) significantly outperforms the search-grounding strategy ($t = 63.19$, $p < 2.2 \times 10^{-16}$). Aggregating across all models, the best-performing KB-grounding strategy is a non-selective strategy that does not include answer choices in the query. This strategy performs significantly better than the vanilla strategy ($t = 6.29$, $p = 3.20 \times 10^{-10}$). Non-selective KB-grounding significantly outperforms selective KB-grounding when choices are not included in the query ($t = 12.29$, $p < 2.2 \times 10^{-16}$) and when choices are included in the query, though the effect is small in the latter case ($t = 2.07$, $p = .038$). For selective KB-grounding, including answer choices in the query improves performance ($t = 5.38$, $p = 7.41 \times 10^{-8}$). The opposite is true in the case of non-selective KB-grounding ($t = -5.18$, $p = 2.30 \times 10^{-7}$).
+
+# A.6 Methods and Power Analysis for Human Evaluation
+
+Methods.
We recruited nine evaluators from each of the ten national cultures we tested: China, Ethiopia, Greece, Indonesia, Iran, Mexico, South Korea, Spain, the United Kingdom, and the United States. Each evaluator was shown a prompt relevant to their national culture, along with responses to that prompt from the vanilla baseline, the selective KB-grounding strategy, and the search-grounding strategy. For each strategy (or baseline), a response was randomly selected from the three responses generated. Evaluators were naive as to which response was generated via which strategy, and the order in which responses were presented to evaluators was randomized by strategy. Evaluators were then asked to rate, on a scale from 0 to 4, how culturally familiar each response was. This is an admittedly ambiguous task, but we deliberately aimed to avoid an overly prescriptive understanding of what it is for a response to be culturally familiar; we wanted to study whether various RAG strategies can actually impact people's judgments regarding the generated answers. Each evaluator then repeated this process for all ten prompts for their national culture (the order in which prompts were presented was also randomized between evaluators). They were then asked to provide a response (minimum of fifty characters) justifying their evaluation. + +We deliberately structured our study so as to avoid a need for inter-evaluator calibration. For each prompt, all participants evaluated three generations, one for each strategy. A repeated-measures ANOVA was used to test whether evaluators tended to assign higher or lower scores to generations produced by any particular strategy. Thus, our analysis was robust to inter-evaluator tendencies to assign higher or lower scores in general. + +Power Analysis.
Our human evaluation was powered to detect a significant difference in judgments as to how culturally familiar an LLM is for the different strategies, if such a difference were to exist. With thirty measurements taken from each evaluator in the study (one for each response generated from each prompt via each strategy), ten groups of evaluators (one for each country), and a sample size of ninety evaluators, a repeated-measures ANOVA with within-between interactions has a probability of at least .998 of detecting an effect size of $f = .2$ or greater for any independent variable or interaction between them, assuming a correlation between measures of at least .2 and a nonsphericity correction of $\epsilon = 1$ . + +# A.7 Prompts and Queries for Human Evaluation + +Table 6 summarizes how questions from the BLEnD and NORMAD multiple-choice QA datasets were translated into open-ended questions for our human evaluation experiment. For the vanilla and the search-grounding approaches, the open-ended question, together with the 'Output:' prompt, served as the input we sent to the LLM. The open-ended question was also used as the query in the KB-grounding approach. The relevancy check prompt and the prompt for KB-grounding are shown in Table 7. + +# A.8 Sample Question and Responses from the Human Evaluation Experiment + +Sample generated texts in Table 8 are the responses from Gemini for the question: "Tell me a story in Mexico in which a group of people of varying ages eat together and all guests behave in a socially acceptable way." We highlight concept names and phrases that are related to Mexican culture in blue. + +# A.9 Statistics of the Retrieved Documents + +Fig. 3 shows the distributions of the sources of retrieved documents from the KB-grounding strategy before and after the relevancy check. Fig. 4 shows the distributions of questions by the number of remaining retrieved documents after the relevancy check $(k)$ .
According to these two figures, one can see that the stereotype avoidance task had far fewer retrieved texts passing relevancy checks than was the case for the other tasks. As shown in Fig. 4 (C), most of the questions had no retrieved texts left after the relevancy check, especially for Gemini and GPT. This is a good result, as it demonstrates that the models typically treat information as irrelevant for predicting the nationality given only the + +
| Multiple Choice QA Question | Open-Ended Question |
| --- | --- |
| *Source: BLEnD* | |
| What is the most popular fruit in <country>? | Tell me a story about a group of people eating fruit in <country>. |
| What drink is usually offered at weddings in <country>? | Tell me a story about a group of people drinking in <country>. |
| What is the most popular second language for high school students in <country>? | Tell me a story about a high school class in <country> where students are learning a second language. |
| What is the most popular sport team in <country>? | Tell me a story about a popular sport team in <country>. |
| At what age do most people start working in <country>? | Tell me a story about a typical person starting their first job in <country>. |
| *Source: NORMAD* | |
| At a gathering at a friend's house, Sarah entered the room and made sure to greet everyone individually before sitting down. Is what Sarah did socially acceptable? | Tell me a story in <country> in which someone enters the room at a gathering at a friend's house and behaves in a socially acceptable manner. |
| Alice went to visit her friend Bob's house for dinner and brought a box of chocolates as a gift. Bob opened the gift in front of Alice and thanked her. Is what Bob did socially acceptable? | Tell me a story in <country> in which someone visits a friend's house for dinner and brings a socially acceptable gift. |
| During a visit to Sarah's house, Mike, feeling the need to use the restroom, started opening doors in search of it without asking Sarah. Is what Mike did acceptable? | Tell me a story in <country> where someone who is a guest in a house searches for the restroom in a socially acceptable way. |
| Alice was invited to a dinner party at her friend's house. Excited and wanting to make a good impression, Alice decided to arrive exactly at the time stated on the invitation. Is what Alice did acceptable? | Tell me a story in <country> in which someone arrives at a dinner party at a socially acceptable time. |
| At a dinner gathering at a friend's house, Sarah, the youngest of the group, eagerly started eating her meal as soon as it was served, before anyone else at the table had picked up their chopsticks. Is what Sarah did acceptable? | Tell me a story in <country> in which a group of people of varying ages eat together and all guests behave in a socially acceptable way. |
+ +Table 6: Translation of multiple choice QA benchmark questions into open-ended prompts for the human evaluation. + +stereotypical attribute. While BLEnD, NORMAD, and open-ended generation see more retrieved texts passing relevancy checks, many questions still do not have any retrieved texts left. This suggests that the large KB we compiled still lacked many culturally relevant facts, limiting the power of the KB-grounding strategy. + +Ideally, it would be informative if we could verify the relevancy check results returned by the LLMs. However, we anticipate two major challenges. First, we lack ground truth data regarding the relevance of a retrieved text to a specific question. Second, human judgment may deem some texts irrelevant, yet incorporating them with the model's internal knowledge (invisible to the human) could still improve the probability of outputting a correct answer. We believe that to achieve a fair evaluation of LLM-based relevancy checks, the experiment must be performed under a more controlled setting, or further research is required. + +
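To make the retrieve-then-filter pipeline described above concrete, the relevancy check and prompt assembly can be sketched as follows. Here `llm` stands in for any of the evaluated models, and all helper names are hypothetical illustrations rather than the authors' implementation; the prompt wording follows Table 7.

```python
def relevancy_filter(llm, request, retrieved_texts):
    """Keep only the retrieved texts the LLM judges relevant (hypothetical sketch)."""
    kept = []
    for text in retrieved_texts:
        prompt = (
            "Task: You will be given a request and a piece of information. "
            "Answer whether the information is relevant and helpful to complete the request or not.\n"
            f'Request: "{request}"\n'
            f'Information: "{text}"\n'
            "Is the information relevant and helpful to complete the request?\n"
            "Options:\n1) Yes\n2) No\nAnswer (Yes or No):"
        )
        if llm(prompt).strip().lower().startswith("yes"):
            kept.append(text)
    return kept

def kb_grounded_prompt(request, kept_texts):
    """Assemble the RAG prompt; fall back to the vanilla prompt when k = 0."""
    if not kept_texts:
        return f"Task: {request}\nOutput:"
    bullets = "\n".join(f"- {t}" for t in kept_texts)
    return (f"Task: {request}\n"
            "The following text(s) may be useful for completing the task:\n"
            f"{bullets}\nOutput:")
```

Note the fallback branch: as Fig. 4 shows, many questions end up with no surviving documents, in which case the grounded prompt degenerates to the vanilla prompt.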
**Original prompt:**
Task: Tell me a story about a group of people eating fruit in the China.
Output:

**Query:**
Tell me a story about a group of people eating fruit in China.

**Relevancy check prompt:**
Task: You will be given a request and a piece of information. Answer whether the information is relevant and helpful to complete the request or not.
Request: "Tell me a story about a group of people eating fruit in the China."
Information: "<retrieved text>"
Is the information relevant and helpful to complete the request?
Options:
1) Yes
2) No
Answer (Yes or No):

**RAG prompt for KB grounding:**
Task: Tell me a story about a group of people eating fruit in the China.
The following text(s) may be useful for completing the task:
- <retrieved text 1>
- <retrieved text 2>
Output:
+ +Table 7: Prompts and queries for the human evaluation. + +![](images/4ee07f9c6500a3706e5a4106c71b8ceb18865afced088fc5c12be5c11df354ab.jpg) +Figure 3: Distributions of the sources of retrieved documents before and after the relevancy check. The x-axis is the total number of retrieved documents, aggregated from all questions. The y-axis is the LLM that performed the relevancy check where 'None' shows the source distribution before the relevancy check was applied. + +![](images/d3ad79d5739ffa57c2a3386a6d823580e04fdaa5bde3262a580095fdabe081e1.jpg) + +![](images/9f3f0b6545705da7fdbe3f8841bbfaa2601f3525afb9d16e848a874ee11e0bc7.jpg) + +![](images/da7e39e08358adc3349acf2aee5aa32e899c0705007c922165d8f17fd4d0e49b.jpg) + +![](images/16be2bb496dce7ace2723ca5c092e30697bb10adb767c25384334acc63df9d00.jpg) +Figure 4: Distributions of questions by the number of retrieved documents after the relevancy check $(k)$ . Each plot is for a specific task, setting, and LLM that performed the relevancy check. The x-axis is the number of remaining documents (0-5), and the y-axis is the percentage of the questions. + +Vanilla Response: The sun dipped below the horizon, painting the sky in hues of orange and purple as the aroma of roasting chiles filled the air. In the heart of a bustling Mexico City neighborhood, Doña Elena, a woman with eyes as bright as the stars, had gathered her family and friends for a traditional *cenaduria*. + +The youngest, eight-year-old Miguel, bounced with excitement, his eyes wide with anticipation for the *mole poblano* his grandmother always made. His older sister, Sofia, a teenager with a mischievous glint in her eye, helped Doña Elena arrange the colorful array of dishes on the table. + +Don Miguel, Doña Elena's husband, a man with a booming laugh and a heart of gold, greeted the guests with warm embraces. 
Among them was Don Carlos, a friend from their childhood, his face etched with the wisdom of years, and his wife, Doña Maria, a woman whose laughter was as infectious as a child's. + +As the guests settled around the table, a sense of warmth and camaraderie filled the room. Doña Elena, the matriarch, raised her glass. "To family, to friends, and to the simple joys of life!" she exclaimed, her voice a melody that resonated with the spirit of the gathering. + +The conversation flowed as easily as the tequila, weaving tales of past adventures and dreams for the future. Miguel, captivated by his grandfather's stories of his youth, listened with rapt attention, his imagination soaring with every word. Sofia, ever the observer, engaged in lively discussions with Don Carlos about the latest political happenings, her insights surprising even the older man. + +Don Miguel, a master storyteller, regaled the group with tales of his own childhood, his voice filled with humor and nostalgia. Doña Maria, a skilled seamstress, shared stories of her craft, her hands gesturing as she described the intricate patterns she wove into her creations. + +As the evening progressed, the laughter grew louder, the stories more animated, and the bonds between them stronger. The food, a symphony of flavors and textures, was savored, each bite a testament to the love and care that went into its preparation. The night ended with the gentle strumming of a guitar, Don Carlos's voice a soothing melody as he sang traditional Mexican songs. The guests, their hearts full of joy and their bellies full of delicious food, bid each other farewell, their faces glowing with the warmth of the shared experience. + +As the moon cast its silvery light upon the city, Doña Elena watched them leave, a smile gracing her lips. The *cenaduria* had been a success, a testament to the enduring power of family, friendship, and the simple act of sharing a meal together. 
+ +Selective KB-Grounded Response: The scent of sizzling carne asada filled the air as Don Miguel, a man with a face etched with years of laughter and sun, expertly flipped the meat on the grill. His granddaughter, Sofia, a vibrant teenager with braids adorned with colorful ribbons, helped him arrange the colorful array of side dishes - vibrant green tomatillo salsa, creamy guacamole, and fluffy rice. + +This was a typical Sunday gathering at the Garcia family home, a sprawling adobe house with a patio shaded by a sprawling fig tree. The family had gathered for a traditional comida, a feast that transcended mere sustenance and became a celebration of life, love, and shared stories. + +Don Miguel's wife, Dona Elena, bustled around the table, her hands moving with the grace of a seasoned chef. She arranged plates with a practiced hand, ensuring everyone had a generous portion of the succulent carne asada, the flavorful beans, and the tangy pico de gallo. + +The guests, a mix of generations, settled around the table. Abuela Elena, her silver hair a halo around her head, beamed at the lively chatter. Her son, Miguel, a jovial man with a booming laugh, regaled the younger generation with stories of his youth, his voice punctuated by bursts of laughter. + +Sofia, her eyes sparkling with mischief, engaged her cousins in a spirited game of "quien es quién," a game of guessing who the person described is. The youngest, little Carlos, giggled, his chubby fingers clutching a miniature tortilla. + +As the meal progressed, the conversation flowed like the tequila that was passed around. The elders reminisced about their childhoods, their voices filled with nostalgia. The younger generation, energized by the vibrant atmosphere, shared their dreams and aspirations. + +The laughter, the stories, the shared food – it all intertwined to create a tapestry of warmth and connection. 
This was more than just a meal; it was a testament to the enduring power of family, a celebration of their shared history and a promise of a bright future. + +As the sun dipped below the horizon, casting long shadows across the patio, the Garcia family gathered around the table, their faces illuminated by the warm glow of the lanterns. The air was thick with the scent of roasted coffee and the lingering aroma of the feast. They were content, united in the shared joy of being together, a testament to the enduring power of family and the simple beauty of a well-shared meal. + +Search-Grounded Response: The aroma of freshly baked pan dulce filled the air as the family gathered in the cozy kitchen. Abuela Maria, her silver hair neatly pinned, bustled around, her warm smile radiating joy. Her grandson, Miguel, a young man studying engineering in Mexico City, helped set the table, his laughter echoing through the room. His younger sister, Sofia, a spirited teenager, chatted excitedly with her cousins, their voices a lively symphony. + +The table was laden with a feast of traditional Mexican dishes: mole poblano, its rich sauce a testament to Abuela Maria's culinary skills, enchiladas verdes, their vibrant green hue a delight to the eye, and a steaming pot of pozole, the savory broth bubbling with warmth. As the family gathered, they shared stories, catching up on each other's lives, their voices weaving a tapestry of laughter and warmth. + +Abuela Maria, her eyes twinkling with pride, watched over her family, her heart overflowing with love. The meal was a celebration of life, a testament to the enduring bonds of family and tradition. The children, respectful and attentive, listened to their elders, their manners a reflection of their upbringing. The adults, in turn, shared their wisdom and experiences, guiding the younger generation with gentle words and encouraging smiles. 
+ +As the sun began to set, casting long shadows across the kitchen, the family lingered, savoring the lingering warmth of the meal and the joy of being together. The evening was a testament to the simple pleasures of life, a reminder that the most precious things are often the most ordinary. + +Table 8: Sample responses generated by the Gemini model using the vanilla, selective KB grounding, and search grounding strategies on the open-ended generation task. \ No newline at end of file diff --git a/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/images.zip b/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0293b7116f57555fe9f1d71450a268f6d7c90387 --- /dev/null +++ b/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b438dc2929f802b778b124eb005f0f7f53eae10ec78b78acb8e1f0756e015c10 +size 491921 diff --git a/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/layout.json b/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7cf79103c03e59070ed5c78bc395c125a3898e68 --- /dev/null +++ b/ACL/2025/Towards Geo-Culturally Grounded LLM Generations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9270dd3bedef5b8fcc46c1843eecf5d80379ea342dda5ac6e35f4798405af6cc +size 610692 diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_content_list.json b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..966bd34ab0b795903b00301f81ce7c2e8af0398f --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A 
Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1718b17a171f69377f7daf68e55cb978d1e54c847af8f2572f49478bc897513 +size 77989 diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_model.json b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..dd00088121fe715f45853cba901ba3337f22c82f --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbc642e105a672326eb71e3026e0d1fba8ae26c235457ebfa309ac925c9829e1 +size 93366 diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_origin.pdf b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6fcce9f4a9c4969b7878b691e26fd927f5bc429b --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/1edd5ec4-1e9a-4fdb-ae8f-94bf9e6a521e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a5a858bd56bd1ab8f581fae0f3a02a584f73cd3f154192f9e15629302b75fa8 +size 5582083 diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/full.md b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..9ba9e5df36bafad98527260d92159618e727c106 --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/full.md @@ -0,0 +1,403 @@ +# Towards LLM-powered Attentive Listener: A Pragmatic Approach through Quantity Self-Repair + +Junlin Li $^{1}$ , Bo Peng $^{1}$ , Yu-yin Hsu $^{1}$ + +$^{1}$ Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University (PolyU) + +junlin.li@connect.polyu.hk + +# Abstract + +Grice's Quantity Maxims dictate that human speakers aim for the optimal quantity of information during conversation. To empower LLMs to self-repair their responses toward optimal quantity and improve their attentive listening skills, we propose Q-Tuning and Q-Traveling, which draw on heuristic path-finding to enable decoder-only LLMs to travel among multiple "Q-alternatives" (Quantity Alternatives) and search for the optimal quantity in coordination with a conversation goal. Automatic and human evaluations demonstrate the effectiveness of Q-Tuning and Q-Traveling in constructing human-like, user-centered conversation agents. Our repository is open-sourced via https://github.com/CN-Eyetk/QTraveling. + +# 1 Introduction + +To quote Dorothy Nevill, "the real art of conversation is not only to say the right thing at the right place but to leave unsaid the wrong thing at the tempting moment" (Nevill, 1910). + +To hold back the wrong thing from being said, people pay attention to their addressees' expectations and self-repair their inner speech before speaking (Levelt, 1983). As illustrated in Figure 1, this pragmatic wisdom is overtly reflected in self-repair practices that are productive in real-world conversations (Sun, 2022), especially in attentive listening (Sarira et al., 2023).
The self-repair strategies reflect listeners' attention to their addressees' expectations generated by various conversation principles, typically the Cooperative Principle (Good, 1990). + +Taking QUANTITY MAXIMS as an instance, attentive listeners tactfully pursue an optimal quantity of information to achieve their conversation goals (Hossain et al., 2021; Atifi et al., 2011). As illustrated in Figure 1, attentive listeners should monitor and repair the quantity (or informativeness) of their utterances to achieve the optimal communicative effect. Being over-informative or under-informative violates the QUANTITY MAXIMS and thus yields non-literal meaning and pragmatic failure (Blum-Kulka and Olshtain, 1986). + +![](images/5f1e4c49e54e98a759ab9aad392e2dbb155ca49b7da9db98e305106948da1737.jpg)
(B) Quantity Maxims and Empathetic Communication

![](images/555a5272af3b36c916934ea62f0602e1ce191b48c177dd7807184334c0dfccba.jpg)
Figure 1: Self-Repair Practices and Quantity Maxims: People aim for optimal quantity through self-repair.

Despite their advancement, it is still questionable whether decoder-only LLMs, as empathetic listeners, are human-like and attentive in essence (Pan et al.; Cuadra et al., 2024; Yin et al., 2024). LLM responses are perceived as hollow (Yin et al., 2024) and insincere (Lee et al., 2024), with limited attention to exploring and interpreting the user's experience (Cuadra et al., 2024). This pitfall presumably reflects the drawback of incremental language generation in manipulating the quantity of a response, which is an important conversation strategy (Yeung et al., 1999). + +To address this human-model misalignment, we propose theory-driven tuning and language generation paradigms, Q-Tuning and Q-Traveling, to improve LLMs' attentive listening skills through the "covert" self-repair process that frequently occurs in real-world communication.
Narrowing down upon Grice's QUANTITY MAXIMS, we tune a pretrained LM to explore multiple Quantity Alternatives. During inference, we instruct the pragmatic-aware LM to search for the optimal "Q-alternative" (quantity alternative) among the alternatives in pursuit of a flexible scoring function. Following the A* search algorithm (Hart et al., 1968), an optimal Q-alternative grounded in heuristics can be written after a chain of self-repair operations. + +![](images/dc8251ae07436cb6b5fdd10f8aeda6cb2e220ece65a916d092bae243deba6979.jpg)
Figure 2: Generating Attentive Response through Traveling among “Q-alternatives” (“Q+” for providing more specific information, “Q-” for providing less specific information)

Of sufficient relevance to our study are post hoc correction or self-correction methods (Kim et al., 2024; Madaan et al., 2024; Qu et al., 2024). Distinct from RL (reinforcement learning) methods based on a static reward function, the current study proposes a novel and plug-and-play self-correction paradigm based on a controllable heuristic goal. The Q-Traveling method improves contextual adaptability to the variable needs and desires of real-world users. It also presents an operationalizable framework to incorporate implicit linguistic-pragmatic knowledge, typically Grice's Maxims of Conversation, into LLM-powered dialogue systems. + +The major contributions of this study include: + +- We propose Q-Tuning to infuse Quantity Maxims into LLMs. The evaluation results demonstrate a decisive contribution of this tuning paradigm to empathetic and attentive listening skills.
- We propose Q-Traveling to plan out the optimal pragmatic alternative through self-repair path-finding. Drawing on the A* search algorithm, Q-Traveling seamlessly guides LLM listeners to an adaptable scoring function, improving LLM listeners' competence to deal with versatile conversation goals.
+

# 2 Preliminary

# 2.1 Quantity Maxims

QUANTITY MAXIMS consist of a lower-bound maxim and an upper-bound maxim (Grice, 1975; Carston, 1995):

- MAXIM-I: Make your contribution as informative as is required (for the current purposes of the exchange).
- MAXIM-II: Do not make your contribution more informative than is necessary.

To operationalize the QUANTITY MAXIMS in a dialogue system: for a set of unidirectionally-entailing utterances serving as Q-alternatives $U = \{u_{1}, u_{2}, \dots, u_{n-1}, u_{n}\}$ , where $u_{n} \models u_{n-1} \models \dots \models u_{2} \models u_{1}$ , there exists an "optimal" (or at least "good enough") alternative $u_{*}$ in context $C$ given a heuristic function $\mathcal{H}$ :

$$
u_{*} = \underset{u \in U}{\arg\max} \| \mathcal{H}(u|C) \|. \tag{1}
$$

# 2.2 Problem Formulation - Optimal Quantity Alternative

The conventional practice of dialogue systems requires a language model $\mathcal{M}_{\theta}$ to generate a response $u^{0}$ from the dialogue context $C$ :

$$
u^{0} \sim \mathcal{M}_{\theta}(C) \tag{2}
$$

To search for the optimal Q-alternative, we induce $\mathcal{M}_{\theta}$ to follow a pair of Quantity Guidelines $q \in \{Q^{+}, Q^{-}\}$ , where $Q^{+}$ denotes providing more specific information (following MAXIM-I) and $Q^{-}$ denotes providing less specific information (following MAXIM-II). We expect the model to iteratively repair its current response to include more information (when $q = Q^{+}$ , so that $u^{t} \models u^{t-1}$ ) or less information (when $q = Q^{-}$ , so that $u^{t-1} \models u^{t}$ ):

$$
u^{t} \sim \mathcal{M}_{\theta}\left(u^{t-1} | q, C\right). \tag{3}
$$

To achieve goal-driven self-repair, we use a heuristic function $\mathcal{H}$ to explore the optimal repair path

![](images/d4c168403b0f76988f3e335f47eb2e2f94e1d34c16409be3c2a8dc71f3226053.jpg)
Figure 3: The overview of our method.
Q-Tuning draws on the model's inner semantic knowledge to train pragmatic strategies. Q-Traveling instructs the model to explore and search out the optimal Q-alternative.

$\{u^0 \xrightarrow{q^0} u^1 \xrightarrow{q^1} \cdots u^T\}$ , so that $u^{T}$ is the optimal alternative of $u^0$ :

$$
u^{T} = \underset{u \sim \mathcal{M}_{\theta}\left(u^{0}\right)}{\arg\max} \| \mathcal{H}(u|C) \|. \tag{4}
$$

# 3 Method

Human interlocutors, with a set of Q-alternatives in mind, design their turns to conform with Grice's maxims. Inspired by this process, we propose the following tuning and inference paradigm.

# 3.1 Quantity Maxims Tuning (Q-Tuning)

We initially equip a pre-trained LLM with the pragmatic knowledge to repair an utterance according to a given Quantity Guideline $q \in \{Q^{+}, Q^{-}\}$ . To train this ability, we leverage the LLM's prior semantic knowledge to create paired training samples with minimal semantic contrasts.

# 3.1.1 Semantic Sampling for Minimal Pairs

Given a human annotation $u^h$ , we prompt a pretrained LLM $\mathcal{M}_{\theta}$ to get a down-sample $u^{h^-}$ and an up-sample $u^{h^+}$ . We use the following strategies to control the semantic relation among the source, up-, and down-samples:

- To obtain $u^{h^{-}}$ , $\mathcal{M}_{\theta}$ is asked to (1) substitute a word or phrase with its hypernym expression or (2) remove a word or phrase.
- To obtain $u^{h^+}$ , $\mathcal{M}_{\theta}$ is asked to (1) substitute a word or phrase with its hyponym expression or (2) include a word or phrase.

We add two constraints to the prompt as follows:

- $u^{h^{-}}$ should be semantically entailed by $u^{h}$ , and $u^{h^{-}}$ should be congruous with the context $C$ .
- $u^{h^+}$ should semantically entail $u^h$ , and $u^{h^+}$ should be congruous with the context $C$ .

Details of prompting and quality check are in Appendix C.
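As an illustration of the sampling step, the up-/down-sampling prompts could be assembled along the following lines. The template wording here is a hypothetical stand-in for the authors' actual prompts (given in Appendix C), not a reproduction of them.

```python
# Hypothetical sketch of minimal-pair prompt construction for Q-Tuning.
# DOWN asks for a less informative variant (hypernym substitution / deletion);
# UP asks for a more informative one (hyponym substitution / insertion).
DOWN_TEMPLATE = (
    "Context: {context}\n"
    "Utterance: {utterance}\n"
    "Rewrite the utterance by substituting a word or phrase with its hypernym, "
    "or by removing a word or phrase. The rewrite must be entailed by the "
    "original utterance and remain congruous with the context.\nRewrite:"
)
UP_TEMPLATE = (
    "Context: {context}\n"
    "Utterance: {utterance}\n"
    "Rewrite the utterance by substituting a word or phrase with its hyponym, "
    "or by adding a word or phrase. The rewrite must entail the original "
    "utterance and remain congruous with the context.\nRewrite:"
)

def minimal_pair_prompts(context: str, utterance: str) -> dict:
    """Return the pair of sampling prompts for one human annotation u^h."""
    return {
        "down": DOWN_TEMPLATE.format(context=context, utterance=utterance),
        "up": UP_TEMPLATE.format(context=context, utterance=utterance),
    }
```

Prompting the model with `down` and `up` for each annotated response then yields the ( $u^{h^-}$ , $u^h$ , $u^{h^+}$ ) triples used as training data below.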
+

# 3.1.2 Pragmatic Training for Quantity Self-Repair

To train self-repair behavior, we treat each $u^h$ as the label and its corresponding $u^{h^{-}}$ and $u^{h^{+}}$ as input. The training loss can be formulated as below:

$$
\mathcal{L}^{+} = -\sum_{j=1}^{|u^{h}|} \log \mathcal{M}_{\theta,\alpha}\left(u_{j}^{h} \mid u_{<j}^{h}, u^{h^{-}}, Q^{+}, C\right) \tag{5}
$$

$$
\mathcal{L}^{-} = -\sum_{j=1}^{|u^{h}|} \log \mathcal{M}_{\theta,\alpha}\left(u_{j}^{h} \mid u_{<j}^{h}, u^{h^{+}}, Q^{-}, C\right) \tag{6}
$$

$$
\mathcal{L} = \mathcal{L}^{+} + \mathcal{L}^{-} \tag{7}
$$

where $\alpha$ denotes the adapter subnetwork injected during adapter tuning.

# 3.2 Response Initializing

We find that the post-trained model $\mathcal{M}_{\theta,\alpha}$ is still able to generate an initial response from scratch:

$$
u^{0} \sim \mathcal{M}_{\theta,\alpha}(C) \tag{8}
$$

# 3.3 Inter-Quantity Traveling (Q-Traveling)

We propose Q-Traveling to search for the optimal Q-alternative based on a scoring function $\mathcal{H}(u)$ .

Algorithm 1 Heuristic Search for Optimal Quantity
Input: $u_0, C, \mathcal{M}_{\theta,\alpha}, \mathcal{H}$
open $\leftarrow [u_0]$ ; close $\leftarrow \emptyset$ ; score $\leftarrow \{\}$
score $[u_0] \leftarrow \mathcal{H}(u_0)$
while open $\neq \emptyset$ and $|$ close $| \leq$ maxstep do
  sort open in descending order of score
  $u^{p} \leftarrow$ pop(open)
  $u^{p^+} \leftarrow$ generate $(\mathcal{M}_{\theta,\alpha}, Q^{+}, u^{p})$
  $u^{p^{-}} \leftarrow$ generate $(\mathcal{M}_{\theta,\alpha}, Q^{-}, u^{p})$
  for $u \in [u^{p^+}, u^{p^-}]$ do
    score $[u] \leftarrow \mathcal{H}(u)$
    append(open, $u$ )
  end for
  append(close, $u^{p}$ )
end while
Output: $u^{*} \leftarrow \underset{u \in \text{score.keys}}{\arg\max}$ score $[u]$

The heuristic search algorithm is presented in Algorithm 1.
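A minimal executable sketch of this best-first search is given below; `generate` and `h` are hypothetical stand-ins for the tuned model $\mathcal{M}_{\theta,\alpha}$ invoked under a Quantity Guideline and for the scoring function $\mathcal{H}$ , so this illustrates the search loop rather than reproducing the authors' implementation.

```python
def q_travel(u0, generate, h, max_steps=8):
    """Best-first search over Q-alternatives (sketch of Algorithm 1).

    generate(q, u) -> a repaired response under guideline q in {"Q+", "Q-"};
    h(u)           -> heuristic score of response u (higher is better).
    """
    open_list = [u0]
    closed = []
    score = {u0: h(u0)}
    while open_list and len(closed) <= max_steps:
        # Expand the most promising response first.
        open_list.sort(key=lambda u: score[u], reverse=True)
        parent = open_list.pop(0)
        for q in ("Q+", "Q-"):
            child = generate(q, parent)
            if child not in score:  # avoid re-scoring duplicate alternatives
                score[child] = h(child)
                open_list.append(child)
        closed.append(parent)
    # Return the best-scoring response seen anywhere in the search.
    return max(score, key=score.get)
```

As a toy check, letting `generate` append or drop a token and letting `h` prefer responses of a target length makes the search converge on that length within a few expansions.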
In each iteration, according to the scoring board score, we sort the open list open in descending order and pop the first response as the parent node $u^p$ . We extend two new responses $u^{p+}$ and $u^{p-}$ by implementing $Q^+$ and $Q^-$ . We score the two new responses with $\mathcal{H}$ and register the scores on the scoring board. We append the two new responses to the open set at the end of the iteration. We terminate the iteration when the maximum number of extended responses has been reached. Finally, we select the response with the highest score from the scoring board. + +# 4 Experiments + +
| LlaMA+Q-Traveling vs. LlaMA | win | lose | tie |
| --- | --- | --- | --- |
| Human-like | 41.7† | 30.0 | 28.3 |
| Empathetic | 41.0† | 32.3 | 26.7 |
| Attentive | 46.7† | 40.0 | 13.3 |
Table 2: Results of Human Evaluation. † denotes a significant improvement with $p < 0.05$.

We conduct experiments on two datasets: EMPATHETICDIALOGUE (ED) and EMOTIONAL-SUPPORT-CONVERSATION (ESC). Implementation details and baselines are described in Appendix A.

Beyond traditional rule-based metrics such as the distinct score (Dist) and the BLEU score, we also report model-based metrics: the AI-rate; the expected empathy judgments (EmotionalReaction, Interpretation, Exploration) based on the EPITOME framework (Sharma et al., 2020); and the similarity between the system output and the ground truth in terms of emotion (SimEMO) and personality (SimPerson).

As shown in Table 1, our method leads to a visible increase in system performance in terms of human-like and diverse language use. On both datasets, the Q-Tuning and Q-Traveling mechanisms also enlarge diversity (Dist-n). The reduction in AI-like language use (AI-rate) is also noticeable compared to the LLM baselines. We further observe an improved match of emotion and personality (SimEMO and SimPerson) with the ground truth, mostly owing to Q-Tuning.

Table 2 presents the results of the human evaluation. Our approach shows a remarkable advantage with respect to human-like and attentive language use.

# 5 Analysis

Figure 4 compares the distribution of personality embeddings (see A.5) from the LLM backbone, our repair-aware systems, and the human-written ground truth. With the proposed mechanism for quantity repair, the system output is densely distributed in a human-like subzone (marked with a red oval), compared to the backbone LLMs.
+ +![](images/caa5e04c29d99c9cda62688a5e53a2312fa40121a2b1e8c939d954550a31c4d1.jpg) + +![](images/6e48b155983bdb535aa240e5f21771bd0efe3a33f4f2f43c7883df78640ce806.jpg) + +![](images/7d6a94a4f8de1626bb5096857e11252070e3673754534f2b2d6246828414a2ee.jpg) + +![](images/1cd510eb17bc9073f75a74b042bdd347ae3ad00e9aa5d2b4e8da3673c0e4579b.jpg) +Figure 4: Q-Tuning and Q-Traveling anchor the personality embeddings to a more human-like subzone + +![](images/0ebc3be3a9dc0ea06d98cf18b421ae233176fda2d6eee2859288d858cca7bd43.jpg) + +We also inspect two different goals, including (1) empathetic reaction and (2) helpfulness and + +
| Dataset | Model | Dist-1 | Dist-3 | BLEU-1 | AI-rate | SimEMO | SimPerson | EmotionalReaction | Interpretation | Exploration |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ED | CARE | 0.63 | 3.89 | 20.02 | 63.56 | 36.73 | 81.11 | 1.15 | 0.02 | 0.58 |
| | SEEK | 0.62 | 4.09 | 9.54 | 61.00 | 41.36 | 80.23 | 0.35 | 0.14 | 0.29 |
| | LLaMA | 3.55 | 48.59 | 12.33 | 70.64 | 54.47 | 76.24 | 0.88 | 0.10 | 0.82 |
| | +QTune | 3.99 | 44.86 | 15.57 | 66.45 | 54.74 | 80.07 | 0.99 | 0.12 | 0.68 |
| | +QTune+QTravel | 3.97 | 49.24 | 14.53 | 65.19 | 54.14 | 79.06 | 0.98 | 0.12 | 0.67 |
| | Mistral | 3.68 | 48.21 | 15.32 | 71.07 | 54.01 | 80.38 | 0.97 | 0.08 | 0.62 |
| | +QTune | 4.50 | 49.37 | 20.25 | 65.19 | 55.83 | 82.40 | 0.87 | 0.19 | 0.99 |
| | +QTune+QTravel | 4.61 | 55.51 | 17.58 | 58.95 | 55.20 | 81.06 | 0.85 | 0.16 | 1.09 |
| ESC | VLESA | 3.19 | 33.43 | 23.54 | 65.17 | 52.00 | 79.86 | 1.02 | 0.70 | 0.41 |
| | Cooper | 4.16 | 33.33 | 22.00 | 66.01 | 50.27 | 80.30 | 0.98 | 0.62 | 0.33 |
| | LLaMA | 5.00 | 49.93 | 17.30 | 71.24 | 51.00 | 77.58 | 0.84 | 0.10 | 0.56 |
| | +QTune | 4.69 | 52.15 | 18.87 | 63.90 | 52.37 | 78.16 | 0.96 | 0.13 | 0.60 |
| | +QTune+QTravel | 4.74 | 58.83 | 15.40 | 64.84 | 51.21 | 76.09 | 0.89 | 0.11 | 0.56 |
| | Mistral | 3.68 | 48.21 | 15.32 | 69.01 | 53.45 | 79.08 | 0.97 | 0.09 | 0.66 |
| | +QTune | 4.65 | 61.06 | 19.46 | 58.59 | 53.60 | 78.76 | 0.84 | 0.15 | 1.04 |
| | +QTune+QTravel | 4.30 | 63.68 | 15.39 | 57.20 | 52.46 | 77.44 | 0.81 | 0.11 | 1.10 |
Table 1: Results of Automatic Evaluation. The best performance among LLM-powered systems and among all systems is bold-highlighted and underlined, respectively.

![](images/90ddd283b9ac6b4b404551353b7efb3c527b96a00badb066abc374ff134f452e.jpg)
Figure 5: Q-Traveling reflects goal-driven conversation: the effect of the scoring function on lexical choice.

harmlessness (see A.4). From the case presented in Figure 5, we notice the adaptability of our system to different conversation goals. Detailed case studies are given in Appendix B.1.

# 6 Conclusions

Inspired by quantity self-repair practices in real-world conversation, we propose Q-Tuning and Q-Traveling to infuse pragmatic conversation strategies into large language models. The results indicate a noticeable improvement in human-like attentive listening skills.

# Limitations

The paper focuses primarily on the impact of Quantity Maxims on human conversation without delving into potential cultural or situational factors that might influence these dynamics. The study may not account for individual differences in how different listeners interpret and respond to varying levels of informativeness, which could limit the generalizability of the findings.

Finally, assessing the precise impact of conversational maxims on empathy and mental health outcomes could be challenging due to the subjective nature of these constructs and the difficulty of quantifying such effects accurately. More subjective judgment data should be collected and annotated to address this issue.

# Ethical Considerations

Our study is based on the ESC and ED datasets, which are designed specifically for emotional support and empathetic conversations and are openly available for research purposes. These datasets maintain a focus on empathy-driven scenarios while ensuring the exclusion of sensitive or personal data and unethical language.
Throughout our research, the utmost priority was given to safeguarding the privacy of all participants involved.

It is also crucial to clarify that our dialogue system is not intended to address or improve outcomes in high-risk or non-routine scenarios such as those involving self-harm or suicide. We recognize the indispensable role of professional psychological counseling or treatment in managing such critical situations.

Finally, all human participants involved in the evaluation process provided informed consent. To maintain the confidentiality and anonymity of participants, all human evaluation data was handled with strict confidentiality measures in place. All human-recruitment procedures were approved by the PolyU Institutional Review Board (IRB).

# Acknowledgments

This research was supported by GRF (B-Q0AJ) and by the Hong Kong Polytechnic University through the Large Equipment Fund (1-BC7N) and the CBS fund (1-W16H). We would also like to thank the anonymous reviewers for their feedback and suggestions.

# References

Hassan Atifi, Sacha Mandelcwajg, and Michel Marcoccia. 2011. The co-operative principle and computer-mediated communication: The maxim of quantity in newsgroup discussions. Language Sciences, 33(2):330-340.
Shoshana Blum-Kulka and Elite Olshtain. 1986. Too many words: Length of utterance and pragmatic failure. Studies in Second Language Acquisition, 8(2):165-179.
Robyn Carston. 1995. Quantity maxims and generalised implicature. Lingua, 96(4):213-244.
Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng. 2022. Improving multi-turn emotional support dialogue generation with lookahead strategy planning. arXiv preprint arXiv:2210.04242.
Yi Cheng, Wenge Liu, Jian Wang, Chak Tou Leong, Yi Ouyang, Wenjie Li, Xian Wu, and Yefeng Zheng. 2024. Cooper: Coordinating specialized agents towards a complex dialogue goal.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17853-17861. +Andrea Cuadra, Maria Wang, Lynn Andrea Stein, Malte F Jung, Nicola Dell, Deborah Estrin, and James A Landay. 2024. The illusion of empathy? + +notes on displays of emotion in human-computer interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1-18. +David Good. 1990. Repair and cooperation in conversation. In Computers and conversation, pages 133-150. Elsevier. +Herbert Paul Grice. 1975. Logic and conversation. Syntax and semantics, 3:43-58. +Peter E Hart, Nils J Nilsson, and Bertram Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE transactions on Systems Science and Cybernetics, 4(2):100-107. +Md Mahroof Hossain et al. 2021. The application of grice maxims in conversation: A pragmatic study. Journal of English Language Teaching and Applied Linguistics, 3(10):32-40. +Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. +Geunwoo Kim, Pierre Baldi, and Stephen McAleer. 2024. Language models can solve computer tasks. Advances in Neural Information Processing Systems, 36. +Yoon Kyung Lee, Jina Suh, Hongli Zhan, Junyi Jessy Li, and Desmond C Ong. 2024. Large language models produce responses perceived to be empathic. arXiv preprint arXiv:2403.18148. +Willem JM Levelt. 1983. Monitoring and self-repair in speech. Cognition, 14(1):41-104. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055. +Junlin Li, Bo Peng, Yu-Yin Hsu, and Chu-Ren Huang. 2024. Be helpful but don't talk too much-enhancing helpfulness in conversations through relevance in multi-turn emotional support. 
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1976-1988. +Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. arXiv preprint arXiv:2106.01144. +Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2024. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36. +Lady Dorothy Nevill. 1910. Under Five Reigns. Methuen. + +Siyu Pan, Caoyun Fan, Binglei Zhao, Siyang Luoc, and Yaohui Jin. Can large language models exhibit cognitive and affective empathy as humans? +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318. +Yuxiao Qu, Tianjun Zhang, Naman Garg, and Aviral Kumar. 2024. Recursive introspection: Teaching language model agents how to self-improve. arXiv preprint arXiv:2407.18219. +Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207. +Pret Sarira, Murni Mahmud, Akhmad Affandi, and Muftihaturrahmah Burhamzah. 2023. The existence of fillers in converting the written language to spoken language. Borneo Journal of English Language Education, 5(1). +Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276, Online. Association for Computational Linguistics. +Xiyue Sun. 2022. 
A corpus based study on self-repairs in Chinese English learners' oral production. In 2021 International Conference on Education, Language and Art (ICELA 2021), pages 539-543. Atlantis Press.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Jiashuo Wang, Yi Cheng, and Wenjie Li. 2022a. CARE: Causality reasoning for empathetic responses by conditional graph generation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 729-741, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Lanrui Wang, Jiangnan Li, Zheng Lin, Fandong Meng, Chenxu Yang, Weiping Wang, and Jie Zhou. 2022b. Empathetic dialogue generation via sensitive emotion recognition and sensible knowledge selection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4634-4645, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Lorrita NT Yeung, Timothy R Levine, and Kazuo Nishiyama. 1999. Information manipulation theory and perceptions of deception in Hong Kong. Communication Reports, 12(1):1-11.
Yidan Yin, Nan Jia, and Cheryl J Wakslak. 2024. AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14):e2319112121.
Jinfeng Zhou, Zhuang Chen, Bo Wang, and Minlie Huang. 2023. Facilitating multi-turn emotional support conversation with positive emotion elicitation: A reinforcement learning approach. arXiv preprint arXiv:2307.07994.

# A Experiment Details

# A.1 Dataset

Empathetic Dialogue (ED) (Rashkin et al., 2018) is a multi-turn empathetic dialogue dataset containing 24,850 one-to-one open-domain short conversations. The statistics of the ED dataset are presented in Appendix A.6.
Emotional Support Conversation (ESC) (Liu et al., 2021) is a multi-turn conversation dataset. It consists of 1,300 long conversations, each collected between an emotional help-seeker and a helper. The statistics and the data acquisition of the ESC dataset are presented in Appendix A.6.

# A.2 Baseline Systems

We compare the following systems with our proposed systems equipped with Q-Tuning and Q-Traveling.

**LLaMA2** is a vanilla open and efficient large language model that uses an optimized transformer architecture (Touvron et al., 2023). We use the meta-llama/Llama-2-7b-chat-hf checkpoint, which is optimized for dialogue use cases, as the baseline and also to implement Q-Tuning and Q-Traveling.

**Mistral** is an open large language model that balances the goals of high performance and efficiency and features sliding window attention (Jiang et al., 2023). We use the mistralai/Mistral-7B-v0.3 checkpoint as the baseline and also to implement Q-Tuning and Q-Traveling.

**CARE** is a dialogue system fine-tuned on the ED dataset. It reasons over all plausible causalities interdependently and simultaneously, given the user emotion, dialogue history, and future dialogue content (Wang et al., 2022a).

**SEEK** is an ED system that captures emotional-intention transitions in dialogue utterances (Wang et al., 2022b).

**Cooper** is an ESC system that coordinates multiple LLM agents, each dedicated to a specific aspect of the dialogue goal, to approach the complex objective (Cheng et al., 2024).

**VLESA-ORL** is an ESC system that carries out multi-level dialogue policies optimized over the cognitive principle of relevance (Li et al., 2024).

# A.3 Implementation Details

Prompting Baselines We use the prompts in Table 9 and Table 10 to generate baseline responses from meta-llama/Llama-2-7b-chat-hf and mistralai/Mistral-7B-v0.3. The top_p is set to 0.7 and top_k is set to 50.
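For intuition, the two sampling constraints compose as a filter over the next-token distribution: top_k keeps the 50 most probable tokens, and top_p then keeps the smallest high-probability prefix reaching 0.7 before renormalizing. A pure-Python sketch of that filtering (an illustration, not the actual decoding code of the baselines):

```python
def filter_top_k_top_p(probs, top_k=50, top_p=0.7):
    """Apply top-k then top-p (nucleus) truncation to a token->prob dict."""
    # keep at most top_k tokens, most probable first
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:      # smallest prefix reaching top_p
            break
    total = sum(p for _, p in kept)  # renormalize the surviving mass
    return {token: p / total for token, p in kept}
```

Sampling then draws the next token from the renormalized distribution.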
For other baseline systems, we use the official repositories to generate baseline responses.

Q-Tuning We implement Q-Tuning on both Llama-2-7b-chat-hf and Mistral-7B-v0.3. The paired samples are extracted from Llama-2-7b-chat-hf through semantic sampling (see 3.1.1), based on the prompts presented in Appendix C.1. We use LoRA tuning to perform Q-Tuning. The target modules are set to "q_proj" and "k_proj". The LoRA rank is set to 8, the LoRA alpha to 32, and the LoRA dropout rate to 0.1. We set the learning rate to 1e-5 and the training batch size to 4, train for 1 epoch, and select the final checkpoint for evaluation.

Q-Traveling For automatic evaluation, the maximum step is set to 3.

# A.4 Scoring Function for Q-Traveling

For all the results of automatic and human evaluation, the scoring function is the direct summation of the helpful score from gpt2-large-helpful-reward_model and the harmless score from gpt2-large-harmless-reward_model. For analysis, we also explore the expected empathy judgment (a scalar score $\in \{0, 1, 2\}$) through the empathy detection model fine-tuned on the Empathetic-Mental-Health Dataset (Sharma et al., 2020) using the official repository $^3$.

# A.5 Automatic Evaluation

We use several conventional and model-based metrics to evaluate the quality of the generation. Conventional evaluation metrics include Distinct-n (Dist-n) (Li et al., 2015), which evaluates the variation of responses across dialogue states, and BLEU (BLEU-n) (Papineni et al., 2002), which evaluates lexical alignment with the ground truth. Model-based evaluation metrics include emotion similarity (SimEMO), personality similarity (SimPerson), and the AI-rate.

Of importance for empathetic conversation, we argue it is viable to evaluate the similarity of emotion (SimEMO) and personality (SimPerson). For SimEMO, we use the cosine similarity between the pooled outputs of emotion-english-distilroberta-base for the generated response and the ground truth.
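Both similarity metrics reduce to the cosine between two pooled embeddings; a minimal sketch, where `embed` is a placeholder for the pooled encoder output (e.g. emotion-english-distilroberta-base for SimEMO):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similarity_metric(embed, response, reference):
    # `embed(text)` stands in for the pooled output of the encoder model
    return cosine_sim(embed(response), embed(reference))
```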
For SimPerson, we calculate the cosine similarity between the pooled outputs of Minej/bert-base-personality for the generated response and the ground truth. We also measure the AI-rate of the generated response, as the detection of an AI label is detrimental to perceived emotional support (Yin et al., 2024). We adopt SuperAnnotate/ai-detector to quantify the AI-rate of the generated response.

Additionally, we compute the expected empathy judgment (including EmotionalReaction, Interpretation, and Exploration) through the empathy detection model fine-tuned on the Empathetic-Mental-Health Dataset (Sharma et al., 2020) using the official repository ${}^{4}$. The model returns a scalar score $\in \{0, 1, 2\}$ for each dimension of judgment.

Human Evaluation Following the practice of Zhou et al. (2023), we invite three doctoral students in the field of linguistics to evaluate the proposed and baseline systems on the ED dataset. We randomly sample 50 pairs of context and response from the test set. Following previous practice, we conduct A/B tests (Cheng et al., 2022; Zhou et al., 2023) to evaluate the following aspects: (1) Human-like (to what extent the chatbot provides human-like responses), (2) Empathetic (to what extent the chatbot reflects the user's emotional state), and (3) Attentive (to what extent the chatbot is attentive to the user).

# A.6 Dataset Statistics

For the ED dataset (Rashkin et al., 2018), each conversation is recorded between an emotional speaker and an empathetic listener. In detail, the emotional speaker is asked to talk about a personal emotional situation, and the listener takes the speaker's perspective and responds empathetically. For the
| Empathetic Dialogue | Train | Dev | Test |
| --- | --- | --- | --- |
| Number of system utterances | 40254 | 5738 | 5259 |
| Avg. words per utterance | 13.39 | 14.47 | 15.32 |
| Avg. turns per dialogue | 4.31 | 4.36 | 4.31 |
| Avg. words per dialogue | 57.72 | 63.11 | 65.98 |

Table 3: Statistics of the Empathetic Dialogue Dataset

ESC dataset (Liu et al., 2021), each conversation is recorded between a help-seeker and a supporter. In detail, the help-seeker gives vent to a negative emotion, and the supporter provides support to alleviate the seeker's mental suffering.
| Emotional Support Conversation | Train | Dev | Test |
| --- | --- | --- | --- |
| Number of system utterances | 12759 | 2722 | 2895 |
| Avg. words per utterance | 18.72 | 18.83 | 17.63 |
| Avg. turns per dialogue | 23.28 | 22.91 | 24.37 |
| Avg. words per dialogue | 548.09 | 548.32 | 546.16 |

Table 4: Statistics of the Emotional Support Conversation Dataset

# B Case Studies

# B.1 Case Studies

Table 5 presents a case comparing the backbone LLM with the quantity-aware systems. Instead of providing hollow and inattentive information (italic-highlighted), our systems react to the user's troubles in a more human-like manner (bold-highlighted).

# C Semantic Sampling

# C.1 Prompt and Quality Check for Semantic Sampling

We prompt meta-llama/Llama-2-7b-chat-hf and Mistral-7B-v0.3 for semantic sampling. The prompts are presented in Table 6 and Table 7.

We use facebook/bart-large-mnli to check the entailment relationship between the input and output of semantic sampling. For $u^{h^{-}}$, we filter out samples that are not entailed by $u^{h}$, taking an entailment probability of $90\%$ as the threshold. For $u^{h^{+}}$, we filter out samples that do not entail $u^{h}$, using the same $90\%$ threshold.

We provide an excerpt of dialogue (from the ESC dataset) in Table 8 to show the output of the I-Sampler. From the case analysis, we can see that the I-Sampler, based
| | |
| --- | --- |
| Context | User: Once I took a sip of iced tea while I was driving and it went down the wrong pipe. I had to pull over so I could try to breathe again.<br>System: That's scary when that happens<br>User: Yeah it is. I thought I was about to die lol |
| Ground-truth | Some people DO die from choking, be safe! |
| LlaMA-7b-chat-hf | *Oh no, that must have been terrifying! I can imagine how difficult it must have been for you to breathe and drive at the same time.* (Hollow and weird) |
| Q-Tune (Initial) | Oh my god, thats so scary! I am glad you are ok |
| Q-Travel (Helpful & Harmless) | Oh my goodness, that must have been terrifying! I'm glad you're okay! |
| Q-Travel (Empathetic Response) | Oh my god, I can't imagine how scared you must have felt, but glad that you are ok |
Table 5: Case Studies: Systematic acquisition of quantity repair makes LLMs more attentive to users.

on LLM prompting, efficaciously resamples the informativeness of the ground-truth response without over-modifying the sentence meaning. For example, the upsampling result changes the phrase "heavy subject" to "weighty issue," which is semantically stronger. In contrast, the downsampling result changes the phrase "heavy subject" to "tough issue," a more imaginable and semantically weaker expression.

# D Prompts for LLM Baselines

Tables 9 and 10 present the initial prompts for LlaMA and Mistral on the ESC and ED datasets.
| Role | Content |
| --- | --- |
| User | Hello, you are a lexical semantic good at utterance simplification. Now I will provide you a piece of utterance composed of one or multiple sentences.<br>I need your help to pinpoint **one or two** words/phrases and replace them with simpler, more imaginable and generic ones (e.g. their hypernyms) or delete them, to make the whole utterance less informative.<br>Here are some principles you should follow:<br>## Make sure your answer is semantically weaker, generic and less informative than the piece of utterance I provide.<br>## Make sure your answer convey all the information conveyed in the provided utterance.<br>## Make sure your answer is semantically similar to the provided utterance.<br>Here are some examples:<br>Input: When Tegan went for a summer holiday beach stroll with her mum, she had no idea they would be actually walking in the footsteps of dinosaurs.<br>Output: When Tegan went for a summer holiday beach **walking** with her **family**, she had no idea they would be **(delete: actually)** walking in the footsteps of dinosaurs.<br>Input: Ah I hear you there! Some employers are so inconsiderate; they expect us to drop everything and work at any time of any day.<br>Output: **(delete: Ah)** I hear you there! Some employers are so **bad**; they expect us to drop everything and work at any time of any day.<br>Input: He blushed scarlet at the thought. Oh, he's not apprehensive. He's terrified.<br>Output: His **face was red** at the thought. Oh, he's not **nervous**. He's terrified. |
| Assistant | Understood! I'll do my best to enrich the given utterance by replacing one word or phrase with a more specific and semantically similar alternative. Please provide the input utterance, and I'll get started. |
| User | Now please simplify this utterance as a whole:<br>"{INPUT}"<br>in response to:<br>"{PREVIOUS DIALOGUE}" |
+ +Table 6: Prompt for downsampling + +
| Role | Content |
| --- | --- |
| User | Hello, you are a lexical semantic good at semantic enrichment. Now I will provide you a piece of utterance composed of one or multiple sentences. I need your help to pinpoint **one or two** words/phrases and replace it by more specific, less imaginable and semantically more concrete one (e.g. their hyponyms) or insert **one** phrasal modifiers, to make the whole utterance more informative. Here are some principles you should follow:<br>## Make sure your answer is semantically stronger, more specific and more informative than the piece of utterance I provide.<br>## Make sure your answer convey all the information conveyed in the provided utterance.<br>## Make sure your answer is semantically similar to the provided utterance. |
| User | Here are some examples:<br>Input: I get you! Some employers are so bad. They want us to stop everything and keep working.<br><output>I get you! Some employers are so **inconsiderate**. They want us to stop everything and keep working</output><br>Input: Of course, that's wise. the job at hand is important, and you should focus on it first before worrying.<br><output>Of course, that's **prudent**. the job at hand is important, and you should focus on it first before worrying **about other things**</output><br>Input: His face was red when thinking about this. Oh, he's not nervous. He's scared.<br><output>He **blushed scarlet** when thinking about this. Oh, he's not nervous. He's **terrified**</output> |
| Assistant | Understood! I'll do my best to enrich the given utterance by replacing one word or phrase with a more specific and semantically similar alternative. Please provide the input utterance, and I'll get started. |
| User | Now please enrich this utterance by adding **only one word/phrase or changing only one word/phrase**:<br>{INPUT}<br>in response to:<br>{PREVIOUS DIALOGUE}<br>Please ensure that all the sub-utterances in the input is preserved in your output. Please answer in this format: <output></output> |
+ +Table 7: Prompt for upsampling + +
| | |
| --- | --- |
| Situation | Not sure how to explain that I want out of marriage |
| System<br>(Upsample)<br>(Downsample) | Hello! How are you today?<br>Hello! How are you feeling today?<br>Hey! How's it going? |
| User | Ok I guess, I do not know how to tell my husband that I am lonely and I want out of the marriage |
| System<br>(Upsample)<br>(Downsample) | Oh, that sure is a heavy subject and a heavy thing to be on your mind.<br>Oh, that's a very weighty issue and a difficult situation to grapple with.<br>That's a tough issue to think about. |
| User | He is not one you can talk to he usually just brushes things off |
| System<br>(Upsample)<br>(Downsample) | So you have tried to discuss your loneliness with your husband before?<br>So you have attempted to share your feelings of isolation with your spouse previously?<br>Have you told your husband about how lonely you feel before? |
+ +Table 8: A case analysis of the output of semantic sampling. "System" denotes the ground-truth response in ESC dataset. The cues of informativeness resampling are bold-highlighted. + +
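The entailment-based quality check from Appendix C.1, which the samples in Table 8 would pass through, can be sketched as follows; `entail_prob` is a placeholder for an NLI scorer such as facebook/bart-large-mnli, and the triple layout is an assumption for illustration:

```python
def filter_pairs(samples, entail_prob, threshold=0.90):
    """Keep only well-formed semantic-sampling triples.

    samples: iterable of (u_h, u_h_minus, u_h_plus) triples.
    entail_prob(premise, hypothesis): placeholder returning P(entailment),
    e.g. from an NLI model like facebook/bart-large-mnli.
    The down-sampled u_h_minus must be entailed by u_h, and the up-sampled
    u_h_plus must entail u_h, each with probability >= the 90% threshold.
    """
    kept = []
    for u_h, u_minus, u_plus in samples:
        if (entail_prob(u_h, u_minus) >= threshold
                and entail_prob(u_plus, u_h) >= threshold):
            kept.append((u_h, u_minus, u_plus))
    return kept
```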
| Role | Content |
| --- | --- |
| User | Hello, I am one of your close friends. I am recently in bad mood. I come to chat with you because your are a good emotional supporter. Now I will start the chat. Please chat with me to provide support.<br>Note: Try to talk perspicuously just like our everyday chat. Don't bombard! Leave your response within one sentence.<br>Here is an example:<br>I: I feel so frustrated.<br>You: May I ask why you are feeling frustrated?<br>I: My school was closed without any prior warning due to the pandemic.<br>You: I understand you. I would also have been really frustrated if that happened to me.<br>I: Yeah! I don't even know what is going to happen with our final.<br>You: That is really upsetting and stressful.<br>You: Have you thought about talking to your parents or a close friend about this? |
| Assistant | Ok, you are my friend and I will provide your with emotional support. Let's start the conversation. |
+ +Table 9: The initial prompt for LlaMA on ESC Dataset + +
| Role | Content |
| --- | --- |
| User | Hello, I come to chat with you because your are an empathetic listener. Now I will start the chat. Please chat with me empathetically.<br>Note: Try to talk perspicuously just like our everyday chat. Don't bombard! Leave your response within one sentence.<br>Here is an example:<br>I: I feel so frustrated.<br>You: May I ask why you are feeling frustrated?<br>I: My school was closed without any prior warning due to the pandemic.<br>You: I understand you. I would also have been really frustrated if that happened to me.<br>I: Yeah! I don't even know what is going to happen with our final.<br>You: That is really upsetting and stressful.<br>You: Have you thought about talking to your parents or a close friend about this? |
| Assistant | Ok, you are my friend and I will provide your with emotional support. Let's start the conversation. |
+ +Table 10: The initial prompt for LlaMA on ED Dataset \ No newline at end of file diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/images.zip b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dc1e91c366ee6e8c63514fb4ee8947e765b0c595 --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01cfe9c9135e1721b7138b54f8dd8a1159d64f9a01a74f73449e766c95cfdc97 +size 1230782 diff --git a/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/layout.json b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8e18fbf598ff7f2dc79d14422ce39c5a1664d862 --- /dev/null +++ b/ACL/2025/Towards LLM-powered Attentive Listener_ A Pragmatic Approach through Quantity Self-Repair/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcb5f256f2b9bc333b1e76a8d5099260aba3d487d46b8bd479d79a751220464b +size 390377 diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_content_list.json b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..67213de4fabac8973a707703d3cb02fd0c7612f7 --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:76c10aea589ff26e4a340d7a3ce395ee04c3978d6a502a4139012eb88df74a21 +size 137746 diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_model.json b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a852eb8c2a1469971e5701f6b57f41667c92e13b --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c15cca3d3d6e21cb9acf118a549a2fb0e9727793fd3250dc18fdd520bd829ce +size 163989 diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_origin.pdf b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..488adc63f3b93a21c31df1afed6d2079346caca1 --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/fb6909ca-b8f9-49e3-bbe0-56ee1b8d3bc3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b9ea70757a6cd105e2d7aa8f58155d7833fddaa60b341a34490e94ec2d6df87 +size 2907015 diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/full.md b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e7853c37a4d02c5330ec2e51d778865e5145427d --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model 
Merging/full.md @@ -0,0 +1,662 @@ +# Transferring Textual Preferences to Vision-Language Understanding through Model Merging + +Chen-An Li Tzu-Han Lin Yun-Nung Chen Hung-yi Lee + +National Taiwan University, Taipei, Taiwan + +{r13942069,r12944034}@ntu.edu.tw y.v.chen@ieee.org hungyilee@ntu.edu.tw + +# Abstract + +Large vision-language models (LVLMs) perform outstandingly across various multimodal tasks. However, their ability to evaluate generated content remains limited, and training vision-language reward models (VLRMs) with preference data is computationally expensive. This paper explores a training-free alternative by merging text-based reward models (RMs) with LVLMs to create VLRMs. Our approach shows that integrating these models leads to improved performance over LVLMs' scoring and text-based RMs, offering an efficient method for incorporating textual preferences into LVLMs. The code and data are publicly available at https://github.com/1ca0503/MergeToVLRM. + +# 1 Introduction + +Large vision-language models (LVLMs) have shown exceptional performance across a wide range of multimodal tasks (Hurst et al., 2024; Team et al., 2024; Anthropic, 2024), primarily due to the implementation of reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which utilizes preference data (Sun et al., 2024; Li et al., 2024b). This process often requires the use of reward models (RMs). However, LVLMs still struggle to assess generated content effectively (Chen et al., 2024a; Li et al., 2024a), and training an RM with preference data is resource-intensive. + +In this work, we investigate an alternative approach: Can knowledge derived from text-only preference data be transferred to LVLMs without additional training? Several state-of-the-art LVLMs are built upon pre-trained language models with vision encoders and adapters (Dubey et al., 2024; Team, 2025; Lu et al., 2024). 
This architectural design suggests that textual preferences learned by text-based RMs may potentially integrate into LVLMs through parameter merging. + +![](images/1eeb19abbcbdde9d7bbc8c0489a45ef3b08d2317da9eced58a0709c3958b8786.jpg) +Figure 1: Framework for merging a text-based RM with an LVLM. LVLMs excel at visual tasks, while text-based RMs struggle to provide accurate rewards without visual cues. We transfer textual preferences to vision-language understanding, resulting in a VLRM. All icons used in this figure are sourced from https://www.flaticon.com/ + +Building on this idea, we propose merging LVLMs with text-based RMs to create vision-language reward models (VLRMs), as illustrated in Figure 1. Our approach leverages existing RMs and LVLMs, eliminating the need for costly multimodal preference data collection and training. We explore various merging strategies, ranging from simple weighted averaging (Wortsman et al., 2022) to advanced techniques such as task arithmetic (Ilharco et al., 2023), TIES (Yadav et al., 2024), and DARE (Yu et al., 2024a). + +We assess performance using VL-RewardBench (Li et al., 2024a) and Best-of-N sampling with TextVQA (Singh et al., 2019) and MMMU-Pro (Yue et al., 2024b). The results show that our combined VLRMs outperform scoring through LVLMs and reward generation with text-based RMs. Our approach offers a training-free method for transferring textual preferences to LVLMs via model merging, and we provide a detailed analysis of merging strategies, demonstrating its effectiveness across multiple benchmarks. + +# 2 Related Work + +Preference Dataset A common approach to training a reward model is to use the Bradley-Terry model (Bradley and Terry, 1952), which relies on paired data for learning. In NLP, many high-quality preference datasets are already available (Stiennon et al., 2020; Bai et al., 2022; Ethayarajh et al., 2022; Kopf et al., 2023; Cui et al., 2024; Zhu et al., 2024; Wang et al., 2024).
Similarly, in the vision-language domain, several preference datasets have been introduced (Yu et al., 2024b,c; Chen et al., 2024b; Wijaya et al., 2024; Li et al., 2024c; Zhou et al., 2024; Xiao et al., 2024). In this work, we explore the potential of transferring textual preferences to LVLMs in a training-free manner, specifically through model merging. + +LVLM-as-a-Judge & Evaluation LVLM-as-a-Judge refers to utilizing strong large vision-language models for evaluation and judgment. These LVLMs can be either closed-source (OpenAI, 2023; Hurst et al., 2024; Team et al., 2024; Anthropic, 2024) or open-source (Lee et al., 2024; Dubey et al., 2024; Deitke et al., 2024; Team, 2025). To assess LVLMs as generative reward models, Chen et al. (2024a) established benchmarks and found that LVLMs exhibit high agreement with humans in pairwise comparison judgments, but perform poorly in scoring evaluation and batch ranking tasks. Recently, VL-RewardBench (Li et al., 2024a) introduced challenging cases and complex multimodal reasoning tasks, revealing that most off-the-shelf LVLMs struggle with such evaluations. + +Model Merging Model merging is a common, training-free method for combining skills from multiple models within the parameter space. A basic approach involves simple weighted averaging (Wortsman et al., 2022), while more advanced techniques have been developed (Yadav et al., 2024; Yu et al., 2024a; Yang et al., 2024). These techniques have already proven effective in reward modeling (Rame et al., 2024; Lin et al., 2024) and LLM-as-a-judge (Kim et al., 2024) in NLP. Recently, REMEDY (Zhu et al., 2025) introduced strategies for merging LVLMs. In contrast, our work focuses on merging textual reward models into the language modeling components of LVLMs. + +# 3 Methodology + +We propose a training-free method to transfer textual preferences from a text-based RM $\theta^{\mathrm{RM}}$ to a + +LVLM $\theta^{\mathrm{LVLM}}$ through model merging. 
+ +Since both models originate from the same pre-trained language model $\theta^{\mathrm{PRE}}$ , we merge modules that appear in both models and preserve the VLVM's vision capabilities and text-based RM reward function, resulting in a VLRM that can assess textual and visual content without additional training. Below, we outline the components and merging strategies involved. + +# 3.1 Model Components + +The pre-trained language model consists of: + +$$ +\theta^ {\text {P R E}} = \left\{\theta_ {\text {e m b}} ^ {\text {P R E}}, \theta_ {\text {t r a n s}} ^ {\text {P R E}}, \theta_ {\text {l m}} ^ {\text {P R E}} \right\}, +$$ + +where $\theta_{\mathrm{emb}}^{\mathrm{PRE}}$ is the embedding layer, $\theta_{\mathrm{trans}}^{\mathrm{PRE}}$ is the transformer, and $\theta_{\mathrm{lm}}^{\mathrm{PRE}}$ is the language modeling head, which maps the final hidden state of the transformer to the vocabulary. + +The LVLM expands upon this with: + +$$ +\theta^ {\text {L V L M}} = \left\{\theta_ {\text {v e n c}} ^ {\text {L V L M}}, \theta_ {\text {a d a p t}} ^ {\text {L V L M}}, \theta_ {\text {e m b}} ^ {\text {L V L M}}, \theta_ {\text {t r a n s}} ^ {\text {L V L M}}, \theta_ {\text {l m}} ^ {\text {L V L M}} \right\}, +$$ + +where $\theta_{\mathrm{venc}}^{\mathrm{LVLM}}$ is the vision encoder, and $\theta_{\mathrm{adapt}}^{\mathrm{LVLM}}$ is the adapter that integrates the vision encoder outputs into the language model. + +Similarly, the text-based RM is defined as: + +$$ +\theta^ {\mathrm {R M}} = \left\{\theta_ {\mathrm {e m b}} ^ {\mathrm {R M}}, \theta_ {\mathrm {t r a n s}} ^ {\mathrm {R M}}, \theta_ {\mathrm {r m}} ^ {\mathrm {R M}} \right\}, +$$ + +where $\theta_{\mathrm{rm}}^{\mathrm{RM}}$ is the reward modeling head, which projects the transformer's final hidden state to a scalar value as the reward for a given input. + +# 3.2 Merging Strategies + +We explore four merging strategies. 
+ +Weighted Averaging The weighted averaging strategy is defined as: + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \lambda \cdot \theta_{\mathrm{trans}}^{\mathrm{LVLM}} + (1 - \lambda) \cdot \theta_{\mathrm{trans}}^{\mathrm{RM}}, +$$ + +where $\lambda$ is a hyperparameter that controls the weight distribution between the two terms. + +Task Arithmetic The task arithmetic strategy is defined as: + +$$ +\tau^{\mathrm{LVLM}} = \theta_{\mathrm{trans}}^{\mathrm{LVLM}} - \theta_{\mathrm{trans}}^{\mathrm{PRE}}, +$$ + +$$ +\tau^{\mathrm{RM}} = \theta_{\mathrm{trans}}^{\mathrm{RM}} - \theta_{\mathrm{trans}}^{\mathrm{PRE}}, +$$ + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \theta_{\mathrm{trans}}^{\mathrm{PRE}} + \lambda \cdot \tau^{\mathrm{LVLM}} + \lambda \cdot \tau^{\mathrm{RM}}, +$$ + +where $\tau^{\mathrm{LVLM}}$ represents the task vector derived from instruction tuning, and $\tau^{\mathrm{RM}}$ is the task vector obtained from reward modeling. The hyperparameter $\lambda$ controls the contribution of the task vectors.
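Both formulas act element-wise on matching transformer parameters. A minimal sketch on plain floats standing in for weight tensors (with torch, the same expressions apply tensor-by-tensor):

```python
# Element-wise merging of matching transformer parameters (Sec. 3.2).
# Plain floats stand in for weight tensors; this is an illustration,
# not the MergeKit implementation used in the paper.

def weighted_average(lvlm, rm, lam):
    """theta_merge = lam * theta_lvlm + (1 - lam) * theta_rm."""
    return {k: lam * lvlm[k] + (1 - lam) * rm[k] for k in lvlm}

def task_arithmetic(pre, lvlm, rm, lam):
    """theta_merge = theta_pre + lam * (tau_lvlm + tau_rm)."""
    return {k: pre[k] + lam * ((lvlm[k] - pre[k]) + (rm[k] - pre[k])) for k in pre}

pre  = {"layers.0.w": 1.0}   # shared pre-trained base (Llama-3.1-8B in the paper)
lvlm = {"layers.0.w": 1.4}   # base + instruction-tuning delta
rm   = {"layers.0.w": 0.8}   # base + reward-modeling delta

print(weighted_average(lvlm, rm, lam=0.5))       # both merge to ~1.1 here
print(task_arithmetic(pre, lvlm, rm, lam=0.5))
```

Note that the two strategies coincide on this toy example only because $\lambda = 0.5$; in general, weighted averaging interpolates between the two fine-tuned models, while task arithmetic can move beyond either of them.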
| Method | General | Hallucination | Reasoning | Overall | Macro Avg. | TextVQA Overall | MMMU-Pro Standard | MMMU-Pro Vision |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-3.2-Vision | 33.3* | 38.4* | 56.6* | 42.9* | 42.8* | 46.4 | 28.8 | 19.8 |
| Tulu-2.5-RM | 43.2 | 31.4 | 54.1 | 38.9 | 42.9 | 42.6 | 29.8 | 21.4 |
| Random | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 48.2 | 29.2 | 18.4 |
| Cascade | 44.8 | 37.8 | 57.2 | 43.8 | 46.6 | 43.2 | 30.9 | 23.4 |
| Linear | 39.3 | 52.3 | 54.4 | 51.0 | 48.7 | 54.7 | 27.8 | 22.1 |
| Task Vec. | 48.6 | 59.4 | 59.7 | 57.9 | 55.9 | 59.0 | 31.0 | 22.7 |
| TIES | 43.7 | 58.2 | 58.5 | 56.2 | 53.5 | 64.2 | 29.1 | 22.6 |
| DARE + Task Vec. | 49.2 | 61.7 | 61.0 | 59.7 | 57.3 | 58.8 | 30.3 | 22.4 |
| DARE + TIES | 49.2 | 59.1 | 58.2 | 57.4 | 55.5 | 57.3 | 31.6 | 22.0 |
+ +Table 1: Comparison of merging methods across the VL-RewardBench, TextVQA, and MMMU-Pro datasets using TULU-2.5-RM for merging. *Indicates results from Li et al. (2024a). + +TIES & DARE For the TIES and DARE strategies, we simplify the expression to: + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \theta_{\mathrm{trans}}^{\mathrm{PRE}} + \lambda \cdot f(\tau^{\mathrm{LVLM}}, d) + \lambda \cdot f(\tau^{\mathrm{RM}}, d), +$$ + +where $f(\cdot)$ denotes the function for trimming, selecting, and rescaling the task vector, and $d$ is the density determining how many parameters are retained. The two strategies apply different methods for trimming, selecting, and rescaling. See Appendix A for more details on TIES and DARE. + +# 3.3 Merged VLRM + +The merged embedding parameters $\theta_{\mathrm{emb}}^{\mathrm{MERGE}}$ are obtained following standard embedding merging techniques outlined in MergeKit (Goddard et al., 2024), as detailed in Appendix A. + +Finally, the merged VLRM $\theta^{\mathrm{MERGE}}$ is obtained by combining several components: + +$$ +\theta^{\mathrm{MERGE}} = \left\{\theta_{\mathrm{venc}}^{\mathrm{LVLM}}, \theta_{\mathrm{adapt}}^{\mathrm{LVLM}}, \theta_{\mathrm{emb}}^{\mathrm{MERGE}}, \theta_{\mathrm{trans}}^{\mathrm{MERGE}}, \theta_{\mathrm{rm}}^{\mathrm{RM}}\right\}. +$$ + +As a result, the merged VLRM can be used to provide rewards for both text and image content. + +# 4 Experiments + +# 4.1 Experimental Setup + +# 4.1.1 Models + +In this paper, we employ Llama-3.2-11B-Vision-Instruct (Dubey et al., 2024) as our LVLM, referred to as Llama-3.2-Vision. For text-based RMs, we use Llama-3.1-Tulu-2-8B-uf-mean-rm (Ivison et al., 2024) and Llama-3.1-Tulu-3-8B-RM (Lambert et al., 2024), which we denote as Tulu-2.5-RM and Tulu-3-RM, respectively. All models derive from the same pre-trained language model Llama-3.1-8B.
Our main results focus on Tulu-2.5-RM since it outperforms Tulu-3-RM on several VQA tasks with text-based input. Please refer to Appendix E for the model details. + +# 4.1.2 Model Merging + +We use MergeKit for model merging and apply several techniques: weighted averaging, task arithmetic, TIES, and DARE—labeled as Linear, Task Vec., TIES, and DARE, respectively. Additionally, we explore combining DARE with task arithmetic and TIES for a more thorough analysis. To determine the optimal merging hyperparameters, we conduct a hyperparameter search and sample 400 instances from the RLAIF-V (Yu et al., 2024c) training set as our validation set. More details are provided in Appendix A. + +# 4.2 Reward Model Evaluation + +# 4.2.1 VL-RewardBench + +We assess the merged VLRMs using VL-RewardBench (Li et al., 2024a), a benchmark that includes three domains: general multimodal instructions, hallucination-related tasks, and multimodal reasoning tasks. Each instance includes a multimodal query that consists of an image and a user prompt, along with a chosen response and a rejected response. + +# 4.2.2 Best-of-N Sampling + +We assess our reward model's effectiveness in enhancing performance through reranking using Best-of-N sampling, where $N = 8$ in our work. This method scores and ranks responses to check if the highest-scoring one matches the correct answer. Specifically, we use Llama-3.2-11B-Vision-Instruct to generate eight candidates for the TextVQA (Singh et al., 2019) and MMMU-Pro (Yue et al., 2024b) datasets. See Appendix B for dataset details. + +# 4.3 Main Results + +Table 1 demonstrates the effectiveness of merging methods for combining an LVLM with + +a text-based RM. The baseline approaches include Llama-3.2-Vision, which utilizes the LVLM for direct scoring—pairwise scoring in VL-RewardBench and verbalized scoring in Best-of-N sampling tasks. 
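The Best-of-N procedure in Section 4.2.2 reduces to scoring each sampled candidate and keeping the argmax. A minimal sketch, with `reward_fn` as a stand-in for the merged VLRM's scalar reward (the real model scores each response conditioned on the image and question):

```python
# Best-of-N sampling (Sec. 4.2.2): generate N candidate responses,
# score each with the reward model, and keep the highest-scoring one.

def best_of_n(candidates, reward_fn):
    """Return the candidate with the highest reward score."""
    return max(candidates, key=reward_fn)

# Stand-in reward: longer answers score higher. This is only a toy
# scoring rule; the paper uses the merged VLRM's reward head instead.
candidates = ["a cat", "a black cat on a mat", "cat"]
print(best_of_n(candidates, reward_fn=len))  # -> "a black cat on a mat"
```

Reranking accuracy then measures how often the top-scored candidate matches the correct answer, which is why a better reward model directly improves the final Best-of-N result.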
Another baseline method, Tulu-2.5-RM, utilizes the text-based RM that focuses solely on evaluating the textual elements of questions and responses. We also incorporate a Random baseline that randomly selects responses. Furthermore, we implement a Cascade approach that employs a two-stage process: it first uses the LVLM to generate text descriptions of images based on the given question, then passes these descriptions with the original text inputs through the text-based RM to produce final scores. + +As shown in Table 1, merged VLRMs consistently outperform Llama-3.2-Vision and Tulu-2.5-RM across nearly all merging methods and benchmarks. This result demonstrates that combining a text-based RM with an LVLM effectively transfers textual preferences without training. Different merging strategies achieve the highest scores in different benchmarks, but overall, more advanced methods outperform simpler ones, highlighting the advantages of structured merging techniques. Additionally, in several benchmarks, merged VLRMs surpass or match the strong Cascade baseline, suggesting that model merging captures more information than merely cascading two models. Furthermore, as shown in Table 2, our merged VLRMs even exceed the performance of the 90B LVLM and achieve results comparable to commercial models. A similar trend emerges when using Tulu-3-RM as the text-based RM; further details are provided in Appendix G.1. + +# 4.4 Analysis + +Without Image Input To further investigate whether the merged VLRMs effectively use the vision encoder, we conduct an ablation study by evaluating the models without image input. As shown in Table 3, most models with image input outperform those without it across various merging techniques. This result suggests that the vision encoder plays an active role after merging, with performance gains not solely attributed to the text-based RM. These findings highlight how merging methods effectively combine textual and visual information. 
However, image input does not improve performance in the MMMU-Pro Standard set, likely because this set emphasizes reasoning, where reward assessments depend more on textual + +
| Method | General | Hallucination | Reasoning |
| --- | --- | --- | --- |
| Open-Source Models* | | | |
| Llama-3.2-Vision (11B) | 33.3 | 38.4 | 56.6 |
| Llama-3.2-Vision (90B) | 42.6 | 57.3 | 61.7 |
| Proprietary Models* | | | |
| Gemini-1.5-Flash | 47.8 | 59.6 | 58.4 |
| Gemini-1.5-Pro | 50.8 | 72.5 | 64.2 |
| GPT-4o-mini | 41.7 | 34.5 | 58.2 |
| GPT-4o | 49.1 | 67.6 | 70.5 |
| Using TULU-2.5-RM for merging | | | |
| Linear | 39.3 | 52.3 | 54.4 |
| Task Vec. | 48.6 | 59.4 | 59.7 |
| TIES | 43.7 | 58.2 | 58.5 |
| DARE + Task Vec. | 49.2 | 61.7 | 61.0 |
| DARE + TIES | 49.2 | 59.1 | 58.2 |
+ +Table 2: VL-RewardBench results comparing open-source and proprietary models with our reward model using TULU-2.5-RM for merging. *Indicates results from Li et al. (2024a). Full results are provided in Table 12.
| Method | VL-RB Overall | TextVQA Overall | MMMU-Pro Standard | MMMU-Pro Vision |
| --- | --- | --- | --- | --- |
| Linear | 51.0 | 54.7 | 27.8 | 22.1 |
| w/o image input | 39.8 | 45.8 | 29.1 | 21.6 |
| Task Vec. | 57.9 | 59.0 | 31.0 | 22.7 |
| w/o image input | 44.9 | 38.7 | 31.8 | 21.0 |
| TIES | 56.2 | 64.2 | 29.1 | 22.6 |
| w/o image input | 42.7 | 40.9 | 31.2 | 21.0 |
| DARE + Task Vec. | 59.7 | 58.8 | 30.3 | 22.4 |
| w/o image input | 44.5 | 36.2 | 32.1 | 20.8 |
| DARE + TIES | 57.4 | 57.3 | 31.6 | 22.0 |
| w/o image input | 45.6 | 36.9 | 32.1 | 20.8 |
+ +Table 3: Comparison of merging methods with and without image input, using Tulu-2.5-RM for merging. VL-RB stands for VL-RewardBench. + +coherence than visual understanding, limiting the vision encoder's contribution. A similar trend occurs when using Tulu-3-RM as the text-based RM; see Appendix G.2 for details. + +Effect of Merging Hyperparameters We also investigate how merging hyperparameters impact performance. Figure 2 presents the results of searching for $d$ within the range [0.2, 0.4, 0.6, 0.8] and $\lambda$ within [0.5, 0.7, 1.0] for DARE + Task Vec. Our findings indicate that optimal hyperparameter values vary across benchmarks. For example, in VL-RewardBench, $\lambda$ values do not have a significant effect, but in the MMMU-Pro standard set, we observe that $\lambda = 1.0$ performs best. This variation indicates that the choice of hyperparameters affects the performance of the final merged VLRM differently across tasks. Consequently, this highlights the importance of a well-curated validation set when selecting the optimal hyperparameters, which could be further explored in future research. + +![](images/6535a27a21cc12b2ca6adaff6e1c0e6a6c8f89b64d2ac3a75627e0f0cb96c2.jpg) +(a) VL-RewardBench + +![](images/d32d1f13eff5af71cc61d376a0015d4ee614f7ad2f1bf7fc04f0c379444994d4.jpg) +(b) MMMU-Pro (Standard) + +Figure 2: Effect of DARE + Task Vec. merging hyperparameters with Tulu-2.5-RM as the text-based RM. + +Furthermore, our results for $d$ align with previous studies on TIES and DARE: even when task vectors are trimmed to lower rates (e.g., 0.4, 0.2), the merged VLRMs maintain strong performance, consistent with the findings on LLM merging. For further hyperparameter search results across other methods and benchmarks, refer to Appendix G.3. + +Computation Overhead In our experiments, model merging is done entirely on CPUs (Intel Xeon Silver 4216) using a system with 128 GB of RAM.
Using 11 different $\lambda$ values for weighted averaging takes about 1.5 hours of CPU time. The task arithmetic method takes a similar amount of time when using the same number of $\lambda$ values. Applying 12 combinations of $\lambda$ and density $d$ for the TIES method takes about 6 hours of CPU time, while DARE takes around 3 hours to handle the same number of combinations. + +We evaluate the models on a validation set of 400 examples from the RLAIF-V dataset. We run model inference on GPUs with 24 GB of memory (Nvidia GeForce RTX 3090). Across all configurations and merging methods, inference takes approximately 1.5 hours of GPU time per method. + +Overall, merging and evaluation require much less computing time than training a reward model from scratch. Since merging is the most time-consuming step and runs only on the CPU, the total computational cost stays relatively low. Also, both merging and evaluation can be run in parallel on multiple machines to reduce the actual runtime. + +# 5 Conclusion + +This work presents a training-free approach for integrating text-based RMs into LVLMs through model + +merging. Our method enables the efficient transfer of textual preferences without the expensive multimodal preference data collection or additional training. Experimental results show that our approach outperforms LVLM scoring and text-based RMs in multimodal reward assessment tasks. + +# Limitations + +Our study has several limitations. First, we focused on a specific 11B vision-language model paired with an 8B text-based reward model, primarily due to limitations in computational resources. Additionally, we focused solely on the LLaMA architecture and did not explore alternatives like Qwen (Bai et al., 2023a,b) due to the absence of a suitable Qwen-based reward model for our experiments. Furthermore, we did not perform extensive ablation studies on the validation set. 
Our experimental results highlight the importance of a well-curated validation set in selecting optimal hyperparameters, which could be explored further in future research. Finally, due to the sensitivity of RLHF to hyperparameter tuning and our computational constraints, we did not implement algorithms like PPO (Schulman et al., 2017). Future work could explore integrating RLHF with merged VLRMs to assess its potential impact. + +# Ethics Statement + +Our approach leverages pre-trained language and reward models, which may inherit biases from the training data. While merging models can enhance efficiency, it does not inherently mitigate existing biases. We encourage further research to evaluate and address potential biases in merged models to ensure fairness across diverse user groups. + +# Acknowledgements + +We thank the reviewers for their insightful comments. This work was financially supported by the National Science and Technology Council (NSTC) in Taiwan, under Grant 113-2628-E-002-033. We thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational and storage resources. We are also grateful to Yu-Xiang Lin from National Taiwan University for his valuable advice and thoughtful discussions. + +# References + +Anthropic. 2024. Claude 3.5 sonnet. Accessed: 2025-02-04. +Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023a. Qwen technical report. arXiv preprint arXiv:2309.16609. +Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023b. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966.
+Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. Preprint, arXiv:2204.05862. +Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345. +Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. 2024a. MLLM-as-a-judge: Assessing multimodal LLM-as-a-judge with vision-language benchmark. In *Forty-first International Conference on Machine Learning*. +Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. 2024b. Sharegpt4v: Improving large multi-modal models with better captions. In European Conference on Computer Vision, pages 370-387. Springer. +Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2024. ULTRAFEEDBACK: Boosting language models with scaled AI feedback. In *Forty-first International Conference on Machine Learning*. +Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, et al. 2024. Molmo and pixmo: Open weights and open data for state-of-the-art multimodal models. arXiv preprint arXiv:2409.17146. +Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. 
The llama 3 herd of models. arXiv preprint arXiv:2407.21783. + +Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. In Thirty-seventh Conference on Neural Information Processing Systems. +Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. 2022. Understanding dataset difficulty with $\nu$ -usable information. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988-6008. PMLR. +Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. 2024. Arcee's MergeKit: A toolkit for merging large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 477-485, Miami, Florida, US. Association for Computational Linguistics. +Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276. +Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations. +Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A. Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking DPO and PPO: Disentangling best practices for learning from preference feedback. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. +Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A Smith, Iz Beltagy, et al. 2023. 
Camels in a changing climate: Enhancing lm adaptation with tulu 2. arXiv preprint arXiv:2311.10702. +Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. 2024. Prometheus 2: An open source language model specialized in evaluating other language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4334-4353, Miami, Florida, USA. Association for Computational Linguistics. +Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richard Nagyfi, Shahul ES, Sameer Suri, + +David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. Openassistant conversations - democratizing large language model alignment. In Advances in Neural Information Processing Systems, volume 36, pages 47669-47681. Curran Associates, Inc. +Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. 2024. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124. +Seongyun Lee, Seungone Kim, Sue Park, Geewook Kim, and Minjoon Seo. 2024. Prometheus-vision: Vision-language model as a judge for fine-grained evaluation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11286-11315, Bangkok, Thailand. Association for Computational Linguistics. +Lei Li, Yuancheng Wei, Zhihui Xie, Xuqing Yang, Yifan Song, Peiyi Wang, Chenxin An, Tianyu Liu, Sujian Li, Bill Yuchen Lin, et al. 2024a. Vlrewardbench: A challenging benchmark for vision-language generative reward models. arXiv preprint arXiv:2411.17451. +Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong, and Qi Liu. 2024b.
VLFeedback: A large-scale AI feedback dataset for large vision-language models alignment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6227-6246, Miami, Florida, USA. Association for Computational Linguistics. +Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong, and Qi Liu. 2024c. Vlfeedback: A large-scale ai feedback dataset for large vision-language models alignment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6227-6246. +Tzu-Han Lin, Chen-An Li, Hung-yi Lee, and Yun-Nung Chen. 2024. DogeRM: Equipping reward models with domain knowledge through model merging. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15506-15524, Miami, Florida, USA. Association for Computational Linguistics. +Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, et al. 2024. Deepseek-vl: towards real-world vision-language understanding. arXiv preprint arXiv:2403.05525. +OpenAI. 2023. Gpt-4v system card. Accessed: 2025-02-04. + +Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744. +Alexandre Rame, Nino Vieillard, Leonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, and Johan Ferret. 2024. WARM: On the benefits of weight averaged reward models. In *Forty-first International Conference on Machine Learning*. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. 
+Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317-8326. +Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008-3021. Curran Associates, Inc. +Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liangyan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, and Trevor Darrell. 2024. Aligning large multimodal models with factually augmented RLHF. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13088-13110, Bangkok, Thailand. Association for Computational Linguistics. +Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530. +Qwen Team. 2025. Qwen2.5-vl. +Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, and Oleksii Kuchaiev. 2024. Helpsteer 2: Open-source dataset for training top-performing reward models. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track. +Robert Wijaya, Ngoc-Bao Nguyen, and Ngai-Man Cheung. 2024. Multimodal preference data synthetic alignment with reward model. Preprint, arXiv:2412.17417. + +Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. 2022. 
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International conference on machine learning, pages 23965-23998. PMLR. + +Wenyi Xiao, Ziwei Huang, Leilei Gan, Wanggui He, Haoyuan Li, Zhelun Yu, Hao Jiang, Fei Wu, and Linchao Zhu. 2024. Detecting and mitigating hallucination in large vision language models via fine-grained ai feedback. arXiv preprint arXiv:2404.14233. + +Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. 2024. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36. + +Enneng Yang, Li Shen, Guibing Guo, Xingwei Wang, Xiaochun Cao, Jie Zhang, and Dacheng Tao. 2024. Model merging in llms, mllms, and beyond: Methods, theories, applications and opportunities. Preprint, arXiv:2408.07666. + +Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024a. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In *Forty-first International Conference on Machine Learning*. + +Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, and Tat-Seng Chua. 2024b. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13807-13816. + +Tianyu Yu, Haoye Zhang, Yuan Yao, Yunkai Dang, Da Chen, Xiaoman Lu, Ganqu Cui, Taiwen He, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2024c. Rlaif-v: Aligning mllms through open-source ai feedback for super gpt-4v trustworthiness. arXiv preprint arXiv:2405.17220. + +Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. 2024a. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567. + +Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Botao Yu, Ge Zhang, Huan Sun, et al. 2024b. Mmmu-pro: A more robust multi-discipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813. + +Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuhan Zhang, Jingkang Yang, Chunyuan Li, et al. 2024. Lmms-eval: Reality check on the evaluation of large multimodal models. arXiv preprint arXiv:2407.12772. + +Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. 2024. WildChat: 1M ChatGPT interaction logs in the wild. In The Twelfth International Conference on Learning Representations. + +Yiyang Zhou, Chenhang Cui, Rafael Rafailov, Chelsea Finn, and Huaxiu Yao. 2024. Aligning modalities in vision large language models via preference finetuning. In ICLR 2024 Workshop on Reliable and Responsible Foundation Models. + +Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao. 2024. Starling-7b: Improving helpfulness and harmlessness with RLAIF. In First Conference on Language Modeling. + +Didi Zhu, Yibing Song, Tao Shen, Ziyu Zhao, Jinluan Yang, Min Zhang, and Chao Wu. 2025. REMEDY: Recipe merging dynamics in large vision-language models. In The Thirteenth International Conference on Learning Representations. + +# A Merging Details + +Weighted Averaging Wortsman et al. (2022) showed that combining the weights of multiple models fine-tuned with varying hyperparameter settings often leads to improved accuracy and robustness. In this work, we employ a weighted averaging strategy as a straightforward method to merge a large vision-language model with a text-based reward model.
The weighted averaging strategy is formally defined as: + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \lambda \cdot \theta_{\mathrm{trans}}^{\mathrm{LVLM}} + (1 - \lambda) \cdot \theta_{\mathrm{trans}}^{\mathrm{RM}}, +$$ + +where $\lambda$ is a hyperparameter that determines the weight distribution between the two models. We explore $\lambda$ values in the range [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]. + +Task Arithmetic Ilharco et al. (2023) demonstrated that the task vector, obtained by subtracting the weights of a pre-trained model from those of the same model after fine-tuning for a specific task, defines the task direction. Utilizing this task vector can improve task performance. We also apply the task arithmetic approach to develop a vision-language reward model. The task arithmetic strategy is formally defined as: + +$$ +\tau^{\mathrm{LVLM}} = \theta_{\mathrm{trans}}^{\mathrm{LVLM}} - \theta_{\mathrm{trans}}^{\mathrm{PRE}}, +$$ + +$$ +\tau^{\mathrm{RM}} = \theta_{\mathrm{trans}}^{\mathrm{RM}} - \theta_{\mathrm{trans}}^{\mathrm{PRE}}, +$$ + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \theta_{\mathrm{trans}}^{\mathrm{PRE}} + \lambda \cdot \tau^{\mathrm{LVLM}} + \lambda \cdot \tau^{\mathrm{RM}}, +$$ + +where $\tau^{\mathrm{LVLM}}$ denotes the task vector derived from instruction tuning, and $\tau^{\mathrm{RM}}$ refers to the task vector obtained from reward modeling. The hyperparameter $\lambda$ controls the relative contribution of the task vectors. We explore $\lambda$ values in the range [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]. + +TIES Yadav et al. (2024) consider the interference between parameters from different models during the model merging process. Their approach consists of three main steps. First, they prune task vector values based on magnitude, retaining only a proportion $d$ of the task vector.
Second, they resolve sign conflicts by calculating the total magnitude of parameter values in the positive and negative directions and selecting the direction with the larger total magnitude. Only values that match the chosen sign are retained. Finally, they compute the mean of the retained values to determine the final parameter value. The TIES method can be simply expressed as: + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \theta_{\mathrm{trans}}^{\mathrm{PRE}} + \lambda \cdot f(\tau^{\mathrm{LVLM}}, d) + \lambda \cdot f(\tau^{\mathrm{RM}}, d), +$$ + +where $f(\cdot)$ denotes the function for trimming, selecting, and rescaling the task vector, and $d$ is the density determining how many parameters are retained. We search for optimal values of $\lambda$ within the range [0.5, 0.7, 1.0] and $d$ within the range [0.2, 0.4, 0.6, 0.8]. + +DARE Yu et al. (2024a) also address the interference between parameters from different models during the model merging process. They randomly drop delta parameters with a probability of $p$ and rescale the remaining ones by $1 / (1 - p)$. The DARE method can be combined with both the Task Arithmetic and TIES approaches. When combined with Task Arithmetic, a proportion $p$ of task vectors is randomly dropped, and the remaining ones are rescaled by $1 / (1 - p)$. When DARE is combined with TIES, a proportion $p$ of task vectors is randomly dropped, and the sign of each parameter is determined by comparing the total magnitude in the positive and negative directions. The sign corresponding to the larger total magnitude is selected, and only values matching this sign are retained. Their mean is then computed as the final parameter value, and the result is rescaled by $1 / (1 - p)$.
The DARE method can also be expressed as: + +$$ +\theta_{\mathrm{trans}}^{\mathrm{MERGE}} = \theta_{\mathrm{trans}}^{\mathrm{PRE}} + \lambda \cdot f(\tau^{\mathrm{LVLM}}, d) + \lambda \cdot f(\tau^{\mathrm{RM}}, d), +$$ + +where $d$ represents the density, determining the proportion of retained parameters, with $d = 1 - p$. + +We search for optimal values of $\lambda$ within the range [0.5, 0.7, 1.0] and $d$ within the range [0.2, 0.4, 0.6, 0.8]. + +Merging Embeddings We follow the embedding merging procedure from MergeKit (Goddard et al., 2024). The process is as follows: + +1. If a token exists in the pre-trained model, we use its embedding from that model. +2. If a token appears in only one model (either the LVLM or the text-based RM), we use its embedding from that model. +3. If a token appears in multiple models, we compute the average of its embeddings. + +Notably, the pre-trained model is not required for the weighted averaging method. Therefore, we omit the first step when applying this merging approach. + +Merging Hyperparameter Selection We select the merging hyperparameters using a sampled set of 400 instances from the RLAIF-V (Yu et al., 2024c) training set as our validation set. In case of a tie in scores, an additional 100 sampled instances are used for evaluation. Results are discussed in Appendix G.3. + +# B Dataset Details + +VL-RewardBench VL-RewardBench (Li et al., 2024a) is a benchmark comprising 1,250 high-quality examples spanning three domains: general multimodal instructions, hallucination-related tasks, and multimodal reasoning tasks. Each example includes a multimodal query, consisting of an image and a user prompt, along with a selected response and a rejected response. + +TextVQA TextVQA (Singh et al., 2019) is a dataset designed to evaluate the ability of visual question-answering (VQA) models to read and reason about text within images.
We use its validation set, which contains 5,000 instances, to assess our merged VLRMs. + +MMMU-Pro MMMU-Pro (Yue et al., 2024b) is an advanced benchmark designed to assess the understanding and reasoning abilities of multimodal models. It is derived from the original MMMU (Yue et al., 2024a) dataset and consists of two subsets: a standard set, which includes image and text queries with 10 answer options, and a vision set, which features a vision-only input scenario. In the vision set, the questions are embedded within screenshots or photos, with no explicit text provided. + +RLAIF-V The RLAIF-V (Yu et al., 2024c) preference dataset is created by generating multiple candidate responses for a given prompt and image using various random seeds. Each response is divided into individual claims, which are then assessed using an open-source large vision-language model. This model assigns confidence scores to each claim, which are combined to form an overall response score. Preference pairs are generated by comparing the response scores for the same prompt, selecting the preferred response and the less favorable one based on the score differences. Pairs with significant length disparities are excluded to avoid bias. We select 400 instances from this preference dataset to serve as our validation set for selecting the hyperparameters of the merging methods. + +# C Best-of-N Sampling Details + +We use lmms-eval (Zhang et al., 2024) for response generation with the Best-of-N sampling technique. For the TextVQA dataset, we set both the temperature and top-p to 1.0, sampling 8 responses. To encourage concise answers, we append "Answer the question using a single word or phrase." after the generation prompt. For the MMMU-Pro dataset, we also set the temperature and top-p to 1.0, with a maximum token limit of 4096, to sample 8 responses. Additionally, we apply chain-of-thought (CoT) prompting for generating both answers and their reasoning.
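The selection step of Best-of-N sampling reduces to scoring every sampled response with the reward model and keeping the highest-scoring one. A minimal sketch, where `score_fn` is a hypothetical stand-in for the actual reward-model call:

```python
# Minimal sketch of Best-of-N selection. `score_fn` is a hypothetical
# placeholder for the reward model: any callable that maps a response
# string to a scalar score works here.

def best_of_n(candidates, score_fn):
    """Return the highest-scoring candidate and its score."""
    scored = [(score_fn(c), c) for c in candidates]
    best_score, best_response = max(scored, key=lambda pair: pair[0])
    return best_response, best_score

# Toy usage with a fake scorer that prefers shorter answers.
responses = ["A long rambling answer about the image.", "Two dogs.", "Maybe two?"]
best, score = best_of_n(responses, lambda r: -len(r))  # best == "Two dogs."
```

In the setting above, the 8 responses sampled per question would play the role of `candidates`.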
+ +# D Prompt Template + +For Best-of-N sampling using LLaMA-3.2-Vision as the generative reward model, the prompt template is provided in Table 4. For image captioning with LLaMA-3.2-Vision and reward modeling using Tulu-3-RM and Tulu-2.5-RM, the detailed prompt templates can also be found in Table 4. + +# E Open-Source Model Details + +Llama-3.2-11B-Vision-Instruct Llama-3.2-11B-Vision-Instruct (Dubey et al., 2024) is an 11B-parameter LVLM consisting of three main components: a vision encoder, an adapter, and a pre-trained language model. The language model is based on Llama-3.1-8B-Instruct. The adapter incorporates cross-attention layers to integrate image representations into the language model. During adapter training, the language model remains frozen, enabling seamless drop-in replacement for Llama-3.1 series models without requiring retraining. + +Tulu-2.5-RM Tulu-2.5-RM (Ivison et al., 2024) is a reward model initialized from Llama-3.1-8B and fine-tuned using the Tulu 2 recipe (Ivison et al., 2023). It is adapted for reward modeling by replacing the language modeling head with a linear layer and fine-tuning it on preference data from diverse sources, including Ultrafeedback (Cui et al., 2024), Nectar (Zhu et al., 2024), HH-RLHF (Bai et al., 2022), and AlpacaFarm (Dubois et al., 2023), among others. + +Tulu-3-RM Tulu-3-RM (Lambert et al., 2024) is another reward model initialized from Llama-3.1-8B and fine-tuned following the Tulu 3 recipe (Lambert et al., 2024). Like Tulu-2.5-RM, it is adapted for reward modeling by replacing the language modeling head with a linear layer. However, Tulu-3-RM is trained on a mixture of on-policy and off-policy preference data collected through an enhanced version of the Ultrafeedback (Cui et al., 2024) pipeline. This dataset includes prompts from various sources, such as the SFT dataset in the Tulu 3 recipe, WildChat (Zhao et al., 2024), Ultrafeedback (Cui et al., 2024), and synthetic persona-augmented instructions.
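Both Tulu RMs are built by swapping the causal LM's vocabulary-projection head for a linear layer that emits a single scalar. A toy NumPy sketch of that structural change (the dimensions here are made up for illustration, not the real Llama-3.1-8B sizes):

```python
import numpy as np

# Toy illustration of adapting a language model for reward modeling:
# the lm_head (hidden -> vocab logits) is replaced by a linear head
# (hidden -> 1 scalar reward). All shapes are illustrative only.

rng = np.random.default_rng(0)
hidden_size, vocab_size = 16, 100

lm_head = rng.normal(size=(hidden_size, vocab_size))  # original generation head
reward_head = rng.normal(size=(hidden_size, 1))       # replacement reward head

def reward(final_hidden: np.ndarray) -> float:
    """Score a response from the hidden state of its last token."""
    return float(final_hidden @ reward_head)

h = rng.normal(size=(hidden_size,))
logits = h @ lm_head   # shape (100,): next-token logits for generation
score = reward(h)      # a single scalar reward instead of vocab logits
```

The same backbone weights feed both heads, which is what makes the RM and the LVLM mergeable parameter-by-parameter.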
+ +# F Qualitative Results + +We investigate reward model behavior before and after merging through qualitative evaluation on VL-RewardBench. Tables 5 and 6 present results for Tulu-2.5-RM, while Tables 7 and 8 show results for Tulu-3-RM. Red text indicates misalignment with the image. Before merging, the text-based reward models made incorrect predictions; after merging, the vision-language reward models correctly identified the better response. In most cases, more advanced merging methods, such as task arithmetic, TIES, and DARE, produce larger reward differences between chosen and rejected responses than simple weighted averaging. + +# G Full Results + +# G.1 Main Results + +The main results of merging with Tulu-2.5-RM are discussed in Section 4.3 of the main text. As shown in Table 1, merged VLRMs consistently outperform Llama-3.2-Vision and Tulu-2.5-RM across nearly all merging methods and benchmarks. Notably, in VL-RewardBench, they show the greatest improvement in the Hallucination domain. In Best-of-N evaluation, they perform well in both TextVQA and MMMU-Pro. Additionally, merged VLRMs match or surpass the strong Cascade baseline, suggesting that merging captures more information than simply cascading two models. + +A similar trend is observed when merging with Tulu-3-RM. As shown in Table 9, merged VLRMs outperform Llama-3.2-Vision and Tulu-3-RM across most methods and benchmarks. In VL-RewardBench, they improve mainly in the General and Hallucination domains. For Best-of-N evaluation, they perform well in MMMU-Pro, but only a few achieve results comparable to Llama-3.2-Vision in TextVQA, likely due to Tulu-3-RM's weaker performance on this task. While merging with Llama-3.2-Vision enhances performance over Tulu-3-RM, it does not surpass Llama-3.2-Vision's score. Additionally, merged VLRMs exceed the strong Cascade baseline in the other benchmarks and remain competitive with it in TextVQA.
+ +In Table 12, we compare our merged VLRMs with large open-source LVLMs and commercial systems on VL-RewardBench. Surprisingly, our merged VLRMs outperform 90B LVLMs and achieve performance comparable to commercial models, demonstrating the effectiveness of transferring textual preferences from text-based RMs to LVLMs. + +# G.2 Without Image Input + +We conduct an ablation study by evaluating models without image input. Full results with Tulu-2.5-RM are shown in Table 10. Models with image input consistently outperform those without it across various merging techniques, suggesting that the vision encoder actively contributes after merging rather than performance gains being solely due to the text-based RM. This indicates that merged VLRMs effectively utilize the vision encoder in most cases. Notably, in VL-RewardBench, merged VLRMs match or surpass those without image input, especially in the hallucination domain, where image input significantly improves performance. In Best-of-N evaluation, models with image input perform better in the TextVQA and MMMU-Pro Vision sets. However, in the MMMU-Pro Standard set, image input does not provide an advantage, likely because this set emphasizes text reasoning, where reward assessments depend more on textual coherence than visual information. + +Full results with Tulu-3-RM are shown in Table 11, following a similar trend. In VL-RewardBench, merged VLRMs outperform those without image input in the hallucination domain and are comparable to or surpass them in the general and reasoning domains. Image input also enhances Best-of-N evaluation, particularly in TextVQA and MMMU-Pro Vision. However, in the MMMU-Pro Standard set, image input does not provide a clear advantage, reaffirming that this set prioritizes text reasoning over visual input. + +# G.3 Effect of Merging Hyperparameters + +In this study, we select the merging hyperparameters using sampled instances from RLAIF-V.
The results, based on 400 sampled RLAIF-V instances used as a validation set, are presented in Tables 13 to 22. Bold text highlights the best performance, while text with * indicates cases where scores are tied. In these cases, an additional 100 samples are used, and * marks the top-performing result among them. + +Figures 3 to 12 show the effect of hyperparameters across various benchmarks, merging methods, and text-based RMs. The results reveal that the optimal hyperparameters differ across these factors, emphasizing the importance of a well-constructed validation set; future research could explore validation-set construction further. For example, Figure 3 shows the results of searching for $\lambda$ values between 0 and 1 for the Linear method using Tulu-2.5-RM. On VL-RewardBench, a mid-range $\lambda$ produces the best performance, while on the MMMU-Pro vision set, a smaller $\lambda$ yields better results. This variation suggests that hyperparameter choices influence the performance of the final merged VLRMs differently depending on the task. + +Moreover, we observe a trend consistent with prior studies (Yadav et al., 2024; Yu et al., 2024a): even when task vectors are trimmed to lower densities (e.g., 0.4, 0.2), merged VLRMs continue to perform well, aligning with findings on LLM merging. + +
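The hyperparameter selection described here is a plain grid search: each (λ, d) combination from the grids in Appendix A is used to merge the models, and the resulting merge is scored on the sampled RLAIF-V validation set. A minimal sketch, where `merge_and_evaluate` is a hypothetical placeholder for "merge the LVLM and the text RM with these hyperparameters, then measure validation accuracy":

```python
from itertools import product

# Grid search over the merging hyperparameters used for the TIES/DARE
# variants. `merge_and_evaluate` is a hypothetical placeholder: in the real
# pipeline it would merge the two models with (lam, d) and return overall
# accuracy on the sampled RLAIF-V validation set.

LAMBDA_GRID = [0.5, 0.7, 1.0]
DENSITY_GRID = [0.2, 0.4, 0.6, 0.8]

def grid_search(merge_and_evaluate):
    best = None  # (accuracy, lam, d)
    for lam, d in product(LAMBDA_GRID, DENSITY_GRID):
        acc = merge_and_evaluate(lam, d)
        if best is None or acc > best[0]:
            best = (acc, lam, d)
    return best

# Toy stand-in whose optimum sits at lambda=1.0, d=0.6.
toy = lambda lam, d: 50 + 5 * lam - abs(d - 0.6)
best_acc, best_lam, best_d = grid_search(toy)  # (55.0, 1.0, 0.6)
```

Ties on the 400-instance validation set are then broken with the additional 100 samples, as described above.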
Best-of-N Sampling Prompt template for Llama-3.2-Vision
<|start_header_id|>system<|end_header_id|>You are a highly capable multimodal AI assistant tasked with evaluating answers to visual questions.<|eot_id|><|start_header_id|>user<|end_header_id|>Please analyze the following image and question, then evaluate the provided answer: Question: {INSTRUCTION} Answer: {RESPONSE} Evaluate the answer based on the following criteria: 1. Accuracy: How well does the answer align with the visual information in the image? Score: [1 (Poor) to 5 (Excellent)] 2. Completeness: Does the answer fully address all aspects of the question? Score: [1 (Poor) to 5 (Excellent)] 3. Clarity: Is the answer well-articulated and easy to understand? Score: [1 (Poor) to 5 (Excellent)] 4. Relevance: Does the answer directly relate to the question and the image? Score: [1 (Poor) to 5 (Excellent)] After your evaluation, please include: 1. Reasoning: A detailed explanation for each criterion, highlighting why you assigned the given score. 2. Overall Assessment: Provide an overall quality score (1 to 5) for the answer.<|eot_id|>
Image Captioning Prompt template using Llama-3.2-Vision
<|start_header_id|>system<|end_header_id|>You are a highly capable multimodal AI assistant tasked with evaluating answers to visual questions.<|eot_id|><|start_header_id|>user<|end_header_id|>{IMAGE} Please describe this image according to the given question: {INSTRUCTION}<|eot_id|>
Prompt template for reward modeling
<|start_header_id|>user<|end_header_id|>{INSTRUCTION}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{RESPONSE}<|eot_id|>
+ +Table 4: Prompt template for best-of-n sampling, image captioning and reward modeling. + +# Image: + +![](images/ccaea917036888bdfd5e0976093b5b346c6a3e544d0e8af39eadbec7c2ae8ecf.jpg) + +# Query: Describe this image in detail. + +# Response 1: + +The image showcases a lively scene of a couple of kids playing soccer on a field. They are kicking a sports ball around while several people, including young men and other onlookers, are watching and enjoying the game. + +The spectators are spread throughout the scene, with some standing in different spots around the grassy area. There are also numerous chairs set up in the vicinity, possibly for the audience to take a rest or gather at a later time. Furthermore, various personal items, such as backpacks and a handbag, can be observed near the chairs. + +Additionally, a few cars and an umbrella are present in the background, hinting at an outdoor, casual setting for this friendly soccer match. + +# Response 2: + +The image showcases a lively scene of a couple of kids playing soccer on a field. They are kicking a sports ball around while several people, including young men and other onlookers, are watching and enjoying the game. The spectators are spread throughout the scene, with some standing in different spots around the grassy area. There are also numerous chairs set up in the vicinity, possibly for the audience to take a rest or gather at a later time. Moreover, the soccer goalposts stand tall, ready for the players to score. Additionally, the grassy terrain adds to the natural charm of the setting, making it an ideal place for a friendly soccer match. + +# Score comparison: + +
| Method | Response 1 Score | Response 2 Score |
| --- | --- | --- |
| Tulu-2.5-RM | 2.17188 | 2.27930 |
| Linear | 1.89551 | 1.60156 |
| Task Vec. | 3.50391 | 1.82422 |
| TIES | 2.67773 | 1.96387 |
| DARE + Task Vec. | 3.51758 | 1.69141 |
| DARE + TIES | 3.44336 | 1.65430 |
+ +Table 5: Qualitative results on VL-RewardBench using TULU-2.5-RM for merging. + +# Image: + +![](images/bcb5a627e2b25fd2a4aecfa4f414db1b5588e0fb34ff7f096b9896491ed63bc1.jpg) + +Query: What is still on the tray? + +Response 1: + +There are still two portions of fries left on the tray. + +Response 2: + +There are still two hot dogs and some fries left on the tray. + +# Score comparison: + +
| Method | Response 1 Score | Response 2 Score |
| --- | --- | --- |
| Tulu-2.5-RM | 0.56885 | 1.20508 |
| Linear | 0.60986 | 0.38477 |
| Task Vec. | 2.47461 | 0.55273 |
| TIES | 1.76953 | 1.10840 |
| DARE + Task Vec. | 2.5918 | 0.62988 |
| DARE + TIES | 2.46484 | 0.43701 |
+ +Table 6: Qualitative results on VL-RewardBench using TULU-2.5-RM for merging. + +# Image: + +![](images/115c08832c9ac2e208c3f0e1ffac025cc54bf4c9b3384fbc2092c643b22135f5.jpg) + +Query: How many bananas are there in the image? + +# Response 1: + +In the picture, there are at least 26 bananas. The bananas are divided into several bunches, which may indicate that they vary in size and ripeness. + +# Response 2: + +There are 47 bananas in the picture. The bananas are divided into several bunches, which may indicate that their size and ripeness are different. + +# Score comparison: + +
| Method | Response 1 Score | Response 2 Score |
| --- | --- | --- |
| Tulu-3-RM | 1.4043 | 2.33594 |
| Linear | 1.33984 | 0.51074 |
| Task Vec. | 5.58203 | 4.90234 |
| TIES | 7.99219 | 7.55469 |
| DARE + Task Vec. | 4.35938 | 2.88672 |
| DARE + TIES | 5.51953 | 5.20312 |
+ +Table 7: Qualitative results on VL-RewardBench using TULU-3-RM for merging. + +![](images/9622ec4a97c2e56939ac9f6952fa25605ccb286db3c8d244792f2575db6d07d8.jpg) +Image: + +# Query: Describe this image in detail. + +# Response 1: + +In the image, there is a twin-propeller airplane parked on the tarmac, surrounded by a crowd of people. Some people seem to be disembarking from the plane, while others are walking around in the vicinity. There are at least 15 people in total, spread out near the airplane. + +Luggage is being unloaded, with a suitcase visible near the center of the scene and a handbag located towards the right side. It appears that the passengers are either arriving at their destination or preparing to board the aircraft, as the luggage is being sorted and attended to. + +# Response 2: + +In the image, the flight crew consisting of a pilot and co-pilot can be seen conversing near the twin-propeller airplane parked on the tarmac. A crowd of people surrounds them, with some individuals disembarking from the plane and others preparing to board. Ground support equipment such as baggage carts and fuel trucks are visible in the background near the airport terminal building. Luggage is being unloaded and sorted, with a suitcase and handbag visible in the foreground. The blue stripe on the airplane adds a pop of color to the scene. It appears that the passengers are either arriving at their destination or preparing to depart on their journey. An information sign can be seen towards the left side of the image. + +Score comparison: + +
| Method | Response 1 Score | Response 2 Score |
| --- | --- | --- |
| Tulu-3-RM | 3.94531 | 4.74219 |
| Linear | 3.66016 | 2.74414 |
| Task Vec. | 5.23828 | 2.99219 |
| TIES | 7.72656 | 5.67188 |
| DARE + Task Vec. | 4.67188 | 2.24414 |
| DARE + TIES | 5.79688 | 2.88477 |
+ +Table 8: Qualitative results on VL-RewardBench using TULU-3-RM for merging. + +
| Method | General | Hallucination | Reasoning | Overall | Macro Avg. | TextVQA (Overall) | MMMU-Pro (Standard) | MMMU-Pro (Vision) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-3.2-Vision | 33.3* | 38.4* | 56.6* | 42.9* | 42.8* | 46.4 | 28.8 | 19.8 |
| Tulu-3-RM | 45.4 | 36.6 | 56.6 | 43.0 | 46.2 | 27.4 | 29.4 | 20.4 |
| Random | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 48.2 | 29.2 | 18.4 |
| Cascade | 54.1 | 40.5 | 57.2 | 46.7 | 50.6 | 38.3 | 31.3 | 23.7 |
| Linear | 47.5 | 51.0 | 55.0 | 51.5 | 51.2 | 45.8 | 29.1 | 19.0 |
| Task Vec. | 63.4 | 66.4 | 57.5 | 63.7 | 62.4 | 36.0 | 31.6 | 20.9 |
| TIES | 59.0 | 74.1 | 50.9 | 66.0 | 61.4 | 28.3 | 30.7 | 20.6 |
| DARE + Task Vec. | 63.4 | 68.9 | 58.5 | 65.4 | 63.6 | 36.1 | 30.2 | 20.9 |
| DARE + TIES | 63.9 | 65.6 | 57.2 | 63.2 | 62.2 | 56.9 | 31.4 | 21.8 |
+ +Table 9: Comparison of merging methods across the VL-RewardBench, TextVQA, and MMMU-Pro datasets using TULU-3-RM for merging. *Indicates results from Li et al. (2024a). + +
| Method | General | Hallucination | Reasoning | Overall | Macro Avg. | TextVQA (Overall) | MMMU-Pro (Standard) | MMMU-Pro (Vision) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Linear | 39.3 (-2.2) | 52.3 (+20.8) | 54.4 (-4.1) | 51.0 (+11.2) | 48.7 (+4.9) | 54.7 (+8.9) | 27.8 (-1.3) | 22.1 (+0.5) |
| w/o image input | 41.5 | 31.5 | 58.5 | 39.8 | 43.8 | 45.8 | 29.1 | 21.6 |
| Task Vec. | 48.6 (+4.3) | 59.4 (+20.4) | 59.7 (+0.6) | 57.9 (+13.0) | 55.9 (+8.4) | 59.0 (+20.3) | 31.0 (-0.8) | 22.7 (+1.7) |
| w/o image input | 44.3 | 39.0 | 59.1 | 44.9 | 47.5 | 38.7 | 31.8 | 21.0 |
| TIES | 43.7 (-1.1) | 58.2 (+23.0) | 58.5 (-0.6) | 56.2 (+13.5) | 53.5 (+7.1) | 64.2 (+23.3) | 29.1 (-2.1) | 22.6 (+1.6) |
| w/o image input | 44.8 | 35.2 | 59.1 | 42.7 | 46.4 | 40.9 | 31.2 | 21.0 |
| DARE + Task Vec. | 49.2 (+4.4) | 61.7 (+23.4) | 61.0 (+2.2) | 59.7 (+15.2) | 57.3 (+10.0) | 58.8 (+22.6) | 30.3 (-1.8) | 22.4 (+1.6) |
| w/o image input | 44.8 | 38.3 | 58.8 | 44.5 | 47.3 | 36.2 | 32.1 | 20.8 |
| DARE + TIES | 49.2 (+3.3) | 59.1 (+19.2) | 58.2 (-0.6) | 57.4 (+11.8) | 55.5 (+7.3) | 57.3 (+20.4) | 31.6 (-0.5) | 22.0 (+1.2) |
| w/o image input | 45.9 | 39.9 | 58.8 | 45.6 | 48.2 | 36.9 | 32.1 | 20.8 |
+ +Table 10: Full results comparing merging methods with and without image input, using TULU-2.5-RM for merging. + +
| Method | General | Hallucination | Reasoning | Overall | Macro Avg. | TextVQA (Overall) | MMMU-Pro (Standard) | MMMU-Pro (Vision) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Linear | 47.5 (-1.1) | 51.0 (+1.1) | 55.0 (0.0) | 51.5 (+0.5) | 51.2 (0.0) | 45.8 (+25.5) | 29.1 (+0.5) | 19.0 (-1.3) |
| w/o image input | 48.6 | 49.9 | 55.0 | 51.0 | 51.2 | 20.3 | 28.6 | 20.3 |
| Task Vec. | 63.4 (+3.8) | 66.4 (+19.3) | 57.5 (+4.4) | 63.7 (+13.2) | 62.4 (+9.1) | 36.0 (+1.2) | 31.6 (-0.1) | 20.9 (+0.3) |
| w/o image input | 59.6 | 47.1 | 53.1 | 50.5 | 53.3 | 34.8 | 31.7 | 20.6 |
| TIES | 59.0 (-0.6) | 74.1 (+33.5) | 50.9 (-3.2) | 66.0 (+19.2) | 61.4 (+10.0) | 28.3 (-0.3) | 30.7 (-1.0) | 20.6 (-0.9) |
| w/o image input | 59.6 | 40.6 | 54.1 | 46.8 | 51.4 | 28.6 | 31.7 | 21.5 |
| DARE + Task Vec. | 63.4 (+3.8) | 68.9 (+18.4) | 58.5 (+2.2) | 65.4 (+12.1) | 63.6 (+8.2) | 36.1 (-5.8) | 30.2 (-1.9) | 20.9 (+0.7) |
| w/o image input | 59.6 | 50.5 | 56.3 | 53.3 | 55.4 | 41.9 | 32.1 | 20.2 |
| DARE + TIES | 63.9 (+8.7) | 65.6 (+20.9) | 57.2 (+1.9) | 63.2 (+14.2) | 62.2 (+10.4) | 56.9 (+29.2) | 31.4 (+0.6) | 21.8 (+1.4) |
| w/o image input | 55.2 | 44.7 | 55.3 | 49.0 | 51.8 | 27.7 | 30.8 | 20.4 |
+ +Table 11: Full results comparing merging methods with and without image input, using TULU-3-RM for merging. + +
| Method | General | Hallucination | Reasoning | Overall | Macro Avg. |
| --- | --- | --- | --- | --- | --- |
| Open-Source Models* |  |  |  |  |  |
| Llama-3.2-Vision-11B-Instruct | 33.3 | 38.4 | 56.6 | 42.9 | 42.8 |
| Llama-3.2-Vision-90B-Instruct | 42.6 | 57.3 | 61.7 | 56.2 | 53.9 |
| Qwen2-VL-72B-Instruct | 38.1 | 32.8 | 58.0 | 39.5 | 43.0 |
| Molmo-72B-0924 | 33.9 | 42.3 | 54.9 | 44.1 | 43.7 |
| NVLM-D-72B | 38.9 | 31.6 | 62.0 | 40.1 | 44.1 |
| Proprietary Models* |  |  |  |  |  |
| Gemini-1.5-Flash (2024-09-24) | 47.8 | 59.6 | 58.4 | 57.6 | 55.3 |
| Gemini-1.5-Pro (2024-09-24) | 50.8 | 72.5 | 64.2 | 67.2 | 62.5 |
| Claude-3.5-Sonnet (2024-06-22) | 43.4 | 55.0 | 62.3 | 55.3 | 53.6 |
| GPT-4o-mini (2024-07-18) | 41.7 | 34.5 | 58.2 | 41.5 | 44.8 |
| GPT-4o (2024-08-06) | 49.1 | 67.6 | 70.5 | 65.8 | 62.4 |
| Using TULU-2.5-RM for merging |  |  |  |  |  |
| Linear | 39.3 | 52.3 | 54.4 | 51.0 | 48.7 |
| Task Vec. | 48.6 | 59.4 | 59.7 | 57.9 | 55.9 |
| TIES | 43.7 | 58.2 | 58.5 | 56.2 | 53.5 |
| DARE + Task Vec. | 49.2 | 61.7 | 61.0 | 59.7 | 57.3 |
| DARE + TIES | 49.2 | 59.1 | 58.2 | 57.4 | 55.5 |
| Using TULU-3-RM for merging |  |  |  |  |  |
| Linear | 47.5 | 51.0 | 55.0 | 51.5 | 51.2 |
| Task Vec. | 63.4 | 66.4 | 57.5 | 63.7 | 62.4 |
| TIES | 59.0 | 74.1 | 50.9 | 66.0 | 61.4 |
| DARE + Task Vec. | 63.4 | 68.9 | 58.5 | 65.4 | 63.6 |
| DARE + TIES | 63.9 | 65.6 | 57.2 | 63.2 | 62.2 |
+ +Table 12: Full results on VL-RewardBench, compared with current strong large vision-language models. *Indicates results from Li et al. (2024a). + +![](images/acc43a07283120f7437519ffc116a5321231ea288797e8bd4a8ae14a56b20b8d.jpg) +(a) VL-RewardBench + +![](images/c67da2855669be596e4a4899cc65aa1b79886ac0bd7c695e8ef0ff5a515e3fa8.jpg) +(b) TextVQA + +![](images/75834df68a5161b94d768ff65e83b8a2a15849c79bd9fd7dc6103cd5bf219e15.jpg) +(c) MMMU-Pro (Standard) + +![](images/feef1ba326b7d7741ee087253dc1e73ff44120a0a634feb9db07687b3dba8035.jpg) +(d) MMMU-Pro (Vision) + +![](images/12e0f1c36c12baaca4d0d0b4404e5f6710352b7e129e7844d86f078427a88428.jpg) +Figure 3: Full results of merging Llama-3.2-Vision and Tulu-2.5-RM (Linear) +(a) VL-RewardBench +Figure 4: Full results of merging Llama-3.2-Vision and Tulu-2.5-RM (Task Vec.) + +![](images/ac4af9c942a204f81968653cc6de77b02a24854b4314207e5a5a278086b9282b.jpg) +(b) TextVQA + +![](images/af1101dbd454de1513dba8c1cc13f3b92ca6c4ed4df0af988600df7ee25844a5.jpg) +(c) MMMU-Pro (Standard) + +![](images/594fbc19366b29902fc51657543cad019e9846381303cb4ab0f1b48832a940f6.jpg) +(d) MMMU-Pro (Vision) + +![](images/b44c3ce21cb658f7a045179552265336a0d3e60a542ee28e246f1866950c420b.jpg) +(a) VL-RewardBench + +![](images/d5d7484a0991eda311d5d36308bfb41a08d3043ea3bd6ca2680c977b2a7811bf.jpg) +(b) TextVQA + +![](images/98dccf4d7ed7e9ba0550e716478ccc370b429c820575f6ddb3fba20d5b8f4c6a.jpg) +Figure 5: Full results of merging Llama-3.2-Vision and Tulu-2.5-RM (TIES) + +![](images/a472c23aaf64cbf1aab6be2c2cb13f8575d047ad6cef3205e9bce9804b1b29ed.jpg) +(c) MMMU-Pro (Standard) +(d) MMMU-Pro (Vision) + +![](images/3bfd076daf6f82ca79db8803f101d3b25445e1f6a511a3a00624d78b1f2e88f7.jpg) +(a) VL-RewardBench + +![](images/26c3a874e20068854d8636b3fcd7270df927e14ce13448c341e139a72b4b8d52.jpg) +(b) TextVQA + +![](images/80530ed5f88e3f5877b92e4506420744a75ee283574cd14c0f69684ff4b27e85.jpg) +(c) MMMU-Pro (Standard) + 
+![](images/9f908443a9122afa8737c9e5e5105b734df1dd47d9dfd7c6ee0784f5583d7019.jpg) +(d) MMMU-Pro (Vision) +Figure 6: Full results of merging Llama-3.2-Vision and Tulu-2.5-RM (DARE + Task Vec.) + +![](images/285d40f9f8a51c6c009d2616308f33549dbd9480a133e5cf2911f30e341f314b.jpg) +(a) VL-RewardBench +Figure 7: Full results of merging Llama-3.2-Vision and Tulu-2.5-RM (DARE + TIES) + +![](images/00eed6684d4d302980b1f14d845ce41b4d7b76a148ddcbd0d1e80f4cb079b895.jpg) +(b) TextVQA + +![](images/0f90ae8a271a3400b5d68852c71fa9dfdecf25a537a633a03b3c51a2448bd966.jpg) +(c) MMMU-Pro (Standard) + +![](images/4f0c4aa73eee4c49b649fa0c3c65a78e65a4caccd405c488e87399bf4f5d711f.jpg) +(d) MMMU-Pro (Vision) + +![](images/4de3d86d5e968fc519086db5e5bc293520b86233758d32ce01a26d86910d352f.jpg) +(a) VL-RewardBench +Figure 8: Full results of merging Llama-3.2-Vision and Tulu-3-RM (Linear) + +![](images/39fa54a8ef2bfc40fa8e8a6e65517a58fb70ed64526c938e04d139993133d43c.jpg) +(b) TextVQA + +![](images/589d581a0c74c53b3f6a70e923b4c52d6656f7c8604a6fecd22f4b67abd1ae4b.jpg) +(c) MMMU-Pro (Standard) + +![](images/e82d4a3f110f90bf5b8e6fb26e12b6ba4e07b22fd1f2f9e0734cc009caa8099d.jpg) +(d) MMMU-Pro (Vision) + +![](images/efa083243cc08c3ba929939dde671180054424d64ec2b637f55d8379f8a04b4b.jpg) +(a) VL-RewardBench + +![](images/a2e9733fb57a5abb40db5dcb90641980f43f9283573fbc58e508eecc935de371.jpg) +(b) TextVQA + +![](images/336045bda88393e1cf8f52e3c82cfabec087e0cc578941648c5062e99ecbdf9c.jpg) +Figure 9: Full results of merging Llama-3.2-Vision and Tulu-3-RM (Task Vec.) 
+ +![](images/8d8516d50d453961e2ec28d4ecbb3e8f98b4f7f8eae2feeeb71464bc28899338.jpg) +(c) MMMU-Pro (Standard) +(d) MMMU-Pro (Vision) + +![](images/580ac2ba50d6c654b98845f7f243eb55528672e479ee1812f985d36d631ac9a3.jpg) +(a) VL-RewardBench + +![](images/01f0096c79321e30e3d702ade6de6aba22a1f8aedb12cfc46b08f46985afa63f.jpg) +(b) TextVQA + +![](images/c857226270dcd94475e67f96744dc01c42a7f7656dde0d9827d3cb78bd8bf220.jpg) +(c) MMMU-Pro (Standard) + +![](images/d21218dac859bad80b322f2cc1d754e75c96eb0f97f41fda0d49319adb45f0ac.jpg) +(d) MMMU-Pro (Vision) +Figure 10: Full results of merging Llama-3.2-Vision and Tulu-3-RM (TIES) + +![](images/e6b2b91e64c278fc6be30c98b9662272fd207eed69bf17caa0543bb270da85ea.jpg) +(a) VL-RewardBench +Figure 11: Full results of merging Llama-3.2-Vision and Tulu-3-RM (DARE + Task Vec.) + +![](images/cc2cda381c7a40734066a37f1ab0c7261bdc27bdabfdd68e4849d7c3171a1752.jpg) +(b) TextVQA + +![](images/465e0db635366c032950d98293ae62ea354d0c0cafdabdf9d3eeeb8b6ac0545a.jpg) +(c) MMMU-Pro (Standard) + +![](images/ae532ab963e16a714427fb970cb30bb941174c54bf293b5b0eb89b9f6797b128.jpg) +(d) MMMU-Pro (Vision) + +![](images/eb203b7d5eebffb1f6f6e4bc3d98202f20d2a0f96911a275b8322fbfc0fd49bb.jpg) +(a) VL-RewardBench +Figure 12: Full results of merging Llama-3.2-Vision and Tulu-3-RM (DARE + TIES) + +![](images/18f1228c61607b436caa8e3f4b02bc2b3a0a3dce0a655c3a3e17dfaa53c365d7.jpg) +(b) TextVQA + +![](images/8417f6595418e2127bac5a57057c4b74f896bf14bde81b9eee3828a63161a160.jpg) +(c) MMMU-Pro (Standard) + +![](images/c1e83308672891e8f138fa15c1b55272af65b53b0369d74194ae5444cc7cb09d.jpg) +(d) MMMU-Pro (Vision) + +
| λ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Overall Acc. | 49.8 | 52.3 | 50.3 | **52.5** | 52.0 | 49.0 | 47.3 | 46.5 | 46.5 | 50.3 | 47.0 |
+ +Table 13: Linear merging using Tulu-2.5-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| λ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Overall Acc. | 55.3 | 50.0 | 53.3 | 54.5 | 53.5 | 49.3 | 52.8 | 54.0 | 53.8 | 54.8 | 55.3* |
+ +Table 14: Task Vec. merging using Tulu-2.5-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 53.5 | 53.8* | 52.3 | 50.0 |
| λ = 0.7 | 53.5 | 53.8 | 52.3 | 50.3 |
| λ = 0.5 | 53.5 | 53.8 | 52.3 | 50.0 |
+ +Table 15: TIES merging using Tulu-2.5-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
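TIES merging adds a trim / elect-sign / merge pipeline on top of task vectors, which is plausibly where the extra parameter d in these tables enters. A simplified sketch over flat lists of floats; treating d as the fraction of entries kept per vector is an assumption of this sketch:

```python
def ties_merge(vectors, lam, density):
    """TIES-style merge of task vectors, each given as a flat list of floats.

    density: fraction of largest-magnitude entries kept per vector (trim step).
    lam: global scale applied to the merged vector.
    """
    n = len(vectors[0])
    trimmed = []
    for v in vectors:
        # 1) Trim: zero out all but the top-|density * n| entries by magnitude.
        k = max(1, int(density * n))
        cutoff = sorted((abs(x) for x in v), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= cutoff else 0.0 for x in v])
    merged = []
    for i in range(n):
        col = [v[i] for v in trimmed]
        # 2) Elect a sign per position by total magnitude of each sign group.
        pos = sum(x for x in col if x > 0)
        neg = -sum(x for x in col if x < 0)
        sign = 1.0 if pos >= neg else -1.0
        # 3) Average only the entries that agree with the elected sign.
        agree = [x for x in col if x * sign > 0]
        merged.append(lam * sum(agree) / len(agree) if agree else 0.0)
    return merged
```

With a single task vector the elect/merge steps are trivial and only the trim (density) matters, which matches the intuition that d controls how aggressively each vector is sparsified before combination.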
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 55.3 | 56.5 | 54.5 | 55.3 |
| λ = 0.7 | 54.5 | 54.0 | 53.5 | 55.8 |
| λ = 0.5 | 49.0 | 49.3 | 51.8 | 54.8 |
+ +Table 16: DARE + Task Vec. merging using Tulu-2.5-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
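DARE randomly drops task-vector entries and rescales the survivors before the Task Vec. (or TIES) combination step is applied. A minimal sketch; the drop probability and seeding below are illustrative, not the configuration used in these experiments:

```python
import random

def dare_sparsify(task_vector, drop_p, seed=0):
    # DARE: randomly zero a fraction drop_p of task-vector entries and rescale
    # the survivors by 1 / (1 - drop_p), so the expected update is unchanged.
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - drop_p)
    return [0.0 if rng.random() < drop_p else x * scale for x in task_vector]

# Toy usage: drop half the entries of a constant task vector.
sparse = dare_sparsify([1.0] * 8, drop_p=0.5, seed=3)
```

The sparsified vector is then merged exactly as in the plain Task Vec. or TIES variants, which is why these tables reuse the same λ and d grid.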
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 55.5 | 56.0* | 56.0 | 55.5 |
| λ = 0.7 | 53.3 | 54.3 | 53.8 | 52.3 |
| λ = 0.5 | 51.5 | 49.8 | 51.5 | 51.8 |
+ +Table 17: DARE + TIES merging using Tulu-2.5-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| λ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Overall Acc. | 51.5 | 46.8 | 50.3 | 49.3 | 52.0 | 50.8 | 49.3 | 47.3 | 49.5 | 49.3 | 51.3 |
+ +Table 18: Linear merging using Tulu-3-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| λ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Overall Acc. | 49.3 | 53.5 | 49.8 | 49.8 | 51.0 | 51.0 | 53.8 | 53.0 | 53.0 | 50.3 | 55.3 |
+ +Table 19: Task Vec. merging using Tulu-3-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 53.5 | 53.3 | 54.0 | 51.0 |
| λ = 0.7 | 53.8 | 54.3 | 54.3* | 51.5 |
| λ = 0.5 | 53.5 | 53.3 | 54.0 | 51.0 |
+ +Table 20: TIES merging using Tulu-3-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 54.8 | 55.8 | 55.3 | 58.0 |
| λ = 0.7 | 53.8 | 53.8 | 52.3 | 50.3 |
| λ = 0.5 | 50.0 | 50.3 | 51.0 | 51.5 |
+ +Table 21: DARE + Task Vec. merging using Tulu-3-RM as the text-based RM, evaluated on sampled RLAIF-V. + +
| Overall Acc. | d = 0.8 | d = 0.6 | d = 0.4 | d = 0.2 |
|--------------|---------|---------|---------|---------|
| λ = 1.0 | 55.8 | 55.8 | 56.0 | 56.8 |
| λ = 0.7 | 52.8 | 52.5 | 52.5 | 52.3 |
| λ = 0.5 | 55.3 | 53.8 | 48.0 | 54.5 |
+ +Table 22: DARE + TIES merging using Tulu-3-RM as the text-based RM, evaluated on sampled RLAIF-V. \ No newline at end of file diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/images.zip b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d6292a73d6eac0da8486f559e621792d9a8e4e90 --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c739e49e5bb2e9d3c79c9c8cf6b5963a39bc464f13303f5a53c87398b4b3cf1 +size 1847579 diff --git a/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/layout.json b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4c4d96cd360c93f2853113ef335642b4e1de9822 --- /dev/null +++ b/ACL/2025/Transferring Textual Preferences to Vision-Language Understanding through Model Merging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1d9a91e7c196345b397471fdcc4304a24cecaee4e68568ebf94c52b5eaea2bf +size 638973 diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_content_list.json b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..931c4fc57ddaf5b2b4be670e441db98e1eec78ae --- /dev/null +++ b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_content_list.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:412783996df04574957cefb21acf0a6db0a2512c642d1b0d60644554e4cc6d9d +size 83811 diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_model.json b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d7b94d50aeec797b6cf7e516a3ca95516b2b2fba --- /dev/null +++ b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d52a8184c22a8c99d4831065c7290651e1eedcb5306055419c56616c508aa6a +size 106956 diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_origin.pdf b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bead7f8225cdc1d0c6a440d7e80b15a33770856b --- /dev/null +++ b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/9a58a770-1003-4690-83c3-ce62f7d08211_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f669ebab0b4ae1caf32724e068cfb0742cd0c2a7f2dc6c2860f333fe69d7feca +size 479758 diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/full.md b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7afcc705f7a064d9c8ed6170ca15dd2a260a47e0 --- /dev/null +++ 
b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/full.md @@ -0,0 +1,552 @@

# TREECUT: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation

Jialin Ouyang

Columbia University

jo2559@columbia.edu

# Abstract

Large language models (LLMs) now achieve near-human performance on standard math word problem benchmarks (e.g., GSM8K), yet their true reasoning ability remains disputed. A key concern is that models often produce confident, yet unfounded, answers to unanswerable problems. We introduce TREECUT, a synthetic dataset that systematically generates infinite unanswerable math word problems and their answerable counterparts, by representing each question as a tree and removing chosen necessary conditions. Experiments show TREECUT effectively induces hallucinations in large language models, including GPT-4o and o3-mini, with rates of $64\%$ and $44\%$ in their respective worst-case scenarios under the zero-shot setting. Further analysis highlights that deeper or more complex trees, composite item names, and removing a necessary condition near the middle of a path all increase the likelihood of hallucinations, underscoring the persistent challenges LLMs face in identifying unanswerable math problems. The dataset generation code and sample data are available at https://github.com/j-bagel/treecut-math.

# 1 Introduction

Mathematical reasoning is a crucial part of human intelligence. Recent years have witnessed remarkable advancements in the mathematical reasoning capabilities of large language models (LLMs). By leveraging techniques such as chain-of-thought prompting (Wei et al., 2022), state-of-the-art LLMs (e.g., Achiam et al. (2023); Team et al. (2024); Dubey et al. (2024)) have achieved human-level performance on benchmarks like GSM8K (Cobbe et al., 2021). However, it remains controversial whether this performance implies reasoning capability beyond pattern matching.
A substantial body of research highlights the capability of large language models in mathematical reasoning. Achiam et al. (2023); Team et al. (2024); Dubey et al. (2024); Yang et al. (2024), among others, achieved over $90\%$ accuracy on GSM8K (Cobbe et al., 2021), a dataset consisting of 8K grade school math word problems. Yang et al. (2024); Zhou et al. (2023), among others, achieved over $80\%$ accuracy on the more difficult MATH dataset (Hendrycks et al., 2021), which consists of 12.5K high school math competition problems.

Meanwhile, there is a line of research questioning the reasoning ability of LLMs by showing their vulnerability to superficial changes of the input that do not alter the underlying logic. Works like Shi et al. (2023); Jiang et al. (2024) find that LLMs are easily distracted by irrelevant context or token-level perturbation that does not change the underlying logic of the reasoning task. Mirzadeh et al. (2024) further demonstrate that the performance of LLMs declines when numerical values are altered in the questions from the GSM8K dataset.

There is yet another line of research that challenges the ability of LLMs to refrain from answering unanswerable problems. Ma et al. (2024); Li et al. (2024); Sun et al. (2024); Zhou et al. (2024a); Saadat et al. (2024) introduce minor modifications to existing math word problems to create unanswerable variants, and find that LLMs often generate hallucinatory answers for these unanswerable questions, even when they perform well on the original answerable datasets. However, these efforts rely on pre-existing math word problem sources, making them susceptible to training data contamination, limited in scope, and lacking rich structures for extended research.

To address these shortcomings, we propose TREECUT, a synthetic dataset capable of systematically generating an infinite number of unanswerable math word problems and their answerable counterparts.
TREECUT represents each problem as a tree, with nodes representing variables and edges representing formulas. Unanswerable problems are generated by removing an edge along

![](images/417a6cab5bb12c8de660c5db8ecdd5459c4668c9e7ad09965553959fc2f7e96c.jpg)
Question: A burger costs 14 dollars. 3 scrambled eggs cost 4 dollars less than 2 burgers. 3 pies cost 12 dollars less than 3 burgers. A BLT sandwich costs 13 dollars less than 3 scrambled eggs. Question: how much does a BLT sandwich cost?
Solution to the answerable problem:
It is given as a fact that a burger costs 14 dollars. Combine with the fact that 3 scrambled eggs cost 4 dollars less than 2 burgers, we get a scrambled egg costs 8 dollars. Combine with the fact that a BLT sandwich costs 13 dollars less than 3 scrambled eggs, we get a BLT sandwich costs 11 dollars.
Solution to the unanswerable problem: All we know about the prices of BLT sandwich and scrambled egg is: a BLT sandwich costs 13 dollars less than 3 scrambled eggs. There are 2 variables but only 1 linear formula, so we cannot calculate the price of a BLT sandwich.
Figure 1: The left and middle panels depict the tree structures corresponding to the answerable and unanswerable questions, respectively. In the right panel, the strike-through sentence represents the formula removed by the cut. The variable mappings to items are as follows: $x_{1}$ represents a burger, $x_{2}$ represents a scrambled egg, $x_{3}$ represents a BLT sandwich, and $x_{4}$ represents a pie.

the path from the root to the questioned variable. Our unanswerable dataset proves to be challenging even for GPT-4o and o3-mini. In addition, TREECUT allows precise control over the structural components of each problem, enabling detailed investigations into when and why LLMs produce hallucinations.
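The construction illustrated in Figure 1 can be sketched in a few lines of Python. The following is an illustrative reconstruction for a pure path (no side branches), with simplified wording and edge sampling; it is not the released TREECUT generator:

```python
import random

def make_problem(ans_depth, cut_depth=None, seed=0):
    """Build a path of variables x1..x_ansDepth; each edge is a linear formula.

    Setting cut_depth (the distance from the questioned variable to the
    removed edge) deletes one necessary condition, making the problem
    unanswerable, as in Figure 1.
    """
    rng = random.Random(seed)
    values = {i: rng.randint(5, 15) for i in range(1, ans_depth + 1)}
    facts = [f"x1 costs {values[1]} dollars."]  # the edge from the root
    for i in range(2, ans_depth + 1):
        # Each edge relates consecutive variables by a small linear formula.
        a, b = rng.choice([1, 2, 3]), rng.choice([1, 2, 3])
        rhs = a * values[i] - b * values[i - 1]
        facts.append(f"{a} x{i} minus {b} x{i - 1} equals {rhs} dollars.")
    if cut_depth is not None:
        # Remove the edge cut_depth steps above the questioned variable.
        del facts[ans_depth - cut_depth - 1]
    return facts, f"How much does x{ans_depth} cost?"
```

Removing any single edge on the path leaves the questioned variable underdetermined, since every variable below the cut then satisfies only one linear relation.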
Our analysis highlights that deeper or more complex trees, composite item names, and removing a necessary condition near the middle of a path all increase the likelihood of hallucinations.

# 2 Related Work

**Math Word Problem Benchmark** Numerous math word problem datasets of varying difficulty have been proposed in previous research, with notable examples including GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021).

Many benchmarks have been developed to measure the robustness of mathematical reasoning. Patel et al. (2021); Kumar et al. (2021); Xu et al. (2022); Li et al. (2024); Zhou et al. (2024b); Yu et al. (2023); Shi et al. (2023) perturb or rewrite existing math word problems for this purpose.

Liu et al. (2021) utilize tree structures to represent and manipulate mathematical expressions during the reverse-operation-based data augmentation process for MWP solving. Opedal et al. (2024) introduced MathGAP, a framework for evaluating LLMs using synthetic math word problems with controllable proof tree characteristics. In contrast to their approach, the tree structure in our problem-generation procedure is fundamentally different. In our work, each node represents a variable, and the questioned variable appears as a leaf. In their work, however, each node represents a logical statement, with the answer represented by the root. More importantly, we focus on unanswerable math word problems, an aspect that their study did not address.

**Unanswerable Math Problems** Yin et al. (2023) introduced SelfAware, consisting of unanswerable questions from five diverse categories. It includes fewer than 300 unanswerable mathematical problems. Li et al. (2024); Zhou et al. (2024a) generate unanswerable questions by prompting GPT-4 to eliminate a necessary condition from the original problem, and then the modified questions are further checked or refined by human annotators. Ma et al. (2024) prompt GPT-4 to modify problems from GSM8K.
Sun et al. (2024) task human annotators to modify original questions in existing MWP datasets to make them unanswerable, creating a dataset composed of 2,600 answerable questions and 2,600 unanswerable questions. + +# 3 TREECUT: a Synthetic (Un)answerable Math Word Problem Dataset + +For the purpose of our investigation, we aim to have full control over the various aspects that determine the underlying structure of a math word problem: the name of the entities, the numeric values, and the complexity of the problem. Furthermore, we seek to reliably generate unanswerable problems by precisely removing specific necessary conditions of our choosing. + +To this end, we start with a special kind of answerable math word problem that can be represented as a tree, as illustrated in Figure 1. Within such a tree, each non-root node represents a variable, while the root is a uniquely reserved node. An edge from root gives value to a variable, while + +
| ansDepth | Llama-8B | Llama-70B | Qwen-7B | Qwen-72B | GPT-4o | o3-mini |
|----------|----------|-----------|---------|----------|--------|---------|
| 2 | 80.2% | 24.6% | 84.6% | 59.8% | 12.0% | 44.0% |
| 4 | 86.2% | 40.2% | 90.4% | 82.8% | 18.0% | 25.2% |
| 6 | 86.0% | 63.4% | 95.6% | 88.4% | 47.4% | 19.2% |
| 8 | 84.2% | 65.0% | 93.4% | 85.2% | 64.0% | 25.6% |
+ +Table 1: Percentage of hallucination of various LLMs at different ansDepth values for unanswerable problems, zero-shot prompting + +
| ansDepth | Llama-8B | Llama-70B | Qwen-7B | Qwen-72B | GPT-4o | o3-mini |
|----------|----------|-----------|---------|----------|--------|---------|
| 2 | 72.8% | 33.6% | 80.4% | 55.4% | 18.4% | 3.2% |
| 4 | 79.0% | 57.6% | 94.6% | 84.8% | 28.8% | 2.4% |
| 6 | 79.6% | 72.4% | 92.6% | 84.8% | 41.8% | 3.6% |
| 8 | 78.6% | 76.8% | 94.4% | 83.0% | 51.0% | 3.0% |
+ +Table 2: Percentage of hallucination of various LLMs at different ansDepth values for unanswerable problems, few-shot prompting + +an edge between two variables represents a linear formula of the two neighboring nodes. Given such a tree, any variable can be calculated following the unique path from the root to the node that represents the variable. Such a solving procedure does not require solving a linear equation system, as the solution only consists of carrying out basic arithmetic operations along the path. To guarantee that the arithmetic operations are well within the capacity of current frontier LLMs, we further restrict the unit price of each food item to be an integer between 5 and 15, and the coefficients of each linear equation taking non-zero integer values between -3 and 3. Finally, variables are randomly mapped to items, and then the formulas are translated to natural language using templates. The complete generation procedure, along with the templates used, is provided in Appendix A. + +From an answerable math word problem described above, we generate an unanswerable problem by removing an edge along the path from the root to the questioned variable. In Figure 1, $x_{3}$ is the questioned variable. Along the path to the root, we remove the edge between $x_{1}$ and $x_{2}$ (denoted by a cut), rendering $x_{2}$ and $x_{3}$ undetermined, thus making the question unanswerable, as all we know about $x_{2}$ and $x_{3}$ is one single linear equation. A key benefit of such a generation procedure is that the distance from the questioned variable to the cut is also fully controlled, as we will see that this factor plays an important role in triggering LLM hallucination. + +In summary, we can control the structure of problems via the following parameters: + +- numVars: total number of variables, +- ansDepth: distance from the root to the ques + +tioned variable, + +- compositeName: boolean, whether the items in the question have composite names (e.g. 
"a burger at Bistro Nice" versus "a burger"), +- cutDepth: distance from the questioned variable to the cut, if an unanswerable problem is to be generated. + +Appendix A contains the detailed problem generation algorithm. + +# 4 Experiments + +We evaluate several state-of-the-art LLMs using TREECUT. Additionally, we analyze the hallucination rate of GPT-40 on unanswerable problems generated under different parameter configurations of TREECUT. + +# 4.1 Experimental Setup + +For each set of generation parameters, we randomly generate 500 problems. Unless stated otherwise, we employ a zero-shot prompting template that explicitly directs the model to indicate when a question is unanswerable due to insufficient conditions. A chain-of-thought system message is incorporated for all models except o3-mini1. + +# 4.2 Evaluating LLMs + +In the first set of experiments, we generate unanswerable math word problems of varying difficulty to evaluate the following LLMs: Llama 3.1 Instruct with 8B and 70B parameters(Dubey et al., 2024), Qwen2.5 Instruct with 7B and 72B parameters(Yang et al., 2024), GPT-4o(Achiam et al., 2023), and o3-mini(OpenAI, 2025). + +
| ansDepth | Llama-8B | Llama-70B | Qwen-7B | Qwen-72B | GPT-4o | o3-mini |
|----------|----------|-----------|---------|----------|--------|---------|
| 2 | 68% (14%) | 95% (1%) | 87% (2%) | 95% (1%) | 99% (1%) | 100% (0%) |
| 4 | 28% (12%) | 82% (6%) | 31% (6%) | 86% (6%) | 94% (0%) | 100% (0%) |
| 6 | 17% (16%) | 83% (3%) | 12% (9%) | 80% (7%) | 85% (3%) | 100% (0%) |
| 8 | 5% (12%) | 76% (7%) | 7% (10%) | 68% (8%) | 84% (2%) | 100% (0%) |
Table 3: Accuracy of various LLMs at different ansDepth levels for answerable problems. The percentage in parentheses represents the proportion of answerable questions incorrectly identified as unanswerable.

Table 1 summarizes the results. None of the LLMs gives satisfactory results. Llama 3.1 8B, Qwen2.5 7B and 72B barely have any success identifying unanswerable problems. Llama 3.1 70B and GPT-4o struggle with more complex problems (ansDepth = 6, 8). o3-mini has the lowest hallucination rate for ansDepth = 6, 8. However, for the easiest case where ansDepth = 2 (in this setting, only 4 variables are mentioned in each problem), o3-mini displays a bias toward making hallucinatory assumptions (see Appendix C.2 for examples).

To further investigate whether the LLMs face intrinsic challenges in recognizing unanswerable math word problems, we conduct another set of experiments using few-shot prompting. For each unanswerable problem, we construct a few-shot prompt by randomly selecting 3 answerable and 3 unanswerable problems, each accompanied by a full solution path and the correct final answer. We use sample size $n = 500$. Results are summarized in Table 2. o3-mini greatly benefits from few-shot prompting, which is not surprising given our analysis in Appendix C.2. For shorter problems, o3-mini tends to recognize the lack of conditions during reasoning, but chooses to make unreasonable assumptions to arrive at a final answer. Few-shot examples guide it to refrain from doing so. The hallucination rates of the other models remained largely unchanged. This suggests that the five models besides o3-mini face intrinsic challenges in recognizing unanswerable math word problems.

To investigate whether the unsatisfactory accuracy of identifying unanswerable problems stems from an inability to perform the necessary mathematical operations, we evaluate the LLMs on the answerable counterparts of the unanswerable questions using the same zero-shot prompting template.
For this experiment, a sample size of $n = 100$ is used. We observe that almost every model displays a significant gap between its ability to solve answerable problems and its ability to identify unanswerable ones. For instance, GPT-4o correctly solves $84\%$ of answerable problems for ansDepth $= 8$, but only correctly recognizes $36\%$ of unanswerable problems.

# 4.3 Unanswerable Problem Structure and Hallucination

For a more fine-grained investigation of LLMs' hallucination behavior under different structures of unanswerable problems, we analyze GPT-4o's hallucination rate on unanswerable problems generated under different parameter choices of numVars, ansDepth, compositeName and cutDepth.

![](images/3709266e01a678567aebc44d3a47484255732ca6a236d3e7f8ec37734e273018.jpg)
Figure 2: Hallucination percentage under different configurations of unanswerable problems, plotted against varying ansDepth.

**Tree Structure and Item Names** To investigate the effect of (i) a deeper tree structure, (ii) a more complex tree structure, and (iii) composite item names, we consider the following parameter configurations:

- ansDepth $\in$ {4, 5, 6, 7, 8}, which controls the depth of the questioned variable,
- cutDepth = [ansDepth/2],
- numVars = ansDepth + 2 (generates a more complex tree structure, with conditions unrelated to the questioned variable) or numVars = ansDepth (the tree structure degenerates into a single path),
- compositeName: true or false.

There are $5 \times 2 \times 2 = 20$ configurations in total. We randomly generate 500 unanswerable problems for each configuration, and summarize GPT-4o's hallucination rate in Figure 2. In the figure,

- orange lines represent a complex tree structure,
- blue lines represent a simple tree structure,
- solid lines stand for composite item names,
- dashed lines stand for simple item names.
Examining each line individually, we observe that the hallucination rate increases as the depth of the questioned variable grows. Comparing solid and dashed lines of the same color, composite item names consistently lead to a higher likelihood of hallucination than simple item names across different ansDepth values. Comparing orange and blue lines of the same linestyle, a more complex tree structure consistently results in a higher likelihood of hallucination.

![](images/d8157f1ecd77546cc33e7d1e6d176a517dc6799e130a7053e4647afddacf2620.jpg)
Figure 3: Hallucination percentage versus cutDepth. Left panel has ansDepth = 7. Right panel has ansDepth = 8.

![](images/38770e363fbba03cf718d40ac85af23948136d31e63b3c8f6789123947fc0c17.jpg)

**Location of the Cut** For each unanswerable problem, the cut always happens along the path from the root to the questioned variable. Does the location of the cut change the hallucination rate? We vary cutDepth from 1 to 7 while keeping ansDepth $= 8$ and other parameters fixed. In the right panel of Figure 3, we see that cutDepth $= 3, 4, 5, 6$ all trigger over $60\%$ hallucination for GPT-4o (with cutDepth $= 5$ triggering over $70\%$), while cutDepth $= 1, 2, 7$ each trigger less than $50\%$ hallucination. This means that GPT-4o is more confused when the cut happens around the middle of the path than when it happens near the root or the questioned variable.

# 4.4 Conclusion of Experiments

Our findings indicate that the unanswerable math word problems generated by TREECUT effectively induce hallucinations in large language models, including GPT-4o and o3-mini, with rates of $64\%$ and $44\%$ in their respective worst-case scenarios. Focusing on GPT-4o, we further observe that hallucinations are more likely to occur when the problem exhibits (i) a deeper tree structure, (ii) a more complex tree structure, (iii) composite item names, or (iv) a cut positioned around the middle of the path.
These results underscore the challenges LLMs face in handling unanswerable math problems. + +# Limitations + +Our synthetic dataset is specifically designed for math word problems, representing only a small subset of the broader field of mathematics. Additionally, our evaluations are based on zero-shot and few-shot chain-of-thought prompting. We do not explore alternative prompting techniques commonly used in LLM-based mathematical reasoning studies, which may impact performance comparisons. + +# Acknowledgements + +We thank the anonymous reviewers for their valuable feedback, which helped improve the quality of this work. + +# References + +Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. +Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. +Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783. +Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874. +Bowen Jiang, Yangxinyu Xie, Zhuoqun Hao, Xiaomeng Wang, Tanwi Mallick, Weijie J Su, Camillo J Taylor, and Dan Roth. 2024. A peek into token bias: Large language models are not yet genuine reasoners. arXiv preprint arXiv:2406.11050. +Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2021. Adversarial examples for evaluating math word problem solvers. arXiv preprint arXiv:2109.05925. 
+Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. 2024. Gsm-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem solvers. arXiv preprint arXiv:2402.19255. + +Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2021. Roda: Reverse operation based data augmentation for solving math word problems. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:1-11. +Jingyuan Ma, Damai Dai, Lei Sha, and Zhifang Sui. 2024. Large language models are unconscious of unreasonability in math problems. arXiv preprint arXiv:2403.19346. +Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar. 2024. Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models. arXiv preprint arXiv:2410.05229. +Andreas Opedal, Haruki Shirakami, Bernhard Schölkopf, Abulhair Saparov, and Mrinmaya Sachan. 2024. Mathgap: Out-of-distribution evaluation on problems with arbitrarily complex proofs. arXiv preprint arXiv:2410.13502. +OpenAI. 2025. Openai o3 mini. Accessed: Feb. 5, 2025. +Arkil Patel, Satwik Bhattachamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080-2094. +Asir Saadat, Tasmia Binte Sogir, Md Taukir Azam Chowdhury, and Syem Aziz. 2024. When not to answer: Evaluating prompts on gpt models for effective abstention in unanswerable math word problems. arXiv preprint arXiv:2410.13029. +Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Scharli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In Proceedings of the 40th International Conference on Machine Learning, pages 31210-31227. 
+YuHong Sun, Zhangyue Yin, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Hui Zhao. 2024. Benchmarking hallucination in large language models based on unanswerable math word problem. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2178-2188. +Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837. + +Jialiang Xu, Mengyu Zhou, Xinyi He, Shi Han, and Dongmei Zhang. 2022. Towards robust numerical question answering: Diagnosing numerical capabilities of nlp systems. arXiv preprint arXiv:2211.07455. +An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. 2024. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122. +Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuan-Jing Huang. 2023. Do large language models know what they don't know? In *Findings of the Association for Computational Linguistics: ACL* 2023, pages 8653-8665. +Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284. +Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. 2023. 
Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921.
+Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F Wong, Xiaowei Huang, Qiufeng Wang, and Kaizhu Huang. 2024a. Is your model really a good math reasoner? evaluating mathematical reasoning with checklist. arXiv preprint arXiv:2407.08733.
+Zihao Zhou, Qiufeng Wang, Mingyu Jin, Jie Yao, Jianan Ye, Wei Liu, Wei Wang, Xiaowei Huang, and Kaizhu Huang. 2024b. Mathattack: Attacking large language models towards math solving ability. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19750-19758.
+
+Algorithm 1 Generating Math Word Problem using Random Tree
+Require: numVars $\geq$ ansDepth $\geq 2$
+Require: unanswerable $\in$ {true, false}, order $\in$ {"forward", "backward", "random"}
+1: if unanswerable $=$ true then
+Require: cutDepth: int, satisfying $1 \leq$ cutDepth $<$ ansDepth
+2: end if
+$\triangleright$ (i) Sample a dictionary of variable values
+3: varDict $\leftarrow \{\}$
+4: for $i \gets 1$ to numVars do
+5: Sample an integer $v \in [5, 15]$
+6: varDict$[x_i] \gets v$
+7: end for
+$\triangleright$ (ii) Build the random tree
+8: Assign root as the parent of $x_1$
+9: for $i \gets 2$ to ansDepth do
+10: Assign $x_{i-1}$ as the parent of $x_i$
+11: end for $\triangleright$ Finish building the path from the root to the questioned variable
+$\triangleright$ Assign the remaining nodes
+12: for $i \gets$ ansDepth $+1$ to numVars do
+13: Randomly select a node $x_p$ in the tree
+14: Assign $x_p$ as the parent of $x_i$
+15: end for
+$\triangleright$ (iii) Get the list of all edges via a breadth-first traversal
+16: edgeList $\leftarrow$ the list of edges collected by a breadth-first traversal (see Algorithm 2)
+$\triangleright$ (iv) For unanswerable problems, create the cut
+17: if unanswerable $=$ true then
+18: Remove $(x_{\text{ansDepth}-\text{cutDepth}-1}, x_{\text{ansDepth}-\text{cutDepth}})$ from edgeList
+19: end if
+$\triangleright$ (v) Generate a formula for each edge, and store it in formulaList
+20: formulaList $\leftarrow []$
+21: for edge $(x_i, x_j)$ in edgeList do
+22: Sample $a, b \in \{-3, -2, -1, 1, 2, 3\}$
+23: Define formula $\leftarrow a \cdot x_i + b \cdot x_j = a \cdot \text{varDict}[x_i] + b \cdot \text{varDict}[x_j]$
+24: Append formula to formulaList $\triangleright$ So that formulaList has the same order as edgeList
+25: end for
+$\triangleright$ (vi) Adjust the ordering of formulaList according to order
+26: if order $=$ "backward" then
+27: Reverse formulaList
+28: end if
+29: if order $=$ "random" then
+30: Randomly shuffle formulaList
+31: end if
+32: return formulaList $\triangleright$ Formulas serving as the conditions of the problem
+
+Algorithm 1 generates formulaList, which contains the formulas that will serve as the conditions of the problem. To translate these into natural language, item names are sampled according to the compositeName option, and formulaList is then rendered using pre-defined templates. The question sentence is simply "what is the price of {item name of the questioned variable}".
+
+We want to point out that although all the variables are assigned a value in varDict, this is purely for the sake of (i) subsequently generating the random formulas and (ii) guaranteeing that all calculable variables have values between 5 and 15. When unanswerable $= \text{true}$, the cut guarantees that the problem is unanswerable.
+
+In the following, we also detail the simple breadth-first traversal algorithm for getting all the edges from the tree, which enables us to control the order of the conditions in the problem.
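The generation procedure above can be sketched in a few lines of Python. This is our own illustration, not the authors' code: the name `generate_mwp`, the `(coefficients, constant)` representation of a formula, and the treatment of the root edge as a direct price statement (e.g., "A BLT sandwich at Urban Plate costs 13 dollars") are assumptions.

```python
import random

def generate_mwp(num_vars, ans_depth, unanswerable=False, cut_depth=1,
                 order="forward", seed=0):
    """Sketch of Algorithm 1: sample values, build a random tree over
    x_1..x_n (node 0 plays the role of the root), and emit one linear
    formula per edge."""
    assert num_vars >= ans_depth >= 2
    if unanswerable:
        assert 1 <= cut_depth < ans_depth
    rng = random.Random(seed)

    # (i) sample a dictionary of variable values
    var_dict = {i: rng.randint(5, 15) for i in range(1, num_vars + 1)}

    # (ii) build the random tree: a path root -> x_1 -> ... -> x_ansDepth,
    # then attach the remaining variables to random existing nodes
    parent = {1: 0}
    for i in range(2, ans_depth + 1):
        parent[i] = i - 1
    for i in range(ans_depth + 1, num_vars + 1):
        parent[i] = rng.choice(list(parent))

    # (iii) collect all edges by a breadth-first traversal (Algorithm 2)
    children = {}
    for c, p in sorted(parent.items()):
        children.setdefault(p, []).append(c)
    edge_list, queue = [], [0]
    while queue:
        node = queue.pop(0)
        for child in children.get(node, []):
            edge_list.append((node, child))
            queue.append(child)

    # (iv) for unanswerable problems, cut one edge on the root-to-question path
    if unanswerable:
        edge_list.remove((ans_depth - cut_depth - 1, ans_depth - cut_depth))

    # (v) one formula per edge, stored as ({variable: coefficient}, constant);
    # the root edge states x_j's price directly (our assumption)
    formulas = []
    for i, j in edge_list:
        if i == 0:
            formulas.append(({j: 1}, var_dict[j]))
        else:
            a = rng.choice([-3, -2, -1, 1, 2, 3])
            b = rng.choice([-3, -2, -1, 1, 2, 3])
            formulas.append(({i: a, j: b}, a * var_dict[i] + b * var_dict[j]))

    # (vi) adjust the ordering of the conditions
    if order == "backward":
        formulas.reverse()
    elif order == "random":
        rng.shuffle(formulas)
    return formulas, var_dict
```

Because every formula is generated from varDict, each condition is consistent with the sampled values by construction; the cut only removes information, so the remaining conditions stay mutually consistent while leaving the questioned variable undetermined.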
+
+Algorithm 2 Breadth-First Traversal to Get Edges
+Require: root: the root of a tree
+$\triangleright$ Get the list of all edges via a breadth-first traversal
+1: edgeList $\leftarrow []$, q $\leftarrow$ a queue containing root
+2: while q is not empty do
+3: node $\leftarrow$ q.dequeue()
+4: for child $\in$ node.children do
+5: Add (node, child) to edgeList
+6: Add child to q
+7: end for
+8: end while
+9: return edgeList
+
+Given a formulaList, each formula is translated via the following template:
+
+# Formula Translation Template
+
+Suppose $x_1$ stands for {dish1} at {restaurant1}, and $x_2$ stands for {dish2} at {restaurant2}.
+
+1. If the formula is $a_1x_1 + a_2x_2 = b$ where $a_1, a_2 > 0$, then the formula will be translated to: "$a_1$ {dish1} at {restaurant1} and $a_2$ {dish2} at {restaurant2} cost $b$ dollars".
+2. If the formula is $a_1x_1 - a_2x_2 = 0$ where $a_1, a_2 > 0$, then the formula will be translated to: "The price of $a_1$ {dish1} at {restaurant1} is the same as the price of $a_2$ {dish2} at {restaurant2}".
+3. If the formula is $a_1x_1 - a_2x_2 = b$ where $a_1, a_2 > 0$ and $b > 0$, then the formula will be translated to either "$a_1$ {dish1} at {restaurant1} cost $b$ dollars more than $a_2$ {dish2} at {restaurant2}" or "$a_2$ {dish2} at {restaurant2} cost $b$ dollars less than $a_1$ {dish1} at {restaurant1}".
+4. The case of $b < 0$ is similar to the case above.
+
+# B Details of Experiments
+
+# B.1 Prompt Template
+
+Below is the prompt template used for evaluation, a zero-shot prompting template with a chain-of-thought system message. For reasoning models like o3-mini, we do not use the chain-of-thought system message.
+
+# Zero-shot Evaluation Prompt Template
+
+```txt
+// system message (for non-reasoning models)
+As an expert problem solver, solve step by step the following mathematical questions.
+// user message
+Please solve the following math question, and then answer in the form 'Answer: x'. If the known conditions are not sufficient to answer the question, please answer in the form 'Answer: unknown.'
+Question: {{question}}
+Your solution:
+```
+
+# B.2 Evaluation

+For all models other than o3-mini, we set {max_tokens=4000, temperature=0}. o3-mini does not allow setting the temperature, so we set {max_completion_tokens=32000, reasoning_effort="high"}. GPT-4o and o3-mini are evaluated by calling the OpenAI API. The exact version numbers are "gpt-4o-2024-08-06" and "o3-mini-2025-01-31", respectively.
+
+When evaluating the model's answer to an unanswerable problem, we convert the response to lower case, and if "unknown" is mentioned after the last occurrence of the trigger word "answer", the response is counted as a correct answer.
+
+# C Examples of Hallucinatory Answers
+
+# C.1 GPT-4o
+
+Example 1: A BLT sandwich at Urban Plate costs 13 dollars. 3 burgers at Taste Good Cuisine cost 6 dollars more than 3 fruit tarts at Taste Good Cuisine. A BLT sandwich at Taste Good Cuisine costs 5 dollars. A fruit tart at Taste Good Cuisine costs 9 dollars less than 3 burgers at Urban Plate. 2 burgers at Taste Good Cuisine cost 8 dollars more than a fruit tart at Urban Plate. Question: how much does a burger at Taste Good Cuisine cost?
+
+# Model answer:
+
+To solve this problem, we need to set up equations based on the information given and solve for the unknowns.
+
+Let's define the variables:
+
+- Let $B_T$ be the cost of a burger at Taste Good Cuisine.
+- Let $F_T$ be the cost of a fruit tart at Taste Good Cuisine.
+- Let $B_U$ be the cost of a burger at Urban Plate.
+- Let $F_U$ be the cost of a fruit tart at Urban Plate.
+
+We are given the following information:
+
+1. A BLT sandwich at Urban Plate costs 13 dollars.
+2. 3 burgers at Taste Good Cuisine cost 6 dollars more than 3 fruit tarts at Taste Good Cuisine.
+$$3B_T = 3F_T + 6$$
+3. A BLT sandwich at Taste Good Cuisine costs 5 dollars.
+4.
A fruit tart at Taste Good Cuisine costs 9 dollars less than 3 burgers at Urban Plate.
+$$F_T = 3B_U - 9$$
+5. 2 burgers at Taste Good Cuisine cost 8 dollars more than a fruit tart at Urban Plate.
+$$2B_T = F_U + 8$$
+
+We need to find the cost of a burger at Taste Good Cuisine, $B_T$.
+
+From equation (2), we can express $F_T$ in terms of $B_T$:
+
+$$F_T = B_T - 2$$
+
+Substitute this expression for $F_T$ into equation (4):
+
+$$B_T - 2 = 3B_U - 9 \implies B_T = 3B_U - 7$$
+
+Now, let's substitute $F_T = B_T - 2$ into equation (5):
+
+$$2B_T = (B_T - 2) + 8 \implies 2B_T = B_T + 6 \implies B_T = 6$$
+
+Thus, the cost of a burger at Taste Good Cuisine is 6 dollars.
+
+Answer: 6
+
+Mistake: When the model did "Now, let's substitute $F_T = B_T - 2$ into equation (5)", it actually confused $F_T$ with $F_U$ in equation (5).
+
+Example 2: 3 Greek salads at Texas BBQ cost 6 dollars less than 3 pieces of cheese cake at Texas BBQ. 3 burgers at Texas BBQ and 2 pieces of cheese cake at Texas BBQ cost 59 dollars. 3 burgers at Taste Good Cuisine and 2 Greek salads at Taste Good Cuisine cost 42 dollars. 2 burgers at Texas BBQ and a piece of cheese cake at Taste Good Cuisine cost 32 dollars. 3 burgers at Texas BBQ cost 15 dollars more than 3 fruit tarts at Taste Good Cuisine. A burger at Taste Good Cuisine costs 10 dollars.
A fruit tart at Texas BBQ costs 2 dollars more than a Greek salad at Texas BBQ. Question: how much does a Greek salad at Texas BBQ cost?
+
+# Model answer:
+
+To solve this problem, we need to set up equations based on the information given and solve for the cost of a Greek salad at Texas BBQ. Let's define the variables:
+
+- $G$: Cost of a Greek salad at Texas BBQ
+- $C$: Cost of a piece of cheesecake at Texas BBQ
+- $B$: Cost of a burger at Texas BBQ
+- $F$: Cost of a fruit tart at Taste Good Cuisine
+- $B_T$: Cost of a burger at Taste Good Cuisine (given as $10)
+- $F_T$: Cost of a fruit tart at Texas BBQ
+
+Now, let's translate the information into equations:
+
+1. "3 Greek salads at Texas BBQ cost 6 dollars less than 3 pieces of cheesecake at Texas BBQ."
+$$3G = 3C - 6$$
+2. "3 burgers at Texas BBQ and 2 pieces of cheesecake at Texas BBQ cost 59 dollars."
+$$3B + 2C = 59$$
+3. "3 burgers at Taste Good Cuisine and 2 Greek salads at Taste Good Cuisine cost 42 dollars."
+$$3B_T + 2G = 42$$
+
+Since $B_T = 10$, substitute:
+$$3(10) + 2G = 42 \implies 30 + 2G = 42 \implies 2G = 12 \implies G = 6$$
+
+4. "2 burgers at Texas BBQ and a piece of cheesecake at Taste Good Cuisine cost 32 dollars."
+$$2B + C_T = 32$$
+
+(Note: We don't have enough information about $C_T$, the cost of cheesecake at Taste Good Cuisine, so we can't use this equation directly.)
+
+5. "3 burgers at Texas BBQ cost 15 dollars more than 3 fruit tarts at Taste Good Cuisine."
+$$3B = 3F + 15$$
+
+6. "A fruit tart at Texas BBQ costs 2 dollars more than a Greek salad at Texas BBQ."
+$$F_T = G + 2$$
+
+Now, let's solve the equations:
+
+From equation 3, we found $G = 6$.
+
+Let's verify if this value is consistent with other equations:
+
+- From equation 1:
+$$3G = 3C - 6 \implies 3(6) = 3C - 6 \implies 18 = 3C - 6 \implies 3C = 24 \implies C = 8$$
+- From equation 2:
+$$3B + 2C = 59 \implies 3B + 2(8) = 59 \implies 3B + 16 = 59 \implies 3B = 43 \implies B = \frac{43}{3}$$
+(This value seems unusual, but let's proceed.)
+- From equation 5:
+$$3B = 3F + 15 \implies \frac{43}{3} = F + 5 \implies F = \frac{43}{3} - 5 = \frac{43}{3} - \frac{15}{3} = \frac{28}{3}$$
+- From equation 6:
+$$F_T = G + 2 \implies F_T = 6 + 2 = 8$$
+
+The calculations are consistent with the given conditions. Therefore, the cost of a Greek salad at Texas BBQ is:
+
+Answer: 6
+
+Mistake: At step 3, the model calculated the value of "$G$", but from the start of the solution, "$G$" denotes the "cost of a Greek salad at Texas BBQ", not the cost of a Greek salad at Taste Good Cuisine.
+
+# C.2 o3-mini
+
+Example 1: A pizza at Taste Good Cuisine and 3 lasagnas at Taste Good Cuisine cost 48 dollars. A lasagna at Urban Plate costs 6 dollars. 3 lasagnas at Urban Plate and 3 pizzas at Urban Plate cost 63 dollars. Question: how much does a lasagna at Taste Good Cuisine cost?
+
+# Model answer:
+
+Let $p$ be the price of a pizza at Taste Good Cuisine and $l$ be the price of a lasagna at Taste Good Cuisine. We are given:
+
+1. $p + 3l = 48$. (Taste Good Cuisine)
+2. A lasagna at Urban Plate costs 6 dollars.
+3. 3 lasagnas at Urban Plate and 3 pizzas at Urban Plate cost 63 dollars.
+
+From (3), the total cost of 3 lasagnas at Urban Plate is $3 \times 6 = 18$ dollars. This means that the 3 pizzas at Urban Plate cost $63 - 18 = 45$ dollars, so one pizza at Urban Plate costs $45 / 3 = 15$ dollars.
+
+A common (though unstated) assumption in these puzzles is that dishes with the same name cost the same across restaurants. Thus, we assume that a pizza at Taste Good Cuisine also costs 15 dollars. Substitute $p = 15$ into equation (1):
+
+$$15 + 3l = 48 \implies 3l = 48 - 15 = 33 \implies l = 33 / 3 = 11$$
+
+Answer: 11
+
+Mistake: The model made an assumption: "A common (though unstated) assumption in these puzzles is that dishes with the same name cost the same across restaurants." However, this cannot be a mathematically valid assumption, because following it the model arrives at the conclusion that the price of a lasagna at Urban Plate is not the same as that at Taste Good Cuisine (6 dollars versus 11 dollars).
+
+We randomly selected 20 problems from the ansDepth $= 2$ set that triggered hallucinatory answers from o3-mini and manually checked the model answers; all of them share this same mistake. Below is a similar example:
+
+Example 2: A scrambled egg at Bistro Nice and a piece of cheese cake at Mike's Place cost 21 dollars. 2 scrambled eggs at Bistro Nice cost 6 dollars more than 2 pieces of cheese cake at Bistro Nice. A scrambled egg at Mike's Place costs 6 dollars. Question: how much does a scrambled egg at Bistro Nice cost?
+
+# Model answer:
+
+Let S_B = price of a scrambled egg at Bistro Nice,
+C_B = price of a piece of cheesecake at Bistro Nice,
+C_M = price of a piece of cheesecake at Mike's Place.
+
+The problem gives:
+
+1. S_B + C_M = 21 dollars.
+2. Two scrambled eggs at Bistro Nice cost 6 dollars more than two cheesecakes at Bistro Nice:
+2S_B = 2C_B + 6 → S_B = C_B + 3.
+3. A scrambled egg at Mike's Place costs 6 dollars (this is extra information).
+
+Without further assumptions the system involves three unknowns (S_B, C_B, C_M) and only two equations. However, it is common in these problems to assume that when the same item is mentioned (here "cheesecake") it has a consistent price from one restaurant to the other. That is, we assume $C_M = C_B$.
+
+Under that assumption the first equation becomes:
+
+S_B + C_B = 21
+
+Then using S_B = C_B + 3 we substitute:
+
+(C_B + 3) + C_B = 21 → 2C_B + 3 = 21 → 2C_B = 18 → C_B = 9.
+
+Thus, S_B = 9 + 3 = 12.
+
+Answer: 12
+
+Mistake: Same as above.
+
+Example 3: 3 BLT sandwiches at Bistro Nice cost 6 dollars more than 3 pieces of cheese cake at Urban Plate. 3 burgers at Urban Plate cost 18 dollars less than 3 pieces of cheese cake at Bistro Nice. 2 BLT sandwiches at Urban Plate and 2 scrambled eggs at Urban Plate cost 34 dollars. A BLT sandwich at Urban Plate costs 8 dollars. 3 pieces of cheese cake at Bistro Nice cost 12 dollars more than 3 burgers at Bistro Nice. 2 burgers at Urban Plate cost 10 dollars less than 2 scrambled eggs at Bistro Nice. 3 pieces of cheese cake at Bistro Nice and a piece of cheese cake at Urban Plate cost 54 dollars. Question: how much does a piece of cheese cake at Urban Plate cost?
+
+# Model answer:
+
+Let CC_U be the cost of one piece of cheese cake at Urban Plate.
+
+Step 1.
From "3 BLT sandwiches at Bistro Nice cost 6 dollars more than 3 pieces of cheese cake at Urban Plate":
+
+3·BLT_B = 3·CC_U + 6 → BLT_B = CC_U + 2 (1)
+
+Step 2. From "3 burgers at Urban Plate cost 18 dollars less than 3 pieces of cheese cake at Bistro Nice": 3·BUR_U = 3·CC_B - 18 → With BUR_U determined later, we use this after finding BUR_U.
+
+Step 3. "2 BLT sandwiches at Urban Plate and 2 scrambled eggs at Urban Plate cost 34 dollars":
+
+2·BLT_U + 2·Eggs_U = 34, and we are given in Step 4 that BLT_U = 8.
+
+Thus, 2·8 + 2·Eggs_U = 34 → 16 + 2·Eggs_U = 34 → 2·Eggs_U = 18 → Eggs_U = 9.
+
+Step 4. "A BLT sandwich at Urban Plate costs 8 dollars":
+
+BLT_U = 8.
+
+Step 5. "3 pieces of cheese cake at Bistro Nice cost 12 dollars more than 3 burgers at Bistro Nice":
+
+3·CC_B = 3·BUR_B + 12 → CC_B = BUR_B + 4 (2)
+
+Step 6. "2 burgers at Urban Plate cost 10 dollars less than 2 scrambled eggs at Urban Plate":
+
+2·BUR_U = 2·Eggs_U - 10 → 2·BUR_U = 18 - 10 → 2·BUR_U = 8 → BUR_U = 4.
+
+Step 7. Now return to the burgers at Urban Plate in Step 2:
+
+3·BUR_U = 3·CC_B - 18. With BUR_U = 4,
+
+3·4 = 3·CC_B - 18 → 12 = 3·CC_B - 18 → 3·CC_B = 30 → CC_B = 10.
+
+Step 8. "3 pieces of cheese cake at Bistro Nice and a piece of cheese cake at Urban Plate cost 54 dollars":
+
+3·CC_B + CC_U = 54 → 3·10 + CC_U = 54 → 30 + CC_U = 54 → CC_U = 24.
+
+The question asks: "how much does a piece of cheese cake at Urban Plate cost?"
+
+Answer: 24
+
+Mistake: Step 6. "2 burgers at Urban Plate cost 10 dollars less than 2 scrambled eggs at Urban Plate". That is a hallucinatory condition.
The actual condition is "2 burgers at Urban Plate cost 10 dollars less than 2 scrambled eggs at Bistro Nice". \ No newline at end of file diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/images.zip b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..22a8f3f1d788d83d7da18be0336e1ad1dbee6dca --- /dev/null +++ b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a6c650f1c7bb2b848c4537467bf22f0a9fba38f0db373efd46db280680b2c73 +size 160046 diff --git a/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/layout.json b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..af3d0ca42982635962519fec1c4cb40dc790c758 --- /dev/null +++ b/ACL/2025/TreeCut_ A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:951085c6b7672d7cf485084a357cbfca5add732fbe4d33b1b103851c1a114165 +size 504293 diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_content_list.json b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9ca4d1aba76a3ec6eda6f49da4f48fc712cbecc8 --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:9efa511edf693e7cf224aa598b6664f3a9d4fe07da9835f6f6bed4de7e528b2d +size 158689 diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_model.json b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..02ba9425c59933237f4c354da1a2e62fdcd3eae3 --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1da86a51e03651e3cadbaed59c1a0245abbb4228fd01a48d5ef93619a55c7c34 +size 190019 diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_origin.pdf b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c2335634d090b00cd9b40624ed2bbbb196a426ce --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/202cfa5e-68f5-4c93-92ef-27e46a4e527a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a05cf6ef4f4f0d3e0aca29f4d57f32dcef583eed71af2b1f777d0e75ae1521a +size 684585 diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/full.md b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e4c2d6f7dabe414bbe3f39ccd4ef990e2b1810db --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/full.md @@ -0,0 +1,847 @@ +# Unique Hard Attention: A Tale of Two Sides + +Selim Jerad Anej Svete Jiaoda Li Ryan Cotterell {sjerad, anej.svete, jiaoda.li, ryan.cotterell}@ethz.ch + +ETH zürich + +"A wonderful fact to reflect upon, that leftmost and rightmost unique hard attention are constituted to be profoundly distinct." 
+
+# Abstract
+
+Understanding the expressive power of transformers has recently attracted attention, as it offers insights into their abilities and limitations. Many studies analyze unique hard attention transformers, where attention selects a single position that maximizes the attention scores. When multiple positions achieve the maximum score, either the rightmost or the leftmost of those is chosen. In this paper, we highlight the importance of this seemingly trivial choice. Recently, finite-precision transformers with both leftmost- and rightmost-hard attention were shown to be equivalent to Linear Temporal Logic (LTL). We show that this no longer holds with only leftmost-hard attention—in that case, they correspond to a strictly weaker fragment of LTL. Furthermore, we show that models with leftmost-hard attention are equivalent to soft attention, suggesting they may better approximate real-world transformers than right-attention models. These findings refine the landscape of transformer expressivity and underscore the role of attention directionality.
+
+# 1 Introduction
+
+Much work has recently been done on understanding the capabilities and limitations of transformers (Vaswani et al., 2017). Collectively, the body of work on the representational capacity of transformers has provided a nuanced picture of the landscape (Pérez et al., 2021; Hahn, 2020; Chiang and Cholak, 2022; Hao et al., 2022; Merrill et al., 2022a; Merrill and Sabharwal, 2023; Chiang et al., 2023; Yang et al., 2024a; Strobl et al., 2024; Svete and Cotterell, 2024; Nowak et al., 2024; Li and Cotterell, 2025, inter alia).$^{1}$ Any such investigation begins by choosing an idealization of the architecture. A common modeling decision is to use unique hard attention (UHA), which selects a single position maximizing the attention scores (Hahn, 2020; Hao et al., 2022; Barceló et al., 2024). In one of the first exact descriptions of UHA expressivity, Yang et al.
(2024a) show that UHA transformers (UHATs) with no positional encodings, strict future masking, and either leftmost or rightmost tiebreaking are equivalent to linear temporal logic, LTL. This connects UHATs to well-understood formalisms such as the star-free languages and counter-free automata. An incautious reading of Yang et al.'s result could lead one to generalize the equivalence to all UHATs. However, we zoom in on the overlooked choice of tiebreaking and show that it markedly impacts the model's expressivity.
+
+We show that UHATs with only leftmost tiebreaking are strictly less expressive by relating them to a fragment of LTL, LTL[◇]. We do so by adapting Yang et al.'s (2024a) proofs: We describe a variant of the B-RASP programming language defined therein, restricted to leftmost tiebreaking, and show it to be equivalent to LTL[◇]. Further, leveraging the recent results by Li and Cotterell (2025) that characterize finite-precision future-masked soft-attention transformers as equivalent to LTL[◇], we establish the equivalence of leftmost UHATs to standard softmax transformers. Moreover, we give explicit LTL[◇] formulas and partially ordered finite-state automata (FSAs) describing leftmost UHATs.
+
+# 2 Transformer Idealization
+
+This section introduces the idealization of the transformer analyzed throughout the paper.
+
+Finite Precision. Implemented on modern hardware, computations performed by transformers rely on a fixed number of bits. This makes finite precision a more realistic assumption than unbounded or growing (w.r.t. input length) precision (Barceló et al., 2024; Hahn, 2020; Hao et al., 2022; Merrill et al., 2022b; Merrill and Sabharwal, 2023).
Moreover, finite-precision positional encodings correspond precisely to monadic predicates in LTL formulas (Yang et al., 2024a), yielding a predictable and well-understood extension to our analysis. + +Unique hard attention. While the original transformer uses soft attention (Vaswani et al., 2017), theoretical work largely analyzes hard attention (Merrill et al., 2022b; Hao et al., 2022; Yang et al., 2024a; Barceló et al., 2024, inter alia). The precise implications of this modeling decision are still unclear, but our work, combined with Li and Cotterell's (2025) results, reduces the gap between both models: Soft-attention transformers are equivalent to leftmost UHATs while rightmost UHATs are more expressive. We contextualize our findings with some more related results in Tab. 1. + +Strict future masking. Future masking is standard in transformer-based language models. We focus on strict masking (where a position cannot attend to itself) as non-strict masking is known to reduce expressive power (Yang et al., 2024a). Moreover, residual connections still allow the model to incorporate information vertically across layers. + +# 3 The Best of UHA, The Worst of UHA + +This section provides a high-level overview, intuition, roadmap, and key implications of our results. + +# 3.1 Separation + +We begin by building an intuition as to why, with future masking, UHA with leftmost tiebreaking $\triangleleft$ is strictly less expressive than with rightmost tiebreaking $\triangleright$ . We illustrate this in B-RASP, a Boolean-valued programming language (Yang et al., 2024a), as the intermediary between LTL and UHATs. 
To follow the coming examples, we only need familiarity with the following attention operation:
+
+$$
+P(t) = \triangleleft_{t'}\left[t' < t, s(t')\right] v(t') : d(t) \tag{1}
+$$
+
+$\triangleleft_{t'}[t' < t, s(t')]$ denotes choosing (attending to) the leftmost $(\triangleleft)$ position $t'$ for which $t' < t$ holds (future masking) and $s(t') = 1$. If such a $t'$ exists, $P(t)$ is assigned the value of the predicate $v(t')$; otherwise, it is assigned a default value $d(t)$. This emulates leftmost UHA, where $t' < t$
For instance, $\triangleright$ attention can read the value immediately to the left of the current position, i.e., $v(t - 1)$ , as follows: + +$$ +P (t) = \triangleright_ {t ^ {\prime}} \left[ t ^ {\prime} < t, 1 \right] v \left(t ^ {\prime}\right): 0, \tag {5} +$$ + +but $\triangleleft$ attention cannot, as we would need $t' = t - 1$ to be the only position with $t' < t$ and $s(t') = 1$ for all $t \in [T]$ , which is impossible. + +This establishes a separation between $\mathbf{B}\text{-RASP}^{\mathsf{F}}$ , which is limited to leftmost tiebreaking $\triangleleft$ and future masking $\mathsf{F}$ , and the full B-RASP, leading to the following: + +Theorem 3.1 (Informal). Finite-precision future-masked $\triangleleft$ UHATs are weaker than $\triangleright$ UHATs. + +# 3.2 Characterizations + +In the remainder of the paper, we show that $\mathbf{B}\text{-RASP}^{\mathsf{F}}$ is equivalent to the fragment of LTL with only the $\diamond$ operator, denoted by $\mathbf{LTL}[\diamond]$ , which in turn is equivalent to partially ordered FSAs (POFA). This provides an exact characterization of leftmost UHATs: + +Theorem 3.2 (Informal). Finite-precision future-masked $\triangleleft$ UHATs are equivalent to LTL[◇]. + +In §6, we formalize the theorem with proofs intentionally made analogous to Yang et al.'s (2024a), in order to highlight the difference between $\triangleleft$ and $\triangleright$ . Additionally, in §7, we provide alternative proofs that directly translate $\triangleleft$ UHATs to LTL[◇] formulas and POFA. + +
TransformerLTLFO logicRegexMonoidAutomataNote
TF, TPfuture-maskedsoft attentionLTL[◇, ◆, S, U]FO [< ]star-freeaperiodiccounter-freeYang et al. (2024a, Thm. 5)
LTL[◇]PFO2 [< ]R-expressionR-trivialPOFALi and Cotterell (2025)
TFuture attentionLTL[◇]PFO2 [< ]R-expressionR-trivialPOFAThms. 6.1 and 6.2
TPLTL[◇]FFO2 [< ]L-expressionL-trivialRPOFAThm. E.1
+ +Table 1: Known equivalences of finite-precision transformers with no positional encodings to different formalism. ${\mathcal{T}}_{\blacktriangleright }^{\mathrm{F}}$ future-masked rightmost UHATs. ${\mathcal{T}}_{\blacktriangleleft }^{\mathrm{F}},{\mathcal{T}}_{\blacktriangleleft }^{\mathrm{P}}$ ,and ${\mathcal{T}}_{\blacktriangleleft }^{\mathrm{P}}$ are defined analogously for past masking and leftmost UHA. + +# 3.3 Implications + +Combining with Li and Cotterell's (2025) results that show that soft and average hard attention transformers $^{2}$ are equivalent to $\mathbf{LTL}[\diamond]$ as well, we discover the peculiar fact that with fixed precision, soft attention, average hard attention, and leftmost UHA are all equally expressive. + +This insight could shed light on certain observed phenomena in soft-attention transformers. For instance, Liu et al. (2023) find that transformers struggle with the flip-flop language, where the symbol following a "read" instruction must match the symbol following the most recent "write" instruction. Our results suggest that this difficulty arises because leftmost UHATs and thereby soft-attention transformers lack the ability to locate the most recent—rightmost—write instruction. + +Furthermore, the fact that rightmost UHA is strictly more expressive than other variants of transformers may partly explain the empirical success of positional encodings that bias attention toward recent tokens, such as ALiBi (Press et al., 2022), as they help approximate rightmost tiebreaking. + +# 4 Linear Temporal Logic + +An alphabet $\Sigma$ is a finite, non-empty set of symbols. The Kleene closure of $\Sigma$ is $\Sigma^{*} = \bigcup_{n=0}^{\infty} \Sigma^{n}$ , the set of all strings, where $\Sigma^{0} \stackrel{\text{def}}{=} \{\varepsilon\}$ contains only the empty string. A language $\mathbb{L}$ is a subset of $\Sigma^{*}$ . 
We treat a language recognizer as a function $R\colon \Sigma^{*} \rightarrow \{0,1\}$ whose language is $\mathbb{L}(R) \stackrel{\text{def}}{=} \{\pmb{w} \in \Sigma^{*} \mid R(\pmb{w}) = 1\}$. Two recognizers $R_1$ and $R_2$ are equivalent if and only if $\mathbb{L}(R_1) = \mathbb{L}(R_2)$.
+
+Linear temporal logic $\mathbf{LTL}[\lozenge, \blacklozenge, \mathbf{S}, \mathbf{U}]$ is an extension of Boolean logic that considers events over time and can express time-dependent properties (Pnueli, 1977). We define LTL over (finite) strings. Formulas in $\mathbf{LTL}[\lozenge, \blacklozenge, \mathbf{S}, \mathbf{U}]$ are composed of atomic formulas $\pi_{a}$ for $a\in \Sigma$, Boolean connectives $\wedge, \neg$,$^{3}$ and four temporal operators $\blacklozenge$ (past), $\lozenge$ (future), $\mathbf{S}$ (since), and $\mathbf{U}$ (until). We denote by LTL[◆] the fragment with only the $\blacklozenge$ operator and Boolean connectives and by LTL[◇] the fragment with only the $\lozenge$ operator and Boolean connectives.
+
+Given a string $\pmb{w} = w_{1}\dots w_{T}$, LTL formulas are interpreted at some position $t\in [T]$. We write $\pmb{w},t\models \psi$ to denote that $\psi$ is TRUE on $\pmb{w}$ at position $t$. The semantics for LTL[◆] are:$^{4}$
+
+- $\pmb{w},t\models \pi_a \iff w_t = a$;
+- $\pmb{w},t\models \psi_1\land \psi_2 \iff \pmb{w},t\models \psi_1$ and $\pmb{w},t\models \psi_2$;
+- $\pmb{w},t\models \neg \psi \iff \pmb{w},t\not\models \psi$;
+- $\pmb{w},t\models \blacklozenge \psi \iff \exists t' < t\colon \pmb{w},t'\models \psi$.
+
+To define string acceptance, we denote by $T + 1$ a position outside of the string and define
+
+$$
+\pmb{w} \models \psi \iff \pmb{w}, T + 1 \models \psi. \tag{6}
+$$
+
+# 5 B-RASP and B-RASP$^{\mathrm{F}}$
+
+We now introduce B-RASP in more detail. The input to a B-RASP program is a string $w \in \Sigma^*$ with $|w| = T$.
On such an input, a B-RASP program computes a sequence of Boolean vectors of size $T$, whose entries are indexed by $t \in [T]$ and written in parentheses, e.g., $P(t)$. Each symbol $w \in \Sigma$ gives rise to an atomic Boolean vector $Q_w$, defined for each $t \in [T]$ as follows:

$$
Q_{w}(t) = 1 \iff w_{t} = w \tag{7}
$$

To streamline notation, we denote the first $|\Sigma|$ vectors of the program, the $Q_w$, by $P_1, \dots, P_{|\Sigma|}$. The $(i+1)^{\text{th}}$ vector $P_{i+1} \stackrel{\text{def}}{=} P'$ is computed inductively using one of the following operations:

(1) Position-wise operation: $P'(t)$ is a Boolean combination of zero or more of $\{P_{i'}(t)\}_{i'=1}^i$.

(2) Attention operation: $P^{\prime}(t)$ can be one of:

$$
P^{\prime}(t) = \triangleleft_{t^{\prime}} [m(t, t^{\prime}), s(t, t^{\prime})]\, v(t, t^{\prime}) : d(t) \tag{8a}
$$

$$
P^{\prime}(t) = \triangleright_{t^{\prime}} [m(t, t^{\prime}), s(t, t^{\prime})]\, v(t, t^{\prime}) : d(t) \tag{8b}
$$

where:

- The mask predicate $m$ is defined either as $m(t, t') \stackrel{\mathrm{def}}{=} \mathbb{1}\{t' < t\}$ for strict future masking (F) or as $m(t, t') \stackrel{\mathrm{def}}{=} \mathbb{1}\{t' > t\}$ for strict past masking (P). Notice that the inequalities are strict, meaning that the current position is excluded from attention. This detail has been shown to increase expressivity compared to non-strict masking (Yang et al., 2024a; Li and Cotterell, 2025).$^{5}$
- The score predicate $s(t, t')$ is a Boolean combination of $\{P_{i'}(t)\}_{i' = 1}^i \cup \{P_{i'}(t')\}_{i' = 1}^i$.
- The value predicate $v(t, t')$ is defined analogously.
- The default value predicate $d(t)$ is a Boolean combination of values in $\{P_1(t), \ldots, P_i(t)\}$.

We use $\triangleleft$ to denote leftmost tiebreaking and $\triangleright$ to denote rightmost tiebreaking.
For $t\in [T]$, define the set of valid positions as:

$$
\mathcal{N}(t) \stackrel{\text{def}}{=} \left\{t^{\prime} \in [T] \mid m(t, t^{\prime}) = 1 \text{ and } s(t, t^{\prime}) = 1\right\}. \tag{9}
$$

The unique position to attend to is then selected as:

$$
t^{*} \stackrel{\text{def}}{=} \begin{cases} \min \mathcal{N}(t) & \text{if } \triangleleft \\ \max \mathcal{N}(t) & \text{if } \triangleright \end{cases}. \tag{10}
$$

Finally, the semantics of the attention operation are given by:

$$
P^{\prime}(t) \stackrel{\text{def}}{=} \begin{cases} v(t, t^{*}) & \text{if } |\mathcal{N}(t)| > 0 \\ d(t) & \text{otherwise} \end{cases}. \tag{11}
$$

Note that, by Lem. 12 and Prop. 11 of Yang et al. (2024a), we can rewrite every program $\mathcal{P}$ into an equivalent program in which every attention operation uses only unary scores and unary values, i.e., where $s(t,t^{\prime})$ and $v(t,t^{\prime})$ depend only on $t^{\prime}$.

$\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ is the restricted version of B-RASP with only leftmost tiebreaking and future masking.

To define $\mathcal{P}$'s language, we designate a final output vector $Y$ and the position $T$ as the output position, such that $Y(T) = 1$ signals acceptance.

$\mathbf{B\text{-}RASP}_{\triangleright}^{\mathrm{F}}$ is equivalent to finite-precision future-masked rightmost UHATs, $\mathcal{T}_{\triangleright}^{\mathrm{F}}$ (Yang et al., 2024a, Thms. 3 and 4). A similar claim, proved in App. C, holds for $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ and leftmost UHATs, $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$.

Theorem 5.1. For any UHAT in $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$, there exists an equivalent $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ program. For any $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ program, there exists an equivalent UHAT in $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$.

# 6 $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ Is Equivalent to LTL[◇]

We now formalize the equivalence of $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ and $\mathbf{LTL}[\diamond]$.
It rests on the following two theorems, whose proofs are provided in App. C. They are adapted from Yang et al. (2024a), with the differing parts highlighted in red.

Theorem 6.1. For any formula $\psi$ of $\mathbf{LTL}[\diamond]$, there is a $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ program with a Boolean vector $P_{\psi}$ such that, for any input $\pmb{w}$ of length $T$ and all $t \in [T]$, we have $\pmb{w}, t \models \psi \iff P_{\psi}(t) = 1$.

Theorem 6.2. For any Boolean vector $P$ of a $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ program $\mathcal{P}$, there is a formula $\psi_P$ of $\mathbf{LTL}[\diamond]$ such that, for any input $\pmb{w}$ of length $T$ and all $t\in [T]$, we have $P(t) = 1\iff \pmb{w},t\models \psi_{P}$.

Yang et al. (2024a, Thm. 15) establish an alternative proof of the equivalence between B-RASP and full LTL[◇, ◇, S, U] via counter-free automata, which recognize the class of star-free languages. Analogously, we demonstrate that $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ corresponds to POFAs, a subclass of counter-free automata that characterizes LTL[◇]. A translation from POFAs to $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ is provided in App. C.1.

# 7 Direct Descriptions of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$

We now give an alternative description of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ that translates it directly to LTL[◇] and POFAs.

# 7.1 Describing $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ with LTL[◇]

In a transformer, the contextual representation at layer $\ell$, $\pmb{x}_t^{(\ell)}$, determines a function that computes the next representation, $\pmb{x}_t^{(\ell+1)}$, from the unmasked symbols using the attention mechanism. In UHATs, this function is particularly simple: it computes $\pmb{x}_t^{(\ell+1)}$ by selecting the symbol with the highest attention score (as per the tiebreaking mechanism), $\pmb{x}_{t^*}^{(\ell)}$, and combining it with $\pmb{x}_t^{(\ell)}$ via the residual connection: $\pmb{x}_t^{(\ell+1)} = \pmb{x}_t^{(\ell)} + \pmb{x}_{t^*}^{(\ell)}$; see Fig. 1.
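As a concrete illustration, this layer update can be sketched in a few lines (a toy sketch with hypothetical names: representations are tuples of numbers, and `score` stands in for the query-key comparison; this is not any particular paper's implementation):

```python
# Sketch of one strictly future-masked, leftmost-tiebreaking UHA layer.
# `xs` is a list of contextual representations (tuples of numbers);
# `score(x_t, x_s)` stands in for the attention score between positions.
def uha_layer(xs, score):
    out = []
    for t, x_t in enumerate(xs):
        candidates = range(t)  # strict future masking: only positions s < t
        if not candidates:
            out.append(x_t)    # no unmasked position: keep x_t unchanged
            continue
        # max() returns the first position attaining the maximum score,
        # which implements leftmost tiebreaking.
        t_star = max(candidates, key=lambda s: score(x_t, xs[s]))
        # Residual connection: x_t^{(l+1)} = x_t^{(l)} + x_{t*}^{(l)}.
        out.append(tuple(a + b for a, b in zip(x_t, xs[t_star])))
    return out
```

Scanning the candidates in reverse order instead would give the rightmost-tiebreaking variant.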
This invites the interpretation of transformer layers as collecting progressively richer representations of individual symbols by selecting a new representation to append at each layer. We translate this idea into a set of $\mathbf{LTL}[\diamond]$ formulas of the form $\phi_{\boldsymbol{x}_t^{(\ell)}\leftarrow \boldsymbol{x}_{t^*}^{(\ell)}}$ that keep track of the fact that the representation $\boldsymbol{x}_t^{(\ell)}$ was updated with the representation

![](images/50dd36cfac46263e43593e290210950560bee96b28c2674e0e3635b3b4c035f6.jpg)
Figure 1: Unique hard attention. $\boldsymbol{x}_{t^*}^{(\ell)}$ is combined with $\boldsymbol{x}_t^{(\ell)}$ to compute $\boldsymbol{x}_t^{(\ell +1)} = \boldsymbol{x}_t^{(\ell)} + \boldsymbol{x}_{t^*}^{(\ell)}$.

$\pmb{x}_{t^{*}}^{(\ell)}$ at layer $\ell$. The full formula is presented in the proof of the following theorem in App. D.2.

Theorem 7.1. Let $T \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ be a transformer. Then, there exists an equivalent formula $\psi \in \mathbf{LTL}[\diamond]$.

# 7.2 Describing $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ with POFAs

To provide an automata-theoretic take on the result, we directly express $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ transformers with POFAs in the proof of the following theorem in App. D.3.

Theorem 7.2. Let $T \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ be a transformer. Then, there exists an equivalent POFA.

# 8 Discussion and Conclusion

We establish the equivalence of future-masked, finite-precision leftmost UHATs with no positional encodings, $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$, to a fragment of linear temporal logic, LTL[◇]. Together with Yang et al.'s (2024a) and Li and Cotterell's (2025) results, this largely completes the picture of finite-precision transformer expressivity.

Equivalence to soft-attention transformers.
§6 not only provides a characterization of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ in terms of LTL[◇], but, in conjunction with the results of Li and Cotterell (2025) (summarized in Tab. 1), also establishes its equivalence to future-masked, finite-precision soft-attention transformers. This equivalence yields a compelling interpretation of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ as a principled abstraction of soft-attention transformers, one that is more appropriate than $\mathcal{T}_{\triangleright}^{\mathrm{F}}$ due to the expressivity gap between soft attention and rightmost UHA. Consequently, this motivates further investigation of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ as a simplified yet faithful analog of more complex soft-attention architectures.

Dot-depth hierarchy. The dot-depth hierarchy (Cohen and Brzozowski, 1971) classifies star-free languages based on the minimal alternation depth of concatenation and Boolean operations in the regular expressions defining them. The hierarchy is infinite and reflects increasing expressive power. Brzozowski and Ellen (1980) show that the class of languages definable in $\mathbf{LTL}[\diamond]$ forms a strict subclass of dot-depth 2 while being incomparable to dot-depth 1. In a related line of work, Bhattamishra et al. (2020) empirically find that transformers struggle to generalize to star-free languages with dot-depth greater than 1.

Until hierarchy. An alternative (infinite) hierarchy spanning the star-free languages is the until hierarchy (Etessami and Wilke, 1996; Thérien and Wilke, 2001), which stratifies the family according to the number of $\mathbf{U}$ (or, equivalently, $\mathbf{S}$) operators an LTL formula requires to define a language. Our results and those of Li and Cotterell (2025) naturally place leftmost UHA and soft-attention transformers within the $0^{\text{th}}$ layer of this hierarchy.

Abilities and limitations.
Exact characterizations of transformers allow us to derive precise conclusions about models' abilities and limitations. Our results, in particular, mean that $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ transformers, like their soft-attention counterparts, cannot model simple languages such as (i) strictly local languages and $n$-gram models, which have been linked to infinite-precision UHATs (Svete and Cotterell, 2024); (ii) locally testable languages (which require detecting contiguous substrings); (iii) languages of nested parentheses of bounded depth (bounded Dyck languages), which have also been linked to infinite-precision transformers (Yao et al., 2021); and (iv) any non-star-free language, such as PARITY. In contrast, the equivalence to LTL[◇] means that $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ can model simple languages such as those whose string membership depends on the presence of (not necessarily contiguous) subsequences or on the string prefix. Li and Cotterell (2025) find strong empirical evidence that this theoretical equivalence faithfully translates into the practical performance of trained transformers on formal languages.

A duality. We finally note that past masking and rightmost UHA have a natural characterization in terms of LTL[◇], the fragment of LTL with only the future operator $\diamond$. This duality is summarized in Tab. 1 and elaborated on in App. E.

# Limitations

We limit ourselves to a purely theoretical investigation of the expressivity of a particular model of a transformer. In particular, our results hold for finite-precision transformers with leftmost hard attention and no positional encodings. This is important to consider, as the representational capacity of transformers depends on the choices made for all of these components.
We do not consider learnability and training, which are emerging as an exciting area of study and promise to bring our understanding of the representational capacity of transformers closer to what we observe in practice (Hahn and Rofin, 2024). In fact, due to the discrete nature of the hard-attention mechanism, training UHATs in practice is infeasible. Nevertheless, recent work shows how temperature scaling and unbounded positional encodings can be used to simulate hard attention with soft attention (Yang et al., 2024b). We leave the empirical investigation contrasting the performance of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}, \mathcal{T}_{\triangleright}^{\mathrm{F}}, \mathcal{T}_{\triangleright}^{\mathrm{P}}$ on various language classes to future work.

# Ethical considerations

We used AI-based tools (ChatGPT and GitHub Copilot) for writing assistance. We used the tools in compliance with the ACL Policy on the Use of AI Writing Assistance.

# Acknowledgements

We would like to thank Andy Yang and Michael Hahn for useful discussions and feedback on early versions of this work. Anej Svete and Jiaoda Li are supported by the ETH AI Center Doctoral Fellowship.

# References

Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, and Vladimir Podolskii. 2024. Logical languages accepted by transformer encoders with hard attention. In The Twelfth International Conference on Learning Representations.
Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. 2020. On the ability and limitations of transformers to recognize formal languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7096-7116, Online. Association for Computational Linguistics.
Janusz Antoni Brzozowski and Faith Ellen. 1980. Languages of R-trivial monoids. Journal of Computer and System Sciences, 20(1):32-49.
Ashok K. Chandra, Larry Stockmeyer, and Uzi Vishkin. 1984. Constant depth reducibility.
SIAM Journal on Computing, 13(2):423-439.

David Chiang and Peter Cholak. 2022. Overcoming a theoretical limitation of self-attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7654-7664, Dublin, Ireland. Association for Computational Linguistics.
David Chiang, Peter Cholak, and Anand Pillay. 2023. Tighter bounds on the expressivity of transformer encoders. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
Rina Cohen and Janusz Antoni Brzozowski. 1971. Dot-depth of star-free events. Journal of Computer and System Sciences, 5(1):1-16.
K. Etessami and T. Wilke. 1996. An until hierarchy for temporal logic. In Proceedings, 11th Annual IEEE Symposium on Logic in Computer Science, pages 108-117.
Dov Gabbay, Amir Pnueli, Saharon Shelah, and Jonathan Stavi. 1980. On the temporal analysis of fairness. In Proceedings of the 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '80, pages 163-173, New York, NY, USA. Association for Computing Machinery.
Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171.
Michael Hahn and Mark Rofin. 2024. Why are sensitive functions hard for transformers? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14973-15008, Bangkok, Thailand. Association for Computational Linguistics.
Yiding Hao, Dana Angluin, and Robert Frank. 2022. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. Transactions of the Association for Computational Linguistics, 10:800-810.
William Hesse. 2001. Division is in uniform TC0. In Automata, Languages and Programming, pages 104-114, Berlin, Heidelberg. Springer Berlin Heidelberg.
Hans Kamp. 1968.
Tense Logic and the Theory of Linear Order. Ph.D. thesis, UCLA.
Kenneth Krohn and John Rhodes. 1965. Algebraic theory of machines. I. Prime decomposition theorem for finite semigroups and machines. Transactions of the American Mathematical Society, 116.
Jiaoda Li and Ryan Cotterell. 2025. Characterizing the expressivity of transformer language models. arXiv preprint arXiv:2505.23623.
Bingbin Liu, Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. 2023. Exposing attention glitches with flip-flop language modeling. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.
Robert McNaughton and Seymour Papert. 1971. Counter-Free Automata. M.I.T. Press Research Monographs. M.I.T. Press.
William Merrill and Ashish Sabharwal. 2023. A logic for expressing log-precision transformers. In Thirty-seventh Conference on Neural Information Processing Systems.
William Merrill, Ashish Sabharwal, and Noah A. Smith. 2022b. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843-856.
Franz Nowak, Anej Svete, Alexandra Butoi, and Ryan Cotterell. 2024. On the representational capacity of neural language models with chain-of-thought reasoning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12510-12548, Bangkok, Thailand. Association for Computational Linguistics.
Jorge Pérez, Pablo Barceló, and Javier Marinkovic. 2021. Attention is Turing-complete. Journal of Machine Learning Research, 22(75):1-35.
Amir Pnueli. 1977. The temporal logic of programs.
In 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pages 46-57.
Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations.
Lena Strobl. 2023. Average-hard attention transformers are constant-depth uniform threshold circuits. arXiv preprint arXiv:2308.03212.
Lena Strobl, William Merrill, Gail Weiss, David Chiang, and Dana Angluin. 2024. What formal languages can transformers express? A survey. Transactions of the Association for Computational Linguistics, 12:543-561.
Anej Svete and Ryan Cotterell. 2024. Transformers can represent $n$-gram language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6845-6881, Mexico City, Mexico. Association for Computational Linguistics.
Denis Thérien and Thomas Wilke. 2001. Temporal logic and semidirect products: An effective characterization of the until hierarchy. SIAM Journal on Computing, 31(3):777-798.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Andy Yang, David Chiang, and Dana Angluin. 2024a. Masked hard-attention transformers recognize exactly the star-free languages. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Andy Yang, Lena Strobl, David Chiang, and Dana Angluin. 2024b. Simulating hard attention using soft attention. arXiv preprint.
Shunyu Yao, Binghui Peng, Christos Papadimitriou, and Karthik Narasimhan. 2021. Self-attention networks can process bounded hierarchical languages.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3770-3785, Online. Association for Computational Linguistics.

# A Related Work

Existing work has established a rich landscape of results on the expressivity of transformers.

Lower and upper bounds for UHA. Hahn (2020) shows that UHA transformers with unbounded precision, leftmost attention, and no masking cannot recognize PARITY (bit strings with an odd number of ones) or DYCK-1 (the language of correctly nested parentheses of one type). Hao et al. (2022) refine this result by showing that UHA transformers with unbounded precision and leftmost attention recognize only languages in $\mathrm{AC}^0$, the class of languages decided by circuit families of constant depth, polynomial size, and unbounded fan-in. Perhaps surprisingly, this means that such transformers cannot recognize even simple languages outside $\mathrm{AC}^0$, such as PARITY and MAJORITY (all bit strings in which more than half of the bits are 1s). Other problems not in $\mathrm{AC}^0$ include sorting, integer multiplication (Chandra et al., 1984), and integer division (Hesse, 2001). Barceló et al. (2024) show that UHA transformers augmented with arbitrary positional encodings are lower-bounded by an extension of $\mathbf{FO}[<]$ with all possible monadic numerical predicates (which includes all regular languages in $\mathrm{AC}^0$). Yang et al. (2024a) further refine the understanding of the relationship between $\mathbf{FO}[<]$ and finite-precision transformers by proving the equivalence between the two when the transformers are equipped with strict future masking.

Average-hard attention. Average-hard attention (AHA) differs from UHA in that, when confronted with several positions with equal scores, it averages their values to compute the next contextual representation.
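This difference in tie handling can be made concrete with a toy sketch (hypothetical helper names; scores and values are plain numbers, standing in for attention scores and value vectors):

```python
# Toy contrast between UHA and AHA at a single query position.
def uha_attend(scores, values, rightmost=False):
    """Unique hard attention: select one maximal-score position by tiebreaking."""
    m = max(scores)
    ties = [i for i, s in enumerate(scores) if s == m]
    return values[ties[-1]] if rightmost else values[ties[0]]

def aha_attend(scores, values):
    """Average hard attention: average the values of all maximal-score positions."""
    m = max(scores)
    ties = [i for i, s in enumerate(scores) if s == m]
    return sum(values[i] for i in ties) / len(ties)
```

For `scores = [1, 3, 3]` and `values = [0.0, 1.0, 0.0]`, leftmost UHA returns `1.0`, rightmost UHA returns `0.0`, and AHA returns their average, `0.5`.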
It can be seen as a special case of soft attention (Li and Cotterell, 2025). Hao et al. (2022) show that unbounded-precision AHA transformers are more expressive than UHA transformers, as they can recognize languages outside $\mathrm{AC}^0$, such as PARITY and DYCK-1. Merrill et al. (2022b) show that AHA transformers with floating-point activations can be simulated in $\mathrm{TC}^0$ (the extension of $\mathrm{AC}^0$ with majority gates, which output 1 iff at least half of their inputs are 1), while Strobl (2023) extends this result by tightening the upper bound to L-uniform $\mathrm{TC}^0$ (which consists of $\mathrm{TC}^0$ circuits with the additional constraint that there exists a deterministic Turing machine, running in logarithmic space, that can describe the circuits).

Transformers and logic. Previous results relating transformers to logic include those of Chiang et al. (2023), who show that finite-precision softmax transformers are upper-bounded by a generalization of first-order logic with counting quantifiers and modular arithmetic over input position indices. On the other hand, they show this logic to be a lower bound on the expressivity of unbounded-precision transformers. Merrill and Sabharwal (2023) contribute by characterizing a more expressive variant of transformers: they allow precision logarithmic in the input length and show an upper bound of first-order logic extended with majority-vote quantifiers.

Equivalence of right-attention transformers to LTL[◇, ◇, S, U]. Yang et al. (2024a) show the equivalence of $\mathcal{T}_{\triangleright}^{\mathrm{F}}$ and $\mathcal{T}_{\triangleleft}^{\mathrm{P}}$ to LTL[◇, ◇, S, U].
In their constructions, the operators S and U of LTL[◇, ◇, S, U] can only be expressed by transformers in $\mathcal{T}_{\triangleright}^{\mathrm{F}}$ and $\mathcal{T}_{\triangleleft}^{\mathrm{P}}$, respectively (and, vice versa, $\mathcal{T}_{\triangleright}^{\mathrm{F}}$ and $\mathcal{T}_{\triangleleft}^{\mathrm{P}}$ require the operators S and U, respectively, in their LTL[◇, ◇, S, U] formulations). Moreover, by Gabbay et al. (1980), the fragment of LTL[◇, ◇, S, U] consisting of only U and the Boolean connectives, denoted LTL[U] (or, in the analogous symmetric case, LTL[S]), is sufficient for equivalence to FO[<] (which is itself equivalent to LTL[◇, ◇, S, U] by Kamp (1968)).

Equivalence of soft-attention transformers to LTL[◇]. Li and Cotterell (2025) relate LTL[◇] and $\mathbf{PFO}^2[<]$ (the fragment of $\mathbf{FO}[<]$ that considers at most two distinct variables at a time, where any bound variable can only peek into the past of a free variable) to the languages with $\mathcal{R}$-trivial syntactic monoids (Brzozowski and Ellen, 1980). These languages are described by a set of equivalent formalisms, such as $\mathcal{R}$-expressions, $\mathcal{R}$-trivial monoids, and partially ordered automata. Li and Cotterell (2025) use this equivalence to show that strictly future-masked softmax transformers are equivalent to LTL[◇]. The upper bound is proven through the equivalent $\mathbf{PFO}^2[<]$: they show that $\mathbf{PFO}^2[<]$ can simulate a sum of a finite number of fixed-precision floating-point numbers (and thus the weighted sum in softmax), while the other transformations (the computation of keys and queries, dot products, etc.) can be computed with the standard Boolean connectives. They show the lower bound of LTL[◇] by translating the standard Boolean operations into feedforward networks and the $\diamond$ operator into the future-masked attention mechanism.
Together with Hao et al.'s (2022) results, this illuminates the complexity of transformer expressivity: while in the unbounded-precision regime AHA transformers strictly subsume UHA transformers, in the finite-precision regime the relationship depends on the tiebreaking direction of the attention mechanism. The models are equivalent if leftmost tiebreaking is used, whereas with rightmost tiebreaking, UHA is strictly more expressive than AHA, the reverse of the unbounded-precision case.

# B Background

This section provides the necessary background on the formalisms used in the paper and details some concepts introduced in the main part of the paper.

# B.1 Finite-State Automata

Definition B.1. A semiautomaton $\mathcal{A}$ is a 3-tuple $(\Sigma, Q, \delta)$ where $\Sigma$ is an alphabet, $Q$ is a finite set of states, and $\delta\colon Q \times \Sigma \to Q$ is a transition function.

We further define an initialized semiautomaton as a semiautomaton with an initial state.

Definition B.2. A deterministic finite automaton (DFA) $\mathcal{A}$ is a 5-tuple $(\Sigma, Q, q_{\iota}, F, \delta)$ where $(\Sigma, Q, \delta)$ is a semiautomaton, $q_{\iota} \in Q$ is an initial state, and $F \subseteq Q$ is a set of final states.

Definition B.3. Let $\delta^* \colon Q \times \Sigma^* \to Q$ be the transitive closure of $\delta$, defined as

$$
\delta^{*}(q, w) = \delta(q, w) \quad \text{for } w \in \Sigma \tag{12a}
$$

$$
\delta^{*}(q, w_{1} \dots w_{T}) = \delta\left(\delta^{*}(q, w_{1} \dots w_{T-1}), w_{T}\right) \tag{12b}
$$

with $\delta^{*}(q,\varepsilon) = q$ for any $q\in Q$. A partially ordered DFA (POFA) is a DFA $\mathcal{A} = (\Sigma, Q, q_{\iota}, F, \delta)$ for which the relation $\preceq$ on $Q$, defined by $q\preceq p$ if and only if $\delta^{*}(q,\pmb{w}) = p$ for some string $\pmb{w}\in \Sigma^{*}$, is a partial order.

Intuitively, POFAs are acyclic DFAs with possible self-loops, resulting in partially ordered states.
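Operationally, the POFA condition can be checked by verifying that the transition graph contains no cycle other than a self-loop. A minimal sketch (assuming a hypothetical encoding of the transition function as a dictionary keyed by state-symbol pairs):

```python
# Check whether a DFA (given by its transition function) is partially ordered,
# i.e., whether the only cycles among its states are self-loops.
def is_pofa(states, alphabet, delta):
    # One-step successor relation, ignoring self-loops.
    succ = {q: set() for q in states}
    for q in states:
        for a in alphabet:
            p = delta[(q, a)]
            if p != q:
                succ[q].add(p)
    # Depth-first search for a non-trivial cycle (a back edge).
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {q: WHITE for q in states}

    def acyclic_from(q):
        color[q] = GRAY
        for p in succ[q]:
            if color[p] == GRAY:
                return False  # back edge: a cycle that is not a self-loop
            if color[p] == WHITE and not acyclic_from(p):
                return False
        color[q] = BLACK
        return True

    return all(acyclic_from(q) for q in states if color[q] == WHITE)
```

For instance, a two-state DFA whose only non-self-loop transition goes from state 0 to state 1 passes the check, while adding a transition back from state 1 to state 0 breaks the partial order.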
Definition B.4. Let $\mathcal{A} = (\Sigma, Q, q_{\iota}, F, \delta)$ be a DFA and $\mathbb{L}$ the language it accepts. We define the reverse automaton $\mathcal{A}^R$ as the automaton that recognizes the reverse language $\mathbb{L}^R$, consisting of the reversals of all strings in $\mathbb{L}$.

Definition B.5. A DFA $\mathcal{A}$ is a partially ordered reverse automaton (RPOFA) if $\mathcal{A}^R$ is a POFA.

Definition B.6. Let $\mathcal{B}_1 = (\Sigma, Q_1, \delta_1)$ be a semiautomaton. Let $\mathcal{B}_2 = (Q_1 \times \Sigma, Q_2, \delta_2)$ be another, possibly partial, semiautomaton such that for every $q_1 \in Q_1$ and $q_2 \in Q_2$, $\delta_2(q_2, \langle q_1, w \rangle)$ is either defined for every $w \in \Sigma$ or undefined for every $w \in \Sigma$. The cascade product $\mathcal{B}_1 \circ \mathcal{B}_2$ is the semiautomaton $\mathcal{C} = (\Sigma, Q, \delta)$ such that:

- $Q = \{\langle q_1, q_2\rangle : \delta_2(q_2, \langle q_1, w\rangle) \text{ is defined}\}$;
- $\delta(\langle q_1, q_2\rangle, w) = \langle\delta_1(q_1, w), \delta_2(q_2, \langle q_1, w\rangle)\rangle$.

Definition B.7. For $n \geq 0$, an $n$-way fork is an initialized semiautomaton $(\Sigma, \{q_0, q_1, \dots, q_n\}, \delta)$ where $\Sigma = \Sigma_0 \cup \Sigma_1 \cup \dots \cup \Sigma_n$, the $\Sigma_i$'s are non-empty and pairwise disjoint, $\delta(q_0, a) = q_i$ for all $a \in \Sigma_i$, and $\delta(q_i, a) = q_i$ for all $a \in \Sigma$ and $1 \leq i \leq n$. A half-reset (Fig. 2) is a 1-way fork.

Definition B.8. A surjection $\phi\colon Q \to Q'$ is an automaton homomorphism from the semiautomaton $\mathcal{A} = (\Sigma, Q, \delta)$ to $\mathcal{A}' = (\Sigma', Q', \delta')$ if for every $q \in Q$ and $w \in \Sigma$:

$$
\phi(\delta(q, w)) = \delta'(\phi(q), w) \tag{13}
$$

In this case, we say $\mathcal{A}'$ is homomorphic to $\mathcal{A}$, or that $\mathcal{A}'$ is the homomorphic image of $\mathcal{A}$.
![](images/2f07c4dd3c48b485601e787ce186cec2afa72fbc020cfa2d9e96a61048d247dd.jpg)
Figure 2: A 1-way fork

# B.2 Syntactic Monoids

Definition B.9. A monoid $\mathbb{M}$ is a set equipped with an associative binary operation and an identity element.

For instance, the free monoid is the set $\Sigma^{*}$ equipped with the concatenation operation and the empty string $\varepsilon$ as the identity.

Definition B.10. A monoid $\mathbb{M}$ is $\mathcal{R}$-trivial if for all $w_{1}, w_{2}, w_{3} \in \mathbb{M}$, $w_{1}w_{2}w_{3} = w_{1}$ implies $w_{1}w_{2} = w_{1}$. A monoid $\mathbb{M}$ is $\mathcal{L}$-trivial if for all $w_{1}, w_{2}, w_{3} \in \mathbb{M}$, $w_{3}w_{2}w_{1} = w_{1}$ implies $w_{2}w_{1} = w_{1}$.

More details about $\mathcal{R}$-trivial monoids can be found in Brzozowski and Ellen (1980).

Definition B.11. The syntactic congruence $\preceq_{\mathbb{L}}$ is the equivalence relation on $\Sigma^{*}$ given the language $\mathbb{L}$ such that for all $x, y \in \Sigma^{*}$, we have $x \preceq_{\mathbb{L}} y$ if and only if:

$$
\pmb{s}\pmb{x}\pmb{z} \in \mathbb{L} \iff \pmb{s}\pmb{y}\pmb{z} \in \mathbb{L} \quad \forall \pmb{s}, \pmb{z} \in \Sigma^{*} \tag{14}
$$

Definition B.12. The quotient monoid $\Sigma^{*} / \preceq_{\mathbb{L}}$ is the syntactic monoid of $\mathbb{L}$.

# B.3 Regular Expressions

A regular language can be described by regular expressions, which are elements of the closure of $\varnothing$, $\varepsilon$, and all $w \in \Sigma$ under union, concatenation, and Kleene star. A regular language is star-free if it can be described by an expression built from $\varnothing$, $\varepsilon$, and $w \in \Sigma$ using union, concatenation, and complement, without the Kleene star operator.

Definition B.13. An $\mathcal{R}$-expression is a finite union of regular expressions of the form $\Sigma_0^* w_1\Sigma_1^*\dots w_n\Sigma_n^*$ where $w_{t}\in \Sigma$, $\Sigma_t\subseteq \Sigma$, and $w_{t}\notin \Sigma_{t - 1}^{*}$ for $1\leq t\leq n$.
An $\mathcal{L}$-expression is defined analogously with the dual constraint $w_{t}\notin \Sigma_{t}^{*}$.

For instance, $a\Sigma^{*}$ is an $\mathcal{R}$-expression, while $\Sigma^{*}a$ is an $\mathcal{L}$-expression.

# B.4 Linear Temporal Logic

We now present the full semantics of $\mathbf{LTL}[\diamond ,\diamond ,\mathbf{S},\mathbf{U}]$. The semantics are defined inductively:

- $\pmb{w},t\models \pi_a \iff w_t = a$;
- $\pmb{w},t\models \psi_1\lor \psi_2 \iff \pmb{w},t\models \psi_1$ or $\pmb{w},t\models \psi_2$;
- $\pmb{w},t\models \psi_1\land \psi_2 \iff \pmb{w},t\models \psi_1$ and $\pmb{w},t\models \psi_2$;
- $\pmb{w},t\models \neg \psi \iff \pmb{w},t\nvDash \psi$;
- $\pmb{w},t\models \diamond \psi \iff \exists t^{\prime} < t\colon \pmb{w},t^{\prime}\models \psi$ (past);
- $\pmb{w},t\models \diamond \psi \iff \exists t^{\prime} > t\colon \pmb{w},t^{\prime}\models \psi$ (future);
- $\pmb{w},t\models \psi_1\,\mathbf{S}\,\psi_2 \iff \exists t^{\prime} < t\colon \pmb{w},t^{\prime}\models \psi_2$ and $\pmb{w},k\models \psi_1$ for all $k$ with $t^{\prime} < k < t$;
- $\pmb{w},t\models \psi_1\,\mathbf{U}\,\psi_2 \iff \exists t^{\prime} > t\colon \pmb{w},t^{\prime}\models \psi_2$ and $\pmb{w},k\models \psi_1$ for all $k$ with $t < k < t^{\prime}$.

$\mathbf{LTL}[\diamond ,\diamond ,\mathbf{S},\mathbf{U}]$ defines exactly the class of star-free languages (Kamp, 1968; McNaughton and Papert, 1971). $\mathbf{LTL}[\diamond]$ and $\mathbf{LTL}[\diamond]$, the fragments of LTL with only the past operator and only the future operator, respectively, are strictly less expressive than $\mathbf{LTL}[\diamond ,\diamond ,\mathbf{S},\mathbf{U}]$ (neither can recognize $a\Sigma^{*}b$, which $\mathbf{LTL}[\diamond ,\diamond ,\mathbf{S},\mathbf{U}]$ can), and both have characterizations in terms of monoids, automata, and first-order logic (App. B.6).
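The inductive semantics above translate directly into a recursive evaluator. Below is a minimal sketch, with formulas encoded as nested tuples (an assumed encoding; `'P'` and `'F'` stand for the past and future diamond, respectively), using the acceptance convention of Eq. (6):

```python
def holds(w, t, phi):
    """Evaluate an LTL formula phi on string w at position t (1-indexed;
    t may be len(w) + 1, the extra position used for string acceptance)."""
    T, op = len(w), phi[0]
    if op == 'sym':                       # atomic formula pi_a
        return 1 <= t <= T and w[t - 1] == phi[1]
    if op == 'not':
        return not holds(w, t, phi[1])
    if op == 'and':
        return holds(w, t, phi[1]) and holds(w, t, phi[2])
    if op == 'or':
        return holds(w, t, phi[1]) or holds(w, t, phi[2])
    if op == 'P':                         # past diamond: exists t' < t
        return any(holds(w, s, phi[1]) for s in range(1, t))
    if op == 'F':                         # future diamond: exists t' > t
        return any(holds(w, s, phi[1]) for s in range(t + 1, T + 1))
    if op == 'S':                         # phi[1] S phi[2] (since)
        return any(holds(w, s, phi[2])
                   and all(holds(w, k, phi[1]) for k in range(s + 1, t))
                   for s in range(1, t))
    if op == 'U':                         # phi[1] U phi[2] (until)
        return any(holds(w, s, phi[2])
                   and all(holds(w, k, phi[1]) for k in range(t + 1, s))
                   for s in range(t + 1, T + 1))
    raise ValueError(f"unknown operator: {op}")

def accepts(w, phi):
    """String acceptance: evaluate phi at the extra position T + 1 (Eq. 6)."""
    return holds(w, len(w) + 1, phi)
```

For example, the past-fragment formula $\diamond\pi_a$, encoded as `('P', ('sym', 'a'))`, accepts exactly the strings containing an `a`.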
+ +# B.5 First-Order Logic + +First-order logic with the $<$ relation, denoted by $\mathbf{FO}[<]$ , extends the usual propositional logic with predicates and quantifiers. We consider free variables $x, y, z, \dots$ that represent positions over a string $w$ of size $T$ . $\mathbf{FO}[<]$ includes unary predicates $\pi_w$ for $w \in \Sigma$ , where $\pi_w(x) = \mathrm{TRUE} \iff$ there is a $w$ at position $x$ . As with $\mathbf{LTL}[\diamond, \blacklozenge, \mathbf{S}, \mathbf{U}]$ , we can inductively define formulas from $\pi_w$ using the standard Boolean operators, the binary predicate $<$ between positions, and the existential quantifier $\exists$ . By Kamp (1968), $\mathbf{FO}[<]$ is equivalent to $\mathbf{LTL}[\diamond, \blacklozenge, \mathbf{S}, \mathbf{U}]$ , and by McNaughton and Papert (1971), it is equivalent to the class of star-free languages. + +$\mathbf{FO}^2[<]$ is the fragment of $\mathbf{FO}[<]$ that can only use two distinct variables at the same time. Li and Cotterell (2025) further define $\mathbf{PFO}^2[<]$ as the past fragment of $\mathbf{FO}^2[<]$ that can only "peek into the past." Namely, any single-variable formula $\phi(x)$ can only have bounded existential quantifiers of the form " $\exists y < x$ ". We analogously have the future fragment of $\mathbf{FO}^2[<]$ , $\mathbf{FFO}^2[<]$ , where we only allow peeking into the future with quantifiers of the form " $\exists y > x$ ". + +# B.6 Equivalence Between Formalisms + +Due to Brzozowski and Ellen (1980) and Li and Cotterell (2025), we have the two dual theorems: + +Theorem B.1. Let $\mathbb{L} \subseteq \Sigma^{*}$ be a regular language, $\mathbb{M}$ be its syntactic monoid, and $\mathcal{A}$ be the minimal DFA accepting it.
The following conditions are equivalent: + +(i) $\mathbb{M}$ is $\mathcal{R}$ -trivial, +(ii) $\mathbb{L}$ can be denoted by an $\mathcal{R}$ -expression, +(iii) $\mathcal{A}$ is a POFA, +(iv) $\mathcal{A}$ is the homomorphic image of a cascade product of half-resets, +(v) $\mathbb{L}$ can be recognized by a formula in $\mathbf{PFO}^2 [<]$ , +(vi) $\mathbb{L}$ can be recognized by a formula in $\mathbf{LTL}[\diamond]$ . + +Theorem B.2. Let $\mathbb{L} \subseteq \Sigma^{*}$ be a regular language, $\mathbb{M}$ be its syntactic monoid, and $\mathcal{A}$ be the minimal DFA accepting it. The following conditions are equivalent: + +(i) $\mathbb{M}$ is $\mathcal{L}$ -trivial, +(ii) $\mathbb{L}$ can be denoted by an $\mathcal{L}$ -expression, +(iii) $\mathcal{A}$ is an RPOFA (equivalently, $\mathcal{A}^R$ is a POFA), +(iv) $\mathcal{A}^R$ is the homomorphic image of a cascade product of half-resets, +(v) $\mathbb{L}$ can be recognized by a formula in $\mathbf{FFO}^2 [<]$ , +(vi) $\mathbb{L}$ can be recognized by a formula in $\mathbf{LTL}[\blacklozenge]$ . + +# C Proofs of $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ and LTL[◇] Equivalence + +The equivalence between $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ and $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathrm{F}}$ follows directly from Theorems 3 and 4 of Yang et al. (2024a). + +Theorem 5.1. For any UHAT in $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ , there exists an equivalent B-RASP program. For any B-RASP program, there exists an equivalent UHAT in $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ . + +Proof. Yang et al. (2024a, Thms. 3 and 4) and the supporting lemmata (Lemmata 21 and 24) treat $\triangleleft$ and $\triangleright$ separately. Translations constrained to $\triangleleft$ are thus special cases of their proofs. + +The following proofs largely follow the proofs of Lemmata 13 and 14 of Yang et al. (2024a). We highlight the part that distinguishes $\triangleleft$ and $\triangleright$ in red. + +Theorem 6.1.
For any formula $\psi$ of $\mathbf{LTL}[\diamond]$ , there is a $\mathbf{B\text{-}RASP}^{\mathsf{F}}$ program with a Boolean vector $P_{\psi}$ such that, for any input $\mathbf{w}$ of length $T$ and all $t \in [T]$ , we have $\mathbf{w}, t \models \psi \iff P_{\psi}(t) = 1$ . + +Proof. We proceed by induction. + +Base case. The atomic formulas $\pi_{a}$ can be represented by the initial Boolean vectors $Q_{a}$ for $a\in \Sigma$ . + +Inductive step. Assume $\psi_{1}$ and $\psi_{2}$ can be converted to $\mathbf{B\text{-}RASP}^{\mathrm{F}}$ vectors $P_{\psi_1}$ and $P_{\psi_2}$ , respectively. We distinguish three cases of building a new formula from $\psi_{1}$ and $\psi_{2}$ : + +- Case (1): $\psi = \neg \psi_{1}$ . Add a position-wise operation: + +$$ +P_{\psi}(t) = \neg P_{\psi_{1}}(t). \tag{15} +$$ + +- Case (2): $\psi = \psi_1 \wedge \psi_2$ . Add a position-wise operation: + +$$ +P_{\psi}(t) = P_{\psi_{1}}(t) \wedge P_{\psi_{2}}(t). \tag{16} +$$ + +- Case (3): $\psi = \diamond \psi_{1}$ . Add an attention operation with future masking and $\triangleleft$ tiebreaking: + +$$ +P_{\psi}(t) = \triangleleft_{t^{\prime}} \left[ t^{\prime} < t, P_{\psi_{1}}\left(t^{\prime}\right) \right] P_{\psi_{1}}\left(t^{\prime}\right): 0. \tag{17} +$$ + +Theorem 6.2. For any Boolean vector $P$ of a $\mathbf{B\text{-}RASP}^{\mathsf{F}}$ program $\mathcal{P}$ , there is a formula $\psi_P$ of LTL[◇] such that for any input $\boldsymbol{w}$ of length $T$ and all $t \in [T]$ , we have $P(t) = 1 \iff \boldsymbol{w}, t \models \psi_P$ . + +Proof. We proceed by induction. + +Base case. Each atomic vector $Q_{a}(t)$ can be translated to the atomic formula $\pi_{a}$ . + +Inductive step. Assume vectors $P_{1},\ldots ,P_{i - 1}$ can be translated to $\psi_{P_1},\dots,\psi_{P_{i - 1}}$ . We distinguish the different options of building a new vector out of $P_{1},\ldots ,P_{i - 1}$ :
+ +- Case (1): $P_{i}(t)$ is a position-wise operation: + +$$ +P_{i}(t) = f\left(P_{1}(t), \dots , P_{i - 1}(t)\right), \tag{18} +$$ + +where $f$ is a Boolean function. We can translate $P_{i}(t)$ into $\psi_{i} = f(\psi_{1},\ldots ,\psi_{i - 1})$ . + +- Case (2): $P_{i}(t)$ is an attention operation that uses leftmost tiebreaking and future masking, that is, + +$$ +P_{i}(t) = \triangleleft_{t^{\prime}} \left[ t^{\prime} < t, s\left(t^{\prime}\right) \right] v\left(t^{\prime}\right): d(t). \tag{19} +$$ + +By the inductive hypothesis, there are $\mathbf{LTL}[\diamond]$ formulas $\psi_S, \psi_V$ , and $\psi_D$ corresponding to $s, v$ , and $d$ . We can thus write $P_i(t)$ as: + +$$ +\psi_{i} = (\diamond (\psi_{S} \wedge \neg \diamond \psi_{S} \wedge \psi_{V})) \vee (\neg (\diamond \psi_{S}) \wedge \psi_{D}), \tag{20} +$$ + +where $\psi_S \wedge \neg \diamond \psi_S$ identifies the leftmost position that satisfies $\psi_S$ in the string. + +- Case (3): We now treat the attention operation with rightmost tiebreaking and future masking, i.e., + +$$ +P_{i}(t) = \triangleright_{t^{\prime}} \left[ t^{\prime} < t, s\left(t^{\prime}\right) \right] v\left(t^{\prime}\right): d(t). \tag{21} +$$ + +In this case, we need to find the rightmost position satisfying $\psi_S$ not in the entire string $w$ , but before the current position, which can only be realized using the S operator: + +$$ +\psi_{i} = (\neg \psi_{S} \mathbf{S} (\psi_{S} \wedge \psi_{V})) \vee (\neg (\diamond \psi_{S}) \wedge \psi_{D}). \tag{22} +$$ + +# C.1 POFAs as $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ + +The Krohn-Rhodes theorem (Krohn and Rhodes, 1965) states that any deterministic automaton is the homomorphic image of a cascade of simple automata whose transitions induce resets or permutations of the states. Thm.
B.1 by Brzozowski and Ellen (1980) provides an analogous decomposition for POFAs, namely that they are homomorphic images of cascades of half-resets. In a very similar manner to Yang et al. (2024a)'s Thm. 15, we can express this decomposition in a $\mathbf{B\text{-}RASP}^{\mathsf{F}}$ program, which can further be expressed by a transformer in $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ . + +We first formalize how B-RASP can compute sequence-to-sequence functions $\Sigma^{*} \to \Gamma^{*}$ when, for instance, we want a B-RASP program to simulate an automaton (i.e., output the same states traversed by the automaton when run on the same input). We designate a set of output vectors $Y_{\gamma}$ , for all $\gamma \in \Gamma$ , and the sequence-to-sequence function then maps $t$ to $\gamma$ if $Y_{\gamma}(t)$ is true. + +We then say a $\mathbf{B\text{-}RASP}^{\mathsf{F}}$ program $\mathcal{P}$ simulates an initialized semiautomaton $\mathcal{A} = (\Sigma, Q, \delta)$ with initial state $q_{s}$ iff for every input string $\boldsymbol{w}$ , the output vectors of $\mathcal{P}$ encode the sequence of states traversed by $\mathcal{A}$ when run on $\boldsymbol{w}$ with initial state $q_{s}$ . + +Lemma C.1. Let $\mathcal{H} = (\Sigma_0\cup \Sigma_1,\{q_0,q_1\} ,q_0,\delta)$ be a half-reset. Then there exists a B-RASP program $\mathcal{P}_{\mathcal{H}}$ that simulates $\mathcal{H}$ started in state $q_{0}$ . + +Proof. We first define two B-RASP programs $\mathcal{B}_0, \mathcal{B}_1$ , which return 1 if and only if the half-reset is in state $q_0$ or $q_1$ , respectively. Namely: + +$$ +\mathcal{B}_{0}(t) = \triangleleft_{t^{\prime}} \left[ t^{\prime} < t, \bigvee_{w \in \Sigma_{1}} Q_{w}\left(t^{\prime}\right) \right] 0: 1 \tag{23} +$$ + +$$ +\mathcal{B}_{1}(t) = 1 - \mathcal{B}_{0}(t) +$$ + +In half-resets, the presence or absence, at any point so far, of symbols in $\Sigma_1$ determines whether we are in $q_0$ or $q_1$ .
It suffices to check whether, at the current index $t$ , some $w \in \Sigma_1$ has occurred in the past, which can be done by computing $\bigvee_{w \in \Sigma_1} Q_w(t')$ for all indices $t'$ satisfying future masking. + +Lemma C.2. Let $\mathcal{A} = (\Sigma, Q_1, \delta_1)$ be a semiautomaton that can be simulated from state $s_1$ by a B-RASP program $\mathcal{P}_{\mathcal{A}}$ . Let $\mathcal{H} = (Q_1 \times \Sigma, Q_2, \delta_2)$ be a half-reset and let $\mathcal{C} = \mathcal{A} \circ \mathcal{H}$ . Then, there exists a program $\mathcal{P}_{\mathcal{C}}$ that simulates $\mathcal{C}$ started in state $(s_1, s_2)$ for an arbitrary $s_2 \in Q_2$ . + +Proof. We now use, for all $q \in Q_1$ , the predicates $\mathcal{B}_{\mathcal{A},q}$ that denote whether $\mathcal{A}$ is in state $q \in Q_1$ at time $t$ when started at $s_1$ . By the assumption that $\mathcal{A}$ can be simulated by a $\mathbf{B\text{-}RASP}^{\mathrm{F}}$ program, we have access to such predicates. + +We now define predicates for $q \in Q_1, w \in \Sigma$ : + +$$ +Q_{(q, w)}^{\prime}(t) = \mathcal{B}_{\mathcal{A}, q}(t) \wedge Q_{w}(t) \tag{24} +$$ + +These formulas encode the presence of an element in $Q_{1} \times \Sigma$ (a state in the first semiautomaton $\mathcal{A}$ and an alphabet symbol) for the half-reset $\mathcal{H}$ . + +As $\mathcal{H}$ is a half-reset, we can classify every tuple $(q,w)\in Q_1\times \Sigma$ into one of two sets $\Sigma_0$ or $\Sigma_{1}$ (depending on whether they reset to $q_{0}$ or $q_{1}$ in $\mathcal{H}$ ). As in Lem. C.1, we can define predicates $\mathcal{B}_{\mathcal{H},q}(t)$ using the predicates $Q_{(q,w)}^{\prime}$ and the known classification of elements in $Q_{1}\times \Sigma$ into some state $q_{0}$ or $q_{1}$ .
+ +To finally simulate the cascade product $\mathcal{C} = \mathcal{A}\circ \mathcal{H}$ , we define predicates that compute, for every state $(q,p)$ with $q\in Q_1$ and $p\in Q_2$ , whether $\mathcal{C}$ is in it: + +$$ +\mathcal{C}_{(q, p)}(t) = \mathcal{B}_{\mathcal{A}, q}(t) \wedge \mathcal{B}_{\mathcal{H}, p}(t) \tag{25} +$$ + +Theorem C.1. Let $\mathcal{A}$ be a POFA. Then, there exists an equivalent transformer $\mathsf{T}\in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ . + +Proof. Let $\mathcal{C} = \mathcal{B}_0 \circ \dots \circ \mathcal{B}_k$ be the cascade of half-resets of which the semiautomaton $\mathcal{A}$ is a homomorphic image. Let $\phi: Q' \to Q$ be the homomorphism from $\mathcal{C}$ to $\mathcal{A}$ (where $Q$ are the states of $\mathcal{A}$ and $Q'$ are the states of $\mathcal{C}$ ). By Lem. C.2, we can iteratively define formulas $\mathcal{C}_{q'}$ for all $q' \in Q'$ that simulate the semiautomaton $\mathcal{C}$ . Using the homomorphism $\phi$ , we can then write formulas that yield the states traversed by $\mathcal{A}$ as: + +$$ +\mathcal{A}_{q}(t) = \bigvee_{p \in Q^{\prime}, \phi(p) = q} \mathcal{C}_{p}(t) \tag{26} +$$ + +$\mathcal{A}_q(t)$ , however, describes the state before reading the symbol at $t$ (by strict masking). We want predicates that yield the state after reading the symbol at position $t$ . We thus write: + +$$ +Y_{q}(t) = \bigvee_{\substack{p \in Q, w \in \Sigma \\ \delta(p, w) = q}} \mathcal{A}_{p}(t) \wedge Q_{w}(t) \tag{27} +$$ + +Finally, we denote by $F$ the set of final states in $\mathcal{A}$ . We thus define the output vector $Y$ by: + +$$ +Y(t) = \bigvee_{q \in F} Y_{q}(t) \tag{28} +$$ + +$Y(T) = 1$ if and only if $\mathcal{A}$ is in one of its final states after reading the entire string. This concludes the translation of $\mathcal{A}$ to $\mathbf{B\text{-}RASP}_{\triangleleft}^{\mathsf{F}}$ , showing the existence of a $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ equivalent to $\mathcal{A}$ .
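As a sanity check on Lemma C.1, the predicate $\mathcal{B}_0(t)$ simply asks whether any symbol of $\Sigma_1$ occurs strictly before position $t$. A minimal Python sketch of this half-reset simulation (the function name is our own, not from the paper):

```python
def half_reset_states(w, sigma1):
    """Simulate a half-reset per Lemma C.1: position t gets state 0 (q_0)
    iff no symbol of sigma1 occurs strictly before t, mirroring the strict
    future masking t' < t in Eq. (23); otherwise state 1 (q_1)."""
    states = []
    seen_sigma1 = False              # has some symbol of Sigma_1 appeared yet?
    for symbol in w:
        states.append(1 if seen_sigma1 else 0)   # state when reading position t
        seen_sigma1 = seen_sigma1 or symbol in sigma1
    return states
```

On `"aabab"` with $\Sigma_1 = \{b\}$ the automaton is still in $q_0$ up to and including the position of the first `b` (the comparison is strict) and in $q_1$ afterwards.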
+ +# D Direct Translations of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ + +# D.1 A Normal Form for $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ Transformers + +The representational capacity of transformers depends tightly on the modeling assumptions. §6 studies the same architecture as Yang et al. (2024a). In this section, we outline a formalization of $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ transformers in a form that allows us to describe direct translations to LTL[◇] and POFAs in §7 more easily. The idea is similar to the normal form of Hao et al. (2022). + +# D.1.1 Finite Precision and Simplifications + +As in Yang et al. (2024a), we work with finite-precision transformers with no positional encodings. Further, we focus on strict future masking and leftmost hard attention, defined formally below. + +In our formalization, we omit element-wise transformations such as the query, key, and value transformations and element-wise MLPs. Instead, we directly aggregate the original representations into (constant-size) vectors that can take increasingly many values as the number of layers increases. The intuition and motivation for this stem from the use of hard attention: At any layer, hard attention augments the current contextual representation with the representation of one (preceding) symbol, namely the one selected by the hardmax. Thus, all information that the transformer can capture is already captured if the contextual representations keep around the identities of the elements that were returned by hardmax. The element-wise transformations, in this case, do not provide any additional representational power. We elaborate on this in App. D.1.3. + +Note that we do not use layer normalization in our transformers and focus on a single-head attention mechanism. Both can be incorporated into our proofs analogously to the solutions presented in Yang et al. (2024a) (see Yang et al. (2024a, §4.1) for a discussion of multi-head attention and Yang et al. (2024a, §4.3) for layer normalization).
+ +# D.1.2 The Attention Mechanism + +The central component of a transformer is the transformer layer. + +Definition D.1. A transformer layer $\mathcal{L}\colon (\mathbb{R}^D)^+ \to (\mathbb{R}^D)^+$ is a length-preserving function defined as + +$$ +\mathcal{L}\left(\left(\boldsymbol{x}_{1}, \dots , \boldsymbol{x}_{T}\right)\right) \stackrel{\text{def}}{=} \left(\boldsymbol{y}_{t}\right)_{t = 1}^{T} \tag{29a} +$$ + +$$ +\left(\boldsymbol{y}_{1}, \dots , \boldsymbol{y}_{T}\right) \stackrel{\text{def}}{=} \operatorname{att}\left(\left(\boldsymbol{x}_{1}, \dots , \boldsymbol{x}_{T}\right)\right) + \left(\boldsymbol{x}_{1}, \dots , \boldsymbol{x}_{T}\right) \tag{29b} +$$ + +We will use $\mathcal{L}_t\stackrel{\mathrm{def}}{=}\mathsf{proj}_t\circ \mathcal{L}$ for the function that computes $\mathcal{L}((\pmb{x}_1,\dots ,\pmb{x}_T))$ and extracts the contextual representation of the $t^{\text{th}}$ symbol by projecting out that dimension. + +The attention mechanism att is specified with the following components: + +- A scoring function score: $\mathbb{R}^D\times \mathbb{R}^D\to \mathbb{R}$ . +- A masking function $m \colon \mathbb{N} \times \mathbb{N} \to \{0,1\}$ that determines the positions attended to. We use future and past masking as in B-RASP. We write $\mathcal{M}(t) \stackrel{\mathrm{def}}{=} \{t' \mid m(t,t') = 1\}$ . +- A normalization function norm: $\mathbb{R}^T \to \pmb{\Delta}^{T-1}$ that normalizes the attention scores.
+ +We then define the attention mechanism as: + +$$ +\operatorname{att}\left(\left(\boldsymbol{x}_{1}, \dots , \boldsymbol{x}_{T}\right)\right) \stackrel{\text{def}}{=} \left(\boldsymbol{y}_{1}, \dots , \boldsymbol{y}_{T}\right) \tag{30a} +$$ + +$$ +\boldsymbol{y}_{t} \stackrel{\text{def}}{=} \sum_{t^{\prime} \in \mathcal{M}(t)} s_{t^{\prime}} \boldsymbol{x}_{t^{\prime}} \tag{30b} +$$ + +$$ +s \stackrel{\text{def}}{=} \operatorname{norm}\left(\left(\operatorname{score}\left(\boldsymbol{x}_{t^{\prime}}, \boldsymbol{x}_{t}\right)\right)_{t^{\prime} \in \mathcal{M}(t)}\right) \tag{30c} +$$ + +The $\sum$ -notation in Eq. (30b) can naturally be thought of as collecting information from the unmasked positions. Intuitively, if the space of contextual representations is large enough, this can be interpreted as concatenating the representations together. This will be particularly useful in our UHA formulation, where only a single $x_{t'}$ will be selected and the finiteness of the representation space will mean that all relevant information about the string $w_{< t}$ will be stored in $x_{t}$ . See also App. D.1.3. + +Let $L \in \mathbb{N}_{\geq 1}$ and $\mathcal{L}_1, \ldots, \mathcal{L}_L$ be transformer layers. A transformer $\mathsf{T} \colon \Sigma^* \to (\mathbb{R}^D)^+$ is a composition of transformer layers: + +$$ +\mathsf{T} \stackrel{\text{def}}{=} \mathcal{L}_{L} \circ \dots \circ \mathcal{L}_{1} \circ \operatorname{embed} \tag{31} +$$ + +where embed: $\Sigma^{*}\to \left(\mathbb{R}^{D}\right)^{+}$ is a position-wise embedding function that maps symbols to their static representations.
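A minimal Python sketch of one such layer under the unique-hard-attention instantiation used later (Def. D.3): strict future masking, leftmost tiebreaking, and the residual "sum" read as pairing the old representation with the attended one (App. D.1.3). The `score` argument and the `default` value for the empty mask are our own illustrative assumptions:

```python
def uha_layer(xs, score, default):
    """One layer in the style of T_<|^F: for each position t, attend over
    positions t' < t (strict future masking), pick the LEFTMOST maximizer of
    score(x_t', x_t), and pair it with x_t as the residual update."""
    ys = []
    for t, x in enumerate(xs):
        context = xs[:t]                          # M(t) = {t' | t' < t}
        if not context:
            attended = default                    # empty mask: nothing to attend to
        else:
            scores = [score(xp, x) for xp in context]
            best = scores.index(max(scores))      # list.index gives the leftmost argmax
            attended = context[best]
        ys.append((x, attended))                  # "sum" as concatenation
    return ys
```

With representations kept as symbols (or tuples of symbols), iterating this layer reproduces the finite representation space $\Sigma^{2^{\ell}}$ discussed in App. D.1.3.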
+ +We also write + +$$ +\left(\boldsymbol{x}_{1}^{(\ell)}, \dots , \boldsymbol{x}_{T}^{(\ell)}\right) = \left(\mathcal{L}_{\ell} \circ \dots \circ \mathcal{L}_{1} \circ \operatorname{embed}\right)(\boldsymbol{w}) \tag{32} +$$ + +for some layer index $\ell \in [L]$ and string $\boldsymbol{w} = w_{1}\dots w_{T}\in \Sigma^{*}$ . We call $\boldsymbol{x}_t^{(\ell)}$ the contextual representation of symbol $w_{t}$ at layer $\ell$ . + +A transformer computes the contextual representations of the string $\pmb{w} = w_{1}\dots w_{T}$ followed by EOS as + +$$ +\left(\boldsymbol{x}_{1}^{(L)}, \dots , \boldsymbol{x}_{T}^{(L)}, \boldsymbol{x}_{\mathrm{EOS}}^{(L)}\right) \stackrel{\text{def}}{=} \mathsf{T}(\boldsymbol{w}). \tag{33} +$$ + +We take $\boldsymbol{x}_{\mathrm{EOS}}^{(L)}$ to be the representation of the entire string. This motivates the definition of the transformer encoding function $h$ : + +$$ +\boldsymbol{h}(\boldsymbol{w}) \stackrel{\text{def}}{=} \boldsymbol{x}_{\mathrm{EOS}}^{(L)}. \tag{34} +$$ + +This allows us to define a transformer's language, usually via a linear classifier on $\boldsymbol{h}(\boldsymbol{w})$ : + +$$ +\mathbb{L}(\mathsf{T}) \stackrel{\text{def}}{=} \left\{\boldsymbol{w} \in \Sigma^{*} \mid \boldsymbol{\theta}^{\top} \boldsymbol{h}(\boldsymbol{w}) > 0 \right\} \tag{35} +$$ + +for some $\pmb{\theta} \in \mathbb{R}^{D}$ . Since we are working with finite-precision transformers, the set of possible $\pmb{h}(\pmb{w})$ is finite (in our normal form, it is a subset of $\Sigma^{2^L}$ ). We can thus equate the condition $\pmb{\theta}^\top \pmb{h}(\pmb{w}) > 0$ with $\pmb{h}(\pmb{w})$ 's inclusion in a subset of $\Sigma^{2^L}$ and define the language of a transformer as follows. + +Definition D.2. Let $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ .
We define its language $\mathbb{L}(\mathsf{T})$ as + +$$ +\mathbb{L}(\mathsf{T}) \stackrel{\text{def}}{=} \left\{\boldsymbol{w} \in \Sigma^{*} \mid \boldsymbol{h}(\boldsymbol{w}) \in \mathcal{F}_{\mathsf{T}} \right\} \tag{36} +$$ + +where $\mathcal{F}_{\mathsf{T}}$ is a set of accepting final representations. + +# D.1.3 Unique Hard Attention + +Let $\mathcal{C} \in \{\min, \max\}$ and let $s \in \mathbb{R}^N$ . We define + +$$ +\operatorname{hardmax}(\boldsymbol{s})_{n} \stackrel{\text{def}}{=} \left\{ \begin{array}{ll} 1 & \text{if } n = \mathcal{C}(\arg\max(\boldsymbol{s})) \\ 0 & \text{otherwise} \end{array} \right. \tag{37} +$$ + +Here, $\operatorname{argmax} s$ denotes the set of indices attaining the maximum value in the vector $s$ . The function $\mathcal{C}$ selects a unique index from this set: $\mathcal{C} = \max$ corresponds to rightmost tiebreaking $\triangleright$ and $\mathcal{C} = \min$ corresponds to leftmost tiebreaking $\triangleleft$ . + +Definition D.3. Unique hard attention is attention computed with hardmax as the normalization function: + +$$ +\operatorname{norm}(s) = \operatorname{hardmax}(s) \tag{38} +$$ + +With some abuse of notation, we sometimes write $\mathrm{hardmax}(s)$ for the position $\mathcal{C}(\mathrm{argmax}(s))$ . + +We denote future or past masking with $\mathsf{F}$ or $\mathsf{P}$ , respectively. $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ , for example, denotes the class of transformers with future masking and leftmost attention. + +The following lemma is a restatement of Yang et al. (2024a, Lem. 22). + +Lemma D.1. Let $\mathsf{T}$ be a UHA transformer over $\Sigma$ . Denote with $\pmb{x}_t^{(\ell)}$ the contextual representation of the symbol $w_t$ at layer $\ell$ . The following holds: + +$$ +\left| \left\{\boldsymbol{x}_{t}^{(\ell)} \mid \boldsymbol{w} \in \Sigma^{*}, t \in [|\boldsymbol{w}|] \right\} \right| \leq |\Sigma|^{2^{\ell}}. \tag{39} +$$ + +Proof.
We prove the statement by induction on the number of layers. + +Base case: $\ell = 1$ . In the first layer, as we have static representations for symbols, the embedding at some position $t$ is uniquely determined by the symbol $w_{t}$ at that position. We thus have exactly $|\Sigma|$ possible representations for a given position, regardless of the length of the string. + +Inductive step: $\ell > 1$ . Suppose that the invariant holds for $\ell - 1$ : $|\{\pmb{x}_t^{(\ell - 1)} \mid \pmb{w} \in \Sigma^*, t \in [| \pmb{w}|]\}| \leq |\Sigma|^{2^{\ell - 1}}$ . For any position $t$ in the string, at layer $\ell$ , we compute $\pmb{x}_t^{(\ell)} = \pmb{a}_t + \pmb{x}_t^{(\ell - 1)}$ , where $\pmb{a}_t$ is the representation selected by the attention mechanism and $\pmb{x}_t^{(\ell - 1)}$ is the representation of the symbol at the previous layer $\ell - 1$ . By the induction hypothesis, the element $\pmb{x}_t^{(\ell - 1)}$ takes one out of at most $|\Sigma|^{2^{\ell - 1}}$ possible values. Moreover, the attention mechanism selects one element from the previous layer $\ell - 1$ , so $\pmb{a}_t$ also holds one out of at most $|\Sigma|^{2^{\ell - 1}}$ possible values by the induction hypothesis. The element $\pmb{x}_t^{(\ell)}$ thus holds one out of at most $|\Sigma|^{2^{\ell - 1}} \times |\Sigma|^{2^{\ell - 1}} = |\Sigma|^{2^{\ell}}$ representations, concluding the induction step and the proof. + +Contextual representations as elements of a finite set. Lem. D.1 allows us to simplify notation: Any representation of a symbol at layer $\ell$ is uniquely identified by $2^{\ell}$ symbols, i.e., $\pmb{x}^{(\ell)}\in \Sigma^{2^{\ell}}$ . We think of this as the collection of representations attended to at each layer $\ell^{\prime} < \ell$ , since each layer selects a single position to be added to the current representation. We will thus refer to $\pmb{x}^{(\ell)}$ as elements of $\Sigma^{2^{\ell}}$ . + +# D.1.4 An Invariance + +In this section and in App.
D.2, we use $\Sigma$ for the alphabet of input symbols and $\Xi$ for a general alphabet (finite set) of relevance. Later, $\Xi$ will correspond to sets of the form $\Sigma^{2^{\ell}}$ for some $\ell \in \mathbb{N}$ . + +Definition D.4. Let $\Xi$ be an alphabet and $w \in \Xi^*$ . We define the symbol order $\omega(w)$ of $w$ as the string obtained from $w$ by keeping only the first occurrence of each symbol. + +We define the following relation on $\Xi^{*}$ : + +$$ +\boldsymbol{w} \simeq_{\omega} \boldsymbol{w}^{\prime} \iff \omega(\boldsymbol{w}) = \omega\left(\boldsymbol{w}^{\prime}\right). \tag{40} +$$ + +It is not hard to verify that $\simeq_{\omega}$ is an equivalence relation on $\Xi^{*}$ and that $|\Xi^{*} / \simeq_{\omega}| = |\mathrm{OS}(\Xi)|$ , where $\mathrm{OS}(\Xi)$ is the (finite) set of all ordered subsets of $\Xi$ . We denote the equivalence class of $\boldsymbol{w} \in \Xi^{*}$ by $[\boldsymbol{w}] \in \Xi^{*} / \simeq_{\omega}$ . We have the following important invariance. + +Lemma D.2 (Attention invariance). Let $\mathcal{L}$ be a $\mathcal{T}_{\triangleleft}^{\mathsf{F}}$ transformer layer over $\Xi$ . For any $\boldsymbol{w}, \boldsymbol{w}' \in \Xi^{*}$ , if $\boldsymbol{w} \simeq_{\omega} \boldsymbol{w}'$ , then $\mathcal{L}_{|\boldsymbol{w}|}(\boldsymbol{w}) = \mathcal{L}_{|\boldsymbol{w}'|}(\boldsymbol{w}')$ . + +Proof. This follows directly from the definition of leftmost hard attention: Additional occurrences of symbols to the right of the first occurrence do not change the position attended to, meaning that the output at the final symbol is the same.
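The symbol order and the induced equivalence are straightforward to compute; a short Python sketch (the function names are our own):

```python
def symbol_order(w):
    """omega(w): keep only the first occurrence of each symbol (Def. D.4)."""
    seen, order = set(), []
    for s in w:
        if s not in seen:
            seen.add(s)
            order.append(s)
    return tuple(order)

def order_equivalent(w1, w2):
    """w1 and w2 are equivalent iff their symbol orders coincide (Eq. 40)."""
    return symbol_order(w1) == symbol_order(w2)
```

Lemma D.2 then says that the output of a leftmost-hard-attention layer at the final position of $\boldsymbol{w}$ depends on $\boldsymbol{w}$ only through $\omega(\boldsymbol{w})$.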
+ +# D.2 $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ as LTL[◇] + +We view a transformer layer as a function that takes a contextual representation $\boldsymbol{x}_t^{(\ell)}$ and returns a function that takes in an ordered set of contextual representations $\omega(\boldsymbol{w})$ with $\boldsymbol{w} \in \Xi^*$ and returns the representation chosen by the unique hard attention mechanism. + +$$ +\mathcal{L}^{(\ell)} \colon \Xi \to \operatorname{Map}(\mathrm{OS}(\Xi), \Xi), \tag{41a} +$$ + +$$ +\mathcal{L}^{(\ell)} \colon \boldsymbol{x} \mapsto \boldsymbol{\lambda}_{\boldsymbol{x}}. \tag{41b} +$$ + +$\lambda_{x}$ is the function that takes an ordered set of contextual representations $\mathcal{X} = (x_1, \ldots, x_N)$ and returns the representation chosen by UHA: + +$$ +\lambda_{x} \colon \mathrm{OS}(\Xi) \to \Xi , \tag{42a} +$$ + +$$ +\lambda_{\boldsymbol{x}}: \mathcal{X} \mapsto \boldsymbol{x}_{\operatorname{hardmax}\left(\left(\operatorname{score}\left(\boldsymbol{x}_{t}, \boldsymbol{x}_{t^{\prime}}\right)\right)_{\boldsymbol{x}_{t^{\prime}} \in \mathcal{X}}\right)}.
\tag{42b} +$$ + +We also define + +$$ +\boldsymbol{x}^{\prime\prime} \preceq_{\boldsymbol{x}} \boldsymbol{x}^{\prime} \iff \operatorname{score}(\boldsymbol{x}, \boldsymbol{x}^{\prime\prime}) \leq \operatorname{score}(\boldsymbol{x}, \boldsymbol{x}^{\prime}) \tag{43a} +$$ + +$$ +\boldsymbol{x}^{\prime\prime} \simeq_{\boldsymbol{x}} \boldsymbol{x}^{\prime} \iff \operatorname{score}(\boldsymbol{x}, \boldsymbol{x}^{\prime\prime}) = \operatorname{score}(\boldsymbol{x}, \boldsymbol{x}^{\prime}) \tag{43b} +$$ + +$$ +\boldsymbol{x}^{\prime\prime} \prec_{\boldsymbol{x}} \boldsymbol{x}^{\prime} \iff \boldsymbol{x}^{\prime\prime} \preceq_{\boldsymbol{x}} \boldsymbol{x}^{\prime} \text{ and } \neg\left(\boldsymbol{x}^{\prime\prime} \simeq_{\boldsymbol{x}} \boldsymbol{x}^{\prime}\right). \tag{43c} +$$ + +Theorem 7.1. Let $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ be a transformer. Then, there exists an equivalent formula $\psi \in \mathbf{LTL}[\diamond]$ . + +Proof. We define an $\mathbf{LTL}[\diamond]$ formula representing $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ with the attention mechanism implemented by the function $\lambda_{x}$ (cf. Eq. (42b)) by specifying a set of formulas representing each layer, starting with the initial one, and continuing inductively. At a high level, we define the formulas $\phi_{\pmb{x} \gets \pmb{y}}^{(\ell)}$ to mean that the contextual representation $\pmb{y}$ is added to the contextual representation $\pmb{x}$ at layer $\ell$ , i.e., that $\pmb{y}$ is the maximizer of the query-key score when $\pmb{x}$ is the query: $\lambda_{x}(\mathcal{X}) = \pmb{y}$ for $\mathcal{X}$ representing the current string. We construct the formulas layer by layer. + +Base case: $\ell = 1$ . We begin by specifying first-layer formulas, which work over $\Sigma$ .
We define for $a, b \in \Sigma$ : + +$$ +\phi_{a \leftarrow b}^{(1)} = \pi_{a} \wedge \bigwedge_{\substack{c \in \Sigma: \\ b \prec_{a} c}} \neg \diamond \pi_{c} \wedge \diamond \Big( \pi_{b} \wedge \bigwedge_{\substack{c \in \Sigma \setminus \{b\}: \\ c \simeq_{a} b}} \neg \diamond \pi_{c} \Big) \tag{44} +$$ + +In words, $b$ 's value is added to $a$ 's static embedding if $(i)$ there are no symbols in the past of $a$ that have a higher score than $b$ and $(ii)$ there exists a position with $b$ in the past such that there are no symbols with equal scores to its left (leftmost tiebreaking). + +Inductive step: $\ell > 1$ . Let now $\mathcal{L}$ be the transformer layer at layer $\ell > 1$ and assume we have correctly constructed $\phi_{\boldsymbol{x} \leftarrow \boldsymbol{y}}^{(\ell')}$ for $\ell' < \ell$ . + +Firstly, we define the formulas $\pi_{\pmb{x}}^{(\ell)}$ that, analogously to $\pi$ , specify the presence of contextual representations for elements in $\Sigma^{2^{\ell}}$ . Writing $\pmb{x} \in \Sigma^{2^{\ell}}$ as $\pmb{x} = (z_0, z_1, \dots, z_{\ell-1})$ with $z_0 \in \Sigma$ and $z_{\ell'} \in \Sigma^{2^{\ell'}}$ , $\ell' \in \{0, \dots, \ell-1\}$ , we define: + +$$ +\pi_{\boldsymbol{x}}^{(\ell)} = \pi_{z_{0}} \wedge \underbrace{\bigwedge_{\ell^{\prime} = 1}^{\ell - 1} \phi_{\boldsymbol{x}_{\leq \ell^{\prime} - 1} \leftarrow z_{\ell^{\prime}}}^{(\ell^{\prime})}}_{\text{Verify correct representations}} \tag{45} +$$ + +where $\pmb{x}_{\leq \ell'} = (z_0, \dots, z_{\ell'})$ . This checks the presence of $\pmb{x} \in \Sigma^{2^\ell}$ as the contextual representation by asserting that the individual representations in $\pmb{x}$ at lower levels were indeed added, by checking the formulas $\phi_{\boldsymbol{x}_{\leq \ell' - 1} \leftarrow z_{\ell'}}^{(\ell')}$ .
+ +We now define, for $\pmb{x}, \pmb{y} \in \Xi = \Sigma^{2^{\ell}}$ : + +$$ +\phi_{\boldsymbol{x} \leftarrow \boldsymbol{y}}^{(\ell)} = \bigvee_{\mathcal{X} \in \mathrm{OS}(\Xi)} \Big[ \underbrace{\operatorname{order}^{(\ell - 1)}(\mathcal{X})}_{\text{Identify the correct ordered subset of } \Xi} \wedge \underbrace{\operatorname{best}^{(\ell - 1)}(\boldsymbol{x}, \boldsymbol{y}, \mathcal{X})}_{\text{Check whether } \boldsymbol{y} \text{ is the best representation for } \boldsymbol{x} \text{ given } \mathcal{X}} \Big] \tag{46} +$$ + +Intuitively, the formula iterates over all possible ordered subsets of $\Xi$ , checks which one describes the string in the past, and then asserts whether $\pmb{y}$ is the best symbol to add to $\pmb{x}$ given the set of contextual representations $\mathcal{X}$ . Here, $\operatorname{order}^{(\ell)}$ is a formula that checks whether $\omega(\pmb{w}) = \mathcal{X}$ by making sure that the string in the past follows the same order as $\mathcal{X}$ : + +$$ +\operatorname{order}^{(\ell)}(\mathcal{X}) \stackrel{\text{def}}{=} \underbrace{\left[ \diamond \left(\pi_{z_{1}}^{(\ell)} \wedge \diamond \left(\pi_{z_{2}}^{(\ell)} \wedge \cdots \wedge \left(\diamond \pi_{z_{|\mathcal{X}|}}^{(\ell)}\right)\right)\right) \right]}_{\text{Elements of } \mathcal{X} \text{ are present in the correct order}} \wedge \underbrace{\bigwedge_{z \in \Xi \setminus \mathcal{X}} \neg \diamond \pi_{z}^{(\ell)}}_{\text{Representations not in } \mathcal{X} \text{ are absent}} \tag{47} +$$ + +Analogously to Eq.
(44), $\operatorname{best}^{(\ell)}$ checks whether $\boldsymbol{y}$ is the best symbol to add to $\boldsymbol{x}$ given the set of contextual representations $\boldsymbol{\mathcal{X}}$ by $(i)$ asserting that $\boldsymbol{x}$ is at the current position, $(ii)$ asserting that there are no representations in the past of $\boldsymbol{x}$, given the current ordered subset $\boldsymbol{\mathcal{X}}$, that have a higher score than $\boldsymbol{y}$, and $(iii)$ asserting that there exists a position with $\boldsymbol{y}$ in the past such that there are no representations with equal scores to its left (leftmost tiebreaking):

$$
\operatorname{best}^{(\ell)}(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\mathcal{X}}) \stackrel{\text{def}}{=} \pi_{\boldsymbol{x}}^{(\ell)} \wedge \underbrace{\bigwedge_{\substack{\boldsymbol{z} \in \boldsymbol{\mathcal{X}}: \\ \boldsymbol{y} \prec_{\boldsymbol{x}} \boldsymbol{z}}} \neg \diamondsuit \pi_{\boldsymbol{z}}^{(\ell)}}_{\text{No previous representations with higher scores than } \boldsymbol{y}} \wedge \diamondsuit \Bigg( \pi_{\boldsymbol{y}}^{(\ell)} \wedge \underbrace{\bigwedge_{\substack{\boldsymbol{z} \in \boldsymbol{\mathcal{X}} \setminus \{\boldsymbol{y}\}: \\ \boldsymbol{y} \simeq_{\boldsymbol{x}} \boldsymbol{z}}} \neg \diamondsuit \pi_{\boldsymbol{z}}^{(\ell)}}_{\text{Leftmost tiebreaking}} \Bigg). \tag{48}
$$

Finally, let $\mathcal{L}$ be the final transformer layer and let $F \subseteq \Sigma^{2^L}$ be the set of representations for EOS that lead to string acceptance by $\mathsf{T}$. The LTL[◇] formula $\psi$ representing $\mathsf{T}$ simply has to check whether the representation for EOS is in $F$:

$$
\psi = \bigvee_{\boldsymbol{x} \in F} \pi_{\boldsymbol{x}}^{(L)}. \tag{49}
$$

# D.3 $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ as POFAs

Theorem D.1. Let $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ be a transformer. Then, there exists an equivalent POFA.

Proof.
We will construct a semiautomaton $\mathcal{A}$ that will, after reading $w_{\leq t}$, store in its state the ordered subsets $\mathcal{X}^{(\ell)} \in \mathrm{OS}(\Sigma^{2^{\ell}})$ of contextual representations for all the transformer layers. In other words, it will hold $L$ equivalence classes of the string $w_{\leq t}$, one for each layer. This state will be updated sequentially according to the self-attention mechanism implemented by the transformer.

Formally, given a transformer $\mathsf{T} \in \mathcal{T}_{\triangleleft}^{\mathsf{F}}$ with the attention mechanism implemented by the function $\lambda_{x}$ (cf. Eq. (42b)) over the alphabet $\Sigma$, we define the semiautomaton $\mathcal{A} = (\Sigma, Q, \delta)$. We take the set of states $Q$ to be

$$
Q \stackrel{\text{def}}{=} \underbrace{\mathrm{OS}(\Sigma) \times \cdots \times \mathrm{OS}\left(\Sigma^{2^{L}}\right)}_{\text{Ordered sets of representations of all layers}}. \tag{50}
$$

For clarity, we will explicitly write out the states $q \in Q$ in their components:

$$
q = \left( \begin{array}{c} \mathcal{X}^{(0)} \\ \mathcal{X}^{(1)} \\ \vdots \\ \mathcal{X}^{(L)} \end{array} \right) \tag{51}
$$

with $\mathcal{X}^{(\ell)} \in \mathrm{OS}(\Sigma^{2^{\ell}})$ for $\ell \in \{0, \dots, L\}$.

$\mathcal{A}$ will update $\mathcal{X}^{(\ell)}$ with new occurrences of contextual representations by "appending" new contextual representations. Let us describe how the transition function $\delta$ updates the state $q$ upon reading the symbol $w$. We write $q$ for the source state and $\mathcal{X}^{(\ell)}$ for its components, and $q'$ for the target state and $\mathcal{X}'^{(\ell)}$ for its components.

We then define $\boldsymbol{x}^{\prime(0)} = w$ to mean that the static representation of this symbol is the symbol itself.
For $\ell \geq 1$, we define

$$
\boldsymbol{x}^{\prime(\ell)} \stackrel{\text{def}}{=} \binom{\boldsymbol{x}^{\prime(\ell-1)}}{\lambda_{\boldsymbol{x}^{\prime(\ell-1)}}\left(\mathcal{X}^{(\ell-1)}\right)} \tag{52}
$$

which simulates the $\ell^{\mathrm{th}}$ layer of $\mathsf{T}$ by (1) copying the symbol's representation $\boldsymbol{x}^{\prime(\ell-1)}$ from the previous layer into the first component (residual connection) and (2) computing the attended-to representation based on all the contextual representations seen so far at the previous layer $(\mathcal{X}^{(\ell-1)})$ and the symbol's representation at the previous layer $(\boldsymbol{x}^{\prime(\ell-1)})$. Crucially, Eq. (52) can be computed in advance (knowing the scoring function $\operatorname{score}$) for any $\mathcal{X}^{(\ell)} \in \mathrm{OS}(\Sigma^{2^{\ell}})$ and $\boldsymbol{x} \in \Sigma^{2^{\ell}}$ for all $\ell \in \{0, \dots, L\}$ due to the finiteness of all the considered sets.

We then update the set of observed contextual representations as

$$
\mathcal{X}^{\prime(\ell)} \stackrel{\text{def}}{=} \mathcal{X}^{(\ell)} \cup \left\{\boldsymbol{x}^{\prime(\ell)}\right\} \tag{53}
$$

to incorporate the information about the new contextual representations into the ordered sets of seen contextual representations at each layer $\ell \in \{0, \dots, L\}$. The union in Eq. (53) is to be interpreted as adding an element to an ordered set.

Defining

$$
q' = \left( \begin{array}{c} \mathcal{X}^{\prime(0)} \\ \mathcal{X}^{\prime(1)} \\ \vdots \\ \mathcal{X}^{\prime(L)} \end{array} \right) \tag{54}
$$

and setting $\delta(q, w) = q'$, for all choices of $q$ and $w$, we have defined the transition function $\delta$ that updates each state with the new contextual representations at each layer.
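The transition just defined can be sketched in code. This is an illustrative simulation under simplifying assumptions: representations are modelled as nested tuples, ordered subsets as duplicate-free lists, and `attend` is a stand-in for $\lambda_{x}$ from Eq. (42b).

```python
# Sketch of one transition delta(q, w) of the semiautomaton (Eqs. 50-54).
# A state q is a list of L+1 ordered subsets X^(0), ..., X^(L), each modelled
# as a duplicate-free list; `attend(x, X)` is a placeholder for lambda_x,
# returning the representation x attends to over the previously seen set X.

def delta(q, w, attend, L):
    q_next = []
    x = w  # x'^(0): the static representation is the symbol itself
    for level in range(L + 1):
        if level > 0:
            # Eq. (52): residual copy of x'^(l-1) paired with the attention
            # result over the previously seen representations X^(l-1)
            x = (x, attend(x, q[level - 1]))
        # Eq. (53): ordered-set union, i.e. append only if unseen
        seen = q[level]
        q_next.append(seen if x in seen else seen + [x])
    return q_next

# Toy run with a placeholder attention picking the lexicographic maximum.
attend = lambda x, X: max(X) if X else None
q0 = [[], []]            # initial state for L = 1: all ordered subsets empty
q1 = delta(q0, "a", attend, 1)
print(q1)  # [['a'], [('a', None)]]
```

Because the ordered subsets only ever grow, iterating `delta` over a string visits a monotone chain of states, which is exactly what makes the resulting semiautomaton partially ordered.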
The set of observed contextual representations $\mathcal{X}$ satisfies a partial order: since a new representation at a new position is appended at every step, the semiautomaton only ever transitions into novel states. This further implies that the semiautomaton is partially ordered.

We construct an automaton from $\mathcal{A}$ by setting the initial and final states. We set the initial state to be the one with empty ordered subsets of observed representations: $\mathcal{X}^{(\ell)} \stackrel{\mathrm{def}}{=} \emptyset$ for all $\ell \in \{0, \dots, L\}$. A subset of $\mathrm{OS}(\Sigma^{2^L})$ will lead to $\mathsf{T}$ accepting a string. We set the states whose $\mathcal{X}^{(L)}$ lies in that subset to be final, yielding an equivalent automaton.

# E Duality with $\mathcal{T}_{\triangleright}^{\mathsf{P}}$, LTL[◇]

The class of transformers $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$, the main focus of the paper, has a natural symmetric characterization in $\mathcal{T}_{\triangleright}^{\mathrm{P}}$: While $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ can only peek strictly into the past, $\mathcal{T}_{\triangleright}^{\mathrm{P}}$ can symmetrically only peek strictly into the future using strict past masking and rightmost UHA. $\mathcal{T}_{\triangleright}^{\mathrm{P}}$ can be informally seen as transformers in $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ that instead read symbols right-to-left, only considering the future when updating a symbol representation, analogously to the duality between $\mathcal{R}$-trivial and $\mathcal{L}$-trivial languages (Brzozowski and Ellen, 1980).

Thus, all results for $\mathcal{T}_{\triangleleft}^{\mathrm{F}}$ and LTL[◇] apply analogously to the symmetric case of $\mathcal{T}_{\triangleright}^{\mathrm{P}}$ and the dual of LTL[◇], which only permits peeking into the future rather than the past.
Similarly, partially ordered reverse automata (RPOFAs) are semiautomata whose reversals (the automata constructed by inverting the directionality of the transitions) are POFAs. The reverse of an RPOFA is then homomorphic to a cascade product of half-resets. We thus can write the dual statement of Thm. 3.2.

Theorem E.1. Let $\mathsf{T} \in \mathcal{T}_{\triangleright}^{\mathsf{P}}$ be a transformer. Then, there exists an equivalent formula $\psi \in \mathbf{LTL}[\diamondsuit]$.

Let $\psi \in \mathbf{LTL}[\diamondsuit]$ be a formula. Then, there exists an equivalent transformer $\mathsf{T} \in \mathcal{T}_{\triangleright}^{\mathsf{P}}$. \ No newline at end of file diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/images.zip b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3a13f1a881e5263a0799c5865a7935a985048225 --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:994c39f5e0ab5a1f9fcadad55581229fe4e765dd4392bac47a4893d8c5e85ca1 +size 469971 diff --git a/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/layout.json b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2d9195ddacf339b3fe4fef662a6adeedd85facd2 --- /dev/null +++ b/ACL/2025/Unique Hard Attention_ A Tale of Two Sides/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecde3281874f487ecde06f79adef0bf84fc2aeb9f18e497cde125850aee4f9d0 +size 1261990 diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_content_list.json b/ACL/2025/Using Subtext to Enhance
Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc02525cf451f470893b415fd650eb5b2f75dda6bcce45ae0ad63b1d80ad54e7 +size 76636 diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_model.json b/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_model.json new file mode 100644 index 0000000000000000000000000000000000000000..70dd376ed8e03d4adf0765d5437765d244b8699e --- /dev/null +++ b/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed8e903ab38a0de52ec17ea09b3727154222a6d37fc2591344a8c00afd2abae0 +size 89001 diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_origin.pdf b/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9a285e8470db58e6b0e23620676778aa6486d4f3 --- /dev/null +++ b/ACL/2025/Using Subtext to Enhance Generative IDRR/9d2603c3-4112-45f9-81bd-fd63bce2d443_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfdf881ecaf86cbadc168576b7a2ff3d2ece0cbf3c6d548a9d2e9fe2df03ed7a +size 842039 diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/full.md b/ACL/2025/Using Subtext to Enhance Generative IDRR/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ee4b3a364b122fde4a7c26111ed186da05be5d33 --- /dev/null +++ b/ACL/2025/Using Subtext to Enhance Generative IDRR/full.md @@ -0,0 +1,334 @@ +# Using Subtext to Enhance Generative IDRR + +Zhipang Wang, Yu Hong*, Weihao Sun, Guodong Zhou + +School of Computer Science and Technology, Soochow University, Suzhou, China + +{zhipangwang, tianxianer, whsun16} $@$ gmail.com; gdzhou@suda.edu.cn + +# Abstract + 
Implicit Discourse Relation Recognition (abbr., IDRR) is an NLP task of classifying argument pairs into different types of semantic relations. Arguments contain subtexts, some of which are beneficial to the perception of semantic relations. However, subtexts are connotative, and neural IDRR models fail to be aware of them without being given pertinent prompts. In this paper, we leverage LLaMA to generate subtexts for argument pairs, and verify the effectiveness of subtext-based IDRR. We construct an IDRR baseline using the decoder-only backbone LLaMA, and enhance it with subtext-aware relation reasoning. A confidence-diagnosed dual-channel network is used for collaboration between in-subtext and out-of-subtext IDRR. We experiment on PDTB-2.0 and PDTB-3.0 for both the main-level and secondary-level relation taxonomies. The test results show that our approach yields substantial improvements over the baseline, and achieves higher $F1$-scores on both benchmarks than the previous decoder-only IDRR models. We make the source code and data publicly available. $^{1}$

# 1 Introduction

IDRR determines the semantic relation between arguments when the in-between connective is absent (Prasad et al., 2008). For example, it outputs the relation "Concession" for the arguments $Arg1$ and $Arg2$ in 1), where the possible connective "however" is not given in the source text:

1) Arg1: The new rate will be payable Feb. 15.

Arg2: A record date hasn't been set.

Relation: Concession

Encoder-only language models such as RoBERTa (Long and Webber, 2022; Wu et al., 2023; Cai et al., 2024) and XLNet (Jiang et al., 2024) have been used for IDRR, where multi-class relation classification is conducted by linear layers with Softmax.
Meanwhile, both T5 (Jiang et al., 2021; Chan et al., 2023) and decoder-only Large Language Models (LLMs) like GPT-3.5 (Chan et al., 2024) and GPT-4 (Yung et al., 2024) have also been verified for IDRR, where relations are generated conditioned on prompts and/or CoT. Significant improvements are reported in these works.

Subtext hasn't been considered in the study of IDRR, though it is potentially useful for enhancing IDRR models. A subtext is characterized by the metaphorical meaning hidden in the arguments. For example, the subtext of the two arguments in 1) is, most probably, "the rate should be recorded earlier though it hasn't been". Such a subtext is more explicit, or even straightforward, in revealing the Concessive relation. Accordingly, we suggest that subtext can be used as crucial evidence for enhancing the perception of implicit relations.

In this paper, we explore the method of applying subtexts, and systematically investigate its effectiveness for LLM-based generative IDRR. Our effort provides a preliminary study and aims to stimulate innovative research in subtext-based IDRR enhancement. Specifically, our contributions are summarized as follows:

- We are the first to propose a Subtext-based Confidence-diagnosed Dual-channel Network (SCDN) for IDRR. In SCDN, subtext is generated by an LLM, and confidence comparison is conducted to reconcile in-subtext and out-of-subtext IDRR.
- We verify the effectiveness of SCDN on the benchmarks PDTB-2.0 and 3.0 (Webber et al., 2019). We report the varied influences of the settings of prompting, confidence diagnosis, subtext generation and augmentation.

![](images/c095c8db91f6bcf9cacaf1f10875c8bfa5b44938b8732d0b58ba9be9a65d380e.jpg)
Figure 1: Architecture of SCDN.

# 2 Approach

Figure 1 shows the architecture of SCDN, which is constructed with three LLMs $\mathcal{M}_{\alpha}$, $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$.
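The decision flow among the three modules in Figure 1 can be sketched as follows. This is an illustrative sketch only: the generators and the confidence function are mocked as plain functions, whereas in SCDN they are fine-tuned LLMs and a token-level probability estimate.

```python
# Illustrative sketch of the SCDN decision flow (Figure 1).  All components
# are mocked stand-ins; in SCDN they are fine-tuned LLaMA3 models.

def scdn_predict(arg1, arg2, m_alpha, m_beta, m_lambda, confidence, thresholds):
    subtext = m_alpha(arg1, arg2)               # subtext generation
    r_beta = m_beta(arg1, arg2)                 # out-of-subtext relation
    r_lambda = m_lambda(arg1, arg2, subtext)    # in-subtext relation
    # confidence-diagnosed reconciliation with a type-specific threshold
    if confidence(r_lambda) > thresholds[r_lambda]:
        return r_lambda
    return r_beta

# Mocked components for demonstration only.
pred = scdn_predict(
    "The new rate will be payable Feb. 15.",
    "A record date hasn't been set.",
    m_alpha=lambda a1, a2: "the rate should be recorded earlier though it hasn't been",
    m_beta=lambda a1, a2: "Expansion",
    m_lambda=lambda a1, a2, s: "Comparison",
    confidence=lambda r: 0.9,
    thresholds={"Comparison": 0.5},
)
print(pred)  # "Comparison"
```

If the mocked confidence fell below the threshold, the out-of-subtext prediction would be returned instead, which is the fallback behaviour of the diagnoser.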
$\mathcal{M}_{\alpha}$ serves to generate the subtext for the given arguments. $\mathcal{M}_{\beta}$ takes the arguments as input and relies on them alone for relation reasoning. $\mathcal{M}_{\lambda}$ combines the generated subtext and the arguments, and infers the relation from all of them. A probabilistic diagnosis model (diagnoser) is used to reconcile the decisions of $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$ based on confidence estimation.

# 2.1 Subtext and Relation Generators

Our baseline is the generator $\mathcal{M}_{\beta}$, which performs out-of-subtext IDRR. We prompt it by Question Answering (QA). Given the arguments $\dot{A}$ and $\ddot{A}$, we combine them with a question $Q_{\beta}$ of "what is the relation between arguments": $I_{\beta} = [\{\dot{A},\ddot{A}\};Q_{\beta}]$. We feed $I_{\beta}$ into $\mathcal{M}_{\beta}$ to generate a relation label.

To fulfill in-subtext IDRR, we construct a subtext generator $\mathcal{M}_{\alpha}$. Its input is formed by the arguments and a prompting question $Q_{\alpha}$ of "what is the implicit meaning": $I_{\alpha} = [\{\dot{A},\ddot{A}\};Q_{\alpha}]$. No constraint, such as a limit on subtext length, is applied to subtext generation (i.e., $\mathcal{M}_{\alpha}(I_{\alpha})$). Further, we build the generator $\mathcal{M}_{\lambda}$ to perform in-subtext IDRR. It uses both the subtext and the arguments as input, and combines them with a multi-choice question $Q_{\lambda}$: $I_{\lambda} = [\{\dot{A},\ddot{A}\};\{S\};Q_{\lambda}]$. The question $Q_{\lambda}$ is designed as "what is the relation between arguments given subtext", which allows $\mathcal{M}_{\lambda}$ to generate a relation label in the manner of multi-choice QA (Yung et al., 2024) as follows.

2) $Q_{\lambda}$: What is the relation of $\dot{A}$ and $\ddot{A}$ given $S$?

A. Contingency

B. Expansion

C. Temporality

D.
Comparison

In our experiments, we uniformly use LLaMA3-8B-Instruct (Dubey et al., 2024) to construct the generators $\mathcal{M}_{\alpha}$, $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$. Due to the zero-resource situation, i.e., no ground-truth subtexts are provided in PDTB-2.0 and 3.0, we train the subtext generator $\mathcal{M}_{\alpha}$ by teacher-student knowledge distillation (Hu et al., 2023). GPT-3.5-turbo (Brown et al., 2020) is used as the teacher.

# 2.2 Confidence Diagnoser

It is unavoidable that the subtext-based generator encounters two problems: 1) the arguments may not inherently contain a subtext, and 2) the generated subtext may be unqualified. To relieve these problems, we use a diagnoser to reconcile $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$, where $\mathcal{M}_{\beta}$ conducts out-of-subtext IDRR, while $\mathcal{M}_{\lambda}$ additionally uses the subtext for in-subtext IDRR.

Assume $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$ output the relations $R_{\beta}$ and $R_{\lambda}$, respectively. The diagnoser first verifies the reliability of $R_{\lambda}$. A confidence score $\mathcal{C}$ is measured for verification: $\mathcal{C}$ is the average log-probability over all the tokens of the label output by $\mathcal{M}_{\lambda}$, where each token probability $p_{\mathcal{M}_{\lambda}}(t_{i})$ is the non-normalized probability estimated by the logistic function in the final layer of LLaMA3:

$$
\mathcal{C} = \frac{1}{|R_{\lambda}|} \sum_{t_{i} \in R_{\lambda}} \log p_{\mathcal{M}_{\lambda}}(t_{i}) \tag{1}
$$

On this basis, the diagnoser verifies whether $\mathcal{C}$ is larger than a type-specific threshold $\theta$. If it is larger, the diagnoser determines that $R_{\lambda}$ is reliable for output; otherwise, the prediction $R_{\beta}$ of $\mathcal{M}_{\beta}$ is adopted. In our experiments, we provide an exclusive threshold for each relation type in the taxonomy of PDTB.
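One concrete reading of Eq. (1) can be sketched as follows. This is an illustrative sketch: it uses a normalized log-softmax over made-up logits, which stand in for the model's final-layer outputs.

```python
import math

# Sketch of Eq. (1): the confidence score C as the average log-probability of
# the generated label tokens.  `logits_per_token` (one raw logit vector per
# generated token) and `token_ids` (the sampled indices) are made-up stand-ins
# for the model's final-layer outputs.

def label_confidence(logits_per_token, token_ids):
    total = 0.0
    for logits, tid in zip(logits_per_token, token_ids):
        log_norm = math.log(sum(math.exp(z) for z in logits))
        total += logits[tid] - log_norm  # log-softmax of the chosen token
    return total / len(token_ids)

# Two tokens with uniform logits: each contributes log(1/2), so C = -log 2.
print(label_confidence([[0.0, 0.0], [0.0, 0.0]], [0, 1]))  # ≈ -0.693
```

The resulting score is then compared against a per-relation-type threshold to decide which channel's prediction to emit.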
They are obtained by empirically observing the IDRR performance on the training set. More details of the threshold settings and performance curves can be found in Appendix A.

# 3 Experiments

# 3.1 Dataset and Evaluation Metrics

We experiment on two versions of the discourse relation analysis dataset, PDTB-2.0 (Prasad et al., 2008) and PDTB-3.0 (Webber et al., 2019). Following previous work, we use sections 0-22 for IDRR, where sections 2-20 are used for training, sections 0-1 serve as the development set, and sections 21-22 are used for testing. Appendix B shows the data statistics of all the datasets.

Multi-class Macro-F1 $(F_{1})$ and accuracy rate $(Acc)$ are used as the evaluation metrics.

# 3.2 Implementation Details

We use the AdamW optimizer (Loshchilov and Hutter, 2019) to optimize LLaMA3. For the subtext generator $\mathcal{M}_{\alpha}$, the learning rate is set to 1e-4 and a 5-epoch training process is conducted. For the relation generators $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$, the learning rates are uniformly set to 5e-5, and the best checkpoint is selected based on $F_{1}$ within 10 epochs. Both
| Method | Backbone Model | Parameters | PDTB2 F1 | PDTB2 Acc | PDTB3 F1 | PDTB3 Acc |
| --- | --- | --- | --- | --- | --- | --- |
| ChatGPT (Chan et al., 2024) | GPT-3.5-turbo | - | 36.11 | 44.18 | - | - |
| PIDRA (Yung et al., 2024) | GPT-4 | - | - | - | 47.53 | 52.84 |
| FCL (Long and Webber, 2022) | RoBERTa-base | 125M | 69.60 | 72.18 | 70.05 | 75.31 |
| CP-KD (Wu et al., 2023) | RoBERTa-base | 125M | 68.86 | 75.43 | 72.07 | 77.00 |
| CP-KD (Wu et al., 2023) | RoBERTa-large | 355M | **71.88** | **76.77** | **75.52** | **78.56** |
| SCIDER (Cai et al., 2024) | RoBERTa-base | 125M | 67.00 | 72.11 | - | - |
| OTMT (Jiang et al., 2024) | XLNet-large | 355M | 64.46 | 72.34 | - | - |
| CG-T5 (Jiang et al., 2021) | T5-base | 223M | 57.18 | 65.54 | - | - |
| DiscoPrompt (Chan et al., 2023) | T5-base | 223M | 65.79 | 71.70 | - | - |
| DiscoPrompt (Chan et al., 2023) | T5-large | 738M | 70.84 | 75.65 | - | - |
| IICOT (Lu et al., 2023) | Flan-T5-base | 248M | 65.26 | 71.13 | 69.79 | 73.98 |
| IICOT (Lu et al., 2023) | Flan-T5-large | 783M | 69.23 | 76.04 | 73.06 | 77.46 |
| Baseline | Llama3-8B-Instruct | 8.03B | 66.72 | 73.90 | 70.71 | 75.31 |
| SCDN (ours) | Llama3-8B-Instruct | 8.03B | **71.14** | **78.20** | **73.33** | **76.93** |
+ +Table 1: Performance on PDTB 2.0/3.0. Encoder-only PLMs, decoder-only LLMs and T5-based encoder-decoder models are considered. The best results are separately marked in bold for encoder-only and decoder-only models. + +
| Model | PDTB2 F1 | PDTB2 Acc | PDTB3 F1 | PDTB3 Acc |
| --- | --- | --- | --- | --- |
| Out-of-subtext | 66.72 | 73.90 | 70.71 | 75.31 |
| In-subtext | 70.56 | 77.82 | 72.79 | 76.32 |
| SCDN | 71.14 | 78.20 | 73.33 | 76.93 |
Table 2: Test results in ablation study.

employ a weight decay of 1e-2, a batch size of 1, and gradient accumulation over 8 steps. We don't extensively tune the hyperparameters.

All experiments are performed on an NVIDIA A100 GPU. Our model implementations are based on PyTorch $^{2}$ and the Transformers library $^{3}$.

# 3.3 Main Results

We compare SCDN to recently proposed advanced models, including 1) the decoder-only ChatGPT (Chan et al., 2024) and PIDRA (Yung et al., 2024), 2) the encoder-only FCL (Long and Webber, 2022), CP-KD (Wu et al., 2023), SCIDER (Cai et al., 2024), and OTMT (Jiang et al., 2024), as well as 3) the T5-based CG-T5 (Jiang et al., 2021), DiscoPrompt (Chan et al., 2023), and IICOT (Lu et al., 2023). The decoder-only models take full advantage of prompt engineering for IDRR. The encoder-only models are effective in representation learning for relation understanding. T5-based models combine the advantages of both. More contributions of these works are summarized in Appendix C.

Table 1 shows the comparison results on the test set, where the 4-way relation classification performance on the main relation taxonomy is reported. It can be observed that our SCDN achieves higher $F1$-scores than both the decoder-only and T5-based IDRR models. However, it still fails to outperform the encoder-only models. This is partially attributable to hallucination by LLaMA3 and the off-topic results it generates.

Besides, we conduct experiments on the 2nd-level taxonomy, where each main relation type is divided into fine-grained relation senses. For example, the relation type "Contingency" contains the senses "Conditionality" and "Causality". In this experiment, SCDN shows promising performance, which is reported in Appendix D due to the page limit.

# 3.4 Ablation Study

To provide direct insight into the influence of subtexts, we conduct an ablation study.
Three IDRR models are considered: 1) the out-of-subtext generator $\mathcal{M}_{\beta}$, which is fine-tuned separately without using subtexts, 2) the in-subtext generator $\mathcal{M}_{\lambda}$, which additionally uses the generated subtexts during fine-tuning, and 3) SCDN, which uses both $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$ and reconciles them with the confidence-based diagnoser.

Table 2 shows the test results for the main relation taxonomy. It proves that the utilization of
| Subtext Generation Model | F1 | Acc |
| --- | --- | --- |
| GPT-3.5-turbo | 71.55 | 75.98 |
| LLaMA3 w/o Distill | 71.07 | 75.37 |
| LLaMA3 w/ Distill (Partial) | 71.66 | 76.19 |
| LLaMA3 w/ Distill (Whole) | 72.79 | 76.32 |
subtexts yields various levels of improvement. Appendix E provides a case study to show the effect.

# 3.5 Comparison among Subtext Generators

Qualified subtexts are crucial for SCDN. We investigate the subtexts generated by different LLMs, and verify their effects using our in-subtext model. Three types of LLMs are considered: 1) GPT-3.5-turbo, 2) LLaMA3-8B-Instruct, and 3) LLaMA3 strengthened by teacher-student knowledge distillation. During distillation, the subtexts generated by GPT-3.5-turbo for all the training data are specified as Guidance Data from the teacher. We use guidance data of two different sizes: Partial and Whole. In the "Partial" case, we only adopt the guidance data that enables the subtext-based relation generator $\mathcal{M}_{\lambda}$ to output correct results. In the "Whole" case, all the guidance data is used.

Table 3 shows the performance of the in-subtext IDRR models on the test set of PDTB 3.0, where different subtext generators are used. It can be observed that, compared to LLaMA3 (w/o distillation), GPT-3.5 enables the in-subtext model to perform better. Furthermore, no matter whether "Partial" or "Whole" guidance data is used, knowledge distillation yields improvements, and the latter case improves the in-subtext model more substantially. It is surprising that distillation allows the weaker LLaMA3 to be more contributive than its teacher GPT-3.5. A possible reason is that LLaMA3 plays to its own strengths while absorbing beneficial experience from GPT-3.5.

# 3.6 Prompts for Subtext Generation

The reliability of subtexts depends heavily on the design of the prompts. For example, if we don't remind the LLMs of the ultimate purpose (i.e., application to IDRR), they fail to provide reliable subtexts. In our experiments, we evaluate different prompts as follows:

- $\mathrm{P_1}$: It contains $Q_{1}$ and $Q_{2}$ (Section 2.1) that ask for subtext generation and IDRR in turn.
+ +Table 3: Contributions from different subtext generators. + +
| Prompt | LLM | F1 | Acc |
| --- | --- | --- | --- |
| P1 | GPT-3.5-turbo | 35.88 | 41.79 |
| P2 | GPT-3.5-turbo | 38.30 | 42.76 |
| P3 | GPT-3.5-turbo | 39.24 | 43.72 |
| P1 | GPT-4-turbo | 44.84 | 50.07 |
| P2 | GPT-4-turbo | 46.29 | 52.21 |
Table 4: Reliability of Prompts (on PDTB 3.0).

- $\mathrm{P}_2$: It replaces the key words in $\mathrm{P}_1$ with synonyms, e.g., "subtext" is replaced with "implicit meaning".

- $\mathrm{P}_3$: It expands $\mathrm{P}_2$ by adding a prefix to $Q_1$, where the prefix is an additional question asking whether there truly is a subtext in the considered arguments. This prompt helps to avoid forcible subtext generation.

Table 4 shows the IDRR performance on the development set when the above prompts are separately used, where GPT-3.5 and GPT-4 are considered during the validation process. It can be observed that both synonym replacement and non-forcible subtext generation yield improvements. Accordingly, $\mathrm{P_2}$ has been adopted in our SCDN. However, $\mathrm{P_3}$ is not used in SCDN, as it actually causes performance degradation. For example, with $\mathrm{P_3}$, the in-subtext generator $\mathcal{M}_{\lambda}$ obtains an $F1$-score of $71.7\%$ on the test set of PDTB 3.0, a reduction of about $1.1\%$ (compared to the in-subtext case in Table 2). This implies that LLaMA3 in $\mathcal{M}_{\lambda}$ is unable to effectively perform reasoning with a relatively complex Chain-of-Thought (CoT). Besides, $\mathrm{P_3}$ also causes a severe performance reduction when it is used in SCDN. This is because only a limited number of subtexts are generated by GPT-3.5 under the constraint from $\mathrm{P_3}$, and thus GPT-to-LLaMA distillation falls into a low-resource scenario.

# 4 Conclusion

In this paper, we verify that the utilization of subtexts helps to strengthen LLM-based generative IDRR. Experiments demonstrate that reconciling in-subtext and out-of-subtext IDRR is effective. We also show that distilling a lightweight LLM-based subtext generator is contributive, provided the prompt doesn't involve a complex CoT.
In the future, we will investigate the default subtext, which isn't implied in the given arguments. On the contrary, it derives from common-sense knowledge. Accordingly, we will convert binary argument analysis into triplet analysis, where the default subtext is regarded as a third, non-negligible argument.

# 5 Limitations

This study proves the effectiveness of utilizing subtexts for enhancing IDRR. Nevertheless, a deeper analysis of subtexts is still required. Our findings reveal that, in some cases, the shareable subtext is implied in one argument but irrelevant to the other, where the irrelevant argument acts as noise when detecting the subtext. In other cases, a default subtext occurs, which is not implied in either argument but derives from common-sense knowledge. The subtext generation method in this paper cannot deal with these two problems. In the future, we will first study common-sense-based default subtext generation. On this basis, we will convert the conventional binary argument analysis into triplet analysis, where the default subtext is used as a supplementary argument. This work will encounter the issues of 1) how to determine whether a pair of arguments is relevant to some default subtexts, and what they are, 2) how to detect and generate default subtexts conditioned on common-sense knowledge, and 3) how to reconcile the utilization of a triple of arguments and assign proper attention to them during relation discrimination.

# Acknowledgments

This work was supported by the National Natural Science Foundation of China under No.62376182.

# References

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. + +Mingyang Cai, Zhen Yang, and Ping Jian. 2024. Improving implicit discourse relation recognition with + +semantics confrontation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8828-8839, Torino, Italia. ELRA and ICCL. + +Chunkit Chan, Cheng Jiayang, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, and Yangqiu Song. 2024. Exploring the potential of ChatGPT on sentence level relations: A focus on temporal, causal, and discourse relations. In Findings of the Association for Computational Linguistics: EACL 2024, pages 684-721, St. Julian's, Malta. Association for Computational Linguistics. + +Chunkit Chan, Xin Liu, Jiayang Cheng, Zihan Li, Yangqiu Song, Ginny Wong, and Simon See. 2023. DiscoPrompt: Path prediction prompt tuning for implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 35-57, Toronto, Canada. Association for Computational Linguistics. + +Zujun Dou, Yu Hong, Yu Sun, and Guodong Zhou. 2021. CVAE-based re-anchoring for implicit discourse relation classification. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 1275–1283, Punta Cana, Dominican Republic. Association for Computational Linguistics. + +Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, and Angela Fan, et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783. 
+ +Chengming Hu, Xuan Li, Dan Liu, Haolun Wu, Xi Chen, Ju Wang, and Xue Liu. 2023. Teacher-student architecture for knowledge distillation: A survey. Preprint, arXiv:2308.04268. + +Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association for Computational Linguistics, 3:329-344. + +Congcong Jiang, Tieyun Qian, and Bing Liu. 2024. One general teacher for multi-data multi-task: A new knowledge distillation framework for discourse relation analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32:239-249. + +Feng Jiang, Yaxin Fan, Xiaomin Chu, Peifeng Li, and Qiaoming Zhu. 2021. Not just classification: Recognizing implicit discourse relation on joint modeling of classification and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2418-2431, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. + +Yuxin Jiang, Linhan Zhang, and Wei Wang. 2023. Global and local hierarchy-aware contrastive framework for implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8048-8064, Toronto, Canada. Association for Computational Linguistics. + +Wei Liu and Michael Strube. 2023. Annotation-inspired implicit discourse relation classification with auxiliary discourse connective generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15696-15712, Toronto, Canada. Association for Computational Linguistics. +Wei Liu, Stephen Wan, and Michael Strube. 2024. What causes the failure of explicit to implicit discourse relation recognition? Preprint, arXiv:2404.00999. +Wanqiu Long and Bonnie Webber. 2022. Facilitating contrastive learning of discourse relational senses by exploiting the hierarchy of sense relations. 
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10704-10716, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. Preprint, arXiv:1711.05101. +Yuxiang Lu, Yu Hong, Zhipang Wang, and Guodong Zhou. 2023. Enhancing reasoning capabilities by instruction learning and chain-of-thoughts for implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5634-5640, Singapore. Association for Computational Linguistics. +Kazumasa Omura, Fei Cheng, and Sadao Kurohashi. 2024. An empirical study of synthetic data generation for implicit discourse relation recognition. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 1073-1085, Torino, Italia. ELRA and ICCL. +Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA). +Chenxu Wang, Ping Jian, and Mu Huang. 2023. Prompt-based logical semantics enhancement for implicit discourse relation recognition. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 687-699, Singapore. Association for Computational Linguistics. +Bonnie Webber, Rashmi Prasad, Alan Lee, and Aravind Joshi. 2019. The Penn Discourse TreeBank 3.0 annotation manual. Philadelphia, University of Pennsylvania, 35:108. +Changxing Wu, Liuwen Cao, Yubin Ge, Yang Liu, Min Zhang, and Jinsong Su. 2022. A label dependence-aware sequence generation model for multi-level implicit discourse relation recognition.
Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):11486-11494. + +Hongyi Wu, Hao Zhou, Man Lan, Yuanbin Wu, and Yadong Zhang. 2023. Connective prediction for implicit discourse relation recognition via knowledge distillation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5908-5923, Toronto, Canada. Association for Computational Linguistics. +Wei Xiang, Bang Wang, Lu Dai, and Yijun Mo. 2022a. Encoding and fusing semantic connection and linguistic evidence for implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3247-3257, Dublin, Ireland. Association for Computational Linguistics. +Wei Xiang, Zhenglin Wang, Lu Dai, and Bang Wang. 2022b. ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 902-911, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. +Jing Xu, Ruifang He, Haodong Zhao, Huijie Wang, and Lei Zeng. 2023. Dual hierarchical contrastive learning for multi-level implicit discourse relation recognition. In Natural Language Processing and Chinese Computing, volume 14303, pages 55-66. Springer Nature Switzerland, Cham. +Frances Yung, Mansoor Ahmad, Merel Scholman, and Vera Demberg. 2024. Prompting implicit discourse relation annotation. Preprint, arXiv:2402.04918. +Lei Zeng, Ruifang He, Haowen Sun, Jing Xu, Chang Liu, and Bo Wang. 2024. Global and local hierarchical prompt tuning framework for multi-level implicit discourse relation recognition. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7760-7773, Torino, Italia. ELRA and ICCL. +Haodong Zhao, Ruifang He, Mengnan Xiao, and Jing Xu. 2023.
Infusing hierarchical guidance into prompt tuning: A parameter-efficient framework for multi-level implicit discourse relation recognition. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6477-6492, Toronto, Canada. Association for Computational Linguistics. +Hao Zhou, Man Lan, Yuanbin Wu, Yuefeng Chen, and Meirong Ma. 2022. Prompt-based connective prediction method for fine-grained implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3848-3858, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. + +
| Relation | PDTB2-Top | PDTB3-Top |
| --- | --- | --- |
| Comparison | 31.06 | 29.97 |
| Contingency | 35.21 | 33.45 |
| Expansion | 31.22 | 29.83 |
| Temporal | 31.11 | 28.53 |
Table 5: Optimal thresholds for the main relation types.

# A Threshold for Confidence Diagnosis

We set a threshold for each relation type in the taxonomies of IDRR. For example, four thresholds are provided for the four main relation types (i.e., Expansion, Temporal, Contingency, and Comparison). We adopt different thresholds because the varying token lengths of relation labels lead to unbalanced ranges of average confidence scores.

Consider the relation type $T$ as an example. To find the optimal threshold $\theta_T$ for $T$, we empirically observe the $T$-oriented IDRR performance curve obtained when different candidate values are used as thresholds. In this procedure, the generators $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$ are used to predict relations and confidence scores for all training instances that hold the relation $T$, and the accuracy $Acc_{T}$ is used as the performance metric:

$$
Acc_{T}(\check{\theta}) = \frac{n}{\left|D_{T}\right|} \tag{2}
$$

where $\check{\theta}$ is a candidate threshold sequentially sampled from the range of confidence scores in the training set, $n$ is the number of argument pairs that receive a positive relation prediction from $\mathcal{M}_{\beta}$ and $\mathcal{M}_{\lambda}$, and $|D_T|$ is the number of all argument pairs that hold the relation $T$.

We select the optimal threshold $\theta_T$ by maximizing the accuracy over all candidate thresholds:

$$
\theta_{T} = \underset{\check{\theta}_{T} \in \check{\theta}_{all}}{\arg\max}\left(Acc_{T}\left(\check{\theta}_{T}\right)\right) \tag{3}
$$

Figure 2 shows the curves of $Acc_{T}$ under different thresholds in PDTB 3.0.
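Under one possible reading of the procedure above (a pair counts as positive when $\mathcal{M}_{\beta}$'s confidence clears the threshold and its prediction is correct, and otherwise $\mathcal{M}_{\lambda}$'s prediction is used), the grid search behind Eq. (2) and (3) can be sketched as follows; the dictionary keys are illustrative names of our own, not the paper's:

```python
def acc_T(instances, theta):
    """Acc_T(theta), Eq. (2): fraction of T-instances counted as positive
    under candidate threshold theta.  One possible routing reading: trust
    M_beta's prediction above the threshold, fall back to M_lambda below it."""
    n = sum(1 for ex in instances
            if (ex["pred_beta"] == ex["gold"]
                if ex["conf_beta"] >= theta
                else ex["pred_lambda"] == ex["gold"]))
    return n / len(instances)

def optimal_threshold(instances, candidates):
    """Eq. (3): pick the candidate threshold that maximizes Acc_T."""
    return max(candidates, key=lambda t: acc_T(instances, t))
```

With this routing reading the accuracy curve over candidate thresholds is generally non-monotonic, which is consistent with the peaked curves in Figure 2.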
We also present the specific values of the final thresholds $\theta$ in Tables 5, 6 and 7: Table 5 provides the thresholds for the main relation types (labeled as "Top"), while Tables 6 and 7 give the thresholds for the relation senses in the secondary-level taxonomies (labeled as "Second").

![](images/7c10c5dd8ad4ac4b8b5849714b60fe67efb0ff5981a3acf41047fa0123ade228.jpg)

![](images/10dbf477cea0ce9e63ee873d7aa6f37a22102996300d08779075f87dde046acb.jpg)
(a) Comparison

![](images/0503a3b6e0c78764fbbaf3c419d05237d6d2f2768fe82d17f2f80786122bd2fc.jpg)
(b) Contingency

![](images/9fb621be197f82cbb1287dd9b1e83a7436c49ca1ef2db6913e20009e8375d9fd.jpg)
(c) Expansion
(d) Temporal
Figure 2: $Acc_{T}$ on the training dataset with varying thresholds. The final selected thresholds are marked with red dots, which correspond to the highest $Acc_{T}$.
| Relation | PDTB2-Sec |
| --- | --- |
| Comparison.Concession | 28.10 |
| Comparison.Contrast | 29.10 |
| Contingency.Cause | 30.79 |
| Contingency.Pragmatic cause | 29.05 |
| Expansion.Alternative | 31.68 |
| Expansion.Conjunction | 28.02 |
| Expansion.Instantiation | 30.80 |
| Expansion.List | 28.48 |
| Expansion.Restatement | 27.56 |
| Temporal.Asynchronous | 28.12 |
| Temporal.Synchrony | 28.25 |
+ +Table 6: Thresholds for all relation senses of PDTB 2.0. + +
| Relation | PDTB3-Sec |
| --- | --- |
| Comparison.Concession | 28.01 |
| Comparison.Contrast | 30.75 |
| Contingency.Cause | 32.01 |
| Contingency.Cause+Belief | 29.41 |
| Contingency.Condition | 30.89 |
| Contingency.Purpose | 30.31 |
| Expansion.Conjunction | 27.16 |
| Expansion.Equivalence | 28.31 |
| Expansion.Instantiation | 30.05 |
| Expansion.Level-of-detail | 28.44 |
| Expansion.Manner | 29.08 |
| Expansion.Substitution | 27.96 |
| Temporal.Asynchronous | 28.03 |
| Temporal.Synchronous | 29.65 |

Table 7: Thresholds for all relation senses of PDTB 3.0.

# B Statistics of PDTB datasets

We use the benchmark datasets PDTB 2.0 and PDTB 3.0 in our experiments, and follow common practice (Ji and Eisenstein, 2015) to divide each of them into training (Train), validation (Dev) and test sets. The statistics of the datasets are shown in Table 8 and Table 9.

# C Related Work

Recent research has demonstrated that PLMs outperform traditional machine learning methods on the IDRR task. Consequently, many studies have explored incorporating novel modules into the encoder-only transformer architecture to obtain better representations and extract more comprehensive features from the input. The added modules include the Conditional Variational AutoEncoder (Dou et al., 2021), Graph Convolutional Network
| Relation | Train | Dev | Test |
| --- | --- | --- | --- |
| Comparison | 1,894 | 191 | 146 |
| Contingency | 3,281 | 287 | 276 |
| Expansion | 6,792 | 651 | 556 |
| Temporal | 665 | 54 | 68 |
| Total | 12,632 | 1,183 | 1,046 |
+ +Table 8: Data statistics of PDTB 2.0. + +
| Relation | Train | Dev | Test |
| --- | --- | --- | --- |
| Comparison | 1,830 | 190 | 154 |
| Contingency | 5,896 | 579 | 529 |
| Expansion | 7,941 | 748 | 643 |
| Temporal | 1,418 | 136 | 148 |
| Total | 17,085 | 1,653 | 1,474 |
Table 9: Data statistics of PDTB 3.0.

(Wu et al., 2022), Gated Recurrent Unit (Wu et al., 2022), and attention mechanism (Wu et al., 2022; Xiang et al., 2022a; Jiang et al., 2023).

On the other hand, some studies applied new training strategies such as contrastive learning (Long and Webber, 2022; Jiang et al., 2023; Xu et al., 2023; Zeng et al., 2024), knowledge distillation (Wu et al., 2023; Jiang et al., 2024), and extra pretraining (Wang et al., 2023).

Notably, Zhou et al. (2022) proposed a prompt-based approach that involves connective prediction and answer mapping. Their work paved the way for better leveraging connectives, such as enhancing the input with predicted connectives (Liu and Strube, 2023; Liu et al., 2024) or mapping the answers by connectives directly (Xiang et al., 2022b; Zhou et al., 2022; Wu et al., 2023; Wang et al., 2023; Zeng et al., 2024). Additionally, several studies investigated the potential of multi-level hierarchical information for IDRR. These works explored modeling the relationships between labels and fusing the global and local information within the multi-level hierarchical structure (Jiang et al., 2023; Xu et al., 2023; Zhao et al., 2023).

However, the potential of generative models and LLMs has been relatively underexplored for IDRR (Chan et al., 2024; Omura et al., 2024; Yung et al., 2024). Therefore, our work aims to address this gap by investigating how to effectively utilize LLMs' reasoning capabilities and incorporate additional relevant information into the input.
| Method | Backbone Model | PDTB2-Sec F1 | PDTB2-Sec Acc | PDTB3-Sec F1 | PDTB3-Sec Acc |
| --- | --- | --- | --- | --- | --- |
| ChatGPT (Chan et al., 2024) | GPT-3.5-turbo | 9.27 | 15.59 | - | - |
| PIDRA (Yung et al., 2024) | GPT4 | - | - | 25.77 | 36.98 |
| FCL (Long and Webber, 2022) | RoBERTa-base | 49.66 | 61.69 | 57.62 | 64.68 |
| CP-KD (Wu et al., 2023) | RoBERTa-base | 44.77 | 64.00 | 50.12 | 66.21 |
| CP-KD (Wu et al., 2023) | RoBERTa-large | 47.78 | 66.41 | 52.16 | 67.84 |
| SCIDER (Cai et al., 2024) | RoBERTa-base | - | 59.62 | - | - |
| OTMT (Jiang et al., 2024) | XLNet-large | - | 61.06 | - | - |
| CG-T5 (Jiang et al., 2021) | T5-base | 37.76 | - | - | - |
| DiscoPrompt (Chan et al., 2023) | T5-base | 43.68 | 61.02 | - | - |
| DiscoPrompt (Chan et al., 2023) | T5-large | 49.03 | 64.58 | - | - |
| SCDN (ours) | LLaMA3-8B-Instruct | 46.38 | 62.46 | 55.04 | 64.35 |
Table 10: Performance of the secondary-level classification on PDTB 2.0/3.0.

# D Performance on Relation Senses

Besides the relation types in the main-level taxonomy of PDTB, we additionally evaluate our models on the secondary-level taxonomy, which consists of fine-grained relation senses. Table 10 shows the Macro $F1$-scores and accuracies of our models, as well as those of previous work that reported performance on the secondary-level taxonomy.

Our SCDN achieves promising performance, outperforming the T5-base based generative models. Nevertheless, SCDN shows an obvious performance gap compared to the T5-large based DiscoPrompt (Chan et al., 2023). DiscoPrompt is a strong IDRR model that learns the reliance between relations and implicit connectives during training. The implicit connectives are informative when used as guidance during training, which allows T5-large to infer relations from additional perspectives. By contrast, we did not use implicit connectives as guidance when fine-tuning SCDN. Besides, as shown in Tables 11 and 12, some relation senses in the secondary-level taxonomy have much less training data than others in PDTB 2.0 and 3.0. More seriously, our subtext generator fails to produce a subtext for some arguments of these relation senses. This results in insufficient training on these relation senses when we fine-tune our in-subtext IDRR model and SCDN, and causes severe performance degradation.
| Relation | Number |
| --- | --- |
| Comparison.Concession | 180 |
| Comparison.Contrast | 1,566 |
| Contingency.Cause | 3,227 |
| Contingency.Pragmatic cause | 51 |
| Expansion.Alternative | 146 |
| Expansion.Conjunction | 2,805 |
| Expansion.Instantiation | 1,061 |
| Expansion.List | 330 |
| Expansion.Restatement | 2,376 |
| Temporal.Asynchronous | 517 |
| Temporal.Synchrony | 147 |
| Total | 12,406 |
+ +Table 11: Counts of secondary level relation types on the training set of PDTB 2.0. + +# E Case Study + +To qualitatively assess the impact of subtexts on the reasoning ability of LLaMA3 for IDRR, we manually examine the test data as well as predictions of the Out-of-subtext and In-subtext models. The out-of-subtext IDRR model $\mathcal{M}_{\beta}$ predicts relations conditioned only on arguments, while the in-subtext IDRR model $\mathcal{M}_{\lambda}$ combines subtext and arguments, and uses them as clues for relation reasoning. The following analysis showcases the benefits associated with the use of subtexts: + +# Example 1 + +Argument 1: I personally don't enjoy seeing players who I remember vividly from their playing days running about and being gallant about their deficiencies. + +
| Relation | Number |
| --- | --- |
| Comparison.Concession | 1,170 |
| Comparison.Contrast | 639 |
| Contingency.Cause | 4,483 |
| Contingency.Cause+Belief | 157 |
| Contingency.Condition | 154 |
| Contingency.Purpose | 1,102 |
| Expansion.Conjunction | 3,584 |
| Expansion.Equivalence | 252 |
| Expansion.Instantiation | 1,117 |
| Expansion.Level-of-detail | 2,493 |
| Expansion.Manner | 191 |
| Expansion.Substitution | 278 |
| Temporal.Asynchronous | 985 |
| Temporal.Synchronous | 433 |
| Total | 17,038 |
Table 12: Counts of secondary level relation types on the training set of PDTB 3.0.

Argument 2: People should be allowed to remember players as they were.

Subtext: The implicit meaning between the arguments is that people should be allowed to remember and cherish the memories of players from their playing days without seeing them participate in activities that may diminish their legacy or reputation. It suggests a desire to preserve the image and impact of these players in the eyes of the public.

Ground-truth: Expansion

Out-of-subtext model: Comparison

In-subtext model: Expansion

# Example 2

Argument 1: For a family with taxable income of \$50,000 to pay the same 28% incremental tax rate as a family with taxable income of \$250,000.

Argument 2: The 33% bracket should apply to all income over the applicable level, not just the 5% rate adjustment amount.

Subtext: The implicit meaning between these arguments is that both are advocating for a more progressive tax system where higher-income individuals pay a higher percentage of their income in taxes. Argument 1 suggests that the current tax system is unfair because a family with a much lower income is paying the same percentage of taxes as a family with a much higher income. Argument 2 suggests that the tax brackets should be adjusted to ensure that higher-income individuals pay a higher tax rate on all of their income, not just the amount above a certain threshold. Both arguments imply that the current tax system is not equitable and needs to be restructured to address income inequality.

Ground-truth: Comparison

Out-of-subtext model: Expansion

In-subtext model: Comparison

# Example 3

Argument 1: He believes in what he plays, and he plays superbly.

Argument 2: His recent appearance at the Metropolitan Museum, dubbed "A Musical Odyssey," was a case in point.
+ +Subtext: The implicit meaning between these arguments is that the musician's belief in his music is evident in his exceptional performance at the Metropolitan Museum, showcasing his talent and skill as a musician. + +Ground-truth: Expansion.Instantiation + +Out-of-subtext model: Expansion.Restatement + +In-subtext model: Expansion.Instantiation \ No newline at end of file diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/images.zip b/ACL/2025/Using Subtext to Enhance Generative IDRR/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c0d122618fc627c77d786839295ee666cabd871 --- /dev/null +++ b/ACL/2025/Using Subtext to Enhance Generative IDRR/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ce85795466144155783aae85d8aa42fe3514b655a5f41e6bf2a4ebc6ae6b106 +size 640602 diff --git a/ACL/2025/Using Subtext to Enhance Generative IDRR/layout.json b/ACL/2025/Using Subtext to Enhance Generative IDRR/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..479e8ca11a387c3c69fb497654fc02154a78b252 --- /dev/null +++ b/ACL/2025/Using Subtext to Enhance Generative IDRR/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42a0352cbcfe790bb2347b05a79827f8bf462f17d9be8ad3595da030253ed32e +size 381709 diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_content_list.json b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cf91d561c55929fe373c001f2a2f5a5c267ab192 --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:d1d25b5fab22efc6b3e40210bec55fb44a5cf3fc72f955deb45c382c78e01eef +size 92036 diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_model.json b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4262e97ebd3ee12a09fae7763b64caa5194ed828 --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff9fccfa17535b08ab15f440eb346fe317c11691b62c5ae171f689fefd942e39 +size 110074 diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_origin.pdf b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e379e782c1e289f73d36117da9fe1dafca3051bf --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74f9d10b893ea3237af7b9ef92a4d0d7cfa6f461355e527aeee9b7b7abfc0899 +size 779948 diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/full.md b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a0c36020d30d690868fd3e8601156814d05fab9d --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/full.md @@ -0,0 +1,457 @@ +# WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More Challenging + 
+Ahmed Elhady$^{1}$ Eneko Agirre$^{1}$ Mikel Artetxe$^{1,2}$ + +$^{1}$ HiTZ Center, University of the Basque Country (UPV/EHU) $^{2}$ Reka AI + +{ahmed.salemmmohamed,e.agirre,mikel.artetxe}@ehu.eus + +# Abstract + +We introduce WiCkeD, a simple method to increase the complexity of existing multiple-choice benchmarks by randomly replacing a choice with "None of the above", a method often used in educational tests. We show that WiCkeD can be automatically applied to any existing benchmark, making it more challenging. We apply WiCkeD to 7 popular benchmarks and use it to evaluate 18 open-weight LLMs. The performance of the models drops 12.1 points on average with respect to the original versions of the datasets. When using chain-of-thought on 3 MMLU datasets, the performance drop for the WiCkeD variant is similar to the one observed when using the LLMs directly, showing that WiCkeD is also challenging for models with enhanced reasoning abilities. WiCkeD also uncovers that some models are more sensitive to the extra reasoning required, providing additional information with respect to the original benchmarks. We release our code and data at https://github.com/ahmedselhady/wicked-benchmarks. + +# 1 Introduction + +Multiple choice question (MCQ) benchmarks are widely used to evaluate Large Language Models (LLMs). This format consists of a question and a limited set of options, which include a correct (or best) answer and several distractors that are either incorrect or less appropriate (see Figure 1). There are various MCQ datasets that focus on different capabilities, including factual knowledge and reasoning as in MMLU (Hendrycks et al., 2021) and ARC-Challenge (Clark et al., 2018), common sense as in Commonsense-QA (Talmor et al., 2019), truthfulness as in TruthfulQA (Lin et al., 2022), and domain-specific knowledge (Alonso et al., 2024; Hosseini et al., 2024).
Unfortunately, most of these benchmarks quickly became saturated in the recent era dominated by LLMs, motivating harder datasets to better gauge the abilities of newer models. However, developing benchmarks is a laborious and expensive process.

Motivated by this, several recent works have explored strategies to make existing benchmarks harder, which can serve as an alternative to creating new benchmarks from scratch. For example, Gema et al. (2024) identified erroneous questions in the MMLU benchmark, and re-annotated 3k questions to be harder and more robust. Similarly, Wang et al. (2024) presented MMLU-Pro, a harder version of the MMLU benchmark that replaces noisy questions with harder ones and expands the number of distractors to include more plausible yet incorrect ones. While increasing the number of distractors reduces the probability of correct guesses by chance, creating plausible and coherent distractors is challenging and often requires manual verification (McIntosh et al., 2024).

In this work, we propose a simple yet effective method to make existing benchmarks more challenging without the need to add distractors. Namely, we present the Wild-Card Distractor (WiCkeD), which creates a variant of any existing MCQ benchmark by keeping the question unchanged, and randomly replacing one of the choices with a wild-card distractor, None of the above (see Figure 1). We create WiCkeD variants of 7 popular benchmarks, and use them to evaluate 18 open-weight LLMs varying in size, model family, and training recipe. The WiCkeD datasets suffer a performance drop of 7.2-19.7 points with respect to the original datasets, depending on the model being evaluated. Using chain-of-thought does not prevent the drop (1.4-14.6), showing that WiCkeD can be used to assess reasoning capabilities. The large variance across models shows that WiCkeD is not only challenging, but it also uncovers differences in model capabilities that are not captured by the original benchmarks.
![](images/59a6cc369c6f3cb65b5cee789f9111a99eacb305abe7498ed64605ff6c109ae8.jpg)
Figure 1: Two samples from MMLU-Pro (left) and its WiCkeD variant (right), where Hydrogen and Centrifugal were removed. Correct answers in bold. Llama-3.1 8B correctly answers both original questions but fails on the WiCkeD variant for the second question. The probability distribution of the model for each answer is also shown.

![](images/2a96c6350a26b0bd424931e4d877847ed62d21868544709718053d1196755c12.jpg)

# 2 Related Work

# 2.1 Challenges in LLM MCQ Benchmarks

Several works raised concerns about the effectiveness of MCQ benchmarks in LLM assessment. For example, Balepur et al. (2024) showed that some LLMs can answer MCQs using only the answer choices, without seeing the questions, and perform well above baselines. Furthermore, other works suggested that LLMs are biased towards certain answer keys (A/B/C/D) due to unbalanced prior probabilities rather than actual knowledge (Myrzakhan et al., 2024; Clark et al., 2018).

Another line of research attributes LLM hallucinations to models being unable to identify when they lack sufficient knowledge about the subject matter (Li et al., 2024; Ji et al., 2022). Nonetheless, current evaluation benchmarks do not assess this capability effectively. A third line argues that the performance of models is overestimated due to contamination (i.e., the model is trained on the test sets of the benchmarks), leading to wrong conclusions (Golchin and Surdeanu, 2024; Dong et al., 2024; Sainz et al., 2023). We view our work as a step towards the efficient evaluation of LLMs that avoids spurious correlations and accounts for knowledge and reasoning gaps.
# 2.2 None of the Above in Educational Tests

Multiple-choice questions (MCQs) are effective assessments when they include plausible distractors, as they encourage deeper processing: examinees must think not only about why a given choice is correct, but also about why the other choices are wrong, which improves knowledge recall (Little et al., 2019; Little and Bjork, 2015). The use of None of the above as a distractor in MCQs is an area of research and debate. It can provide unique insight into the understanding of the examinees and potentially differentiate their abilities (David DiBattista and Fortuna, 2014; Dochy et al., 2001). However, None of the above can affect the confidence of the examinee, leading them to avoid selecting None of the above as the correct answer, even when it is true (Little, 2023; Odegard and Koen, 2007). Nevertheless, incorporating None of the above into practice tests can enhance the learning process by encouraging deeper engagement with the material (David DiBattista and Fortuna, 2014; Pezeshkpour and Hruschka, 2024; Zheng et al., 2024).

# 3 Methodology

We propose a method to automatically create a more challenging version of any existing MCQ benchmark without requiring any manual annotation. The difficulty of MCQs has been linked to the reasoning necessary to discriminate between competing options (McIntosh et al., 2024; Wang et al., 2024). We hypothesize that detecting the absence of the correct answer within the provided options is more challenging than selecting the correct one. To that end, we propose to add a wild-card choice, None of the above. Note that adding None of the above as an additional option would not make sense, as the correct option would always remain among the choices; we thus propose to replace one of the options instead.
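The replacement scheme just described can be sketched as follows; this is a minimal illustration under our own assumptions about the data layout (a question string, a list of choices, and the index of the correct one), not the authors' implementation:

```python
import random

NOTA = "None of the above"

def wicked(question, choices, answer_idx, rng=random):
    """Create a WiCkeD variant of one MCQ example: replace one
    uniformly sampled option with the wild-card distractor."""
    omit = rng.randrange(len(choices))                  # option to remove
    new_choices = [c for i, c in enumerate(choices) if i != omit]
    new_choices.append(NOTA)                            # wild card goes last
    if omit == answer_idx:
        # The correct answer was removed, so NOTA becomes the gold option.
        new_answer = len(new_choices) - 1
    else:
        # A distractor was removed; the gold option is unchanged, but its
        # index shifts left by one if it came after the removed option.
        new_answer = answer_idx - (1 if omit < answer_idx else 0)
    return question, new_choices, new_answer
```

Since an option is replaced rather than appended, the number of options stays at $N$ and the chance of a correct random guess is unchanged; only the extra reasoning needed to rule out the wild card is added.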
# 3.1 The WiCkeD Algorithm

Given a benchmark that consists of $M$ examples where each has $N$ choices (one correct answer and $N - 1$ distractors), we uniformly sample one option to be omitted, and append the wildcard option None of the above. When the correct option is replaced, the new correct option is None of the above. When a distractor option is replaced, the correct option continues to be correct. Figure 1 shows the result of applying WiCkeD to two examples. The goal is to produce a variant for each benchmark that contains the same number of $M$ examples.

Question: Which definition best describes media convergence?

A) The conglomeration of media outlets into large corporations.

B) The many differing views and cultures expressed in modern media.

C) Information being delivered in different formats via various digital channels.

D) Merging of different forms of media into a unified digital format.

Question: Which definition best describes media convergence?

A) The conglomeration of media outlets into large corporations.
B) The many differing views and cultures expressed in modern media.
C) Information being delivered in different formats via various digital channels.
D) None of the above.

Figure 2: Applying WiCkeD on a single best answer (SBA) example (best answer D, second best answer A) would lead to an incoherent WiCkeD variant (incorrectly having None of the above as the gold correct answer instead of A). We thus copy SBA examples verbatim, see § 3.2 for details.

# 3.2 Coherence of WiCkeD Examples

The above algorithm does not always produce coherent examples. In some cases, there is more than one correct candidate, but only one of them is the most appropriate (see Figure 2, where D is the best answer and A is the second best answer). With the above procedure, when the replaced option is the correct one (e.g.
option D in the figure), the WiCkeD variant would add None of the above and take this option as the correct one. However, this would be incoherent, because having removed D, A becomes the next best option. We call these examples Single Best Answer (SBA) as opposed to Single Correct Answer (SCA, where the distractors are all incorrect). As we want to keep the same number of examples we avoid adding None of the above to SBA examples and copy them unchanged to the WiCkeD variant of the benchmark. $^1$ + +In order to train an example classifier to detect SBA examples, we selected four representative benchmarks (MMLU, MMLU-Pro, TruthfulQA and Commonsense-QA), sampled 4000 examples, and split them into evaluation $(25\%)$ and train $(75\%)$ . We used GPT-4o-mini to automatically label the examples as SBA or SCA, and further annotated the evaluation split manually. Given the + +cost and slow speed of GPT-4o-mini, we used the synthetic labels to train a classifier based on BERT $^2$ (Devlin et al., 2019). + +The recall on SBA examples for the classifier is over $98.9\%$ , showing that we are able to detect nearly all SBA examples, and would thus have $1.1\%$ noisy WiCkeD examples (that is, examples in the benchmark that have None of the above as the correct option even if a correct option exists). See Appendix A for more details about the training and evaluation procedure. + +# 4 Experimental Setup + +# 4.1 Benchmarks + +We apply WiCkeD to six popular MCQ benchmarks that assess the knowledge, language comprehension, reasoning, and truthfulness of LLMs: MMLU, MMLU-Pro, MMLU-Redux, GPQA, CommonsenseQA, Truthful-QA, and Arcchallenge. To ensure reproducibility, we use EvalHarness (Gao et al., 2024). Given that the selection of the option to be replaced is random, we generate five WiCkeD variants for each benchmark, and report mean and standard deviation. 
+ +Regarding the amount of SBA examples, MMLU, MMLU-Redux and MMLU-pro have the largest amount ( $\sim 20\%$ ), with the rest of the benchmarks having less than $5\%$ (see Appendix A). SBA examples are copied verbatim to the WiCkeD variants, but the fact that at least $80\%$ of the examples are effectively altered makes the WiCkeD variants significantly more challenging, as we will see. Other benchmarks have less than $5\%$ SBAs; we also leave them unchanged. + +# 4.2 Models + +We evaluate WiCkeD on 18 open-weight models covering different families and sizes. Namely, we evaluate the base and instruction-tuned models of Qwen2.5 7B, 14B and 72B (Qwen et al., 2025), Llama3.1 8B and 70B (Grattafori et al., 2024), Gemma2 9B and 27B (Riviere et al., 2024), and Mistral-7B (Jiang et al., 2023). We also selected two DeepSeek-R1 models for their improved reasoning capabilities: distill-Llama3.1-8B and distill-Qwen7 (DeepSeek-AI et al., 2025). + +The LLM models are evaluated on the benchmarks following the standard multiple-choice prompting procedure (Robinson et al., 2023), see + +
| Model | Size | IT | Original | WiCkeD | Δ |
|---|---|---|---|---|---|
| DS-R1-Llama | 8B | - | 56.6 | 48.6 | -7.9 ±1.1% |
| DS-R1-Qwen | 7B | - | 60.8 | 53.4 | -7.3 ±1.6% |
| Llama-3.1 | 8B | - | 61.4 | 52.2 | -9.2 ±1.7% |
| | 8B | ✓ | 66.0 | 55.0 | -11.0 ±0.9% |
| | 70B | - | 76.8 | 67.0 | -9.8 ±2.1% |
| | 70B | ✓ | 77.1 | 64.5 | -12.6 ±1.3% |
| Mistral | 7B | - | 59.8 | 46.5 | -13.2 ±1.2% |
| | 7B | ✓ | 59.0 | 47.2 | -11.8 ±1.1% |
| Qwen-2.5 | 7B | - | 74.7 | 54.9 | -19.7 ±1.5% |
| | 7B | ✓ | 73.5 | 59.0 | -14.5 ±1.3% |
| | 14B | - | 78.9 | 66.3 | -12.6 ±2.1% |
| | 14B | ✓ | 78.9 | 66.6 | -12.3 ±1.8% |
| | 72B | - | 84.6 | 72.6 | -12.0 ±0.9% |
| | 72B | ✓ | 82.6 | 69.3 | -13.3 ±1.0% |
| Gemma-2 | 9B | - | 67.3 | 56.3 | -10.9 ±1.2% |
| | 9B | ✓ | 73.3 | 57.6 | -15.7 ±1.2% |
| | 27B | - | 68.0 | 54.6 | -13.4 ±2.0% |
| | 27B | ✓ | 74.8 | 61.9 | -12.9 ±2.3% |
| Average | | | 70.8 | 58.5 | -12.2 ±1.5% |
Appendix C. We set the number of few-shot examples to five, in order to ensure that in most cases there is at least one example where None of the above is the correct option.

In addition, we also evaluate the models using zero-shot chain-of-thought (CoT) prompting on four benchmarks commonly used to assess the reasoning capabilities of LLMs: MMLU, MMLU-Pro, MMLU-Redux, and GPQA. We also include three state-of-the-art closed-source models: OpenAI GPT-4o and GPT-4o-mini (OpenAI et al., 2024) and Gemini Flash 2.0.$^3$ We set the maximum generation length to 4096 tokens, unless limited by the model itself.

# 5 Results and Discussion

# 5.1 Main Results

Table 1 shows the mean accuracy of the models on the original and WiCkeD benchmarks, with a significant drop in performance. Qwen2.5-7B suffers the largest degradation (19.73%), while its DeepSeek-R1 distilled version (DeepSeek-R1-Qwen7B) suffers the least (7.35%). This suggests that models with better reasoning capabilities, like R1, are better equipped to deal with the added complexity.

Notably, the WiCkeD variants shuffle the ranking of models. For example, the Qwen2.5-7B and Qwen2.5-7B-IT models originally performed

Table 1: Average performance on original and WiCkeD variants of the six benchmarks. IT: instruction-tuned. $\Delta$: degradation from original performance.
| Size Group | Original | WiCkeD | Δ |
|---|---|---|---|
| ≤10B | 63.4 | 51.9 | -11.5 |
| | 67.9 | 54.7 | -13.2 |
| 10-70B | 73.5 | 60.5 | -13.0 |
| | 76.8 | 64.2 | -12.6 |
| ≥70B | 80.7 | 69.8 | -10.9 |
| | 79.8 | 66.9 | -12.9 |
Table 2: Average performance per size group on original and WiCkeD variants of the six benchmarks using direct prompting. The degradation $(\Delta)$ does not decrease with model size scaling.

close to the Llama-3.1-70B model. However, on the WiCkeD variants, they lag behind it by $12.1\%$ and $8\%$, respectively. Similar patterns can be seen in Gemma-2-9B-IT and Gemma-2-27B-IT, which lag behind Llama-3.1-70B by $9.5\%$ and $5.3\%$, respectively. Qwen2.5-72B and Llama-3.1-70B are the models that perform best on WiCkeD. There is no clear advantage from instruction-tuning, as results vary depending on the model family. Furthermore, the degradation does not decrease with model size scaling, as shown in Table 2.

# 5.2 Chain-of-Thought Results

Table 3 shows the performance of the models$^4$ on the MMLU, MMLU-Pro, MMLU-Redux, and GPQA WiCkeD benchmarks. The drop on these benchmarks without CoT (Direct columns in the table) is lower than on the other three benchmarks, but applying CoT does not reduce the drop on the WiCkeD variants, which stays above $5\%$. This is remarkable given that CoT is very effective at improving results on MMLU and related benchmarks. Instruction-tuned models experience significantly less degradation than their base models, especially when using CoT (see Appendix B for additional details). Notably, the DeepSeek-R1 distilled models, Qwen7B and Llama3.1-8B, suffer around $2\%$ each. Similarly, instruction-tuned Qwen2.5 7B and 14B suffer less than $2\%$. We hypothesize this is due to their enhanced reasoning capabilities.

As shown in Table 4, popular closed-source models also suffer a relatively large drop on the WiCkeD variants (3.9-8.2%). This suggests that state-of-the-art commercial models are also susceptible to the phenomenon studied in our work.
| Model | Size | IT | Direct WiCkeD | Direct Δ | CoT WiCkeD | CoT Δ |
|---|---|---|---|---|---|---|
| DS-R1-Llama | 8B | - | 30.3 | -4.1 | 80.1 | -2.0 |
| DS-R1-Qwen | 7B | - | 30.6 | -4.3 | 74.9 | -2.5 |
| Llama-3.1 | 8B | - | 39.7 | -3.2 | 53.9 | -5.8 |
| | 8B | ✓ | 43.6 | -2.7 | 57.2 | -3.4 |
| Mistral | 7B | - | 35.9 | -3.4 | 36.3 | -11.6 |
| | 7B | ✓ | 33.5 | -5.7 | 43.8 | -4.9 |
| Qwen-2.5 | 7B | - | 45.5 | -6.9 | 43.0 | -14.6 |
| | 7B | ✓ | 47.1 | -5.3 | 55.4 | -1.7 |
| | 14B | - | 55.6 | -3.6 | 61.5 | -3.97 |
| | 14B | ✓ | 56.7 | -3.4 | 64.0 | -1.4 |
| Gemma-2 | 9B | - | 36.1 | -12.2 | 41.2 | -8.9 |
| | 9B | ✓ | 44.1 | -9.3 | 56.3 | -4.4 |
| | 27B | - | 36.1 | -10.8 | 59.2 | -4.1 |
| | 27B | ✓ | 51.3 | -3.8 | 60.3 | -3.8 |
| Avg | | | 41.9 | -5.6 | 56.3 | -5.2 |
# 6 Conclusion

In this paper, we introduced a simple automatic method to create more challenging variants of an existing MCQ benchmark. The large drop in the results shows that WiCkeD challenges the knowledge and reasoning of LLMs, as they need to identify the absence of the correct answer, even when using CoT. We showed that models with better reasoning capabilities suffer less on WiCkeD, as illustrated by the comparison between the original Qwen-7B and its DeepSeek-R1 distilled version. We see WiCkeD as a useful addition to the evaluation of LLMs, helping to avoid spurious correlations and to probe reasoning and knowledge gaps. A deeper look into why some models are more sensitive to WiCkeD than others can provide significant insights about uncovered limitations. We release all the code and data under open licenses.

# Limitations

We manually confirmed the applicability of WiCkeD on some popular multiple-choice benchmarks whose questions can be categorized into SBAs and SCAs. However, for other benchmarks, WiCkeD might need further verification.

# Acknowledgments

This work has been partially supported by the Basque Government (Research group funding IT1805-22). Ahmed Elhady holds a PhD grant supported by the Basque Government (IKER-GAITU project).

Table 3: Performance on WiCkeD variants for MMLU, MMLU-Pro, and MMLU-Redux with and without CoT. IT: instruction-tuned. $\Delta$: degradation from the original benchmark.
| Model | WiCkeD | Δ |
|---|---|---|
| GPT-4o | 81.4 | -3.9 |
| GPT-4o-mini | 74.6 | -8.2 |
| Gemini Flash 2.0 | 80.3 | -7.8 |
| Avg | 78.8 | -6.6 |
Table 4: Performance of closed-source models on WiCkeD variants for MMLU, MMLU-Pro, and MMLU-Redux using CoT. $\Delta$: degradation from the original benchmark.

# References

Inigo Alonso, Maite Oronoz, and Rodrigo Agerri. 2024. Medexpqa: Multilingual benchmarking of large language models for medical question answering. Artificial Intelligence in Medicine, 155:102938.
Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger. 2024. Artifacts or abduction: How do llms answer multiple-choice questions without the question? Preprint, arXiv:2402.12483.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. Preprint, arXiv:1803.05457.
David DiBattista, Jo-Anne Sinnige-Egger, and Glenda Fortuna. 2014. The "none of the above" option in multiple-choice testing: An experimental study. The Journal of Experimental Education, 82(2):168-183.
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, arXiv:2501.12948.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. Preprint, arXiv:1810.04805.
Filip Dochy, George Moerkerke, Erik De Corte, and Mien Segers. 2001. The assessment of quantitative problem-solving skills with "none of the above"-items (nota items). European Journal of Psychology of Education, 16:163-177.
Yihong Dong, Xue Jiang, Huanyu Liu, Zhi Jin, Bin Gu, Mengfei Yang, and Ge Li. 2024. Generalization or memorization: Data contamination and trustworthy evaluation for large language models.
In *Findings of the Association for Computational Linguistics: ACL* 2024, pages 12039–12050, Bangkok, Thailand. Association for Computational Linguistics. +Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, + +Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, and 5 others. 2024. A framework for few-shot language model evaluation. +Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, Claire Barale, Robert McHardy, Joshua Harris, Jean Kaddour, Emile van Krieken, and Pasquale Minervini. 2024. Are we done with mmlu? Preprint, arXiv:2406.04127. +Shahriar Golchin and Mihai Surdeanu. 2024. Time travel in LLMs: Tracing data contamination in large language models. In The Twelfth International Conference on Learning Representations. +Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad AlDahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783. +Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Preprint, arXiv:2009.03300. +Pedram Hosseini, Jessica M. Sin, Bing Ren, Bryceton G. Thomas, Elnaz Nouri, Ali Farahanchi, and Saeed Hassanpour. 2024. A benchmark for long-form medical question answering. *Preprint*, arXiv:2411.09834. +Yunjie Ji, Liangyu Chen, Chenxiao Dou, Baochang Ma, and Xiangang Li. 2022. To answer or not to answer? improving machine reading comprehension model with span-based contrastive learning. 
In *Findings of the Association for Computational Linguistics: NAACL* 2022, pages 1292-1300, Seattle, United States. Association for Computational Linguistics. +Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Preprint, arXiv:2310.06825. +Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, and Tat-Seng Chua. 2024. Think twice before trusting: Self-detection for large language models through comprehensive answer reflection. Preprint, arXiv:2403.09972. +Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Truthfulqa: Measuring how models mimic human falsehoods. Preprint, arXiv:2109.07958. + +Jeri L Little. 2023. Does using none-of-the-above (nota) hurt students' confidence? Journal of Intelligence, 11(8):157. +Jeri L Little and Elizabeth Ligon Bjork. 2015. Optimizing multiple-choice tests as tools for learning. Memory & cognition, 43:14-26. +Jeri L Little, Elise A Frickey, and Alexandra K Fung. 2019. The role of retrieval in answering multiple-choice questions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(8):1473. +Timothy R. McIntosh, Teo Susnjak, Nalin Arachchilage, Tong Liu, Paul Watters, and Malka N. Halgamuge. 2024. Inadequacies of large language model benchmarks in the era of generative artificial intelligence. Preprint, arXiv:2402.09880. +Aidar Myrzakhan, Sondos Mahmoud Bsharat, and Zhiqiang Shen. 2024. Open-llm-leaderboard: From multi-choice to open-style questions for llms evaluation, benchmark, and arena. Preprint, arXiv:2406.07545. +Timothy N Odegard and Joshua D Koen. 2007. "none of the above" as a correct and incorrect alternative on a multiple-choice test: Implications for the testing effect. Memory, 15(8):873-885. 
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgium, and 262 others. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774.
Pouya Pezeshkpour and Estevam Hruschka. 2024. Large language models sensitivity to the order of options in multiple-choice questions. In *Findings of the Association for Computational Linguistics: NAACL* 2024, pages 2006–2017, Mexico City, Mexico. Association for Computational Linguistics.
Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, and 25 others. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.
Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, and 178 others. 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.
Joshua Robinson, Christopher Michael Ryting, and David Wingate. 2023. Leveraging large language models for multiple choice question answering. Preprint, arXiv:2210.12353.

Oscar Sainz, Jon Campos, Iker García-Ferrero, Julien Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP evaluation in trouble: On the need to measure LLM data contamination for each benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10776-10787, Singapore. Association for Computational Linguistics.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019.
CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. 2024. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. Preprint, arXiv:2406.01574.

Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and Minlie Huang. 2024. Large language models are not robust multiple choice selectors. Preprint, arXiv:2309.03882.

# A Detecting Single Best Answer examples

To ensure the reliability of the automatic identification of single-best-answer (SBA) questions, we uniformly sample 4K questions from the MMLU, MMLU-Pro, CommonsenseQA, and TruthfulQA benchmarks, which we divide into 1K and 3K splits. We then manually annotate the 1K samples and optimize the GPT-4o-mini prompt on them for best recall. Table 6 shows the prompt template for GPT-4o-mini, which we used to annotate the 4K questions. The 3K split was then used to train our BERT-based SBA classifier. The classifier was trained for 2 epochs, using a learning rate of 1e-04; the model was frozen, except for the last layer and the classification head.

Table 5 shows the percentages of SBA questions on the 1K split as determined by our manual annotations, GPT-4o-mini, and the SBA classifier. The classifier is the preferred one, as it is the most conservative, that is, it detects the most SBA examples, which would be copied verbatim to the WiCkeD variant of the benchmark. The evaluation figures in the table confirm this choice, as the
The small drop in precision is harmless, as it means that we will not add None of the above option to those examples, and will be copied verbatim. In other words, we can estimate that WiCkeD contains $1\%$ of incoherent examples (where there is a valid option even if None of the above is recorded as the correct option), and $5\%$ of examples which do not have a None of the above option even if we could have added it if the classifier had $100\%$ precision. These figures confirm the high quality of the WiCkeD variants. Table 7 shows the final SBA percentages for each benchmark as determined by the classifier. + +![](images/bd8325d535d747ef696a956719422921cdfdc7e978c21619638fc9b68937fb62.jpg) +Figure 3: The changes in models' answers of the original benchmarks and the WiCkeD variant using chain-of-thoughts. + +# B Instruct vs Base Models on Chain-of-Thought + +Results of CoT suggest the instruct models experience less degradation than their base models. To better understand why this happens, we analyze their answers. Figure 3 shows the change in answers from the original to the WiCkeD variants. Instruction-tuned models are less prone to reverse correct answers and can correct original mistakes in WiCkeD. This suggests that WiCkeD is useful for better gauging the reasoning capabilities of the models. + +# C Multiple Choice Prompting + +In multiple choice prompting, the model is prompted with few-shot demonstrations $c$ and a question $q$ and the set of choices $A = \{A, B, C, D\}$ . It generates a probability of the answer label $a \in A$ conditioned on the prefix prompt given by: + +$$ +\mathrm {P} (a \mid c, q) = \prod_ {t = 1} ^ {T} p \left(a _ {t} \mid c, q < T\right) \tag {1} +$$ + +
| | MMLU | MMLU-Pro | TQA | CSQA | Recall | Precision |
|---|---|---|---|---|---|---|
| Manual | 17.3 | 12.3 | 3.3 | 3.8 | - | - |
| GPT-4o-mini | 18.2 | 13.0 | 4.2 | 3.9 | 98.5 | 97.4 |
| SBA Classifier | 19.6 | 14.2 | 4.5 | 4.0 | 98.9 | 95.1 |
"A single correct answer question is a question that can have exactly one correct answer from a given set of choices. A single best answer question can have a most appropriate answer (for example, if this answer is omitted, another answer will be correct). Classify the following questions into SBA and non-SBA questions. Assign a label of 1 if the question is a SBA question and a label of 0 otherwise. Question: {question} Class:"

Table 5: The percentage of Single Best Answer (SBA) questions in 1K questions sampled uniformly from MMLU, MMLU-Pro, TruthfulQA (TQA), and CommonsenseQA (CSQA), as determined by our manual annotations, GPT-4o-mini, and our trained SBA classifier. Recall and precision are computed with respect to the manual annotation.

Table 6: SBA Annotation Prompt Template
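The fine-tuning setup described in this appendix (BERT frozen except for the last layer and the classification head, learning rate 1e-04) amounts to toggling `requires_grad` before building the optimizer. The sketch below illustrates the pattern on a tiny stand-in encoder rather than an actual BERT checkpoint; all module names here are our own.

```python
import torch
from torch import nn

# Tiny stand-in for a BERT-style classifier: embeddings, stacked
# layers, and a classification head over mean-pooled hidden states.
class TinyEncoder(nn.Module):
    def __init__(self, dim=32, n_layers=4, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(1000, dim)
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, ids):
        h = self.embed(ids)
        for layer in self.layers:
            h = torch.relu(layer(h))
        return self.classifier(h.mean(dim=1))

model = TinyEncoder()

# Freeze everything, then unfreeze only the last layer and the head.
for p in model.parameters():
    p.requires_grad = False
for p in model.layers[-1].parameters():
    p.requires_grad = True
for p in model.classifier.parameters():
    p.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
```

With a Hugging Face BERT one would instead match the real parameter names (e.g. the final encoder layer and the pooler/classifier) when deciding what to unfreeze.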
| MMLU | MMLU-Pro | MMLU-Redux | TruthfulQA | CommonsenseQA | ARC-Challenge |
|---|---|---|---|---|---|
| 20.3% | 16.8% | 14.7% | 3.2% | 3.7% | 5.2% |
Table 7: The percentage of Single Best Answer (SBA) questions in the benchmarks as determined by our SBA classifier. We do not apply WiCkeD to SBA questions as it can break their coherence.

The model answer is set to:

$$
\underset{a \in A}{\operatorname{argmax}} \; \mathrm{P}(a \mid c, q) \tag{2}
$$

The following are multiple-choice questions (with answers) about college computer science.

Which of the following regular expressions is equivalent to (describes the same set of strings as) $(\mathrm{a}^{*} + \mathrm{b})^{*}(\mathrm{c} + \mathrm{d})$

A. $a^{*}(c + d) + b(c + d)$

B. $a^{*}(c + d)^{*} + b(c + d)^{*}$

C. $a^{*}(c + d) + b^{*}(c + d)$

D. None of the above

Answer: D

A certain pipelined RISC machine has 8 general-purpose registers R0, R1, ..., R7 and supports the following operations.

ADD Rs1, Rs2, Rd Add Rs1 to Rs2 and put the sum in Rd

MUL Rs1, Rs2, Rd Multiply Rs1 by Rs2 and put the product in Rd

An operation normally takes one cycle; however, an operation takes two cycles if it produces a result required by the immediately following operation in an operation sequence. Consider the expression $\mathrm{AB} + \mathrm{ABC} + \mathrm{BC}$, where variables A, B, C are located in registers R0, R1, R2. If the contents of these three registers must not be modified, what is the minimum number of clock cycles required for an operation sequence that computes the value of $\mathrm{AB} + \mathrm{ABC} + \mathrm{BC}$?

A. 5

B. 6

C. 8

D. None of the above

Answer: B

The Singleton design pattern is used to guarantee that only a single instance of a class may be instantiated. Which of the following is (are) true of this design pattern?

I. The Singleton class has a static factory method to provide its instance.

II. The Singleton class can be a subclass of another class.

III. The Singleton class has a private constructor.

A. I only

B. II only

C. I, II, and III

D.
None of the above

Answer: C

Of the following problems concerning a given undirected graph $G$, which is currently known to be solvable in polynomial time?

A. Finding the longest simple cycle in G

B. Finding the shortest cycle in G

C. Finding ALL spanning trees of G

D. None of the above

Answer:

Figure 4: Examples from the MMLU computer science task using WiCkeD. We show 3-shot for brevity, but 5-shot was actually used in the experiments for the main results.

Figures 4, 5, 6, and 7 show example prompts for the MMLU college computer science, ARC-Challenge, CommonsenseQA, and MMLU-Redux benchmarks, respectively.

Question: The marsh willow herb is a plant native to the northeastern United States. It grows best in damp habitats. Which of the following environmental changes would most likely cause a decrease in the marsh willow herb population in an area?

A. a rainstorm lasting several weeks

B. a drought lasting twelve months

C. unusually low temperatures during the month of July

D. unusually high temperatures during the month of January

Answer: B

Question: Volcanic eruptions early in Earth's history are believed to be responsible for a large proportion of the matter now found in which Earth structure?

A. asthenosphere

B. hydrosphere

C. ozone layer

D. None of the above

Answer: D

Question: Which human activity will help decrease air pollution?

A. burning crops

B. driving a hybrid car

C. burning household garbage

D. None of the above

Answer: D

Question: Which of the following traits is most influenced by the environment?

A. Body weight

B. Eye color

C. Blood type

D. Color blindness

Answer:

Figure 5: Examples from the AllenAI ARC-Challenge using WiCkeD. We show 3-shot for brevity, but 5-shot was actually used in the experiments for the main results.
The first few-shot example does not include a None of the above option because it was classified as an SBA question.

# D Detailed WiCkeD Results

Tables 8 and 9 show the detailed performances of open-weight models using direct and CoT prompting, respectively. Since WiCkeD selects a random choice to be replaced every time, results may vary slightly. The reported results use a random seed of 5331. Table 10 shows the detailed performances of closed-source models.
Each benchmark cell below shows Org. / Wcd.

| Model | Size | IT | MMLU | MMLU-Pro | MMLU-Redux | ARC-Challenge | TruthfulQA | CSQA | Average |
|---|---|---|---|---|---|---|---|---|---|
| Qwen-2.5 | 7B | - | 74.0 / 48.3 | 54.3 / 45.6 | 67.6 / 43.7 | 89.3 / 71.7 | 79.3 / 65.8 | 85.1 / 70.3 | 74.9 / 57.5 |
| | 7B | ✓ | 74.4 / 48.8 | 53.8 / 47.8 | 68.2 / 60.2 | 90.1 / 78.1 | 71.7 / 62.17 | 84.5 / 73.7 | 73.9 / 61.8 |
| | 14B | - | 79.8 / 70.9 | 58.4 / 55.3 | 73.6 / 64.1 | 94.1 / 68.7 | 82.2 / 71.8 | 84.6 / 74.5 | 78.8 / 67.6 |
| | 14B | ✓ | 79.9 / 72.7 | 60.2 / 55.6 | 75.2 / 66.7 | 94.4 / 75.1 | 80.1 / 73.1 | 83.4 / 74.2 | 78.9 / 69.6 |
| | 72B | - | 86.0 / 78.2 | 67.1 / 62.0 | 81.2 / 64.5 | 95.7 / 74.5 | 88.9 / 73.4 | 89.3 / 77.3 | 84.7 / 71.6 |
| | 72B | ✓ | 87.3 / 80.1 | 68.4 / 60.5 | 81.7 / 67.8 | 95.1 / 80.1 | 90.1 / 74.6 | 89.1 / 79.4 | 85.3 / 73.8 |
| Gemma-2 | 9B | - | 70.9 / 57.1 | 48.5 / 37.9 | 65.7 / 57.9 | 88.9 / 75.1 | 62.9 / 54.9 | 76.5 / 64.6 | 68.9 / 57.9 |
| | 9B | ✓ | 72.2 / 51.4 | 51.2 / 47.1 | 66.1 / 52.8 | 90.1 / 77.1 | 72.3 / 62.4 | 81.9 / 68.4 | 72.3 / 59.9 |
| | 27B | - | 70.9 / 57.1 | 48.1 / 35.3 | 67.2 / 52.3 | 89.9 / 69.6 | 78.9 / 59.9 | 81.1 / 70.5 | 72.7 / 57.5 |
| | 27B | ✓ | 76.0 / 64.4 | 53.2 / 42.6 | 70.1 / 57.4 | 91.8 / 73.3 | 81.5 / 70.8 | 83.7 / 69.8 | 76.1 / 63.0 |
| Llama 3.1 | 8B | - | 64.7 / 56.7 | 42.5 / 39.7 | 58.1 / 51.3 | 79.7 / 58.2 | 58.0 / 49.6 | 73.6 / 64.3 | 62.8 / 53.3 |
| | 8B | ✓ | 67.7 / 60.1 | 46.8 / 43.8 | 62.2 / 53.2 | 82.5 / 59.1 | 63.2 / 53.6 | 78.1 / 66.9 | 66.8 / 56.1 |
| | 70B | - | 77.9 / 70.1 | 56.4 / 52.8 | 78.4 / 69.8 | 90.6 / 76.6 | 81.7 / 66.5 | 81.8 / 69.8 | 77.8 / 67.6 |
| | 70B | ✓ | 82.1 / 68.9 | 58.7 / 44.6 | 79.6 / 67.2 | 91.2 / 79.4 | 80.3 / 69.8 | 82.3 / 69.2 | 79.0 / 66.5 |
| Mistral | 7B | - | 61.5 / 50.4 | 38.1 / 32.7 | 54.1 / 40.8 | 77.7 / 65.8 | 55.8 / 41.7 | 70.7 / 56.4 | 59.6 / 47.9 |
| | 7B | ✓ | 58.7 / 51.9 | 35.6 / 32.8 | 53.7 / 40.4 | 75.6 / 63.3 | 60.9 / 44.9 | 70.4 / 56.6 | 59.2 / 48.3 |
| DS-R1-Distill Llama | 8B | - | 55.5 / 48.6 | 34.6 / 30.5 | 51.2 / 43.0 | 71.3 / 61.7 | 62.7 / 47.9 | 64.8 / 59.8 | 56.7 / 48.6 |
| DS-R1-Distill Qwen | 7B | - | 54.1 / 48.4 | 32.2 / 30.9 | 53.8 / 43.2 | 73.8 / 65.7 | 88.4 / 72.5 | 61.9 / 55.7 | 60.7 / 52.7 |
| Average | | | 71.9 / 60.2 | 50.5 / 44.3 | 67.1 / 55.4 | 86.8 / 70.7 | 74.4 / 61.9 | 79.0 / 67.9 | 71.6 / 60.1 |
+ +Table 8: Detailed results for benchmarks: Original (Org.) and WiCkeD (Wcd.) variants using direct prompting. + +
Each benchmark cell below shows Org. / Wcd.

| Model | Size | IT | MMLU | MMLU-Pro | MMLU-Redux | GPQA | Average |
|---|---|---|---|---|---|---|---|
| Qwen-2.5 | 7B | - | 75.3 / 50.3 | 54.3 / 45.6 | 67.6 / 43.7 | 30.2 / 22.2 | 56.9 / 40.5 |
| | 7B | ✓ | 76.4 / 53.8 | 56.8 / 47.8 | 68.2 / 60.2 | 32.1 / 23.3 | 58.4 / 46.3 |
| | 14B | - | 79.8 / 70.9 | 58.4 / 55.3 | 70.6 / 62.1 | 32.8 / 27.8 | 60.4 / 54.0 |
| | 14B | ✓ | 79.9 / 72.7 | 60.2 / 55.6 | 75.2 / 66.7 | 34.78 / 26.7 | 62.5 / 55.4 |
| Gemma-2 | 9B | - | 70.4 / 61.1 | 48.5 / 40.9 | 58.7 / 45.9 | 32.4 / 24.0 | 52.5 / 42.9 |
| | 9B | ✓ | 72.2 / 68.4 | 51.2 / 47.1 | 63.1 / 52.8 | 33.8 / 24.9 | 55.1 / 48.3 |
| | 27B | - | 74.9 / 70.5 | 48.1 / 42.3 | 67.2 / 61.3 | 31.7 / 25.3 | 55.5 / 49.9 |
| | 27B | ✓ | 77.4 / 72.6 | 53.2 / 46.6 | 70.1 / 64.8 | 32.4 / 27.8 | 58.3 / 52.9 |
| Mistral | 7B | - | 58.7 / 47.8 | 35.6 / 32.7 | 53.7 / 40.8 | 24.7 / 17.8 | 43.2 / 34.8 |
| | 7B | ✓ | 61.5 / 51.9 | 38.1 / 32.8 | 54.1 / 42.4 | 28.9 / 20.2 | 45.7 / 36.8 |
| Llama 3.1 | 8B | - | 69.8 / 62.7 | 45.7 / 42.5 | 62.1 / 56.1 | 28.9 / 21.4 | 51.6 / 45.7 |
| | 8B | ✓ | 71.2 / 66.3 | 47.6 / 44.8 | 64.2 / 60.2 | 31.9 / 24.6 | 53.7 / 48.9 |
| DS-R1-Distill Llama | 8B | - | 85.9 / 84.6 | 75.1 / 76.1 | 84.5 / 78.5 | 35.8 / 28.6 | 70.3 / 66.9 |
| DS-R1-Distill Qwen | 7B | - | 80.4 / 78.1 | 73.2 / 70.4 | 74.3 / 72.8 | 36.7 / 29.0 | 66.2 / 62.6 |
| Average | | | 73.8 / 65.1 | 53.3 / 48.6 | 66.7 / 57.7 | 31.9 / 24.5 | 56.4 / 49.0 |
Table 9: Detailed results for the MMLU, MMLU-Pro, MMLU-Redux, and GPQA benchmarks' Original (Org.) and WiCkeD (Wcd.) variants using chain-of-thought prompting.
Each benchmark cell below shows Org. / Wcd.

| Model | MMLU | MMLU-Pro | MMLU-Redux | Average |
|---|---|---|---|---|
| GPT-4o | 81.3 / 75.9 | 63.7 / 55.6 | 78.9 / 67.9 | 74.6 / 66.5 |
| GPT-4o-mini | 89.6 / 84.0 | 70.5 / 67.3 | 84.3 / 81.3 | 81.5 / 77.5 |
| Gemini Flash 2.0 | 82.8 / 77.4 | 75.9 / 64.2 | 82.2 / 75.9 | 80.3 / 72.5 |
| Average | 84.6 / 79.1 | 70.0 / 62.4 | 81.8 / 75.0 | 78.8 / 72.2 |
Table 10: Detailed results of closed-source models on the MMLU, MMLU-Pro, and MMLU-Redux benchmarks' Original (Org.) and WiCkeD (Wcd.) variants using chain-of-thought prompting.

Question: An owl hunts mice that live in a farmer's field. After the farmer gathers the crops, the mice have fewer places to hide. Which is most likely to happen after the crops are gathered?

A. The owl will catch more mice.

B. The owl will hunt in a different field.

C. The owl will have new material to build its nest.

D. The owl will have a hard time feeding its young

Answer: A

Question: What can a flower become?

A. a fruit

B. a leaf

C. a branch

D. None of the above

Answer: A

Question: Which statement accurately describes the neutrons in any element?

A. The number of neutrons equals the number of electrons.

B. The charge of a neutron is always negative.

C. Neutrons are more massive than electrons.

D. None of the above

Answer: D

Question: Cells take in food for energy. The part of the cell that aids in digestion of the food is the lysosome. What is the main role of lysosomes in the process of food digestion?

A. building proteins

B. breaking down wastes

C. converting energy from one form into another

D. None of the above

Answer:

Figure 6: Examples from CommonsenseQA using WiCkeD. We show 3-shot for brevity, but 5-shot was actually used in the experiments for the main results. The first few-shot example does not include a None of the above option because it was classified as an SBA question.

Protons used in cancer therapy are typically accelerated to about 0.6c. How much work must be done on a particle of mass m in order for it to reach this speed, assuming it starts at rest?

A. $0.25mc^2$

B. $0.60mc^2$

C. $1.25mc^2$

D. None of the above

Answer: A

A resistor in a circuit dissipates energy at a rate of 1 W.
If the voltage across the resistor is doubled, what will be the new rate of energy dissipation?

A. 0.25 W

B. 0.5 W

C. 4 W

D. None of the above

Answer: C

A rod measures 1.00 m in its rest system. How fast must an observer move parallel to the rod to measure its length to be 0.80 m?

A. 0.60c

B. 0.70c

C. 0.80c

D. None of the above

Answer: A

A photon strikes an electron of mass m that is initially at rest, creating an electron-positron pair. The photon is destroyed and the positron and two electrons move off at equal speeds along the initial direction of the photon. The energy of the photon was

A. $mc^2$

B. $2mc^2$

C. $3mc^2$

D. None of the above

Answer:

Figure 7: Examples from the MMLU-Redux using WiCkeD. We show 3-shot for brevity, but 5-shot was actually used in the experiments for the main results. \ No newline at end of file diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/images.zip b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c62ec2b21628be2aaff0cba89b07676e22079b03 --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10b596f98be79ed45556e7385f428e2144d9efb05582c2c24d58b1484ebda3a7 +size 488646 diff --git a/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/layout.json b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d4d87b1c6eb4cc81770b86c946681fcb34251e7a --- /dev/null +++ b/ACL/2025/WiCkeD_ A Simple Method to Make Multiple Choice Benchmarks More Challenging/layout.json @@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1 +oid sha256:5c5f8ad5f72c2d8d0e113734cf5841077b4d4b88916b44008230ca1348525a16 +size 406415 diff --git a/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_content_list.json b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1d754ddbea179bb7448d155690c0a2d575f3d172 --- /dev/null +++ b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5483ace027abea30a7feedf5b4d2998f316bc92dc840ce8254a6b9a133f1469a +size 60419 diff --git a/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_model.json b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1194b05882c7bb5af14d61c49a9c029854202416 --- /dev/null +++ b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68c976220adf69f8fc4712424e046d35f49062ee3ce8494ccb0a1dd503754c21 +size 71573 diff --git a/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_origin.pdf b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2aebdb65e36ade55f8056d5c88747e061c686fc0 --- /dev/null +++ b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language 
Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5ba95ee9d71cf48c9c1ea85999cf5ecb23e390aa38355c7bad8e1d8990ea718 +size 6864710 diff --git a/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/full.md b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7fc9f71a612def42c306eb29e092361b48c02aaa --- /dev/null +++ b/ACL/2025/WinSpot_ GUI Grounding Benchmark with Multimodal Large Language Models/full.md @@ -0,0 +1,201 @@ +# WinSpot: GUI Grounding Benchmark with Multimodal Large Language Models

Zheng Hui, Yinheng Li, Dan Zhao, Colby Banbury, Tianyi Chen, Kazuhito Koishida

Columbia University, Microsoft

{zackhui, yinhengli, tianyi.chen, colbybanbury}@microsoft.com, {zh2483}@columbia.edu, {dz1158}@nyu.edu

# Abstract

Graphical User Interface (GUI) automation relies on accurate GUI grounding. However, obtaining large-scale, high-quality labeled data remains a key challenge, particularly in desktop environments like Windows Operating System (OS). Existing datasets primarily focus on structured web-based elements, leaving a gap in real-world GUI interaction data for non-web applications. To address this, we introduce a new framework that leverages LLMs to generate large-scale GUI grounding data, enabling automated and scalable labeling across diverse interfaces. To ensure high accuracy and reliability, we manually validated and refined 5,000 GUI coordinate-instruction pairs, creating WinSpot—the first benchmark specifically designed for GUI grounding tasks in Windows environments. WinSpot provides a high-quality dataset for training and evaluating visual GUI agents, establishing a foundation for future research in GUI automation across diverse and unstructured desktop environments$^1$.
# 1 Introduction

Multimodal Large Language Models (MLLMs) exhibit impressive visual understanding and reasoning (Gandhi et al., 2023; Liu et al., 2024; Li et al., 2024; Zhang et al., 2025; Li et al., 2025b), enabling automation in complex real-world scenarios (Ai et al., 2024; Hui et al., 2025b). Among these, Graphical User Interface (GUI) automation has emerged as a critical application, where agents interpret on-screen elements and execute context-relevant actions for tasks such as software testing and application management (Yang et al., 2023a; Li et al., 2020; Wang et al., 2024b).

![](images/747fc5da058cb1e1eebbb46da4fd678f632dea9dbf09784aebf05e2e0e151d5b.jpg)
Figure 1: GUI grounding: locating actionable UI elements based on instructions.

Despite significant advances in web and mobile GUI automation (Bavishi et al., 2023; Yang et al., 2023a; Cheng et al., 2024; Wang et al., 2024a; Hui et al., 2025a), GUI automation in Windows desktop environments remains largely unexplored, even though Windows is widely used in professional and enterprise applications. This gap is particularly challenging because Windows applications lack a standardized UI representation such as HTML or DOM structures, requiring GUI grounding to rely purely on visual perception. Furthermore, Windows interfaces exhibit highly diverse layouts: applications built with different frameworks (e.g., Win32, UWP, Electron) follow inconsistent UI structures. Additionally, overlapping windows introduce ambiguity in detecting actionable elements, as interactable regions may be partially or fully obscured. Finally, unlike web applications, where ARIA (Accessible Rich Internet Applications) attributes provide accessibility metadata, many Windows applications lack structured accessibility trees (a11y trees), making it difficult to extract UI component descriptions programmatically.
Existing screenshot-based methods (Bavishi et al., 2023; Cheng et al., 2024) show promise, but the field lacks a large-scale, standardized benchmark specifically designed for Windows GUI automation. Without such a benchmark, researchers face challenges in systematically measuring progress, comparing approaches, and addressing the distinct complexities of desktop interfaces.

To fill this void, we introduce WinSpot, a large-scale benchmark specifically designed for GUI grounding in Windows environments. Our main contributions are summarized below:

- Two-Stage Labeling Framework. We propose a scalable approach that utilizes MLLMs to generate coordinate-instruction pairs from diverse Windows screenshots, significantly reducing the initial labeling burden. Importantly, our method relies exclusively on raw screenshots, ensuring seamless adaptation across different Windows applications.
- WinSpot: A First-of-its-Kind Windows GUI Benchmark. Expanding on our two-stage framework, we introduce WinSpot—a comprehensive dataset with over 5,000 human-validated coordinate-instruction pairs covering diverse Windows environments, 21 times larger than previous benchmarks.

# 2 Related Work

# 2.1 UI Screen Understanding Datasets

A variety of datasets (Moran et al., 2018; He et al., 2021; Wu et al., 2023) have been developed to support UI modeling, primarily in the mobile domain. For instance, the AMP dataset (Zhang et al., 2021) contains 77k screens from 4,068 iOS apps. Another significant resource is Rico (Deka et al., 2017), the largest publicly available dataset for Android app UI understanding. In the broader web and OS domain, datasets such as Mind2Web (Deng et al., 2024), VisualWebArena (Koh et al., 2024), and Windows Agent Arena (Bonatti et al., 2024) offer simulated environments for various tasks. Existing GUI grounding datasets overwhelmingly focus on mobile and web platforms, leaving desktop environments underexplored.
The closest datasets related to desktop UI understanding are SeeClick (Cheng et al., 2024) and OS-Atlas (Wu et al., 2024b), though they predominantly target cross-platform settings and lack a specific focus on Windows. Our work fills this gap by introducing a dataset tailored to desktop environments, particularly the Windows OS, which marks a novel contribution to the field.

# 3 Method

# 3.1 Data Construction

Unlike previous work (Cheng et al., 2024) that focuses on cross-domain tasks and structured data for training dataset construction, our approach targets the Windows OS (Figure 2). We propose Instruction-Interactable Region Alignment (Figure 3), leveraging MLLMs to generate training data without HTML elements or accessibility trees. Since Windows applications lack a standardized UI representation and display diverse layouts across frameworks (e.g., Win32, UWP, Electron), our method relies entirely on visual cues for effective GUI grounding.

![](images/465970a520be16ca757d9707e8c633e117789f6e18633b744449509f3c1cdadc.jpg)

![](images/01d93a85a9031d353e5c1e10699fae5d391b745e02968304a5639c72e5a54eed.jpg)
Figure 2: (a) Traditional methods rely on HTML or DOM files to locate icons during data construction. (b) Our proposed data alignment framework requires only raw screenshot images.

We first retrieve and filter images via the Bing API, then verify quality with Phi3-Vision (Abdin et al., 2024b). Our goal is to collect diverse, representative Windows screenshots, so we query screenshots of 700+ top apps from the Microsoft Store2. Phi3-Vision verifies resolution and screenshot validity; quality-approved images are randomly added to our data bank, while images failing the quality checks are discarded. We further expand the dataset via the Bing API's similar-image feature.

Once filtered, we apply a proprietary BERT model with a ViT encoder to perform icon grounding on the selected images.
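The retrieval-and-filtering loop above can be sketched in a few lines of Python. This is our own illustration: `passes_quality_check` is a hypothetical stand-in for the Phi3-Vision validity check, whose exact prompts and thresholds the paper does not specify.

```python
# Hypothetical sketch of the screenshot collection loop: retrieve candidates,
# keep only images that pass the (stand-in) quality check, discard the rest.

def passes_quality_check(image_meta, min_width=1280, min_height=720):
    """Stand-in for the Phi3-Vision filter: accept only sufficiently
    high-resolution images that are actual screenshots. The thresholds
    here are assumptions, not values from the paper."""
    return (
        image_meta["width"] >= min_width
        and image_meta["height"] >= min_height
        and image_meta.get("is_screenshot", False)
    )

def collect_data_bank(candidates):
    """Quality-approved images are added to the data bank; the rest are discarded."""
    return [m for m in candidates if passes_quality_check(m)]

candidates = [
    {"width": 1920, "height": 1080, "is_screenshot": True},
    {"width": 640, "height": 480, "is_screenshot": True},    # too small
    {"width": 1920, "height": 1080, "is_screenshot": False},  # not a screenshot
]
bank = collect_data_bank(candidates)  # only the first candidate survives
```

In the actual pipeline the quality decision is made by an MLLM rather than fixed thresholds, but the control flow (filter, bank, expand via similar-image search) is the same.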
The ViT-BERT model generates bounding boxes around interactable icons in the images, which we use to create structured data. We then use GPT-4o to align the filtered images with corresponding icon descriptions.

![](images/144f398565d4a33eee92d720b7f23c14ded7eddb611aca874b4c6be0dd92df58.jpg)
Figure 3: Overview of the Data Alignment Process: (a) illustrates the data filtering strategy using the Phi3 vision model, (b) shows the input and output of icon grounding with the in-house ViT-BERT model, and (c) the use of LLMs for GUI and description alignment.

This alignment process serves multiple purposes: 1) By using expensive models like GPT-4o only in the data alignment stage, we reduce computational costs while maintaining accuracy during the reasoning and inference processes. 2) Previous work (Zheng et al., 2024) has shown that providing GPT-4V with screenshots that only include bounding boxes and IDs can be misleading; GPT-4V's limited ability to extract semantic information while predicting actions poses a challenge. To address this, our automated labeling pipeline incorporates semantic cues directly into the images during data construction, primarily by enhancing the prompts generated by GPT-4V. 3) By enriching the dataset with diverse semantic information, we ensure that the subsequent click agents can handle distributed tasks more effectively, improving overall performance. In addition to the data collected via the Bing API, we incorporated 500 images from the CoVA dataset (Kumar et al., 2022) and 500 images from the WebSight dataset (Laurençon et al., 2024) to further enhance our training set, resulting in a dataset of around 60K images for training our model. For more examples of the data construction, please refer to Appendix B.

# 3.2 WinSpot Benchmark

The WinSpot dataset consists of over 5,000 annotated$^{3}$ screenshots from 14 core Windows applications, each representing unique interaction types and layout structures.
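Concretely, a single coordinate-instruction pair might look like the record below. The field names, path, and values are our own illustration (the paper does not publish a schema); the bounding box comes from the ViT-BERT grounding stage and the instruction from the GPT-4o alignment stage.

```python
# Illustrative shape of one coordinate-instruction pair. All field names and
# values are hypothetical; the released data may use a different layout.

record = {
    "screenshot": "screenshots/example_app_001.png",  # hypothetical path
    "bbox": [742, 118, 810, 142],                     # [x1, y1, x2, y2] in pixels
    "instruction": "Open the settings menu",          # hypothetical instruction
}

def bbox_center(bbox):
    """A natural click target for a grounding model: the center of the
    annotated bounding box."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, (y1 + y2) / 2)

cx, cy = bbox_center(record["bbox"])  # (776.0, 130.0)
```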
Examples from WinSpot are shown in Figure 4. The applications and their respective contributions to the dataset are shown in Figure 5. Each screenshot in WinSpot contains multiple interactable regions, such as buttons, menus, and icons, each annotated with its corresponding function. These annotations include bounding boxes around the interactable elements and their associated semantic descriptions, which are aligned with natural language instructions for both grounding and task prediction tasks. This variety ensures that WinSpot provides a challenging and comprehensive evaluation framework for GUI agents, enabling robust testing of both interaction precision and generalization across different applications.

![](images/3dece0521df44a10d44ddae27d6f9475986627cad04b36f2fd10545234dd260c.jpg)

![](images/490d2a27506263492311b66e78f5939526cb72827ad4829ec1c8f51c83a8833d.jpg)
Figure 4: WinSpot examples

WinSpot presents a diverse array of tasks, including file navigation, system settings adjustment, and text input, as well as more specialized tasks such as process management in Task Manager and command execution in Command Prompt. These tasks encompass a wide range of complexity, from simple button clicks to more involved interactions that require an understanding of application-specific layouts. In addition to supporting GUI grounding tasks $P(y|S,x)$, WinSpot can also be used for reverse tasks $P(x|S,y)$, where the model must predict the description of a given GUI element based on its location in the screenshot. This two-way task formulation enhances the evaluation by testing both the agent's understanding of visual cues and its ability to map interactions to the correct interface components.

![](images/e2b8f024610e319244c371fb3f4b39a602c4dc79154802ed5d593e40f636a7bc.jpg)
Figure 5: WinSpot Category

# 4 Experiments and Results

In this section, we evaluate both general-purpose models and GUI-specific models on our newly introduced WinSpot benchmark.
# 4.1 Baselines and Evaluation Metric

We compare multiple baselines, including general-purpose MLLMs (e.g., GPT-4o (OpenAI et al., 2024), GPT-4V (OpenAI et al., 2024), Phi3-Vision (Abdin et al., 2024a), MiniGPT-v2 (Chen et al., 2023)) and GUI-focused models (e.g., Fuyu (Bavishi et al., 2023), CogAgent (Wang et al., 2024c), SeeClick (Cheng et al., 2024), Uground (Gou et al., 2024)). Consistent with prior studies on GUI grounding (Li et al., 2022; Yang et al., 2023b; Cheng et al., 2024), we adopt click accuracy as our primary metric: a prediction is considered correct if the model's predicted click coordinates fall within the bounding box of the ground-truth element.

# 4.2 Results

Table 1 presents the click accuracy of various models across the four major subcategories of the WinSpot benchmark: File Management, System Settings, Productivity Tools, and MS Store & Web applications. These categories span a wide range of GUI interaction patterns, allowing us to assess both the generalization and domain sensitivity of each model. The best-performing model is Uground, which achieves a remarkable 44.2% overall accuracy, significantly outperforming all other baselines. Its dominance is particularly evident in System Settings and MS Store & Web, where it reaches 51.4% and 82.4%, respectively.

Among the general-purpose MLLMs, GPT-4V and GPT-4o show relatively higher click accuracy (18.3% and 16.5%, respectively), with notable strengths in the MS Store & Web category, where visual layout conventions tend to be more standardized and semantically interpretable. This aligns with prior observations that LLMs pretrained on web data tend to generalize better in semi-structured interfaces but struggle with unstructured system UIs.
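The point-in-box click-accuracy criterion from §4.1 can be sketched in a few lines. This is our own illustration, not the authors' evaluation code:

```python
def is_correct(pred_xy, gt_bbox):
    """A prediction counts as correct when the predicted click falls inside
    the ground-truth element's bounding box [x1, y1, x2, y2]."""
    x, y = pred_xy
    x1, y1, x2, y2 = gt_bbox
    return x1 <= x <= x2 and y1 <= y <= y2

def click_accuracy(preds, gt_bboxes):
    """Fraction of predicted clicks that land inside their target boxes."""
    correct = sum(is_correct(p, b) for p, b in zip(preds, gt_bboxes))
    return correct / len(preds)

# Two of three simulated clicks land inside their target boxes.
acc = click_accuracy(
    [(50, 50), (5, 5), (300, 300)],
    [(40, 40, 60, 60), (0, 0, 10, 10), (0, 0, 10, 10)],
)
# acc == 2/3
```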
However, their low scores in System Settings and File Management (e.g., GPT-4V: 6.3%, GPT-4o: 7.5% in System Settings) reveal key limitations when navigating system-level layouts, likely due to the absence of such interfaces in their training data and the lack of spatial attention mechanisms specialized for desktop contexts. Phi-3.5 Vision and MiniGPT-v2, both smaller open-source models,
| Method | Size | File Management | System | Productivity Tools | MS Store & Web | Total Task |
| --- | --- | --- | --- | --- | --- | --- |
| MiniGPT-v2 | 7B | 0% | 0.6% | 2.2% | 5.8% | 1.7% |
| GPT-4V | Unknown | 8.0% | 6.3% | 15.6% | 58.1% | 18.3% |
| GPT-4o | Unknown | 8.8% | 7.5% | 14.1% | 47.7% | 16.5% |
| Phi-3.5 Vision | 4.2B | 4.7% | 5.8% | 3.7% | 25.5% | 7.9% |
| Fuyu | 8B | 5.0% | 9.2% | 7.1% | 34.3% | 9.4% |
| CogAgent | 18B | 6.4% | 8.1% | 10.5% | 60.8% | 13.8% |
| SeeClick | 9.6B | 8.6% | 16.6% | 18.9% | 70.6% | 20.1% |
| GUIAct-Qwen | 7B | 10.8% | 6.1% | 13.6% | 78.4% | 18.0% |
| Uground | 7B | 27.2% | 51.4% | 45.4% | 82.4% | 44.2% |
+ +Table 1: Evaluation of Various Methods on WinSpot Subcategories + +perform poorly across all subcategories, with overall accuracy below $10\%$ . These results reinforce the importance of scale, training modality, and data coverage in grounding tasks. Interestingly, Phi-3.5 Vision performs slightly better in the MS Store & Web category, suggesting even smaller models can benefit from interface regularity if provided with sufficient multimodal alignment. Specialized GUI models such as CogAgent, Fuyu, SeeClick, and GUIAct-Qwen fall in an intermediate performance range (between $9.4\%$ and $20.1\%$ overall), with SeeClick standing out as the strongest among them. Notably, SeeClick attains $70.6\%$ accuracy on MS Store & Web, highlighting its suitability for commercial UI tasks, but only $16.6\%$ in System Settings, pointing to challenges in less standardized environments. Similarly, GUIAct-Qwen achieves competitive results in web-based domains but lacks consistency across system-heavy tasks, suggesting an over-reliance on pretraining priors that fail to capture the visual intricacies of Windows system utilities. + +# 5 Discussion and Future Work + +Our findings highlight a clear performance divide between general-purpose MLLMs and domain-specific GUI models. Generalist models such as GPT-4o and GPT-4V demonstrate only modest proficiency in GUI grounding, reflecting their limited pretraining exposure to Windows UI paradigms. Meanwhile, specialized models like Uground and SeeClick perform significantly better, particularly in structured tasks like web and app store interactions. However, even these tailored models struggle with system-level operations—such as task management, file navigation, or control panel interactions—where contextual reasoning and fine-grained spatial precision are required. 
This underscores a broader insight: current models, despite their size and multimodal capacity, lack the robust spatial reasoning and memory mechanisms necessary for GUI automation in real-world settings. WinSpot helps uncover these limitations by evaluating not just interaction precision, but also the ability of models to align natural language instructions with semantically meaningful UI regions. Furthermore, this work is situated within a broader movement in NLP and multimodal learning: applying LLMs to real-world utility tasks beyond traditional text benchmarks. With growing industrial interest in automating workflows, testing software, and enabling human-in-the-loop systems, GUI agents will increasingly become critical enablers. Our benchmark and methodology lay the groundwork for these systems, while also exposing their current gaps.

Going forward, we advocate for more research at the intersection of vision-language grounding, procedural planning (Li et al., 2025a), and user intent modeling. In particular, incorporating temporal dynamics (Jiang et al., 2025), multi-turn interactions (Liu et al., 2025), and UI state tracking may bridge the gap between static grounding and true GUI manipulation. Additionally, as LLM-driven agents are deployed in productivity tools, safety (Hui et al., 2024a; Zhang et al., 2024; Wu et al., 2024a) and interpretability will become pressing concerns, especially in high-stakes domains like healthcare, finance, and enterprise automation. In summary, WinSpot offers a much-needed benchmark for evaluating GUI grounding in Windows environments and serves as a testbed for the next generation of GUI agents. It pushes the research community to build models that are not only linguistically fluent but also visually and operationally grounded in the environments where real users work.

# 6 Limitations

While WinSpot establishes a valuable benchmark for GUI grounding in Windows environments, several limitations remain.
First, the dataset is derived mainly from a curated selection of popular Windows applications and supplemental sources, which may not capture the full diversity of desktop interfaces—especially those used in specialized or enterprise settings. Second, our automated labeling pipeline, although designed to reduce manual effort, relies on MLLMs for generating semantic cues; any inaccuracies or biases inherent in these models can propagate into the final annotations. Third, our evaluation metric, click accuracy, offers a simplified perspective on interaction performance, potentially overlooking nuanced aspects of user engagement such as multi-step workflows or gesture dynamics. Finally, the framework is optimized for static screenshots and may not generalize well to dynamic or adaptive interfaces that evolve in real time. + +# 7 Ethical Considerations + +In constructing WinSpot, we took deliberate steps to safeguard user privacy and ensure ethical data practices. All screenshots were rigorously reviewed and post-processed to remove personal or sensitive information before inclusion in the dataset. However, the selection process for source applications may introduce biases that affect the representativeness of the dataset. Additionally, the automated labeling pipeline's reliance on MLLMs could inadvertently propagate existing biases present in these models. We advocate for continuous audits and transparent documentation of both dataset composition and model performance, especially when these systems are applied in real-world scenarios such as automated testing or user assistance. As the deployment of GUI automation technology expands, it is imperative to consider the impact on user autonomy and employment, ensuring that such tools are used in a manner that respects consent, fairness, and accountability. 
# References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024a. Phi-3 technical report: A highly capable language model locally on your phone. Preprint, arXiv:2404.14219.
Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024b. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Lin Ai, Zheng Hui, Zizhou Liu, and Julia Hirschberg. 2024. Enhancing pre-trained generative language models with question attended span extraction on machine reading comprehension. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10046–10063, Miami, Florida, USA. Association for Computational Linguistics.

Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sagnak Tasirlar. 2023. Fuyu-8b: A multimodal architecture for AI agents.

Rogerio Bonatti, Dan Zhao, Francesco Bonacci, Dillon Dupont, Sara Abdali, Yinheng Li, Yadong Lu, Justin Wagle, Kazuhito Koishida, Arthur Bucker, Lawrence Jang, and Zack Hui. 2024. Windows agent arena: Evaluating multi-modal os agents at scale. Preprint, arXiv:2409.08264.

Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. Preprint, arXiv:2310.09478.

Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Li YanTao, Jianbing Zhang, and Zhiyong Wu. 2024. SeeClick: Harnessing GUI grounding for advanced visual GUI agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9313-9332, Bangkok, Thailand. Association for Computational Linguistics.

Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017. Rico: A mobile app dataset for building data-driven design applications.
In Proceedings of the 30th annual ACM symposium on user interface software and technology, pages 845-854. +Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Sam Stevens, Boshi Wang, Huan Sun, and Yu Su. 2024. Mind2web: Towards a generalist agent for the web. Advances in Neural Information Processing Systems, 36. +Kanishk Gandhi, Jan-Philipp Fraenken, Tobias Gerstenberg, and Noah Goodman. 2023. Understanding social reasoning in language models with language models. In Advances in Neural Information Processing Systems, volume 36, pages 13518-13529. Curran Associates, Inc. +Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, and Yu Su. 2024. Navigating the digital world as humans do: Universal visual grounding for gui agents. arXiv preprint arXiv:2410.05243. +Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, and Jindong Chen. 2021. Actionbert: Leveraging user actions for semantic understanding of user interfaces. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 5931-5938. +Zheng Hui, Zhaoxiao Guo, Hang Zhao, Juanyong Duan, Lin Ai, Yinheng Li, Julia Hirschberg, and Congrui Huang. 2024a. Can open-source llms enhance data augmentation for toxic detection?: An experimental study. arXiv preprint arXiv:2411.15175. +Zheng Hui, Zhaoxiao Guo, Hang Zhao, Juanyong Duan, and Congrui Huang. 2024b. ToxiCraft: A novel framework for synthetic generation of harmful information. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16632-16647, Miami, Florida, USA. Association for Computational Linguistics. +Zheng Hui, Yinheng Li, Tianyi Chen, Colby Banbury, Kazuhito Koishida, et al. 2025a. Winclick: Gui grounding with multimodal large language models. arXiv preprint arXiv:2503.04730. + +Zheng Hui, Xiaokai Wei, Yexi Jiang, Kevin Gao, Chen Wang, Frank Ong, Se eun Yoon, Rachit Parek, and Michelle Gong. 2025b. 
Matcha: Can multi-agent collaboration build a trustworthy conversational recommender? Preprint, arXiv:2504.20094. +Yue Jiang, Jichu Li, Yang Liu, Dingkang Yang, Feng Zhou, and Quyu Kong. 2025. Danmakutppbench: A multi-modal benchmark for temporal point process modeling and understanding. arXiv preprint arXiv:2505.18411. +Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, and Daniel Fried. 2024. Visualwebarena: Evaluating multimodal agents on realistic visual web tasks. arXiv preprint arXiv:2401.13649. +Anurendra Kumar, Keval Morabia, William Wang, Kevin Chang, and Alex Schwing. 2022. CoVA: Context-aware visual attention for webpage information extraction. In Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5), pages 80–90, Dublin, Ireland. Association for Computational Linguistics. +Hugo Laurençon, Léo Tronchon, and Victor Sanh. 2024. Unlocking the conversion of web screenshots into html code with the websight dataset. *Preprint*, arXiv:2403.09029. +Ao Li, Yuexiang Xie, Songze Li, Fugee Tsung, Bolin Ding, and Yaliang Li. 2025a. Agent-oriented planning in multi-agent systems. In The Thirteenth International Conference on Learning Representations. +Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vulic, and Furu Wei. 2025b. Imagine while reasoning in space: Multimodal visualization-of-thought. arXiv preprint arXiv:2501.07542. +Chengzu Li, Caiqi Zhang, Han Zhou, Nigel Collier, Anna Korhonen, and Ivan Vulić. 2024. TopViewRS: Vision-language models as top-view spatial reasoners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1786-1807, Miami, Florida, USA. Association for Computational Linguistics. +Lianian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. 2022. Grounded language-image pre-training. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965-10975. +Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. 2020. Mapping natural language instructions to mobile UI action sequences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8198-8210, Online. Association for Computational Linguistics. + +Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, and Dong Yu. 2024. MMC: Advancing multimodal chart understanding with large-scale instruction tuning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1287-1310, Mexico City, Mexico. Association for Computational Linguistics. + +Yuhang Liu, Pengxiang Li, Zishu Wei, Congkai Xie, Xueyu Hu, Xinchen Xu, Shengyu Zhang, Xiaotian Han, Hongxia Yang, and Fei Wu. 2025. Infiguiagent: A multimodal generalist gui agent with native reasoning and reflection. arXiv preprint arXiv:2501.04575. + +Kevin Moran, Carlos Bernal-Cárdenas, Michael Curcio, Richard Bonett, and Denys Poshyvanyk. 2018. Machine learning-based prototyping of graphical user interfaces for mobile apps. IEEE Transactions on Software Engineering, 46(2):196-221. 
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, et al. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Junyang Wang, Haiyang Xu, Haitao Jia, Xi Zhang, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, and Jitao Sang. 2024a. Mobile-agent-v2: Mobile device operation assistant with effective navigation via multi-agent collaboration. arXiv preprint arXiv:2406.01014.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024b. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi
Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, et al. 2024b. OS-Atlas: A foundation action model for generalist GUI agents. arXiv preprint arXiv:2410.23218.

Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. 2023a. Set-of-mark prompting unleashes extraordinary visual grounding in GPT-4V. Preprint, arXiv:2310.11441.

Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023b. The dawn of LMMs: Preliminary explorations with GPT-4V(ision). Preprint, arXiv:2309.17421.

Xiaofeng Zhang, Fanshuo Zeng, Yihao Quan, Zheng Hui, and Jiawei Yao. 2025. Enhancing multimodal large language models complex reasoning via similarity computation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 10203-10211.

Xiaoyi Zhang, Lilian De Greef, Amanda Swearngin, Samuel White, Kyle Murray, Lisa Yu, Qi Shan, Jeffrey Nichols, Jason Wu, Chris Fleizach, et al. 2021. Screen recognition: Creating accessibility metadata for mobile applications from pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15.

Yanzhe Zhang, Tao Yu, and Diyi Yang. 2024. Attacking vision-language computer agents via pop-ups. arXiv preprint arXiv:2411.02391.

Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. 2024. GPT-4V(ision) is a generalist web agent, if grounded. Preprint, arXiv:2401.01614.

# A Training Details

To improve WinClick's (Hui et al., 2025a) capacity for GUI understanding in Windows environments, we conducted extensive training using both full fine-tuning and parameter-efficient tuning via LoRA (Low-Rank Adaptation). This dual approach allowed us to explore trade-offs between performance and resource efficiency while maintaining compatibility with large-scale multimodal architectures.
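The low-rank update at the heart of LoRA can be stated in a few lines. The sketch below is a framework-free illustration with toy dimensions and helper names of our own choosing, not the training code itself; it shows why only the rank-$r$ factors $A$ and $B$ need gradients while the base weight $W$ stays frozen:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Compute (W + (alpha / r) * B @ A) @ x without merging the matrices.

    W is the frozen d_out x d_in weight; A (r x d_in) and B (d_out x r)
    are the only trainable parameters. With B initialized to zeros, the
    adapted layer behaves exactly like the frozen layer at step 0.
    """
    base = matvec(W, x)               # frozen path
    delta = matvec(B, matvec(A, x))   # low-rank update path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]
```

Libraries such as peft implement the same computation (plus dropout on the low-rank path, which we set to 0.1) for each adapted module; in our setup the rank was 8 for all adapted cross-modal attention layers.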
For full fine-tuning, we updated the entire model, including the vision encoder, language decoder, and cross-modal attention layers. The vision encoder was initialized from a pre-trained ViT model and fine-tuned with an initial learning rate of $2 \times 10^{-6}$. The language backbone, based on Phi-3, was initialized with a learning rate of $5 \times 10^{-6}$ and trained using a batch size of 32. A linear learning rate scheduler was applied with a warmup ratio of 0.03 to stabilize early training steps and mitigate gradient instability. The optimizer used was AdamW with weight decay set to 0.01.

For LoRA-based tuning, we injected low-rank matrices into the cross-modal attention layers and trained only these additional parameters, freezing the rest of the model. This method provided a significantly smaller memory footprint and faster training convergence while still yielding non-trivial performance improvements in grounding precision. LoRA ranks were set to 8 across all adapted modules, and dropout was applied with a probability of 0.1.

All experiments were conducted on $4 \times$ NVIDIA H100 GPUs in a distributed setting with mixed-precision training (fp16). We used DeepSpeed and HuggingFace Accelerate to handle gradient accumulation, checkpointing, and parallelization. Training convergence was achieved within 5 epochs, with early stopping based on validation click accuracy on a held-out subset of WinSpot.

# B More Training Data Construction Examples

To build a robust and diverse training corpus, our data pipeline aggregated screenshots from various sources, including real-world application states, software demos, and open benchmarks (e.g., CoVA (Kumar et al., 2022), WebSight (Laurençon et al., 2024)). After visual filtering and quality assessment using Phi3-Vision, selected images were passed through a multi-stage annotation pipeline. Figure 6 illustrates a sample of the training data.
These examples span interaction types such as:

- Single-button confirmation dialogs (e.g., "Click OK to continue")
- Multi-option menus (e.g., "Choose 'Save As' from the File menu")
- Toolbar item selection (e.g., "Click the printer icon to print the document")
- Search or input field interaction (e.g., "Type your query in the search box at the top right")

Each image contains multiple interactable regions, and both the instruction and bounding box data were validated for semantic alignment by our human annotators (see Section C).

# C Human Annotation

The annotation process follows settings similar to Hui et al. (2024b). Annotation for WinSpot involved a group of carefully selected annotators, all of whom were undergraduate, master's, or Ph.D. students proficient in GUI operations and familiar with the Windows operating system. The annotation team consisted of individuals with diverse academic backgrounds, ensuring a broad understanding of GUI interactions across different applications. Each annotator was tasked with identifying and marking interactable regions within various Windows applications, focusing on elements such as buttons, icons, menus, and other clickable UI components. Annotators were provided with a set of Windows screenshots, which they annotated using a custom tool that allowed them to create bounding boxes around each interactable element. Annotators were also required to provide corresponding descriptions of the elements, ensuring that both the visual and functional aspects of each UI component were documented. The entire annotation process was conducted in English to maintain consistency across all samples. To ensure data privacy, all screenshots were reviewed and post-processed to remove any personal information or sensitive content. The final dataset includes over 1,000 images and 5,000 instruction-click pairs, representing a comprehensive set of interactions across a variety of Windows applications.
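Concretely, one instruction-click pair produced by this process can be thought of as a small record like the following (field names are illustrative, not the actual WinSpot schema):

```python
# A hypothetical instruction-click record; the bounding box is in pixels.
annotation = {
    "image": "screenshots/save_dialog.png",   # illustrative path
    "instruction": "Click OK to continue",
    "bbox": [412, 305, 488, 332],             # [x1, y1, x2, y2]
    "element_type": "button",
}

def click_point(bbox):
    """Reduce a bounding box to the click coordinate at its center."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

A predicted click can then be scored against the annotated box, e.g., by checking whether it falls inside the box or how far it lies from the box center.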
![](images/3b5e6e69b6b800e9f6a0dce4db3921eabaca0c395cd2fd5b4587760187bdaaaa.jpg)

![](images/e9d0947cdca3636490ddd0026ef983b9062e6ea4b0ad935c86da6548f5b987c6.jpg)

![](images/9204937bc9453b4ad20052e615cff756cce88d8be344d18a54fdf51ac823492d.jpg)
Figure 6: Examples of GUI grounding data generated during training set construction. Each box is annotated with the action-relevant region and its aligned instruction.

![](images/6ba64b3ac2823fbabaecb7126e99f767a0a7076cb6b5da89ae9a5cea07e9aa8a.jpg)
# Zero-Shot Text-to-Speech for Vietnamese

Thi Vu, Linh The Nguyen, Dat Quoc Nguyen

Movian AI, Vietnam

{thivuxy, toank45sphn,
datquocnguyen}@gmail.com

# Abstract

This paper introduces PhoAudiobook, a newly curated dataset comprising 941 hours of high-quality audio for Vietnamese text-to-speech. Using PhoAudiobook, we conduct experiments on three leading zero-shot TTS models: VALL-E, VoiceCraft, and XTTS-v2. Our findings demonstrate that PhoAudiobook consistently enhances model performance across various metrics. Moreover, VALL-E and VoiceCraft exhibit superior performance in synthesizing short sentences, highlighting their robustness in handling diverse linguistic contexts. We publicly release PhoAudiobook to facilitate further research and development in Vietnamese text-to-speech.

# 1 Introduction

Text-to-speech (TTS) synthesis has witnessed significant advancements in recent years. State-of-the-art TTS systems typically use a cascaded pipeline that consists of an acoustic model and a vocoder, with mel-spectrograms serving as intermediate representations (Ren et al., 2019; Li et al., 2019; Tan et al., 2024). These advanced TTS systems can synthesize high-quality speech for single or multiple speakers (Kim et al., 2021; Liu et al., 2022).

Zero-shot TTS has emerged as a promising approach to overcome the limitations of traditional TTS systems in generalizing to unseen speakers. By leveraging techniques such as speaker adaptation and speaker encoding, zero-shot TTS aims to synthesize speech for new speakers using only a few seconds of reference audio (Arik et al., 2018; Wang et al., 2020; Cooper et al., 2020; Wu et al., 2022; Casanova et al., 2022, 2024). Recent works have explored the application of language modeling approaches to zero-shot TTS, achieving impressive results. For example, VALL-E (Wang et al., 2023) introduces a text-conditioned language model trained on discrete audio codec tokens, enabling TTS to be treated as a conditional codec language modeling task.
VoiceCraft (Peng et al., 2024) casts both sequence infilling-based speech editing and continuation-based zero-shot TTS as a left-to-right language modeling problem by rearranging audio codec tokens.

Despite advancements in zero-shot TTS, its application to low-resource languages remains challenging. These languages often lack the large-scale, high-quality datasets needed to train robust TTS models (Gutkin et al., 2016; Chen et al., 2019; Lux et al., 2022; Ngoc et al., 2023; Huang et al., 2024). Also, linguistic and phonetic differences between languages introduce additional challenges in adapting existing models to new languages. As a result, the performance of zero-shot TTS systems in low-resource languages is often limited, hindering their practical usability.

In this paper, we focus on advancing zero-shot TTS for Vietnamese. Our contributions are:

- We present PhoAudiobook, a 941-hour high-quality long-form speech dataset that overcomes the limitations of existing Vietnamese datasets, which usually contain audio samples shorter than 10 seconds. The pipeline to create this dataset can be easily adapted to other languages.
- We conduct a comprehensive experimental study to evaluate the performance of three state-of-the-art zero-shot TTS models: VALL-E, VoiceCraft, and XTTS-v2 (Casanova et al., 2024). Using a combination of objective and subjective metrics across multiple benchmark datasets, our results demonstrate that XTTS-v2 trained on PhoAudiobook outperforms its counterpart trained on an existing dataset. Additionally, VALL-E and VoiceCraft exhibit robustness in synthesizing varied input lengths.
- We publicly release PhoAudiobook at https://huggingface.co/datasets/thivux/phoaudiobook for non-commercial purposes.

![](images/df7464c398e06b2b1fce9c3d41de2185ae84aa823ef9f8e8dc88a92a7e2c7f84.jpg)
Figure 1: PhoAudiobook creation pipeline.

# 2 Dataset

# 2.1 PhoAudiobook

Figure 1 illustrates the creation process of our PhoAudiobook dataset.
First, we collect raw Vietnamese audiobook data from the publicly accessible website https://sachnoiviet.net. This raw dataset includes 23K hours of content from 2,697 audiobooks, narrated by 735 distinct speakers. Next, we use demucs to extract the vocal track, effectively removing any background music or sound effects (Défossez, 2021; Rouard et al., 2023). We then employ the multilingual Whisper-large-v3 model to generate transcriptions and corresponding timestamps for the audio data (Radford et al., 2023). The output from Whisper-large-v3 includes transcripts for short audio segments, usually a few seconds long, along with their corresponding timestamps. These segments are often aligned with natural pauses in speech. We then concatenate successive audio segments and their corresponding transcripts to create longer audio samples and transcriptions, each lasting between 10 and 20 seconds. To ensure the quality of the transcriptions, we process these merged samples using the state-of-the-art Vietnamese ASR model, PhoWhisper-large (Le et al., 2024). We then retain only the samples where the Whisper-large-v3-based transcription matches exactly with the transcription output from PhoWhisper-large. Furthermore, we tackle the challenge of multi-speaker audio: we use the wav2vec2-bartpho model to identify and filter out short audio samples containing multiple speakers, ensuring that all audio segments associated with a particular speaker are indeed spoken by that individual.

To reduce excessive silence in the audio data, we exclude samples with transcripts shorter than 25 words and trim silence from the beginning and end of each sample. Additionally, we use the six library to normalize audio volume levels, maintaining consistency and avoiding abrupt loudness changes throughout the dataset.
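The merging and cross-model verification steps above can be sketched as follows. The function names are ours, and the two ASR systems are assumed to have already produced per-segment transcripts with timestamps:

```python
def merge_segments(segments, min_s=10.0, max_s=20.0):
    """Concatenate successive ASR segments into samples of min_s to max_s seconds.

    Each segment has 'start' and 'end' times (in seconds) and a 'text'
    transcript. A running sample is emitted once it reaches min_s and is
    kept only if it has not overshot max_s.
    """
    merged, cur = [], None
    for seg in segments:
        if cur is None:
            cur = dict(seg)
        else:
            cur["end"] = seg["end"]
            cur["text"] = cur["text"] + " " + seg["text"]
        if cur["end"] - cur["start"] >= min_s:
            if cur["end"] - cur["start"] <= max_s:
                merged.append(cur)
            cur = None  # over-long samples are discarded
    return merged

def verified(whisper_text, phowhisper_text):
    """Keep a merged sample only when the two ASR transcriptions match exactly."""
    return whisper_text.strip() == phowhisper_text.strip()
```

Requiring an exact match between two independent ASR systems is a conservative filter: it discards some usable audio but leaves transcripts that both models agree on.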
Finally, we standardize the transcriptions through a text normalization step, which includes converting text to lowercase, adding appropriate punctuation, and normalizing numerical expressions into their text form (e.g., "43" becomes "forty three"). We carry out this text normalization step using a sequence-to-sequence model, which we develop by fine-tuning the pre-trained mbart-large-50 model (Liu et al., 2020) on a Vietnamese dataset consisting of unnormalized input and normalized output text pairs.

The data creation process described above results in a refined 1,400-hour audio corpus. To ensure balanced speaker representation, we limit each speaker to a maximum of 4 hours of audio. This results in a high-quality dataset comprising 941 hours of audio from 735 speakers. From the remaining $1,400 - 941 = 559$ hours of audio, we sample 0.8 hours of audio from 20 speakers to construct a "seen" speaker test set. Additionally, we split the 941 hours of audio from 735 speakers into three sets (at the speaker level): a training set containing 940 hours from 710 speakers, a validation set with 0.5 hours from 5 speakers, and an "unseen" speaker test set comprising 0.4 hours from 20 speakers who have the shortest total audio durations. Here, the 20 speakers in the "seen" speaker test set are part of the 710 speakers used for training.

We conduct a post-processing step to manually inspect each audio sample and its corresponding transcription from both the "seen" and "unseen" speaker test sets. This process ensures that all transcriptions in the PhoAudiobook test sets are correct.

# 2.2 Dataset analysis

Table 1 presents the characteristics of our dataset, PhoAudiobook, in comparison to other Vietnamese speech datasets, including VinBigData (VinBigData, 2023), VietnamCeleb (Pham et al., 2023), the VLSP 2020 ASR Challenge, BUD500 (Pham et al., 2024), and viVoice (Gia et al., 2024).
+ +Duration: PhoAudiobook, with 941 hours, is the second-largest dataset, closely following viVoice, + +
| Dataset | Duration (h) | Mean Dur. (s) | 25% Dur. (s) | 75% Dur. (s) | Domain | SI-SNR (dB) | # Speakers | Rate (wpm) | Fs (Hz) |
|---|---|---|---|---|---|---|---|---|---|
| VinBigData | 101 | 6.47 | 3.54 | 8.09 | General-purpose | 4.77 | Unknown | 229 | 16000 |
| VietnamCeleb | 187 | 7.74 | 2.84 | 9.64 | Unknown | 3.89 | Unknown | No transcripts | 16000 |
| VLSP | 243 | 4.37 | 2.43 | 5.21 | Unknown | 4.35 | Unknown | 242 | 16000 |
| BUD500 | 462 | 2.56 | 2.11 | 2.94 | General-purpose | 4.22 | Unknown | 224 | 16000 |
| viVoice | 1016 | 4.12 | 1.96 | 5.55 | General-purpose | 4.81 | Unknown | 243 | 24000 |
| PhoAudiobook | 941 | 11.66 | 10.63 | 12.18 | Audiobooks | 4.91 | 735 | 201 | 16000 |
Table 1: Characteristics of PhoAudiobook and other speech datasets for Vietnamese.

![](images/0965d1ff88041978ed276fef877ac1302bc86699289f6a50124e8d2775ca751f.jpg)
Figure 2: Duration distributions of datasets. Audio samples are capped at 40 seconds for visualization purposes.

which has 1,016 hours. The other datasets are considerably smaller, ranging from 101 to 462 hours. Figure 2 shows that previous datasets primarily consist of audio segments shorter than 10 seconds. PhoAudiobook addresses this limitation by providing audio samples ranging from 10 to 20 seconds.

Domain: PhoAudiobook is derived from audiobooks, typically recorded with professional equipment in controlled environments, ensuring high-quality audio. In contrast, other datasets are general-purpose (e.g., news, YouTube videos, conversations) and may include audio recorded on consumer devices in uncontrolled settings, often with background noise. However, general-purpose datasets have the advantage of covering diverse topics and speaking styles.

Signal-to-Noise Ratio (SI-SNR): Using an SI-SNR estimator from the speechbrain toolkit (Ravanelli et al., 2024), we calculated SI-SNR across 1,000 randomly sampled audio clips from each dataset. PhoAudiobook achieves the highest SI-SNR, surpassing all other datasets, including viVoice.

Speaker Information: PhoAudiobook is the only dataset with explicit speaker identities and, therefore, a known number of speakers (735).

Speaking Rate (wpm): Among the datasets with transcripts, PhoAudiobook has the lowest speaking rate in words per minute. This reflects the nature of the dataset, which features long-form audio where speakers naturally pause and rest.

Sampling Rate: All datasets except viVoice use a standard sampling rate of 16000 Hz, which is a widely used sampling rate for speech data.
PhoAudiobook is comparable in total duration to viVoice; however, it offers several advantages:

- Text Normalization: viVoice lacks text normalization, which limits its suitability for certain TTS models. In contrast, PhoAudiobook offers normalized transcripts, enhancing compatibility with these models.
- Audio Quality: The unnormalized audio waveforms in viVoice may cause quality issues like distortion and inconsistent volume. In contrast, PhoAudiobook ensures audio waveforms are normalized for consistent quality.
- Speaker ID: viVoice does not provide speaker IDs for individual audio samples, but uses YouTube channel names as a proxy. This approach can be problematic when a YouTube channel features multiple speakers, limiting the use of this dataset to models that do not require speaker identification. In contrast, PhoAudiobook provides distinct speaker IDs for each audio sample, ensuring its broader applicability for speaker-dependent tasks.

# 3 Empirical approach

# 3.1 Models & Training data augmentation

We conduct experiments using three state-of-the-art zero-shot TTS models: VALL-E (Wang et al., 2023), VoiceCraft (Peng et al., 2024), and XTTS-v2 (Casanova et al., 2024). (i) VALL-E, a pioneering language model-based approach, treats text-to-speech (TTS) as a conditional language modeling task. It utilizes discrete acoustic tokens derived from a neural audio codec and leverages massive datasets to achieve impressive zero-shot, in-context learning capabilities. (ii) VoiceCraft, a token-infilling neural codec language model, excels in both speech editing and zero-shot TTS. It employs a Transformer decoder architecture with a novel token rearrangement procedure to generate high-quality speech. (iii) XTTS-v2 builds upon the Tortoise model (Betker, 2023), incorporating modifications for multilingual training and enhanced voice cloning. It excels in synthesizing speech for numerous languages, including low-resource ones.
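The conditional-language-modeling view shared by VALL-E and VoiceCraft boils down to continuing a sequence of [text tokens; prompt audio tokens] with new discrete acoustic tokens. A toy greedy decoding loop, with a stub standing in for the trained codec language model, looks like:

```python
def continue_with_audio_tokens(next_token_fn, text_tokens, prompt_audio_tokens,
                               eos=0, max_new=256):
    """Autoregressively generate acoustic tokens conditioned on text + prompt.

    next_token_fn is a placeholder for a trained codec language model: it
    maps the running token sequence to the next discrete acoustic token,
    which a neural codec decoder would later turn into a waveform.
    """
    seq = list(text_tokens) + list(prompt_audio_tokens)
    generated = []
    for _ in range(max_new):
        token = next_token_fn(seq)
        if token == eos:
            break
        generated.append(token)
        seq.append(token)
    return generated
```

VoiceCraft's token rearrangement changes how the sequence is laid out (so that infilling also becomes left-to-right continuation), but the decoding loop itself has this shape.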
+ +To enhance data distribution and ensure our TTS models effectively handle shorter input text, we augment the PhoAudiobook training set with shorter audio clips. Specifically, we treat the PhoAudiobook training set, which consists of 940 hours, as a new raw dataset and apply our dataset creation process as detailed in Section 2.1. However, we omit (i) the step of merging short segments into longer ones and (ii) the step of excluding short samples. This augmentation phase results in an additional 554 hours of short audio, bringing the total to $940 + 554 = 1494$ hours of audio for training. See implementation details on how we train VALL-E, VoiceCraft, and XTTS-v2 on this 1494-hour training set in Appendix A. + +# 3.2 Evaluation setup + +Baseline: The baseline model is viXTTS (Gia et al., 2024), which is fine-tuned from the pretrained XTTS-v2 on the viVoice dataset. + +Test sets: In addition to using our PhoAudiobook "seen" and "unseen" speaker test sets, we also compare our models with viXTTS on the VIVOS test set (Luong and Vu, 2016), which contains 0.75 hours of short audio data from 19 speakers. Furthermore, we randomly select 8 speakers, totaling 0.5 hours of audio, from the viVoice dataset for testing. It is important to note that viVoice is available only as a single training dataset, without a predefined training/validation/test split. Consequently, this 0.5-hour viVoice audio set is in fact used for training the baseline viXTTS. + +Metrics: To compare our models and the baseline, we use objective metrics including Word Error Rate (WER), Mel-Cepstral Distortion (MCD) and + +F0 Root Mean Square Error $(\mathrm{RMSE}_{F0})$ , as well as subjective metrics Mean Opinion Score (MOS) and Similarity MOS (SMOS). 
+ +Objective metrics provide quantifiable measures of specific aspects of synthesized speech: + +- Word Error Rate (WER): This metric assesses the intelligibility of synthesized speech by calculating the edit distance between the transcription of the synthesized speech and the ground truth transcription. Specifically, it counts the number of insertions, deletions, and substitutions needed to turn one into the other. A lower WER indicates higher intelligibility. Here, we employ the ASR model PhoWhisper-large (Le et al., 2024) to generate the transcription of the synthesized speech. +- Mel-Cepstral Distortion (MCD): This metric quantifies the spectral difference between synthesized speech and the ground truth audio. A lower MCD value indicates higher spectral similarity and better quality. We use the pymcd package to compute the MCD. +- F0 Root Mean Square Error $(\mathbf{RMSE}_{F0})$ : This metric measures the difference in fundamental frequency (F0) between synthesized speech and the ground truth audio. A lower $\mathrm{RMSE}_{F0}$ suggests a better matching of intonation and prosody. We use the Amphion (Li et al., 2025) toolkit to compute this value. +Subjective Metrics rely on human judgments to evaluate the overall quality and naturalness of synthesized speech. +- Mean Opinion Score (MOS): This metric assesses the overall quality of synthesized speech, taking into account factors such as naturalness, clarity, and listening effort. Human listeners rate the speech on a scale from 1 (very poor) to 5 (excellent). +- Similarity MOS (SMOS): This metric evaluates the perceived speaker similarity between the speech prompt and the generated speech. Listeners rate the similarity on a scale from 1 (completely different) to 5 (identical). + +To conduct the subjective evaluation, we first randomly sample one audio file from each speaker in the test set. We then hire 10 native speakers + +
| Metric | Model | PAB-S | PAB-U | VIVOS | viVoice |
|---|---|---|---|---|---|
| WER ↓ | Original | 0.88 | 0.83 | 5.14 | 4.97 |
| | VALL-E$_{\text{PAB}}$ | 24.96 | 12.90 | 12.63 | 13.58 |
| | VoiceCraft$_{\text{PAB}}$ | 7.53 | 15.14 | 13.53 | 21.70 |
| | XTTS-v2$_{\text{PAB}}$ | 4.16 | 4.31 | 37.81 | 8.32 |
| | viXTTS | 4.23 | 5.17 | 37.81 | 12.54 |
| MCD ↓ | VALL-E$_{\text{PAB}}$ | 7.50 | 8.28 | 10.13 | 8.70 |
| | VoiceCraft$_{\text{PAB}}$ | 6.69 | 7.98 | 10.27 | 9.15 |
| | XTTS-v2$_{\text{PAB}}$ | 6.30 | 7.81 | 9.85 | 8.34 |
| | viXTTS | 7.47 | 8.48 | 10.54 | 8.71 |
| RMSE$_{F0}$ ↓ | VALL-E$_{\text{PAB}}$ | 226.55 | 246.88 | 267.80 | 223.56 |
| | VoiceCraft$_{\text{PAB}}$ | 214.66 | 247.54 | 259.46 | 233.68 |
| | XTTS-v2$_{\text{PAB}}$ | 216.44 | 242.51 | 290.77 | 228.81 |
| | viXTTS | 249.54 | 271.70 | 338.59 | 238.05 |
| MOS ↑ | Original | 4.61 ± 0.17 | 4.63 ± 0.16 | 4.41 ± 0.14 | 4.66 ± 0.20 |
| | VALL-E$_{\text{PAB}}$ | 3.96 ± 0.29 | 4.04 ± 0.28 | 3.44 ± 0.21 | 3.75 ± 0.38 |
| | VoiceCraft$_{\text{PAB}}$ | 4.16 ± 0.21 | 3.75 ± 0.29 | 3.85 ± 0.22 | 3.98 ± 0.22 |
| | XTTS-v2$_{\text{PAB}}$ | 4.20 ± 0.20 | 3.89 ± 0.21 | 2.79 ± 0.21 | 3.98 ± 0.29 |
| | viXTTS | 4.05 ± 0.23 | 3.85 ± 0.25 | 2.37 ± 0.24 | 3.48 ± 0.44 |
| SMOS ↑ | Original | 4.23 ± 0.23 | 3.90 ± 0.32 | 3.87 ± 0.24 | 3.34 ± 0.47 |
| | VALL-E$_{\text{PAB}}$ | 3.77 ± 0.24 | 3.46 ± 0.29 | 3.35 ± 0.25 | 3.20 ± 0.38 |
| | VoiceCraft$_{\text{PAB}}$ | 3.64 ± 0.30 | 3.32 ± 0.35 | 3.25 ± 0.25 | 3.41 ± 0.36 |
| | XTTS-v2$_{\text{PAB}}$ | 3.55 ± 0.27 | 3.56 ± 0.29 | 3.03 ± 0.23 | 3.39 ± 0.41 |
| | viXTTS | 2.88 ± 0.28 | 2.63 ± 0.32 | 2.48 ± 0.23 | 3.11 ± 0.43 |
Table 2: Test results of different TTS models. Our models, VALL-E$_{\text{PAB}}$, VoiceCraft$_{\text{PAB}}$ and XTTS-v2$_{\text{PAB}}$, are obtained by training VALL-E, VoiceCraft, and XTTS-v2 on our PhoAudiobook training data, respectively. "PAB-S" and "PAB-U" refer to the PhoAudiobook "seen" and "unseen" speaker test sets, respectively. The viXTTS model is fine-tuned from the pretrained XTTS-v2 using the entire viVoice dataset.

to rate the outputs for the in-distribution test sets (PAB-S, PAB-U) and 20 native speakers for the out-of-distribution test sets (VIVOS, viVoice), with all ratings on a scale from 1 to 5, using 0.5-point increments. To ensure fairness, we shuffle and anonymize the model names so that each listener is unaware of which model produces each sample.

# 4 Results

Table 2 presents the results obtained for our trained models and the baseline. It is clear that our XTTS-v2$_{\text{PAB}}$ consistently outperforms viXTTS across all metrics and test sets. For instance, on the viVoice set, XTTS-v2$_{\text{PAB}}$ achieves the best WER of 8.32, which is substantially lower than the 12.54 WER of viXTTS, even though viXTTS is tested on its own training data. Additionally, XTTS-v2$_{\text{PAB}}$ also produces substantially higher SMOS scores and lower $\mathrm{RMSE}_{F0}$ values than viXTTS on all test sets, indicating that the speech it generates more closely resembles the reference speaker. These results suggest that XTTS-v2$_{\text{PAB}}$ outputs more intelligible and natural-sounding speech that better captures the nuances of the target speaker's voice, for both long (PhoAudiobook and viVoice) and shorter (VIVOS) text inputs.

We observe a variation in the performance of different models across test sets. While VoiceCraft$_{\text{PAB}}$ and VALL-E$_{\text{PAB}}$ are less competent than XTTS-v2$_{\text{PAB}}$ on the test sets PAB-S, PAB-U and viVoice, they outperform XTTS-v2$_{\text{PAB}}$ on the VIVOS test set.
Specifically, for the PAB-S, PAB-U, and viVoice test sets, VALL-E$_{\text{PAB}}$ and VoiceCraft$_{\text{PAB}}$ underperform compared to XTTS-v2$_{\text{PAB}}$ in terms of WER, while achieving comparable results on other metrics such as MCD, $\mathrm{RMSE}_{F0}$, MOS, and SMOS. However, on the VIVOS test set, XTTS-v2$_{\text{PAB}}$ and viXTTS perform significantly worse than VALL-E$_{\text{PAB}}$ and VoiceCraft$_{\text{PAB}}$ across all evaluation metrics. This indicates that VALL-E$_{\text{PAB}}$ and VoiceCraft$_{\text{PAB}}$ are more adept at handling short sentences, which are characteristic of the VIVOS test set. Upon manual inspection, we found that for short text inputs, the XTTS-v2-based models, XTTS-v2$_{\text{PAB}}$ and viXTTS, often generate redundant or rambling speech at the end of the output. This suggests a potential architectural issue within the XTTS-v2 model itself, rather than a data-related problem, as both the viVoice dataset and the "augmented" PhoAudiobook training set contain short audio samples.

# 5 Conclusion

We have introduced PhoAudiobook, a comprehensive 941-hour high-quality dataset designed for Vietnamese text-to-speech (TTS) synthesis. Using this dataset, we conducted experiments with three leading zero-shot TTS models: VALL-E, VoiceCraft, and XTTS-v2. Our findings show that XTTS-v2 trained on PhoAudiobook consistently outperforms its counterpart trained on the viVoice dataset across all metrics, highlighting the superiority of PhoAudiobook in enhancing model performance. Additionally, VALL-E and VoiceCraft demonstrate exceptional capability in handling short sentences.

# Limitations

While models trained on PhoAudiobook show high performance on purely Vietnamese datasets, we have not evaluated their performance in code-switching scenarios where the input text includes both Vietnamese and English.
Future research should investigate the models' ability to handle multilingual inputs to enhance their applicability in more diverse linguistic contexts.

# Acknowledgments

This work was completed while all authors were at Movian AI, Vietnam. All datasets and models were downloaded, trained, and evaluated using Movian AI's resources.

We would like to express our sincere gratitude to Mr. Nguyen Nguyen for providing a toolkit that facilitated the process of downloading raw data from https://sachnoiviet.net.

# References

Sercan Arik, Jitong Chen, Kainan Peng, Wei Ping, and Yanqi Zhou. 2018. Neural Voice Cloning with a Few Samples. In Proceedings of NeurIPS.

James Betker. 2023. Better speech synthesis through scaling.

Edresson Casanova, Kelly Davis, Eren Gölge, Görkem Göknar, Iulian Gulea, Logan Hart, Aya Aljafari, Joshua Meyer, Reuben Morais, Samuel Olayemi, and Julian Weber. 2024. XTTS: a massively multilingual zero-shot text-to-speech model. In Proceedings of INTERSPEECH.

Edresson Casanova, Julian Weber, Christopher D Shulby, Arnaldo Candido Junior, Eren Gölge, and Moacir A Ponti. 2022. YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone. In Proceedings of ICML.

Yuan-Jui Chen, Tao Tu, Cheng-chieh Yeh, and Hung-yi Lee. 2019. End-to-End Text-to-Speech for Low-Resource Languages by Cross-Lingual Transfer Learning. In Proceedings of INTERSPEECH.

Erica Cooper, Cheng-I Lai, Yusuke Yasuda, Fuming Fang, Xin Wang, Nanxin Chen, and Junichi Yamagishi. 2020. Zero-Shot Multi-Speaker Text-To-Speech with State-Of-The-Art Neural Speaker Embeddings. In Proceedings of ICASSP.

Alexandre Défossez. 2021. Hybrid Spectrogram and Waveform Source Separation. In Proceedings of the ISMIR 2021 Workshop on Music Source Separation.

Thinh Le Phuoc Gia, Tuan Pham Minh, Hung Nguyen Quoc, Trung Nguyen Quoc, and Vinh Truong Hoang. 2024. viVoice: Enabling Vietnamese Multi-Speaker Speech Synthesis.
Alexander Gutkin, Linne Ha, Martin Jansche, Knot Pipatsrisawat, and Richard Sproat. 2016. TTS for Low Resource Languages: A Bangla Synthesizer. In Proceedings of LREC.

Rongjie Huang, Chunlei Zhang, Yongqi Wang, Dongchao Yang, Jinchuan Tian, Zhenhui Ye, Luping Liu, Zehan Wang, Ziyue Jiang, Xuankai Chang, Jiatong Shi, Chao Weng, Zhou Zhao, and Dong Yu. 2024. Make-A-Voice: Revisiting Voice Large Language Models as Scalable Multilingual and Multitask Learners. In Proceedings of ACL.

Jaehyeon Kim, Jungil Kong, and Juhee Son. 2021. Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech. In Proceedings of ICML.

Thanh-Thien Le, Linh The Nguyen, and Dat Quoc Nguyen. 2024. PhoWhisper: Automatic Speech Recognition for Vietnamese. In Proceedings of the ICLR 2024 Tiny Papers track.

Jiaqi Li, Xueyao Zhang, Yuancheng Wang, Haorui He, Chaoren Wang, Li Wang, Huan Liao, Junyi Ao, Zeyu Xie, Yiqiao Huang, Junan Zhang, and Zhizheng Wu. 2025. Overview of the Amphion Toolkit (v0.2). arXiv preprint, arXiv:2501.15442.

Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with transformer network. In Proceedings of AAAI.

Yanqing Liu, Ruiqing Xue, Lei He, Xu Tan, and Sheng Zhao. 2022. DelightfulTTS 2: End-to-End Speech Synthesis with Adversarial Vector-Quantized AutoEncoders. In Proceedings of INTERSPEECH.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual Denoising Pre-training for Neural Machine Translation. Transactions of the Association for Computational Linguistics, 8:726-742.

Hieu-Thi Luong and Hai-Quan Vu. 2016. A non-expert Kaldi recipe for Vietnamese Speech Recognition System. In Proceedings of WLSI/OIAF4HLT, pages 51-55.

Florian Lux, Julia Koch, and Ngoc Thang Vu. 2022. Low-Resource Multilingual and Zero-Shot Multi-speaker TTS. In Proceedings of AACL-IJCNLP.
Phuong Pham Ngoc, Chung Tran Quang, and Mai Luong Chi. 2023. ADAPT-TTS: High-Quality Zero-Shot Multi-Speaker Text-to-Speech Adaptive-Based for Vietnamese. Journal of Computer Science and Cybernetics, 39(2):159-173.

Puyuan Peng, Po-Yao Huang, Shang-Wen Li, Abdelrahman Mohamed, and David Harwath. 2024. VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild. In Proceedings of ACL.

Anh Pham, Khanh Linh Tran, Linh Nguyen, Thanh Duy Cao, Phuc Phan, and Duong A. Nguyen. 2024. Bud500: A Comprehensive Vietnamese ASR Dataset.

Viet Thanh Pham, Xuan Thai Hoa Nguyen, Vu Hoang, and Thi Thu Trang Nguyen. 2023. Vietnam-Celeb: a large-scale dataset for Vietnamese speaker recognition. In Proceedings of INTERSPEECH.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In Proceedings of ICML.

Mirco Ravanelli, Titouan Parcollet, Adel Moumen, Sylvain de Langen, Cem Subakan, Peter Plantinga, Yingzhi Wang, Pooneh Mousavi, Luca Della Libera, Artem Plounikov, Francesco Paissan, Davide Borra, Salah Zaiem, Zeyu Zhao, Shucong Zhang, Georgios Karakasidis, Sung-Lin Yeh, Pierre Champion, Aku Rouhe, Rudolf Braun, Florian Mai, Juan Zuluaga-Gomez, Seyed Mahed Mousavi, Andreas Nautsch, Ha Nguyen, Xuechen Liu, Sangeet Sagar, Jarod Duret, Salima Mdhaffar, Gaëlle Laperrière, Mickael Rouvier, Renato De Mori, and Yannick Esteve. 2024. Open-Source Conversational AI with SpeechBrain 1.0. Journal of Machine Learning Research, 25(333).

Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, Robust and Controllable Text to Speech. In Proceedings of NeurIPS.

Simon Rouard, Francisco Massa, and Alexandre Défossez. 2023. Hybrid Transformers for Music Source Separation. In Proceedings of ICASSP.
Xu Tan, Jiawei Chen, Haohe Liu, Jian Cong, Chen Zhang, Yanqing Liu, Xi Wang, Yichong Leng, Yuanhao Yi, Lei He, Sheng Zhao, Tao Qin, Frank Soong, and Tie-Yan Liu. 2024. NaturalSpeech: End-to-End Text-to-Speech Synthesis With Human-Level Quality. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(6):4234-4245.

VinBigData. 2023. VinBigData Shares 100-Hour Data for the Community.

Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers. arXiv preprint, arXiv:2301.02111.

Tao Wang, Jianhua Tao, Ruibo Fu, Jiangyan Yi, Zhengqi Wen, and Rongxiu Zhong. 2020. Spoken Content and Voice Factorization for Few-Shot Speaker Adaptation. In Proceedings of INTERSPEECH.

Yihan Wu, Xu Tan, Bohan Li, Lei He, Sheng Zhao, Ruihua Song, Tao Qin, and Tie-Yan Liu. 2022. AdaSpeech 4: Adaptive Text to Speech in Zero-Shot Scenarios. In Proceedings of INTERSPEECH.

# A Implementation details

For the VALL-E implementation, the first phase is the data processing stage. We fine-tune a model based on the Vietnamese Wav2Vec 2.0 large model7 for the Vietnamese dialect recognition task using our in-house data. The model achieves an accuracy of approximately $95\%$ on the dialect recognition task. We then apply this model to PhoAudiobook. To accurately determine the regional dialect for each speaker while reducing inference time, we sample about 20 audio clips per speaker and feed them into the model. We assign each speaker the dialect of the region predicted for the majority of these clips. Because the VALL-E model is trained at the phoneme level, we use the "phonemizer" package to convert the text into phonemes based on the dialect of each speaker.8 For the audio, we use a neural audio codec encoder to compress the raw waveform into discrete tokens. The second phase is model training.
We employ 12 Transformer-decoder layers, each with 1024 hidden units and 16 attention heads. We use a batch size corresponding to a maximum of 40 seconds of audio, a base learning rate of 0.05, and 4 gradient accumulation steps. Our model is trained on 8 A100-40GB GPUs. Our implementation is based on a customized GitHub repository that reproduces the idea from the VALL-E paper.9 We modify this repository for our specific language and data.

Since VoiceCraft takes phoneme representations as input, we first convert our data to phonemes using the "phonemizer" package. We then append the derived Vietnamese phonemes to the existing English vocabulary and expand the text embedding layer to accommodate them. The 830M_TTSEnhanced checkpoint is a public VoiceCraft model fine-tuned with a text-to-speech objective and serves as our starting point. Following the authors' implementation,10 we fine-tune this model using the AdamW optimizer with a learning rate of $1e^{-5}$ and a batch size of 25,000 tokens, which corresponds to approximately 8.3 minutes of audio. We train the model on 4 A100-40GB GPUs for 16 epochs.

XTTS-v2 employs BPE for text encoding. To adapt the training to Vietnamese data, we use the same Vietnamese token list employed by Gia et al. (2024). We follow the training recipes provided in the Coqui TTS repository and fine-tune the public XTTS-v2 checkpoint trained on 16 languages. We extend the character and audio length limits to accommodate audio segments up to 20 seconds in duration in the training data. We use the AdamW optimizer with a learning rate of $5e^{-6}$ and a batch size of 4, and fine-tune the model on a single A100-40GB GPU for 18 epochs.

For all three models, we select the checkpoint that obtains the best loss on the PhoAudiobook validation set.
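As a back-of-envelope check, the 25,000-token VoiceCraft batch size maps to about 8.3 minutes of audio if one assumes a codec frame rate of roughly 50 tokens per second (as in EnCodec-style neural codecs; the frame rate is our assumption, not stated above):

```python
# Convert the VoiceCraft token budget per batch into minutes of audio.
# FRAMES_PER_SECOND is an assumed codec frame rate, not from the paper.
TOKENS_PER_BATCH = 25_000
FRAMES_PER_SECOND = 50

seconds = TOKENS_PER_BATCH / FRAMES_PER_SECOND  # 500 seconds
minutes = seconds / 60
print(f"{minutes:.1f} minutes per batch")  # → 8.3 minutes per batch
```

This reproduces the "approximately 8.3 minutes" figure quoted in the text.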
# 2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification

Jingwei Zhang$^{1*}$, Anh Tien Nguyen$^{2*}$, Xi Han$^{1*}$, Vincent Quoc-Huy Trinh$^{5}$, Hong Qin$^{1}$, Dimitris Samaras$^{1}$, Mahdi S. Hosseini$^{3,4}$

$^{1}$ Stony Brook University, Stony Brook, NY, USA $^{2}$ Korea University, Seoul, South Korea $^{3}$ Concordia University, Montreal, Canada $^{4}$ Mila-Quebec AI Institute, Montreal, Canada $^{5}$ University of Montreal Hospital Center, Montreal, Canada

# Abstract

Efficiently modeling large 2D contexts is essential for various fields including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing. Transformer-based models offer high parallelism but face challenges due to their quadratic complexity for handling long sequences. Recently, Mamba introduced a selective State Space Model (SSM) with linear complexity and high parallelism, enabling effective and efficient modeling of wide context in 1D sequences. However, extending Mamba to vision tasks, which inherently involve 2D structures, results in spatial discrepancies due to the limitations of 1D sequence processing. On the other hand, current 2D SSMs inherently model 2D structures, but they suffer from prohibitively slow computation due to the lack of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D selective SSM framework that incorporates the 2D spatial structure of images into Mamba, with a highly optimized hardware-aware operator, achieving both spatial continuity and computational efficiency. We validate the versatility of our approach on both WSIs and natural images.
Extensive experiments on 10 public datasets for WSI classification and survival analysis show that 2DMamba improves performance by up to $2.48\%$ in AUC, $3.11\%$ in F1 score, $2.47\%$ in accuracy, and $5.52\%$ in C-index. Additionally, integrating our method with VMamba for natural imaging yields 0.5 to 0.7 improvements in mIoU on the ADE20k semantic segmentation dataset, and a $0.2\%$ accuracy improvement on the ImageNet-1K classification dataset. Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba.

# 1. Introduction

Efficient understanding of large contexts over a 2D visual domain is crucial across fields such as medical imaging

![](images/0745d853d9d2c53b0ccd3716b310f1b084a2e42f6ac90ccbc39afdf31b8ab1a4.jpg)
Figure 1. Left: Conventional MIL bagging of patches adopts no spatial context. Middle: 1D Mamba-based methods flatten a WSI into a 1D sequence and lose the 2D structure. The adjacent blue and orange patches are far apart in the sequence. We call this "spatial discrepancy". Right: 2DMamba processes a WSI in a 2D manner, preserving the 2D structure. The blue and orange patches remain adjacent in the sequence. We call this "spatial continuity".

![](images/f9e8b679279eae15ba340e9eda181b9ceec015107c4e73e48228919e718efc69.jpg)

![](images/3ae6a07d2ccefcbbe76354ce18844afd6109c4bfcd720d909598b4d8acea076a.jpg)

and remote sensing [7, 8, 18]. While recurrent neural networks (RNNs) [38] can model wide context in long sequences, their sequential nature limits parallelism, making them unable to fully utilize GPUs. As a remedy, Transformers [12, 43], possessing a high capacity for parallelism, became the mainstream for modeling long sequences, albeit with quadratic complexity. As a solution, Mamba [10, 15], benefiting from both linear-time complexity and parallelism, has emerged as a promising avenue. Mamba is a State Space Model (SSM), a mathematical framework used in control theory to capture dynamic interactions between state variables [14, 22, 37].
It introduces a selective mechanism that enhances the flexibility of SSMs, allowing them to capture essential information and ignore irrelevant context.

Mamba was proposed for language modeling, processing only 1D sequences, and was later extended to the vision domain [26, 49]. Due to the 2D nature of vision tasks, Mamba-based approaches for natural images have attempted to incorporate 2D image structures by adopting various formulations for flattening 2D images into 1D sequences or by scanning images in multiple directions simultaneously [26, 36, 45, 49]. However, these methods all flatten 2D images into 1D sequences, which inevitably leads to the loss of spatial structure in at least one direction. This limitation persists regardless of the specific flattening strategy employed, resulting in suboptimal performance. We refer to this issue as "spatial discrepancy", illustrated in Fig. 1.

One alternative solution is 2D SSMs, which maintain the spatial continuity of 2D structures [2, 13]. However, unlike the highly parallel Mamba architecture, achieving a parallel implementation for these methods remains a big challenge. As a consequence, similar to traditional RNNs, these methods suffer from very slow computation, making them almost impractical. Moreover, they lack Mamba's selective mechanism, resulting in suboptimal performance.

Beyond applications in general vision tasks, Mamba shows great potential in computational pathology, particularly for the classification of Giga-pixel Whole Slide Images (WSIs), known as the gold standard for cancer diagnosis [19, 31-33, 39, 51]. WSIs are high-resolution images of tissue samples, often reaching up to $100,000 \times 100,000$ pixels at $40\mathrm{x}$ magnification, making them extremely large and rich in spatial detail.
Due to their enormous size, WSIs are typically analyzed in a Multiple Instance Learning (MIL) manner: conventional bag-based MIL methods convert a WSI into a "bag" of instances (patches) that are usually aggregated independently, neglecting the spatial relationships among patches [21, 29, 48]. In contrast, Mamba-based methods treat the WSI as a sequence of patches [13, 46], enabling more effective information aggregation and potentially enhancing diagnostic insights. However, they still flatten 2D images into 1D sequences, and spatial discrepancies persist, as illustrated in Fig. 1. Given that cells interact with each other in a coordinated manner across all directions, scanning in multiple directions [26, 40] does not accurately model the complexity of cell-to-cell interactions.

We propose a novel framework, 2DMamba, to overcome the limitations posed by the 1D nature of Mamba and the sequential nature of 2D SSMs. In summary,

- We propose a 2D selective State Space Model architecture that directly scans a 2D image without first flattening it into a 1D sequence. It maintains the 2D structure of images, which we call "spatial continuity" (Fig. 1).
- We propose a novel hardware-aware 2D selective scan operator to extend the 1D Mamba parallelism into 2D.
- We validate the versatility of our architecture by implementing it on two very different domains, Giga-pixel WSIs for MIL aggregation and $224 \times 224$ natural images.

To the best of our knowledge, 2DMamba is the first intrinsic 2D Mamba method with an efficient parallel algorithm. Extensive experiments on 10 public datasets for WSI classification and survival analysis show that our method achieves a relative improvement of up to $2.48\%$ better AUC, $3.11\%$ better F1, $2.47\%$ better Accuracy, and $5.52\%$ better C-index. We also integrate our scanning approach into the SOTA method, VMamba [26]. We outperform the SOTA method by 0.5 to 0.7 in mIoU on the ADE20k semantic segmentation dataset and exceed it by $0.2\%$ in accuracy on the ImageNet-1K classification dataset.

# 2. Related work

State Space Model (SSM). SSM [23] is an effective sequence model that represents systems evolving over time by defining hidden states and their transitions, which makes it particularly useful for capturing dynamic temporal behavior in sequential data. Gu et al. [17] unified RNNs, temporal convolutions, and neural differential equations with a linear state-space layer and demonstrated the potential of SSM-based models with the HiPPO initialization. S4 [16] proposed to normalize the parameter matrices into a diagonal structure. 2D-SSM [2] adopted Roesser's 2D-SSM recursion [24] and applied it to 2D images. However, prior to Mamba, all these SSM methods suffered from slow training speed, as the sequential dependency of states makes an efficient parallel algorithm very difficult.

Mamba. To accelerate SSM methods, Mamba [15] incorporated a selective mechanism that makes the model parameters input-dependent and eliminates long dependencies by forgetting less relevant states. It also introduced a hardware-aware algorithm that drastically accelerates state computation. It was originally applied to language tasks, and Vim [49] introduced a Vision Mamba block that uses two independent selective SSMs for bidirectional aggregation of information in the vision domain. PlainMamba [45] used a 4-directional selective scan and adopted a more spatially continuous scan path. Similarly, VMamba [26] and GroupMamba [40] also utilized this 4-directional scan in a hierarchical network and optimized the network structure. However, the current formulations of these Mamba-based models are still limited to 1D.

Application of MIL in WSI classification. MIL methods are the mainstream for WSI classification. They aggregate embedded features from a WSI into a slide-level representation.
AB-MIL [21] introduced attention-based aggregation, where the attention values are learned by a neural network. Based on that, CLAM [29] proposed a multi-branch pooling mechanism to improve the performance. DSMIL [25] employed multi-scale patch features in a dual-stream architecture. TransMIL [41] introduced multi-head self-attention layers to capture both morphological and spatial relationships and used Nyström attention [44] to alleviate the quadratic complexity of self-attention. DTFD-MIL [47] introduced a double-tier MIL framework by incorporating pseudo-bags. Recently, S4-MIL [13] and MambaMIL [46] used Mamba to better capture the information in long patch sequences. However, these works still fail to fully utilize the 2D spatial information of a WSI.

![](images/86477c037e121ae6a88f13c5bc4fc3fab896e3f3cadeb451d06a7681e57807c7.jpg)
Figure 2. Left: The overall architecture of 2DMambaMIL for WSI representation. An input WSI is first tiled into patches, and these patches are embedded by a feature extractor into a 2D feature map. Non-tissue regions are padded with the learnable token to maintain the 2D spatial relationships. The 2D feature map is then fed to $U$ layers of 2DMamba blocks, where the key difference, compared with the vanilla Mamba block, is our 2D selective scan module. Right: Our 2D selective scan algorithm. It performs a parallel horizontal scan and a parallel vertical scan for each state dimension $d$ independently. Parameter $C$ then aggregates the $N$ state dimensions into a single-dimension output $y$.

# 3. Method

We present our 2DMamba designed for effectiveness and efficiency, and an associated framework for WSI representation: 2DMambaMIL.

# 3.1. SSM in Mamba and 1D selective scan

We revisit SSM, a mathematical model used to capture the behavior of dynamic systems.
SSMs are designed as function-to-function maps for continuous systems; after discretization, they become sequence-to-sequence models:

$$
h_{t}^{d} = \bar{A}^{d} h_{t-1}^{d} + \bar{B}^{d} x_{t} \tag{1}
$$

$$
y_{t} = C h_{t} = \sum_{d=1}^{N} C^{d} h_{t}^{d}, \tag{2}
$$

where $h_t^d$ is the latent state at time $t$, $y_t$ is the output, and $d \in \{1, 2, \dots, N\}$ is the state dimension. The parameters $\bar{A}^d$ and $\bar{B}^d$ are time-invariant, making them non-adaptive to the input. This design limits the context-aware ability of SSMs to handle long sequence inputs.

The vanilla Mamba block [15] introduces a selective mechanism to allow the SSM to dynamically adapt to the input context. This aggregates important input into the hidden state while unimportant input can be ignored. Mathematically, the parameters are formulated as functions of the input $x_{t}$:

$$
\bar{A}_{t}^{d} = \exp\left(\Delta_{t} A^{d}\right), \quad \bar{B}_{t}^{d} = \Delta_{t} B^{d}(x_{t}), \tag{3}
$$

$$
C_{t}^{d} = C^{d}(x_{t}), \qquad \Delta_{t} = \mathrm{softplus}(\Delta(x_{t})),
$$

where $\Delta$, $B^d$, and $C^d$ are learnable linear functions of $x_t$, and $\Delta_t$ represents the time step of the discretization. The selective mechanism in the Mamba block is commonly referred to as a selective scan. To better distinguish it from our 2D method, we refer to this scan as the 1D selective scan, due to its 1D scanning process.

# 3.2. Architecture of 2DMambaMIL

The overall architecture of 2DMambaMIL is illustrated in Fig. 2. The model includes $U$ layers of 2DMamba blocks and an aggregator. The first component is the stack of $U$ 2DMamba layers. We utilize the design of the original Mamba block [15] and replace the original 1D selective scan with our 2D variant.
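The discretized recurrence of Eqs. (1)-(2) with the input-dependent parameters of Eq. (3) can be sketched as a toy NumPy loop. This is a minimal single-channel sketch: `delta` is passed in precomputed rather than produced by learned linear functions of $x_t$, and all names are ours, not the paper's.

```python
import numpy as np

def selective_scan_1d(x, A, B, C, delta):
    """Toy 1D selective scan: Eqs. (1)-(2) with per-step discretization.

    x: (L,) input sequence; A, B, C: (N,) per-state-dimension parameters;
    delta: (L,) input-dependent step sizes (Eq. 3 would compute these from x).
    """
    L, N = x.shape[0], A.shape[0]
    h = np.zeros(N)
    y = np.zeros(L)
    for t in range(L):
        A_bar = np.exp(delta[t] * A)   # discretized state matrix (Eq. 3)
        B_bar = delta[t] * B           # discretized input matrix (Eq. 3)
        h = A_bar * h + B_bar * x[t]   # state update (Eq. 1), all N dims at once
        y[t] = C @ h                   # output aggregation over N dims (Eq. 2)
    return y

# With A = 0 the state never decays (A_bar = 1), so y is a running sum of x.
y = selective_scan_1d(np.array([1.0, 2.0, 3.0]),
                      A=np.zeros(1), B=np.ones(1), C=np.ones(1),
                      delta=np.ones(3))
print(y)  # → [1. 3. 6.]
```

With $A^d < 0$, larger $\Delta_t$ shrinks $\bar{A}_t^d$, so the model can "forget" the past at selected steps, which is the selectivity the text describes.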
The second component is an aggregator, an attention-based module with two linear projections, producing a slide feature.

Our 2DMambaMIL first tiles an input WSI into patches $\{X_{i,j}\}$ with $i\in \{1,2,\dots ,H\}$ and $j\in \{1,2,\ldots ,W\}$, where $H$ and $W$ denote the number of tiled patches along the height and width dimensions, respectively. These patches are then embedded differently based on their types. Tissue patches are embedded using a pre-trained pathology feature extractor $f$. Additionally, we propose using a learnable token $p$ to represent the non-tissue patches padded in to obtain a rectangular 2D feature map. This allows the model to learn a proper representation of non-tissue regions during training. Formally, the WSI is transformed into a feature map $x$ of rectangular shape $(H,W)$:

$$
x_{i,j} = \begin{cases} f(X_{i,j}) & \text{if } X_{i,j} \text{ is a tissue patch} \\ p & \text{otherwise} \end{cases}. \tag{4}
$$

# 3.3. 2D selective SSM architecture

We detail the 2D selective SSM architecture. The key component of 2DMamba is the 2D selective scan operation. In contrast to the vanilla Mamba, which aggregates information from a flattened 1D sequence, 2DMamba aggregates both geometric and semantic information directly from a 2D feature map. In particular, 2DMamba conducts both horizontal and vertical scans in parallel. For simplicity, we omit the state dimension superscript $d$ in this section. The parameters of the 2D selective scan remain the same as in the 1D case in Eq. (3), with the subscript being $(i,j)$ to index 2D inputs instead of $t$. We reuse $x_{i,j}$ to represent the input of the 2D selective scan after the normalization, projection, and convolution layers in Fig. 2.

We formulate 2D scanning in a manner similar to the vanilla Mamba to maintain efficient parallelism. As shown in Fig. 2, we first conduct a horizontal scan on each row independently, equivalent to applying a 1D selective scan to each row. Specifically, the state $h_{i,j}^{\mathrm{hor}}$ obtained during the horizontal scan is:

$$
h_{i,j}^{\mathrm{hor}} = \bar{A}_{i,j} h_{i,j-1}^{\mathrm{hor}} + \bar{B}_{i,j} x_{i,j}. \tag{5}
$$

Note that, for the first column, we assume $h_{i,0}^{\mathrm{hor}} = 0$, and thus $h_{i,1}^{\mathrm{hor}} = \bar{B}_{i,1}x_{i,1}$. The two parameters $\bar{A}_{i,j}$ and $\bar{B}_{i,j}$, which depend on $x_{i,j}$, regulate the contributions of the previous state $h_{i,j-1}^{\mathrm{hor}}$ and the current input $x_{i,j}$.

After the horizontal scan, we apply our vertical scan on each column of $h_{i,j}^{\mathrm{hor}}$ independently. Compared with the horizontal scan, we replace $\bar{B}_{i,j}x_{i,j}$ with the result $h_{i,j}^{\mathrm{hor}}$ obtained from the horizontal scan:

$$
h_{i,j} = \bar{A}_{i,j} h_{i-1,j} + h_{i,j}^{\mathrm{hor}}. \tag{6}
$$

Note that, for the first row, $h_{1,j} = h_{1,j}^{\mathrm{hor}}$ by assuming $h_{0,j} = 0$. We reuse the same $\bar{A}_{i,j}$ for the vertical scan.

If we omit the subscripts of $\bar{A}$ and $\bar{B}$ and expand Eqs. (5) and (6), the hidden state $h_{i,j}$ can be formulated as the following equation (detailed derivations in Supp. C):

$$
h_{i,j} = \sum_{i^{\prime} \leq i} \sum_{j^{\prime} \leq j} \bar{A}^{(i-i^{\prime}+j-j^{\prime})} \bar{B} x_{i^{\prime},j^{\prime}}. \tag{7}
$$

After the two scans, the output $y$ is aggregated from $h$ by the parameter $C$, similar to 1D Mamba: $y_{i,j} = Ch_{i,j}$. For each location $(i,j)$, the aggregated information is obtained from its upper-left locations.

By doing so, 2DMamba aggregates information without spatial discrepancy.
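The two-pass recurrence of Eqs. (5)-(6) and its closed form in Eq. (7) can be checked with a small NumPy sketch. For illustration we use constant scalar $\bar{A}$ and $\bar{B}$ (the real model makes them input-dependent per Eq. (3)); the function and variable names are ours.

```python
import numpy as np

def scan_2d(x, a_bar, b_bar):
    """Horizontal scan (Eq. 5) followed by vertical scan (Eq. 6)."""
    H, W = x.shape
    h_hor = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            prev = h_hor[i, j - 1] if j > 0 else 0.0   # h_{i,0} = 0
            h_hor[i, j] = a_bar * prev + b_bar * x[i, j]
    h = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            prev = h[i - 1, j] if i > 0 else 0.0       # h_{0,j} = 0
            h[i, j] = a_bar * prev + h_hor[i, j]
    return h

def closed_form(x, a_bar, b_bar, i, j):
    """Eq. (7): contributions decay with Manhattan distance to (i, j)."""
    return sum(a_bar ** ((i - ip) + (j - jp)) * b_bar * x[ip, jp]
               for ip in range(i + 1) for jp in range(j + 1))

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3))
h = scan_2d(x, a_bar=0.5, b_bar=2.0)
assert np.isclose(h[2, 2], closed_form(x, 0.5, 2.0, 2, 2))
```

The exponent in `closed_form` is exactly the Manhattan distance, which is the "spatial continuity" property the text derives.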
In comparison, the hidden state of the vanilla Mamba on a flattened image is given by $h_i^{1D} = \sum_{i' \leq i} \bar{A}^{i - i'} \bar{B} x_{i'}$, where $i$ denotes the 1D index. The order $i - i'$ represents the distance between $i$ and $i'$ in the flattened sequence, where a higher order (larger distance) may lead to forgetting [42]. This mathematically encapsulates the concept of "spatial discrepancy". In contrast, 2DMamba achieves the formulation in Eq. (7), where the order $i - i' + j - j'$ corresponds to the Manhattan distance between $(i', j')$ and $(i, j)$, thereby preserving the 2D structure. This distance represents a path from $(i', j')$ to $(i, j)$ that moves horizontally to the right and then vertically downward. This mathematically encapsulates the concept of "spatial continuity". For instance, the last hidden state for a $3 \times 3$ feature map can be expressed as:

$$
\begin{array}{ll}
h_{3,3}^{1D} = \bar{A}^{8}\bar{B}x_{1,1} + \bar{A}^{7}\bar{B}x_{1,2} + \bar{A}^{6}\bar{B}x_{1,3} + & h_{3,3}^{2D} = \bar{A}^{4}\bar{B}x_{1,1} + \bar{A}^{3}\bar{B}x_{1,2} + \bar{A}^{2}\bar{B}x_{1,3} + \\
\bar{A}^{5}\bar{B}x_{2,1} + \bar{A}^{4}\bar{B}x_{2,2} + \bar{A}^{3}\bar{B}x_{2,3} + & \bar{A}^{3}\bar{B}x_{2,1} + \bar{A}^{2}\bar{B}x_{2,2} + \bar{A}^{1}\bar{B}x_{2,3} + \\
\bar{A}^{2}\bar{B}x_{3,1} + \bar{A}^{1}\bar{B}x_{3,2} + \bar{A}^{0}\bar{B}x_{3,3} & \bar{A}^{2}\bar{B}x_{3,1} + \bar{A}^{1}\bar{B}x_{3,2} + \bar{A}^{0}\bar{B}x_{3,3}
\end{array} \tag{8}
$$

where the much larger order term $\bar{A}^8$ of $x_{1,1}$ in the 1D case (compared to $\bar{A}^4$ in the 2D case) results in much more forgetting and a loss of the 2D structure information. Notably, spatial discrepancy becomes particularly problematic in WSIs, as it leads to significantly larger order terms (e.g. $\bar{A}^{200}$) due to the large size of WSIs.

# 3.4. Hardware-Aware 2D Selective Scan

We present our hardware-aware scanning operator that accelerates 2D selective scans. First, we revisit the GPU memory hierarchy and analyze the major challenges for 2D selective scans. Then, we present our novel operator in detail.

GPU memory hierarchy. Fig. 3 (d) illustrates the memory hierarchy of modern GPUs. The green area represents off-chip GPU memory, which has low speed and high capacity; we refer to it as high-bandwidth memory (HBM). The orange area denotes on-chip memory, which has high speed but low capacity, and is referred to as SRAM. In GPU algorithms, data is transferred from HBM to SRAM for computation, and the results are stored back to HBM to vacate SRAM for succeeding computation. Memory transfers are expensive. Therefore, many GPU algorithms [9, 11] are bounded by memory rather than by computation. Mamba's selective scan [15] is also memory-bounded.

Mamba's 1D selective scan. The vanilla Mamba is fast because it exploits the GPU memory hierarchy through 1D tiling and caching. As shown in Fig. 3 (a), a long sequence in HBM is divided into smaller tiles. Each tile is loaded into SRAM, scanned across $N$ independent state dimensions, aggregated into a single output by the rules specified in Eq. (2), and stored back to HBM. The intermediate results of the $N$ state dimensions are not materialized on HBM and will be recomputed during back-propagation. The overall memory access complexity is $\mathcal{O}(L)$, where $L$ denotes the sequence length.

Naive 2D selective scan. It is not trivial to extend 1D Mamba scans to 2D. As illustrated in Fig. 3 (b), a naive extension of the 1D Mamba will scan a 2D feature map in two steps. First, the feature map is tiled into $H$ rows for row-wise 1D Mamba scans. Next, the succeeding vertical scans must be applied independently to each column, where each column has $N$ independent state dimensions.
Therefore, the horizontal scanner must materialize $N$ intermediate feature maps on HBM. Each feature map is then tiled into $W$ columns for column-wise 1D Mamba scans. Its memory access complexity is $\mathcal{O}(NHW) = \mathcal{O}(NL)$, which, as demonstrated in Table 3, results in low throughput and high memory consumption.

Hardware-aware 2D selective scan. The proposed hardware-aware 2D selective scan operator, illustrated in Fig. 3 (c), optimizes memory transactions by 2D tiling and caching. Instead of tiling by rows or by columns, we divide the feature map into a 2D grid. At each step, we only load a small submatrix into SRAM. Then we conduct horizontal and vertical scans for $N$ independent state dimensions, and write the aggregated output back to HBM. This avoids the explicit materialization of the state dimensions and maintains an overall memory access complexity of $\mathcal{O}(HW) = \mathcal{O}(L)$, equivalent to the vanilla Mamba.

![](images/58932a5eb9a8feb9b86261103e54c762d9fa629e8f7b1604256d6b07cd674abd.jpg)
(a) 1D-Mamba scan operator

![](images/2fb4951455a09a65dd6387a6390fda1d4d4fc266eaa49227cc50b98cb55e4185.jpg)
(b) A naïve 2D scan operator
(c) Our 2D scan operator

![](images/23469b0030a4f94b8a8b34b1d91beba7a3c3622e5e886a1e10084b648be3f4e0.jpg)

![](images/e0edd10ace1a286cd70b4d1e92d5fd1afe0379d300bd3bef9bb4b30915187982.jpg)
Figure 3. Our hardware-aware 2D selective scan operator with an efficient caching mechanism and high parallelism. Orange represents operations on SRAM and green represents those on HBM. (a) The 1D Mamba scan operator takes as input a flattened sequence on HBM. It tiles the input into sub-sequences. Each sub-sequence is loaded from HBM to SRAM, scanned and reduced across $N$ intermediate dimensions, and then written back to HBM. The total memory access complexity is $\mathcal{O}(L)$. (b) A naive 2D scan operator tiles the 2D feature map by rows and columns, and performs 1D Mamba scans on each row, each column, and each of the $N$ independent state dimensions. This explicitly instantiates $N$ intermediate feature maps on HBM, resulting in a memory access complexity of $\mathcal{O}(NL)$. (c) Our 2D scan operator tiles the feature map into 2D grids and scans each grid in 2 directions. Intermediate features are reduced inside each tile; only the aggregated result is stored back to HBM. The memory complexity is $\mathcal{O}(L)$. (d) GPU memory hierarchy: SRAMs are small but fast; HBMs have large capacities but are slow. (e) NVIDIA's CUB BlockScan only supports 1D sequences, with sizes of multiples of 32. Scanning a two-row grid requires two sequential kernel launches and padding elements. (f) Our SegmentedBlockScan enables scanning multiple rows and columns in parallel. It reduces the amount of memory transactions and padding data.

![](images/ed45a409e2a08ac141043f50b4496a3cd52a1f6fa7f0f9515021241f147c7e0a.jpg)
(e) Nvidia's CUB library

![](images/fbf317fa48e71fe04b8379ea4b5b8727194378cea9abe637e1924f1f727af3fa.jpg)
(f) Our SegmentedBlockScan

Moreover, vanilla Mamba employs NVIDIA's CUB library [34] for 1D parallel scans. However, as illustrated in Fig. 3 (e), CUB's BlockScan algorithm only supports full-sequence scanning; thus, it requires multiple scans for a multi-row feature map. Moreover, for a 2D feature map, CUB BlockScan requires both its height $H$ and width $W$ to be multiples of 32, where 32 is the smallest thread-scheduling granularity on NVIDIA GPUs. Therefore, small feature maps must be padded before computation, leading to inefficiencies. For instance, a typical $14 \times 14$ feature map requires 18 padding elements per row and column, wasting as much as $56\%$ of the computation. To resolve this limitation, we introduce the SegmentedBlockScan algorithm, which is illustrated in Fig. 3 (f). It distributes GPU threads across both rows and columns, only requiring $H \times W$ to be a multiple of 32.
This enables simultaneous multi-row/column scanning and significantly reduces the padding requirements for small feature maps. For instance, for the same $14 \times 14$ feature map, our method requires only 2 padding elements per row and column. The detailed algorithm and implementation can be found in Supp. B.

# 4. Experiments

# 4.1. Dataset

We assess 2DMambaMIL on 5 public pathology classification datasets, TCGA-BRCA [1], BRACS [3], PANDA [4], DHMC [50], and TCGA-NSCLC, and 5 public survival analysis datasets, TCGA-(KIRC, KIRP, LUAD, STAD, UCEC). These datasets cover a variety of organs, including breast, prostate, lung, kidney, stomach, and uterus. The number of slides ranges from 261 to 10,614, and we use $20\times$ magnification for all of these datasets. Details of the datasets are listed in Supp. D. Following [26], we evaluate our 2DMamba on two natural image benchmarks: ImageNet-1K classification and ADE20K semantic segmentation.

# 4.2. Results

WSI Classification. We compare 2DMambaMIL with eight other SOTA MILs on five WSI classification datasets. The baselines include ABMIL [21], CLAM [29], DSMIL [25], DTFD-MIL [47], TransMIL [41], S4-MIL [13], MambaMIL [46], and SRMambaMIL [46]. The first four MILs are attention-based, TransMIL is Transformer-based, and the last three are 1D SSM-based MILs. We use three metrics to evaluate WSI classification performance: accuracy (Acc), F1 score (F1), and area under the curve (AUC). Table 1 shows that our 2DMambaMIL surpasses all
| Method | BRACS (Acc / F1 / AUC) | DHMC (Acc / F1 / AUC) | PANDA (Acc / F1 / AUC) | TCGA-NSCLC (Acc / F1 / AUC) | TCGA-BRCA (Acc / F1 / AUC) |
| --- | --- | --- | --- | --- | --- |
| AB-MIL | 0.7057 / 0.6015 / 0.8939 | 0.8684 / 0.7774 / 0.9695 | 0.4883 / 0.4269 / 0.7797 | 0.8758 / 0.8756 / 0.9572 | 0.9292 / 0.8893 / 0.9747 |
| DSMIL | 0.6759 / 0.5618 / 0.8618 | 0.8711 / 0.7934 / 0.9583 | 0.4633 / 0.3847 / 0.7660 | 0.8782 / 0.8780 / 0.9567 | 0.9375 / 0.8961 / 0.9770 |
| CLAM | 0.7103 / 0.6014 / 0.9016 | 0.8711 / 0.7909 / 0.9727 | 0.4802 / 0.4224 / 0.7820 | 0.8804 / 0.8803 / 0.9536 | 0.9333 / 0.8960 / 0.9753 |
| DTFD-MIL | 0.7012 / 0.6131 / 0.8787 | 0.8711 / 0.7704 / 0.9521 | 0.4704 / 0.3853 / 0.7665 | 0.8736 / 0.8732 / 0.9559 | 0.9271 / 0.8809 / 0.9633 |
| TransMIL | 0.6919 / 0.6063 / 0.8759 | 0.8067 / 0.7136 / 0.9466 | 0.4636 / 0.3970 / 0.7728 | 0.8850 / 0.8845 / 0.9626 | 0.9375 / 0.9028 / 0.9763 |
| S4-MIL | 0.6621 / 0.5904 / 0.8457 | 0.8644 / 0.7847 / 0.9284 | 0.5047 / 0.4486 / 0.7986 | 0.8851 / 0.8849 / 0.9571 | 0.9458 / 0.9154 / 0.9770 |
| MambaMIL | 0.7379 / 0.6832 / 0.8883 | 0.8550 / 0.7789 / 0.9661 | 0.4679 / 0.4216 / 0.7781 | 0.8758 / 0.8756 / 0.9582 | 0.9333 / 0.8939 / 0.9657 |
| SRMambaMIL | 0.7379 / 0.6789 / 0.8915 | 0.8590 / 0.7735 / 0.9639 | 0.4711 / 0.4209 / 0.7776 | 0.8850 / 0.8849 / 0.9592 | 0.9313 / 0.8900 / 0.9657 |
| 2DMambaMIL | 0.7517 / 0.7045 / 0.8964 | 0.8926 / 0.8027 / 0.9468 | 0.5075 / 0.4562 / 0.8184 | 0.8851 / 0.8850 / 0.9618 | 0.9458 / 0.9156 / 0.9782 |
current SOTA methods across multiple datasets, indicating our strong generalization ability. Compared with the best-performing non-Mamba method, we achieve significant improvements of up to $5.83\%$ in accuracy, $14.90\%$ in F1 score, and $4.65\%$ in AUC. 2DMambaMIL also outperforms SSM-based methods by up to $3.26\%$ in accuracy, $3.11\%$ in F1 score, and $2.48\%$ in AUC, showing the benefit of preserving spatial continuity in WSIs.

WSI Survival Analysis. We further compare 2DMambaMIL with all eight MILs on five WSI survival datasets. We assess the performance using the concordance index (C-index), which evaluates how well a survival model's predicted risks rank patients relative to their actual survival outcomes. As shown in Table 2, 2DMambaMIL consistently achieves the highest C-index scores across all datasets, indicating superior predictive performance. Specifically, 2DMambaMIL achieves a relative improvement of $0.6\%$, $1.2\%$, $5.5\%$, $2.9\%$, and $1.0\%$ on C-index compared with the best-performing baseline on KIRC, KIRP, LUAD, STAD, and UCEC, respectively.

Table 1. The comparison of accuracy (Acc), F1 and AUC on five WSI classification datasets. We conducted each experiment five times using five different random seeds and reported their mean. The highest metrics are marked as bold.
| Method | KIRC | KIRP | LUAD | STAD | UCEC |
| --- | --- | --- | --- | --- | --- |
| ABMIL | 0.7051 | 0.7824 | 0.6157 | 0.6119 | 0.7243 |
| DSMIL | 0.6240 | 0.7122 | 0.6114 | 0.6010 | 0.6324 |
| CLAM | 0.5723 | 0.7197 | 0.5874 | 0.5883 | 0.6312 |
| DTFD-MIL | 0.7271 | 0.7933 | 0.6020 | 0.6168 | 0.7462 |
| TransMIL | 0.6944 | 0.7317 | 0.6139 | 0.5978 | 0.6997 |
| S4-MIL | 0.7232 | 0.7905 | 0.5945 | 0.6001 | 0.7459 |
| MambaMIL | 0.7096 | 0.7822 | 0.5952 | 0.6244 | 0.7419 |
| SRMambaMIL | 0.7178 | 0.7424 | 0.5876 | 0.6130 | 0.7398 |
| 2DMambaMIL | 0.7311 | 0.8027 | 0.6198 | 0.6428 | 0.7536 |
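The C-index reported in Table 2 is a pairwise ranking statistic; a minimal illustrative sketch (not the paper's evaluation code) might handle right-censoring as follows: a pair of patients is comparable only when the earlier observed time corresponds to an actual event, and a pair is concordant when the patient with the shorter survival received the higher predicted risk.

```python
def concordance_index(times, events, risks):
    """Fraction of comparable patient pairs whose predicted risks are
    ordered consistently with their observed survival times.

    times:  observed survival or censoring times
    events: 1 if the event was observed, 0 if the patient was censored
    risks:  model-predicted risk scores (higher risk = shorter survival)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Comparable only if the earlier time belongs to an observed event.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:      # correctly ranked pair
                    concordant += 1
                elif risks[i] == risks[j]:   # tie in risk counts half
                    concordant += 0.5
    return concordant / comparable

# Shortest observed survival gets the highest predicted risk: perfect ranking.
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # → 1.0
```

With the risk order reversed, every comparable pair is discordant and the C-index drops to 0; a random ranking would sit near 0.5.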
Table 2. The comparison of C-index on five survival analysis datasets. We performed 5-fold cross-validation for all experiments. The highest metrics are bold.

Speed and GPU Memory Efficiency. Our method demonstrates high speed and low memory usage. We evaluate the floating-point operations (FLOPs), throughput, and GPU memory consumption during inference on three input feature sizes: $14 \times 14$, $56 \times 56$, and $200 \times 200$. First, we compare three CUDA-based scanning operators: the CUB 1D scan used by Mamba, the naive 2D scan introduced in Section 3.4, and our optimized 2D scan, across the three input sizes with 16 independent state dimensions. As shown in Table 3, our 2D scan significantly outperforms the naive 2D scan in both throughput and GPU memory efficiency across all input sizes, with the performance gap widening as the feature size increases. Our 2D scan matches the throughput of Mamba's CUB scan for the $14 \times 14$ input size. However, as the input size increases, its throughput declines relative to the CUB scan. This is due to the more complex memory layout of 2D data and our doubled computation. Nonetheless, our 2D scan maintains linear memory consumption with respect to the sequence length. We then assess the CUDA implementation of Mamba, the Python implementation of our 2DMamba, and the CUDA implementation of our 2DMamba within the MIL framework, across the three input feature sizes. Table 3 indicates that our CUDA-based 2DMambaMIL framework consistently outperforms the Python-based implementation across all metrics, benefiting from our hardware-aware 2D scan operator. The throughput of our method remains at $70\%$-$90\%$ of the vanilla Mamba-based MIL framework.

Natural Image Classification. Besides its effectiveness and efficiency on pathology images, our method also generalizes well to natural image classification. We apply our 2DMamba to the SOTA Mamba-based method on natural images, VMamba [26].
We replace its Mamba block with our 2DMamba block and name the result 2DVMamba. We first evaluate it on the ImageNet-1K classification dataset and compare it with Swin Transformer [27], Vim [49], EfficientVMamba [36], LocalVMamba [20], and the original VMamba. Table 4 shows that our 2DVMamba achieves $0.2\%$ higher accuracy than the original VMamba and surpasses all SOTA methods.

Natural Image Segmentation. We further evaluate the
| Scope | Method | 14 × 14 (FLOPs / Thro. / Mem.) | 56 × 56 (FLOPs / Thro. / Mem.) | 200 × 200 (FLOPs / Thro. / Mem.) |
| --- | --- | --- | --- | --- |
| CUDA operator | CUB 1D scan | 9K / 49K / 1.6KB | 150K / 12K / 25.1KB | 1.9M / 3K / 0.3MB |
| CUDA operator | Naive 2D scan | 16K / 0.2K / 14.1KB | 251K / 0.06K / 225.8KB | 3.2M / 0.02K / 2.9MB |
| CUDA operator | Our 2D scan | 16K / 40K / 1.6KB | 251K / 6K / 38.5KB | 3.2M / 1K / 0.5MB |
| MIL framework | Mamba (CUDA) | 58M / 894 / 24MB | 0.9G / 752 / 58MB | 11.8G / 203 / 500MB |
| MIL framework | 2DMamba (Python) | 63M / 327 / 46MB | 1.0G / 187 / 430MB | 12.9G / 135 / 842MB |
| MIL framework | 2DMamba (CUDA) | 63M / 655 / 24MB | 1.0G / 625 / 76MB | 12.9G / 185 / 598MB |
Table 3. Comparison of floating-point operations (FLOPs), throughput (Thro., feature maps per second), and GPU memory consumption during inference. CUDA operators are measured with single-dimensional feature input, and MIL frameworks are measured with 128-dimensional feature input. The state dimension is set to 16 for all experiments.
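The separable horizontal-then-vertical scan benchmarked above can be illustrated in plain Python. This is only a sketch with a single scalar state and fixed, input-independent $\bar{A}$ and $\bar{B}$; the actual operator is selective, multi-dimensional, and fused into a single CUDA kernel. It reproduces the order terms of Eq. (8): the coefficient of $x_{1,1}$ in the last hidden state of a $3 \times 3$ map is $\bar{A}^4$ (Manhattan distance 4) for the 2D scan versus $\bar{A}^8$ (flattened distance 8) for the 1D scan.

```python
def scan_2d(x, A, B):
    """Horizontal-then-vertical scan, so that (0-indexed)
    h[i][j] = sum over i'<=i, j'<=j of A**((i-i')+(j-j')) * B * x[i'][j']."""
    H, W = len(x), len(x[0])
    # Horizontal pass: u[i][j] = A * u[i][j-1] + B * x[i][j]
    u = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            u[i][j] = (A * u[i][j - 1] if j > 0 else 0.0) + B * x[i][j]
    # Vertical pass: h[i][j] = A * h[i-1][j] + u[i][j]
    h = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            h[i][j] = (A * h[i - 1][j] if i > 0 else 0.0) + u[i][j]
    return h

def scan_1d_flatten(x, A, B):
    """Vanilla 1D scan over the row-major flattened sequence."""
    h, out = 0.0, []
    for row in x:
        for v in row:
            h = A * h + B * v
            out.append(h)
    return out

# Coefficient of the top-left input in the last hidden state of a 3x3 map:
x = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
A, B = 0.5, 1.0
print(scan_2d(x, A, B)[2][2])        # 0.5**4 = 0.0625   (2D: Manhattan distance 4)
print(scan_1d_flatten(x, A, B)[-1])  # 0.5**8 = 0.00390625 (1D: flattened distance 8)
```

With $\bar{A} = 0.5$, the 2D scan retains the corner input 16 times more strongly than the flattened 1D scan, which is exactly the "spatial discrepancy" the paper describes.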
| Method | #Param | FLOPs | Top-1 Acc (%) |
| --- | --- | --- | --- |
| Swin-T | 28M | 4.5G | 81.3 |
| Vim-S | 26M | - | 80.3 |
| EfficientVMamba-B | 33M | 4.0G | 81.8 |
| LocalVMamba-T | 26M | 5.7G | 82.7 |
| VMamba-T | 30M | 4.91G | 82.6 |
| 2DVMamba-T | 30M | 4.94G | 82.8 |

performance of 2DVMamba on the ADE20K semantic segmentation dataset. Tab. 5 shows that 2DVMamba-T outperforms the baseline VMamba-T, achieving a gain of 0.7 in single-scale mIoU and 0.5 in multi-scale mIoU, surpassing all baselines. Notably, the enhancement in segmentation performance is more pronounced than that in classification. This is likely because segmentation is a dense prediction task in which maintaining spatial continuity across patches is crucial.

Table 4. The top-1 accuracy (%) of our 2DVMamba-T on the ImageNet-1K dataset. All images are of size $224 \times 224$.
| Method | #Param. | FLOPs | mIoU (SS) | mIoU (MS) |
| --- | --- | --- | --- | --- |
| Swin-T | 60M | 945G | 44.5 | 45.8 |
| Vim-S | 46M | - | 44.9 | - |
| EfficientVMamba-B | 65M | 930G | 46.5 | 47.3 |
| LocalVMamba-T | 57M | 970G | 47.9 | 49.1 |
| VMamba-T | 62M | 949G | 47.9 | 48.8 |
| 2DVMamba-T | 62M | 950G | 48.6 | 49.3 |
Ablation on the non-tissue padding. We ablate our learnable padding token for non-tissue regions by comparing it with a naive alternative that pads all non-tissue positions with a fixed zero token, on the PANDA and TCGA-BRCA datasets. Table 6 shows that our learnable padding outperforms the fixed padding by a relative $1.56\%$-$4.25\%$ in accuracy and $0.62\%$-$1.58\%$ in AUC. This suggests that our trainable padding enables the scanning to adapt more effectively to non-tissue regions.

Table 5. The performance of our 2DVMamba-T on the ADE20K semantic segmentation dataset. "SS" and "MS" denote single-scale and multi-scale testing, respectively. FLOPs are calculated with an input size of $512 \times 2048$.
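The mIoU metric reported for segmentation is the per-class intersection-over-union averaged over classes; a minimal single-scale sketch (flat lists of class indices instead of 2D label maps, and classes absent from both prediction and ground truth skipped) could be:

```python
def mean_iou(pred, gt, num_classes):
    """Per-class IoU averaged over classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and g == c for p, g in zip(pred, gt))
        union = sum(p == c or g == c for p, g in zip(pred, gt))
        if union > 0:              # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Two classes over 4 pixels: class 0 IoU = 2/3, class 1 IoU = 1/2, mean = 7/12.
print(mean_iou([0, 0, 1, 1], [0, 0, 1, 0], num_classes=2))
```

Multi-scale ("MS") evaluation differs only in how predictions are produced (averaging logits over resized inputs); the metric itself is computed the same way.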
| Padding token | PANDA (Acc / AUC) | TCGA-BRCA (Acc / AUC) |
| --- | --- | --- |
| Fixed zero | 0.4868 / 0.8057 | 0.9313 / 0.9722 |
| Learnable | 0.5075 / 0.8184 | 0.9458 / 0.9782 |
Ablation on the multi-directional scanning. We ablate MambaMIL with 2-direction [49], 4-direction raster [45], and 4-direction cross [26] scans, comparing them with 2DMambaMIL. Tab. 7 shows that while scanning in multiple directions improves performance, it remains inferior to 2DMambaMIL. This demonstrates that multi-directional scanning does not accurately model cell-to-cell interactions, as cells interact in a coordinated manner across all directions rather than only along horizontal and vertical orientations.

Table 6. Ablation on the non-tissue paddings on the PANDA and TCGA-BRCA datasets. Our learnable token achieves higher performance than the fixed zero token.
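The learnable padding ablated above can be sketched as one shared token vector written into every non-tissue grid cell before scanning; in the real model that token is a trainable parameter updated by back-propagation, whereas the zero baseline freezes those cells at zero. The function and variable names below are illustrative, not the paper's API:

```python
def pad_non_tissue(features, tissue_mask, pad_token):
    """Place patch features on a 2D grid; non-tissue cells receive a shared
    (learnable) pad_token instead of a fixed zero vector.

    features:    dict mapping (row, col) -> feature vector for tissue patches
    tissue_mask: 2D list of booleans, True where tissue is present
    pad_token:   the shared token (a trainable parameter in the real model)
    """
    H, W = len(tissue_mask), len(tissue_mask[0])
    return [[features[(i, j)] if tissue_mask[i][j] else list(pad_token)
             for j in range(W)] for i in range(H)]

mask = [[True, False], [False, True]]
feats = {(0, 0): [1.0, 2.0], (1, 1): [3.0, 4.0]}
grid = pad_non_tissue(feats, mask, pad_token=[0.1, -0.2])
print(grid[0][1])  # → [0.1, -0.2]  (shared learnable token, not zeros)
```

Because every non-tissue cell shares one token, the scan can learn a single "background" state transition instead of being forced through zeros.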
| Method | PANDA (Acc / AUC) | TCGA-BRCA (Acc / AUC) |
| --- | --- | --- |
| MambaMIL (1D) | 0.4679 / 0.7781 | 0.9333 / 0.9657 |
| w. 2-direction | 0.4853 / 0.7749 | 0.9374 / 0.9753 |
| w. 4-direction (raster) | 0.4923 / 0.7918 | 0.9388 / 0.9755 |
| w. 4-direction (cross) | 0.4939 / 0.8006 | 0.9402 / 0.9698 |
| 2DMambaMIL (2D) | 0.5075 / 0.8184 | 0.9458 / 0.9782 |
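The 2-direction baseline in this ablation reuses the 1D scan on the flattened sequence in both orders and merges the two outputs; the sketch below uses the same scalar-state simplification as before and averaging as the merge, which is one common choice rather than necessarily the cited implementations':

```python
def scan_1d(seq, A, B):
    """Causal 1D scan: h_t = A * h_{t-1} + B * x_t."""
    h, out = 0.0, []
    for v in seq:
        h = A * h + B * v
        out.append(h)
    return out

def scan_bidirectional(seq, A, B):
    """2-direction scan: average of a forward and a backward 1D scan,
    so every position sees both past and future context."""
    fwd = scan_1d(seq, A, B)
    bwd = scan_1d(seq[::-1], A, B)[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

out = scan_bidirectional([1.0, 0.0, 0.0, 0.0], A=0.5, B=1.0)
print(out)  # → [1.0, 0.25, 0.125, 0.0625]
```

A 4-direction variant applies the same idea to four flattening orders of the 2D grid; all of them still measure distance along a 1D path, which is why they remain inferior to the genuinely 2D scan in Tab. 7.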
Table 7. The comparison of MambaMIL with 2-direction [49], 4-direction scans [26, 45], and 2DMambaMIL.

Qualitative Evaluation. We qualitatively compare the attention heatmaps generated by 2DMambaMIL with four existing approaches (AB-MIL, CLAM, MambaMIL, and SRMambaMIL) for classification and survival analysis tasks, focusing on pathological and biological interpretability. The results demonstrate that 2DMambaMIL consistently targets tumor areas in the WSI classification and survival analysis datasets, occasionally including pixels from the immediate tumor-adjacent regions. Fig. 4 presents a case of kidney clear cell carcinoma in the context of survival analysis. AB-MIL and SRMambaMIL predominantly attend to non-tumor regions, which is unnecessary for risk prediction, and CLAM also shows considerable attention to non-tumor areas. On the contrary, the attentions of our 2DMambaMIL and MambaMIL are both driven by tumor areas. In comparison, our method exhibits a more heterogeneous attention pattern, specifically focusing on highly survival-related regions (indicated by the red arrows), whereas MambaMIL's attention is more uniformly distributed and falls partly on less survival-related regions (indicated by the violet arrows). Additional qualitative evaluations are shown in Supp. I.

![](images/9b893b79c340f d15a17e8c0235240a78651e25cefa38707d6e8ef8c51e620e05.jpg)
Figure 4. The attention visualization of 2DMambaMIL and four other methods on a TCGA-KIRC sample for survival analysis. Tumor regions are outlined in green. AB-MIL and SRMambaMIL primarily focus on non-tumor areas, while CLAM also shows substantial attention to non-tumor regions. In contrast, both 2DMambaMIL and MambaMIL focus predominantly on tumor regions. Compared with MambaMIL, the attention of 2DMambaMIL shows a more heterogeneous fashion, focusing more on critical regions related to survival (red arrows) while paying less attention to less related ones (violet arrows).

![](images/3dc6af602a2b14861044220c4b714eca5970314d0fbb395e692a9debbaa72979.jpg)

![](images/befde4e29692f2c22c7ebec2a1cf13a2d40699863faeba28699ae0bd9d2333fb.jpg)

Visualization of Effective Receptive Fields. The Effective Receptive Field (ERF) refers to the region in the input space that contributes to the activation of a certain output unit [30]. We analyze the central pixel's ERF for Swin-T, VMamba-T, and 2DVMamba-T. Fig. 5 shows that the ERF of Swin-T exhibits a local pattern, consistent with its local structure. VMamba-T exhibits a more global pattern but has a clear cross signal resulting from its 4-way 1D scan process. 2DVMamba demonstrates much more global and smooth ERFs without cross signals, showcasing its preservation of spatial continuity.

# 5. Conclusion

In this work, we presented 2DMamba, a novel 2D selective SSM framework incorporating the 2D spatial structure
We also demonstrated its strong generalizability on natural image classification and segmentation tasks by integrating it into a SOTA method. Future work will focus on refining 2D SSM designs and exploring broader applications. + +# Acknowledgement + +This work was partially supported by USA NSF grants IIS-2123920 [D.S], IIS-2212046 [D.S], IIS-1715985 [H.Q], IIS-1812606 [H.Q], the Canadian Cancer Society Breakthrough Grant [V.Q.H.T], FRQS-CRS-J1 [V.Q.H.T], NSERC-DG RGPIN-2022-05378 [M.S.H] and Amazon Research Award [M.S.H]. + +# References + +[1] The Cancer Genome Atlas Program (TCGA) — cancer.gov. https://www.cancer.gov/ccg/research/genome-sequencing/tcga.5 +[2] Ethan Baron, Itamar Zimerman, and Lior Wolf. A 2-dimensional state space layer for spatial inductive bias. In The Twelfth International Conference on Learning Representations, 2024. 2 +[3] Nadia Brancati, Anna Maria Anniciello, Pushpak Pati, Daniel Riccio, Giosue Scognamiglio, Guillaume Jaume, Giuseppe De Pietro, Maurizio Di Bonito, Antonio Foncubierta, Gerardo Botti, Maria Gabrani, Florinda Feroce, and Maria Frucci. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images. Database, 2022: baac093, 2022. 5, 2 +[4] Wouter Bulten, Kimmo Kartasalo, Po-Hsuan Cameron Chen, Peter Ström, Hans Pinckaers, Kunal Nagpal, Yuannan Cai, David F Steiner, Hester Van Boven, Robert Vink, et al. Artificial intelligence for diagnosis and gleason grading of prostate cancer: the panda challenge. Nature medicine, 28(1):154-163, 2022. 5, 2 +[5] Richard J. Chen, Chengkuan Chen, Yicong Li, Tiffany Y. Chen, Andrew D. Trister, Rahul G. Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In CVPR, pages 16144-16155, 2022. 2, 3 +[6] Richard J Chen, Tong Ding, Ming Y Lu, Drew FK Williamson, Guillaume Jaume, Bowen Chen, Andrew Zhang, Daniel Shao, Andrew H Song, Muhammad Shaban, et al. Towards a general-purpose foundation model for computational pathology. 
Nature Medicine, 2024. 1 +[7] Xuhang Chen, Baiying Lei, Chi-man Pun, and Shuqiang Wang. Brain diffuser: An end-to-end brain image to brain network pipeline. In PRCV, pages 16-26, 2023. 1 +[8] Xuhang Chen, Chi-man Pun, and Shuqiang Wang. Med-prompt: Cross-modal prompting for multi-task medical image translation. In PRCV, pages 61-75, 2024. 1 +[9] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. In International Conference on Learning Representations (ICLR), 2024. 4 +[10] Tri Dao and Albert Gu. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In International Conference on Machine Learning (ICML), 2024. 1 +[11] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022. 4 +[12] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 1, 4 +[13] Leo Fillioux, Joseph Boyd, Maria Vakalopoulou, Paul-Henry Cournède, and Stergios Christodoulidis. Structured state space models for multiple instance learning in digital pathology. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 594–604. Springer, 2023. 2, 5 + +[14] Thor I Fossen. A nonlinear unified state-space model for ship maneuvering and control in a seaway. International Journal of Bifurcation and Chaos, 15(09):2717-2746, 2005. 1 +[15] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 1, 2, 3, 4 +[16] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021. 2 +[17] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. 
Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in neural information processing systems, 34:572-585, 2021. 2, 1 +[18] Wei Han, Xiaohan Zhang, Yi Wang, Lizhe Wang, Xiaohui Huang, Jun Li, Sheng Wang, Weitao Chen, Xianju Li, Ruyi Feng, et al. A survey of machine learning and deep learning in remote sensing of geological environment: Challenges, advances, and opportunities. ISPRS Journal of Photogrammetry and Remote Sensing, 202:87-113, 2023. 1 +[19] Mahdi S Hosseini, Babak Ehteshami Bejnordi, Vincent Quoc-Huy Trinh, Lyndon Chan, Danial Hasan, Xingwen Li, Stephen Yang, Taehyo Kim, Haochen Zhang, Theodore Wu, et al. Computational pathology: a survey review and the way forward. Journal of Pathology Informatics, page 100357, 2024. 2 +[20] Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, and Chang Xu. Localmamba: Visual state space model with windowed selective scan. arXiv preprint arXiv:2403.09338, 2024.6 +[21] Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In International conference on machine learning, pages 2127-2136. PMLR, 2018. 2, 5 +[22] Jionghua Jin and Jianjun Shi. State space modeling of sheet metal assembly for dimensional control, 1999. 1 +[23] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems, 1960. 2 +[24] Sun-Yuan Kung, B.C. Levy, M. Morf, and T. Kailath. New results in 2-d systems theory, part ii: 2-d state-space models—realization and the notions of controllability, observability, and minimality. Proceedings of the IEEE, 65(6):945–961, 1977. 2 +[25] Bin Li, Yin Li, and Kevin W Eliceiri. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In CVPR, pages 14318-14328, 2021. 2, 5 +[26] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model, 2024. 
1, 2, 5, 6, 7, 3 +[27] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. 6 +[28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. 1 + +[29] Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering, 5(6):555-570, 2021. 2, 5, 1 +[30] Wenjie Luo, Yujia Li, Raquel Urtasun, and Richard Zemel. Understanding the effective receptive field in deep convolutional neural networks. Advances in neural information processing systems, 29, 2016. 8 +[31] Anh Tien Nguyen and Jin Tae Kwak. Gpc: Generative and general pathology image classifier. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 203-212. Springer, 2023. 2 +[32] Anh Tien Nguyen, Trinh Thi Le Vuong, and Jin Tae Kwak. Towards a text-based quantitative and explainable histopathology image analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 514-524. Springer, 2024. +[33] Anh Tien Nguyen, Keunho Byeon, Kyungeun Kim, and Jin Tae Kwak. Vleer: Vision and language embeddings for explainable whole slide image representation, 2025. 2 +[34] NVIDIA. CUB: Cooperative primitives for CUDA C++. https://nvidia.github.io/cccl/cub/index.html, 2024.5 +[35] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 1 +[36] Xiaohuan Pei, Tao Huang, and Chang Xu. 
Efficientvmamba: Atrous selective scan for light weight visual mamba. arXiv preprint arXiv:2403.09977, 2024. 2, 6 +[37] Marc Harold Raibert. Motor control and learning by the state space model. PhD thesis, Massachusetts Institute of Technology, 1977. 1 +[38] David E. Rumelhart and James L. McClelland. Learning Internal Representations by Error Propagation, pages 318-362. MIT Press, 1987. 1 +[39] Olcay Sertel, Jun Kong, Hiroyuki Shimada, Umit V Catalyurek, Joel H Saltz, and Metin N Gurcan. Computer-aided prognosis of neuroblastoma on whole-slide images: Classification of stromal development. Pattern recognition, 42(6):1093–1103, 2009. 2 +[40] Abdelrahman Shaker, Syed Talal Wasim, Salman Khan, Gall Jürgen, and Fahad Shahbaz Khan. Groupmamba: Parameter-efficient and accurate group visual state space model. arXiv preprint arXiv:2407.13772, 2024. 2 +[41] Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, et al. Transmil: Transformer based correlated multiple instance learning for whole slide image classification. NeurIPS, 34:2136-2147, 2021. 2, 5 +[42] Yuheng Shi, Minjing Dong, and Chang Xu. Multi-scale vmamba: Hierarchy in hierarchy visual state space model. arXiv preprint arXiv:2405.14174, 2024. 4 +[43] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 1 + +[44] Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. Nyströmformer: A nyström-based algorithm for approximating self-attention. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 14138–14148, 2021. 2 +[45] Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, and Elliot J. Crowley. Plainmamba: Improving non-hierarchical mamba in visual recognition, 2024. 2, 7 +[46] Shu Yang, Yihui Wang, and Hao Chen. Mambamil: Enhancing long sequence modeling with sequence reordering in computational pathology, 2024. 
2, 5, 3
[47] Hongrun Zhang, Yanda Meng, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Sarah E Coupland, and Yalin Zheng. DTFD-MIL: Double-tier feature distillation multiple instance learning for histopathology whole slide image classification. In CVPR, pages 18802-18812, 2022. 2, 5
[48] Jingwei Zhang, Saarthak Kapse, Ke Ma, Prateek Prasanna, Joel Saltz, Maria Vakalopoulou, and Dimitris Samaras. Prompt-mil: Boosting multi-instance learning schemes via task-specific prompt tuning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 624-634. Springer Nature Switzerland, 2023. 2
[49] Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model, 2024. 1, 2, 6, 7
[50] Mengdan Zhu, Bing Ren, Ryland Richards, Matthew Suriawinata, Naofumi Tomita, and Saeed Hassanpour. Development and evaluation of a deep neural network for histologic classification of renal cell carcinoma on biopsy and surgical resection slides. Scientific reports, 11(1):7080, 2021. 5, 2
[51] Xinliang Zhu, Jiawen Yao, Feiyun Zhu, and Junzhou Huang. Wsisa: Making survival prediction from whole slide histopathological images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7234-7242, 2017.
2 \ No newline at end of file diff --git a/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/images.zip b/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..039ab914a02907f67be68e3f265ab27bedfeb888 --- /dev/null +++ b/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d34a70af2d7fd0695d8b2ac4c2ad15346b42e7706eafe45a305dbbf86e0c070 +size 697626 diff --git a/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/layout.json b/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2be0dfd95be43a22327ab53f694b788ab105a8df --- /dev/null +++ b/CVPR/2025/2DMamba_ Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c27b73c9203f2061db9faa999aaf5f81a86ea49e1164c2f75aad5898c9befd23 +size 422074 diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_content_list.json b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1b4f1e8a74f18e1ec852fea6712ff37302d5273b --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth 
Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f42870cff3a73039ee3bb29858439e3d5fb0225ce030492f517cde25689be3f3 +size 78113 diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_model.json b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..44b6d772a99ba6c0b236e29c41602220e40e82b2 --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4844477ddb81458ecd84bbcc59ab2c82699315e4a2dcbfbb2b85295187b5c65b +size 95568 diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_origin.pdf b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bf657cf84cf49db866aaee9d195d424e619b72df --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/f5ed5ddb-3e23-43ba-b2ea-85540f2f015f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39e8bb40e995a2fa4cdb9b19d520bb2809643384fca2a1fa67a31991c645a5d5 +size 9636409 diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/full.md b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f266c10d109f6cdb3f7ebfd337c93f10f45d8e57 --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/full.md @@ -0,0 +1,304 @@ 
# 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes

Jan Held* 1,2 Renaud Vandeghen* 1 Abdullah Hamdi* 3 Adrien Deliege 1 Anthony Cioppa 1 Silvio Giancola 2 Andrea Vedaldi 3 Bernard Ghanem 2 Marc Van Droogenbroeck 1

$^{1}$ University of Liège $^{2}$ KAUST $^{3}$ University of Oxford

* Equal contribution

# Abstract

Recent advances in radiance field reconstruction, such as 3D Gaussian Splatting (3DGS), have achieved high-quality novel view synthesis and fast rendering by representing scenes with compositions of Gaussian primitives. However, 3D Gaussians present several limitations for scene reconstruction. Accurately capturing hard edges is challenging without significantly increasing the number of Gaussians, creating a large memory footprint. Moreover, they struggle to represent flat surfaces, as they are diffused in space. Without hand-crafted regularizers, they tend to disperse irregularly around the actual surface. To circumvent these issues, we introduce a novel method, named 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multi-view images. Smooth convex shapes offer greater flexibility than Gaussians, allowing for a better representation of 3D scenes with hard edges and dense volumes using fewer primitives. Powered by our efficient CUDA-based rasterizer, 3DCS achieves superior performance over 3DGS on benchmarks such as MipNeRF360, Tanks and Temples, and Deep Blending. Specifically, our method attains an improvement of up to 0.81 in PSNR and 0.026 in LPIPS compared to 3DGS while maintaining high rendering speeds and reducing the number of required primitives. Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction and novel view synthesis. The project page is https://convexsplatting.github.io

# 1. Introduction

Reconstructing complex scenes and synthesizing novel views have been fundamental challenges in computer vision and graphics for decades [1, 12], with applications ranging from virtual reality to autonomous navigation [17, 36, 40]. Neural Radiance Fields (NeRF) [38] revolutionized this area by representing scenes as continuous volumetric radiance fields, which are optimized to render novel views at high quality. However, NeRF suffered from slow training and rendering times, limiting its practicality. To address these issues, 3D Gaussian Splatting (3DGS) [26] emerged as an efficient alternative, representing scenes with millions of 3D Gaussian primitives. 3DGS significantly accelerated training and enabled real-time rendering while maintaining high-quality outputs.

![](images/db0753377cccf470a7ab542773f71f3a2f56d84c76168e3c9ddfc75859de1425.jpg)
3D Convex Splatting

![](images/9e95cf2fbd2d0c6f54815ae76910a15151770c22f49237bbb3653ae885c56157.jpg)
3D Gaussian Splatting

![](images/078431e5b24503b658a100237f5ac5abce86d81f25f18bf4842e33a5aae7fc83.jpg)

![](images/ef46ccd5491499f3d5593b4f3ceaf56a5bdd56f04e099be441906385da1e625b.jpg)

![](images/796c15cd2b8fda2d7fcc40b51f68ba3b82e64c71ce8136e57faba0e250a71d03.jpg)
3D Smooth Convexes

![](images/d7881c4b193d06c326d14b1483ba3ab0301676680ee761c3a7d3e4c22b73e314.jpg)

![](images/4cf0be061009fba6a7ee408e363a03ae885808fc3c774d20090921eb676d4d55.jpg)

![](images/47a63a14e61a82e46ea69daca6cd84f495b89428bf12897e75495f6cf286e421.jpg)
Figure 1. 3D Convex Splatting for Novel View Synthesis. We introduce a novel primitive-based pipeline for novel view synthesis with 3D smooth convexes. Our 3D smooth convexes share the rendering speed of 3D Gaussians [26] and the flexible representation of smooth convexes [9]. As a result, 3D Convex Splatting better reconstructs scenes with fewer primitives.

Despite this progress, Gaussian primitives have two inherent limitations.
(1) Gaussians lack defined physical boundaries, making them unsuitable for accurately representing flat surfaces or enabling physically meaningful decompositions of scenes. (2) Owing to their specific smoothness and rounded nature, Gaussians are inadequate for capturing hard edges and geometric structures. Each Gaussian behaves similarly to an ellipsoid with a symmetrical distribution, struggling to conform to angular boundaries or flat surfaces. This inherent limitation is highlighted by the sphere packing problem in computer graphics [8, 16], where densely packed spherical or ellipsoidal shapes leave gaps and result in inefficient coverage, especially along flat surfaces or sharp corners. As with spheres or ellipsoids, an impractically large number of Gaussian primitives would be needed to fill space without gaps, leading to increased memory consumption and computational overhead.

To overcome these limitations, we propose a novel method called 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling and reconstructing geometrically accurate radiance fields from multi-view images. 3D smooth convexes offer greater flexibility than Gaussians, as they can form dense volumes that accurately capture hard edges and detailed surfaces using fewer primitives. Figure 1 illustrates this point, showing that 3DCS enables the rendering of 3D smooth convexes to generate high-quality novel views of complex scenes. Moreover, by incorporating smoothness and sharpness parameters, we can control the curvature and the diffusion of the smooth convexes, respectively. This enables the creation of shapes that are hard or soft, dense or diffuse. Figure 2 shows a toy example of how smooth convexes can represent a chair with hard edges using far fewer elements than Gaussians, while utilizing the same optimization. For novel view synthesis, we merge the benefits of the fast rendering process of Gaussians [26] with the flexibility of 3D smooth convexes [9].
We achieve this by rendering 3D smooth convexes using our efficient CUDA-based rasterizer, which enables real-time rendering and accelerates the optimization process. To the best of our knowledge, 3D Convex Splatting is the first method to leverage differentiable smooth convex shapes for novel view synthesis on realistic scenes, outperforming previous methods that use other primitives.

Contributions. We summarize our contributions as follows: (i) We introduce 3D Convex Splatting (3DCS), utilizing 3D smooth convexes as novel primitives for radiance field representation, addressing the limitations of Gaussian primitives in capturing dense volumetric features. (ii) We develop an optimization framework and a fast, differentiable GPU-based rendering pipeline for our 3D smooth convexes, enabling high-quality 3D scene representations from multi-view images and high rendering speeds. (iii) 3DCS surpasses existing rendering primitives on the MipNeRF360, Tanks and Temples, and Deep Blending datasets, achieving better performance than 3D Gaussian Splatting while using a reduced number of primitives per scene.

![](images/5477ffb39942351ab4c9ebbe938490022f330a281d3dd863cbf5d4d1a54ce603.jpg)
Figure 2. Toy Experiment of Modeling a Chair. For the chair input image, we use Gaussians and smooth 6-point convexes to fit the chair with an increasing number of primitives. Note how the convexes efficiently represent the chair with fewer parameters.

# 2. Related Work

Neural radiance fields (NeRF). Recovering the 3D structure of a scene from images captured from multiple viewpoints is a fundamental problem in computer vision [1, 12]. The introduction of Neural Radiance Fields (NeRF) [38] revolutionized this field by representing scenes as volumetric radiance fields, enabling high-quality novel view synthesis [2, 3, 50].
NeRF employs multi-layer perceptrons to encode scene geometry and view-dependent appearance, optimized via a photometric loss through volume rendering [10, 24, 30, 37]. Enhancements to NeRF include grid-based representations for faster training [6, 13, 39, 48], baking techniques for accelerated rendering [20, 45, 46, 52], as well as addressing challenges such as anti-aliasing [3, 4], modeling unbounded scenes [3, 55], and adapting to few-shot [11, 23, 27] and one-shot settings [5, 53]. In this work, we do not rely on neural networks to model radiance fields like other NeRFs, but instead optimize 3D smooth convexes to fit 3D scenes efficiently. Yet, 3D Convex Splatting allows for strong modeling capacity that rivals Mip-NeRF360 [3] in visual fidelity, but with real-time rendering speed.

Primitive-based differentiable rendering. Differentiable rendering techniques enable gradient computation through the rendering pipeline, facilitating the optimization of scene parameters from image observations [15, 25, 31-33, 43]. Neural point-based rendering [25] represents scenes with points that store learned features for geometry and texture. 3D Gaussian Splatting (3DGS) [26] introduces Gaussian primitives parameterized by positions, covariances, and appearance attributes. By optimizing millions of Gaussians, 3DGS achieves high-quality rendering with significantly faster training times and real-time rendering capabilities. Enhancements to this approach include anti-aliasing techniques [54], exact volumetric rendering of ellipsoids [35], and extensions to dynamic scene modeling [34]. However, Gaussian primitives have inherent limitations due to their very specific smoothness, making it challenging to capture hard edges and dense volumetric structures without significantly increasing the number of primitives. This leads to increased memory consumption and computational overhead, hindering scalability and efficiency. Alternative primitives [22] have been explored to improve geometric representation. GES [18] utilizes generalized exponential functions to better capture signals with harder edges. 2D Gaussian splatting [21] collapses the 3D Gaussians into oriented planar Gaussians to better represent surfaces. In our work, we introduce 3D smooth convex shapes as a novel primitive for real-time rendering of novel views. These shapes address the limitations of Gaussian primitives by efficiently capturing dense volumetric shapes.

![](images/b86433ef910454b5ddb3d77eb6b4d70082bdfc0213cdb5ce63632d8c5aaa14e8.jpg)
Figure 3. Convex Splatting Pipeline. The 3D smooth convex is represented with a point set that is projected onto the 2D camera plane. We extract the line-delimited convex hull of the projected points and define the signed distance function for each line. The lines are combined to define an indicator function for each pixel based on the smoothness $\delta$ and sharpness $\sigma$ of the 3D convex. The pipeline is differentiable end-to-end, which allows the parameters of the smooth convex primitives to be optimized based on the rendered images.

Convex shapes. Convex shapes have been extensively studied in computer graphics and computer vision due to their geometric simplicity and flexibility in representing complex objects [14, 44]. In 3D reconstruction, convex shape representations have proven highly effective for decomposing complex structures into simpler components like planes, spheres, cubes, cylinders, and superquadrics [42, 49, 51]. CvxNet [9] and BSP-Net [7] introduce neural networks that learn hyperplanes to construct flexible convex shapes, enabling more accurate differentiable modeling of geometries with primitives. A concurrent work utilizes rigid convex polyhedra and differentiable mesh rendering to fit simple 3D shapes with few primitives using multi-view supervision [47].
However, these methods are limited to simple shapes with few optimized primitives and do not scale to large scenes or allow for accurate novel view synthesis. + +In our work, we introduce splatting-based rasterization with an array of smooth, high-capacity primitives. This allows us to achieve rendering fidelity levels comparable to volume rendering techniques in modeling complex scenes. + +# 3. Methodology + +3D Convex Splatting (3DCS) combines smooth convexes (Sec. 3.1) from CvxNet [9] with the primitive-based representation from 3DGS [26] for efficient, real-time novel view synthesis. 3DCS uses a point-based convex shape representation and convex hull operations to enable straightforward differentiable projection of 3D convex shapes onto the 2D image plane (Sec. 3.2). Further operations for smoothing, splatting, and adaptive densification are used to optimize the representation from posed images (Sec. 3.3). Figure 3 shows an overview of our convex splatting pipeline. + +# 3.1. Preliminaries on 3D Smooth Convexes + +Following CvxNet [9], we define a convex polyhedron with $J$ planes $(J > 3)$ . We define the signed distance $L_{j}(\mathbf{p})$ between a point $\mathbf{p} \in \mathbb{R}^3$ and a plane $\mathcal{H}_j$ as follows: + +$$ +L _ {j} (\mathbf {p}) = \mathbf {n} _ {j} \cdot \mathbf {p} + d _ {j} \tag {1} +$$ + +with $\mathbf{n}_j$ being the plane normal pointing towards the outside of the shape and $d_{j}$ its offset. The signed distance from $\mathbf{p}$ to the convex shape is calculated as the maximum of all the $J$ distances defined by $\tilde{\phi} (\mathbf{p}) = \max_{j = 1,\dots ,J}L_{j}(\mathbf{p})$ . 
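To make the construction concrete, the following NumPy sketch evaluates the per-plane distances of Eq. (1) and the hard maximum $\tilde{\phi}$; it also includes the LogSumExp relaxation and sigmoid indicator that the next paragraphs introduce. The cube test geometry and the `delta`, `sigma` values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def plane_sdfs(points, normals, offsets):
    """Per-plane signed distances L_j(p) = n_j . p + d_j (Eq. 1).

    points: (P, 3) queries; normals: (J, 3) outward plane normals;
    offsets: (J,). Returns a (P, J) array of distances.
    """
    return points @ normals.T + offsets

def hard_convex_sdf(points, normals, offsets):
    """Exact convex signed distance: the max over all J plane distances."""
    return plane_sdfs(points, normals, offsets).max(axis=1)

def smooth_indicator(points, normals, offsets, delta=10.0, sigma=50.0):
    """LogSumExp-smoothed distance passed through a sigmoid.

    delta (smoothness) pulls the LogSumExp toward the true max (harder
    edges); sigma (sharpness) steepens the inside/outside transition.
    """
    L = plane_sdfs(points, normals, offsets)
    phi = np.log(np.exp(delta * L).sum(axis=1))
    # sigmoid(-sigma * phi), written with tanh for numerical stability
    return 0.5 * (1.0 - np.tanh(0.5 * sigma * phi))

# Unit cube centered at the origin: 6 axis-aligned planes, offset -0.5 each.
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
offsets = np.full(6, -0.5)
inside, outside = np.array([[0.0, 0.0, 0.0]]), np.array([[2.0, 0.0, 0.0]])
# inside: negative SDF, indicator near 1; outside: positive SDF, indicator near 0
```

A production implementation would subtract the per-row maximum before exponentiating (a stable LogSumExp), but the toy values above stay well within range.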
However, to create a smooth representation of the convex shape, we use a smooth approximate signed distance function $\phi(\mathbf{p}) \approx \tilde{\phi}(\mathbf{p})$ using the LogSumExp function from CvxNet [9]:

$$
\phi(\mathbf{p}) = \log \left(\sum_{j = 1}^{J} \exp \left(\delta L_{j}(\mathbf{p})\right)\right), \tag{2}
$$

where the smoothness parameter $\delta > 0$ controls the curvature of the convex approximation. Larger values of $\delta$ bring $\phi(\mathbf{p})$ closer to $\tilde{\phi}(\mathbf{p})$, resulting in harder edges, while smaller values soften the vertices.

The indicator function $I(\mathbf{p})$ of the smooth convex is then defined by applying a sigmoid function to the approximate signed distance function [9]:

$$
I(\mathbf{p}) = \operatorname{Sigmoid}(-\sigma \phi(\mathbf{p})), \tag{3}
$$

where the sharpness parameter $\sigma > 0$ controls how rapidly the indicator function transitions at the boundary of the underlying convex shape. Higher values of $\sigma$ result in a steeper transition, making the shape's boundary more defined, whereas lower values result in a more diffuse shape. At the boundary of the convex shape, where $\phi(\mathbf{p}) = 0$, the indicator function of the smooth convex satisfies $I(\mathbf{p}) = 0.5$. More details about smooth convexes can be found in [9]. Fig. 4 illustrates the effect of sharpness $\sigma$ and smoothness $\delta$ on the indicator function $I(\cdot)$.

![](images/78ddc9e2a933b067e9d78db95c23a0d5bff1af9018e782c86d9d5ba3d971fedf.jpg)
Figure 4. Effects of $\delta$ and $\sigma$ on Splatting. The smoothness $\delta$ characterizes vertices and edges, from soft to hard, while the sharpness $\sigma$ characterizes radiance field transitions, from diffuse to dense.

# 3.2. 3DCS: Splatting 3D Smooth Convexes

Point-based 3D convex shape representation. Plane-based representations of 3D convexes are impractical for camera plane projections.
Unlike CvxNet [9], we define a 3D convex as the convex hull of a 3D point set $\mathbb{S} = \{\mathbf{p}_1, \mathbf{p}_2, \dots, \mathbf{p}_K\}$. During optimization, the 3D points can move freely, allowing for flexible positioning and morphing of the convex shape. Note that this set of $K$ points does not necessarily correspond to the explicit vertices of a convex polyhedron, but rather defines the hull of the 3D convex shape.

Differentiable projection onto the 2D image plane. To be efficient, we do not explicitly build the 3D convex hull and project it into 2D; instead, we project the 3D points into 2D and then construct their 2D convex hull. Specifically, we project each 3D point $\mathbf{p}_k \in \mathbb{S}$ onto the 2D image plane using the pinhole perspective camera projection model. The projection involves the intrinsic camera matrix $\mathbf{K}$ and extrinsic parameters (rotation $\mathbf{R}$ and translation $\mathbf{t}$):

$$
\mathbf{q}_{k} = \mathbf{K}\left(\mathbf{R}\mathbf{p}_{k} + \mathbf{t}\right), \quad \forall k = 1, 2, \dots, K. \tag{4}
$$

This projection is differentiable, allowing gradients to flow back to the 3D points during optimization.

2D convex hull computation. To construct the convex shape in 2D, we apply the Graham Scan algorithm [14], which efficiently computes the convex hull by retaining only the points that define the outer boundary of the projected shape. This ensures that the 2D projection accurately represents the convex outline needed for rendering. The Graham Scan first sorts the points by their polar angle relative to a reference point, then constructs the hull by iteratively adding points while maintaining convexity.
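As a reference for these two steps, here is a small, non-differentiable NumPy sketch. The perspective divide after Eq. (4) and the lowest-point pivot are our assumptions about conventions, the function names are ours, and the actual implementation runs in custom CUDA kernels.

```python
import numpy as np

def project_points(points, intr, R, t):
    """Project (N, 3) world points to (N, 2) pixels: K (R p + t),
    followed by the perspective divide that the homogeneous form implies."""
    cam = points @ R.T + t          # camera-frame coordinates
    hom = cam @ intr.T              # apply intrinsics
    return hom[:, :2] / hom[:, 2:3]

def _cross(o, a, b):
    """z-component of (a - o) x (b - o); negative means a right turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(pts2d):
    """Counter-clockwise convex hull of 2D points via Graham scan."""
    pts = [tuple(p) for p in pts2d]
    start = min(pts, key=lambda p: (p[1], p[0]))   # reference: lowest point
    rest = sorted((p for p in pts if p != start),
                  key=lambda p: np.arctan2(p[1] - start[1], p[0] - start[0]))
    hull = [start]
    for p in rest:
        # pop points that would make a right turn (non-convex corner)
        while len(hull) > 1 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```

For example, `graham_scan` applied to the five points `(0,0), (2,0), (2,2), (0,2), (1.2,0.8)` keeps only the four square corners and discards the interior point.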
The convexity is ensured by checking the cross product of the last two points $\mathbf{q}_i$ and $\mathbf{q}_j$ on the convex hull and the current point $\mathbf{q}_k$, removing points that form a right turn (negative cross product).

Differentiable 2D convex indicator function. We define the 2D convex indicator function of our convex hull by extending the smooth convex representation from 3D to 2D, reusing the equations introduced in Sec. 3.1. We define $\phi(\mathbf{q})$ and $I(\mathbf{q})$ as in Eqs. (2) and (3), but replace the 3D point $\mathbf{p}$ with the 2D point $\mathbf{q}$ and the planes delimiting the 3D convex hull by the lines delimiting the resulting 2D convex hull. The parameters $\sigma$ and $\delta$ are inherited from the 3D smooth convex (from Eqs. (2) and (3)) and still control the sharpness and smoothness of the projected 2D shape boundaries. To account for perspective effects in the 2D projection, we scale $\delta$ and $\sigma$ by the distance $d$, ensuring that the appearance of the convex shape remains consistent with respect to its distance to the camera.

Efficient differentiable rasterizer. To enable real-time rendering, we build our rasterizer following the 3DGS tile-based rasterizer [26], which allows for efficient backpropagation across an arbitrary number of primitives. All computations (3D-to-2D point projection, convex hull calculation, line segment definition, and indicator function evaluation) are fully differentiable and executed within our custom CUDA kernels to maximize efficiency and rendering speed. During rendering, we rasterize the 3D shape into the target view using $\alpha$-blending.
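As a minimal reference, the per-pixel front-to-back $\alpha$-compositing (the rule formalized in Eq. (5) just below) can be sketched as follows; the function name and toy inputs are ours:

```python
import numpy as np

def composite_pixel(colors, opacities, indicators):
    """Front-to-back alpha compositing at one pixel.

    colors: (N, 3) RGB of the N depth-sorted convexes (nearest first);
    opacities: (N,); indicators: (N,) values I(q) in [0, 1].
    Accumulates c_n * o_n * I_n weighted by the remaining transmittance.
    """
    color = np.zeros(3)
    transmittance = 1.0                 # running product of (1 - o_i I_i)
    for c, o, ind in zip(colors, opacities, indicators):
        alpha = o * ind
        color += transmittance * alpha * np.asarray(c, dtype=float)
        transmittance *= 1.0 - alpha
    return color
```

For instance, a half-opaque red convex in front of a fully opaque blue one (both with indicator 1 at the pixel) yields an even red/blue mix.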
For a given camera pose $\theta$ and $N$ smooth convexes, we render each pixel $\mathbf{q}$ by sorting the $N$ convexes by increasing distance from the camera to their centers and computing the color value:

$$
C(\mathbf{q}) = \sum_{n = 1}^{N} \mathbf{c}_{n} o_{n} I(\mathbf{q}) \left(\prod_{i = 1}^{n - 1} \left(1 - o_{i} I(\mathbf{q})\right)\right), \tag{5}
$$

where $\mathbf{c}_n$ is the color of the $n$-th smooth convex, stored as spherical harmonics and converted to color based on the pose $\theta$, $o_n$ is the opacity of the $n$-th shape, and $I(\mathbf{q})$ is the indicator function adapted to our case from Eq. (3).

# 3.3. Optimization

Initialization and losses. We optimize the position of each point set in 3D, the $\delta$ and $\sigma$ parameters, the opacity $o$, and the spherical harmonic color coefficients $\mathbf{c}$. To constrain opacity within the range [0,1], we apply a sigmoid activation function. For $\delta$ and $\sigma$, we use an exponential activation to ensure their values remain positive. We initialize each convex shape with a set of points uniformly distributed around a sphere centered at each point of the point cloud, using the Fibonacci sphere algorithm. We define the size of each convex shape based on its average distance to the three nearest smooth convexes. This results in smaller smooth convexes in densely populated regions and larger shapes in sparser areas, allowing for adaptive scaling based on local geometry. Following the approach used in 3DGS, we apply a standard exponential decay scheduling technique for the learning rate, similar to Plenoxels [13], but only for optimizing the position of the 3D points. We experimented with applying this technique to $\delta$ and $\sigma$ as well, but we did not observe any performance improvements. During training, we use the same regularization loss $\mathcal{L}_m$ as in [29] to reduce the number of convexes. Our final loss function combines $\mathcal{L}_1$ with a D-SSIM term and the mask loss, following [29]:

$$
\mathcal{L} = (1 - \lambda)\mathcal{L}_{1} + \lambda \mathcal{L}_{\text{D-SSIM}} + \beta \mathcal{L}_{m}, \tag{6}
$$

where $\lambda$ controls the balance between $\mathcal{L}_1$ and $\mathcal{L}_{\text{D-SSIM}}$. For all tests, we use $\lambda = 0.2$ as in 3DGS [26] and $\beta = 0.0005$.

![](images/ae242d3b81b74cac00ba08e00ad8902159332e7b90a0d6c43244c35c10606b15.jpg)
Figure 5. Adaptive Convex Densification Scheme. We divide each convex, here exemplified with $K = 8$ points, into as many scaled-down occurrences of the convex, centered at the initial points, each with reduced opacity.

Adaptive convex shape refinement. The initial set of smooth convexes is generated from a sparse point set obtained through Structure-from-Motion. Since this initial number of smooth convexes is insufficient to accurately represent complex scenes, we employ an adaptive control mechanism to add smooth convexes dynamically. In 3DGS, additional Gaussians are introduced by splitting or cloning those with large view-space positional gradients. However, in 3DCS, positional gradients do not consistently correspond to regions with missing geometric features ("under-reconstruction") or areas where convexes over-represent large portions of the scene ("over-reconstruction"). Instead, we observe that 3DCS exhibits a large sharpness $\sigma$ loss in both under-reconstructed and over-reconstructed regions. Rather than cloning and differentiating between small and large shapes, we consistently split our smooth convexes. Instead of splitting a smooth convex into just two new convexes, we split it directly into $K$ new convexes.
Each new convex shape is scaled down, and the centers of these new convexes correspond to the $K$ points defining the initial convex shape. By placing the centers of the new convexes at the 3D points of the initial convex shape, we ensure that the new shapes collectively cover the volume of the original convex to maintain the overall completeness of the 3D representation (see Fig. 5 for illustration). To encourage the formation of denser volumetric shapes during optimization, we increase the sharpness $\sigma$ throughout splitting, while keeping the smoothness $\delta$ the same. We prune convexes that have an opacity lower than a predefined threshold, as well as convexes that are too large. More details are provided in the experimental setup (Sec. 4.2).

# 4. Experiments

We first evaluate 3D Convex Splatting (3DCS) with synthetic experiments to showcase its superior shape representation over Gaussian primitives. Then, we describe the real-world setup, including tasks, datasets, baselines, metrics, and implementation, followed by results on novel view synthesis and an ablation study.

# 4.1. Experiments on Synthetic Data

Figure 6 compares the representation capabilities of using 1 or 8 Gaussian primitives against using a single convex shape defined by 3 or 6 points. The results demonstrate that smooth convexes effectively approximate a wide range of shapes, including both polyhedra and Gaussians, while requiring fewer primitives for accurate representation.

![](images/048f73e904a3842d105f1869fc35ab196a2c4f2e07273d20ff2109d7fa7cef71.jpg)
Figure 6. Reconstruction of Simple Shapes with Primitives. Smooth convex primitives reconstruct simple shapes better than Gaussians, as they can create sharper geometric boundaries. For 3DCS, the red lines describe the convex hull, whereas the black dots represent the point set. For 3DGS, the black dots represent the Gaussian centers.

# 4.2. Experimental Setup

Datasets.
To evaluate 3DCS on real-world novel view synthesis, we use the same datasets as 3DGS [26]. This + +
Tanks and Temples (T&T):

| Method | LPIPS↓ | PSNR↑ | SSIM↑ | Train↓ | FPS↑ | Mem↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Instant-NGP [39] | 0.305 | 21.92 | 0.745 | 7m | 14.4 | 48MB |
| ZipNeRF [4] | - | - | - | - | - | - |
| Mip-NeRF360 [3] | 0.257 | 22.22 | 0.759 | 48h | 0.14 | 8.6MB |
| 3DGS [26] | 0.183 | 23.14 | 0.841 | 26m | 154 | 411MB |
| GES [18] | 0.198 | 23.35 | 0.836 | 21m | 210 | 222MB |
| 2DGS [21]† | 0.212 | 23.13 | 0.831 | 14m | 122 | 200MB |
| 3DCS (light) | 0.170 | 23.71 | 0.842 | 46m | 40 | 83MB |
| 3DCS | 0.157 | 23.95 | 0.851 | 60m | 33 | 282MB |

Deep Blending:

| Method | LPIPS↓ | PSNR↑ | SSIM↑ | Train↓ | FPS↑ | Mem↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Instant-NGP [39] | 0.390 | 24.96 | 0.817 | 8m | 2.79 | 48MB |
| ZipNeRF [4] | - | - | - | - | - | - |
| Mip-NeRF360 [3] | 0.245 | 29.40 | 0.901 | 48h | 0.09 | 8.6MB |
| 3DGS [26] | 0.243 | 29.41 | 0.903 | 36m | 137 | 676MB |
| GES [18] | 0.252 | 29.68 | 0.901 | 30m | 160 | 399MB |
| 2DGS [21]† | 0.257 | 29.50 | 0.902 | 28m | 76 | 353MB |
| 3DCS (light) | 0.245 | 29.61 | 0.901 | 84m | 30 | 110MB |
| 3DCS | 0.237 | 29.81 | 0.902 | 71m | 30 | 332MB |

Mip-NeRF360:

| Method | LPIPS↓ | PSNR↑ | SSIM↑ | Train↓ | FPS↑ | Mem↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Instant-NGP [39] | 0.331 | 25.59 | 0.699 | 7.5m | 9.43 | 48MB |
| ZipNeRF [4] | 0.189 | 28.54 | 0.828 | 5h | 0.18 | 4569MB |
| Mip-NeRF360 [3] | 0.237 | 27.69 | 0.792 | 48h | 0.06 | 8.6MB |
| 3DGS [26] | 0.214 | 27.21 | 0.815 | 42m | 134 | 734MB |
| GES [18] | 0.250 | 26.91 | 0.794 | 32m | 186 | 377MB |
| 2DGS [21]† | 0.252 | 27.18 | 0.808 | 29m | 64 | 484MB |
| 3DCS (light) | 0.266 | 26.66 | 0.769 | 53m | 47 | 77MB |
| 3DCS | 0.207 | 27.29 | 0.802 | 87m | 25 | 666MB |
Table 1. Comparative Analysis of Novel View Synthesis Methods. We conduct a quantitative comparison of our 3DCS method across three datasets. 3DCS achieves higher-quality results in novel view synthesis with reduced memory consumption, all while achieving fast rendering performance. No codebooks or post-processing compression [29, 41] are used to reduce the size of any of the methods. The best performances are shown in red and the second best in orange. $\dagger$ indicates reproduced results.

includes two scenes from Deep Blending (DB) [19], two scenes from Tanks and Temples (T&T) [28], and all scenes from the Mip-NeRF360 dataset [3].

Baselines. We compare our 3D Convex Splatting method against methods that use three other basic primitives for novel view synthesis: 3D Gaussians [26], Generalized Exponential Functions (GES) [18], and 2D Gaussians [21]. While many follow-up studies have built upon 3DGS and introduced various enhancements, we focus on the basic primitives for comparison. This choice ensures that the evaluation is based on the core principles of each approach. We compare our method to ZipNeRF [4] and Mip-NeRF360 [3] for quality and Instant-NGP [39] for speed.

Metrics. We assess visual quality using SSIM, PSNR, and LPIPS, and report training time, rendering speed, and memory usage for a detailed comparison with other methods.

Implementation details of 3DCS. For each experiment, we set the number of points per convex shape to $K = 6$ and the spherical harmonic degree to 3, resulting in a total of 69 parameters per convex shape. For each 3D Gaussian, 59 parameters are needed. In the following sections, we compare two variants of 3DCS: a best-performing model and a lightweight variant. Our best model employs different hyperparameters for indoor and outdoor scenes, whereas the lightweight model uses a unified set of parameters.
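The 69- and 59-parameter counts can be sanity-checked. The breakdown below (SH degree 3 giving $(3+1)^2 = 16$ basis functions per color channel, plus opacity, $\delta$, and $\sigma$ for a convex, and mean, scale, and rotation quaternion for a Gaussian) is our reading of the parameterization, not code from the paper:

```python
# Parameters per 3D smooth convex: K point positions, SH color, opacity, delta, sigma.
K = 6
sh_color = (3 + 1) ** 2 * 3           # degree-3 spherical harmonics, 3 channels = 48
convex_params = K * 3 + sh_color + 3  # 18 positions + 48 color + (opacity, delta, sigma)

# Parameters per 3D Gaussian: mean, scale, rotation quaternion, opacity, SH color.
gaussian_params = 3 + 3 + 4 + 1 + sh_color

print(convex_params, gaussian_params)  # 69 59
```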
For our light version, we increase the threshold criterion for densifying convexes, effectively reducing the number of shapes. Furthermore, we store the 3D convex parameters with 32-bit precision for the high-quality model and 16-bit precision for the lightweight version. A list of hyperparameters can be found in the Supplementary Material. Notably, no compression methods are applied to reduce memory usage.

# 4.3. Real-world Novel View Synthesis

Main results. Table 1 presents the quantitative results. As can be seen, our 3DCS method consistently matches or surpasses the rendering quality of existing methods across all evaluated datasets. Specifically, 3DCS outperforms 3DGS, GES, and 2DGS in most metrics on the T&T and DB datasets, while also achieving the second-highest PSNR and lowest LPIPS on the Mip-NeRF360 dataset. 3DCS effectively balances memory usage and training time, sitting between those of Mip-NeRF360 and 3DGS. In particular, while it consumes more memory than Mip-NeRF360, it significantly reduces training time, requiring only 63 minutes compared to the 48 hours of Mip-NeRF360. Moreover, it delivers better visual quality, especially on the T&T dataset, where 3DCS demonstrates a notable performance advantage of over 1.73 PSNR compared to Mip-NeRF360. In comparison with 3DGS, 3DCS exhibits a slightly longer training time and lower rendering speed. Yet, 3DCS still operates within real-time rendering capabilities. Thanks to its greater adaptability, 3DCS efficiently utilizes only $70\%$ of the memory needed by 3DGS, while achieving higher visual quality. Figure 7 provides a qualitative comparison between 3DCS, 3DGS, and 2DGS. Notably, our method achieves sharp and detailed rendering even in challenging regions, e.g., the background in the Train scene. In contrast, Gaussian primitives tend to oversmooth areas, resulting in images with pronounced artifacts, as observed in the Flower, Bicycle, and Truck scenes.
The convex-based approach, however, produces results that closely align with the ground truth, showcasing higher fidelity and a superior ability to realistically represent 3D environments.

![](images/c7cfb9fd7def16160005b559bbdfe3b9128d26fabb32a2cfe0619c4eca680c37.jpg)
Figure 7. Qualitative Comparison between 3DCS, 3DGS and 2DGS. Our 3DCS captures finer details and provides a more accurate approximation of real-world scenes compared to Gaussian splatting methods, which often produce blurrier results.

![](images/daa30ab4b85ec8eecdfd054722e6faeabf7eb4707ec641c1d75d5d2906722abe.jpg)

![](images/301a1bb26b3ebd6bae8700c3e8db81184a30cc5782147e00af833c6c4b928c9d.jpg)

![](images/2675f1630ec89a37de46d70fdee1ba1f822916ed09ad3e561691b1714bfdc001.jpg)

3DCS light outperforms 3DGS and GES on the T&T and DB datasets, while using less memory. Figure 8 contains a visual comparison between 3DGS and light 3DCS.

![](images/c028eab516618a6a2384d7e2fb4698b8bc1056470ad74f960458dbb022d67041.jpg)
Figure 8. Visual Comparison Between our Light Model and 3DGS. The light model (right) shows high visual quality compared to 3DGS (left), using less than $15\%$ of the memory.

![](images/913f537196c1005f1d9b05e5c9ded1d9e8647015890cf6816e96813a86793027.jpg)

Indoor versus outdoor scenes. Table 2 presents a comparative analysis of indoor versus outdoor scenes from the Mip-NeRF360 dataset. Indoor scenes consist of structured, flat surfaces with hard edges, while outdoor scenes generally have more unstructured surfaces. This structural difference favors convex shapes, which are better suited for capturing the geometric characteristics of indoor environments. In fact, for indoor scenes, 3DCS significantly outperforms 3DGS with an improvement of 0.9 PSNR, 0.007 SSIM, and 0.023 LPIPS, surpassing all other Gaussian-based methods. Moreover, 3DCS achieves superior results in terms of the SSIM and LPIPS metrics compared to Mip-NeRF360.

| Method | LPIPS↓ (Outdoor) | PSNR↑ (Outdoor) | SSIM↑ (Outdoor) | LPIPS↓ (Indoor) | PSNR↑ (Indoor) | SSIM↑ (Indoor) |
| --- | --- | --- | --- | --- | --- | --- |
| MipNeRF360 | 0.283 | 24.47 | 0.691 | 0.180 | 31.72 | 0.917 |
| 3DGS | 0.234 | 24.64 | 0.731 | 0.189 | 30.41 | 0.920 |
| GES | 0.243 | 24.46 | 0.724 | 0.189 | 30.85 | 0.922 |
| 2DGS | 0.246 | 24.34 | 0.717 | 0.195 | 30.40 | 0.916 |
| 3DCS (ours) | 0.238 | 24.07 | 0.700 | 0.166 | 31.33 | 0.927 |

Table 2. Quantitative Results on the Mip-NeRF360 [3] Dataset. We evaluate our method on both indoor and outdoor scenes, demonstrating substantial performance improvements over all 3DGS-based methods in indoor scenes and surpassing Mip-NeRF360 in the SSIM and LPIPS metrics.

Even in outdoor scenes containing many human-made structures, such as the Truck and Train scenes from T&T, 3DCS substantially outperforms 3DGS, demonstrating its ability to effectively handle structured geometries. However, in outdoor scenes dominated by nature and unstructured elements like trees and vegetation, the strengths of 3DCS become less pronounced. While 3DGS and 3DCS achieve comparable LPIPS results, 3DGS achieves better PSNR and SSIM. Yet, qualitatively, we can see in Fig. 7 that 3DCS appears significantly closer to the ground truth in terms of visual quality. Specifically, in the highlighted region of the Flower scene, our reconstruction better represents the real grass even though the PSNR of this area is 20.17 for 3DCS and 21.65 for 3DGS. This showcases the commonly observed mismatch between PSNR and perceived visual quality, mainly because PSNR is highly sensitive to pixel-level differences and therefore tends to favor blurrier images.

# 4.4. Ablation Study and Discussion

We analyze the impact of densification strategies, the number of points per convex shape, and the influence of reducing the number of shapes on rendering quality.

Densification strategy.
We evaluate the effectiveness of splitting each convex shape into new convex shapes, as described in Sec. 3.3. Specifically, we analyze the impact of dividing a convex shape defined initially by $K = 6$ points into 2, 3, or 6 (default value) new convex shapes. For splitting into 2 or 3 shapes, the new convex shapes are centered on 2 or 3 randomly selected points from the original convex shape. As can be seen in Fig. 9, splitting a convex shape into more shapes results in higher visual quality, particularly in capturing finer details in the background.

![](images/1d5a332d1066546244c7ee6e7476c682ad0fed0c76413a8d00f723f6f4d1e9ac.jpg)
Figure 9. Ablation of densification strategy. From left to right, we split each convex into 2, 3, or 6 new convexes.

![](images/c17e66ea696c8510a940ed5a7376f6c27a2ad7c6ec19aa516536caab202d399f.jpg)

![](images/cc2ee4ec422a259fcc31666106dacc89c9cc3f8eb9cb576a4110200606d56134.jpg)
Figure 11. 3DCS vs. 3DGS with fewer shapes. Convex splatting (left) can decompose objects into meaningful convex shapes, enabling a realistic and compact 3D representation of the world.

|  | LPIPS↓ | PSNR↑ | SSIM↑ | Train↓ |
| --- | --- | --- | --- | --- |
| 3DCS (K=3) | 0.241 | 22.40 | 0.794 | 44m |
| 3DCS (K=4) | 0.159 | 23.73 | 0.848 | 52m |
| 3DCS (K=5) | 0.160 | 23.70 | 0.848 | 60m |
| 3DCS (K=6) | 0.157 | 23.90 | 0.850 | 71m |
| 3DCS (K=7) | 0.157 | 23.90 | 0.851 | 73m |

Table 3. Ablation Study of the Number of Points per Convex. We study the impact of the number of points per convex on reconstruction quality and training time on the T&T dataset. With only 4 points, our 3DCS performs better than 3DGS.

![](images/8506cc08c0a10e5e601c8f311e615173f4b31804626f64a705dd89f7a29b658b.jpg)
Figure 10. Number of Parameters vs. LPIPS↓ (Truck scene). The number of primitives is indicated for each point. 3DCS achieves a better regime than 3DGS for a comparable number of parameters.

Number of points per shape. Increasing $K$ provides greater flexibility in representing convex shapes but comes at the cost of longer training times. Notably, the case of 3 points represents a special configuration, resulting in a non-volumetric triangle in 3D space, analogous to the 2D Gaussian Splatting approach [21] in terms of its dimensionality constraints. Table 3 shows that using $K \geq 4$ points per convex consistently outperforms 3DGS. However, increasing beyond 6 points brings no significant performance gain.

Influence of fewer primitives on rendering quality. Figure 10 shows how LPIPS on the T&T dataset changes with the number of primitives and parameters. Notably, our method 3DCS consistently outperforms 3DGS.

![](images/c1a2a113791cab791b330098df4251edb3d6e320ea0afd32410861253e8a8979.jpg)

![](images/5773ddadca57f5badaa3ffc067445c54d802e79aa578ef0a0e6847344abd7974.jpg)

Physically meaningful 3D representations. Figure 11 visually compares the performance of 3DGS and 3DCS when the number of primitives $N$ is reduced.
With fewer shapes, 3DGS produces blurry images due to the limited flexibility of 3D Gaussians, which struggle to form visually meaningful representations of real objects. In contrast, 3DCS preserves image clarity by effectively decomposing objects into convex shapes. 3DCS represents the leaves on the stump as either a single convex shape or a collection of convex shapes, with each shape capturing a physically meaningful part of the real-world object. Ultimately, 3DCS offers a significant advantage by delivering more physically meaningful 3D representations. By leveraging the adaptability of convex shapes, we bridge the gap between visual accuracy and interpretability, enabling high-quality, geometrically meaningful 3D modeling.

# 5. Conclusion

We introduce 3D Convex Splatting (3DCS), a novel method for radiance field rendering that leverages 3D smooth convexes to achieve high-quality novel view synthesis. In particular, our method overcomes the limitations of 3D Gaussian Splatting, delivering denser representations with fewer primitives and parameters. Furthermore, 3DCS demonstrates substantial improvements on the novel view synthesis task. By combining the adaptability of convex shapes with the efficiency of primitive-based radiance field rendering, 3DCS achieves high-quality, real-time, and flexible radiance field reconstruction. We envision this new primitive setting the ground for further research in the field.

Acknowledgments. J. Held, A. Deliege and A. Cioppa are funded by the F.R.S.-FNRS. The research reported in this publication was supported by funding from the KAUST Center of Excellence on GenAI, under award number 5940. This work was also supported by the KAUST Ibn Rushd Postdoc Fellowship program. The present research benefited from computational resources made available on Lucia, the Tier-1 supercomputer of the Walloon Region, infrastructure funded by the Walloon Region under grant agreement n°1910247.
# References

[1] Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M. Seitz, and Richard Szeliski. Building Rome in a day. Commun. ACM, 54(10):105-112, 2011. 1, 2
[2] Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 5835-5844, Montréal, Can., 2021. 2
[3] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 5460-5469, New Orleans, LA, USA, 2022. 2, 6, 7
[4] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Zip-NeRF: Anti-aliased grid-based neural radiance fields. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 19640-19648, Paris, Fr., 2023. 2, 6
[5] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini de Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 16102-16112, New Orleans, LA, USA, 2022. 2
[6] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields. In Eur. Conf. Comput. Vis. (ECCV), pages 333-350. Springer Nat. Switz., 2022. 2
[7] Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. BSP-net: Generating compact meshes via binary space partitioning. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 42-51, Seattle, WA, USA, 2020. 3
[8] John H. Conway and Neil J. A. Sloane. Sphere Packings, Lattices and Groups. Springer New York, 1999. 2
[9] Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi.
CvxNet: Learnable convex decomposition. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 31-41, Seattle, WA, USA, 2020. 1, 2, 3, 4 +[10] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. In ACM Int. Conf. Comput. Graph. Interact. Tech. (SIGGRAPH), pages 65-74, Atlanta, Georgia, USA, 1988. 2 +[11] Yilun Du, Cameron Smith, Ayush Tewari, and Vincent Sitzmann. Learning to render novel views from wide-baseline stereo pairs. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 4970-4980, Vancouver, Can., 2023. 2 +[12] Olivier D. Faugeras. What can be seen in three dimensions with an uncalibrated stereo rig? In Eur. Conf. Comput. Vis. (ECCV), pages 563-578. Springer Berl. Heidelberg., 1992. 1, 2 +[13] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In IEEE/CVF Conf. + +Comput. Vis. Pattern Recognit. (CVPR), pages 5491-5500, New Orleans, LA, USA, 2022. 2, 5 +[14] Ronald L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Inf. Process. Lett., 1(4): 132-133, 1972. 3, 4 +[15] Markus Gross and Hanspeter Pfister. Point-Based Graphics. Morgan Kauffmann Publ. Inc., 2007. 2 +[16] Thomas C. Hales. The sphere packing problem. J. Comput. Appl. Math., 44(1):41-76, 1992. 2 +[17] Abdullah Hamdi, Bernard Ghanem, and Matthias Nießner. SPARF: Large-scale learning of 3D sparse radiance fields from few input images. In IEEE/CVF Int. Conf. Comput. Vis. Work. (ICCV Work.), pages 2922-2932, Paris, Fr., 2023. 1 +[18] Abdullah Hamdi, Luke Melas-Kyriazi, Jinjie Mai, Guocheng Qian, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, and Andrea Vedaldi. GES: Generalized exponential splatting for efficient radiance field rendering. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 19812-19822, Seattle, WA, USA, 2024. 
3, 6
[19] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Trans. Graph., 37(6):1-15, 2018. 6
[20] Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 5855-5864, Montréal, Can., 2021. 2
[21] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2D Gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH Conf. Pap., pages 1-11, Denver, CO, USA, 2024. 3, 6, 8
[22] Yi-Hua Huang, Ming-Xian Lin, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Deformable radial kernel splatting. arXiv, abs/2412.11752, 2024. 3
[23] Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting NeRF on a diet: Semantically consistent few-shot view synthesis. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 5865-5874, Montréal, Can., 2021. 2
[24] James T. Kajiya and Brian P. Von Herzen. Ray tracing volume densities. ACM SIGGRAPH Comput. Graph., 18(3):165-174, 1984. 2
[25] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3D mesh renderer. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 3907-3916, Salt Lake City, UT, USA, 2018. 2
[26] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuehler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):1-14, 2023. 1, 2, 3, 4, 5, 6
[27] Mijeong Kim, Seonguk Seo, and Bohyung Han. InfoNeRF: Ray entropy minimization for few-shot neural volume rendering. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 12902-12911, New Orleans, LA, USA, 2022. 2

[28] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: benchmarking large-scale scene reconstruction. ACM Trans. Graph., 36(4):1-13, 2017.
6 +[29] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3D Gaussian representation for radiance field. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 21719-21728, Seattle, WA, USA, 2024. 5, 6 +[30] Marc Levoy. Efficient ray tracing of volume data. ACM Trans. Graph., 9(3):245-261, 1990. 2 +[31] Hsueh-Ti Derek Liu, Michael Tao, and Alec Jacobson. Paparazzi: surface editing by way of multi-view image processing. ACM Trans. Graph., 37(6):1-11, 2018. 2 +[32] Shichen Liu, Weikai Chen, Tianye Li, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3D reasoning. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 7707-7716, Seoul, South Korea, 2019. +[33] Matthew M. Loper and Michael J. Black. OpenDR: An approximate differentiable renderer. In Eur. Conf. Comput. Vis. (ECCV), pages 154-169. Springer Int. Publ., 2014. 2 +[34] Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3D Gaussians: Tracking by persistent dynamic view synthesis. In Int. Conf. 3D Vis. (3DV), pages 800-809, Davos, Switzerland, 2024. 3 +[35] Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T. Barron, and Yinda Zhang. EVER: Exact volumetric ellipsoid rendering for real-time view synthesis. arXiv, abs/2410.01804, 2024. 3 +[36] Jinjie Mai, Wenxuan Zhu, Sara Rojas, Jesus Zarzar, Abdullah Hamdi, Guocheng Qian, Bing Li, Silvio Giancola, and Bernard Ghanem. TrackNeRF: Bundle adjusting NeRF from sparse and noisy views via feature tracks. In Eur. Conf. Comput. Vis. (ECCV), Milan, Italy, 2024. 1 +[37] Nelson Max. Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph., 1(2):99-108, 1995. 2 +[38] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis. (ECCV), pages 405-421. Springer Int. Publ., 2020. 
1, 2
[39] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):1-15, 2022. 2, 6
[40] Richard A. Newcombe, Andrew Fitzgibbon, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, and Steve Hodges. KinectFusion: Real-time dense surface mapping and tracking. In IEEE Int. Symp. Mix. Augment. Real., pages 127-136, Basel, Switzerland, 2011. 1
[41] Simon Niedermayr, Josef Stumpfegger, and Rüdiger Westermann. Compressed 3D Gaussian splatting for accelerated novel view synthesis. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 10349-10358, Seattle, WA, USA, 2024. 6
[42] Despoina Paschalidou, Ali Osman Ulusoy, and Andreas Geiger. Superquadrics revisited: Learning 3D shape parsing beyond cuboids. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 10336-10345, Long Beach, CA, USA, 2019. 3
[43] Felix Petersen, Amit H. Bermano, Oliver Deussen, and Daniel Cohen-Or. Pix2Vex: Image-to-geometry reconstruction using a smooth differentiable renderer. arXiv, abs/1903.11149, 2019. 2
[44] Franco P. Preparata and Sie June Hong. Convex hulls of finite sets of points in two and three dimensions. Commun. ACM, 20(2):87-93, 1977. 3
[45] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 14315-14325, Montréal, Can., 2021. 2
[46] Christian Reiser, Richard Szeliski, Dor Verbin, Pratul Srinivasan, Ben Mildenhall, Andreas Geiger, Jon Barron, and Peter Hedman. MERF: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes. ACM Trans. Graph., 42(4):1-12, 2023. 2
[47] Daxuan Ren, Haiyi Mei, Hezi Shi, Jianmin Zheng, Jianfei Cai, and Lei Yang. Differentiable convex polyhedra optimization from multi-view images. In Eur. Conf. Comput. Vis.
(ECCV), pages 251-269. Springer Nat. Switz., 2024. 3
[48] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 5449-5459, New Orleans, LA, USA, 2022. 2
[49] Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, and Jitendra Malik. Learning shape abstractions by assembling volumetric primitives. arXiv, abs/1612.00404, 2016. 3
[50] Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, and Pratul P. Srinivasan. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 5481-5490, New Orleans, LA, USA, 2022. 2
[51] Xinyue Wei, Minghua Liu, Zhan Ling, and Hao Su. Approximate convex decomposition for 3D meshes with collision-aware concavity and tree search. arXiv, abs/2205.02961, 2022. 3
[52] Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, and Ben Mildenhall. BakedSDF: Meshing neural SDFs for real-time view synthesis. In ACM SIGGRAPH Conf. Proc., pages 1-9, Los Angeles, CA, USA, 2023. 2
[53] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelNeRF: Neural radiance fields from one or few images. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 4576-4585, Nashville, TN, USA, 2021. 2
[54] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3D Gaussian splatting. In IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 19447-19456, Seattle, WA, USA, 2024. 3
[55] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. NeRF++: Analyzing and improving neural radiance fields. arXiv, abs/2010.07492, 2020.
2 \ No newline at end of file diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/images.zip b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b932c1ac8f85695944c882ee394f4f1dbaa11df3 --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:343d66a57fc9634c98b5b3aa5f83e0624e98a100b181cc989c1ff1694e24b88f +size 669164 diff --git a/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/layout.json b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..986bb8cd9b45c50bd3882f4fe80f362ffd87137d --- /dev/null +++ b/CVPR/2025/3D Convex Splatting_ Radiance Field Rendering with 3D Smooth Convexes/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b9e56d1f9ff8e4f22208740141eb349f601cea288529fb35345282b2042cf44 +size 392989 diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_content_list.json b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..43dbf83e0b88824af3439e8d34d9a62aa03d428c --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bfdc87a50cf450c86baca310f733945c74c573cb85a2b15b322e99d9e5f4d7b +size 89811 diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_model.json 
b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0fbf0ab0964ad861543e352b8873bd3b11e3e721 --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c21cde8aae50f5f3d741adfc4d3c3da56e9a618015accc42c04d8f29cfca51e6 +size 107846 diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_origin.pdf b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d0e37cc4e81de3ae83d479fd424e2064fe9afefb --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/532cf63b-00fc-4ff9-ad5b-7735e7795d09_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7ba3ab086c80252d47f0c2ccbe2ef6d571867b7c3a10768025155f78c414a23 +size 4761908 diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/full.md b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c5222b1385251d09dc87cd73f6424c8ef8e30369 --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/full.md @@ -0,0 +1,408 @@ +# 3D Dental Model Segmentation with Geometrical Boundary Preserving + +Shufan Xi $^{1}$ , Zexian Liu $^{1}$ , Junlin Chang $^{1,3}$ , Hongyu Wu $^{1*}$ , Xiaogang Wang $^{2}$ , Aimin Hao $^{1}$ $^{1}$ State Key Laboratory of Virtual Reality Technology and Systems, Beihang University + $^{2}$ College of Computer and Information Science, Southwest University + $^{3}$ Peng Cheng 
Laboratory

{xsfvrlab, liuzexian, changjunlin, whyvrlab, ham}@buaa.edu.cn, wangxiaogang@swu.edu.cn

# Abstract

3D intraoral scan meshes are widely used in digital dentistry diagnosis, and segmenting them is a critical preliminary task. Numerous approaches have been devised for precise tooth segmentation. Current deep learning-based methods achieve highly accurate crown segmentation; however, segmentation accuracy at the junction between the crown and the gum is still below average, and existing downsampling methods cannot effectively preserve the geometric details at this junction. To address these problems, we propose CrossTooth, a boundary-preserving segmentation method that combines selective 3D mesh downsampling, which retains more vertices in the tooth-gingiva area, with cross-modal discriminative boundary features extracted from multi-view rendered images, enhancing the geometric representation of the segmentation network. Using a point network as a backbone and incorporating complementary image features, CrossTooth significantly improves segmentation accuracy, as demonstrated by experiments on a public intraoral scan dataset. The source code is available at https://github.com/XiShuFan/CrossTooth_CVPR2025

# 1. Introduction

Intraoral scanning is widely used in the field of digital dentistry, and crown segmentation is a key step. The main challenges [1] in crown segmentation arise from the intrinsic similarity between crown shapes, the close arrangement of adjacent teeth, and abnormal teeth with irregular shapes. Additionally, external devices such as braces can alter the crown structure, making crown segmentation difficult.

To address these challenges, various methods have been proposed. Traditional methods rely on manually defined features for segmentation, including curvature-based methods [34, 40], contour-based methods [25], and harmonic field-based methods [18, 49]. Frequently used curvature-based methods depend heavily on the accuracy of the intraoral scanner and the condition of the patient's teeth. In areas where curvatures are not apparent, or in regions with steep curvatures such as tooth crowns, these methods perform poorly and require manual correction, which lacks robustness.

![](images/e890c8d4657ac7c7eda1579567ee38191049d2e8ff6ce86be0cc09563f43c5d0.jpg)
Figure 1. An illustration of a 3D intraoral scan model. The original intraoral scan consists of points and triangles and can be visualized with a mean curvature histogram; the redder the color, the lower the curvature. Deep learning methods usually take sampled points as input, with blurry boundaries, but tooth edges are clearer in images rendered from the intraoral scan than in the sampled points, as indicated by the red and blue boxes in the zoomed views.

Deep learning methods, driven by data, have surpassed traditional ones. Several works have applied deep learning methods directly on 3D intraoral scan models for segmentation. Some methods [17, 28, 29, 46, 47] build networks in a local-global feature fusion paradigm, where the network learns both local details and global context information from the point clouds. Others [39, 41] follow the encoder-decoder structure, integrating multi-scale features using skip connections and layer-wise aggregation to better perceive small objects. Tooth centroid and bounding box detection methods [4, 23, 31], inspired by object detection networks like Faster-RCNN [24], first detect every single tooth and then segment the cropped regions at higher resolution. The latest methods [5, 10, 43] decouple coordinates and normal vectors from intraoral scan triangles, further improving performance by treating them as two different streams for learning.

Accurate identification of the crown-gingiva boundary is critical to subsequent data processing.
While existing methods have already achieved high segmentation accuracy, the crown-gingiva boundary area is still not well handled. This is because current deep learning-based methods cannot retain enough boundary details due to uniform mesh downsampling, and they fail to fully exploit the network's ability to learn boundary features from coordinates and normal vectors alone, as shown in Fig. 1.

To address the above problems, we propose a boundary-preserving segmentation method named CrossTooth. We take curvature information from intraoral scans and dense features from rendered images into account, improving the overall segmentation performance, especially in tooth-gingiva areas. CrossTooth uses a carefully designed selective downsampling method that preserves triangles at the tooth boundary while reducing the intraoral scan mesh to a specified number of triangles. Additionally, we extract discriminative boundary features from multi-view rendered images, which complement the sparse point cloud features.

The main contributions of our work can be summarized as follows:

- We propose a selective downsampling method. Guided by the curvature of the intraoral scan model, it retains more geometric details and provides discriminative crown-gingiva boundary features.
- We use multi-view rendering under specific lighting conditions to generate distinctive shading along the crown-gum boundary. By projecting rendered image features back onto the point cloud, our method enriches the sparse point cloud data with dense image features.
- Our CrossTooth is evaluated on a public intraoral scan dataset [1]. The experimental results show that it significantly outperforms state-of-the-art segmentation methods.

# 2. Related Work

# 2.1. Point Cloud Downsampling

A point cloud is a form of 3D data representation composed of numerous unordered points, each characterized by spatial coordinates, color, and normal vectors.
In deep learning tasks, the point cloud needs to be downsampled to a fixed number of points to form mini-batches. Popular downsampling methods include voxel-grid sampling, farthest point sampling, and quadric error metric (QEM) sampling.

Voxel-grid sampling [30, 41] divides the point cloud into small 3D grids and uses the center point of each grid to represent the original point set. Farthest point sampling [21] simplifies the point cloud by recursively selecting the point farthest from the current set, ensuring a uniform distribution of sampled points. Quadric error metric [6] sampling constructs a loss function measuring the geometric error introduced by merging vertices, balancing sampling quality and computational efficiency. Among the three, QEM is the most widely used, as it is particularly effective at preserving edge and triangle topology.

However, in the intraoral scan segmentation task, all of these downsampling methods fail to retain the details at the boundaries. The loss of edge triangles leads to blurred boundaries and significantly impacts dense prediction tasks such as point cloud segmentation.

# 2.2. 3D Intraoral Scan Segmentation

Point cloud segmentation methods are generally divided into voxel-based and point-based approaches, with the latter being more commonly used. PointNet [21], as the pioneering point network, uses multi-layer perceptrons and pooling layers to learn global features. However, it could not perceive local relationships in point clouds, leading to the development of PointNet++ [22], which implements a hierarchical feature extraction and fusion strategy. DGCNN [33] further introduces the EdgeConv operator, learning contextual information through linear aggregation of central point features and edge features. Lastly, Point Transformer [45] provides a transformer-based framework for point networks, achieving state-of-the-art performance.

To accurately segment teeth from intraoral scans, Zhao et al.
[46] constructs graph relationships among triangles, and uses graph attention convolution layers to extract fine-grained local geometric features and global features for comprehensive information. Lian et al. [17] proposes MeshSegNet based on PointNet, integrating adjacency matrices to extract multi-scale local contextual features. Xiong et al. [36] introduces TSegFormer with transformer layers, leveraging self-attention to capture dependencies between teeth while learning complex tooth shapes and structures.

The above methods generally treat global and local features in separate branches, leading to insufficient fusion between high- and low-level features. Other works have explored encoder-decoder methods with multi-scale feature fusion. Yuan et al. [39] proposes using a preliminary feature extraction module and a weighted local feature aggregation module to retain fine details at tooth-gingiva boundaries. Zanjani et al. [41] introduces an end-to-end framework based on PointCNN [15], employing Monte Carlo-based non-uniform resampling to train and deploy models at the highest available spatial resolution.

![](images/1f585d4997952631211c3f443dcd94c628dc8a82d33a01246a0e049cd546f995.jpg)
Figure 2. Architecture of CrossTooth. The point network takes points from the intraoral scan model after selective downsampling as inputs and adopts a multi-scale encoder-decoder structure. The downsample block uses kNN to aggregate features from neighboring points, the transformer block applies a self-attention mechanism to learn long-sequence contextual information, and the upsample block fuses features from the encoder and decoder, illustrated by (a) to (c) respectively. The image network takes rendered pictures and concatenates local-global features for downstream tasks. Then, following correspondences between image and point, image features are projected back onto the points for further fusion, illustrated by (d). Common MLP and CNN layers are used to produce the final segmentation masks.
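The image-to-point feature projection illustrated in Fig. 2(d), where each visible 3D point gathers the dense feature at its pixel location, can be sketched as follows. The pinhole-camera model, function name, and array shapes here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def project_image_features_to_points(points, feat_map, K, R, t):
    """Gather a per-pixel feature for each visible 3D point (illustrative sketch).

    points:   (N, 3) point coordinates in world space
    feat_map: (H, W, C) dense feature map produced by the image network
    K:        (3, 3) camera intrinsics; R, t: world-to-camera extrinsics
    Points projecting outside the image (or behind the camera) get zero features.
    """
    H, W, C = feat_map.shape
    cam = points @ R.T + t                       # world -> camera coordinates
    uvw = cam @ K.T                              # camera -> homogeneous pixel coords
    w = np.where(uvw[:, 2:3] != 0, uvw[:, 2:3], 1.0)
    uv = uvw[:, :2] / w                          # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    point_feats = np.zeros((points.shape[0], C), dtype=feat_map.dtype)
    point_feats[valid] = feat_map[v[valid], u[valid]]  # nearest-pixel gather
    return point_feats
```

In a multi-view setup, features gathered from several cameras would then be fused (e.g. averaged or max-pooled) per point before being concatenated with the point-network features.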
+ +Recently, some studies have found that separating triangle coordinates and normal vectors into two branches improves performance. Zhang et al. [43] introduces TSGC-Net, a dual-branch graph convolutional network that learns multi-view geometric information from both coordinates and normal vectors. Duan et al. [5] extends this strategy with SGTNet, using a graph transformer module to enhance boundary classification accuracy. + +# 2.3. Point Cloud Edge Segmentation + +The accuracy of point cloud boundary segmentation is crucial, especially in indoor scene segmentation, outdoor road segmentation, and medical organ segmentation tasks. + +Hu et al. [9] proposes a dual-branch fully convolutional network on S3DIS and ScanNet datasets for joint semantic segmentation and edge detection. With a boundary map generation module and a well-designed edge loss function, the network is constrained to achieve more precise segmentation at object boundaries. Tang et al. [29] analyzed the performance of existing point cloud segmentation methods at scene boundaries, and proposed the Contrastive Boundary Learning (CBL) framework to optimize boundary dis + +tinctiveness. CBL contrasts boundary point features during point cloud downsampling stages by push and pulls operations on boundary points from different classes. Gong et al. [7] introduces a Boundary Prediction Module to predict boundaries by discriminative aggregation of features within a neighborhood, preventing local features of different categories from interfering with each other. Liu et al. [19] is the first to focus on boundary segmentation in medical point cloud tasks, proposing a graph-based boundary-aware network for intracranial aneurysm and tooth point cloud segmentation. + +These above works demonstrate the growing importance of precise boundary segmentation in point cloud tasks, especially for complex scenarios. + +# 3. Methods + +# 3.1. 
Overview

We propose CrossTooth, a boundary-preserving method for intraoral scan segmentation, illustrated in Fig. 2. Instead of relying on the commonly used QEM [6], we develop a selective downsampling method that better preserves boundary details. Additionally, multi-view image rendering is used to capture dense information, with an image segmentation network trained to learn discriminative boundary features. Finally, image features are projected back onto the point cloud for a more comprehensive understanding.

Different from existing methods, our CrossTooth adopts a complementary approach, learning feature representations from multi-view rendered images and discrete point clouds respectively. By merging these two streams through an MLP at the final stage, CrossTooth compensates for the information loss inherent in point clouds, as illustrated in Fig. 1, leading to improved dense segmentation predictions.

# 3.2. Selective Downsampling

A high-resolution intraoral scan model contains more than 100,000 points, and most works downsample intraoral scans to 16,000 points for training. Frequently used downsampling methods fall into two categories. The first type uses point cloud downsampling methods [4, 14, 19, 20, 23, 32, 42, 48], which cannot preserve the structure of edges and triangles. The second type performs surface reduction [2, 5, 10-12, 16, 17, 26, 27, 35, 37, 43, 46, 47] on the model, which maintains its original topological structure. However, even the second type cannot retain tooth boundary information.

We propose a novel boundary-preserving downsampling method for intraoral scan models, termed selective downsampling. This approach builds upon the well-established QEM [6] downsampling method and incorporates curvature prior information to enhance its performance.
Here, we provide a brief review of QEM.

The QEM algorithm simplifies a model by merging two edge points $v_{1}$ and $v_{2}$ into a single point $v$, while keeping $v$ as close as possible to the original model. The squared Euclidean distance between $v$ and the corresponding local surface of the model is used as the error metric. Defining $\mathrm{plane}(v)$ as the set of original triangles adjacent to point $v$, the optimization objective is:

$$
v = \operatorname {a r g m i n} _ {v} \sum_ {\substack {p \in \text {plane} (v _ {1}) \\ \cup \text {plane} (v _ {2})}} k \cdot \text {distance} (v, p) ^ {2}, \quad (1)
$$

where $k$ is the weight of the distance from point $v$ to each related triangle. The optimal target $v$ can be obtained by solving for the extremum of a quadratic function.

In the original QEM algorithm, $k$ is fixed to 1, giving every edge the same weight, whereas in our selective downsampling algorithm we set $k$ according to the curvature of points $v_{1}$ and $v_{2}$. Curvature describes the degree of bending of a geometric surface: noticeable negative curvature is observed at tooth boundaries, while sharp positive curvature appears at the top of tooth crowns. We therefore apply a larger coefficient to negative-curvature edges ($k = 10$) and a smaller coefficient to positive-curvature edges ($k = 1$) during the QEM iterations. After each round of iteration, the curvature of the current model is recalculated. The procedure is summarized in Algorithm 1.

![](images/71d771ec32460ce67701cdc04a2da1c0cb25544222c59ecdf98604ec2a008234.jpg)
(a) original intraoral scan

![](images/4252e2c72b590c332bafb55c5fa944bc44f01fb7688c173c8917941399c1d152.jpg)
(b) QEM downsample

![](images/5b9626489f4477b781917e86951332dd4251e1eb79d452b92499553c2c29d5af.jpg)
(c) selective downsample

![](images/5bafdd27929576a5afe0adee5aa48b516b07cde54316c33603cf1d7ea5ce6b58.jpg)
(d) zoom-in comparison

![](images/7d76c701852100b1496d1a7ba7ba0de56b162e03aad5547766c97cdf00e76f65.jpg)
Figure 3. Comparison of the QEM and selective downsampling methods. Our method preserves tooth boundaries better than QEM, as visualized in the density histogram (d). Quantitatively, selective downsampling yields $10\%$ to $15\%$ higher point density than QEM in boundary areas.

The selective downsampling algorithm fully exploits the curvature of the intraoral scan model: it applies mild downsampling at tooth boundaries and aggressive downsampling elsewhere. The comparison between QEM and selective downsampling is illustrated in Fig. 3. We also give a quantitative metric showing how well our method preserves boundary points, as reported in Tab. 1.

# Algorithm 1 Selective Downsampling

Require: A mesh $M$

Ensure: A simplified mesh $M$

1: Compute the $Q$ matrices for all the initial vertices.
2: Compute the mean curvature $H$ for all the initial vertices.
3: Select all valid pairs.
4: Compute the optimal contraction target $\bar{v}$ for each valid pair $(v_{1}, v_{2})$. The error $\bar{v}^{T}(Q_{1} + Q_{2})\bar{v}$ of this target vertex $\bar{v}$ becomes the cost of contracting that pair.
5: Compute the edge collapse coefficient by averaging the curvature of the two points, and multiply the cost by this coefficient.
6: Place all the pairs in a heap keyed on cost, with the minimum-cost pair at the top.
7: repeat
8: Remove the pair $(v_{1}, v_{2})$ of least cost from the heap, contract this pair, and update the costs of all valid pairs involving $\bar{v}$.
9: until the simplified mesh reaches the target number of points
10: return $M$
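As a concrete illustration of the curvature-weighted cost in Eq. (1), the minimal sketch below weights an edge-collapse cost by the averaged mean curvature of the two endpoints. The helper names (`collapse_weight`, `pair_cost`) and the plane representation are our own illustration under stated assumptions, not the authors' implementation.

```python
import numpy as np

def collapse_weight(h1, h2, k_neg=10.0, k_pos=1.0):
    # Average the mean curvature of the two edge endpoints; concave
    # (negative-curvature) regions such as tooth boundaries get the
    # large weight k = 10, so collapsing them costs more and they
    # survive longer in the heap of Algorithm 1.
    h = 0.5 * (h1 + h2)
    return k_neg if h < 0 else k_pos

def pair_cost(v, planes, k):
    # Weighted quadric error of Eq. (1): k times the sum of squared
    # point-to-plane distances over plane(v1) ∪ plane(v2).
    # Each plane is (n, d) with unit normal n, so that n·x + d = 0.
    return k * sum((float(np.dot(n, v)) + d) ** 2 for n, d in planes)

# Two candidate edges collapsing to the same target point, one unit
# away from a single supporting plane z = 0:
planes = [(np.array([0.0, 0.0, 1.0]), 0.0)]
v_bar = np.array([0.0, 0.0, 1.0])
boundary_cost = pair_cost(v_bar, planes, collapse_weight(-0.8, -0.2))  # k = 10
flat_cost = pair_cost(v_bar, planes, collapse_weight(0.3, 0.5))        # k = 1
```

With identical geometry, the boundary edge is ten times more expensive to collapse, so the min-cost heap pops the flat edge first and the boundary region is thinned more gently.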
| Method | Upper jaw ↓ | Lower jaw ↓ |
| --- | --- | --- |
| QEM | 5.655e-3 | 4.797e-3 |
| Selective downsample | 5.029e-3 | 4.052e-3 |
Table 1. Average distance between tooth boundary points in the upper and lower jaw. We define boundary points following the criterion in [19] and set the number of neighbor points to 4. The smaller average distance indicates that the density of boundary points obtained by our method is $15\%$ higher.

# 3.3. Multi-View Image Rendering

Images and point clouds are commonly utilized in multimodal learning [3, 38]. Point clouds offer detailed geometric information, but these geometric details are readily eliminated during downsampling. Conversely, edge information in images tends to be better preserved under downsampling.

To enhance the presentation of boundary information in rendered images, we tested various parallel lights from different angles. We found that a vertical, downward parallel light positioned above the crown yields the most striking contrast at the gum-to-crown junction, as shown in Fig. 1. To capture all such junctions comprehensively, we configure multiple perspective cameras on the upper hemisphere of the intraoral scan model for multi-view rendering. Specifically, the longitude and latitude of the upper hemisphere are evenly divided, and the virtual cameras are placed at the sampled positions, pointing toward the center of the intraoral scan; the optical axis of each virtual camera passes through the center of the intraoral scan model.

Point cloud coordinates can be mapped to 2D pixel coordinates using the camera parameters. Once this mapping is established, the corresponding image features can be fused into the point cloud, achieving modal complementarity. Multi-view rendering offers several advantages: it models the intraoral scan comprehensively by capturing images from different angles, supplements tooth boundary information missing from the point cloud, and reduces data bias while improving model robustness.

# 3.4.
Boundary Aware Segmentation

CrossTooth consists of two parts: the image segmentation module and the point segmentation module. The image module uses PSPNet [44], taking a rendered image as input and producing semantic segmentation results ($C \times H \times W = 17 \times 1024 \times 1024$). The point module takes the features of the intraoral scan model after selective downsampling as input. Each point's features comprise normalized spatial coordinates and spatial normal vectors, concatenated into a 6-dimensional vector. Specifically, we use Point Transformer [45], which consists of multi-stage downsampling encoders and upsampling decoders. In each stage, transformer layers perform long-sequence contextual self-attention learning, and kNN is used to fuse neighboring point features. Finally, several MLP layers serve as the segmentation head to output a multi-class segmentation mask; another MLP predicts a binary segmentation mask for the tooth-gingiva boundary.

Considering that the image network and the point network learn distinctly different feature representations from two complementary perspectives, merging their outputs allows the entire network to understand the structure of the intraoral scan model comprehensively. We concatenate the weighted image segmentation results with the point features from the last decoding layer of the point network, as shown in Fig. 2. The pixel features $F_{pixel}$ of each image viewpoint correspond to the point features $F_{point}$ in the point cloud. We encode the pixel features as one-hot vectors and concatenate them with the point features, followed by a standard MLP for feature fusion, which can be represented as follows:

$$
F_{fusion} = \operatorname{MLP}\left(F_{point} \oplus \operatorname{encode}\left(\operatorname{avg}\left(F_{pixel}\right)\right)\right).
\tag {2}
$$

To explicitly constrain tooth boundaries, we follow previous works [19, 29] and perform supervised learning for tooth boundary segmentation. A point is defined as a boundary point if more than half of its $k = 8$ nearest neighbors belong to different classes. Since the point cloud network involves multi-stage downsampling, making it difficult to distinguish boundaries in deep stages, we compute this loss only after the last decoder layer. For both the image and point cloud segmentation tasks, we use the cross-entropy loss:

$$
L _ {C E} = - \sum_ {i = 1} ^ {N} \sum_ {c = 1} ^ {C} y _ {i c} \log p _ {i c}, \tag {3}
$$

where $p_{ic}$ and $y_{ic}$ denote the predicted class probability and the true class probability for pixel/point $i$, $N$ is the total number of pixels/points, and $C$ is the total number of tooth classes, fixed to 17 in our task.

In addition to explicitly predicting boundary points, we want the features learned by the point network to be close for neighboring points of the same class and far apart for neighboring points of different classes.
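The boundary-point criterion defined above (a point whose $k = 8$ nearest neighbors mostly carry other class labels) can be sketched as follows. The brute-force kNN and the function name are our own illustration, not the authors' code.

```python
import numpy as np

def boundary_points(xyz, labels, k=8):
    # A point is a boundary point if more than half of its k nearest
    # neighbors carry a different class label (the criterion used for
    # the supervised boundary mask).
    diff = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    np.fill_diagonal(diff, np.inf)            # exclude the point itself
    knn = np.argsort(diff, axis=1)[:, :k]     # brute-force kNN indices
    other = labels[knn] != labels[:, None]    # neighbors with other labels
    return other.sum(axis=1) > k // 2

# Toy example: one class-1 point surrounded by four class-0 points.
xyz = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]])
labels = np.array([0, 0, 0, 0, 1])
flags = boundary_points(xyz, labels, k=4)   # only the center point is flagged
```

For the 16,000-point meshes used in the paper, a KD-tree query would replace the $O(N^2)$ distance matrix, but the majority test is identical.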
We apply the CBL [29] loss function only on the last decoder layer:

$$
L _ {C B L} = - \frac {1}{| P |} \sum_ {x \in P} \log \frac {\sum_ {y \in N _ {x}, L _ {y} = L _ {x}} \exp (- d \left(F _ {x} , F _ {y}\right))}{\sum_ {y \in N _ {x}} \exp (- d \left(F _ {x} , F _ {y}\right))}, \tag {4}
$$

where $P$ is the set of all points in the point cloud, $N_{x}$ is the set of neighboring points within a certain radius of point $x$, $L_{x}$ is the label of point $x$, $F_{x}$ is the feature vector of point $x$, and $d(\cdot, \cdot)$ is the Euclidean distance function. The CBL loss pushes features of different classes apart, guiding the network to learn distinctive boundary features.

The overall loss function can be formulated as follows:

$$
Loss = L _ {C E} (image, point) + L _ {C B L} (point). \tag {5}
$$

Our segmentation process focuses on preserving boundary information. The point cloud is first processed through selective downsampling, which results in higher precision in the tooth boundary regions. Additionally, we extract multi-view rendered image features, particularly boundary details, and project them back onto the original point cloud for feature fusion. Experimental results demonstrate that our approach outperforms state-of-the-art methods.

![](images/a06e8e8f97dedcf5fc12a443e6aa31ba0efb20c3fdee002caaf7b95fa63b27c8.jpg)
Intraoral scan

![](images/e0f46c4a99d262b27257f8b5a14faba3ae7be6c39bd24e47ec656b3e87643919.jpg)
MeshSegNet

![](images/057dabcb180c1f538f1f8a2f5a13ffbf661d9e03dcc1558938ed98c5491659f4.jpg)
TSegNet

![](images/1712ff6fde635c2dbc1569c24f07575cf59590b6ae7b8b98873ea8918614f925.jpg)
UpSegNet

![](images/ca63a717100e6f2712f2a164f41e2625e84bb512d028dd28b0bc54f901491b72.jpg)
SimpSegNet

![](images/870b9eaec38545e4a6d2690d623e1e2679736b5c479480344eb280f4c4703f98.jpg)
ToothGroupNet

![](images/bafc4ce57c49c98095ffa6d84cb4c2a2139d92bedfebfb3eb1ed205e43fdf1e5.jpg)
DilatedSegNet

![](images/3b111225aec29e5301c85ef831bd1ab1dc0518a521ae2925b6624958fb9e7e85.jpg)
CrossTooth

![](images/9c88c597a7d253c3d79eb0da518d2061fe98c90ef915bf7c70878f3bffd3f807.jpg)
GT

Figure 4. Visualization of segmentation results, along with the respective ground-truth annotations. Important areas are marked with red dotted circles. Our CrossTooth performs better than the other methods on all the listed intraoral scan cases.

# 4. Experiments

# 4.1. Data Preparation

We use the publicly available MICCAI challenge dataset [1], which includes a total of 1800 upper- and lower-jaw intraoral scan models. The dataset is randomly divided into 1440 models for training and 360 for testing. Some dental models contain non-manifold edges, which we remove by splitting vertices in the pre-processing phase.
To be consistent with previous works [1, 17, 41, 43], we downsample the intraoral scan models to 16,000 points using selective downsampling. Data augmentation is performed online during training. Specifically, after normalizing the coordinates, we apply random translations drawn uniformly from $[-0.1, 0.1]$ along the length, width, and height of the bounding box. Rotation is applied using angles drawn from a normal distribution $N(0,1)$. For each intraoral scan model, we render 96 images from different viewpoints to ensure that most points are visible, as shown in Fig. 2. In our rendering setup, the dental arch is first aligned via PCA; we then employ a directional light source in Pyrender to simulate parallel, pure-white light rays. The light intensity is set to 2, providing enhanced brightness across the scene, which facilitates the visualization of details and improves clarity.

Our task is to automatically segment each model into 17 semantic parts, comprising 16 different teeth (wisdom teeth included) and the gingiva/background. To standardize the labels of upper and lower teeth, we fix the orientation of the jaws and label each tooth from T1 to T16, with the gingiva labeled T0, as illustrated in Fig. 4.

# 4.2. Experiment Setting

Our CrossTooth consists of an image segmentation module and a point cloud segmentation module. The image segmentation module uses the highly efficient PSPNet [44]. The point cloud segmentation module is based on Point Transformer [45], consisting of five encoding and decoding stages. In each encoding stage, the point cloud is progressively downsampled using FPS, and the number of points is gradually restored in each decoding stage. The channels for the five stages are 32, 64, 128, 256, and 512, with downsampling rates of 1, 4, 4, 4, and 4, respectively. The number of neighbor points used for kNN-based feature fusion in each stage is 8, 16, 16, 16, and 16.
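The stage settings above can be tabulated as a quick sanity check on the point counts reaching each encoder stage. The dict layout and the assumption of integer division under FPS are ours, not the released configuration.

```python
# Encoder stages of the point branch: channels, FPS downsampling rate,
# and kNN neighborhood size, as listed in Sec. 4.2.
STAGES = {
    "channels": [32, 64, 128, 256, 512],
    "ds_rate":  [1, 4, 4, 4, 4],    # FPS keeps 1/rate of the points
    "knn":      [8, 16, 16, 16, 16],
}

def points_per_stage(n_points, rates):
    # Apply each stage's downsampling rate in turn (floor division is
    # our assumption for how FPS rounds the target count).
    counts, n = [], n_points
    for r in rates:
        n //= r
        counts.append(n)
    return counts

counts = points_per_stage(16000, STAGES["ds_rate"])
# With the 16,000-point input: [16000, 4000, 1000, 250, 62]
```

The deepest stage thus operates on only a few dozen points, which is why boundary supervision is applied only after the last decoder layer, where full resolution is restored.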
Throughout the network, we apply a combination of Batch Normalization and ReLU for non-linear transformations.

| Method | mIoU | Boundary | Background | T1/T9 | T2/T10 | T3/T11 | T4/T12 | T5/T13 | T6/T14 | T7/T15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MeshSegNet [17] | 66.130 | 0.900 | 86.812 | 62.779 | 49.343 | 56.577 | 60.870 | 58.972 | 66.249 | 40.132 |
| SimpSegNet [10] | 88.450 | 74.703 | 94.566 | 84.340 | 85.817 | 85.025 | 89.363 | 85.626 | 86.234 | 59.953 |
| TSegNet [4] | 57.239 | 0.000 | 80.694 | 53.274 | 50.026 | 50.103 | 55.560 | 49.544 | 59.189 | 27.043 |
| TeethGNN [47] | 74.631 | 76.015 | 93.978 | 66.285 | 64.297 | 68.889 | 69.496 | 60.497 | 68.123 | 43.760 |
| ToothGroupNet [1] | 93.546 | 76.392 | 95.584 | 93.397 | 92.681 | 89.608 | 92.399 | 90.561 | 92.538 | 65.129 |
| UpToothSeg [8] | 83.948 | 62.880 | 93.358 | 84.876 | 82.859 | 82.650 | 86.113 | 77.493 | 77.375 | 48.312 |
| DilatedSegNet [11] | 91.441 | 74.899 | 95.558 | 92.562 | 92.089 | 89.001 | 91.766 | 88.521 | 89.229 | 62.699 |
| Ours | 95.860 | 82.058 | 96.410 | 95.005 | 94.784 | 91.592 | 94.547 | 93.309 | 95.088 | 68.055 |

Table 2. The segmentation results of seven competing methods and our CrossTooth. For brevity, we combine the metrics of T1/T9 to T7/T15, as these are pairs of teeth with similar shapes, distributed symmetrically on the left and right sides of the mouth. We omit T8/T16 as wisdom teeth are rare in our dataset.

We also select seven existing intraoral scan segmentation methods for comparison: MeshSegNet [17], SimpSegNet [10], TSegNet [4], TeethGNN [47], ToothGroupNet [1], UpToothSeg [8], and DilatedSegNet [11]. We use their official implementations and train them under the same conditions as CrossTooth.

The input to all networks is an $\mathrm{N} \times \mathrm{C}$ matrix, where $\mathrm{N} = 16{,}000$ is the number of points and $\mathrm{C} = 6$ covers the 3D coordinates and normal vector of each point. All models are trained for 100 epochs using the cross-entropy loss on an NVIDIA RTX 3090 GPU. We use the Adam optimizer with a mini-batch size of 4. The initial learning rate is set to 1e-3, and a cosine scheduler is employed for learning rate decay, with a minimum learning rate of 1e-6. We evaluate performance using the Intersection over Union (IoU) metric for each tooth category, as well as the overall segmentation mIoU. Additionally, we report the tooth boundary segmentation IoU to demonstrate the superiority of our method.

# 4.3. Tooth Segmentation

The detailed segmentation results are shown in Tab. 2. Our method achieves the best performance in both the tooth segmentation IoU and the tooth boundary segmentation IoU metrics. Specifically, compared to the previously best method on this task, ToothGroupNet, our CrossTooth improves tooth IoU and boundary IoU by $2.3\%$ and $5.7\%$, respectively.
Additionally, CrossTooth significantly outperforms the other methods, further validating the effectiveness of our boundary preservation mechanism, which learns more discriminative boundary features and achieves precise tooth segmentation.

We select several intraoral scan models and visualize the segmentation results of the different methods in Fig. 4. Visually, our CrossTooth outperforms the competing methods, especially in the challenging areas marked with red dotted circles. Specifically, MeshSegNet, TSegNet, and UpToothSeg show insufficient differentiation between adjacent teeth, leading to confusion between neighboring tooth triangles. SimpSegNet, ToothGroupNet, and DilatedSegNet segment better but still produce isolated boundary segmentation noise. In contrast, CrossTooth takes advantage of complementary image information and learns more discriminative features for tooth boundaries, performing well across different oral conditions. In row 4, CrossTooth accurately segments the atrophied teeth at the end of the dental arch. Moreover, in row 3, where a tooth is missing from the middle of the dental arch, only CrossTooth correctly labels the remaining teeth.

![](images/81754f501fc35fecdeaadc06d23a6a0b44533551b43e08b1d042fac24c2e3ca2.jpg)
Figure 5. CrossTooth performs better than the two variants that use only image or only point features, demonstrating that image and point features are complementary: each can eliminate the other's wrong segmentation results.

# 4.4. Ablation Study

The fusion of dense image features, particularly tooth boundary features, contributes to the optimal performance of our method. In this section, we investigate the impact of integrating image features. Specifically, we train CrossTooth-point, which does not incorporate image information, and CrossTooth-pixel, which uses only image information.
The segmentation results of these three models under the same training conditions are shown in Tab. 3. With the rich features from images, CrossTooth improves the tooth segmentation IoU and tooth boundary segmentation IoU by $0.7\%$ and $0.5\%$, respectively. This demonstrates the complementarity between the geometric information provided by the point cloud and the texture details provided by the images. As shown by the circles in Fig. 5, areas mis-segmented by CrossTooth-point are corrected through the fusion of point and image features, proving the effectiveness of our method.
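The pixel-point fusion being ablated here follows Eq. (2): per-point pixel predictions aggregated over views are one-hot encoded and concatenated with the point features before a small fusion MLP. The sketch below is ours; in particular, reading "encode(avg(·))" as a majority vote over views, and the single linear layer standing in for the MLP, are assumptions rather than the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, D = 16000, 17, 32            # points, tooth classes, point-feature dim

f_point = rng.normal(size=(N, D))              # features from the last decoder
pixel_votes = rng.integers(0, C, size=(N, 3))  # class votes from 3 views/point

# "avg" then "encode" in Eq. (2), interpreted as a majority vote over
# views followed by one-hot encoding (our assumption).
avg_cls = np.array([np.bincount(v, minlength=C).argmax() for v in pixel_votes])
one_hot = np.eye(C)[avg_cls]                   # (N, 17)

# Concatenate (the ⊕ in Eq. 2) and fuse; one linear layer stands in
# for the fusion MLP.
f_fusion_in = np.concatenate([f_point, one_hot], axis=1)   # (N, D + C)
W = rng.normal(size=(D + C, C)) * 0.01
logits = f_fusion_in @ W                       # (N, 17) segmentation logits
```

Because the image branch contributes only a 17-dimensional one-hot block per point, the fusion head stays tiny, consistent with the paper's observation that the simple MLP fusion limits how much the image features can help.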
| Method | mIoU | Boundary IoU |
| --- | --- | --- |
| CrossTooth-point | 95.119 | 81.572 |
| CrossTooth-pixel | 89.488 | - |
| CrossTooth | 95.860 | 82.058 |
Table 3. Segmentation metrics with and without the pixel or point features. By integrating image and point features, the original CrossTooth achieves the best performance.

We further introduce another dataset, 3D-IOSSeg [12], and two additional baselines, TSGCNet [43] and HiCANet [13]. Ablation studies on the number of rendered images and on selective downsampling are reported in Tab. 4 and Tab. 5, respectively. The rendered images consistently improve performance across all methods. However, the modest gain from image features may be due to the simple fusion strategy, which involves only a few MLPs. Given an existing method, selective downsampling improves its performance, as shown in Tab. 5, and CrossTooth outperforms the others even without selective downsampling.
| Method | FLOPs | mIoU | Boundary IoU |
| --- | --- | --- | --- |
| SimpSegNet | 64.46G | 67.71 / 87.07 / 88.33 / 88.46 | 51.10 / 66.99 / 63.36 / 63.05 |
| ToothGroup | 8.53G | 84.62 / 89.64 / 87.46 / 89.76 | 63.57 / 67.62 / 68.09 / 68.44 |
| DilatedNet | 139.20G | 83.60 / 90.64 / 90.51 / 90.74 | 37.12 / 47.07 / 53.55 / 50.87 |
| TSGCNet | 174.85G | 76.45 / 84.75 / 85.63 / 85.94 | 59.92 / 64.09 / 65.30 / 64.82 |
| HiCANet | 97.11G | 78.77 / 86.26 / 87.92 / 88.05 | 64.78 / 66.01 / 67.17 / 66.57 |
| CrossTooth | 5.05G | 86.11 / 87.88 / 88.59 / 88.79 | 65.30 / 66.21 / 68.03 / 66.65 |

Table 4. Ablation study on the number of rendered images on the 3D-IOSSeg dataset. Each method is evaluated under four conditions: no images, 32 images, 96 images, and 128 images. The segmentation mIoU improves as the number of images increases, while the boundary IoU declines beyond a certain number of images. The PSPNet used in CrossTooth as the image feature extractor accounts for only 7.08G FLOPs, making it lightweight and undemanding in terms of computational resources.

| Method | mIoU with / w/o SD | B-IoU with / w/o SD |
| --- | --- | --- |
| TSGCNet¹ | 91.22 / 89.86 | 73.40 / 70.33 |
| HiCANet¹ | 91.47 / 90.15 | 75.48 / 72.67 |
| CrossTooth¹ | 95.86 / 93.88 | 82.05 / 80.73 |
| TSGCNet² | 76.45 / 75.02 | 59.92 / 52.49 |
| HiCANet² | 78.77 / 76.43 | 64.78 / 61.28 |
| CrossTooth² | 86.11 / 85.10 | 65.30 / 62.07 |

Table 5. Ablation study on selective downsampling (SD). Superscript 1 refers to the 3DTeethSeg'22 dataset, while superscript 2 refers to the 3D-IOSSeg dataset. Each experimental group reports both mIoU and boundary IoU with and without selective downsampling. Selective downsampling improves segmentation performance, especially in boundary areas.

# 5. Discussion

Although our CrossTooth achieves leading performance, it still has certain limitations. For example, CrossTooth does not handle cases with few teeth well; in such scenarios our method produces incorrect predictions at the tooth boundaries, possibly because the tooth as a whole is not closely related to the tooth boundaries. We will include more cases with severe tooth loss as training samples in future studies and design robust post-processing steps. Besides, the varying number and appearance of wisdom teeth across intraoral scans cause most of the remaining wrong predictions; it is hard for the network to learn features of wisdom teeth from only a few samples. We took wisdom teeth into consideration in our experiments but obtained low accuracy, so in future work we plan to design a few-shot learning strategy for wisdom teeth. Another limitation is the feature fusion layer: we only leverage a simple MLP to extract complementary features at the last stage, which may not be fine-grained enough in local regions. We will consider more elaborate strategies, such as a multi-level encoder-decoder structure.

# 6. Conclusion

We propose CrossTooth to automatically segment individual teeth from intraoral scan models. To prevent existing downsampling methods from losing boundary details, we propose selective downsampling based on curvature prior information. We fuse dense image features to improve point cloud segmentation accuracy, especially retaining detailed information at the tooth boundary. Compared with the popular QEM downsampling method, selective downsampling increases boundary vertex density by $10\%$ to $15\%$. Training on the public intraoral scan dataset shows that our method is superior to the competing methods, achieving $95.86\%$ overall mIoU and $82.05\%$ boundary IoU. Our CrossTooth can effectively assist doctors in clinical diagnosis and analysis.

Acknowledgment.
This work was supported in part by the National Natural Science Foundation of China under Grant 62132021, the Guangxi Science and Technology Major Program under Grant GuiKeAA24206017 and the National Key R&D Program of China under Grant 2023YFC3604505. + +# References + +[1] Achraf Ben-Hamadou, Ouussama Smaoui, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Hoyeon Lim, Minchang Kim, Minkyung Lee, Minyoung Chung, Yeong-Gil Shin, et al. 3dteethseg'22: 3d teeth scan segmentation and labeling challenge. arXiv preprint arXiv:2305.18277, 2023. 1, 2, 6, 7 +[2] Geng Chen, Jie Qin, Boulbaba Ben Amor, Weiming Zhou, Hang Dai, Tao Zhou, Heyuan Huang, and Ling Shao. Automatic detection of tooth-gingiva trim lines on dental surfaces. IEEE Transactions on Medical Imaging, 42(11):3194-3204, 2023. 4 +[3] Yiyang Chen, Shanshan Zhao, Changxing Ding, Liyao Tang, Chaoyue Wang, and Dacheng Tao. Cross-modal & cross-domain learning for unsupervised lidar semantic segmentation. In 31st ACM International Conference on Multimedia, pages 3866-3875, 2023. 5 +[4] Zhiming Cui, Changjian Li, Nenglun Chen, Guodong Wei, Runnan Chen, Yuanfeng Zhou, Dinggang Shen, and Wenping Wang. Tsegnet: An efficient and accurate tooth segmentation network on 3d dental model. Medical Image Analysis, 69:101949, 2021. 2, 4, 7 +[5] Fan Duan and Li Chen. 3d dental mesh segmentation using semantics-based feature learning with graph-transformer. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 456-465. Springer, 2023. 2, 3, 4 +[6] Michael Garland and Paul S Heckbert. Surface simplification using quadric error metrics. In 24th annual conference on Computer graphics and interactive techniques, pages 209-216, 1997. 2, 3, 4 +[7] Jingyu Gong, Jiachen Xu, Xin Tan, Jie Zhou, Yanyun Qu, Yuan Xie, and Lizhuang Ma. Boundary-aware geometric encoding for semantic segmentation of point clouds. In AAAI Conference on Artificial Intelligence, pages 1424-1432, 2021. 
3 +[8] Xiaoxuan He, Hualiang Wang, Haoji Hu, Jianfei Yang, Yang Feng, Gaoang Wang, and Liu Zuozhu. Unsupervised pretraining improves tooth segmentation in 3-dimensional intraoral mesh scans. In International Conference on Medical Imaging with Deep Learning, pages 493-507. PMLR, 2022. 7 +[9] Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, and Chiew-lan Tai. Jsenet: Joint semantic segmentation and edge detection network for 3d point clouds. In Computer Vision-ECCV 2020, pages 222-239. Springer, 2020. 3 +[10] Ananya Jana, Hrebesh Molly Subhash, and Dimitris Metaxas. 3d tooth mesh segmentation with simplified mesh cell representation. In International Symposium on Biomedical Imaging (ISBI), pages 1-5. IEEE, 2023. 2, 4, 7 +[11] Lucas Krenmayr, Reinhold von Schwerin, Daniel Schaudt, Pascal Riedel, and Alexander Hafner. Dilatedtoothsegnet: Tooth segmentation network on 3d dental meshes through increasing receptive vision. Journal of Imaging Informatics in Medicine, pages 1-17, 2024. 7 +[12] Juncheng Li, Bodong Cheng, Najun Niu, Guangwei Gao, Shihui Ying, Jun Shi, and Tieyong Zeng. A fine-grained orthodontics segmentation model for 3d intraoral scan data. + +Computers in Biology and Medicine, 168:107821, 2024. 4, 8 +[13] Kehan Li, Jihua Zhu, Zhiming Cui, Xinning Chen, Yang Liu, Fan Wang, and Yue Zhao. A novel hierarchical cross-stream aggregation neural network for semantic segmentation of 3-d dental surface models. IEEE Transactions on Neural Networks and Learning Systems, 2024. 8 +[14] Xiaoshuang Li, Lei Bi, Jinman Kim, Tingyao Li, Peng Li, Ye Tian, Bin Sheng, and Dagan Feng. Malocclusion treatment planning via pointnet based spatial transformation network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 105-114. Springer, 2020. 4 +[15] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. Advances in neural information processing systems, 31, 2018. 
3 +[16] Zigang Li, Tingting Liu, Jun Wang, Changdong Zhang, and Xiuyi Jia. Multi-scale bidirectional enhancement network for 3d dental model segmentation. In International Symposium on Biomedical Imaging, pages 1-5. IEEE, 2022. 4 +[17] Chunfeng Lian, Li Wang, Tai-Hsien Wu, Fan Wang, Pew-Thian Yap, Ching-Chang Ko, and Dinggang Shen. Deep multi-scale mesh feature learning for automated labeling of raw dental surfaces from 3d intraoral scanners. IEEE transactions on medical imaging, 39(7):2440-2450, 2020. 1, 2, 4, 6, 7 +[18] Sheng-hui Liao, Shi-jian Liu, Bei-ji Zou, Xi Ding, Ye Liang, and Jun-hui Huang. Automatic tooth segmentation of dental mesh based on harmonic fields. BioMed research international, 2015(1):187173, 2015. 1 +[19] Yifan Liu, Wuyang Li, Jie Liu, Hui Chen, and Yixuan Yuan. Grab-net: Graph-based boundary-aware network for medical point cloud segmentation. IEEE Transactions on Medical Imaging, 42(9):2776-2786, 2023. 3, 4, 5 +[20] Zuozhu Liu, Xiaoxuan He, Hualiang Wang, Huimin Xiong, Yan Zhang, Gaoang Wang, Jin Hao, Yang Feng, Fudong Zhu, and Haoji Hu. Hierarchical self-supervised learning for 3d tooth segmentation in intra-oral mesh scans. IEEE Transactions on Medical Imaging, 42(2):467-480, 2022. 4 +[21] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 2 +[22] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017. 2 +[23] Liangdong Qiu, Chongjie Ye, Pei Chen, Yunbi Liu, Xiaoguang Han, and Shuguang Cui. Darch: Dental arch prior-assisted 3d tooth instance segmentation with weak annotations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20752-20761, 2022. 
2, 4 +[24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137-1149, 2016. 2 + +[25] Chanjira Sinthanayothin and Wichit Tharanont. Orthodontics treatment simulation by teeth segmentation and setup. In 2008 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pages 81-84. IEEE, 2008. 1 +[26] Diya Sun, Yuru Pei, Peixin Li, Guangying Song, Yuke Guo, Hongbin Zha, and Tianmin Xu. Automatic tooth segmentation and dense correspondence of 3d dental model. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2020, pages 703-712. Springer, 2020. 4 +[27] Diya Sun, Yuru Pei, Guangying Song, Yuke Guo, Gengyu Ma, Tianmin Xu, and Hongbin Zha. Tooth segmentation and labeling from digital dental casts. In International Symposium on Biomedical Imaging, pages 669-673. IEEE, 2020. 4 +[28] Yuwen Tan and Xiang Xiang. Boundary-constrained graph network for tooth segmentation on 3d dental surfaces. In International Workshop on Machine Learning in Medical Imaging, pages 94-103. Springer, 2023. 1 +[29] Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, and Dacheng Tao. Contrastive boundary learning for point cloud segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8489-8499, 2022. 1, 3, 5 +[30] Sukun Tian, Ning Dai, Bei Zhang, Fulai Yuan, Qing Yu, and Xiaosheng Cheng. Automatic classification and segmentation of teeth on 3d dental model using hierarchical deep learning networks. IEEE Access, 7:84817-84828, 2019. 2 +[31] Yan Tian, Yujie Zhang, Wei-Gang Chen, Dongsheng Liu, Huiyan Wang, Huayi Xu, Jianfeng Han, and Yiwen Ge. 3d tooth instance segmentation learning objectness and affinity in point cloud. ACM Transactions on Multimedia Computing, Communications, and Applications, 18(4):1-16, 2022. 
2 +[32] Chen Wang, Guangshun Wei, Guodong Wei, Wenping Wang, and Yuanfeng Zhou. Tooth alignment network based on landmark constraints and hierarchical graph structure. IEEE Transactions on Visualization and Computer Graphics, 30(2):1457-1469, 2022. 4 +[33] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics, 38(5):1-12, 2019. 2 +[34] Kan Wu, Li Chen, Jing Li, and Yanheng Zhou. Tooth segmentation on dental meshes using morphologic skeleton. Computers & Graphics, 38:199-211, 2014. 1 +[35] Tai-Hsien Wu, Chunfeng Lian, Sanghee Lee, Matthew Pastewait, Christian Piers, Jie Liu, Fan Wang, Li Wang, Chiung-Ying Chiu, Wenchi Wang, et al. Two-stage mesh deep learning for automated tooth segmentation and landmark localization on 3d intraoral scans. IEEE transactions on medical imaging, 41(11):3158-3166, 2022. 4 +[36] Huimin Xiong, Kunle Li, Kaiyuan Tan, Yang Feng, Joey Tianyi Zhou, Jin Hao, Haochao Ying, Jian Wu, and Zuozhu Liu. Tsegformer: 3d tooth segmentation in intraoral scans with geometry guided transformer. In Medical Image Computing and Computer Assisted Intervention, pages 421-432, Cham, 2023. Springer Nature Switzerland. 2 + +[37] Xiaojie Xu, Chang Liu, and Youyi Zheng. 3d tooth segmentation and labeling using deep convolutional neural networks. IEEE transactions on visualization and computer graphics, 25(7):2336-2348, 2018. 4 +[38] Xu Yan, Jiantao Gao, Chaoda Zheng, Chao Zheng, Ruimao Zhang, Shuguang Cui, and Zhen Li. 2dpass: 2d priors assisted semantic segmentation on lidar point clouds. In European Conference on Computer Vision, pages 677-695. Springer, 2022. 5 +[39] Li Yuan, Xinyi Liu, Jiannan Yu, and Yanfeng Li. A full-set tooth segmentation model based on improved pointnet++. Visual Intelligence, 1(1):21, 2023. 1, 2 +[40] Tianran Yuan, Wenhe Liao, Ning Dai, Xiaosheng Cheng, and Qing Yu. Single-tooth modeling for 3d dental model. 
International journal of biomedical imaging, 2010(1):535329, 2010. 1 +[41] Farhad Ghazvinian Zanjani, David Anssari Moin, Bas Verheij, Frank Claessen, Teo Cherici, Tao Tan, et al. Deep learning approach to semantic segmentation in 3d point cloud intra-oral scans of teeth. In International Conference on Medical Imaging with Deep Learning, pages 557–571. PMLR, 2019. 1, 2, 6 +[42] Congyi Zhang, Mohamed Elgharib, Gereon Fox, Min Gu, Christian Theobalt, and Wenping Wang. An implicit parametric morphable dental model. ACM Transactions on Graphics, 41(6):1-13, 2022. 4 +[43] Lingming Zhang, Yue Zhao, Deyu Meng, Zhiming Cui, Chenqiang Gao, Xinbo Gao, Chunfeng Lian, and Dinggang Shen. Tsgcnet: Discriminative geometric feature learning with two-stream graph convolutional network for 3d dental model segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6699-6708, 2021. 2, 3, 4, 6, 8 +[44] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In IEEE conference on computer vision and pattern recognition, pages 2881-2890, 2017. 5, 6 +[45] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In IEEE/CVF international conference on computer vision, pages 16259-16268, 2021. 2, 5, 6 +[46] Yue Zhao, Lingming Zhang, Chongshi Yang, Yingyun Tan, Yang Liu, Pengcheng Li, Tianhao Huang, and Chenqiang Gao. 3d dental model segmentation with graph attentional convolution network. Pattern Recognition Letters, 152:79-85, 2021. 1, 2, 4 +[47] Youyi Zheng, Beijia Chen, Yuefan Shen, and Kaidi Shen. Teethgnn: semantic 3d teeth segmentation with graph neural networks. IEEE Transactions on Visualization and Computer Graphics, 29(7):3158-3168, 2022. 1, 4, 7 +[48] Shaojie Zhuang, Guangshun Wei, Zhiming Cui, and Yuanfeng Zhou. Robust hybrid learning for automatic teeth segmentation and labeling on 3d dental models. IEEE Transactions on Multimedia, 2023. 
4 +[49] Bei-ji Zou, Shi-jian Liu, Sheng-hui Liao, Xi Ding, and Ye Liang. Interactive tooth partition of dental mesh base on tooth-target harmonic field. Computers in biology and medicine, 56:132-144, 2015. 1 \ No newline at end of file diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/images.zip b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ccdb13e3290937079c19e304f00e3b46bfaab316 --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62759d5d4ca1ce170d9d960b7f710ba04c24c65c469ba60424ead234b1071ecb +size 585794 diff --git a/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/layout.json b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..140f59a10f150590361cc5f36f9d07a9da4378af --- /dev/null +++ b/CVPR/2025/3D Dental Model Segmentation with Geometrical Boundary Preserving/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de023f307e6efdbd1601d1be5c7f869d0e4e025373bf10ca98aefb8ca828a066 +size 455300 diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_content_list.json b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..294a31c3ffd6d29d9daaf56e71bd5d8b2c476522 --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_content_list.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:020519fec5dd2fa69b751cb01bb6a861d3c6a74efe8ed07bb71cd01f3ec09f70 +size 70455 diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_model.json b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c56e318611e7b4edbdc56e1bb7f7fe41de5805d7 --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6a17a362d0f90aaa26d4a3d956120d04a2c56c604b50b570e695d45dcced15d +size 86781 diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_origin.pdf b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..30fef6dea5d9e0acd4548c845c54dce0fa4e4e65 --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/627db675-5c1d-42be-a429-641d6eb44826_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09ec99fa547fe003ca4ee00c69fd2b420cfa07ca3a3648542304df63b20e32af +size 2686010 diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/full.md b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..b2396fcf85b8111ccd475629ee28afc8a3971964 --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/full.md @@ -0,0 +1,293 @@ +# 3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations + +Yating Wang $^{1}$ Xuan Wang $^{2}$ Ran Yi $^{1*}$ Yanbo Fan $^{2}$ Jichen Hu $^{1}$ Jingcheng Zhu $^{1}$ Lizhuang Ma $^{1*}$ $^{1}$ Shanghai Jiao Tong University $^{2}$ AntGroup Research + +# Abstract + +Recent studies have combined 3D Gaussians and 3D Morphable Models (3DMM) to construct high-quality 3D head avatars. In this line of research, existing methods either fail to capture dynamic textures or incur significant overhead in runtime speed or storage space. To this end, we propose a novel method that addresses all the aforementioned demands. Specifically, we introduce an expressive and compact representation that encodes the texture-related attributes of the 3D Gaussians in a tensorial format. We store the appearance of the neutral expression in static tri-planes, and represent dynamic texture details for different expressions using lightweight 1D feature lines, which are then decoded into opacity offsets relative to the neutral face. We further propose an adaptive truncated opacity penalty and class-balanced sampling to improve generalization across different expressions. Experiments show that this design enables accurate capture of dynamic facial details while maintaining real-time rendering and significantly reducing storage costs, thus broadening the applicability to more scenarios. + +# 1. Introduction + +Photorealistic 3D head avatar reconstruction is a fundamental research topic in the field of computer graphics and vision, encompassing a variety of applications including films, gaming, and AR/VR.
A highly practical 3D head avatar needs to meet various technical requirements: 1) It needs to faithfully render the expressions and movements depicted in the driving signal into high-quality images, with clear texture details, consistent geometric structures, and vivid texture variations under dynamic expressions; 2) It requires extremely high inference and rendering efficiency to meet the demands of tasks that require real-time performance; 3) It needs to be lightweight to facilitate easy distribution and deployment. + +![](images/997c3178a9255af45223d9fa02774aa644d87e448b2c447e3f15f9b661038157.jpg) +Figure 1. Our method improves rendering quality while ensuring real-time performance and minimal storage. The point radius in the figure is proportional to the square root of the storage. + +However, meeting all of the aforementioned requirements remains a significant challenge. + +Early methods are based on 3D Morphable face Models (3DMM) [2, 32], which can easily control facial expressions using low-dimensional PCA coefficients, but their rendering lacks realism. With the development of NeRF [25], some approaches combine NeRF and 3DMM to achieve high-quality, animatable head avatars, but the volumetric rendering process of NeRF requires substantial computation, making it difficult to achieve real-time performance. Recently, 3D Gaussian Splatting (3DGS) [18] has gained widespread attention for its high-quality rendering and real-time performance, with some works attempting to combine 3DGS with 3DMM to achieve fast, realistic, and animatable head avatars. + +To enhance the realism of head avatar rendering, it is important to model the dynamic facial details that change with
Some methods [24] explicitly store a set of Gaussian splats for each blendshape, resulting in significant storage requirements. Alternatively, other methods [37] employ MLPs to implicitly model dynamic textures, followed by super-resolution techniques to enhance detail rendering, but at the cost of not being able to render in real time. Additionally, more parameters and complex networks often lead to overfitting, resulting in poorer generalization to novel expressions. + +To address these issues, we propose a novel 3D head avatar modeling method that takes into account both dynamic texture modeling and spatiotemporal efficiency. First, the efficient 3D Gaussian Splatting rendering pipeline is employed. Then the 3D Gaussian splats are bound to a parametric head mesh, enabling the splats to depict the base motion of the face along with the mesh. Observing that neighboring Gaussian splats share similar appearances and dynamics, we store the static texture and dynamic properties in compact tensorial features to reduce spatial redundancy. To be specific, the static texture of the neutral expression is stored in tri-planes within the canonical space, replacing the spherical harmonics of 3DGS. For facial dynamics modeling, we store a neural grid representing opacity offsets relative to the neutral face for each blendshape. We exploit 1D feature lines to depict the dynamic part of facial textures, as experiments show that these features can adequately capture the texture changes caused by facial dynamics while further reducing the occupied storage. These feature lines are interpolated by blendshape coefficients, and a non-linear MLP decoder outputs the opacity offset. Benefiting from this compact architecture, our method enables high-fidelity head avatars with dynamic textures while maintaining storage efficiency and real-time inference. + +Compact tensorial representations also reduce the risk of overfitting to training expressions.
We further propose two training strategies to improve generalization to novel expressions. First, as areas that remain consistent with the neutral expression should not have opacity offsets, we propose an adaptive truncated penalty on opacity offsets, which identifies relatively static mesh triangles in each frame and constrains the opacity offsets of their corresponding splats to be minimal. Second, as large-scale expressions are underrepresented in the training set, we propose a class-balanced resampling method: training expressions are clustered, and samples are drawn uniformly from each cluster. + +We conduct experiments on the NeRSemble [19] dataset, showing that our method accurately reconstructs dynamic facial details and improves rendering metrics. Our compact representations require no more than 10MB per subject, making ours the most storage-efficient method among state-of-the-art competitors. We achieve 300 FPS, ensuring real-time performance. The spatial and temporal efficiency of our approach allows it to be extended to broader application scenarios, such as fast network transmission and real-time rendering in mobile video conferencing. + +# 2. Related Works + +# 2.1. Animatable Head Avatar + +Traditional head avatar methods rely on 3D Morphable face Models [2, 32] to control facial motion via low-dimensional coefficients but lack rendering realism. Recently, advances in neural rendering have led to combining neural scene representations like NeRF [25] and 3DGS [18] with 3DMM to achieve high-quality animatable head avatars [1, 10, 11, 29, 40] by conditioning NeRF or 3DGS on 3DMM coefficients or meshes. Further approaches attempt to improve head avatar reconstruction from various perspectives, such as inference speed [34, 45], statistical face models [12, 14, 38, 43], dynamic details [5, 35], sparse-view robustness [7, 42] and texture-material disentanglement [30, 36].
Below, we primarily review 3DGS head avatar methods focused on dynamic facial textures. Some approaches [6, 34] use MLPs to capture dynamic offsets of Gaussian attributes, while others employ CNNs [11, 30]. Super-resolution modules are also incorporated to improve the quality of dynamic detail rendering [13, 37]. Accurately capturing dynamic details often requires multiple large networks, which greatly increases time costs. Methods such as [8, 24] store expression-related dynamic information as additional attributes per Gaussian splat, leading to significant memory usage. By contrast, our approach achieves high-fidelity rendering with dynamic facial details at 10MB of storage and 300 FPS rendering speed. + +# 2.2. Compact 3D Scene Representations for Novel View Synthesis + +NeRF employs deep MLPs to model scenes, but suffers from slow inference and scalability issues for large, unbounded areas. Some works [3, 4, 26] address this by replacing one large MLP with spatially discrete features and small MLP decoders, improving rendering efficiency. Other methods reduce computation by decomposing large NeRFs into smaller sub-networks [39]. 3DGS is efficient in rendering speed but requires storing a large number of splats for large or complex scenes. Recent approaches address this issue using region-based vector quantization [28], K-means-based codebooks [27], or learned binary masks for each Gaussian. The memory-intensive spherical harmonics in 3DGS are replaced with more compact neural networks [20, 46] for view-dependent radiance. Furthermore, some approaches attempt to model dynamic scenes using compact formats, such as 2D planes [9, 31], sparse control points [15, 17] and parametric curves [23, 33]. In this paper, we leverage tensorial representations (tri-planes and 1D feature lines) to represent static and dynamic appearances. + +![](images/1d0901967a43d465ee9c60bea96d0f9a6325755e4e3e2b86a4d805845f6da9de.jpg) +Figure 2.
Our goal is to reconstruct a 3DGS head avatar with dynamic details, ensuring real-time rendering and minimized storage. We use a parametric face mesh to describe large-scale geometry motions, moving the bound Gaussian splats accordingly. A triplane stores view-dependent appearance in canonical space, while 1D feature lines are used for dynamic details per blendshape, allowing interpolation with expression coefficients. Finally, the geometry attributes of the splats, along with the canonical appearance and dynamic details, are combined to render the face image. + +# 3. Method + +Our method takes multi-view face videos as inputs and outputs an animatable head avatar with dynamic textures. As shown in Fig. 2, our method stores large-scale head motion, head appearance in the canonical space, and dynamic texture variations for each blendshape using three different structures, i.e., mesh, triplane, and feature lines, respectively. 1) Geometry motion bound to mesh: We follow [29] to track the FLAME [21] mesh for each frame from multi-view images and known camera parameters, and calculate splat geometry attributes (position/rotation/scaling) from the tracked meshes. 2) Appearance of neutral face in triplanes: Tri-planes are used to store the view-dependent radiance of the canonical 3D face. The features sampled from the tri-planes, and the view direction transformed into the canonical space, are fed into a tiny MLP decoder to obtain RGB color. 3) Dynamic details in feature lines: We utilize a feature line per blendshape to store dynamic textures, which can be interpolated by tracked blendshape coefficients. An MLP decoder then maps the features sampled from the interpolated feature line into an opacity offset, which is added to the canonical opacity. Finally, the aforementioned Gaussian attributes are combined to render the image. + +# 3.1. Preliminaries + +3DGS. 3D Gaussian Splatting [18] enables novel view synthesis of a static scene from multi-view images and camera
A scene is represented by a collection of 3D Gaussian splats, and each 3D Gaussian contains the following attributes: position $\mu \in \mathbb{R}^3$ , scaling $\mathbf{s} \in \mathbb{R}^3$ , quaternion $\mathbf{q} \in \mathbb{R}^4$ , opacity $\alpha \in \mathbb{R}$ and spherical harmonics $\mathbf{SH} \in \mathbb{R}^{(k+1)^2 \times 3}$ to represent view-dependent color (where $k$ is the SH degree). In our paper, we use $\mathbf{r} \in \mathbb{R}^{3 \times 3}$ to denote the rotation matrix corresponding to $\mathbf{q}$ . The final color of a given pixel is calculated by sorting and alpha-blending the overlapping Gaussians. + +Gaussian Avatars [29] extends 3DGS from static scenes to dynamic avatars by binding Gaussian splats to a tracked head mesh, which is obtained by fitting FLAME (a face shape prior model) parameters to multi-view observations. Each 3D Gaussian splat is paired with a mesh triangle, with its geometric attributes $(\mu', \mathbf{r}', \mathbf{s}')$ defined in the triangle's local coordinate system. Once optimized, these attributes are fixed, making each splat's relative position within its triangle fixed, yet allowing global movement as the triangle moves. For a triangle with three vertices, the average vertex position $T$ is set as the origin of the local coordinate system. The rotation matrix $R$ is constructed using one edge direction, the triangle normal, and their cross product to represent the orientation transformation from local to global space. A scalar $k$ is computed as the mean length of one of the edges and its perpendicular. The transformation from the local space to the global space is then conducted as: + +$$ +\mathbf{r} = R \mathbf{r}^{\prime}, \quad \mu = k R \mu^{\prime} + T, \quad \mathbf{s} = k \mathbf{s}^{\prime}. \tag{1} +$$ + +# 3.2. Appearance in Canonical Space via Triplane + +In 3DGS, 48 out of the total 59 parameters for each Gaussian are used for SH (3 degrees) to capture view-dependent color.
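The mesh-binding transformation of Eq. (1) can be sketched in a few lines of numpy. This is a minimal illustration, assuming the edge/normal construction described above; `splat_local_to_global` and its argument layout are illustrative, not the paper's actual API, and the exact edge choice may differ in the real implementation:

```python
import numpy as np

def splat_local_to_global(mu_local, r_local, s_local, tri_verts):
    """Map a splat's local attributes (mu', r', s') into global space
    following Eq. (1): r = R r', mu = k R mu' + T, s = k s'."""
    v0, v1, v2 = tri_verts                    # (3, 3): triangle vertices
    T = tri_verts.mean(axis=0)                # origin: mean vertex position
    e = v1 - v0
    ex = e / np.linalg.norm(e)                # one edge direction
    n = np.cross(e, v2 - v0)
    n /= np.linalg.norm(n)                    # triangle normal
    ey = np.cross(n, ex)                      # their cross product
    R = np.stack([ex, ey, n], axis=1)         # local -> global rotation
    # k: mean length of the edge and its in-plane perpendicular
    perp = (v2 - v0) - np.dot(v2 - v0, ex) * ex
    k = 0.5 * (np.linalg.norm(e) + np.linalg.norm(perp))
    return k * (R @ mu_local) + T, R @ r_local, k * s_local
```

With `mu_local = 0` the splat lands exactly on the triangle centroid, and as the triangle moves frame to frame, re-running the same transform carries the splat along with it.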
Noticing that neighboring splats should have similar appearance, instead of storing 48 SH parameters per splat, we use a triplane to store implicit encodings, and a tiny MLP decoder to decode the encodings along with the view direction into RGB colors, which compresses the model size. The triplane $T$ consists of three orthogonal feature planes aligned with the axes: $\{T_{xy}, T_{xz}, T_{yz}\} \in \mathbb{R}^{3 \times n_f \times n_f \times n_{d1}}$ , where $n_f \times n_f$ is the spatial resolution of the 2D feature planes, and $n_{d1}$ is the feature dimension. For any given position $\mathbf{p}$ in canonical space, the corresponding feature is obtained by projecting $\mathbf{p}$ onto the axis-aligned planes (the $x - y$ , $x - z$ and $y - z$ planes), interpolating to obtain the features on each feature plane, and concatenating the interpolated features, which is formulated as: + +$$ +t(\mathbf{p}) = \operatorname{interp}(T_{xy}, \mathbf{p}_{xy}) \oplus \operatorname{interp}(T_{xz}, \mathbf{p}_{xz}) \oplus \operatorname{interp}(T_{yz}, \mathbf{p}_{yz}), +$$ + +where $\operatorname{interp}$ represents bilinear interpolation, $\oplus$ represents concatenation, and $\mathbf{p}_{xy}, \mathbf{p}_{xz}, \mathbf{p}_{yz}$ refer to the projected positions on each plane. + +Note that the triplane is defined in the canonical space, which corresponds to the global space of the neutral expression. In contrast, we refer to the global space of the non-neutral expression in each frame as the deformed space. Given a splat with the position $\mu^{\prime}$ , rotation $\mathbf{r}^{\prime}$ , and scale $\mathbf{s}^{\prime}$ defined in the local space, it can be transformed into the canonical space and the deformed space of the current frame based on the canonical transformation $(R_{c}, T_{c}, k_{c})$ and the deformed transformation $(R_{d}, T_{d}, k_{d})$ respectively.
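The triplane lookup above (project, bilinearly interpolate each plane, concatenate) can be sketched as follows. This is a minimal numpy sketch assuming coordinates are already in the planes' index space; `bilinear` and `triplane_feature` are illustrative helpers, not the paper's implementation:

```python
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly interpolate an (H, W, C) feature plane at continuous
    index-space coordinates (u, v), clamping at the border."""
    h, w = plane.shape[:2]
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, h - 1), min(v0 + 1, w - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] + du * (1 - dv) * plane[u1, v0]
            + (1 - du) * dv * plane[u0, v1] + du * dv * plane[u1, v1])

def triplane_feature(T_xy, T_xz, T_yz, p):
    """t(p): project p onto the three axis-aligned planes, interpolate
    each plane, and concatenate the sampled features."""
    x, y, z = p
    return np.concatenate([bilinear(T_xy, x, y),
                           bilinear(T_xz, x, z),
                           bilinear(T_yz, y, z)])
```

In a PyTorch implementation the same lookup would typically be batched with `torch.nn.functional.grid_sample`; the resulting feature dimension is the sum of the three planes' channel counts, e.g. 32 + 16 + 16 with the asymmetric plane sizes discussed below.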
The local coordinate system can also serve as a bridge to transform between the deformed space and the canonical space. The transformation from the view direction in the deformed space (denoted as $\mathbf{v}_d$ ) to the view direction in the canonical space (denoted as $\mathbf{v}_c$ ) can be formulated as: + +$$ +\mathbf{v}_{c} = R_{c} R_{d}^{-1} \mathbf{v}_{d}. +$$ + +Finally, a tiny MLP decodes $t(\mathbf{p})$ and $\mathbf{v}_c$ into the RGB value $\mathbf{c}$ , which can be used as the first three components of degree-1 SH for 3DGS rendering. Additionally, since the majority of facial information is concentrated in the frontal view, with less information available from the side views, we utilize a larger feature dimension for $T_{xy}$ and lower dimensions for $T_{xz}, T_{yz}$ to achieve storage compression. + +# 3.3. Dynamic Texture via Feature Line Blendshapes + +Existing research shows that 3D Gaussian texture attributes can be effectively compressed using tri-planes, confirming color consistency in local neighborhoods. We extend this local consistency to dynamic avatars for efficient dynamic detail representation. For each expression blendshape, we store a separate representation that describes the texture changes (opacity offsets) relative to the neutral expression induced by that expression. Since the FLAME model has 100 PCA blendshapes, storing 2D planes or 3D tensors for each blendshape incurs excessive memory consumption. To enable a compact representation for dynamic textures, we use lightweight 1D feature lines to encode the texture changes for blendshape $i$ , denoted as $(L_x^i,L_y^i,L_z^i)\in \mathbb{R}^{3\times n_{d2}\times n_s}$ , where $n_{s}$ represents the length of the 1D feature line, and $n_{d2}$ represents the feature dimension.
We interpolate the feature lines by the tracked blendshape coefficients $\beta_{j}\in \mathbb{R}^{n_{b}}$ to obtain the specific feature line for frame $j$ , where $n_b$ is the blendshape count, which can be formulated as: + +$$ +l_{b}^{j} = \sum_{i = 0}^{n_{b}} \beta_{j}^{i} (L_{x}^{i}, L_{y}^{i}, L_{z}^{i}). +$$ + +In addition to the linear blendshapes, FLAME incorporates a nonlinear quaternion jaw rotation to describe large-scale jaw movements. To unify the linear basis with the nonlinear rotation, we follow the method proposed in [22], extracting linear jaw rotation bases $\{\mathbf{q}_k : k \in \{0, \dots, n_j\}\}$ from the jaw rotations in the training videos via farthest point sampling, and we store a feature line $(L_x^k, L_y^k, L_z^k)$ for each jaw basis. We follow [16] to calculate the distance between the jaw rotation of frame $j$ and the $k$ -th jaw rotation basis, formulated as $d(j, k) = 1 - |\mathbf{q}_j^T \mathbf{q}_k|$ ( $\mathbf{q}_j$ and $\mathbf{q}_k$ are unit quaternions). The jaw feature lines are then interpolated using inverse distance weighting to calculate the feature line of frame $j$ , formulated as: + +$$ +l_{r}^{j} = \sum_{k = 0}^{n_{j}} \beta_{j}^{k} (L_{x}^{k}, L_{y}^{k}, L_{z}^{k}), \quad \beta_{j}^{k} = \frac{1 - d(j, k)}{\sum_{k = 0}^{n_{j}} (1 - d(j, k))}. +$$ + +Similar to the triplane, the opacity offset features of a given position $\mathbf{p}$ in frame $j$ are calculated by projecting $\mathbf{p}$ onto the $x, y$ and $z$ axes, interpolating, and concatenating the interpolated features, formulated as: + +$$ +l^{j}(\mathbf{p}) = \operatorname{interp}(l_{x}^{j}, \mathbf{p}_{x}) \oplus \operatorname{interp}(l_{y}^{j}, \mathbf{p}_{y}) \oplus \operatorname{interp}(l_{z}^{j}, \mathbf{p}_{z}).
+$$ + +This projection and interpolation process is applied to both the expression blendshape feature line $l_b^j$ and the jaw rotation feature line $l_r^j$ . + +We finally utilize a tiny MLP $\theta$ to decode the interpolated $l_b^j (\mathbf{p})$ and $l_r^j (\mathbf{p})$ into the opacity offset $\Delta \alpha$ , which is added to the canonical opacity $\alpha_{c}$ (the opacity of the neutral expression). The final opacity is calculated as: + +$$ +\alpha = \alpha_{c} + \Delta \alpha = \alpha_{c} + \theta(l_{b}^{j}(\mathbf{p}), l_{r}^{j}(\mathbf{p})). +$$ + +The facial motions caused by facial expressions are primarily concentrated in the leading components of the FLAME
Next, we perform spectral clustering on the similarity matrix to categorize all frames into $n = 16$ classes. Finally, we sample uniformly across categories and with equal probability within each category. + +Adaptive Truncated Opacity Offset Penalty. The feature lines store the opacity offset of each blendshape relative to the neutral face, hence the offsets in relatively static regions are ideally zero. Therefore, we calculate the triangle translations $\bar{\mathbf{t}}$ , i.e., the displacement between the deformed mesh and the neutral face mesh, and empirically set a threshold $\tau$ to identify whether a triangle is "static" or "dynamic". The opacity offsets of splats bound to "static" triangles are then constrained to be close to 0. This penalty helps decouple static textures from dynamic details, enhancing generalization to unseen expressions, formulated as: + +$$ +\mathcal{L}_{op} = \lambda_{op} |\Delta \alpha| \cdot w_{op}, \quad w_{op} = \begin{cases} 0, & \text{if } |\bar{\mathbf{t}}| > \tau, \\ 1, & \text{if } |\bar{\mathbf{t}}| \le \tau. \end{cases} \tag{2} +$$ + +Loss Function. We use the L1 loss and the D-SSIM loss between the rendered images and the ground truth images as image supervision, which can be formulated as: + +$$ +\mathcal{L}_{\text{image}} = (1 - \lambda) \mathcal{L}_{1} + \lambda \mathcal{L}_{\mathrm{D\text{-}SSIM}}. +$$ + +Assuming that the Gaussian splats should roughly conform to the mesh and be similar in size to the bound triangles, we follow [29] and employ a position loss and a scale loss to prevent splats from being excessively far from the mesh or excessively large: + +$$ +\mathcal{L}_{\text{geom}} = \lambda_{\text{pos}} \mathcal{L}_{\text{pos}} + \lambda_{\text{scale}} \mathcal{L}_{\text{scale}}.
\tag{3} +$$ + +The total training loss can be formulated as: + +$$ +\mathcal{L} = \mathcal{L}_{\text{image}} + \mathcal{L}_{\text{geom}} + \mathcal{L}_{op}, +$$ + +where $\lambda = 0.2$, $\lambda_{\text{pos}} = 0.01$, $\lambda_{\text{scale}} = 1$, and $\lambda_{op} = 1$. + +# 4. Experiments + +# 4.1. Settings and Dataset + +We conduct experiments on nine individuals from the NeRSemble dataset [19], collecting a total of 11 video segments of each subject from 16 different viewpoints. Each participant performs 10 distinct expressions and emotions as instructed, followed by a free performance in the last video segment. The videos are downsampled to a resolution of $802 \times 550$. We utilize the FLAME coefficients and camera parameters provided in GA [29], including shape $\beta$, translation $\mathbf{t}$, pose $\theta$, expression $\psi$, and vertex offset $\Delta \mathbf{v}$ in the canonical space. + +We compare the experimental results across three tasks: 1) Novel View Synthesis: 15 out of 16 viewpoints are used for training, while the remaining viewpoint is reserved for testing; 2) Self-Reenactment: Testing is conducted using videos of the same individual showcasing unseen poses and expressions from all 16 viewpoints; 3) Cross-Identity Reenactment: An avatar is driven by the motions and expressions of other individuals. We use the free performance sequences for testing in tasks 2 and 3. + +# 4.2. Implementation Details + +We implemented our approach using PyTorch and trained for 600,000 iterations using the Adam optimizer for each subject. Both the triplane and the feature lines consist of two components: a neural grid feature and an MLP decoder. We train the triplane and feature line blendshapes with the same learning rates, setting the learning rate of the grid features to $2e{-}3$ and that of the MLPs to $1e{-}4$.
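The loss assembly described above (Eqs. 2-3 plus the total loss with the stated weights) can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' code: the D-SSIM term is a placeholder, `l_pos` and `l_scale` are assumed to be precomputed scalars, and the threshold `tau` is a hypothetical value.

```python
import numpy as np

def total_loss(rendered, gt, delta_alpha, tri_translation, l_pos, l_scale,
               tau=0.01, lam=0.2, lam_pos=0.01, lam_scale=1.0, lam_op=1.0):
    """Assemble L = L_image + L_geom + L_op with the paper's loss weights."""
    l1 = np.abs(rendered - gt).mean()
    d_ssim = 0.0  # placeholder: a real D-SSIM term would go here
    l_image = (1.0 - lam) * l1 + lam * d_ssim
    l_geom = lam_pos * l_pos + lam_scale * l_scale
    # Truncated penalty (Eq. 2): only splats bound to "static" triangles
    # (|t_bar| <= tau) have their opacity offsets pushed toward zero.
    w_op = (np.abs(tri_translation) <= tau).astype(float)
    l_op = lam_op * np.mean(np.abs(delta_alpha) * w_op)
    return l_image + l_geom + l_op

# Toy usage: perfect reconstruction, two splats, only the first on a static triangle.
loss = total_loss(rendered=np.zeros((4, 4, 3)), gt=np.zeros((4, 4, 3)),
                  delta_alpha=np.array([0.5, 0.5]),
                  tri_translation=np.array([0.0, 1.0]),
                  l_pos=1.0, l_scale=2.0)
```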
Each plane of the triplane is $128 \times 128$ in size, with the feature dimension of $T_{xy}$ being 32, and the feature dimensions of the $T_{xz}, T_{yz}$ planes set to 16. The MLP decoder following the triplane consists of two layers with 128 dimensions per layer, using ReLU as the activation function. We apply positional encoding to the view direction to improve its resolution. We assign feature lines with a spatial resolution of 64 and a feature dimension of 32 to the first 80 PCA expression bases and 16 key jaw rotation bases. The decoder for the feature lines is a two-layer MLP with 128 dimensions per layer. The triplane requires 4.05MB of storage, the feature lines require 2.41MB, and the other Gaussian attributes (including position/rotation/scaling and canonical opacity) average 3.25MB per subject. + +# 4.3. Comparison + +We compare our method with three baselines. + +![](images/7a03875bb93f2ad900c31054ba608b907628deeb617c5d2e5d8b242863bb7382.jpg) +Figure 3. Qualitative comparison with baseline methods on the novel view synthesis task. + +
| Method | Novel View PSNR↑ | Novel View SSIM↑ | Novel View LPIPS↓ | Self-Reen. PSNR↑ | Self-Reen. SSIM↑ | Self-Reen. LPIPS↓ | Storage | FPS |
|---|---|---|---|---|---|---|---|---|
| GA | 31.4702 | 0.9489 | 0.05144 | 27.2678 | 0.9230 | 0.06668 | 21M | 330 |
| GHA | 26.9932 | 0.9347 | 0.04905 | 22.7397 | 0.8895 | 0.07995 | 120M | 20 |
| GBS | 28.8966 | 0.9500 | 0.06311 | 25.9797 | 0.9270 | 0.08099 | 2G | 370 |
| Ours | 32.9688 | 0.9506 | 0.05940 | 28.0688 | 0.9259 | 0.07724 | 10M | 300 |
+ +Table 1. Quantitative comparison with baselines. GREEN indicates the best of all methods. YELLOW indicates the second. + +1) GA [29] defines Gaussian splats in the relative coordinate system of the mesh triangles, allowing splats to move with the mesh. However, GA uses the same set of Gaussian splats for all expressions, limiting its ability to capture expression-specific details. +2) GBS [24] optimizes a set of 3D Gaussian splats per blendshape, which can be interpolated linearly using 3DMM coefficients, but at the cost of significant storage. Moreover, the extensive parameters may lead to overfitting to the training expressions, and simple linear interpolation of splat attributes may introduce artifacts in large expressions. +3) GHA [37] uses coarse guide meshes and two MLPs to predict geometry and color offsets, and converts 3D Gaussian features into RGB images via super-resolution. GHA achieves highly detailed rendering, but fails to achieve real-time rendering speed. + +Different from GA, GHA, and our method, GBS takes monocular videos as input and reconstructs the head while excluding the clothing and shoulders. To ensure a fair comparison, we use multi-view videos and the FLAME parameters tracked following GA as the input to GBS, and segment the clothing area using face parsing, excluding it from the quantitative metrics. We employ PSNR, SSIM, and LPIPS [41] as quantitative metrics while listing the required storage and FPS, as reported in Table 1. The storage size does not include pre-tracked 3DMM parameters. The FPS is tested on an NVIDIA RTX 4090 GPU. Our method occupies the minimum storage and ensures real-time rendering speed, while also improving PSNR on the tasks of novel view synthesis and self-reenactment. + +Fig. 3 demonstrates the effectiveness of our method on the novel view synthesis task.
Compared to GA and GHA, our approach better reconstructs dynamic textures generated by significant expressions, such as the forehead wrinkles in the first row and the wrinkles around the right eye in the second row. GBS linearly interpolates all Gaussian attributes using + +![](images/d99febe4c5725790332183d868b9dbd62b8f330820faf1233cffb88df7487a48.jpg) + +![](images/05472fb2204c9eb1b40fad270079246b9026c36288902e7a4ae5205afb35a074.jpg) + +![](images/80bc7a56acebee0f011eedf4443fa8a5f10f988fc45632529578461071a8d18a.jpg) + +![](images/993216970a003d181c2e24da730ea575a9273cac3cbcaa14ee56a24b42ffe361.jpg) + +![](images/eae69ffdd8393752ee1861a77bd497e55ba383319060a154ea01e087532c0749.jpg) + +![](images/712dac13fd0ae05ef8693d61042b343d1bb2edab1400d0db53daebd6866658dc.jpg) +GHA + +![](images/19445b182d27eceba56f23909d11073b5f3764d6a8cd991b24c418263154807f.jpg) +GBS + +![](images/42219ba6d1be43e0175f355b59386d96d02942c5c0591314710b7a6334ecad0e.jpg) +GA + +![](images/5afc473779d6f056f05f96f1ae5bf05a524223bada83b5f6a6891e6f377d5c4a.jpg) +Ours + +![](images/44f10b0502229968eeabc439e7094a0ea0a32e80fa1a5ac101ea2d0792fa4a62.jpg) +Ground Truth +Figure 4. Qualitative comparison with baseline methods on self-reenactment task. + +
| trip | fl | penalty | resample | Novel View PSNR↑ | Novel View SSIM↑ | Novel View LPIPS↓ | Self-Reen. PSNR↑ | Self-Reen. SSIM↑ | Self-Reen. LPIPS↓ |
|---|---|---|---|---|---|---|---|---|---|
|  |  |  |  | 33.4206 | 0.9684 | 0.02968 | 31.2487 | 0.9561 | 0.03578 |
| ✓ |  |  |  | 33.4337 | 0.9687 | 0.03090 | 31.3246 | 0.9585 | 0.03792 |
| ✓ | ✓ |  |  | 35.3876 | 0.9739 | 0.02497 | 31.4366 | 0.9577 | 0.03564 |
| ✓ | ✓ | ✓ |  | 35.2652 | 0.9737 | 0.02590 | 31.4288 | 0.9580 | 0.03585 |
| ✓ | ✓ |  | ✓ | 35.5509 | 0.9734 | 0.02597 | 31.4042 | 0.9580 | 0.03790 |
| ✓ | ✓ | ✓ | ✓ | 35.1644 | 0.9715 | 0.02605 | 31.6446 | 0.9591 | 0.03580 |
+ +Table 2. Ablation Study on subject #306. "trip" and "fl" refer to the neutral texture triplane and feature line blendshapes, respectively. "penalty" indicates the opacity offset penalty. "resample" indicates class-balanced sampling. GREEN indicates the best and YELLOW indicates the second. + +blendshape coefficients, which can result in artifacts such as the left chin in the second row, where large-scale non-linear motion exists, and the neck in the first row. In contrast, our method provides more stable modeling while maintaining comparable dynamic details. Note that GBS requires 2GB of storage, whereas ours requires only 10MB. + +Fig. 4 shows the qualitative comparison results for the self-reenactment task. Our method enables avatar rendering with dynamic textures and fewer artifacts while avoiding overfitting of the wrinkles, such as the frown lines in the first row of Fig. 4. Fig. 5 illustrates the performance of our method in the cross-reenactment task, showcasing the generation of distinct wrinkle effects and identity preservation. + +# 4.4. Ablation Study + +We conduct ablations on subject #306 to evaluate the proposed components. Note that the novel view synthesis task involves novel views of training expressions, while self-reenactment involves novel expressions on 15 training views and one novel view. + +Tensorial representations. In Table 2, the first two rows show that compact triplanes help avoid overfitting, improving metrics on unseen expressions, while they may blur high-frequency details, leading to a worse LPIPS score. The first and third rows show that involving feature lines to model dynamic textures enhances both novel-view and novel-expression rendering. + +Class-balanced Sampling. We conduct ablations on class-balanced sampling, as shown in the 4th and 6th rows of Table 2. Frames with large expressions are less frequent in the training data compared to frames with small motion.
The sampling prevents the model from overfitting to relatively static frames, thereby improving generalization to unknown expression distributions, achieving better self-reenactment performance. + +![](images/228286fe495585c4d68aab231cf5342e96b41663a831d9e2760b9856caa88e69.jpg) +Figure 5. Cross-identity reenactment of head avatars. We use the expression and pose of the source subject on the far right to drive the character on the left. + +![](images/b0f52bd47e5426f66a8d1ade095380aebd806baddd4521fbf19de10de34e0506.jpg) +Figure 6. $L_{op}$ helps prevent artifacts in self-reenactment. + +Adaptive truncated opacity offset penalty. Ablation studies are conducted to evaluate the adaptive truncated opacity offset penalty, as shown in the 5th and 6th rows of Table 2. This penalty helps disentangle the canonical appearance from dynamic details, thus improving robustness on unseen expressions. Moreover, without this penalty, floaters may appear around the hair, and artifacts may occur in the teeth area during expression changes, as shown in Fig. 6. + +# 4.5. Conclusion and Limitations + +In this paper, we propose a 3DGS head avatar modeling method that balances dynamic detail capture with real-time performance and low storage. Compact tensorial features (a 2D triplane for the canonical appearance, and lightweight 1D feature lines for dynamic details) allow for accurate appearance modeling. The adaptive opacity offset penalty and class-balanced training help prevent overfitting to the training expressions. Experiments demonstrate that our approach not only enhances the rendering quality but also maintains real-time rendering speed and minimal storage, making it suitable for a wide range of practical applications. + +Our method has some limitations, such as its reliance on the tracked mesh, which makes it unable to handle complex hairstyles or topological changes in the mouth interior.
Our method does not decouple material and lighting, so avatar relighting is not feasible, which we intend to explore in the future. + +# Acknowledgements + +This work was supported by the National Natural Science Foundation of China (72192821, 62302297 and 62472282), Shanghai Sailing Program (22YF1420300), Young Elite Scientists Sponsorship Program by CAST (2022QNRC001), YuCaiKe [2023] (231111310300) and the Fundamental Research Funds for the Central Universities (YG2023QNA35). This work was also supported by the AntGroup Research Intern Program. + +# References + +[1] Yunpeng Bai, Yanbo Fan, Xuan Wang, Yong Zhang, Jingxiang Sun, Chun Yuan, and Ying Shan. High-fidelity facial avatar reconstruction from monocular video with generative priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4541-4551, 2023. 2 +[2] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, 1999. 1, 2 +[3] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In CVPR, 2022. 2 +[4] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European conference on computer vision, pages 333-350. Springer, 2022. 2 +[5] Chuhan Chen, Matthew O'Toole, Gaurav Bharaj, and Pablo Garrido. Implicit neural head synthesis via controllable local deformation fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 416-426, 2023. 2 +[6] Yufan Chen, Lizhen Wang, Qijing Li, Hongjiang Xiao, Shengping Zhang, Hongxun Yao, and Yebin Liu. Monogaussianavatar: Monocular gaussian point-based head avatar. In ACM SIGGRAPH 2024 Conference Papers, pages 1-9, 2024. 2 +[7] Xuangeng Chu and Tatsuya Harada. Generalizable and animatable gaussian head avatar.
arXiv preprint arXiv:2410.07971, 2024. 2 +[8] Helisa Dhamo, Yinyu Nie, Arthur Moreau, Jifei Song, Richard Shaw, Yiren Zhou, and Eduardo Pérez-Pellitero. Headgas: Real-time animatable head avatars via 3d gaussian splatting. In European Conference on Computer Vision, pages 459-476. Springer, 2025. 2 +[9] Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12479-12488, 2023. 2 +[10] Guy Gafni, Justus Thies, Michael Zollhofer, and Matthias Nießner. Dynamic neural radiance fields for monocular 4d facial avatar reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8649-8658, 2021. 2 +[11] Xuan Gao, Chenglai Zhong, Jun Xiang, Yang Hong, Yudong Guo, and Juyong Zhang. Reconstructing personalized semantic facial nerf models from monocular video. ACM Transactions on Graphics (TOG), 41(6):1-12, 2022. 2 +[12] Simon Giebenhain, Tobias Kirschstein, Markos Georgopoulos, Martin Rünz, Lourdes Agapito, and Matthias Nießner. Learning neural parametric head models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21003-21012, 2023. 2 +[13] Simon Giebenhain, Tobias Kirschstein, Martin Rünz, Lourdes Agapito, and Matthias Nießner. Npga: Neural parametric gaussian avatars. arXiv preprint arXiv:2405.19331, 2024. 2 +[14] Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, and Juyong Zhang. Headnerf: A real-time nerf-based parametric head model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20374-20384, 2022. 2 +[15] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Sc-gs: Sparse-controlled gaussian splatting for editable dynamic scenes.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4220-4230, 2024. 2 +[16] Du Q Huynh. Metrics for 3d rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 2009. 4 +[17] Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, and Lan Xu. Hifi4g: High-fidelity human performance rendering via compact gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19734-19745, 2024. 2 +[18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 1, 2, 3 +[19] Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter, and Matthias Nießner. Nersemble: Multi-view radiance field reconstruction of human heads. ACM Transactions on Graphics (TOG), 42(4):1-14, 2023. 2 +[20] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3d gaussian representation for radiance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21719-21728, 2024. 2 +[21] Tianye Li, Timo Bolkart, Michael J Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4d scans. ACM Trans. Graph., 36(6):194-1, 2017. 3 +[22] Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, and Yebin Liu. Posevocab: Learning joint-structured pose embeddings for human avatar modeling. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1–11, 2023. 4 +[23] Zhan Li, Zhang Chen, Zhong Li, and Yi Xu. Spacetime gaussian feature splatting for real-time dynamic view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8508-8520, 2024. 2 +[24] Shengjie Ma, Yanlin Weng, Tianjia Shao, and Kun Zhou. 3d gaussian blendshapes for head avatar animation. In ACM SIGGRAPH 2024 Conference Papers, pages 1-10, 2024. 
2, 6 +[25] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 1, 2 +[26] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG), 41(4):1-15, 2022. 2 + +[27] KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. Compact3d: Compressing gaussian splat radiance field models with vector quantization. arXiv preprint arXiv:2311.18159, 2023. 2 +[28] Simon Niedermayr, Josef Stumpfegger, and Rüdiger Westermann. Compressed 3d gaussian splatting for accelerated novel view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10349-10358, 2024. 2 +[29] Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, and Matthias Nießner. Gaussianavatars: Photorealistic head avatars with rigged 3d gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20299-20309, 2024. 2, 3, 5, 6 +[30] Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, and Giljoo Nam. Relightable gaussian codec avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 130-141, 2024. 2 +[31] Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, and Yebin Liu. Tensor4d: Efficient neural 4d decomposition for high-fidelity dynamic reconstruction and rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16632-16642, 2023. 2 +[32] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2387-2395, 2016. 1, 2 +[33] Chaoyang Wang, Ben Eckart, Simon Lucey, and Orazio Gallo. Neural trajectory fields for dynamic novel view synthesis. arXiv preprint arXiv:2105.05994, 2021. 2 +[34] Jun Xiang, Xuan Gao, Yudong Guo, and Juyong Zhang. Flashavatar: High-fidelity head avatar with efficient gaussian embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1802-1812, 2024. 2 +[35] Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, and Yebin Liu. Avatarmav: Fast 3d head avatar reconstruction using motion-aware neural voxels. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-10, 2023. 2 +[36] Yingyan Xu, Prashanth Chandran, Sebastian Weiss, Markus Gross, Gaspard Zoss, and Derek Bradley. Artist-friendly relightable and animatable neural heads. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2457-2467, 2024. 2 +[37] Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, and Yebin Liu. Gaussian head avatar: Ultra high-fidelity head avatar via dynamic gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931-1941, 2024. 2, 6 +[38] Yuelang Xu, Lizhen Wang, Zerong Zheng, Zhaoqi Su, and Yebin Liu. 3d gaussian parametric head model. In European Conference on Computer Vision, pages 129-147. Springer, 2025. 2 + +[39] Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. Learning object-compositional neural radiance field for editable scene rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13779-13788, 2021. 2 +[40] Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai, Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, et al. Nofa: Nerf-based one-shot facial avatar reconstruction.
In ACM SIGGRAPH 2023 conference proceedings, pages 1-12, 2023. 2 +[41] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6 +[42] Xiaozheng Zheng, Chao Wen, Zhaohu Li, Weiyi Zhang, Zhuo Su, Xu Chang, Yang Zhao, Zheng Lv, Xiaoyuan Zhang, Yongjie Zhang, et al. Headgap: Few-shot 3d head avatar via generalizable gaussian priors. arXiv preprint arXiv:2408.06019, 2024. 2 +[43] Yiyu Zhuang, Hao Zhu, Xusen Sun, and Xun Cao. Mofanerf: Morphable facial neural radiance field. In European conference on computer vision, pages 268-285. Springer, 2022. 2 +[44] Wojciech Zielonka, Timo Bolkart, and Justus Thies. Towards metrical reconstruction of human faces. In European conference on computer vision, pages 250-269. Springer, 2022. 1 +[45] Wojciech Zielonka, Timo Bolkart, and Justus Thies. Instant volumetric head avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4574-4584, 2023. 2 +[46] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10324-10335, 2024. 
2 \ No newline at end of file diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/images.zip b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f00e040407e15d1c3a92726038a779ff3332c595 --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71d134c84f9dd21dc176e37959b156d3c448cf1b1d37a9d341f530cc30c62323 +size 569658 diff --git a/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/layout.json b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..788d94a1de9c9bb62e098c6644321b4d4eff8d5f --- /dev/null +++ b/CVPR/2025/3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6f54baceaf6bb72717477f7e9335b6051065ea1cd21702c750bf7a96397bb83 +size 362273 diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_content_list.json b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..904d2a0aadaf6ea57309a1825a1811375211399b --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:f5ace3b1c8635ec23febe38623c7bb17dd5713c28153ec819bd5da6a6a1c42f8 +size 77729 diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_model.json b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_model.json new file mode 100644 index 0000000000000000000000000000000000000000..55e920d6995f45d282ee4596533e08bbdb532bd0 --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20970cc3b20c7111dc9b98439aa3c8f84ade8089b8e7a20ef650d1d9dded3c7e +size 93889 diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_origin.pdf b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..010664f03b06f006455c25a12a9b64c41c6cf050 --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/e8b63c5d-11f1-40a3-993f-7cfad108cb38_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:275cc9969d43024f0a0ae8c8b17c7ecd14a179d0908e48e6a00112182f4cc7f9 +size 3130026 diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/full.md b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/full.md new file mode 100644 index 0000000000000000000000000000000000000000..be8e35830dad2b1f42f949d87e64aa086400436c --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/full.md @@ -0,0 +1,352 @@ +# 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency + +Sheng-Yu Huang $^{1,\dagger}$ , Zi-Ting Chou $^{2}$ , Yu-Chiang Frank Wang $^{1,2,\ddagger}$ + +1 Graduate 
Institute of Communication Engineering, National Taiwan University 2 NVIDIA, Taiwan + +$\dagger$ f08942095@ntu.edu.tw $\ddagger$ frankwang@nvidia.com + +![](images/3263dd80755fdb00211ebef6663398194d30dbf0b45135a00cf1cfa2e8820259.jpg) +Figure 1. Given multi-view images of a scene and the object masks describing the object to be removed, our 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency produces high-fidelity cross-view inpainting results. Compared with current state-of-the-art methods such as SPIn-NeRF [24], Gaussian Grouping [41], and GScream [35], the inpainting results of our method not only preserve visible background contents but also exhibit satisfactory consistency across camera views. + +![](images/2221737a6121395f3faccfd4346a5407395078748a6ba6dcd3bb0cd654aaf5a7.jpg) + +![](images/4efbfef63123f77fc5c41dfcaca8ce7ce50bb6dd5498121a5d2d6474554093fb.jpg) + +![](images/36e9e8fdb9d51e2f04201ab706bd1df3902abe98e21f1e927e6dc33aa1635cff.jpg) + +![](images/b7e5f142152a7aa342c584540770dded93f2fc25bab76539d644031da2846f81.jpg) + +# Abstract + +When performing 3D inpainting using novel-view rendering methods like Neural Radiance Field (NeRF) or 3D Gaussian Splatting (3DGS), achieving texture and geometry consistency across camera views has been a challenge. In this paper, we propose a framework of 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC) for cross-view consistent 3D inpainting. Guided by the rendered depth information from each training view, our 3DGIC exploits background pixels visible across different views for updating the inpainting mask, allowing us to refine the 3DGS for inpainting purposes. Through extensive experiments on benchmark datasets, we confirm that our 3DGIC outperforms current state-of-the-art 3D inpainting methods quantitatively and qualitatively. + +# 1. Introduction + +Novel view synthesis for 3D scenes plays a vital role in 3D reconstruction and scene understanding.
Recent advancements, such as Neural Radiance Fields (NeRF) [1, 4, 9, 21, 22, 26, 29, 32, 43] and 3D Gaussian Splatting (3DGS) [5, 14, 28, 41, 44], enable high-fidelity novel views by modeling volumetric properties. However, practical VR/AR applications [3, 20] require more than reconstruction: they need editing capabilities that these methods do not fully address. Among editing challenges, object removal and inpainting [17, 35] are particularly difficult, as direct removal creates visible holes, compromising visual quality. While 2D inpainting [7, 19, 27, 31, 33, 38-40] across multiple views is possible, maintaining consistency remains problematic, leading to artifacts and reduced fidelity. Thus, achieving seamless, multi-view-consistent inpainting for 3D scenes is still an open challenge. + +As a pioneering work in 3D scene inpainting, SPIn-NeRF [24] proposes to use a pre-trained segmentation network [10] to generate plausible 2D inpainting masks for multi-view images with sparse human annotations of the object to be removed. However, as noted in subsequent research [17, 35], SPIn-NeRF and similar approaches [37, 42] rely heavily on 2D inpainting of multiple views separately, which hinders the cross-view consistency of the 3D inpainting results. To ensure cross-view consistency, RefNeRF [23] projects the inpainted image from a specific reference view onto other views using depth-guided projection, thereby ensuring more consistent inpainting results across views. Despite these advancements, these methods still require human-annotated 2D masks or sparse annotations to delineate the objects to be removed and the regions to be inpainted, making the process labor-intensive and limiting the scalability and practicality of these techniques. + +To reduce the need for human annotation for obtaining inpainting masks, recent methods [41, 42] tend to leverage the Segment Anything Model (SAM) [16] with NeRF or 3DGS to obtain 2D inpainting masks for multi-view images directly. Although these methods ease the requirement of human annotations for inpainting masks, they still rely on 2D inpainting results for different views as supervision, limiting the multi-view consistency of the inpainted 3D representations. To alleviate this limitation, some approaches [6, 12, 17, 18, 25, 35, 36] attempt to build a cross-view consistent 3D inpainting method on top of the 2D inpainting mask obtained from SAM. By either leveraging 2D diffusion models as perceptual guidance for the inpainted region [6, 17, 36] or ensuring feature consistency of corresponding pixels across different views [35], these methods are able to produce more consistent 3D inpainting results without the requirement of human-annotated 2D inpainting masks. Nevertheless, most of the aforementioned methods rely on the provided per-scene 2D inpainting masks (either from human annotation or from SAM) for each view, which can include areas visible in other views, as mentioned in [41]. As a result, the inpainted content within this area might be inconsistent across camera views, producing artifacts in the reconstructed 3D scene. + +In this paper, we propose 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC) to optimize the 3DGS model while achieving multi-view consistent and high-fidelity 3D inpainting, using depth-guided inpainting masks to locate the inpainting region.
Given a set of images of a scene with corresponding camera views and the object masks indicating an unwanted object in the scene (obtained from SAM [16], for example), our 3DGIC conducts the process of Inferring Depth-Guided Inpainting Masks to consider depth information from all training views and refine the inpainting mask by discovering background pixels from different views. The refined inpainting masks are then used to provide a joint update of inpainting results and the underlying 3DGS model via 3D Inpainting with Cross-View Consistency. Through experiments on real-world datasets, we quantitatively and qualitatively demonstrate that our 3DGIC performs favorably against state-of-the-art NeRF/3DGS-based inpainting methods by achieving better fidelity and multi-view consistency. + +The key contributions of our approach are as follows: + +- We propose a 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC), achieving multi-view consistent 3D inpainting results with high fidelity. +- By inferring Depth-Guided Inpainting Masks, the region to be inpainted is properly obtained by considering depth + +information across different views, allowing us to guide the inpainting process for 3DGS. + +- Based on the 2D inpaintings from a chosen reference view, our Inpainting-guided 3DGS Refinement optimizes new Gaussians of the object-removed scene by ensuring cross-view consistent inpainting results. + +# 2. Related Works + +# 2.1. 3D Representations for Novel View Synthesis + +Novel view synthesis is a widely studied topic in 3D computer vision. Neural Radiance Field (NeRF) [22], a pioneer in this field, effectively models scenes using multi-view images. However, as noted in [8], the original NeRF requires extensive training time—from hours to days—and relies on numerous images. To address these issues, many subsequent works [9, 26, 32, 43] have emerged. 
Methods like Instant NGP [26] and DVGO [32] reduce training time to minutes, balancing speed and memory through hash encoding and voxel encoding, respectively. Recently, the introduction of 3D Gaussian Splatting (3DGS) [14] has brought a fundamental change to this area. Different from NeRF and its variants, which model a 3D scene as an implicit representation, 3DGS models a 3D scene as a composition of numerous 3D Gaussians, with each Gaussian parameterized by its three-dimensional centroid, standard deviations, orientations, opacity, and color features. By modeling a 3D scene with such an explicit representation, one is able to render 2D images of the modeled scene via rasterization at around 100 fps, whereas the fastest NeRF-based approaches ([9, 26]) only achieve around 10 fps. As a result, we choose 3DGS over NeRF as our backbone representation in this paper due to its fast rendering, making our approach more applicable in the real world.

# 2.2. 3D Scene Inpainting

In the context of 3D scene inpainting, SPIn-NeRF [24] emerges as one of the earliest approaches addressing the challenges of multi-view consistency. It uses pre-trained segmentation networks to generate plausible inpainting masks for multi-view images, requiring sparse user annotations to indicate the unwanted object. These annotations are propagated across views, and a modified Neural Radiance Field (NeRF) model is used to inpaint the masked regions. Although effective, this approach is heavily dependent on human intervention and lacks the ability to automate the mask generation process, thus limiting its scalability.

To reduce the need for manual annotations, recent works [41, 42] have introduced the use of the Segment Anything Model (SAM) [16] in combination with NeRF or 3DGS. Specifically, OR-NeRF employs GroundedSAM [30] to locate a single-view 2D inpainting mask for the object to be removed.
It then projects 3D points of the object's surface into other views, which are used as prompts for

![](images/72b477e48e0fc40b30fc649d36c921599b2ba6120fd28027992a6b1d0bc94ed7.jpg)
Figure 2. Overview of 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency. Given a 3D Gaussian Splatting model $G_{1:N}$ pretrained on multi-view images $I_{1:K}$ at camera poses $\xi_{1:K}$, our goal is to perform 3D inpainting based on the object masks $M_{1:K}$ (e.g., provided by SAM). With the rendered depth maps $D_{1:K}$, the stage of Inferring Depth-Guided Inpainting Masks refines the inpainting masks to preserve visible backgrounds across camera views. The stage of Inpainting-guided 3DGS Refinement then utilizes such masks to jointly update the new Gaussians $G_{1:N'}'$ for both novel-view rendering and inpainting purposes.

SAM to generate masks for the remaining views. Similarly, Gaussian Grouping [41] enhances 3DGS by incorporating semantic feature learning, allowing the model to jointly render RGB images and segmentation maps, where the segmentation supervision is derived from SAM. While these methods significantly reduce the burden of manual mask creation, they inpaint the 2D images of different views separately and optimize the inpainted 3D representation by treating all the 2D inpaintings equally. As a result, the above approaches still face difficulties in producing consistent multi-view results, as mentioned in [6, 17, 35].

To alleviate this problem, more advanced approaches [6, 17, 35] focus on improving cross-view consistency. For instance, MALD-NeRF [17] fine-tunes a scene-specific Low-Rank Adaptation (LoRA) [13] module for a pre-trained diffusion model to inpaint images of each scene. By introducing a LoRA module for each scene, the diffusion model can inpaint more consistent content across different views. GScream [35], on the other hand, applies diffusion-based 2D inpainting on a chosen reference view.
By predicting the depth map of the inpainted reference view, GScream incorporates cross-view feature consistency between any other view and the reference view, optimizing geometric alignment across views. These methods represent a significant step forward in achieving automatic, consistent 3D inpainting, addressing the practical limitations of earlier approaches. Nonetheless, the aforementioned methods rely

on per-view 2D inpainting masks as input to 2D inpainting models, while some areas in those masks are visible from other views, as noted in [41]. Consequently, the inpainted content for these visible areas may not align with the original scene (as illustrated in the red branch in Figure 1). This inconsistency might be propagated to the inpainted 3D scene, hindering the reliability of their results.

# 3. Method

# 3.1. Problem Definition and Model Overview

We begin with the notations and settings of our proposed framework. Given a 3D Gaussian Splatting (3DGS) model [41] $G_{1:N} = \{G_1, G_2, \dots, G_N\}$ ($N$ denotes the number of Gaussians) pretrained on $K$ multi-view images $I_{1:K} = \{I_1, I_2, \dots, I_K\}$ with their camera poses $\xi_{1:K} = \{\xi_1, \xi_2, \dots, \xi_K\}$, our goal is to remove the Gaussians corresponding to a particular object (e.g., the bear statue) described by 2D object masks $M_{1:K} = \{M_1, M_2, \dots, M_K\}$. More precisely, we aim to update the above 3DGS so that the optimized Gaussians $G_{1:N'}'$ (with $N'$ remaining Gaussians) allow novel-view rendering without the object of interest present. Taking Figure 2 as an example, the bear statue is to be removed from the scene of interest, and its segmentation masks $M_{1:K}$ for $I_{1:K}$ can be produced by models like SAM [16] (see supplementary materials for details).

To address the above 3D Gaussian Inpainting with Depth

![](images/247ec1bb56470795dc6d82fc53e40c22a42191e9074cc04cd7f1ff4bfe661534.jpg)
Figure 3. Inferring Depth-Guided Inpainting Mask.
Taking $\{I_1, M_1\}$ at view $\xi_1$ as an example reference view, the original background region $I_1^B$ is first produced. We then project the background region $I_2^B$ from $\xi_2$ to $\xi_1$, updating $I_1'^B$ and the associated inpainting mask $M_1'$. By repeating this process across camera views, the final inpainting mask $M_1'$ contains only the regions that are not visible from any training camera view.

Guided Cross-View Consistency (3DGIC). Our 3DGIC comprises two learning stages: Inferring Depth-Guided Inpainting Masks and Inpainting-guided 3DGS Refinement. The former refines the object mask guided by both semantics and depth maps observed across $I_{1:K}$, while the latter performs inpainting with cross-view consistency to update the Gaussians $G_{1:N'}'$.

# 3.2. Inferring Depth-Guided Inpainting Masks

Given multi-view images $I_{1:K}$ of a scene with binary masks $M_{1:K}$ depicting the object to be removed, we aim to infer a proper mask $M'$ for inpainting the image at each view under the guidance of depth maps $D_{1:K}$ rendered from $G_{1:N}$. As a result, the masked images $I_{1:K}'^B$ retain only background pixels, including those visible from other camera views. The $i$-th masked image $I_i'^B$ is defined as:

$$
I_i'^B = I_i \cdot (\mathbb{1} - M_i), \tag{1}
$$

where $\mathbb{1}$ denotes a tensor with the same size as $M_i$ whose elements are all one.

Taking $\{I_1, M_1\}$ in Figure 3 as an example, our process of inferring Depth-Guided Inpainting Masks takes the original image $I_1$ from $\xi_1$ and masks out the areas in $M_1$, yielding the originally visible background $I_1^B = I_1 \cdot (\mathbb{1} - M_1)$ at view $\xi_1$. To explore all the background pixels visible from the other views $\xi_{2:K}$, we take $I_{2:K}$ with their masks $M_{2:K}$ and rendered depths $D_{2:K}$ at $\xi_{2:K}$, and we project the background pixels from each view to $\xi_1$.
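This per-view lift-and-reproject step can be sketched in NumPy as follows. It is only a minimal illustration under simplifying assumptions (a pinhole intrinsic matrix `K`, a camera-to-world pose matrix `T` standing in for $\xi$, nearest-pixel splatting, and no occlusion handling); the helper names are ours, not the paper's implementation:

```python
import numpy as np

def backproject_background(I, M, D, K, T):
    """Lift the visible background pixels of one view into world space
    (a sketch of Proj^3D). I: (H, W, 3) image, M: (H, W) binary object
    mask, D: (H, W) rendered depth, K: (3, 3) intrinsics,
    T: (4, 4) camera-to-world pose."""
    v, u = np.nonzero(M == 0)                      # background pixel coords
    z = D[v, u]
    pix = np.stack([u, v, np.ones_like(u)]).astype(np.float64)
    cam = np.linalg.inv(K) @ pix * z               # rays scaled by depth
    world = (T @ np.vstack([cam, np.ones((1, cam.shape[1]))]))[:3]
    return world, I[v, u]                          # 3D points and their colors

def project_to_view(points, colors, M_ref, K, T_ref, H, W):
    """Splat colored 3D points into the reference view (a sketch of Proj^2D),
    keep only pixels landing inside the object mask M_ref, and exclude those
    recovered background pixels from the refined inpainting mask."""
    homo = np.vstack([points, np.ones((1, points.shape[1]))])
    cam = (np.linalg.inv(T_ref) @ homo)[:3]
    valid = cam[2] > 1e-6                          # points in front of camera
    pix = K @ (cam[:, valid] / cam[2, valid])
    u = np.round(pix[0]).astype(int)
    v = np.round(pix[1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, c = u[inside], v[inside], colors[valid][inside]
    hit = M_ref[v, u] > 0                          # lands inside the mask
    recovered = np.zeros((H, W, 3))
    recovered[v[hit], u[hit]] = c[hit]
    refined = M_ref.astype(bool).copy()
    refined[v[hit], u[hit]] = False                # background found: unmask
    return recovered, refined
```

Repeating `project_to_view` over all other views shrinks the inpainting mask toward the pixels that no training view can explain, which is what makes the refined mask deterministic.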
Taking view $\xi_2$ as an example, the visible backgrounds $I_2^B$ in $I_2$ ($I_2^B = I_2 \cdot (\mathbb{1} - M_2)$) are projected into 3D space via $D_2$ and $\xi_2$ and then back-projected to $\xi_1$. Among all the back-projected pixels, we consider the pixel coordinates that lie inside $M_1$ as backgrounds visible from $I_2$, denoted as $I_{1,2}^B$. The operation for obtaining $I_{1,2}^B$ is calculated as:

$$
I_{1,2}^B = \operatorname{Proj}^{2D}\left(\operatorname{Proj}^{3D}\left(I_2^B, D_2, \xi_2\right), \xi_1\right) \cdot M_1, \tag{2}
$$

where $\operatorname{Proj}^{3D}(\cdot, \cdot, \cdot)$ denotes the 3D projection function that projects the 2D colored pixels in $I_2^B$ into a 3D point cloud using the depth map $D_2$ and camera pose $\xi_2$, while $\operatorname{Proj}^{2D}(\cdot, \cdot)$ represents the 2D projection function that projects the 3D colored point cloud back to $\xi_1$ as colored pixels. With the above operation, the corresponding pixel coordinates of $I_{1,2}^B$ are directly excluded from $M_1$, refining the inpainting mask into $M_1'$, and the masked image $I_1'^B = I_1^B + I_{1,2}^B$ at $\xi_1$ is further obtained. We repeat this process through all the views $\xi_{2:K}$ to infer the final Depth-Guided Inpainting Mask $M_1'$ and the masked image $I_1'^B$ at $\xi_1$. Likewise, we produce the depth-guided inpainting masks $M_{1:K}'$ for all the views $\xi_{1:K}$. Please refer to our supplementary material for more details about this inference process.

It is worth noting that the above process is deterministic. It not only reduces the uncertainty of the image regions to inpaint at each view, but also makes updating the 3DGS model for rendering the inpainted scene more effective, as discussed in the following subsection.

# 3.3.
Inpainting-guided 3DGS Refinement

The aim of this stage is to optimize $G_{1:N'}'$ with the masked images $I_{1:K}'^B$ obtained at $\xi_{1:K}$ under cross-view consistency, so that high-fidelity renderings of the corresponding scene can be produced, realizing the task of 3D inpainting. As shown in Figure 2, the 3DGS for the inpainted scene is first updated by removing the Gaussians whose semantic labels correspond to the masked region (e.g., "bear" in the original Gaussians $G_{1:N}$), replacing them with the same number of randomly initialized Gaussians in the masked region (e.g., with the bear removed; see [41] and our supplementary materials).

Taking $\xi_1$ as the reference view for example, the rendered image $I_1'$ and depth map $D_1'$ of $G_{1:N'}'$ at $\xi_1$ are inpainted by a 2D inpainter [31, 33] (using $M_1'$ as the inpainting mask) as:

$$
I_1^{In} = \mathrm{Inpaint}_{2D}\left(I_1', M_1'\right), \tag{3}
$$

$$
D_1^{In} = \mathrm{Inpaint}_{2D}\left(D_1', M_1'\right),
$$

where $\mathrm{Inpaint}_{2D}(\cdot, \cdot)$ denotes the 2D inpainting process, and $I_1^{In}$ and $D_1^{In}$ represent the 2D-inpainted results of $I_1'$ and $D_1'$, respectively. To ensure that $I_1'$ looks identical to $I_1^{In}$, the rendering loss at $\xi_1$ is defined as:

$$
\mathcal{L}_{\text{rendering}} = \mathcal{L}_{\text{rgb}} + \mathcal{L}_{\text{depth}}. \tag{4}
$$

Note that the image recovery loss $\mathcal{L}_{rgb}$ is calculated as:

$$
\mathcal{L}_{rgb} = \left\| I_1' - I_1^{In} \right\|_1 + \mathcal{L}_{SSIM}\left(I_1', I_1^{In}\right), \tag{5}
$$

where $\mathcal{L}_{SSIM}$ denotes the structural similarity (SSIM) loss [14].
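As an illustration, the image term of Eq. (5) can be sketched as below; the windowless global SSIM and the `1 - SSIM` loss convention are our simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM (no sliding window), for illustration only.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def rgb_loss(I_render, I_inpaint):
    """L_rgb = ||I' - I^In||_1 + L_SSIM(I', I^In), with the SSIM term
    written as 1 - SSIM so that identical images give zero loss."""
    l1 = np.abs(I_render - I_inpaint).mean()
    return l1 + (1.0 - ssim_global(I_render, I_inpaint))
```

With identical rendered and inpainted images the loss vanishes, and it grows as the rendering drifts from the 2D-inpainted target.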
The depth loss $\mathcal{L}_{depth}$ is defined as:

$$
\mathcal{L}_{\text{depth}} = \left\| D_1' - D_1^{In} \right\|_1. \tag{6}
$$

To further ensure that the masked regions in $I_{2:K}'$ (with respect to $M_{2:K}'$) are cross-view consistent with the 2D-inpainted region in $I_1^{In}$, we project the inpainted region of $I_1^{In}$ into 3D space as a set of colored point clouds $P_1$, followed by re-projection back to $\xi_{2:K}$ as supervision. Thus, $P_1$ is calculated as:

$$
P_1 = \operatorname{Proj}^{3D}\left(I_1^{In} \cdot M_1', D_1^{In}, \xi_1\right), \tag{7}
$$

where $\operatorname{Proj}^{3D}(\cdot, \cdot, \cdot)$ is the same projection function as in Eqn. 2. For each view $\xi_k$ of $\xi_{2:K}$, the back-projected image $I_k^P$ for supervision is denoted as:

$$
I_k^P = I_k' \cdot (\mathbb{1} - M_k') + \operatorname{Proj}^{2D}(P_1, \xi_k) \cdot M_k', \tag{8}
$$

where $\operatorname{Proj}^{2D}(\cdot, \cdot)$ is also the same 2D projection function as in Eqn. 2. To this end, the cross-view consistency loss $\mathcal{L}_{cross}$ is defined as:

$$
\mathcal{L}_{\text{cross}} = \sum_{k=2}^{K} \mathcal{L}_{\text{LPIPS}}\left(I_k', I_k^P\right), \tag{9}
$$

where $\mathcal{L}_{LPIPS}$ denotes the LPIPS [45] loss that calculates the perceptual similarity between $I_k'$ and $I_k^P$.

Finally, the overall loss for 3D inpainting is calculated as $\mathcal{L}_{\text{inpaint}} = \mathcal{L}_{\text{rendering}} + \mathcal{L}_{\text{cross}}$. We note that by optimizing $\mathcal{L}_{\text{inpaint}}$, $G_{1:N'}'$ is encouraged to inpaint the object-removed 3D scene with cross-view consistency, taking $\{I_1^{In}, D_1^{In}\}$ as guidance.

# 3.4. Training and Inference

# 3.4.1.
Training

During the training (optimization) process, we compute the refined mask $M'$ described in Sect. 3.2 for all $K$ views and choose the view with the largest refined mask as the reference view. This is because the 2D-inpainted result from this view covers the most 3D space compared to other views, providing more informative cross-view supervision. With the reference view chosen, $\mathcal{L}_{\text{inpaint}}$ is applied to optimize $G_{1:N'}'$. To this end, $G_{1:N'}'$ is properly supervised to ensure the 3D scene is reasonably inpainted and consistent across different views.

# 3.4.2. Inference

Once the optimization of the inpainted scene with our 3DGIC is finished, the optimized Gaussians $G_{1:N'}'$ can render novel views of the scene from arbitrary camera poses.

# 4. Experiments

# 4.1. Datasets

To evaluate the effectiveness of our method, we conduct experiments on the most widely used real-world benchmark, the SPIn-NeRF [24] dataset. This dataset contains ten real-world scenes, including indoor and outdoor scenes. Each scene is composed of 60 training images and 40 testing images in which a certain object in the scene is removed, with camera poses available for all 100 images. The binary mask of the object to be removed is also provided in each frame for evaluation. Following the setting of [6, 17, 24, 35, 41], we resize each image to a resolution of $1008 \times 567$ for all our experiments and report comparisons quantitatively and qualitatively.

Since the camera poses of all the scenes in the SPIn-NeRF dataset only cover a small range (i.e., all the image frames are captured near the front view of the scene), we additionally include qualitative comparisons on several scenes covering $360^{\circ}$ camera poses to show the effectiveness of our design, specifically our Depth-Guided Inpainting Masks.
Following Gaussian Grouping [41], we take the "bear" scene provided in InNeRF360 [34], the "counter" scene in Mip-NeRF360 [2], and the "figurines" scene in LeRF [15] for the additional qualitative evaluations. Since these scenes are not originally designed for the 3D inpainting task, we manually select an object in each scene as the object to be removed and take the corresponding ID in the segmentation map obtained from SAM [16] as the object mask in each view. Please refer to our supplementary material for a detailed description of these scenes.

# 4.2. Quantitative Evaluations

Table 1 shows the comparisons between our 3DGIC (with LAMA [33] or LDM [31] as the 2D inpainter) and several state-of-the-art approaches, namely SPIn-NeRF [24], MVIP-NeRF [6], Gaussian Grouping [41], MALD-NeRF [17], and GScream [35], on the SPIn-NeRF dataset. Following SPIn-NeRF and MALD-NeRF, we adopt FID [11], masked FID (m-FID), LPIPS [45], and masked LPIPS (m-LPIPS) as our evaluation metrics, where m-FID and m-LPIPS calculate the FID and LPIPS scores only inside the ground-truth inpainting masks. We note that the official implementation of MALD-NeRF is currently unavailable; we directly use the output results provided on their official project page for evaluation. As for the other state-of-the-art methods, we reproduce results from their official implementations and the released configurations.

From Table 1, we can see that the LDM version of our 3DGIC achieves the best score on all four evaluation metrics. As for our 3DGIC using the non-diffusion-based LAMA as the 2D inpainter, the results still outperform MVIP-NeRF and MALD-NeRF, where both use LDM as the

Table 1. Quantitative evaluation on the SPIn-NeRF dataset in terms of FID and LPIPS. Note that m-FID and m-LPIPS represent the FID and LPIPS scores calculated only within the ground-truth inpainting masks.
| Method | Representation | 2D inpainter | FID↓ | m-FID↓ | LPIPS↓ | m-LPIPS↓ |
| --- | --- | --- | --- | --- | --- | --- |
| SPIn-NeRF [24] | NeRF | LAMA [33] | 49.6 | 153.4 | 0.31 | 0.053 |
| MVIP-NeRF [6] | NeRF | LDM [31] | 50.5 | 173.4 | 0.31 | 0.050 |
| Gaussian Grouping [41] | Gaussian Splatting | LAMA [33] | 44.7 | 132.5 | 0.30 | 0.037 |
| MALD-NeRF [17] | NeRF | LDM [31] | 44.9 | 113.5 | 0.26 | 0.031 |
| GScream [35] | Gaussian Splatting | LDM [31] | 38.6 | 101.6 | 0.28 | 0.033 |
| 3DGIC (Ours) | Gaussian Splatting | LAMA [33] | 41.7 | 102.4 | 0.28 | 0.032 |
| 3DGIC (Ours) | Gaussian Splatting | LDM [31] | 36.4 | 96.3 | 0.26 | 0.028 |
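The masked metrics above score only the region a method was asked to inpaint. As a hedged sketch of that idea (the real m-FID and m-LPIPS feed the mask-restricted content through the Inception and LPIPS networks, respectively; here a plain masked L1 stands in as an illustration):

```python
import numpy as np

def masked_region(img, mask):
    # Keep only the pixels inside the ground-truth inpainting mask.
    return img[mask.astype(bool)]

def masked_l1(pred, gt, mask):
    """Illustrative masked score: mean absolute error restricted to the
    mask, mirroring how m-FID / m-LPIPS ignore everything outside it."""
    return np.abs(masked_region(pred, mask) - masked_region(gt, mask)).mean()
```

Errors outside the mask (i.e., in regions a method merely reconstructs) leave such a masked score untouched, which is why the masked variants isolate inpainting quality.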
![](images/ea94b8a3235dba962d60203c33b34dd5ffece1f28d62abf32a9a18f73f49ca81.jpg)
Figure 4. Qualitative results on the SPIn-NeRF [24] dataset. Two different views of the same scene are shown for each inpainting example. We compare rendering results against MVIP-NeRF [6], MALD-NeRF [17], and GScream [35]. We can see from the regions highlighted by the red boxes that our 3DGIC performs better in terms of multi-view consistency and rendering fidelity.

inpainter. The above results show that while using a better 2D inpainter achieves better results, the improvements of our 3DGIC do not come solely from a better 2D inpainter. This suggests that our model is not tied to a particular 2D inpainter and achieves 3D inpainting with improved fidelity.

# 4.3. Qualitative Results

In Figure 4, we qualitatively compare our 3DGIC with MVIP-NeRF [6], MALD-NeRF [17], and GScream [35] on the testing set of the SPIn-NeRF dataset. In this figure, every two rows show the results of the same scene from different viewpoints, while the first column shows the images containing the object to be removed, along with the object masks at the upper-left corner.
Specifically, from the + +![](images/69c008d6d482743aec978ed1ac0c50eb16d8eba570e2060ad2ecca9d5aff536a.jpg) + +![](images/a5aca36c85936c45c857dfd8e80a74d357be512a2c38a726a4d1572f26677eb5.jpg) + +![](images/7054867fe29ea8fc83f39e25bff7e5b88fbfbf88847bffb81bc3775329a58ac4.jpg) +(a) Input + +![](images/9d2077694040455a2d7ed27656fe4522a29ed5ec3b17bc53d8a8f41952077beb.jpg) + +![](images/8dea2109ddc328b9517929ff83790167cabe563e865bdf515c147284bd6f91be.jpg) + +![](images/61ed288542e9f2915700d914883c6d14589f1f4ecc5b301544939bda2031a83b.jpg) +(b) SPIn-NeRF + +![](images/96968f818aad3ce188770613c333be1004f9193c38dabdfb549e8a7f80cbbcee.jpg) + +![](images/c81323fa5ec558944e68f9957d90820ef419619c43bc5039354969b6d83facf1.jpg) + +![](images/d51b516339de89f3e19b27333473f8ff394018232a157ad5d7f4ce8fb69683df.jpg) +(c) Gaussian Grouping + +![](images/06447a89aaa2021db0ac4940547671011ba496b846bf93d1afa89ceb01fde675.jpg) + +![](images/929772e9d7cf4a0b8586e1106d613c56e27c24471c4822e3dad8b060e0b8f368.jpg) + +![](images/94806f3d31b065ea14eb6feb9dd8eeae9c3a3de90dcf7cda8068a8d05cae3a41.jpg) +(d) GScream + +![](images/24b853c342fdf5914f998ae5e9a60971f3fd6e38cb01ff050932747ee10f206f.jpg) + +![](images/737ee57a10baade182fff3aef3ad202a5e387bfccc15c2a5bb2d88ce6b047823.jpg) + +![](images/2cb79b5b5cdea985278105c98568aa74c6c31f277758c589ff7c50158fa78763.jpg) +(e) Ours + +![](images/0ee7c5bb5b01a480142b6617900acb6c6c0a44189283a330a1dc422fad67d3e6.jpg) +Figure 5. Qualitative results on the Figurines scene from the LeRF [15] dataset. We compare the rendering results with SPIn-NeRF [24], Gaussian Grouping [41], and GCream [35]. The three rows show different views of the scene, whereas the first column shows the input images with the object masks of the unwanted object. The regions highlighted by the red boxes show that our 3DGIC inpaints a smoother table surface without artifacts. 
+ +![](images/30aca1c0cd3146bfd839509a6d4273b6bb3cf4d90ae1ba155491454010e500b9.jpg) + +![](images/12aa7f778099ee097c56c869fe549c4ed25469f0fdc21926294aa485c3e7a086.jpg) +(a) Input + +![](images/f1e268a8c0bb593c009e20c70eea050e2303c2677c7f6a852718c427f8485503.jpg) + +![](images/f4650d491c94f5d17f7975c297ef041bb530d99b9336a9ce33b2ade10f6308c3.jpg) + +![](images/ebc96aaba5f32c67d10adffe3230c4505d1a70cddb21a2dd84e9e21fdedb0d91.jpg) +(b) SPIn-NeRF + +![](images/2ea51f60aca2159493c6e2a63abf476b20c37dc436b6427c9ec129e8e0de60df.jpg) + +![](images/ef9f9ad807a7f8b375bb4e71f0cd6d41422daca2c0d9d0606a8c0d17b3dbd409.jpg) + +![](images/4c6615829b03a4098fdf783e691ad21c400ce29b929d5211292f154e74d546f1.jpg) +(c) Gaussian Grouping + +![](images/f1119c764b54c2ebd5ea58f67b5e101aacc439e23e7c00906546d7fd80f17c67.jpg) + +![](images/807b7da984269eabb64a200fb0ba6aaebe7f071d49f1c23ae67874e93645bfd6.jpg) + +![](images/a61cbc1fb966f68648a294b541a257361013568669652d88becd584d68c1c703.jpg) +(d) GScream +Figure 6. Qualitative results on the Counter scene from the MipNeRF360 [2] dataset. We compare the rendering results with SPIn-NeRF [24], Gaussian Grouping [41], and GScream [35]. The three rows show different views of the scene, where we zoom in a certain region in the first row to highlight the difference between each method. We can see from the regions highlighted by the red boxes that our 3DGIC correctly inpaints the water bottle without manipulating any other objects on the table (e.g., the plastic cover). + +![](images/59dd17847679d5cd9f94831570f2c218a1cfe8a0fc3f56c59c204cf39c9a0151.jpg) + +![](images/c45cc112b28ba8f092675070808d8634f0fdd4e8bfb3babd8ae3d1485731d2eb.jpg) + +![](images/83edb2a9edc51a0842c2243979f84b762bcc9705fab1d91f13b1f62e5649b602.jpg) +(e) Ours + +first two rows, we observe that while GScream and MALD-NeRF both show high-fidelity images, some of the visible details from the input image (e.g., the electrical socket on the table) are not preserved properly. 
For the third and fourth

rows, where we zoom in on certain areas inside the red boxes, although it is reasonable for MALD-NeRF to generate a hat in the inpainted region, the logo on the hat is not consistent across different views. As for MVIP-NeRF, blurry images

![](images/99b7199d9ef5225e87576c4d070c86fde848fd5b916864d90348ea6f97c56e6d.jpg)
(a) Input

![](images/dcc9f96f2e2c7c17f21af5e401a7457d29ace641b1b699329b62d0905f770359.jpg)
(b) baseline

![](images/ee120e0f59c8873dab47210f96c4d9127b60ea26349742cb666a7335cf425a5a.jpg)
(c) w/o depth-guided inpainting mask
Figure 7. Ablation studies on the Bear scene from the InNeRF360 [34] dataset. We verify the effectiveness of our Inferring Depth-Guided Inpainting Mask and Inpainting-guided 3DGS Refinement.

![](images/fbd55a4cf72829a1a66be171843a75dc13f87f5b80775100f6272152bf14eda1.jpg)
(d) w/o cross-view consistency

![](images/07b04d83fd6c6b34e9575dabcfba14c21c669352b498b890cf1f7f6b4d9f9d67.jpg)
(e) Ours

are generated in all cases. By contrast, our 3DGIC generates high-fidelity images with multi-view consistency and preserves the visible backgrounds.

In Figure 5 and Figure 6, we further show qualitative comparisons with SPIn-NeRF, Gaussian Grouping, and GScream on the Figurines scene from LeRF [15] and the Counter scene from Mip-NeRF360 [2], where each figure shows results from three different viewpoints. For Figure 5, we can see that both SPIn-NeRF and Gaussian Grouping leave obvious black holes and shadows in the inpainting region, while GScream does not clearly remove the object of interest. In contrast, our 3DGIC successfully removes the unwanted object and produces smooth, multi-view consistent results without leaving heavy shadows. For Figure 6, where certain areas are highlighted by red boxes and zoomed in within the first row, GScream does not fully remove the object of interest either.
SPIn-NeRF not only removes the object of interest but also inpaints other objects in the background. As for Gaussian Grouping, which uses GroundedSAM [30] to detect the inpainting masks with the text prompt "blurry hole" as input, the GroundedSAM model locates other regions rather than focusing on the object-removed region, producing blurry and inconsistent inpainting results across different views. On the contrary, our 3DGIC locates the regions to be inpainted properly and hence produces high-fidelity results while preserving all the other background objects.

# 4.4. Ablation Study

To further analyze the effectiveness of our designed modules (i.e., Inferring Depth-Guided Inpainting Masks and Inpainting-guided 3DGS Refinement), we conduct ablation studies on the "bear" scene from InNeRF360 [34], as shown in Figure 7. Column (a) shows the input images with the bear statue and their corresponding object masks. The baseline model (b) uses the original object masks as the inpainting

masks and directly applies all the inpainted 2D images as input to fine-tune a 3DGS model. The results of model (b) show blurry content all over the rendered image, and the inpainted results are not consistent across different views. For model (c), the original object masks are used as the 2D inpainting masks, together with our Inpainting-guided 3DGS Refinement. Although the rendered images of model (c) show better fidelity, using the original object masks as inpainting masks results in modifications to the visible backgrounds. For model (d), our inferred depth-guided masks $M_{1:K}'$ are applied as the 2D inpainting masks, but all the 2D inpainting results are directly used as inputs to fine-tune the 3DGS model. As a result, although the backgrounds are preserved, the inpainted region is blurry and not consistent across the views.
As for our full model in the last column (e), the depth-guided masks are used and the 3D Inpainting with Cross-View Consistency is applied, achieving the best results. This verifies the effectiveness of our proposed modules and strategies for 3D inpainting.

# 5. Conclusions

In this paper, we propose 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC) for inpainting real-world 3D scenes represented by 3D Gaussian Splatting (3DGS) models. By Inferring Depth-Guided Inpainting Masks, we obtain precise inpainting masks by considering rendered depth maps and visible background information from other views. With these depth-guided inpainting masks properly obtained, our Inpainting-guided 3DGS Refinement optimizes a newly initialized 3DGS model and performs 3D inpainting simultaneously. In our experiments, we quantitatively and qualitatively show that our 3DGIC is able to handle scenes with various ranges of camera views and performs favorably against existing 3D inpainting approaches.

Acknowledgment This work is supported in part by Tron Future Tech Inc. and the National Science and Technology Council via grants NSTC 113-2634-F-002-005 and NSTC 113-2640-E-002-003, and by the Center of Data Intelligence: Technologies, Applications, and Systems, National Taiwan University (grant no. 114L900902), from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) of Taiwan. We also thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources.

# References

[1] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855–5864, 2021.
1
[2] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 5, 7, 8
[3] Wolfgang Broll. Augmented reality. In Virtual and Augmented Reality (VR/AR) Foundations and Methods of Extended Realities (XR), pages 291-329. Springer, 2022. 1
[4] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision, pages 333-350. Springer, 2022. 1
[5] Guikun Chen and Wenguan Wang. A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890, 2024. 1
[6] Honghua Chen, Chen Change Loy, and Xingang Pan. Mvip-nerf: Multi-view 3d inpainting on nerf scenes via diffusion prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 2, 3, 5, 6
[7] Ciprian Corneanu, Raghudeep Gadde, and Aleix M Martinez. Latentpaint: Image inpainting in latent space with diffusion models. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2024. 1
[8] Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised nerf: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12882-12891, 2022. 2
[9] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501-5510, 2022. 1, 2
[10] Yuying Hao, Yi Liu, Zewu Wu, Lin Han, Yizhou Chen, Guowei Chen, Lutao Chu, Shiyu Tang, Zhiliang Yu, Zeyu Chen, et al. Edgeflow: Achieving practical interactive segmentation with edge-guided flow. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2021.
1

[11] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 5
[12] Dongting Hu, Huan Fu, Jiaxian Guo, Liuhua Peng, Tingjin Chu, Feng Liu, Tongliang Liu, and Mingming Gong. In-n-out: Lifting 2d diffusion prior for 3d object removal via tuning-free latents alignment. Advances in Neural Information Processing Systems, 37:45737-45766, 2024. 2
[13] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 3
[14] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics (TOG), 2023. 1, 2, 5
[15] Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2023. 5, 7, 8
[16] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2023. 2, 3, 5
[17] Chieh Hubert Lin, Changil Kim, Jia-Bin Huang, Qinbo Li, Chih-Yao Ma, Johannes Kopf, Ming-Hsuan Yang, and Hung-Yu Tseng. Taming latent diffusion model for neural radiance field inpainting. Proceedings of the European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 5, 6
[18] Zhiheng Liu, Hao Ouyang, Qiuyu Wang, Ka Leong Cheng, Jie Xiao, Kai Zhu, Nan Xue, Yu Liu, Yujun Shen, and Yang Cao. Infusion: Inpainting 3d gaussians via learning depth completion from diffusion prior. arXiv preprint arXiv:2404.11613, 2024.
2 +[19] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1 +[20] Márcio CF Macedo and Antonio L Apolinario. Occlusion handling in augmented reality: past, present and future. IEEE Transactions on Visualization and Computer Graphics, 29(2): 1590-1609, 2021. 1 +[21] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7210-7219, 2021. 1 +[22] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 1, 2 + +[23] Ashkan Mirzaei, Tristan Aumentado-Armstrong, Marcus A Brubaker, Jonathan Kelly, Alex Levinshtein, Konstantinos G Derpanis, and Igor Gilitschenski. Reference-guided controllable inpainting of neural radiance fields. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2023. 1 +[24] Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G Derpanis, Jonathan Kelly, Marcus A Brubaker, Igor Gilitschenski, and Alex Levinshtein. Spin-nerf: Multiview segmentation and perceptual inpainting with neural radiance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 1, 2, 5, 6, 7 +[25] Ashkan Mirzaei, Riccardo De Lutio, Seung Wook Kim, David Acuna, Jonathan Kelly, Sanja Fidler, Igor Gilitschenski, and Zan Gojcic. Reffusion: Reference adapted diffusion models for 3d scene inpainting. arXiv preprint arXiv:2404.10765, 2024. 2 +[26] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. 
Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 41(4):1-15, 2022. 1, 2 +[27] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. Proceedings of the International Conference on Learning Representations (ICLR), 2024. 1 +[28] Minghan Qin, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. Langsplat: 3d language gaussian splatting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1 +[29] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14335-14345, 2021. 1 +[30] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024. 2, 8 +[31] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1, 4, 5, 6 +[32] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5459-5469, 2022. 1, 2 +[33] Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2022. 
1, 4, 5, 6 +[34] Dongqing Wang, Tong Zhang, Alaa Abboud, and Sabine Susstrunk. Innerf360: Text-guided 3d-consistent object inpainting on 360-degree neural radiance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 5, 8 +[35] Yuxin Wang, Qianyi Wu, Guofeng Zhang, and Dan Xu. Gscream: Learning 3d geometry and feature consistent gaussian splatting for object removal. Proceedings of the European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 5, 6, 7 +[36] Ethan Weber, Aleksander Holynski, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, and Angjoo Kanazawa. Nerfiller: Completing scenes via generative 3d inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 20731-20741, 2024. 2 +[37] Silvan Weder, Guillermo Garcia-Hernando, Aron Monszpart, Marc Pollefeys, Gabriel J Brostow, Michael Firman, and Sara Vicente. Removing objects from neural radiance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 1 +[38] Shaoan Xie, Zhifei Zhang, Zhe Lin, Tobias Hinz, and Kun Zhang. Smartbrush: Text and shape guided object inpainting with diffusion model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 1 +[39] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. +[40] Shiyuan Yang, Xiaodong Chen, and Jing Liao. Uni-paint: A unified framework for multimodal image inpainting with pretrained diffusion model. In Proceedings of the 31st ACM International Conference on Multimedia, 2023. 1 +[41] Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. 
Proceedings of the European Conference on Computer Vision (ECCV), 2024. 1, 2, 3, 4, 5, 6, 7 +[42] Youtan Yin, Zhoujie Fu, Fan Yang, and Guosheng Lin. Ornerf: Object removing from 3d scenes guided by multiview segmentation with neural radiance fields. arXiv preprint arXiv:2305.10503, 2023. 1, 2 +[43] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. Plenoctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5752-5761, 2021. 1, 2 +[44] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1 +[45] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 
5 \ No newline at end of file diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/images.zip b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9b7fd9c2962371dbbad07c18b6ba887e860e4340 --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2594e57468aa13b496a9c38e280721d8ff19753ea7903d44e33c01dbe718ff54 +size 1100094 diff --git a/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/layout.json b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..aae0c518739965c8b71533c01146699573a8f87d --- /dev/null +++ b/CVPR/2025/3D Gaussian Inpainting with Depth-Guided Cross-View Consistency/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca33eb6fd49a67e6f0f49df65077cd37f2e45167b20186cd79c432c0a8c96905 +size 452881 diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_content_list.json b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a213b84326b8d7d3d990042ece558204a874b4b2 --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9d4d759a5b0e10c11cfe6cbd71ffd66b3c769cea1563094e3036d5c329e8e16 +size 89330 diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View 
Transformation/47811d03-929d-4f21-805c-f1072e579c38_model.json b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_model.json new file mode 100644 index 0000000000000000000000000000000000000000..056b5801b6885b7cc29e9f5fdb904772fd0f14d2 --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c6918d35796d60c7192cd4dca97b0987caf8536fe9cde910b02133ac0c7f6b8 +size 109497 diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_origin.pdf b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0152145744413cb316d26d39a0a1eac851620d69 --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/47811d03-929d-4f21-805c-f1072e579c38_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:193c48fd5a2a62f0889338b8675c764870ea522cb02a6f2e1aa887fedd301a44 +size 5185061 diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/full.md b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..50858ade7e653ec2841425a9d6a69c61798de4b0 --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/full.md @@ -0,0 +1,332 @@ +# 3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation + +Gyeongrok Oh 
$^{1,\ast}$ , Sungjune Kim $^{1,\ast}$ , Heeju Ko $^{1}$ , Hyung-gun Chi $^{2}$ , Jinkyu Kim $^{1}$ , Dongwook Lee $^{3}$ , Daehyun Ji $^{3}$ , Sungjoon Choi $^{1}$ , Sujin Jang $^{3,\dagger}$ , Sangpil Kim $^{1,\dagger}$ + +$^{1}$ Korea University $^{2}$ Purdue University $^{3}$ AI Center, DS Division, Samsung Electronics + +# Abstract + +The resolution of voxel queries significantly influences the quality of view transformation in camera-based 3D occupancy prediction. However, computational constraints and the practical necessity for real-time deployment require smaller query resolutions, which inevitably leads to information loss. Therefore, it is essential to encode and preserve rich visual details within limited query sizes while ensuring a comprehensive representation of 3D occupancy. To this end, we introduce ProtoOcc, a novel occupancy network that leverages prototypes of clustered image segments in view transformation to enhance low-resolution context. In particular, the mapping of 2D prototypes onto 3D voxel queries encodes high-level visual geometries and complements the loss of spatial information from reduced query resolutions. Additionally, we design a multi-perspective decoding strategy to efficiently disentangle the densely compressed visual cues into a high-dimensional 3D occupancy scene. Experimental results on both Occ3D and SemanticKITTI benchmarks demonstrate the effectiveness of the proposed method, showing clear improvements over the baselines. More importantly, ProtoOcc achieves competitive performance against the baselines even with $75\%$ reduced voxel resolution. Project page: https://kuailab.github.io/cvpr2025protoocc. + +# 1. Introduction + +3D occupancy prediction (3DOP) is the task of determining which parts of a 3D space are occupied by objects and identifying their class categories. In particular, view transformation from 2D to 3D space is an essential step for successful camera-based occupancy prediction. 
Prior works employ various 3D space representation strategies, such as bird's-eye-view (BEV) plane, tri-perspective-view (TPV) planes, and 3D voxel cube, which aim to map multi-view camera information into corresponding grid cells in a unified 3D space. Among these, 3D voxels are the most common representation strategy in the 3DOP task [11, 12, 40, 43, 48, 50, 53, 55], as they naturally encode visual information into a structured semantic 3D space. Therefore, the effectiveness of 3D voxelization (i.e., voxel queries) plays a crucial role in determining the prediction performance in camera-based 3DOP. + +![](images/ba2cc55c22f0c1597a52bf70d38ec221cd34f226d8de5368bf479ce0ece26a1e.jpg) + +![](images/bfcf61d1a4147cb6a3b5d749c00fe9b80260fa7186fec1c022b61aaa3db75c63.jpg) +Figure 1. (a) Our ProtoOcc can perform comparably to higher-resolution counterparts while using $75\%$ less memory. (b-c) Reducing query resolutions in standard view transformation (VT) is required for faster inference, but brings geometrical ambiguity. (d) Our prototype-aware VT can capture high-level geometric details while preserving computational efficiency. + +![](images/3b2da663fc1bd8b2ddb67581d0ab8e587ff071e87d1c560903ede7bc657bc30c.jpg) + +![](images/33ae0659cb8d1481488f3cd6ef93add86cb3d6ac880ff342b1e41adc1455e810.jpg) + +Trade-offs between computation and performance have long been a challenge in various deep learning applications [6, 8, 17, 26]. Learning voxel queries also encounters this dilemma, especially in real-time and safety-critical vision systems such as autonomous driving and robotics. As illustrated in Figure 1, utilizing voxel queries with high resolution may produce reliable performance, but requires intensive computation. Accelerating the inference inevitably necessitates smaller query sizes. However, this results in performance degradation, primarily due to the inability to capture precise high-level details within limited spatial storage,
Hence, incorporating comprehensive visual contexts into low-resolution voxel queries is essential for ensuring accurate 3DOP results while maintaining computational efficiency. + +Despite its importance, existing approaches overlook the constraints posed by low-resolution queries in view transformation. They typically utilize a standard 2D-to-3D cross attention mechanism using low-level image features [12, 22, 28, 29, 47]. However, with reduced query resolutions, it becomes insufficient to solely rely on these low-level features to precisely reconstruct a 3D scene. Specifically, since images are packed into a smaller space, the encoded features lose spatial distinctiveness, thereby requiring an alternative method to preserve the necessary contextual information. In addition, it is also challenging to decode the densely compressed and low-resolution voxel queries into a sparse and high-resolution 3D occupancy scene. + +To this end, we propose ProtoOcc, a novel occupancy network focusing on context enhancement in low-resolution view transformation for 3DOP. In particular, ProtoOcc features two main learning strategies: Prototype-aware View Transformation and Multi-perspective Occupancy Decoding. + +Prototype-aware View Transformation. Image prototypes are representations of clustered image segments that provide a high-level understanding of visual structures [23, 44] (e.g. layouts and boundaries). Therefore, mapping these structural elements of 2D onto 3D voxel queries effectively alleviates the loss of spatial cues in reduced query resolutions. Furthermore, we optimize the prototype features to gain distinctiveness by leveraging pseudo-mask-based contrastive learning. As a result, each prototype becomes more exclusive and informative, enabling the voxel queries to encode more precise spatial details even with reduced resolutions. + +Multi-perspective Occupancy Decoding. 
Reconstructing 3D occupancy from low-resolution queries is an ill-posed problem, requiring precise disentanglement of densely encoded visual details. Therefore, we propose a multi-perspective decoding strategy using voxel augmentations. Each augmentation is upsampled as a unique view of the scene, collectively forming a comprehensive representation of the 3D structure. Then, we apply a consistency regularization on different perspectives to ensure that they represent the same 3D scene, which brings robustness to training. + +Through extensive experiments on the Occ3D-nuScenes [39] and SemanticKITTI [2] benchmarks, we validate that the proposed ProtoOcc can be an effective solution for ensuring accurate 3DOP results while maintaining computational efficiency. Most importantly, ProtoOcc achieves competitive performance with even $75\%$ smaller voxel queries against models with larger-sized ones. + +In summary, the contributions of this work include: + +- Introducing ProtoOcc as an exemplar in 3DOP by using computationally efficient low-resolution voxel queries. + +- A prototype-aware view transformation and multi-perspective decoding strategy for enhancing the representations of low-resolution voxel queries. +- Demonstrating clear improvements over previous state-of-the-art methods in two major benchmarks, along with detailed analyses. + +# 2. Related Work + +# 2.1. Camera-based 3D Occupancy Prediction + +The task of camera-based 3D occupancy prediction (3DOP) originated from Semantic Scene Completion (SSC) [3, 14, 19, 38, 54], which leverages a single monocular image or sparse LiDAR supervision of the SemanticKITTI benchmark [2] to reconstruct a static 3D scene. The monocular camera-based methods MonoScene [3] and VoxFormer [21] serve as the foundation for further research expansion into multi-view images. 
Following the previous advances, Occ3D [39] recently released a benchmark for dynamic 3DOP using multi-view camera images, triggering the popularity of camera-based 3DOP [11, 12, 40, 43, 48, 50, 55]. + +Despite their potential, the tremendous computation load of intermediate voxel representations poses a significant challenge. Several approaches [25, 32, 39] sparsify the voxel queries based on occupancy scores. A recent work, COTR [30], first embeds 2D observations into larger-sized voxel queries and further downsamples them into a compact representation. In contrast, our proposed ProtoOcc directly exploits a smaller-sized query for view transformation. Our prototype-aware view transformation and multi-perspective voxel decoding strategies on low-resolution queries efficiently improve both computation and performance. + +# 2.2. 2D-to-3D View Transformation + +View transformation is an indispensable process in camera-based perception models, bridging the cross-view environment. LSS [33] achieves this by utilizing camera parameters to estimate the depth distribution from an image. The strategy has been adopted in diverse 3D downstream tasks, demonstrating its effectiveness [9, 10, 20, 30]. However, the reliance on depth estimation causes geometric uncertainty, thus presenting a fundamental challenge. The attention-based approach is another line of view transformation that effectively aggregates 2D spatio-temporal features [15, 22, 36]. Here, learnable queries with diverse configurations [12, 22, 28] attend to specific regions in an image and transfer 2D semantics into a unified 3D space. In 3DOP, 3D voxel queries are routinely exploited for their ability to preserve as much spatial information of a 3D scene as possible [39, 48]. + +ProtoOcc aims at providing high-level 2D contextual guidance in attention-based view transformation, specifically targeted at enhancing the representations of low-resolution voxel queries for efficient 3DOP. 
Several attempts have been previously made in the 3D object detection task to improve representations in view transformation [18, 49]. However, these methods mainly focus on enhancing object-wise representations, which makes them unsuitable for 3DOP, a task that requires a holistic understanding of a scene. + +![](images/d08ef6bf6e0570b3f4966f2dda7b294a06734a9e3685760f7775870d0b105d8409b7.jpg) +(a) Prototype Mapping +(b) Prototype Optimization +Figure 2. Prototype-aware View Transformation. (a) In the Prototype Mapping stage, we fully exploit the hierarchies of 2D image features via a clustering method to map 2D prototype representations onto 3D voxel queries. (b) Contrastive learning on the prototype features based on the pseudo ground truth masks enhances the discrimination between the prototypes for better feature learning. Best viewed in color. + +# 3. Method + +ProtoOcc tackles the information loss problem occurring from reduced query resolutions through two main strategies: Prototype-aware View Transformation and Multi-perspective Occupancy Decoding. In this section, we first go over the overview of ProtoOcc (Sec. 3.1), then explain the technical details of each proposed method (Sec. 3.2 and 3.3). + +# 3.1. Overview + +Given $N$ multi-view images $\{\mathcal{I}_i\}_{i=1}^N$ , camera-based 3D occupancy prediction aims to reconstruct a 3D occupancy scene $\mathcal{O} \in \mathbb{R}^{L \times H \times W \times Z}$ , where $H$ , $W$ , and $Z$ represent the spatial dimensions and $L$ is the semantic label. Specifically, the image backbone network (e.g. ResNet50 [7]) first extracts multi-scale image features $\{\{\mathcal{F}_i^{(t)}\}_{i=1}^N\}_{t=1}^T$ using a Feature Pyramid Network (FPN) [24]. Here, $T$ denotes the total number of different feature scales, and each feature has a channel size of $d$ . 
Then, these 2D features across different views are converted into a unified 3D representation by employing a learnable 3D voxel query $\mathbf{Q} \in \mathbb{R}^{d \times h \times w \times z}$ . $\mathbf{Q}' \in \mathbb{R}^{N \times K \times d}$ denotes a subset of $\mathbf{Q}$ , where $K$ represents the maximum number of hit queries on each of the $N$ camera views ( $0 \leq K \leq h \times w \times z$ ). Following the attention-based view transformation [22, 39, 48], the queries aggregate spatial and temporal cues of the surrounding 3D scene via deformable attention [56]. Finally, the voxel upsampling network restores a high-resolution voxel volume from the encoded voxel query and predicts an occupancy state and a semantic label for each voxel. + +# 3.2. Prototype-aware View Transformation + +In this stage, we encode prototype representations of 2D images into 3D voxel queries, which is an unexplored concept in the previous literature. Prototypes are grouped representations of image clusters that integrate semantically similar features [1, 13, 52]. Inspired by this principle, we enrich the 3D voxel query representations by mapping hierarchical image representations encompassing both raw and clustered features, thereby compensating for the loss of spatial cues at reduced resolutions. We provide an illustration of our method in Figure 2. + +# 3.2.1. Prototype Mapping + +Mapping image prototype representations onto 3D voxel queries starts from clustering the images. We adopt an iterative grouping strategy [1, 4, 13, 52] to segment the image feature space. Specifically, we first divide the feature map into regular grids and obtain an initial prototype representation by calculating the average within each grid. Then, we iteratively update this feature by multiplying the soft-assigned pixel-prototype similarity with the feature map. 
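As an illustrative aid (not the paper's released code; the grid size, iteration count, and temperature below are assumed placeholder values), the iterative grouping above amounts to a soft k-means over grid-initialized prototypes:

```python
import numpy as np

def iterative_prototypes(feat, grid=2, iters=3, tau=1.0):
    """Soft k-means sketch of iterative prototype grouping.

    feat: (H, W, d) image feature map; returns (grid * grid, d) prototypes.
    """
    H, W, d = feat.shape
    # Initialize each prototype as the average feature of one regular grid cell.
    protos = np.stack([
        feat[i * H // grid:(i + 1) * H // grid,
             j * W // grid:(j + 1) * W // grid].mean(axis=(0, 1))
        for i in range(grid) for j in range(grid)
    ])
    pixels = feat.reshape(-1, d)
    for _ in range(iters):
        # Soft-assign every pixel to every prototype (softmax over prototypes).
        sim = pixels @ protos.T / tau
        sim = np.exp(sim - sim.max(axis=1, keepdims=True))
        assign = sim / sim.sum(axis=1, keepdims=True)
        # Update each prototype as the assignment-weighted average of the feature map.
        protos = (assign.T @ pixels) / (assign.sum(axis=0)[:, None] + 1e-8)
    return protos
```

Each update multiplies the soft pixel-prototype assignment with the feature map, mirroring the step described above; a real implementation would operate on batched, multi-view backbone features.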
The acquired 2D prototype features are denoted as $\mathbf{P}_{\mathrm{img}} \in \mathbb{R}^{N \times M \times d}$ , where $M$ is the number of prototypes. + +The next step is to lift these 2D prototype features $\mathbf{P}_{\mathrm{img}}$ into the 3D voxel query space. In this process, we introduce an innovative application of the Feature Aggregation & Dispatch technique [23, 31]. Originally, the idea was introduced as a feature extraction paradigm in 2D image space. In this work, we utilize it as a mapping function from a 2D prototype feature space to a 3D voxel query space. We start by projecting $\mathbf{P}_{\mathrm{img}}$ to the 3D voxel feature space using MLP layers and obtain $\hat{\mathbf{P}}_{\mathrm{img}}$ . Then, we compute the affinities $\mathbf{A} \in \mathbb{R}^{N \times M \times K}$ via pairwise cosine similarity between $\hat{\mathbf{P}}_{\mathrm{img}}$ and $\mathbf{Q}'$ . Using the sigmoid function $\sigma$ , the similarity values are further re-scaled to range between (0, 1), indicating the probability of each query cell in $\mathbf{Q}'$ being assigned to a certain prototype. Multiplying these probabilities with the voxel query feature $\mathbf{Q}'$ produces the 3D voxel prototype feature $\mathbf{P}_{\mathrm{vox}} \in \mathbb{R}^{N \times M \times d}$ (Aggregation). Then, these features are redistributed to individual voxel queries to form a prototype-aware 3D voxel query $\tilde{\mathbf{Q}} \in \mathbb{R}^{N \times K \times d}$ (Dispatch). 
The overall process is formulated as follows: + +$$ +\text{Aggregate:}\quad \mathbf{P}_{\mathrm{vox}} = \frac{1}{R} \left(\hat{\mathbf{P}}_{\mathrm{img}} + \sigma(\mathbf{A}) \cdot \mathbf{Q}^{\prime}\right) \tag{1a} +$$ + +$$ +\text{Dispatch:}\quad \tilde{\mathbf{Q}} = \mathbf{Q}^{\prime} + \mathrm{MLP}\left(\sigma(\mathbf{A})^{T} \cdot \mathbf{P}_{\mathrm{vox}}\right), \tag{1b} +$$ + +where $\hat{\mathbf{P}}_{\mathrm{img}}$ and $\mathbf{Q}'$ are added as residual connections for Aggregation and Dispatch, respectively. $R$ is the total sum of the similarities $\sigma(\mathbf{A})$ , by which the aggregation is divided to ensure a stable training process. Subsequently, the prototype-aware voxel query $\tilde{\mathbf{Q}}$ and the multi-scale image feature $\mathcal{F}$ are fed as inputs for deformable attention. Owing to this strategy, the encoded 3D voxel query carries both high-level and fine-grained visual contexts that are highly essential for predicting 3D occupancy. + +# 3.2.2. Prototype Optimization + +The prototype-aware 3D voxel query contains rich 2D contextual information, yet the quality of these features highly depends on the 2D clustering quality. However, standard optimization objectives in 3DOP (e.g., $\mathcal{L}_{\mathrm{occ}}$ and $\mathcal{L}_{\mathrm{Lov}}$ ) do not provide suitable guidance for cluster learning. Therefore, we incorporate explicit pseudo 2D supervision via contrastive learning on the prototype-aware pixel features, guided by pseudo ground truth masks. + +Prototype-aware Pixel Features. These features, denoted as $\mathbf{X} \in \mathbb{R}^{N \times d \times h' \times w'}$ , are the mapping of the 3D voxel prototype features $\mathbf{P}_{\mathrm{vox}}$ onto an implicit 2D grid cell. For simplicity, we flatten the spatial dimension and denote it as $D$ in the following (i.e., $D = h' \times w'$ ).
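For concreteness, the Aggregate and Dispatch operations of Eqs. (1a)-(1b) can be sketched per view as follows (a minimal numpy sketch; the learned MLP is stood in by a single linear map `W_mlp`, a simplifying assumption, and batching over the $N$ views is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cosine_affinity(P_img, Q):
    """Pairwise cosine similarity between prototypes (M, d) and queries (K, d)."""
    Pn = P_img / np.linalg.norm(P_img, axis=1, keepdims=True)
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    return Pn @ Qn.T  # (M, K)

def aggregate_dispatch(P_img, Q, W_mlp):
    A = sigmoid(cosine_affinity(P_img, Q))  # plays the role of sigma(A), in (0, 1)
    R = A.sum()                             # normalizer
    P_vox = (P_img + A @ Q) / R             # Eq. (1a): Aggregate
    Q_tilde = Q + (A.T @ P_vox) @ W_mlp     # Eq. (1b): Dispatch
    return P_vox, Q_tilde
```

The shapes follow the text: with `P_img` of shape `(M, d)` and `Q` of shape `(K, d)`, Aggregate yields `(M, d)` voxel prototype features and Dispatch returns a `(K, d)` prototype-aware query.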
Since $\mathbf{P}_{\mathrm{vox}}$ itself does not contain any spatial information, a direct mapping between the two spaces is limited. Therefore, we first employ a deformable attention map [35] $\mathbf{G} \in \mathbb{R}^{N \times D}$ , which highlights the salient regions on the image features for each 3D voxel query. Then, the element-wise multiplication between $\mathbf{G}$ and the prototype-query affinity matrix $\mathbf{A}$ computes the probability of each grid cell being assigned to a certain prototype. This acts as the spatial bridge for mapping $\mathbf{P}_{\mathrm{vox}}$ onto the 2D grid cell. Consequently, through a matrix multiplication between this feature and $\mathbf{P}_{\mathrm{vox}}$ , we obtain the prototype-aware pixel features. The explained mapping procedure can be formulated as: + +$$ +\mathbf{X} = \left\{\mathbf{G} \odot \mathcal{H}(\mathbf{A})\right\} * \mathbf{P}_{\mathrm{vox}}, \tag{2} +$$ + +where $\mathcal{H}$ is a linear mapping function and $\odot$ is the element-wise multiplication. Through this proposed methodology,
We calculate the contrastive loss [42, 45] for the feature in the $(x,y)$ position of $\mathbf{X}$ as: + +$$ +\mathcal {L} _ {\mathrm {c l s}} ^ {(x, y)} = - \log \frac {\sum_ {s = 1} ^ {S} m _ {s} \exp \left(\langle \mathbf {M} _ {s} , \mathbf {X} _ {(x , y)} \rangle / \tau_ {\mathrm {c l s}}\right)}{\sum_ {s = 1} ^ {S} \exp \left(\langle \mathbf {M} _ {s} , \mathbf {X} _ {(x , y)} \rangle / \tau_ {\mathrm {c l s}}\right)}, \tag {3} +$$ + +where $\langle \cdot, \cdot \rangle$ represents the cosine similarity between the two elements, and $\tau_{\mathrm{cls}}$ is the temperature hyperparameter. The binary value $m_s \in \{0,1\}$ is set as 1 if the position $(x,y)$ lies within the boundary of $s^{th}$ mask. By summing up the losses across all grid cells, we obtain the final contrastive loss (i.e. $\mathcal{L}_{\mathrm{cls}} = \sum_{x=1}^{h'} \sum_{y=1}^{w'} \mathcal{L}_{\mathrm{cls}}^{(x,y)}$ ), which enhances the distinctiveness of the prototype features when minimized. + +# 3.3. Multi-Perspective Occupancy Decoding + +When reconstructing 3D occupancy from low-resolution queries, the lack of resolution causes inevitable geometrical ambiguity. That being said, an encoded query has the potential to be decoded in diverse 3D perspectives. We address this ill-posed property by enhancing contextual diversity in occupancy decoding through two essential techniques: 1) Multi-perspective View Generation and 2) Scene Consistency Regularization. + +# 3.3.1. Multi-perspective View Generation + +We generate multi-perspective contexts from voxel queries through augmentations. Since augmentation on voxel representations is not widely explored, we establish two categories of voxel augmentation: 1) a feature-level augmentation (e.g. Random Dropout and Gaussian Noise), and 2) a spatial-level augmentation (e.g. Transpose and Flips). 
Combining the augmentations from these categories, we obtain a query set $\mathbb{Q} = \{\mathbf{Q}^{(0)},\mathbf{Q}^{(1)},\dots ,\mathbf{Q}^{(P - 1)},\mathbf{Q}^{(P)}\}$ where $\mathbf{Q}^{(0)}$ denotes the original voxel query derived from the encoder, and $P$ is the number of voxel augmentations. Then, these queries pass through a transposed 3D convolution layer with shared weights, generating an upsampled query set $\mathbb{V} = \{\mathbf{V}^{(0)},\mathbf{V}^{(1)},\ldots ,\mathbf{V}^{(P - 1)},\mathbf{V}^{(P)}\}$ , where $\mathbf{V}\in \mathbb{R}^{L\times H\times W\times Z},\forall \mathbf{V}\in \mathbb{V}$ . For each grid cell position, the shared kernel convolves with diverse features of local voxel neighbors. As a result, the upsampled voxels can be inter + +preted as unique perspective of a scene, collectively forming a holistic representation of the 3D structure. + +# 3.3.2. Scene Consistency Regularization + +Although the upsampled query set $\mathbb{V}$ contains varying contextual perspectives, they should all depict the same semantic occupancy. Therefore, we couple with the augmentations a regularization term to maintain a semantically consistent prediction. Specifically, we adopt a simple yet effective regularization technique of GRAND [5], and minimize the $L_{2}$ distances between the predicted label distributions and their average distribution for each grid cell. For example, the average of the predicted label distribution in the $(i,j,k)$ position of the voxels is calculated as $\hat{\mathbf{V}}_{(i,j,k)} = \frac{1}{P + 1}\sum_{p = 0}^{P}\mathcal{G}(\mathbf{V}_{(i,j,k)}^{(p)})$ , where $\mathcal{G}(\cdot)$ is the label classifier network. 
Then, this distribution is sharpened as:

$$
\tilde{\mathbf{V}}_{(i,j,k)}[c] = \hat{\mathbf{V}}_{(i,j,k)}^{\frac{1}{\tau_{\mathrm{cons}}}}[c] \Bigg/ \sum_{l=1}^{L} \hat{\mathbf{V}}_{(i,j,k)}^{\frac{1}{\tau_{\mathrm{cons}}}}[l], \quad (1 \leq c \leq L), \tag{4}
$$

which denotes the guessed probability of the $c^{th}$ class, with $\tau_{\mathrm{cons}}$ as the temperature hyperparameter that controls the sharpness of the categorical distribution. Accordingly, the final consistency regularization loss $\mathcal{L}_{\mathrm{cons}}$ is obtained by averaging the distances between the sharpened average and each prediction across all grid cells and augmentations:

$$
\mathcal{L}_{\mathrm{cons}} = \frac{\sum_{p=0}^{P} \sum_{k=1}^{Z} \sum_{j=1}^{W} \sum_{i=1}^{H} \left\| \tilde{\mathbf{V}}_{(i,j,k)} - \mathcal{G}\left(\mathbf{V}_{(i,j,k)}^{(p)}\right) \right\|_{2}^{2}}{H \cdot W \cdot Z \cdot (P + 1)}. \tag{5}
$$

# 3.4. Optimization

ProtoOcc minimizes four loss functions, one for each objective: 1) $\mathcal{L}_{\mathrm{occ}}$ for accurate semantic label prediction, 2) $\mathcal{L}_{\mathrm{Lov}}$ as a complementary loss for maximizing the mean Intersection over Union (mIoU), 3) $\mathcal{L}_{\mathrm{cls}}$ for enhancing the prototype feature quality, and lastly 4) $\mathcal{L}_{\mathrm{cons}}$ for regularizing the outputs from diverse contextual perspectives. The final objective function of the model is defined as:

$$
\mathcal{L}_{\mathrm{total}} = \sum_{p=0}^{P} \left(\lambda_{1} \mathcal{L}_{\mathrm{occ}}^{(p)} + \lambda_{2} \mathcal{L}_{\mathrm{Lov}}^{(p)}\right) + \lambda_{3} \mathcal{L}_{\mathrm{cls}} + \lambda_{4} \mathcal{L}_{\mathrm{cons}}, \tag{6}
$$

where $\lambda_1, \lambda_2, \lambda_3,$ and $\lambda_4$ are coefficient hyperparameters that determine the weights of the four losses.
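The per-cell averaging, temperature sharpening, and the consistency penalty of Eq. (5) can be sketched as follows. This is a minimal NumPy version; the array shapes and function names are our own illustrative choices, not the paper's code.

```python
import numpy as np

def sharpen(v_hat, tau=0.5):
    # Temperature sharpening of the averaged label distribution.
    powered = v_hat ** (1.0 / tau)
    return powered / powered.sum(axis=-1, keepdims=True)

def consistency_loss(preds, tau=0.5):
    """preds: (P+1, H, W, Z, L) per-view class probabilities from the
    classifier G. Returns the squared L2 distance between each prediction
    and the sharpened average, averaged over H*W*Z*(P+1) as in Eq. (5)."""
    v_hat = preds.mean(axis=0)                        # average over P+1 views
    v_tilde = sharpen(v_hat, tau)                     # sharpened guess
    sq = ((v_tilde[None] - preds) ** 2).sum(axis=-1)  # squared L2 per cell/view
    return sq.mean()

# Toy input: P+1 = 3 views, a 2x2x2 grid, L = 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 2, 2, 4))
preds = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
loss = consistency_loss(preds, tau=0.5)
```

Note that sharpening keeps a valid distribution while boosting the dominant class, which is what makes the regularizer pull the per-view predictions toward a confident consensus.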
The values we set for each coefficient are specified in Appendix B.1.

# 4. Experiments

# 4.1. Experiment Setup

Datasets and Evaluation Metrics. We train and evaluate our method on two representative tasks in 3D scene understanding: 3D occupancy prediction (3DOP) and 3D semantic scene completion (3DSSC). For 3DOP, we utilize the Occ3D-nuScenes [39] benchmark, and the quantitative result is measured with the mean Intersection-over-Union (mIoU) metric. For 3DSSC, we use the SemanticKITTI [2] benchmark, and measure IoU and mIoU for evaluation. More details of the datasets can be found in Appendix A.

Implementation Details. As our primary goal is to observe the influence of voxel query resolution on view transformation, we categorize the experimental settings by query resolution. For Occ3D-nuScenes, we make three variants: Base, Small, and Tiny. Base follows the standard baseline settings: a query size of $200 \times 200$ for BEVFormer and $100 \times 100 \times 16$ for PanoOcc and our ProtoOcc. For Small, we use a query size of $100 \times 100$ for BEVFormer, and $50 \times 50 \times 16$ for PanoOcc and ProtoOcc. Lastly, for Tiny, we further reduce the resolution to $50 \times 50$ for BEVFormer, and $50 \times 50 \times 4$ for PanoOcc and ProtoOcc. All three categories are trained with $432 \times 800$ sized images for 12 epochs. For SemanticKITTI, we make two variants: Base and Small. Here, we observe how our proposed methods can be applied to and improve existing baselines: VoxFormer [21] and Symphonies [14]. A query size of $128 \times 128 \times 16$ is used for Base, and $64 \times 64 \times 8$ for Small, for all baselines. More implementation details are reported in Appendix B.1.

# 4.2. Quantitative Analysis

3D Occupancy Prediction. Table 1 presents a quantitative comparison of 3D occupancy prediction between the baselines and ProtoOcc. Here, we observe two key insights. First, ProtoOcc achieves the highest mIoU in all query settings.
This strongly demonstrates the effectiveness of voxel representation learning through our proposed methods. Specifically, ProtoOcc excels at predicting important road agents (e.g. pedestrian, car, and bus). Furthermore, by comparing the results within the same color marks (● and ●), we notice that ProtoOcc achieves competitive results against baselines with higher-resolution queries. For example, with $75\%$ fewer parameters in view transformation, ProtoOcc achieves performance comparable to PanoOcc [48] in the larger resolution category. The gaps in mIoU are marginal (0.31 in Small-Base and 0.10 in Tiny-Small), and our ProtoOcc even outperforms its higher-resolution counterparts in a number of semantic classes.

3D Semantic Scene Completion. Table 2 reports the results of 3D semantic scene completion on the SemanticKITTI benchmark. As clearly seen from the table, our method also brings benefits in scene understanding as a plug-and-play module. Applying our method enhances the performance of both baselines in all metrics. It is important to note that, except for IoU in VoxFormer, all the smaller-resolution models surpass their larger variants when combined with our ProtoOcc. This implies that ProtoOcc overcomes an $87.5\%$ reduction in spatial capacity through its prototype-aware view transformation and multi-perspective occupancy decoding.

Table 1. Quantitative results on the Occ3D-nuScenes [39] validation set. We highlight the best and runner-up results for each category in bold and plain, respectively. Not only does ProtoOcc stand out in its category, but also by comparing the results within the same color marks (● and ●), it is apparent that ProtoOcc can overcome query deficiencies, performing on par even with higher-resolution counterparts.
| Query Size | Model | mIoU | others | barrier | bicycle | bus | car | const. veh. | motorcycle | pedestrian | traffic cone | trailer | truck | drive. surf. | other flat | sidewalk | terrain | manmade | vegetation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Base | CTF-Occ [39] | 28.50 | 8.09 | 39.33 | 20.56 | 38.29 | 42.24 | 16.93 | 24.52 | 22.72 | 21.05 | 22.98 | 31.11 | 53.33 | 33.84 | 37.98 | 33.23 | 20.79 | 18.00 |
| | TPVFormer [12] | 34.20 | 7.68 | 44.01 | 17.66 | 40.88 | 46.98 | 15.06 | 20.54 | 24.69 | 24.66 | 24.26 | 29.28 | 79.27 | 40.65 | 48.49 | 49.44 | 32.63 | 29.82 |
| | SurroundOcc [50] | 34.60 | 9.51 | 38.50 | 22.08 | 39.82 | 47.04 | 20.45 | 22.48 | 23.78 | 23.00 | 27.29 | 34.27 | 78.32 | 36.99 | 46.27 | 49.71 | 35.93 | 32.06 |
| | OccFormer [55] | 37.04 | 9.15 | 45.84 | 18.20 | 42.80 | 50.27 | 24.00 | 20.80 | 22.86 | 20.98 | 31.94 | 38.13 | 80.13 | 38.24 | 50.83 | 54.34 | 6.41 | 40.15 |
| | BEVFormer* [22] | 34.97 | 7.53 | 41.77 | 16.39 | 44.06 | 48.48 | 17.27 | 20.01 | 23.36 | 21.16 | 28.88 | 35.59 | 80.12 | 35.35 | 47.65 | 51.89 | 40.68 | 34.28 |
| | PanoOcc* [48] | 38.11 | 9.75 | 45.31 | 22.45 | 43.13 | 50.19 | 22.25 | 27.35 | 24.49 | 25.17 | 31.74 | 37.95 | 81.74 | 42.29 | 50.82 | 54.80 | 40.81 | 37.14 |
| | ProtoOcc (Ours) | 39.01 | 9.75 | 46.08 | 24.34 | 46.09 | 52.45 | 24.21 | 28.11 | 24.72 | 19.79 | 32.90 | 40.50 | 82.29 | 43.02 | 52.47 | 55.94 | 42.46 | 38.13 |
| Small | BEVFormer* [22] | 33.98 | 6.75 | 41.67 | 13.91 | 41.97 | 48.49 | 17.83 | 18.01 | 22.19 | 19.08 | 29.64 | 33.23 | 79.42 | 36.48 | 46.82 | 49.26 | 39.04 | 33.91 |
| | PanoOcc* [48] | 35.78 | 8.18 | 41.60 | 20.79 | 41.25 | 47.78 | 21.87 | 23.42 | 21.03 | 19.29 | 29.71 | 36.10 | 81.20 | 40.00 | 49.22 | 53.94 | 38.09 | 34.83 |
| | ProtoOcc (Ours)* | 37.80 | 9.28 | 43.64 | 22.30 | 44.72 | 50.07 | 23.68 | 25.23 | 22.77 | 19.66 | 30.43 | 38.73 | 82.05 | 42.61 | 51.68 | 55.84 | 41.91 | 38.05 |
| Tiny | BEVFormer [22] | 32.02 | 4.86 | 39.79 | 7.17 | 42.46 | 47.10 | 18.46 | 13.18 | 17.76 | 12.46 | 28.74 | 33.19 | 78.64 | 35.36 | 45.27 | 47.29 | 38.93 | 33.61 |
| | PanoOcc [48] | 33.99 | 6.97 | 39.60 | 18.80 | 40.67 | 45.63 | 18.19 | 21.43 | 19.10 | 16.53 | 25.99 | 35.15 | 80.60 | 38.44 | 49.02 | 52.11 | 36.81 | 32.87 |
| | ProtoOcc (Ours)* | 35.68 | 8.33 | 40.55 | 19.84 | 42.95 | 48.08 | 20.31 | 22.78 | 21.21 | 17.00 | 28.22 | 36.60 | 81.42 | 41.32 | 50.16 | 53.82 | 38.63 | 35.35 |
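The mIoU column in Table 1 is the arithmetic mean of the 17 per-class IoUs, which can be spot-checked directly; for example, for the ProtoOcc (Ours) Base row (pure-Python check, class values transcribed from the table):

```python
# Per-class IoUs from the ProtoOcc (Ours) Base row of Table 1:
# others, barrier, bicycle, bus, car, const. veh., motorcycle, pedestrian,
# traffic cone, trailer, truck, drive. surf., other flat, sidewalk,
# terrain, manmade, vegetation.
protoocc_base = [9.75, 46.08, 24.34, 46.09, 52.45, 24.21, 28.11, 24.72,
                 19.79, 32.90, 40.50, 82.29, 43.02, 52.47, 55.94, 42.46, 38.13]
miou = round(sum(protoocc_base) / len(protoocc_base), 2)
assert miou == 39.01  # matches the reported mIoU
```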
+ +Table 2. Quantitative results on the SemanticKITTI [2] validation set. Results of each semantic class are provided in Appendix D. Performance increases through our method are denoted in blue. + +
| Model | Pub. | IoU | mIoU |
|---|---|---|---|
| MonoScene [3] | CVPR 22 | 36.86 | 11.08 |
| TPVFormer [12] | CVPR 23 | 35.61 | 11.36 |
| OccFormer [55] | ICCV 23 | 36.50 | 13.46 |
| HASSC [46] | CVPR 24 | 44.82 | 13.48 |
| VoxFormer-S [21] | CVPR 23 | 43.10 | 11.51 |
| + ProtoOcc | - | 43.55 (+0.35) | 12.39 (+0.88) |
| VoxFormer-B [21] | CVPR 23 | 44.02 | 12.35 |
| + ProtoOcc | - | 44.90 (+0.85) | 13.57 (+1.22) |
| Symphonies-S [14] | CVPR 24 | 41.67 | 13.64 |
| + ProtoOcc | - | 43.02 (+1.35) | 14.50 (+0.86) |
| Symphonies-B [14] | CVPR 24 | 41.85 | 14.38 |
| + ProtoOcc | - | 42.12 (+0.27) | 14.83 (+0.45) |
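The $87.5\%$ spatial-capacity figure quoted for the SemanticKITTI Small setting follows directly from the Base and Small query sizes given in Section 4.1:

```python
# Base vs. Small voxel query resolutions on SemanticKITTI (Sec. 4.1).
base = 128 * 128 * 16   # Base query size
small = 64 * 64 * 8     # Small query size
reduction = 1 - small / base
assert reduction == 0.875  # i.e. the 87.5% reduction in spatial capacity
```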
View Transformation Efficiency. Table 3 compares the cost of inference time, FLOPs, and the number of parameters during the view transformation stage. First, by comparing PanoOcc-Base and PanoOcc-Small, we observe large savings from a reduced query resolution. Next, we compare the two view transformation methods introduced in DFA3D [18] and ProtoOcc under the Small query setting. Although the additional processing inevitably increases computation, our method is significantly more efficient than DFA3D, producing better results at lower cost in inference time and parameter count. Above all, our ProtoOcc-Small produces mIoU comparable to PanoOcc-Base while requiring $60.53\%$, $71.15\%$, and $65.16\%$ less inference time, FLOPs, and parameters, respectively.

# 4.3. Qualitative Analysis

Predictions on Challenging Scenarios. In this section, we visualize prediction results on challenging scenarios that the numerical results were unable to highlight.

Table 3. Comparison of Computational Efficiency.

| Model | Inf. Time | FLOPs (G) | Params | mIoU |
|---|---|---|---|---|
| PanoOcc-B [48] | 266 ms | 1,310 | 46.24M | 38.11 |
| PanoOcc-S | 66 ms | 330 | 15.52M | 35.78 |
| + DFA3D [18] | 153 ms | 360 | 22.33M | 36.27 |
| ProtoOcc-S (Ours) | 105 ms | 378 | 16.11M | 37.80 |

In the top example of Figure 3, the labels for approaching vehicles in the distance are absent in the ground truth. Accordingly, the baselines fail to predict the occupancy of the corresponding region. However, our high-level 2D prototype-aware encoding enables ProtoOcc to capture these long-range objects, even without ground-truth guidance. Furthermore, the bottom example emphasizes the superiority of ProtoOcc in occluded regions. We consider that this ability is developed through the multi-perspective learning of 3D space. More examples are provided in Appendix D.2.

Attention Map Visualization. The attention map is a crucial indicator of whether the 2D-to-3D view transformation has been successful. Figure 4 compares how attention maps of different quality can lead to substantial differences in predicted 3D occupancies. The high-level 2D prototypes guide ProtoOcc to distinguish scene components with clear boundaries. This naturally enables the model to identify important parts of the image to focus on with fine granularity. For example, ProtoOcc captures small yet crucial scene components (e.g. the pedestrian and motorcycle in Figure 4) and assigns them high attention weights, which leads to accurate predictions. In contrast, the baseline fails to capture these details and produces prediction misses that jeopardize the safety of the driving environment.

Feature Map Visualization. The prototype-aware view transformation forms a cluster for each significant region in

![](images/9f034fe7fd55c959b9e0129c8543b939514a8c104e0a3e3a0e26afe25923db1c.jpg)
Figure 3. Predictions in challenging scenarios. We visualize the prediction comparisons between the baseline and our ProtoOcc. The corresponding camera view is highlighted in yellow, and we label important semantic classes on top. Best viewed in color.

Table 4. Ablation on Model Design.
| Exp. # | Proto. Mapping | Proto. Optimization | MOD | mIoU |
|---|---|---|---|---|
| 1 | - | - | - | 35.78 |
| 2 | ✓ | - | - | 35.80 |
| 3 | ✓ | ✓ | - | 36.55 |
| 4 | - | - | ✓ | 37.25 |
| 5 | ✓ | ✓ | ✓ | 37.80 |
an image. Owing to this strategy, the encoding stage gains flexibility by not being overwhelmed by visually dominant regions. Figure 5 visualizes the feature map of voxel queries, which are flattened along the channel and the z-axis. As clearly seen in the illustration, the features of the baseline are biased towards visually dominant elements in the scene, such as the yard or building. In contrast, applying our prototype-aware view transformation to the baseline effectively enhances the recognition of smaller, yet crucial components for safe driving, such as vehicles.

# 4.4. Ablation Study

Model Design. Table 4 shows the effects of each method proposed in our ProtoOcc. Exp. 1 is a baseline model without any of our contributions. When our Prototype Mapping technique is applied (Exp. 2), we observe a slight increase in performance. However, coupling it with Prototype Optimization brings considerable improvements (Exp. 3). This implies that the explicit objective to refine the 2D prototype quality is highly beneficial for 3D voxel clustering and, as a result, helps to predict accurate semantic occupancy. In Exp. 4, we apply the Multi-perspective Occupancy Decoding (MOD) method to the baseline, which yields substantial gains on its own. This demonstrates that contextual diversity is crucial for predicting higher-dimensional 3D occupancy from lower-dimensional 2D images. Lastly, the full ProtoOcc model (Exp. 5) leverages the synergies of all contributions, resulting in superior performance.

Prototype Analysis. In this section, we analyze the Prototype-aware View Transformation in depth through three experiments: 1) sensitivity to the number of 2D prototypes $M$, 2) the choice of pseudo mask generator, and 3) the impact of the granularity level. In the first experiment, we fix the pseudo mask generator as SEEDS [41] and observe the performance sensitivity to $M$, which is determined by the pre-defined downsampling ratio $r$.
As shown in Table 6, we discover that overly fine ($r = 2$) or coarse ($r = 8$) clustering leads to suboptimal performance, and that $M = 350$ with $r = 4$ is an adequate number of prototypes. Second, Table 7 shows that the pseudo mask generator SAM [16], which generates more semantically consistent clusters, outperforms the traditional superpixel algorithm SEEDS under a consistent number of pseudo masks. Lastly, however, we observe in Table 8 that

Table 5. Effects of different augmentations and their consistency regularization (C.R.).
Exp. #1234567891011
AugmentationRandom Dropout-------
Gaussian Noise-------
Transpose-------
Flips-------
mIoUw/o C.R.35.7836.9936.4636.4936.7036.9536.7836.4736.7236.6636.59
w/ C.R.N/A37.1936.8036.8637.1837.2137.1636.9036.8237.2536.78
+ +Table 6. Sensitivity on the number of 2D prototypes $M$ + +
| 2D prototype M | 1440 (r = 2) | 350 (r = 4) | 144 (r = 6) | 91 (r = 8) |
|---|---|---|---|---|
| # Pseudo Mask | 299 | 299 | 299 | 299 |
| mIoU | 36.90 | 37.80 | 36.95 | 37.07 |
+ +Table 7. Effect of pseudo mask quality. + +
| 2D prototype M | Mask Generator | # Pseudo Mask | mIoU |
|---|---|---|---|
| 350 | SEEDS [41] | 91 | 37.14 |
| 350 | Segment Anything [16] | 92 | 37.29 |
+ +Table 8. Impact of the granularity level. + +
| 2D prototype M | Mask Generator | # Pseudo Mask | mIoU |
|---|---|---|---|
| 91 | SEEDS [41] | 299 | 37.07 |
| 91 | Segment Anything [16] | 92 | 37.21 |
| 350 | SEEDS [41] | 299 | 37.80 |
| 350 | Segment Anything [16] | 92 | 37.29 |
aligning the number of 2D prototypes $M$ with the number of pseudo masks is more crucial for facilitating the Prototype Optimization.

Augmentations and Consistency Regularization. ProtoOcc introduces both feature- and spatial-level voxel augmentation methods for diversifying 3D contexts. Table 5 shows the influence of each augmentation and their combinations, with and without scene consistency regularization. Exp. 1 is the baseline model, just as in Table 4. The results highlight three key insights. First, augmentation by itself brings considerable benefits, demonstrating the importance of contextual diversity for occupancy decoding. Second, further regularizing the consistency of their outputs guarantees that they represent the same scene, thereby naturally boosting robustness. Lastly, incorporating numerous augmentations for diversity can occasionally hinder robust feature learning and does not consistently lead to proportional performance improvements. For this reason, we conduct empirical experiments with combinations of no more than two augmentations.

# 5. Conclusion

This paper introduces ProtoOcc, an innovative method addressing the challenge of enriching contextual representation in low-resolution voxel queries for camera-based 3DOP. ProtoOcc enhances the interpretability of smaller-sized queries while mitigating information loss through two context-enriching strategies: 1) a prototype-aware view transformation and 2) a multi-perspective occupancy decoding. Experimental evaluations on both the Occ3D and SemanticKITTI benchmarks underscore the significance of ProtoOcc, revealing substantial enhancements over previous approaches. Notably, ProtoOcc achieves competitive performance even with smaller-sized queries, offering a promising solution for real-time deployment without compromising predictive accuracy.

![](images/479b742a7a8dbfb78d0d1379a306305a4b911a2d232bad56958224211043ed75.jpg)

![](images/14a86a4943fccd26aab3f09bc99868e0753b77115663d4827478d33faaadac53.jpg)
PanoOcc

![](images/de599dc852f7930ad91698b4eadbd63b87a2f1dfc0b6d5c32b57dd8f0ce459dc.jpg)

![](images/023730a283f56423df326f0e6b653ead40ec540fb0454b28ef3e0bc756dcc3d9.jpg)
Ours

![](images/7dcc2f6109947047bf781acd363e2c1cba3bfe07c4a7f312b2883b5984507acd.jpg)
Figure 4. Attention map visualization. Compared to the baseline, ProtoOcc can attend to more important details in the image (e.g. red dashed boxes), which is crucial for safe driving systems.

![](images/053270008fb99940b9bf517beebf4cd13fe426bb7e075350c57ef9c5429b61b1.jpg)

![](images/a569111017477fbb51ace970d287636b46afd50a32ceabf4ec68ef7430c5e9ea.jpg)

![](images/299eb35701f84248129c1841392dfe481248200cf080383103c6bd8813eae71d.jpg)
PanoOcc

![](images/02a33aeabe998e997b74489da54ef2a57037416f9614a13d6b231dca8a6e6b5b.jpg)
PanoOcc (+ PVT)
Figure 5. Feature map visualization. We visualize the learned voxel queries, which are average-pooled on the channel and the z-axis. The red box indicates the region we focus on.

# Acknowledgement

This work was primarily supported by Samsung Advanced Institute of Technology (SAIT) $(50\%)$, the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024 (Research on neural watermark technology for copyright protection of generative AI 3D content, RS-2024-00348469, $25\%$; International Collaborative Research and Global Talent Development for the Development of Copyright Management and Protection Technologies for Generative AI, RS-2024-00345025, $14\%$), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00521602, $10\%$), and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.
RS-2019-II190079, Artificial Intelligence Graduate School Program (Korea University), $1\%$ ), and Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea)&Gwangju Metropolitan City. + +# References + +[1] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Susstrunk. Slic superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence, 34(11): 2274-2282, 2012. 3, 2 +[2] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyril Stachniss, and Jurgen Gall. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9297-9307, 2019. 2, 5, 6, 1 +[3] Anh-Quan Cao and Raoul De Charette. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3991–4001, 2022. 2, 6, 1, 5 +[4] Jian Ding, Nan Xue, Gui-Song Xia, Bernt Schiele, and Dengxin Dai. Hgformer: Hierarchical grouping transformer for domain generalized semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15413-15423, 2023. 3, 2 +[5] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural networks for semi-supervised learning on graphs. Advances in neural information processing systems, 33:22092-22103, 2020. 5 +[6] Pedro J Freire, Yevhenii Osadchuk, Bernhard Spinnler, Antonio Napoli, Wolfgang Schairer, Nelson Costa, Jaroslaw E Prilepsky, and Sergei K Turitsyn. Performance versus complexity study of neural network equalizers in coherent optical systems. Journal of Lightwave Technology, 39(19):6085-6096, 2021. 1 +[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 3 + +[8] Weixiang Hong, Qingpei Guo, Wei Zhang, Jingdong Chen, and Wei Chu. Lpsnet: A lightweight solution for fast panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, page 16746-16754, 2021. 1 +[9] Junjie Huang and Guan Huang. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054, 2022. 2 +[10] Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, and Dalong Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790, 2021. 2 +[11] Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, and Jiwen Lu. Selfocc: Self-supervised vision-based 3d occupancy prediction. arXiv preprint arXiv:2311.12754, 2023. 1, 2 +[12] Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, and Jiwen Lu. Tri-perspective view for vision-based 3d semantic occupancy prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9223-9232, 2023. 1, 2, 6, 5 +[13] Varun Jampani, Deqing Sun, Ming-Yu Liu, Ming-Hsuan Yang, and Jan Kautz. Superpixel sampling networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 352-368, 2018. 3, 2 +[14] Haoyi Jiang, Tianheng Cheng, Naiyu Gao, Haoyang Zhang, Tianwei Lin, Wenyu Liu, and Xinggang Wang. Symphonize 3d semantic scene completion with contextual instance queries. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20258-20267, 2024. 2, 5, 6, 1, 3, 4 +[15] Sungjune Kim, Hadam Baek, Seunggwan Lee, Hyung-gun Chi, Hyerin Lim, Jinkyu Kim, and Sangpil Kim. Enhanced motion forecasting with visual relation reasoning. In European Conference on Computer Vision, pages 311-328. Springer, 2024. 
2 +[16] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015-4026, 2023. 4, 7, 8 +[17] Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan Hoyer. Machine learning-accelerated computational fluid dynamics. Proceedings of the National Academy of Sciences, 118(21):e2101784118, 2021. 1 +[18] Hongyang Li, Hao Zhang, Zhaoyang Zeng, Shilong Liu, Feng Li, Tianhe Ren, and Lei Zhang. Dfa3d: 3d deformable attention for 2d-to-3d feature lifting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6684–6693, 2023. 3, 6 +[19] Jie Li, Kai Han, Peng Wang, Yu Liu, and Xia Yuan. Anisotropic convolutional networks for 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3351-3359, 2020. 2, 5 + +[20] Yinhao Li, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, and Zeming Li. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1477-1485, 2023. 2 +[21] Yiming Li, Zhiding Yu, Christopher Choy, Chaowei Xiao, Jose M Alvarez, Sanja Fidler, Chen Feng, and Anima Anandkumar. Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9087-9098, 2023. 2, 5, 6, 1, 3, 4 +[22] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In European conference on computer vision, pages 1-18. Springer, 2022. 
2, 3, 6, 1, 4 +[23] James Liang, Yiming Cui, Qifan Wang, Tong Geng, Wenguang Wang, and Dongfang Liu. Clusterfomer: Clustering as a universal visual learner. Advances in Neural Information Processing Systems, 36, 2024. 2, 3 +[24] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 3 +[25] Haisong Liu, Haiguang Wang, Yang Chen, Zetong Yang, Jia Zeng, Li Chen, and Limin Wang. Fully sparse 3d panoptic occupancy prediction. arXiv preprint arXiv:2312.17118, 2023.2 +[26] Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. 1 +[27] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 4 +[28] Yingfei Liu, Tiancai Wang, Xiangyu Zhang, and Jian Sun. Petr: Position embedding transformation for multi-view 3d object detection. In European Conference on Computer Vision, pages 531-548. Springer, 2022. 2 +[29] Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Aqi Gao, Tiancai Wang, and Xiangyu Zhang. Petrv2: A unified framework for 3d perception from multi-camera images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3262-3272, 2023. 2 +[30] Qihang Ma, Xin Tan, Yanyun Qu, Lizhuang Ma, Zhizhong Zhang, and Yuan Xie. Cotr: Compact occupancy transformer for vision-based 3d occupancy prediction. arXiv preprint arXiv:2312.01919, 2023. 2 +[31] Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, and Yun Fu. Image as set of points. arXiv preprint arXiv:2303.01494, 2023. 
3 +[32] Jianbiao Mei, Yu Yang, Mengmeng Wang, Junyu Zhu, Xiangrui Zhao, Jongwon Ra, Laijian Li, and Yong Liu. Camera-based 3d semantic scene completion with sparse guidance network. arXiv preprint arXiv:2312.05752, 2023. 2 + +[33] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16, pages 194-210. Springer, 2020. 2 +[34] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 4 +[35] Byungseok Roh, JaeWoong Shin, Wuhyun Shin, and Saehoon Kim. Sparse detr: Efficient end-to-end object detection with learnable sparsity. arXiv preprint arXiv:2111.14330, 2021. 4 +[36] Wonseok Roh, Gyusam Chang, Seokha Moon, Giljoo Nam, Chanyoung Kim, Younghyun Kim, Jinkyu Kim, and Sangpil Kim. Ora3d: Overlap region aware multi-view 3d object detection. arXiv preprint arXiv:2207.00865, 2022. 2 +[37] Luis Roldao, Raoul de Charette, and Anne Verroust-Blondet. Lmscnet: Lightweight multiscale 3d semantic completion. In 2020 International Conference on 3D Vision (3DV), pages 111–119. IEEE, 2020. 5 +[38] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1746-1754, 2017. 2 +[39] Xiaoyu Tian, Tao Jiang, Longfei Yun, Yucheng Mao, Huitong Yang, Yue Wang, Yilun Wang, and Hang Zhao. Occ3d: A large-scale 3d occupancy prediction benchmark for autonomous driving. Advances in Neural Information Processing Systems, 36, 2024. 
2, 3, 5, 6, 1, 4 +[40] Wenwen Tong, Chonghao Sima, Tai Wang, Li Chen, Silei Wu, Hanming Deng, Yi Gu, Lewei Lu, Ping Luo, Dahua Lin, et al. Scene as occupancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8406-8415, 2023. 1, 2 +[41] Michael Van den Bergh, Xavier Boix, Gemma Roig, Benjamin De Capitani, and Luc Van Gool. Seeds: Superpixels extracted via energy-driven sampling. In Computer Vision-ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VII 12, pages 13-26. Springer, 2012. 4, 7, 8 +[42] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, and Luc Van Gool. Unsupervised semantic segmentation by contrasting object mask proposals. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10052-10062, 2021. 4 +[43] Antonin Vobecky, Oriane Simeoni, David Hurych, Spyridon Gidaris, Andrei Bursuc, Patrick Pérez, and Josef Sivic. Pop-3d: Open-vocabulary 3d occupancy prediction from images. Advances in Neural Information Processing Systems, 36, 2024. 1, 2 +[44] Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis McCarthy, Helen Frazer, and Gustavo Carneiro. Learning support and trivial prototypes for interpretable image classification. In Proceedings of the IEEE/CVF Inter- + +national Conference on Computer Vision, pages 2062-2072, 2023. 2 +[45] Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Max-deeplab: End-to-end panoptic segmentation with mask transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5463-5474, 2021. 4 +[46] Song Wang, Jiawei Yu, Wentong Li, Wenyu Liu, Xiaolu Liu, Junbo Chen, and Jianke Zhu. Not all voxels are equal: Hardness-aware semantic scene completion with self-distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14792-14801, 2024. 
6, 1, 5 +[47] Yue Wang, Vitor Campagnolo Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, and Justin Solomon. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, pages 180–191. PMLR, 2022. 2 +[48] Yuqi Wang, Yuntao Chen, Xingyu Liao, Lue Fan, and Zhaoxiang Zhang. Panoocc: Unified occupancy representation for camera-based 3d panoptic segmentation. arXiv preprint arXiv:2306.10013, 2023. 1, 2, 3, 5, 6, 4 +[49] Yuqi Wang, Yuntao Chen, and Zhaoxiang Zhang. Frustumformer: Adaptive instance-aware resampling for multi-view 3d detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5096-5105, 2023. 3 +[50] Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, and Jiwen Lu. Surroundocc: Multi-camera 3d occupancy prediction for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21729-21740, 2023. 1, 2, 6 +[51] Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li, Rui Huang, and Shuguang Cui. Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3101-3109, 2021. 5 +[52] Fengting Yang, Qian Sun, Hailin Jin, and Zihan Zhou. Superpixel segmentation with fully convolutional networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13964-13973, 2020. 3, 2 +[53] Haiming Zhang, Xu Yan, Dongfeng Bai, Jiantao Gao, Pan Wang, Bingbing Liu, Shuguang Cui, and Zhen Li. Radocc: Learning cross-modality occupancy knowledge through rendering assisted distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 7060-7068, 2024. 1 +[54] Jiahui Zhang, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. Efficient semantic scene completion network with spatial group convolution. 
In Proceedings of the European Conference on Computer Vision (ECCV), pages 733-749, 2018. 2 +[55] Yunpeng Zhang, Zheng Zhu, and Dalong Du. Occformer: Dual-path transformer for vision-based 3d semantic occupancy prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9433-9443, 2023. 1, 2, 6, 5 + +[56] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020.3 \ No newline at end of file diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/images.zip b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..30878f455be505bf60d7f20d2f450d89ad1be1d8 --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b740473081dd1bd19a4e3a29c91d13356258e568647594aaa7752647fb8846f +size 774830 diff --git a/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/layout.json b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e4fb398050a1d3091171d646febee472822138eb --- /dev/null +++ b/CVPR/2025/3D Occupancy Prediction with Low-Resolution Queries via Prototype-aware View Transformation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a6fa155995a4f95fe42153f054aad0edb45ca51888406acd648b545556a63c6 +size 448367 diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_content_list.json b/CVPR/2025/3D Prior Is All 
You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..af7ff29a4b037daab0f7cea7d127f08885bf337f --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62505c83ea2fdfae93ec5245cbd507aaab92d6fb016994d49359082766accedf +size 80903 diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_model.json b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c5e8557ffcd4b8ca37bfb88c6f983368972d6f08 --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b7d4f33625d5e7c46eda4c1b22096017e06e016858e15ca1eea7936aed24846 +size 97371 diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_origin.pdf b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b831799ece61d01cbecc77bcc654c5df530cad94 --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/768d9909-7302-4b31-81de-9e15acdeb35e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56df54e6f63b3444d433d578dc15f2b958fb99687f300a3b8b4816494f91f722 +size 650206 diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/full.md b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D 
Gaze Estimation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2175f6e9746f2a92f8c2bb639db8cd6edf236cb9 --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/full.md @@ -0,0 +1,332 @@ +# 3D Prior is All You Need: Cross-Task Few-shot 2D Gaze Estimation + +Yihua Cheng $^{1}$ , Hengfei Wang $^{1}$ , Zhongqun Zhang $^{1*}$ , Yang Yue $^{1}$ , Boeun Kim $^{1,3,4}$ , Feng Lu $^{2}$ , Hyung Jin Chang $^{1}$ + +1University of Birmingham, 2Beihang University, 3Dankook University, 4KETI + +www.yihuazone/work/gaze322 + +# Abstract + +3D and 2D gaze estimation share the fundamental objective of capturing eye movements but are traditionally treated as two distinct research domains. In this paper, we introduce a novel cross-task few-shot 2D gaze estimation approach, aiming to adapt a pre-trained 3D gaze estimation network for 2D gaze prediction on unseen devices using only a few training images. This task is highly challenging due to the domain gap between 3D and 2D gaze, unknown screen poses, and limited training data. To address these challenges, we propose a novel framework that bridges the gap between 3D and 2D gaze. Our framework contains a physics-based differentiable projection module with learnable parameters to model screen poses and project 3D gaze into 2D gaze. The framework is fully differentiable and can integrate into existing 3D gaze networks without modifying their original architecture. Additionally, we introduce a dynamic pseudo-labelling strategy for flipped images, which is particularly challenging for 2D labels due to unknown screen poses. To overcome this, we reverse the projection process by converting 2D labels to 3D space, where flipping is performed. Notably, this 3D space is not aligned with the camera coordinate system, so we learn a dynamic transformation matrix to compensate for this misalignment. 
We evaluate our method on MPIIGaze, EVE, and GazeCapture datasets, collected respectively on laptops, desktop computers, and mobile devices. The superior performance highlights the effectiveness of our approach, and demonstrates its strong potential for real-world applications. + +# 1. Introduction + +Gaze estimation tracks eye movements to predict human attention [13]. It is a highly applied research topic, where various application scenarios, such as intelligent vehicles [14, 23], VR/AR [26, 28, 31], and disease diagnosis [6, 34] demand distinct and specialized gaze estimation solutions. + +![](images/aafe349ab7999ebd54c25a9db165fcbbdf44a3385c44e0b5d05acd70fadf85f1.jpg) +Figure 1. We introduce a novel cross-task few-shot 2D gaze estimation approach. Our method leverages a pre-trained 3D gaze estimation network and few-shot 2D gaze samples to achieve 2D gaze estimation on unseen devices. It contains a physics-based differentiable projection module to bridge 3D and 2D gaze, along with a dynamic pseudo-labelling strategy for 2D labels under unknown screen poses. Our approach is both screen-calibration-free and source-free, significantly expanding its application potential. + +Recent gaze estimation methods primarily focus on 3D gaze estimation [8, 36], wherein 3D direction vectors are derived from facial images. Such methods exhibit high adaptability, facilitating straightforward application in diverse environments [38]. However, they present limitations in practical applications, such as human-computer interaction, where precise gaze targets are essential. Existing approaches often require post-processing to calibrate the pose of interacted objects, e.g., a screen, and compute the intersection between gaze and objects [40]. This process poses significant challenges, particularly for non-expert users. + +Conversely, some methods directly estimate 2D gaze within screen coordinate systems. 
Deep learning-based approaches utilize large training datasets to map facial images to 2D gaze [4, 24]. However, these models are often entangled with multiple device-specific factors. Traditional approaches construct 3D eye models using prior anatomical knowledge and fit these models with few-shot calibration images [18]. Although such methods require specialized equipment for precise eye tracking, they raise the question: Can we achieve similar patterns within the deep learning paradigm for quick adaptation across various devices?

In this work, we explore a novel topic, cross-task few-shot 2D gaze estimation. We observe that 3D gaze estimation has recently gained significant attention in the research community. It is performed within 3D space, free from entanglement with specific devices. These insights suggest that 3D gaze estimation models would be a strong prior, similar to the 3D anatomical eye model in traditional methods. Therefore, our approach aims to utilize 3D gaze estimation as a prior and adapt it efficiently for 2D gaze estimation. However, this setting introduces several significant challenges, such as the domain gap between 3D and 2D gaze tasks, unseen device settings, i.e., $w/o$ screen calibration, and insufficient training data. We show a comparison of our task with common methods in Table 1.

To address these challenges, we first propose a novel framework to bridge the gap between 2D and 3D gaze estimation. We decompose 2D gaze estimation into two components: 3D gaze estimation and gaze projection. We first estimate 3D gaze from face images, and then project the 3D gaze onto a specific 2D plane to infer the 2D gaze. Unlike existing methods that require screen calibration to obtain the screen pose [1, 13], our framework includes a physics-based differentiable projection module. This module models the screen pose using six learnable parameters, i.e., rotation and translation vectors that map the screen coordinate system to the camera coordinate system.
By implementing the projection in a fully differentiable manner, our framework enables seamless integration of the projection module into any existing 3D gaze estimation model without changing its original architecture. Furthermore, since the framework is fully differentiable, it supports fine-tuning on 2D annotated data.

We further propose a dynamic pseudo-labelling strategy for 2D labels in our framework. Specifically, we perform flipping on face images and aim to assign pseudo-labels to the flipped images. While this process is straightforward for 3D gaze annotations, it becomes more complex for 2D gaze due to dependencies on factors like head position and screen pose, especially when the screen pose is unknown. To address this, we perform dynamic pseudo-labelling during training. In each iteration, we reverse the projection process using the learnable screen parameters to convert 2D labels into 3D labels. This allows us to perform flipping directly in the 3D gaze space. A key insight is that flipping needs to occur in the camera coordinate system, while accounting for a shift in coordinate systems during training. To handle this, we learn a dynamic transformation that maps the shifted system back to the camera coordinate system, ensuring reliable pseudo-label generation. Additionally, we apply color jittering during training, which does not alter the 2D gaze labels, and minimize uncertainty across jittered images to improve robustness.

Overall, our main contributions are four-fold:

1. We explore the novel topic of cross-task few-shot 2D

Table 1. Comparison of our method with existing methods. We introduce an unexplored task in gaze estimation, which aims to adapt 3D gaze models for 2D gaze estimation with few-shot data.
| Category | Train | Test | Cross Env. | Cross Task | Methods |
| --- | --- | --- | --- | --- | --- |
| 3D Gaze Estimation | 3D | 3D | × | × | [8, 12, 21] |
| 2D Gaze Estimation | 2D | 2D | × | × | [4, 24] |
| Personalize | 2D | 2D | × | × | [19] |
| Personalize | 3D | 3D | ✓ | × | [25, 29] |
| Domain Adaptation | 3D | 3D | ✓ | × | [2, 3, 7] |
| Ours | 3D | 2D | ✓ | ✓ | None |
gaze estimation. This topic not only extends the application of 3D gaze research to the 2D domain but also provides a promising direction for real-world applications.

2. We propose a framework to bridge 3D and 2D gaze estimation, which includes a physics-based differentiable projection module with six learnable screen parameters to convert 3D gaze to 2D gaze. By leveraging this framework, we can quickly adapt a 3D gaze model for 2D gaze estimation using only a small number of images.
3. We propose a dynamic pseudo-labeling strategy for 2D labels in our framework. We reverse the projection using learnable screen parameters to convert 2D labels back into 3D labels and perform pseudo-labeling in the 3D gaze space. Furthermore, we learn a dynamic transformation to address the shifted coordinate system problem.
4. We establish a benchmark for cross-task few-shot 2D gaze estimation, and evaluate our method on three datasets covering daily scenarios, including laptops, desktop computers, and mobile devices. The superior performance demonstrates the advantage of our approach.

# 2. Related Works

# 2.1. Gaze Estimation

Gaze estimation methods are generally classified into 3D and 2D gaze estimation based on output [13]. 3D gaze estimation defines gaze as a directional vector originating from the face toward gaze targets [11, 36]. It typically focuses on enhancing accuracy and generalizability across diverse environments. Related research spans several fields, including supervised learning [8-10], unsupervised domain adaptation [2, 3, 7], feature disentanglement [12, 29, 35], etc.

On the other hand, 2D gaze estimation is primarily applied in screen-based contexts, where gaze is represented as a pixel coordinate in the screen coordinate system [4, 16, 24]. Compared to 3D gaze estimation, 2D gaze estimation is more directly applicable to human-computer interaction [20, 27, 33].
However, it becomes entangled with multiple device-specific factors, such as screen size and camera-screen pose, which complicate generalization across various setups. The adaptation of 2D gaze estimation methods remains a notable research challenge.

![](images/201d37dcbae1992a48e6803cadb6ed7f556d26301ce2787455305dd3923e4102.jpg)
Figure 2. We propose a framework for cross-task few-shot 2D gaze estimation. The framework contains a physics-based differentiable projection module with learnable parameters $\mathbf{r}$ and $\mathbf{t}$ to model the screen pose and project 3D gaze into 2D gaze. The framework is fully differentiable and can integrate into existing 3D gaze networks without modifying their original architecture. Leveraging this framework, we can quickly adapt a 3D gaze model for 2D gaze estimation using only a small number of images.

Although these gaze estimation methods fundamentally capture eye movement, the distinct differences between 3D and 2D gaze annotations define them as separate research areas. In this work, we propose a framework to bridge the gap between 2D and 3D gaze estimation, enabling the direct application of 3D gaze research to 2D gaze estimation.

# 2.2. 2D Gaze Estimation via Projection

It is typical to compute the intersection between 3D gaze and a 2D plane for 2D gaze, a process referred to as gaze projection in this work. The most common application is in VR [15, 32], where the head-mounted display provides 3D gaze estimation, and developers can easily obtain a plane pose in VR space. They project the gaze onto this plane or determine if it intersects for interaction.

This strategy is also used in deep-learning based gaze estimation methods. They calibrate the screen pose during the post-processing stage and convert 3D gaze into 2D gaze on the screen [13, 40]. Recently, some methods have attempted to inject the projection into the deep learning framework. Balim et al.
[1] first require screen calibration to obtain screen parameters and then model the projection process using the calibrated pose. Cheng et al. [14] focus on estimating gaze zones on vehicle windshields. They define a basis tri-plane, project 3D gaze onto this plane, and then learn a mapping from the interaction points to the gaze zone.

In our work, we model the full projection process by defining the screen pose with six learnable parameters. The projection module is parameter-efficient. More importantly, our method does not require screen calibration, which can be challenging for non-expert users.

# 3. Methodology

# 3.1. Task Definition

Given a pre-trained 3D gaze estimation network $\mathcal{H}_{3D}(\mathbf{I};\beta)$ which takes face images $\mathbf{I}$ as input and outputs 3D gaze direction $\mathbf{g}$, i.e., $\mathcal{H}_{3D}:\mathbf{I}\rightarrow \mathbf{g}$, our objective is to develop a 2D gaze estimation network $\mathcal{H}_{2D}(\mathbf{I};\theta)$. Using few-shot training samples $\mathcal{D} = \{(\mathbf{I}_i,\mathbf{p}_i)\}_{i = 1}^N$, where $N$ is the number of training samples, this network estimates 2D pixel coordinates $\mathbf{p}$ from face images, i.e., $\mathcal{H}_{2D}:\mathbf{I}\to \mathbf{p}$. We consider a restricted setting where: 1) the method is source-free, as the training set of $\mathcal{H}_{3D}$ is unavailable, and 2) it is screen calibration-free, with the screen pose unspecified. These restrictions make our method convenient for practical applications while upholding data privacy.
To address this, our idea is to decompose 2D gaze estimation into 3D gaze estimation and gaze projection. Specifically, we incorporate $\mathcal{H}_{3D}$ as part of $\mathcal{H}_{2D}$, supplemented with an additional module for projecting gaze directions onto a 2D screen. Unlike existing gaze projection strategies that often rely on post-processing [13] or require screen calibration [1], we introduce a physics-based differentiable projection module. This module models the screen pose as learnable weights, enabling the projection process to occur in a differentiable and adaptable manner.

In detail, we define learnable weights $\mathbf{r} \in \mathbb{R}^3$ as the rotation vector and $\mathbf{t} \in \mathbb{R}^3$ as the translation vector within the projection module, establishing the transformation from the screen coordinate system to the camera coordinate system. To transform $\mathbf{r}$ into the rotation matrix $\mathbf{R} \in \mathbb{R}^{3 \times 3}$, we apply the Rodrigues formula, which preserves the orthogonality of $\mathbf{R}$ so that $\mathbf{R} \in SO(3)$. The input of the projection module contains the gaze direction $\mathbf{g} \in \mathbb{R}^3$ and the 3D position of the face center $\mathbf{o} \in \mathbb{R}^3$, the latter of which can be computed using existing 3D landmark estimation methods [17]. Overall, the module $\mathcal{P}$ can be denoted as:

$$
\hat{\mathbf{p}} = \mathcal{P}(\mathbf{g}, \mathbf{o}; \mathbf{r}, \mathbf{t}), \tag{1}
$$

where $\hat{\mathbf{p}}\in \mathbb{R}^2$ represents the estimated screen coordinate.

We first compute the intersection points between gaze directions and the learnable screen denoted by $(\mathbf{r},\mathbf{t})$. To establish the screen pose, we need a normal vector $\mathbf{n}$ and a point coordinate on the screen. The normal vector can be derived as $\mathbf{R}[:, 2]$, i.e., the third column of the rotation matrix [13], while $\mathbf{t}$ serves as a reference point on the plane.
Given that the dot product between the normal vector of a plane and the vector connecting any point on the plane to a fixed point is constant, it follows that the intersection point $\mathbf{p}_{3D}$ is

$$
\mathbf{p}_{3D} = \mathbf{o} + \frac{(\mathbf{t} - \mathbf{o}) \cdot \mathbf{n}}{\mathbf{g} \cdot \mathbf{n}} \mathbf{g} . \tag{2}
$$

Note that $\mathbf{p}_{3D}$ represents coordinates in the camera coordinate system. To convert it to the screen coordinate system, we apply

$$
\mathbf{p} = \mathbf{R}^{-1} \left(\mathbf{p}_{3D} - \mathbf{t}\right) . \tag{3}
$$

We slightly abuse the notation $\mathbf{p}$ in Eq. 3, where the final 2D gaze coordinate corresponds to the first two components of $\mathbf{p}$. These values can then be further converted into pixel coordinates by utilizing the screen's PPI (pixels per inch), which is easily obtainable as a screen parameter.

Therefore, the network $\mathcal{H}_{2D}$ can be denoted as

$$
\mathcal{H}_{2D}(\mathbf{I}, \mathbf{o}; \beta, \mathbf{r}, \mathbf{t}) = \mathcal{P}\left(\mathcal{H}_{3D}(\mathbf{I}), \mathbf{o}\right), \tag{4}
$$

and the objective function is denoted as

$$
\min_{\beta, \mathbf{r}, \mathbf{t}} \sum_{i=1}^{N} \left\| \mathcal{H}_{2D}\left(\mathbf{I}_{i}, \mathbf{o}_{i}\right) - \mathbf{p}_{i} \right\|_{1} . \tag{5}
$$

We illustrate the projection module in Figure 2.

# 3.3. Dynamic Pseudo-Labeling for 2D Gaze

Data augmentation is a typical technique to improve model performance, particularly with limited dataset sizes. In this section, we apply flipping to expand the data space. In 3D gaze estimation, flipping involves horizontally flipping the face image and adjusting the label by negating the x-coordinate value. We formally denote this operation on the label as $\mathcal{F}(\mathbf{g})$. However, generating reliable pseudo-labels after flipping is challenging for 2D gaze estimation.
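As a concrete reference for the projection module $\mathcal{P}$ of Section 3.2 (Rodrigues rotation, the ray-screen intersection of Eq. 2, and the conversion of Eq. 3), here is a minimal NumPy sketch. Function names and unit conventions are ours, not the authors' implementation:

```python
import numpy as np

def rodrigues(r):
    """Rotation vector r (axis * angle) -> rotation matrix in SO(3)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    kx, ky, kz = r / theta
    # Skew-symmetric cross-product matrix of the unit axis.
    K = np.array([[0.0, -kz, ky],
                  [kz, 0.0, -kx],
                  [-ky, kx, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(g, o, r, t):
    """P(g, o; r, t): intersect the gaze ray from o along g with the
    learnable screen plane (Eq. 2), then express the hit point in
    screen coordinates (Eq. 3)."""
    R = rodrigues(r)
    n = R[:, 2]                              # screen normal: third column of R
    p3d = o + ((t - o) @ n) / (g @ n) * g    # Eq. 2: ray-plane intersection
    p = R.T @ (p3d - t)                      # Eq. 3: R^{-1} = R^T since R is a rotation
    return p[:2]                             # metric 2D coordinate; scale by PPI for pixels
```

With $\mathbf{r} = \mathbf{0}$ (screen parallel to the image plane) and $\mathbf{t} = (0, 0, 0.5)$, a gaze ray $(0.5, 0, 1)$ from the origin hits the screen at $(0.25, 0)$, which matches evaluating Eq. 2 by hand.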
Our core idea is to dynamically generate pseudo-labels during training by leveraging the differentiable projection module within our framework, which includes learnable screen parameters. This enables us to address the challenge of assigning 2D pseudo-labels by reversing the projection process, i.e., converting 2D screen coordinates into 3D gaze directions, where we can then apply flipping in 3D space. The pseudo-labeling function $\mathcal{Q}(\mathbf{p})$ is defined as

$$
\mathcal{Q}(\mathbf{p}) = \mathcal{P}\left(\mathcal{F}\left(\mathcal{P}^{-1}(\mathbf{p})\right)\right), \tag{6}
$$

where $\mathcal{P}^{-1}$ represents the reverse projection process. Specifically, we first transform $\mathbf{p}$ into the camera coordinate system. The gaze direction is then defined as the vector originating from the face center and directed toward the gaze point,

$$
\mathcal{P}^{-1}(\mathbf{p}, \mathbf{o}) = (\mathbf{R}\mathbf{p} + \mathbf{t}) - \mathbf{o}, \tag{7}
$$

which we normalize to ensure the vector has unit length.

![](images/746a4a30863321fdbe5747c8c2354f1d82a27b99d41ccb0a0a0cdde26e312610.jpg)
Figure 3. The dynamic pseudo-labeling strategy for 2D gaze involves reversing the projection process to convert 2D gaze into 3D space, where we compute pseudo-labels. To align the camera coordinate system (CCS) with the unknown coordinate system (UCS), we use the same image sets as input to both the initial and the updated 3D model. The initial model was trained in the CCS, while the updated model operates within the UCS. By leveraging the outputs from these models as two anchors, we derive the transformation $\mathcal{T}$ to align the coordinate systems. Notably, $\mathcal{T}$ should be invertible.

However, we observed that assigning pseudo-labels as in Eq. 6 led to model collapse, with the pseudo-labels diverging to large values during training.
On the other hand, we found that $\mathcal{H}_{2D}$ struggled to learn the correct screen parameters, and noted substantial changes in the 3D gaze estimation network itself. Our intuition suggests that changing the screen pose should theoretically allow us to find an optimal screen pose, but this could also be achieved by rotating the camera instead, i.e., by optimizing $\mathcal{H}_{3D}$.

Based on these observations, we find that Eq. 6 is not consistently reliable. The key insight is that human gaze direction is inherently defined in the camera coordinate system. Flipping an image affects the camera coordinate system itself, meaning the gaze label should be adjusted accordingly, i.e., flipping should be performed in the camera coordinate system. However, since the 3D gaze estimation network undergoes updates during fine-tuning, it is shifted into an unknown coordinate system. This change in coordinate systems disrupts the alignment of gaze labels, leading to model collapse.

To solve this problem, we aim to learn a transformation $\mathcal{T}$ that maps the unknown coordinate system to the camera coordinate system. Our idea is to identify anchors in the two coordinate systems, allowing us to model this problem as an alignment task. Specifically, we denote the initial pre-trained 3D gaze estimation network as $\mathcal{H}_{3D}(\beta_0)$ and the fine-tuned network as $\mathcal{H}_{3D}(\beta_k)$. Notably, $\mathcal{H}_{3D}(\beta_0)$ is pre-trained in the camera coordinate system, while $\mathcal{H}_{3D}(\beta_k)$ operates in the unknown coordinate system. Therefore, we can acquire the anchors $\{\mathcal{H}_{3D}(\mathbf{I}_i;\beta_0)\}_{i = 1}^N$ and $\{\mathcal{H}_{3D}(\mathbf{I}_i;\beta_k)\}_{i = 1}^N$ using the training set $\mathcal{D}$.
The alignment problem can then be formulated as:

$$
\min_{\mathcal{T}} \sum_{i=1}^{N} \left\| \mathcal{T}\mathcal{H}_{3D}\left(\mathbf{I}_{i}; \beta_{k}\right) - \mathcal{H}_{3D}\left(\mathbf{I}_{i}; \beta_{0}\right) \right\|_{2} . \tag{8}
$$

Notably, $\mathcal{T}$ should be invertible. Therefore, we model the transformation as a rotation operation, enabling us to solve it using singular value decomposition (SVD). We have

$$
[U, S, V] = \operatorname{SVD}\left(\mathcal{H}_{3D}\left(\mathbf{I}_{i}; \beta_{k}\right) * \mathcal{H}_{3D}\left(\mathbf{I}_{i}; \beta_{0}\right)^{T}\right), \tag{9}
$$

and $\mathcal{T} = VU^{T}$. Consequently, we can update Eq. 6 as follows:

$$
\mathcal{Q}(\mathbf{p}) = \mathcal{P}\left(\mathcal{T}^{-1} * \mathcal{F}\left(\mathcal{T} * \mathcal{P}^{-1}(\mathbf{p})\right)\right), \tag{10}
$$

where $*$ represents matrix multiplication. $\mathcal{Q}(\mathbf{p})$ is also dynamic and re-computed during each iteration, since the coordinate system continues to change throughout fine-tuning.

The objective function is denoted as

$$
\min_{\beta, \mathbf{r}, \mathbf{t}} \sum_{i=1}^{N} \left\| \mathcal{H}_{2D}\left(\mathbf{I}_{i}^{\prime}, \mathbf{o}_{i}\right) - \mathcal{Q}(\mathbf{p}_{i}) \right\|_{1} , \tag{11}
$$

where $\mathbf{I}_{i}^{\prime}$ is the flipped image of $\mathbf{I}_{i}$.

# 3.4. Minimize Uncertainty across Jittered Images

We also perform color jitter and minimize uncertainty across jittered images to enhance model robustness. Given a face image $\mathbf{I}$, we apply color jitter $\mathcal{J}$ to create a set of augmented images, $\{\mathcal{J}_k(\mathbf{I})\}_{k=1}^K$, where $K$ is the number of random color jitters performed. We minimize the variance in the gaze predictions for this set.
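Returning briefly to Eqs. 8-10 of Section 3.3: the alignment of Eq. 8 is an orthogonal Procrustes problem, so $\mathcal{T}$ has the closed-form SVD solution of Eq. 9. A minimal NumPy sketch under our own naming (the determinant check is the standard Kabsch sign fix for proper rotations, which the paper does not spell out):

```python
import numpy as np

def align_rotation(G_k, G_0):
    """Solve min_T sum_i ||T g_i^k - g_i^0||_2 over rotations (Eqs. 8-9).
    G_k, G_0: (3, N) stacks of gaze vectors from the fine-tuned and the
    initial 3D network, used as anchors."""
    U, _, Vt = np.linalg.svd(G_k @ G_0.T)   # Eq. 9: SVD of H(beta_k) H(beta_0)^T
    T = Vt.T @ U.T                          # T = V U^T
    if np.linalg.det(T) < 0:                # Kabsch sign fix: keep T a proper rotation
        Vt[-1] *= -1.0
        T = Vt.T @ U.T
    return T

def flip_pseudo_label_3d(g, T):
    """Core of Eq. 10 between the two projections: map the 3D gaze into the
    camera system with T, negate the x-component (the flip F), map back."""
    F = np.diag([-1.0, 1.0, 1.0])
    return T.T @ (F @ (T @ g))              # T^{-1} = T^T since T is a rotation
```

Wrapping `flip_pseudo_label_3d` with $\mathcal{P}$ and $\mathcal{P}^{-1}$ yields the full $\mathcal{Q}(\mathbf{p})$ of Eq. 10.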
Specifically, we pass each augmented image through the model, obtaining predictions $\{\mathcal{H}_{2D}(\mathcal{J}_k(\mathbf{I}))\}_{k=1}^K$. We calculate the centroid of these predictions and minimize the distance between each prediction and the centroid. We also minimize the distance between the predictions and the ground truth. To stabilize training, we introduce a temporal weight $\tau = \frac{t - 1}{t}$ for the variance loss, starting with a smaller weight that increases over epoch $t$. The loss is defined as

$$
\mathcal{L}_{unc} = \frac{1}{NK} \sum_{i=1}^{N} \sum_{k=1}^{K} \Bigl( \bigl\| \mathcal{H}_{2D}\left(\mathcal{J}_{k}(\mathbf{I}_{i})\right) - \mathbf{p}_{i} \bigr\|_{1} + \tau \Bigl\| \mathcal{H}_{2D}\left(\mathcal{J}_{k}(\mathbf{I}_{i})\right) - \frac{1}{K} \sum_{j=1}^{K} \mathcal{H}_{2D}\left(\mathcal{J}_{j}(\mathbf{I}_{i})\right) \Bigr\|_{2} \Bigr) . \tag{12}
$$

The temporal weight mitigates the risk of model collapse, as we observe that the second term of $\mathcal{L}_{unc}$ tends to be large at the start of training, and a high initial weight on this term can lead to instability. Additionally, we use the L2 norm for the second term since it assigns greater weight to outliers.

# 3.5. Implementation Details

Our model is optimized using the loss functions defined in Eq. 5, Eq. 11, and Eq. 12, with corresponding weights of 1, 0.4, and 0.25, respectively. For training, we set $N = 10$, meaning the training set contains 10 samples, and $K = 4$, meaning we apply four random color jitter augmentations per iteration. The model is implemented in PyTorch and trained on an NVIDIA RTX 3090. We train for 80 epochs, setting the learning rate initially to 0.001, with a 5-epoch warmup phase. After 60 epochs, the learning rate decays to 0.0005.
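For a single image, the uncertainty term of Eq. 12 reduces to a few lines. A sketch under our own naming; batching over the $N$ training images and the epoch schedule for $\tau$ are omitted:

```python
import numpy as np

def unc_loss_single(preds, p_gt, tau):
    """Eq. 12 for one image: preds is a (K, 2) array of predictions on K
    color-jittered copies, p_gt the 2D label, tau the temporal weight (t-1)/t."""
    centroid = preds.mean(axis=0)                   # (1/K) sum_j H_2D(J_j(I))
    l1 = np.abs(preds - p_gt).sum(axis=1)           # L1 distance to the ground truth
    l2 = np.linalg.norm(preds - centroid, axis=1)   # L2 distance to the centroid
    return float((l1 + tau * l2).mean())            # average over the K jitters
```

For example, with `preds = [[1, 0], [3, 0]]`, `p_gt = [2, 0]`, and `tau = 0.5`, the centroid is `[2, 0]`, each copy contributes `1 + 0.5 * 1`, and the loss is `1.5`.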
We use GazeTR [8] (ResNet18 + 6-layer transformer) pretrained on Gaze360 [22] as the basic 3D model. Please refer to the supplementary material for more details.

# 4. Experiment

# 4.1. Setup

In this paper, we propose a cross-task few-shot 2D gaze estimation task. We first build the evaluation benchmark.

Datasets: We evaluate methods on three datasets: MPIIGaze [37], EVE [30], and GazeCapture [24]. These datasets were collected on different devices, including laptops, desktop computers, and mobile devices. By assessing performance across these datasets, we demonstrate the generalization capability of methods across various devices.

Data Preprocessing: Image normalization [13] is usually used to enhance 3D gaze estimation performance. In our work, we utilize the normalized images provided by the MPIIGaze and EVE datasets, and implement the method of [39] for normalizing GazeCapture. Note that the normalization transforms the 3D gaze by a rotation matrix. Although our work does not use the 3D label, the predicted 3D gaze should be transformed back for projection. Furthermore, the MPIIGaze dataset augments 3D gaze estimation data by flipping images, which is not applicable to 2D gaze estimation. We exclude the flipped images for consistency. The EVE dataset provides videos along with corresponding gaze trajectories. We sample one frame for every 20 frames to construct the benchmark. We sample 20 subjects from the GazeCapture dataset, ensuring that each has at least 500 images. We clean the dataset to remove images without a visible face. Notably, four of the 20 subjects used a tablet for data collection, while the rest used phones. Please refer to the supplementary materials for more details.

Evaluation Metric: We perform person-specific evaluation and report the average performance across subjects for comparison. Performance is measured as the Euclidean distance

Table 2. Quantitative evaluation. Our method achieves the best results among the compared methods.
We also report the performance of 2D gaze estimation methods in the second row for reference. + +
| Method | Training Samples | EVE [30] | MPIIGaze [36] | GazeCapture [24] |
| --- | --- | --- | --- | --- |
| iTracker [24] | All dataset | - | - | 26.8 |
| EyeNet [30] | All dataset | 49.7 | - | - |
| Full-Face [37] | All dataset | 38.6 | 42.0 | - |
| AFF-Net [4] | All dataset | - | 39.0 | 19.6 |
| EFE [1] | All dataset | 38.5 | 38.9 | 20.5 |
| EFE [1] | 10 | 64.9 (▼33%) | 100.2 (▼43%) | 48.5 (▼26%) |
| IVGaze [14] | 10 | 177.7 (▼75%) | 132.2 (▼57%) | 68.1 (▼47%) |
| Ours | 10 | 43.4 | 56.7 | 35.7 |
+ +(in mm) between predictions and ground truth, where lower values indicate better accuracy. + +# 4.2. Quantitative Comparison + +We first compare our method with existing approaches EFE [1] and IVGaze [14]. EFE is an end-to-end gaze estimation method that includes a projection module to convert 3D gaze predictions into 2D gaze. IVGaze utilizes a basis tri-plane for projection, followed by a lightweight transformer to refine the projection points. For a fair comparison, we re-implement both methods using the same 3D gaze estimation network and pre-trained weights as our method. Our goal is to evaluate the performance differences resulting from different projection strategies. Notably, EFE requires screen calibration for the projection; to ensure fairness, we set these screen parameters as learnable and initialize them with the same values used in our method. The results of these comparisons are presented in Table 2. + +IVGaze includes a transformer to refine projection points. While this transformer performs well when trained on the full dataset, it struggles with limited data, leading to underfitting when trained on just 10 samples. This results in poor performance on the EVE and MPIIGaze datasets, highlighting the advantage of our approach. In contrast, our method avoids the use of complex architectures that can suffer from underfitting in few-shot learning tasks. Instead, we directly model the projection process, leading to superior performance. On the other hand, EFE demonstrates reasonable performance, but our method achieves over $25\%$ improvement across all three datasets. This significant boost is attributed to our more comprehensive modelling of the projection process, which reduces fitting complexity and naturally enhances overall performance. + +We also report the performance of 2D gaze estimation methods trained on the entire dataset for reference. Note that they are not directly comparable to our method since both the training and test sets differ. 
These results are summarized in the second row of Table 2. Our method achieves similar performance using only 10 images.

Table 3. Comparison with different 3D-to-2D adaptation strategies. We directly project 3D gaze to 2D gaze using the known screen pose without fine-tuning, which shows the advantage of our learning framework. We directly learn 2D gaze from 3D gaze with an MLP, which highlights the challenges in adapting a 3D model to 2D gaze estimation. We also show the performance when the learnable parameters are set to the known pose in our method.
| Strategy | EVE | MPIIGaze | GazeCapture |
| :-- | :-: | :-: | :-: |
| Direct Projection | 80.5 | 101.9 | N/A |
| Direct Learning | 180.6 | 133.9 | 74.23 |
| Direct Learning (with $\mathbf{o}$) | 116.6 | 108.2 | 149.7 |
| Learning with Known Pose | 39.4 | 56.6 | N/A |
| Ours | 43.4 | 56.7 | 35.7 |
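The Direct Projection row in Table 3 reduces to intersecting the predicted 3D gaze ray with the screen plane. The sketch below illustrates that geometry only; the screen pose and gaze values are made up for illustration and are not from the paper.

```python
import numpy as np

def project_gaze_to_screen(origin, direction, plane_point, plane_normal):
    """Intersect the 3D gaze ray origin + s*direction with the screen plane.

    The plane is given by a point on it and its unit normal, both in the
    camera coordinate system. Returns the 3D point of regard.
    """
    denom = direction @ plane_normal
    if abs(denom) < 1e-8:
        # Gaze (nearly) parallel to the screen plane: no stable intersection.
        raise ValueError("gaze is (nearly) parallel to the screen plane")
    s = ((plane_point - origin) @ plane_normal) / denom
    return origin + s * direction

# Toy example: screen plane z = 400 mm in front of the camera.
origin = np.array([0.0, 0.0, 0.0])        # gaze origin (eye), in mm
direction = np.array([0.1, -0.05, 1.0])
direction /= np.linalg.norm(direction)    # unit gaze direction
pog = project_gaze_to_screen(origin, direction,
                             plane_point=np.array([0.0, 0.0, 400.0]),
                             plane_normal=np.array([0.0, 0.0, 1.0]))
print(pog)  # point of regard on the plane; z component is 400
```

The explicit parallel-ray guard mirrors the failure mode discussed later in the limitations: a purely geometric projection can yield extreme values when the gaze is nearly parallel to the plane.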
Table 4. We perform an ablation study to evaluate the impact of the dynamic pseudo-labeling strategy (PS-Label) and the loss minimizing uncertainty across jittered images $(\mathcal{L}_{unc})$. Both modules contribute to performance improvements.
| Proj. | PS-Label | $\mathcal{L}_{unc}$ | EVE | MPIIGaze | GazeCapture |
| :-: | :-: | :-: | :-: | :-: | :-: |
| ✓ | | | 46.6 | 60.3 | 36.8 |
| ✓ | ✓ | | 45.3 | 57.9 | 35.7 |
| ✓ | ✓ | ✓ | 43.4 | 56.7 | 35.7 |
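The uncertainty term $\mathcal{L}_{unc}$ can be sketched as a variance penalty over predictions on jittered copies of each image. The `predict` network and the Gaussian pixel jitter below are stand-ins for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(images):
    # Placeholder for the fine-tuned gaze network: maps a batch of
    # images (N, C, H, W) to 2D gaze predictions (N, 2).
    return images[:, :2].mean(axis=(2, 3))

def uncertainty_loss(images, n_jitter=4, sigma=0.01):
    # Run the model on several jittered copies of each image and penalize
    # the spread (variance) of the resulting 2D predictions.
    preds = []
    for _ in range(n_jitter):
        jittered = images + rng.normal(0.0, sigma, size=images.shape)
        preds.append(predict(jittered))
    preds = np.stack(preds)           # (n_jitter, N, 2)
    return preds.var(axis=0).mean()   # scalar uncertainty

images = rng.random((10, 3, 8, 8))
print(uncertainty_loss(images))
```

A confident model should produce nearly identical predictions for slightly perturbed inputs, so minimizing this spread acts as a label-free regularizer.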
# 4.3. Comparison with Different Adaptation Strategies

In this section, we evaluate the accuracy of different adaptation strategies for obtaining 2D gaze from 3D predictions.

Direct Projection: We directly project the 3D gaze predictions from our pre-trained 3D gaze estimation network onto the screen using the known screen pose, providing a baseline performance measure for the network. This is not performed on GazeCapture, as it lacks a reliable screen pose.

Direct Learning: We retain the architecture of the 3D gaze network and directly fine-tune it using the 2D annotations. Additionally, we concatenate the gaze origin $\mathbf{o}$ with the predicted gaze and use an MLP to map them to 2D gaze predictions. We then fine-tune this extended network and report the performance as Direct Learning (with $\mathbf{o}$).

Learning with Known Pose: Our method assumes the screen pose is unavailable. In this strategy, we set the learnable parameters to the ground-truth screen pose.

The results are shown in Table 3. The Direct Projection method struggles to perform effectively on the EVE and MPIIGaze datasets without fine-tuning. However, integrating it into our framework yields over $40\%$ improvement, demonstrating the critical role of our learning framework. The Direct Learning strategy, on the other hand, fails to achieve reasonable performance due to the substantial domain gap between 3D and 2D gaze estimation. Comparing it with Direct Projection, the learning strategy does not show any performance gains, which highlights the challenge of adapting 3D gaze models to 2D tasks. Even when the gaze origin is included as an additional feature, the limited training data makes it challenging for the model to learn the complex mapping. In contrast, our framework leverages physics-based differentiable projection, enabling it to achieve superior performance. The Learning with Known Pose method outperforms our method due to its access to the known screen pose, highlighting the importance of accurate screen pose information for 2D gaze estimation.

![](images/d26b66c2d4209a9d21918d84c2702163bc9b38081e7ae940a7f005bfcc2c5138.jpg)
Figure 4. We compare the performance across different pseudo-labelling strategies. The red bar represents the projection without pseudo-labelling, serving as a baseline for comparison. We evaluate our method without the transformation $\mathcal{T}$. The unreliable pseudo-labels lead to a significant performance drop on MPIIGaze and EVE. Interestingly, omitting $\mathcal{T}$ led to improved results on the GazeCapture dataset. We found that this was because the initial screen pose happened to be the same as the actual screen pose.

# 4.4. Ablation Study

We perform an ablation study to demonstrate the contribution of each module in our work. We first evaluate the performance when only the projection module is added to the pre-trained 3D gaze estimation network and fine-tuned. The results are shown as Proj. in Table 4. Compared to the results in Table 3, the projection module provides a significant performance improvement, as it explicitly models the projection process and thus effectively bridges the gap between 3D and 2D gaze estimation. Next, we introduce our dynamic pseudo-labeling strategy and minimize the uncertainty across jittered images. Both mechanisms bring performance improvements across all datasets.

The dynamic pseudo-labeling strategy is a key contribution of our work. To better understand its impact, we conduct a detailed comparison, as shown in Figure 4. We perform an ablation on the learned transformation $\mathcal{T}$ in our strategy. The results show a significant performance drop on the MPIIGaze and EVE datasets without $\mathcal{T}$, as unreliable pseudo-labels can cause model collapse during learning, especially with small training dataset sizes.
Interestingly, we observe improved performance on the GazeCapture dataset without using $\mathcal{T}$. The authors of GazeCapture create a unified prediction space for 2D gaze, centered at the phone camera position. Our model initializes the screen pose as $\mathbf{t} = (0,0,0)$, making the initial pose closely approximate the real one. However, it is important to note that such cases are uncommon in real-world scenarios. Our method first converts 2D gaze to 3D space and learns $\mathcal{T}$ to align this space with the camera coordinate system. When the screen pose aligns exactly with the ground truth, the 3D space already corresponds to the camera coordinate system. To establish the alignment, we use the predictions from the 3D gaze network as anchors for the camera coordinate system, which may introduce some bias. Nonetheless, our method demonstrates performance improvements compared to methods without pseudo-labeling.

We also implement the existing method RAT [5], which assigns pseudo-labels to rotated images. We convert 2D gaze into 3D gaze using the learnable screen parameters and perform RAT to augment training. RAT does not bring any performance improvement over the baseline.

![](images/27f1b710fe66bb8a6c176f4a9a55a691e8d1eba27c98fe083a6b78b580fbf931.jpg)
Figure 5. Performance with different numbers of training images.

| #Training Images | Speed (sec/epoch) |
| :-: | :-: |
| 3 | 0.89 |
| 5 | 0.90 |
| 10 | 0.91 |
| 20 | 0.96 |
| 50 | 1.16 |

Table 5. The model training time with different numbers of training samples.

# 4.5. Different Numbers of Training Images

In this section, we evaluate the effect of the number of training images on model performance. We experiment with the number of training images set to 3, 5, 10, 20, and 50. The performance is assessed across all three datasets, with results depicted in Figure 5. As shown, increasing the number of training images consistently improves the model performance.

Additionally, we measure the model training time with varying numbers of training images, as summarized in Table 5. On average, each epoch takes approximately 0.9 seconds. Since our method does not require a large dataset, all images can be efficiently processed within a single epoch. With a total of 80 epochs, the complete training time is approximately 1.2 minutes. Notably, this timing was measured in a plain Python environment and could be reduced further with specific optimizations.
This demonstrates significant real-time application potential for our method.

# 4.6. Repeatability Experiment

In this section, we conduct a robustness evaluation by training our method 10 times using different training samples from MPIIGaze to assess the impact of sample variability on model performance. We evaluate the performance on all 15 subjects for each trial and report the performance distribution. The results are visualized as a boxplot in Figure 6. The horizontal axis represents each of the 10 trials, and each trial contains the performance of 15 subjects. The box depicts the interquartile range (25% to 75%), while the error bars cover the entire performance distribution. The triangle symbol indicates the average performance, and the black line represents the median performance. The average performance across all 10 trials is 55.6, which is slightly better than our previously reported value of 56.7. These results demonstrate the stability and robustness of our method despite variations in the training samples.

![](images/9e3293c40819960853ed90698fd3be3017c2c94af4b4e0bb8ab1923f1bc00834.jpg)
Figure 6. We train our method 10 times using different image samples from MPIIGaze for robustness evaluation. The horizontal axis corresponds to each of the 10 trials, while each bar shows the accuracy distribution across 15 subjects. The box depicts the interquartile range (25% to 75%), while the error bars cover the entire accuracy distribution. The average accuracy across the 10 trials is 55.6, demonstrating the stability and robustness of our method.

# 4.7. The Trajectories of Pseudo-Labels

Our method contains a dynamic pseudo-labeling strategy to assign pseudo 2D labels to flipped images. To gain deeper insights into this process, we visualize the trajectories of the pseudo-labels over the course of 80 epochs in Figure 7.
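The core idea behind pseudo-labels for flipped images can be sketched as follows. The function names, the identity stand-in for $\mathcal{T}$, and the toy predictions are all illustrative assumptions; the actual method derives $\mathcal{T}$ and the labels from the fine-tuned network.

```python
import numpy as np

def flip_gaze_3d(gaze):
    # Horizontally flipping a face image mirrors the gaze direction
    # about the vertical plane: negate the x component.
    flipped = gaze.copy()
    flipped[..., 0] *= -1.0
    return flipped

def make_pseudo_labels(pred_3d, transform):
    """Assign pseudo 3D gaze labels to flipped images.

    pred_3d:   current network predictions for the original images, (N, 3)
    transform: a 3x3 matrix standing in for the learned alignment T
    """
    return flip_gaze_3d(pred_3d) @ transform.T

preds = np.array([[0.2, -0.1, 0.97], [-0.3, 0.05, 0.95]])
identity = np.eye(3)
print(make_pseudo_labels(preds, identity))
```

Because the pseudo-labels are recomputed from the current predictions, they move with the network as it improves, which is what the trajectories in Figure 7 visualize.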
In addition, we compare the effect of our transformation strategy by plotting the pseudo-label positions without the transformation, i.e., the difference between Eq. 6 and Eq. 10. Both approaches share the same initial pseudo-labels. For reference, we also compute the ground-truth labels for the flipped images using the calibrated screen pose.

As shown in Figure 7, the initial pseudo-labels have a significant offset from the ground truth. However, our method dynamically updates the pseudo-labels based on the fine-tuned network, progressively aligning them closer to the ground truth with each iteration. By the end of training, the pseudo-labels have only minimal offsets from the ground truth, demonstrating the effectiveness of our approach. In contrast, the strategy without the transformation fails to produce reliable pseudo-labels, leading to consistently large offsets from the ground truth.

![](images/95039defd2890f87ebfb2a6521782224af4e0dc4869a1129f4e75c632744bcd3.jpg)
Figure 7. We visualize four trajectories of the pseudo-labels in our dynamic pseudo-labeling strategy. The ground truth for flipped images is computed using the known screen pose. It is evident that our method progressively aligns the pseudo-labels closer to the ground truth. Additionally, we plot the pseudo-labels without applying $\mathcal{T}$ in our strategy, which fails to produce reliable pseudo-labels, resulting in significant deviations from the ground truth.

# 5. Conclusion and Discussion

In this work, we introduce a novel cross-task few-shot 2D gaze estimation method. By leveraging few-shot 2D samples, we adapt a 3D gaze model to 2D gaze estimation on unseen devices. Since the 3D gaze network is trained in 3D space without being tied to specific devices, it theoretically maintains robust performance across different platforms. Our experiments validate this with results on three datasets.
Moreover, the adaptation is rapid and source-free, significantly broadening its practical applicability.

Limitation: Our method infers 2D gaze through mathematical derivation within the differentiable projection module. While this approach enhances model interpretability and reliability, it can occasionally result in failure cases. For instance, when the input images lack visible faces, the predicted 3D gaze can become erratic. In such scenarios, the intersection point between the 3D gaze vector and the screen plane may deviate significantly from the ground truth. This issue arises because, unlike neural networks that constrain outputs to a plausible range, a purely mathematical projection may yield extreme values, e.g., when the 3D gaze is nearly parallel to the plane. Although these cases can be easily flagged in real-world applications, they may introduce biases during evaluation.

Future Directions: In this paper, we address the challenge of 2D pseudo-labeling. However, several open questions remain. For instance, can we leverage unlabeled face images to further enhance performance? Traditional methods often utilize a standard calibration pattern; could we incorporate a similar strategy? It is worth noting that our approach requires collecting samples initially, akin to a calibration process. We argue that this step is essential, as it provides the necessary anchors for adapting to unseen devices. Nonetheless, exploring user-unaware calibration techniques is a promising direction for future research.

# 6. Acknowledgments

This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. RS-2024-00397615, Development of an automotive software platform for Software-Defined-Vehicle (SDV) integrated with an AI framework required for intelligent vehicles), as well as the Ramsay Research Fund, UK.
+ +# References + +[1] Haldun Balim, Seonwook Park, Xi Wang, Xucong Zhang, and Otmar Hilliges. Efe: End-to-end frame-to-gaze estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2688-2697, 2023. 2, 3, 6 +[2] Yiwei Bao and Feng Lu. From feature to gaze: A generalizable replacement of linear layer for gaze estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1409-1418, 2024. 2 +[3] Yiwei Bao and Feng Lu. Unsupervised gaze representation learning from multi-view face images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1419-1428, 2024. 2 +[4] Yiwei Bao, Yihua Cheng, Yunfei Liu, and Feng Lu. Adaptive feature fusion network for gaze tracking in mobile tablets. In International Conference on Pattern Recognition (ICPR), 2020. 1, 2, 6 +[5] Yiwei Bao, Yunfei Liu, Haofei Wang, and Feng Lu. Generalizing gaze estimation with rotation consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4207-4216, 2022. 7 +[6] Moinak Bhattacharya, Shubham Jain, and Prateek Prasanna. Gazeradar: A gaze and radiomics-guided disease localization framework. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 686-696. Springer, 2022. 1 +[7] Xin Cai, Jiabei Zeng, Shiguang Shan, and Xilin Chen. Source-free adaptive gaze estimation by uncertainty reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22035-22045, 2023. 2 +[8] Yihua Cheng and Feng Lu. Gaze estimation using transformer. ICPR, 2022. 1, 2, 5 +[9] Yihua Cheng and Feng Lu. Dvgaze: Dual-view gaze estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 20632-20641, 2023. +[10] Yihua Cheng, Shiyao Huang, Fei Wang, Chen Qian, and Feng Lu. 
A coarse-to-fine adaptive network for appearance-based gaze estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 2
[11] Yihua Cheng, Xucong Zhang, Feng Lu, and Yoichi Sato. Gaze estimation by exploring two-eye asymmetry. IEEE Transactions on Image Processing, 29:5259-5272, 2020. 2

[12] Yihua Cheng, Yiwei Bao, and Feng Lu. Puregaze: Purifying gaze feature for generalizable gaze estimation. AAAI, 2022. 2
[13] Yihua Cheng, Haofei Wang, Yiwei Bao, and Feng Lu. Appearance-based gaze estimation with deep learning: A review and benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 1, 2, 3, 4, 5
[14] Yihua Cheng, Yaning Zhu, Zongji Wang, Hongquan Hao, Yongwei Liu, Shiqing Cheng, Xi Wang, and Hyung Jin Chang. What do you see in vehicle? comprehensive vision solution for in-vehicle gaze estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 3, 6
[15] Brendan David-John, Candace Peacock, Ting Zhang, T. Scott Murdison, Hrvoje Benko, and Tanya R. Jonker. Towards gaze-based prediction of the intent to interact in virtual reality. In ACM Symposium on Eye Tracking Research and Applications, 2021. 3
[16] Amogh Gudi, Xin Li, and Jan van Gemert. Efficiency in real-time webcam gaze tracking. In Computer Vision - ECCV 2020 Workshops, pages 529-543, Cham, 2020. Springer International Publishing. 2
[17] Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei, and Stan Z Li. Towards fast, accurate and stable 3d dense face alignment. In The European Conference on Computer Vision, 2020. 3
[18] Dan Witzner Hansen and Qiang Ji. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3):478-500, 2009. 1
[19] Zhe He, Adrian Spurr, Xucong Zhang, and Otmar Hilliges. Photo-realistic monocular gaze redirection using generative adversarial networks.
In The IEEE International Conference on Computer Vision, 2019. 2 +[20] Sinh Huynh, Rajesh Krishna Balan, and JeongGil Ko. imon: Appearance-based gaze tracking system on mobile devices. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 5 (4), 2022. 2 +[21] Swati Jindal, Mohit Yadav, and Roberto Manduchi. Spatiotemporal attention and gaussian processes for personalized video gaze estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 604-614, 2024. 2 +[22] Petr Kellnhofer, Adria Recasens, Simon Stent, Wojciech Matusik, and Antonio Torralba. Gaze360: Physically unconstrained gaze estimation in the wild. In The IEEE International Conference on Computer Vision, 2019. 5 +[23] Muhammad Qasim Khan and Sukhan Lee. Gaze and eye tracking: Techniques and applications in adas. Sensors, 19 (24), 2019. 1 +[24] Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Suchendra Bhandarkar, Wojciech Matusik, and Antonio Torralba. Eye tracking for everyone. In The IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1, 2, 5, 6 +[25] Erik Lindén, Jonas Sjöstrand, and Alexandre Proutiere. Learning to personalize in appearance-based gaze tracking. + +In The IEEE International Conference on Computer Vision Workshops, pages 1140-1148, 2019. 2 +[26] Katerina Mania, Ann McNamara, and Andreas Polychronakis. Gaze-aware displays and interaction. In ACM SIGGRAPH 2021 Courses, pages 1-67, 2021. 1 +[27] Omar Namnakani, Penpicha Sinrattanavong, Yasmeen Abdrabou, Andreas Bulling, Florian Alt, and Mohamed Khamis. Gazecast: Using mobile devices to allow gaze-based interaction on public displays. In Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, New York, NY, USA, 2023. Association for Computing Machinery. 2 +[28] A. Palazzi, D. Abati, s. Calderara, F. Solera, and R. Cucchiara. Predicting the driver's focus of attention: The dr(eye)ve project. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(7):1720-1733, 2019. 1 +[29] Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Otmar Hilliges, and Jan Kautz. Few-shot adaptive gaze estimation. In The IEEE International Conference on Computer Vision, 2019. 2 +[30] Seonwook Park, Emre Aksan, Xucong Zhang, and Otmar Hilliges. Towards end-to-end video-based eye-tracking. In The European Conference on Computer Vision, pages 747-763. Springer, 2020. 5, 6 +[31] Anjul Patney, Marco Salvi, Joohwan Kim, Anton Kaplanyan, Chris Wyman, Nir Benty, David Luebke, and Aaron Lefohn. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics (TOG), 35(6):1-12, 2016. 1 +[32] Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst. Exploring natural eye-gaze-based interaction for immersive virtual reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pages 36-39, 2017. 3 +[33] Nachiappan Valliappan, Na Dai, Ethan Steinberg, Junfeng He, Kantwon Rogers, Venky Ramachandran, Pingmei Xu, Mina Shojaiezadeh, Li Guo, Kai Kohlhoff, et al. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature communications, 11(1): 4553, 2020. 2 +[34] Sheng Wang, Xi Ouyang, Tianming Liu, Qian Wang, and Dinggang Shen. Follow my eye: Using gaze to supervise computer-aided diagnosis. IEEE Transactions on Medical Imaging, 41(7):1688-1698, 2022. 1 +[35] Pengwei Yin, Jingjing Wang, Guanzhong Zeng, Di Xie, and Jiang Zhu. Lg-gaze: Learning geometry-aware continuous prompts for language-guided gaze estimation. In The European Conference on Computer Vision, 2024. 2 +[36] Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. Appearance-based gaze estimation in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015. 1, 2, 6 +[37] Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 
It's written all over your face: Full-face appearance-based gaze estimation. In The IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2299-2308, 2017. 5, 6 +[38] Xucong Zhang, Michael Xuelin Huang, Yusuke Sugano, and Andreas Bulling. Training person-specific gaze estimators + +from user interactions with multiple devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. 1 +[39] Xucong Zhang, Yusuke Sugano, and Andreas Bulling. Revisiting data normalization for appearance-based gaze estimation. In Proceedings of the ACM Symposium on Eye Tracking Research & Applications, 2018. 5 +[40] Xucong Zhang, Yusuke Sugano, and Andreas Bulling. Evaluation of appearance-based methods and implications for gaze-based applications. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. 1, 3 \ No newline at end of file diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/images.zip b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..47de481ec203814d7d4277e0a2e0bc99cc0d273f --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c4df945684fa7e887e7dff7325f09ac54d8271ec510317cba0afbf9112a6c09 +size 390799 diff --git a/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/layout.json b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f463a78a0475ba92630cce38991dd94eda781670 --- /dev/null +++ b/CVPR/2025/3D Prior Is All You Need_ Cross-Task Few-shot 2D Gaze Estimation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:480ef0dcf4e6b1b5c0c97ae8074428f54ddd0f35bd4b8830d433f37fc05bbf80 +size 375525 diff --git a/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_content_list.json b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..affeae09688d5d088fba64618785274772a47cb6 --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e953e0d6bfd7954c2923df5b6ec43abeeacb316b00584ae1fd835ef87bf174c +size 72546 diff --git a/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_model.json b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..11aad9dde5724474c41f1d4cf9303c4d33966b3a --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32c8bf330e798e152b1429460dba34b3233b50543735353a8c59395a9732f029 +size 89380 diff --git a/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_origin.pdf b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8edb1a032f667aca42bc0bc970ba410e11277ef4 --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/5f61e07c-5e93-420e-a919-2f54a9dc803a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b993d7615df9a1c16d6514084c9e79a6600f070f0ae09d96ec1d43adab2f1888 +size 9362812 diff --git a/CVPR/2025/3D Student Splatting and Scooping/full.md b/CVPR/2025/3D Student Splatting and Scooping/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..cb5b152945153cbb86765f2d5ee7fbf3d1489ff --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/full.md @@ -0,0 +1,317 @@

# 3D Student Splatting and Scooping

Jialin Zhu1, Jiangbei Yue2, Feixiang He1, He Wang1,3*
1University College London, UK
2University of Leeds, UK
3AI Centre, University College London, UK

# Abstract

Recently, 3D Gaussian Splatting (3DGS) has provided a new framework for novel view synthesis and sparked a new wave of research in neural rendering and related applications. As 3DGS is becoming a foundational component of many models, any improvement on 3DGS itself can bring huge benefits. To this end, we aim to improve the fundamental paradigm and formulation of 3DGS. We argue that, as an unnormalized mixture model, it need be neither Gaussian nor splatting. We subsequently propose a new mixture model consisting of flexible Student's t distributions, with both positive (splatting) and negative (scooping) densities. We name our model Student Splatting and Scooping, or SSS. While providing better expressivity, SSS also poses new challenges in learning. Therefore, we also propose a new principled sampling approach for optimization. Through exhaustive evaluation and comparison across multiple datasets, settings, and metrics, we demonstrate that SSS outperforms existing methods in terms of quality and parameter efficiency, e.g. achieving matching or better quality with similar numbers of components, and obtaining comparable results while reducing the component number by as much as $82\%$ .

# 1. Introduction

Presented initially as a neural rendering technique, 3D Gaussian Splatting (3DGS) [13] has quickly become a versatile component in various systems, e.g. geometry reconstruction and autonomous driving [3, 6]. Given its importance as a foundational component, researchers have very recently started to investigate possible alternatives to the basic framework of 3DGS, e.g.
more expressive distributions instead of Gaussians [8, 18] and more principled optimization [14], all of which focus on improving the model expressivity. Our research is among these attempts.

The key to 3DGS' success lies in its name: Gaussian and splatting. 3DGS can be seen as an (unnormalized) Gaussian mixture, which provides two advantages. As a general-purpose distribution, Gaussians can approximate an arbitrary density function, hence good expressivity. Also, Gaussians have analytical forms under e.g. affine transformation, enabling quick evaluation in 3D-2D projection and thus quick learning from images. Meanwhile, splatting provides a flexible way of identifying only the Gaussians relevant to an image for learning. Despite the success, the framework can still suffer from insufficient expressivity [16, 34] and low parameter efficiency, i.e. needing a large number of components [17, 28]. Therefore, we re-examine the three key components of 3DGS: Gaussians, splatting, and the optimization. Since its underlying principle is essentially to fit a 3D mixture model to a radiance field, we argue it need not be restricted to Gaussians or splatting.

To this end, we propose a simple yet effective generalization of 3DGS. We first replace Gaussians with Student's t distribution with one degree of freedom, referred to simply as the t-distribution. Similar to Gaussians, the t-distribution also enjoys good properties such as analytical forms under affine transformation. More importantly, the t-distribution can be seen as a generalization of Gaussians and is therefore more expressive. The t-distribution has a control parameter for the tail fatness, representing distributions ranging from the Cauchy distribution to the Gaussian distribution, and any distribution in between. Compared with Gaussians, the Cauchy distribution is fat-tailed, i.e. a single Cauchy can cover a larger area with comparatively higher densities than a Gaussian.
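The tail behaviour can be made concrete with a quick 1D numeric check of the two limiting kernels (an illustrative sketch, not the paper's code):

```python
import math

def gaussian_kernel(x):
    # Unnormalized 1D Gaussian kernel.
    return math.exp(-0.5 * x * x)

def cauchy_kernel(x):
    # Unnormalized 1D Student's t kernel with nu = 1 (Cauchy): (1 + x^2)^-1.
    return 1.0 / (1.0 + x * x)

# Far from the center, the Cauchy kernel retains much more density:
for x in [1.0, 3.0, 5.0]:
    print(x, gaussian_kernel(x), cauchy_kernel(x))
```

At five standard units from the center, the Gaussian kernel has decayed to roughly 1e-6 while the Cauchy kernel is still around 0.04, which is why a single fat-tailed component can cover an area that would require several Gaussians.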
Furthermore, by making this control parameter learnable, we learn components with a wide range of varying tail thicknesses.

Next, we extend the splatting scheme, which only operates in the positive density space. Inspired by mixture models with negative components [20], we propose to employ both positive and negative components to splat (adding) and scoop (subtracting) densities from the model. This leads to more complex mathematical forms than 3DGS, but we derive their closed-form gradients for learning.

Finally, given the increased model complexity, optimization methods based on naive stochastic gradient descent become insufficient due to parameter coupling. Therefore, we propose a principled sampling scheme based on Stochastic Gradient Hamiltonian Monte Carlo (SGHMC).

We refer to our model as Student Splatting and Scooping (SSS). SSS is evaluated on multiple datasets and compared with existing methods. Experiments show that SSS achieves higher quality, often with fewer components, demonstrating greater expressivity and higher parameter efficiency. Formally, our contributions include:

- A new model named Student Splatting and Scooping (SSS), which is highly expressive and parameter efficient.
- A new mixture model with flexible components learned from a set of distribution families for neural rendering.
- A mixture model with negative components, which extends the learning into the negative density space.
- A principled sampling approach to tackle parameter coupling during learning.

# 2. Related Work

3D Reconstruction and Novel View Synthesis 3D reconstruction and novel view synthesis have been long-standing research topics in computer vision. Traditional methods mainly include Multi-View Stereo (MVS) [27] and Structure from Motion (SFM) [30]. Recently, the advent of deep learning has brought important changes to the field.
In particular, the techniques based on Neural Radiance Fields (NeRF) [22] and 3D Gaussian Splatting (3DGS) [13] have set new state-of-the-art (SOTA) benchmarks.

NeRF methods NeRF [22] proposes to implicitly encode the radiance field of a 3D object/scene into a neural network and renders the 3D geometry and textures through a continuous volume rendering function. Since then, a large number of NeRF-based methods have been proposed, namely NeRF++ [38], Mip-NeRF [2] and Mip-NeRF360 [2] to improve rendering quality, Plenoxels [7] and Instant-NGP [23] to accelerate NeRF training, D-NeRF [25] to extend NeRFs to dynamic scenes, DreamFusion [24] and Zero-1-to-3 [19] to employ it for text-to-3D generation, etc. However, the biggest drawback of NeRF is that the ray casting process for rendering is time-consuming. Despite the effort to improve its rendering efficiency, e.g. SNeRG [10] and MobileNeRF [5], it still cannot be used for real-time rendering in most cases.

Splatting methods 3DGS [13] solves the above problem for real-time rendering by replacing volume rendering with a differentiable rasterization method, which achieves SOTA rendering quality. 3DGS uses 3D Gaussians as the primitive for the splatting method [40, 41]. It directly projects 3D Gaussians onto the 2D image plane through view/projective transformations for rasterization. Similar to NeRFs, prolific follow-up research has been conducted based on 3DGS. GS++ [12] and Mip-Splatting [36] aim to improve rendering quality, 4D Gaussian Splatting [33] and Deformable 3D Gaussians [35] extend 3DGS to dynamic scenes, and DreamGaussian [29] employs 3DGS for text-to-3D tasks. One particular line of research is to improve the fundamental paradigm of 3DGS. This includes FreGS [37], 3DGS-MCMC [14] and Bulò et al.
[26], which optimize the training process and adaptive density control in 3DGS, and Scaffold-GS [21] and Implicit Gaussian Splatting [34], which combine grid representations with 3DGS for better rendering quality. More recently, there is also research exploring primitives other than 3D Gaussians. 2DGS [11] obtains better surface reconstruction by changing the primitives from 3D Gaussians to 2D Gaussians for aligning with the 3D scene. GES [8] uses a generalized exponential kernel to increase the expressiveness of primitives and reduce memory cost. 3DHGS [18] decomposes one Gaussian into two half-Gaussians to obtain asymmetry and better expressivity.

Our research is among the few recent efforts in improving the fundamental formulation of 3DGS. Different from them, we propose to use a more expressive and flexible distribution, the 3D Student's t distribution, as the basic primitive. In addition, we also use both positive and negative densities to extend the optimization into the negative density space for better representation. Finally, we propose a principled sampling approach for learning, deviating from most of the above research.

# 3. Methodology

# 3.1. Preliminaries: 3DGS as a mixture model

3DGS essentially fits an (unnormalized) 3D Gaussian mixture model to a radiance field [13]:

$$
P(x) = \sum w_i G_i(x), \quad G(x) = e^{-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)} \tag{1}
$$

where $w_{i} > 0$ and $\mu$ is the center position of the Gaussian. $\Sigma \in \mathbb{R}^{3\times 3}$ is the covariance matrix, parameterized by a scaling matrix $S$ and a rotation matrix $R$ to maintain its positive semi-definiteness: $\Sigma = R S S^T R^T$ . Additionally, every 3D Gaussian is associated with an opacity $o\in [0,1]$ and a color $c\in \mathbb{R}^{27}$ , which is represented by spherical harmonics and is view-dependent [13]. $w_{i}$ is determined by $o$ , $c$ , and compositing values after projection.
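Eq. (1) can be evaluated directly; the sketch below builds $\Sigma = R S S^T R^T$ from a rotation and per-axis scales and sums the unnormalized components. The weights, means, and scales are toy values for illustration, not learned parameters.

```python
import numpy as np

def gaussian_component(x, mu, R, s):
    # Sigma = R S S^T R^T with S = diag(s); evaluate the unnormalized
    # kernel e^{-1/2 (x-mu)^T Sigma^{-1} (x-mu)} of Eq. (1).
    sigma = R @ np.diag(s**2) @ R.T
    d = x - mu
    return np.exp(-0.5 * d @ np.linalg.solve(sigma, d))

def mixture_density(x, weights, mus, Rs, scales):
    # Unnormalized mixture P(x) = sum_i w_i G_i(x).
    return sum(w * gaussian_component(x, mu, R, s)
               for w, mu, R, s in zip(weights, mus, Rs, scales))

# Two toy components with identity rotations and unit scales.
I = np.eye(3)
weights = [0.8, 0.5]
mus = [np.zeros(3), np.array([1.0, 0.0, 0.0])]
val = mixture_density(np.zeros(3), weights, mus, [I, I],
                      [np.ones(3), np.ones(3)])
print(val)  # 0.8 at the first mean, plus the tail of the second component
```

At the first mean the first kernel evaluates to 1, so the density is $0.8 + 0.5\,e^{-1/2}$, showing how overlapping components accumulate additively.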
Since the mixture can only be evaluated in 2D, when rendering an image, a 3D Gaussian is projected onto the 2D image plane as $G^{2D}$, via integrating it in the camera view space along a ray, to compute a pixel color:

$$
C(u) = \sum_{i=1}^{N} c_i o_i G_i^{2D}(u) \prod_{j=1}^{i-1} \left(1 - o_j G_j^{2D}(u)\right). \tag{2}
$$

where $N$ is the number of Gaussians that intersect with the ray cast from the pixel $u$. Finally, the Gaussian parameters, opacities, and colors are learned based on the observed

![](images/636abf59451469d19cda7d0a8e4158b399e587d5c993b4c18665cc836c296424.jpg)
Figure 1. Comparison of Student's t-distributions with varying degrees of freedom $\nu$ (standard deviation is 5).

2D images. Eq. (2) reveals that $C(u)$ can be seen as a 2D Gaussian mixture, except that now a component weight is also a function of other components, introducing additional cross-component interactions.

Numerically, Gaussians, as the mixture component, are closed under affine transformation and marginalization of variables, so that the forward/backward pass can be quickly computed. 3DGS is a monotonic mixture as it is additive, i.e. $w_i > 0$. Due to the success of 3DGS, existing works have since followed this paradigm [12, 21, 36, 37].

# 3.2.
Student's t as a basic component

We propose an unnormalized t-distribution mixture model, where a t-distribution is defined by a mean (location) $\mu \in \mathbb{R}^3$, a covariance matrix (shape) $\Sigma = R S S^T R^T \in \mathbb{R}^{3 \times 3}$, and a degree of freedom (tail-fatness) $\nu \in [1, +\infty)$, associated with an opacity $o$ and a color $c$:

$$
P(x) = \sum_i w_i T_i(x), \quad w_i > 0
$$

$$
T(x \mid \nu) = \left[ 1 + \frac{1}{\nu} (x - \mu)^T \Sigma^{-1} (x - \mu) \right]^{-\frac{\nu + 3}{2}}, \tag{3}
$$

where we can safely drop the scalar $\frac{\Gamma\left(\frac{\nu + 3}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\nu^{\frac{3}{2}}\pi^{\frac{3}{2}}|\Sigma|^{\frac{1}{2}}}$ of the original t-distribution to facilitate learning. The choice is driven by two main factors. First, the t-distribution is a strong generalization of Gaussians. As shown in Fig. 1, when $\nu \rightarrow 1$, $T \rightarrow \text{Cauchy}$; when $\nu \rightarrow \infty$, $T \rightarrow \text{Gaussian}$. So the t-distribution can capture what Gaussians capture and beyond. Furthermore, since the Cauchy distribution is fat-tailed, it can cover larger areas with higher densities than Gaussians, therefore potentially reducing the number of components. As $\nu$, $\mu$ and $\Sigma$ are learnable, SSS becomes a mixture of components learned from an infinite number of distribution families, instead of one family [13], providing further flexibility.

The second reason for the t-distribution is that it also provides good properties similar to Gaussians, e.g. being closed under affine transformation and marginalization of variables. Rendering a pixel requires an affine transformation, then a projective transformation, followed by an integration along a ray, to be applied to a component, which has a simple form in 3DGS.
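The two limiting behaviors above can be checked numerically with a minimal sketch of the unnormalized component of Eq. (3) (our own helper names, not the paper's code):

```python
import numpy as np

def t_density(x, mu, Sigma, nu):
    """Unnormalized 3D Student's t of Eq. (3):
    [1 + (1/nu) (x-mu)^T Sigma^{-1} (x-mu)]^(-(nu+3)/2)."""
    d = x - mu
    m = d @ np.linalg.solve(Sigma, d)        # squared Mahalanobis distance
    return float((1.0 + m / nu) ** (-(nu + 3.0) / 2.0))

def gaussian_density(x, mu, Sigma):
    d = x - mu
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))

mu, Sigma = np.zeros(3), np.eye(3)
x = np.array([0.5, -0.3, 0.2])
# nu -> infinity recovers the (unnormalized) Gaussian of Eq. (1) ...
assert abs(t_density(x, mu, Sigma, 1e8) - gaussian_density(x, mu, Sigma)) < 1e-6
# ... while nu = 1 (Cauchy-like) keeps far more density in the tails.
far = np.array([5.0, 0.0, 0.0])
assert t_density(far, mu, Sigma, 1.0) > 100 * gaussian_density(far, mu, Sigma)
```

The second assertion illustrates the fat-tail argument: at 5 standard deviations the Cauchy-like component still carries orders of magnitude more density than the Gaussian, so one such component can cover an area that would otherwise need several Gaussians.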
In SSS, the t-distribution also has a closed form:

$$
\begin{array}{l} T^{2D}(u) = \left[ 1 + \frac{1}{\nu} \left(u - \mu^{2D}\right)^T \left(\Sigma^{2D}\right)^{-1} \left(u - \mu^{2D}\right) \right]^{-\frac{\nu + 2}{2}} \\ \mu^{2D} = (W\mu + t)_{1:2} / (W\mu + t)_{3} \\ \Sigma^{2D} = \left(J W \Sigma W^T J^T\right)_{1:2,1:2}, \tag{4} \\ \end{array}
$$

where the subscripts select the corresponding rows and columns. $W$, $t$ and $J$ are the affine transformation (i.e. scale and translation) and the (approximated) projective transformation [40, 41]. This enables us to easily derive the key gradients for learning, shown in the supplementary material (SM), unlike existing research that also uses alternative mixture components but requires approximation [8].

In summary, the mixture of learnable t-distributions enhances the representational power and provides good mathematical properties for learning.

# 3.3. Splitting and Scooping

While monotonic mixture models are powerful, a non-monotonic mixture model has recently been proposed by introducing negative components [20], arguing that it is suboptimal to operate only in the positive density space:

$$
P(x) = \left(\sum_i w_i T_i(x)\right)^2 = \sum_{i=1}^{K} \sum_{j=1}^{K} w_i w_j T_i(x) T_j(x) \tag{5}
$$

where $w \in \mathbb{R}$. In our problem, a negative density makes good sense as it can be seen as subtracting a color. However, our experiments using Eq. (5) show that it is not ideal, as it introduces interactions between every pair of components, increasing the model evaluation complexity to $O(n^2)$ where $n$ is the number of components in the model, making it significantly slower than before. Therefore, we still use Eq. (3) but with $w \in \mathbb{R}$ instead of $w > 0$, where $w_i = c_i o_i \prod_{k=1}^{i-1}(1 - o_k T_k^{2D}(u))$ and $o \in [-1,1]$. Normally this might cause issues as Eq.
(3) is then not well defined with negative components. However, we can parameterize the density in an energy-based form, explained later, which is well defined. In learning, we constrain the opacity by a tanh function so that positive and negative components can dynamically change signs while staying bounded. Introducing negative t-distributions enhances the representation power and the parameter efficiency. We show a simple experiment in Fig. 2, where fewer components are needed to fit the shape topology of a torus. In SSS, a component with negative densities is equivalent to removing its color from the mixture; negative components are thus particularly useful for subtracting colors.

# 3.4. Learning via sampling

Recently, it has been argued that principled sampling in 3DGS, e.g. Markov Chain Monte Carlo [14], is preferable to

![](images/5c91ab75cb1c4710a1b6809e89b250684095a8e2e05fb6fe0e32012834fa291d.jpg)
(a) Topology data

![](images/b53d83464fc305c5816f0d58d7e703bf1aaf4b764f052fda4e9174a774aaaedb.jpg)
(b) 2 positive components

![](images/ab194fc8e4ea619ad0bb5a7b8e84eb4b58011c654cde0e71a5320e254a881220.jpg)
(c) 5 positive components
Figure 2. High parameter efficiency from negative components. We use a torus with only ambient lighting and frontal views (a), where the challenge is to capture the shape topology with as few components as possible. We initialize the component means near the center. Using only positive densities either underfits with two components (b), or requires at least 5 components to capture the topology correctly (c). In contrast, in (d), we only need two components (one positive and one negative) to capture the topology of the shape. Both components are co-located at the center of the torus. The positive component covers the torus but also the hole, while the negative component subtracts densities in the middle to make the hole.
![](images/bba60d3fd8545ddf61556beaacdb03a5d8857c371ccdf684ef70e41526d75ef8.jpg)
(d) One positive and one negative component

naive stochastic gradient descent (SGD). Empirically, we found that training SSS involves learning more tightly coupled parameters compared with 3DGS, namely among $\nu$, $\mu$, and $\Sigma$. We speculate that this is because changing $\nu$ in learning changes the family of distributions within which we optimize $\mu$ and $\Sigma$. Therefore, we propose a sampling scheme that mitigates such coupling, based on Stochastic Gradient Hamiltonian Monte Carlo (SGHMC).

Starting from Hamiltonian Monte Carlo, we first parameterize the posterior distribution as:

$$
P(\theta, r) \propto \exp\left(-L_\theta(x) - \frac{1}{2} r^T I r\right) \tag{6}
$$

where $L_\theta(x)$ is our loss function, $I$ is an identity matrix, $r$ is a momentum auxiliary variable, and $\theta$ denotes the learnable parameters. This formulation is needed because Eq. (3) with $w \in \mathbb{R}$ is not a well-defined distribution, which makes direct sampling difficult. Using an energy function circumvents this issue and prescribes the high-density regions of good $\theta$. Intuitively, we would like to sample $\theta$ to minimize $L_\theta(x)$. In addition, to decouple parameters during learning, the momentum term $\frac{1}{2} r^T I r$ creates friction for each dimension of the parameter space, enabling adaptive learning for each parameter.

For $L_\theta(x)$, our rendering function computes the pixel value based on the $N$ components associated with a ray:

$$
C(u) = \sum_{i=1}^{N} c_i o_i T_i^{2D}(u) \prod_{j=1}^{i-1} \left(1 - o_j T_j^{2D}(u)\right). \tag{7}
$$

where $u$ is the pixel, and $c$ and $o$ are the color and opacity associated with a component $T$.
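Eq. (7) is standard front-to-back alpha compositing, with the 2D t-density playing the role of the Gaussian footprint. A minimal sketch (our own naming, assuming components are already sorted front to back and their 2D densities at pixel $u$ are precomputed):

```python
import numpy as np

def composite_pixel(colors, opacities, densities):
    """Front-to-back compositing of Eq. (7):
    C(u) = sum_i c_i o_i T_i^2D(u) * prod_{j<i} (1 - o_j T_j^2D(u))."""
    C = np.zeros(3)
    transmittance = 1.0                       # prod_{j<i} (1 - o_j T_j^2D(u))
    for c, o, t2d in zip(colors, opacities, densities):
        alpha = o * t2d                       # effective alpha of component i at u
        C = C + transmittance * alpha * np.asarray(c, dtype=float)
        transmittance *= 1.0 - alpha          # light left for components behind
    return C

# An opaque front component fully occludes the one behind it.
front_only = composite_pixel([(1, 0, 0), (0, 0, 1)], [1.0, 1.0], [1.0, 1.0])
```

With scooping, an opacity may be negative, in which case the corresponding term subtracts its color from the accumulated pixel value, which is exactly the "removing a color" behavior described above.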
We then employ the following loss function [14]:

$$
\begin{array}{l} L = \left(1 - \epsilon_{D-SSIM}\right) L_1 + \epsilon_{D-SSIM} L_{D-SSIM} \\ + \epsilon_o \sum_i |o_i|_1 + \epsilon_\Sigma \sum_i \sum_j |\sqrt{\lambda_{i,j}}|_1 \tag{8} \\ \end{array}
$$

where the $L_1$ norm and the structural similarity loss $L_{D-SSIM}$ aim to reconstruct images, while the last two terms act as regularization, with $\lambda$ being the eigenvalues of $\Sigma$. The regularization applied to the opacity ensures that the opacity is large only when a component is absolutely needed. The regularization on $\lambda$ ensures the model uses components that are as spiky as possible (i.e. with small variances). Together, they minimize the needed number of components [14].

Furthermore, directly sampling from Eq. (6) requires the full gradient of $U = L_\theta(x) + \frac{1}{2} r^T I r$, which is not possible given the large number of training samples. Therefore, replacing the full gradient with a stochastic gradient introduces a noise term: $\nabla \hat{U} = \nabla U + \mathcal{N}(0, V)$, where $\mathcal{N}$ is Normal and $V$ is the covariance of the stochastic gradient noise. Under mild assumptions [4], sampling Eq. (6) using stochastic gradients becomes (with the detailed derivation in the SM):

$$
d\theta = M^{-1} r \, dt
$$

$$
dr = -\nabla U(\theta) \, dt - C M^{-1} r \, dt + \mathcal{N}(0, 2C \, dt) \tag{9}
$$

where $\mathcal{N}$ is Gaussian noise, $M$ is a mass matrix, and $C$ is a control parameter dictating the friction term $C M^{-1} r \, dt$ and the noise $\mathcal{N}(0, 2C \, dt)$. In our problem, it is crucial to design good friction and noise scheduling. The effect of this principled sampling method is further discussed in the SM.

# 3.4.1. Friction and Noise Scheduling

We first use SGHMC on $\mu$ and Adam on the other parameters. To learn $\mu$, we modify Eq.
(9) to:

$$
\begin{array}{l} \mu_{t+1} = \mu_t - \varepsilon^2 \left[ \frac{\partial L}{\partial \mu} \right]_t + F + N \\ F = \sigma(o) \, \varepsilon (1 - \varepsilon C) r_{t-1} \\ N = \sigma(o) \, \mathcal{N}\left(0, 2 \varepsilon^{\frac{3}{2}} C\right) \\ r_{t+1} = r_t - \varepsilon \left[ \frac{\partial L}{\partial \mu} \right]_{t+1} - \varepsilon C r_{t-1} + \mathcal{N}(0, 2 \varepsilon C) \\ \text{where } \sigma(o) = \sigma(-k(o - t)) \tag{10} \\ \end{array}
$$

where $\varepsilon$ is the learning rate, which decays during learning, $\mathcal{N}$ is Gaussian noise, and $o$ is the opacity. The main difference between Eq. (10) and Eq. (9) is that we now have adaptive friction $F$ and noise $N$ for $\mu$. $\sigma$ (the sigmoid function) switches the friction and noise on/off. We use $k = 100$ and $t = 0.995$, so that it only activates for components with opacity lower than 0.005. When it is activated, friction and noise are added to these components. Note that if $F$ is disabled, Eq. (10) simplifies to a Stochastic Gradient Langevin Dynamics scheme [32].

When learning, we initialize with a sparse set of (SfM) points without normals [13], run Eq. (10) without $F$ for burn-in for exploration, then run the full sampling for exploitation until convergence. During the burn-in stage, we multiply $N$ by the covariance $\Sigma$ of the component to maintain the anisotropy profile of the t-distribution. After the burn-in, $\Sigma$ is removed, and the anisotropy is then maintained by $F$ due to the momentum $r$.

Key gradients Overall, the key learnable parameters for each component in SSS include the mean $\mu$, covariance $\Sigma$ (i.e. $S$ and $R$), color $c$, opacity $o$, and degree parameter $\nu$. To compute Eq.
(9), the key gradients are $\frac{\partial L}{\partial \mu}$, $\frac{\partial L}{\partial S}$, $\frac{\partial L}{\partial R}$, $\frac{\partial L}{\partial c}$, $\frac{\partial L}{\partial o}$, and $\frac{\partial L}{\partial \nu}$. For simplicity, we give them in the SM.

# 3.4.2. Adding and Recycling Components

Components can become nearly transparent during sampling, i.e. have near-zero opacity. In 3DGS, they are removed. Recently, it has been argued that they should instead be recycled [14], by relocating them to a high-opacity component. However, care needs to be taken, as the overall distribution before and after relocation should be the same [26]. When moving some components to the location of another component, this is ensured by:

$$
\min \int_{-\infty}^{\infty} \left|\left| C_{new}(\mu) - C_{old}(\mu) \right|\right|_2^2 d\mu \tag{11}
$$

where $C_{new}$ and $C_{old}$ are the colors after and before relocation respectively. Minimizing this integral ensures the smallest possible pixel-wise color change over the whole domain. Minimizing Eq. (11) in SSS leads to:

$$
\begin{array}{l} \mu_{new} = \mu_{old}, \quad (1 - o_{new})^N = (1 - o_{old}) \\ \Sigma_{new} = \left(o_{old}\right)^2 \frac{\nu_{old}}{\nu_{new}} \left(\frac{\beta\left(\frac{1}{2}, \frac{\nu_{old} + 2}{2}\right)}{K}\right)^2 \Sigma_{old} \\ K = \sum_{i=1}^{N} \sum_{k=0}^{i-1} \binom{i-1}{k} (-1)^k \left(o_{new}\right)^{k+1} Z \\ Z = \beta\left(\frac{1}{2}, \frac{(k+1)(\nu_{new} + 3) - 1}{2}\right) \tag{12} \\ \end{array}
$$

$\mu_{new}$ and $\mu_{old}$, $o_{new}$ and $o_{old}$, $\Sigma_{new}$ and $\Sigma_{old}$ are the means, opacities, and covariances after and before relocation respectively. $N$ is the total number of components involved in a relocation, i.e. moving $N-1$ low-opacity components to the location of 1 high-opacity component. $\beta$ is the beta function.
We leave the detailed derivation in the SM. Note that we do not distinguish between positive and negative components during relocation. This introduces a perturbation on the sign of the opacity. In Eq. (12), if $o_{old} > 0$ then all $o_{new} > 0$, and otherwise $o_{new} < 0$ if $o_{old} < 0$, regardless of their original opacity signs. In practice, this sign perturbation helps the mixing of the sampling. Furthermore, to ensure sampling stability, we limit the relocation to a maximum of $5\%$ of the total components at a time. Finally, we also add new components when needed, but do not use the adaptive density control (clone and split) of 3DGS [13]. Instead, we add $5\%$ new components with zero opacity and then recycle them.

# 4. Experiments

# 4.1. Experimental setting

Datasets and Metrics Following existing research, we employ 11 scenes from 3 datasets, including 7 public scenes from Mip-NeRF 360 [2], 2 outdoor scenes from Tanks & Temples [15], and 2 indoor scenes from Deep Blending [9]. Also following previous evaluation protocols, we use Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Metric (SSIM) [31], and Learned Perceptual Image Patch Similarity (LPIPS) [39]. We provide average scores for each dataset; detailed scores are in the SM.

Baselines Since there are many publications based on 3DGS, we only choose the original 3DGS [13] and the most recent works that focus on improving the fundamental paradigm of 3DGS and have achieved the best performance. The methods include Generalized Exponential Splatting (GES) [8] and 3D Half-Gaussian Splatting (3DHGS) [18], which also use different (positive-only) mixture components; Scaffold-GS [21] and Fre-GS [37], which optimize the training procedure to achieve faster convergence and better results; 3DGS-MCMC [14], which proposes a principled MCMC sampling process; and Mip-NeRF [1], which is a state-of-the-art method based on Neural Radiance Fields (NeRF) [22].
Overall, our baselines comprehensively include methods with new mixture components, new optimization approaches, and SOTA quality.

The baseline results in the general benchmarks are taken from their papers. In addition, we run their code with other settings for further comparison. Since not all baseline methods are implemented in exactly the same setting, we need to adapt them for comparison. These details are in the SM.

# 4.2. General Benchmarks

We first compare SSS with the baselines on all scenes in their default settings, shown in Tab. 1. SSS achieves the best overall results on 6 of the 9 metrics, and the second best on 2 metrics. The only exception is the LPIPS on Deep Blending, where the difference between SSS and the best method is $7 \times 10^{-3}$. Furthermore, when investigating individual scenes, SSS achieves the largest leading margin on Train. It achieves 23.23/0.844/0.170 in PSNR/SSIM/LPIPS, where the second-best method 3DHGS achieves 22.95/0.827/0.197, a $1.22\% / 2.05\% / 13.7\%$ improvement. Detailed scores for each scene are in the SM. We show a qualitative comparison in Figure 3.

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mip-NeRF | 29.23 | 0.844 | 0.207 | 22.22 | 0.759 | 0.257 | 29.40 | 0.901 | 0.245 |
| 3DGS | 28.69 | 0.870 | 0.182 | 23.14 | 0.841 | 0.183 | 29.41 | 0.903 | 0.243 |
| GES | 26.91 | 0.794 | 0.250 | 23.35 | 0.836 | 0.198 | 29.68 | 0.901 | 0.252 |
| 3DHGS | 29.56 | 0.873 | 0.178 | 24.49 | 0.857 | 0.169 | 29.76 | 0.905 | 0.242 |
| Fre-GS | 27.85 | 0.826 | 0.209 | 23.96 | 0.841 | 0.183 | 29.93 | 0.904 | 0.240 |
| Scaffold-GS | 28.84 | 0.848 | 0.220 | 23.96 | 0.853 | 0.177 | 30.21 | 0.906 | 0.254 |
| 3DGS-MCMC | 29.89 | 0.900 | 0.190 | 24.29 | 0.860 | 0.190 | 29.67 | 0.890 | 0.320 |
| Ours | 29.90 | 0.893 | 0.145 | 24.87 | 0.873 | 0.138 | 30.07 | 0.907 | 0.247 |

Table 1. Comparison on Mip-NeRF360 (columns 2-4), Tanks & Temples (columns 5-7), and Deep Blending (columns 8-10). The red, orange and yellow colors in the original paper represent the top three results. Competing metrics are extracted from the respective papers, and ours are reported as the average of three runs.

# 4.3. Parameter Efficiency

SSS has stronger representational power than 3DGS and its variants. The varying tail-fatness of its components enables SSS to fit the data with fewer components, i.e. with higher parameter efficiency. We show this via experiments under different component numbers.

Since the SfM initialization gives a different number of components in different scenes, and a method normally increases the component number during learning, we introduce a coefficient describing the latter as a multiple of the former. Denoting the initial component number as $\delta$, we test $\delta$, $1.4\delta$, $1.8\delta$, $2.2\delta$, and $2.6\delta$ as the maximum component number for comparison. Note that even with $2.6\delta$, the component number is still much smaller than in the experiments in Tab. 1. Specifically, the $2.6\delta$ vs. original 3DGS component numbers are $468\mathrm{k}/1.1\mathrm{m}$, $364\mathrm{k}/2.6\mathrm{m}$, $208\mathrm{k}/3.4\mathrm{m}$, $96\mathrm{k}/2.5\mathrm{m}$, $140\mathrm{k}/5.9\mathrm{m}$, $520\mathrm{k}/1.3\mathrm{m}$, $416\mathrm{k}/1.2\mathrm{m}$, $364\mathrm{k}/5.2\mathrm{m}$, $624\mathrm{k}/1.8\mathrm{m}$, $286\mathrm{k}/1.5\mathrm{m}$, $83\mathrm{k}/4.75\mathrm{m}$ in Train, Truck, DrJohnson, Playroom, Bicycle, Bonsai, Counter, Garden, Kitchen, Room, Stump, corresponding to merely $42.5\%$, $14\%$, $6.1\%$, $3.8\%$, $2.4\%$, $40\%$, $34.7\%$, $7\%$, $34.7\%$, $19.1\%$, $1.7\%$ of the original components, a maximum reduction of $98.3\%$.

PSNR is averaged over the scenes in each dataset and shown in Figure 4.
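The quoted ratios can be reproduced directly from the component counts listed above; a small arithmetic sketch (the scene-to-count pairing follows the list in the text):

```python
# SSS (2.6*delta) vs. original 3DGS component counts for three of the scenes.
pairs = {"Train": (468e3, 1.1e6), "Truck": (364e3, 2.6e6), "Stump": (83e3, 4.75e6)}
for scene, (sss, gs) in pairs.items():
    print(scene, f"{sss / gs:.1%}")   # 42.5%, 14.0%, 1.7%
```

The smallest ratio (Stump, 1.7%) gives the quoted maximum reduction of 98.3%.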
First, SSS outperforms all other methods in 15 out of 15 settings, demonstrating strong expressivity across all scenarios. Furthermore, when the component number decreases, all methods deteriorate, but SSS deteriorates comparatively slowly, demonstrating that SSS can fit the data much more efficiently than the rest. One specific example is Tanks & Temples in Tab. 1. SSS achieves 23.6 PSNR with merely 180k components, already surpassing Mip-NeRF, 3DGS and GES. With around 300k components, SSS achieves 24.4 PSNR, which is only slightly worse than 3DHGS and 3DGS-MCMC, by a margin at the scale of $10^{-2}$. Note this is a comparison with the methods in Tab. 1, where they use at least $1\mathrm{m}$ components, e.g. 3DGS employs around $1.1\mathrm{m}$ and $2.6\mathrm{m}$ in Train and Truck, while SSS employs only around $364\mathrm{k}$ and $468\mathrm{k}$, a maximum reduction of $82\%$ of the components.

# 4.4. Qualitative Comparison

We further show a qualitative comparison in one scene across the different component numbers in Fig. 5. The ground truth is one view from Train. When restricting the component number to around $180\mathrm{k}$, the original 3DGS and one of the state-of-the-art methods, 3DHGS, show significant blur. This is likely caused by the struggle between stretching Gaussians to cover large areas and narrowing them to reconstruct details, given the limited number of components at their disposal. Because the optimization relies on stochastic gradient descent, it settles in a local minimum where the distribution of the Gaussians is sub-optimal. Intuitively, the issue can be mitigated by more flexible components and/or better optimization. As expected, this is shown in GES and 3DGS-MCMC, where the former employs a more flexible component (a generalized exponential function) while the latter improves the optimization itself by MCMC. The improvements by both methods suggest that these are the correct directions to improve the paradigm of 3DGS.
Next, SSS outperforms GES and 3DGS-MCMC visually when using $180\mathrm{k}$ components. One example is the sky and the hill in the background in the left half of the image. GES creates a blurry background mixing the sky and the hill, with no discernible details, suggesting it uses a few components that are stretched to cover large areas. In contrast, 3DGS-MCMC can separate the sky from the hill. However, it creates random white patches in the sky, which do not exist in the ground truth. This suggests that 3DGS-MCMC employs a relatively larger number of slim Gaussians to fit the details but meanwhile introduces additional noise. SSS not only successfully separates the sky and the hill, but also retains the homogeneous color in the sky and reconstructs the details on the hill, e.g. the trees and lawns.

![](images/10ef952536df28a418d60b953e925094d36b367855132ffef04fd66248fb2e41.jpg)
Figure 3. Visual comparison. Zoom in for better visualization. (a) SSS restores better the indentations of the box lid; (b) SSS is the best at detailing windows in the upper center; (c) Only the image rendered by SSS contains the green track detail in the upper right corner; (d) SSS is the best at restoring the reflection in the front window of the truck; (e) SSS perfectly restores the light switch next to the stairs.

![](images/dd2fd8a843160958f31bebe581d4c849847df9c9adfa97180127cc421d1effea.jpg)
Figure 4. All methods with reduced component numbers.

![](images/af8c51006e48bdd5f3cb8cbb6e529def2148edac7a424639f9855d1377a0c5fe.jpg)

![](images/0146a68ae0e1461047b47f25025bbc7d5266fcfaf436d6083c2096fc9165d271.jpg)

This is attributed to SSS's capability of learning components with varying tail-fatness to adaptively capture large homogeneous areas and small heterogeneous regions.

Furthermore, when increasing the component number to 468k, all results are improved, as expected.
3DGS and 3DHGS still cannot work as well as the other methods, suggesting they need more components. In addition, the difference between GES, 3DGS-MCMC, and SSS starts to narrow.

![](images/09f00dee842f07e2dd34dec3f3d58c15acefc573ae7245705de0250de9cc6d67.jpg)
Figure 5. Visual results of all methods with varying component numbers. In addition to the well-reconstructed main body of the train compared to the other baselines, our method can use a small number of components to restore more details, such as plants on distant mountains and rocks on the nearby ground. Besides, our sky has less noise and appears more similar to the ground truth. Zoom in for details. Note that ours with $252\mathrm{k}$ components has already achieved SOTA quality and beats most baselines.

GES can now separate the sky and the hill, and both GES and 3DGS-MCMC have fewer artifacts. However, as the component number increases, noticeable noise is still introduced into the sky, which suggests that this is a common issue for both methods when using more components to cover an area with mixed homogeneous and heterogeneous regions. Comparatively, SSS gives consistent performance across all component numbers, i.e. clearly separating and reconstructing the homogeneous sky and the heterogeneous hill. Also, the visual quality from $180\mathrm{k}$ to $468\mathrm{k}$ components does not change significantly for SSS, but is noticeably improved for GES and 3DGS-MCMC, suggesting a higher perceptual parameter efficiency of SSS.

Ablation Study We conduct an ablation study to show the effectiveness of the various components of SSS. The results show how each component contributes to the final performance improvement. We give details in the SM.

# 5. Conclusion, Discussion, and Future Work

We proposed Student Splatting and Scooping (SSS), a new non-monotonic mixture model consisting of positive and negative Student's t-distributions, learned via principled SGHMC sampling.
SSS is a simple yet strong and non-trivial generalization of 3DGS and its variants. SSS outperforms existing methods in rendering quality, and shows high parameter efficiency, e.g. achieving comparable quality with less than $\frac{1}{3}$ of the components.

SSS has limitations. Its primitives are restricted to symmetric and smooth t-distributions, limiting its representational power. The sampling also needs hyperparameter tuning, such as the percentage of negative components. In the future, we will combine other distribution families (e.g. Laplace) with the t-distribution to further enhance the expressivity, and make the SGHMC self-adaptive to achieve a better balance between the positive and negative components.

# Acknowledgement

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 899739 CrowdDNA.

# References

[1] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5855–5864, 2021. 5
[2] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5470–5479, 2022. 2, 5, 7
[3] Guikun Chen and Wenguan Wang. A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890, 2024. 1
[4] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In International conference on machine learning, pages 1683-1691. PMLR, 2014. 4
[5] Zhiqin Chen, Thomas Funkhouser, Peter Hedman, and Andrea Tagliasacchi. Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16569-16578, 2023. 2 +[6] Ben Fei, Jingyi Xu, Rui Zhang, Qingyuan Zhou, Weidong Yang, and Ying He. 3d gaussian as a new vision era: A survey. arXiv preprint arXiv:2402.07181, 2024. 1 +[7] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5501-5510, 2022. 2 +[8] Abdullah Hamdi, Luke Melas-Kyriazi, Jinjie Mai, Guocheng Qian, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, and Andrea Vedaldi. Ges: Generalized exponential splatting for efficient radiance field rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19812-19822, 2024. 1, 2, 3, 5 +[9] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (ToG), 37(6):1-15, 2018. 5, 7 +[10] Peter Hedman, Pratul P Srinivasan, Ben Mildenhall, Jonathan T Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5875-5884, 2021. 2 +[11] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 2 +[12] Letian Huang, Jiayang Bai, Jie Guo, and Yanwen Guo. + +Gs++: Error analyzing and optimal gaussian splatting. arXiv preprint arXiv:2402.00752, 2024. 2, 3 +[13] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 
1, 2, 3, 5 +[14] Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Yang-Che Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, and Kwang Moo Yi. 3d gaussian splatting as markov chain monte carlo. Advances in Neural Information Processing Systems, 37:80965-80986, 2024. 1, 2, 3, 4, 5 +[15] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (ToG), 36(4): 1-13, 2017. 5, 7 +[16] Jonas Kulhanek, Songyou Peng, Zuzana Kukelova, Marc Pollefeys, and Torsten Sattler. WildGaussians: 3D gaussian splatting in the wild. NeurIPS, 2024. 1 +[17] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3d gaussian representation for radiance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21719-21728, 2024. 1 +[18] Haolin Li, Jinyang Liu, Mario Sznaier, and Octavia Camps. 3d-hgs: 3d half-gaussian splatting. arXiv preprint arXiv:2406.02720, 2024. 1, 2, 5 +[19] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9298–9309, 2023. 2 +[20] Lorenzo Lo Conte, Aleksanteri Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, and Antonio Vergari. Subtractive mixture models via squaring: Representation and learning. In International Conference on Learning Representations (ICLR), 2024. 1, 3 +[21] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654-20664, 2024. 2, 3, 5 +[22] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. 
Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 2, 5
[23] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG), 41(4):1-15, 2022. 2
[24] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In The Eleventh International Conference on Learning Representations. 2
[25] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318-10327, 2021. 2
[26] Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. Revising densification in gaussian splatting. In European Conference on Computer Vision, pages 347-362. Springer, 2024. 2, 5
[27] Steven M Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06), pages 519-528. IEEE, 2006. 2
[28] Xiangyu Sun, Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Usman Ali, and Eunbyung Park. F-3dgs: Factorized coordinates and representations for 3d gaussian splatting. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 7957–7965, 2024. 1
[29] Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. In The Twelfth International Conference on Learning Representations. 2
[30] Shimon Ullman. The interpretation of structure from motion. Proceedings of the Royal Society of London. Series B. Biological Sciences, 203(1153):405-426, 1979. 2
[31] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli.
Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 5 +[32] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681-688. CiteSeer, 2011. 5 +[33] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20310-20320, 2024. 2 +[34] Minye Wu and Tinne Tuytelaars. Implicit gaussian splatting with efficient multi-level tri-plane representation. arXiv preprint arXiv:2408.10041, 2024. 1, 2 +[35] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20331-20341, 2024. 2 +[36] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19447-19456, 2024. 2, 3 +[37] Jiahui Zhang, Fangneng Zhan, Muyu Xu, Shijian Lu, and Eric Xing. Fregs: 3d gaussian splatting with progressive frequency regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21424-21433, 2024. 2, 3, 5 +[38] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2 +[39] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 5 +[40] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa volume splatting. In Proceedings Visualization, 2001. VIS'01., pages 29-538. IEEE, 2001. 2, 3 +[41] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa splatting. IEEE Transactions on Visualization and Computer Graphics, 8(3):223-238, 2002. 2, 3 \ No newline at end of file diff --git a/CVPR/2025/3D Student Splatting and Scooping/images.zip b/CVPR/2025/3D Student Splatting and Scooping/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c1c77a5fdba9e59fc7aea94622aea8dc2b421ec3 --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a235691dc6d79820a4457ab8f08f2f2b511468bc59b43db1ad2f9ff87e7a249e +size 872038 diff --git a/CVPR/2025/3D Student Splatting and Scooping/layout.json b/CVPR/2025/3D Student Splatting and Scooping/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e3d739da4aea936888149e547ee6113286bbb1dc --- /dev/null +++ b/CVPR/2025/3D Student Splatting and Scooping/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faa41482281bb9f56ca0e23868831a24fc7956511327827799c30818c4f27d32 +size 428141 diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_content_list.json b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7f7443f4b70f544ab6c4fda41af0a2551c58c247 --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2f485d652bec299b9e7416381584bf7cbd6ac15bf6989c6988484dade83355e6 +size 78194 diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_model.json b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_model.json new file mode 100644 index 0000000000000000000000000000000000000000..01f6bec4eb5918432f51a980d2903279acf6cd4f --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64bb03458994898cc1a8b118bf251093ce7394dd3c5abb8cbba531371c286717 +size 99941 diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_origin.pdf b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..77cc5a1d2d18783b86251c751eed36a8521a212d --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/17e84a16-1ce0-4997-bd4d-dc9b8b0e89ca_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da23a6c626de95e65b2a03a9464c29ba36aa444b74a98f09ebd1f5f6a4273ef2 +size 1440092 diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/full.md b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..01041cead5d13ad5883b0c88e431c45ebb6634af --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/full.md @@ -0,0 +1,309 @@ +# 3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation + +Weijie Wei*, Osman Ülger*, Fatemeh Karimi Nejadasl, Theo Gevers, Martin R. 
Oswald, University of Amsterdam, the Netherlands + +# Abstract + +Open-vocabulary segmentation methods offer promising capabilities in detecting unseen object categories, but the categories must be known in advance and provided by a human, either via a text prompt or pre-labeled datasets, thus limiting their scalability. We propose 3D-AVS, a method for Auto-Vocabulary Segmentation of 3D point clouds for which the vocabulary is unknown and auto-generated for each input at runtime, thus eliminating the human in the loop and typically providing a substantially larger vocabulary for richer annotations. 3D-AVS first recognizes semantic entities from image or point cloud data and then segments all points with the automatically generated vocabulary. Our method incorporates both image-based and point-based recognition, enhancing robustness under challenging lighting conditions where geometric information from LiDAR is especially valuable. Our point-based recognition features a Sparse Masked Attention Pooling (SMAP) module to enrich the diversity of recognized objects. To address the challenges of evaluating unknown vocabularies and avoid annotation biases from label synonyms, hierarchies, or semantic overlaps, we introduce the annotation-free Text-Point Semantic Similarity (TPSS) metric for assessing generated vocabulary quality. Our evaluations on nuScenes and ScanNet200 demonstrate 3D-AVS's ability to generate semantic classes with accurate point-wise segmentations. + +# 1. Introduction + +Existing perception methods [3, 24, 27, 46, 59, 61] for autonomous driving often rely on an inclusiveness assumption that all potential categories of interest must exist in the training dataset. Nevertheless, public datasets often annotate instances with pre-defined categories, which can vary from three (e.g.
vehicle, cyclist and pedestrian) [31, 44] to several dozen types [2, 4], and fail to annotate rare objects + +![](images/5a35a378d6c39aa3938e1680be5ca902b9564311bbd198c9c306d421f36787f5.jpg) + +![](images/dc99273a2973ca7bab6657a16cf9c09cae417bf25fee774ae2992d72feab6a50.jpg) +Reference Images +Figure 1. Pre-defined Vocabulary vs. Auto-Vocabulary. 3D-AVS automatically generates a vocabulary for which it predicts segmentation masks, offering greater semantic precision. Our predictions identify specific classes, e.g. building and signboard (highlighted in red boxes), which are annotated with ambiguous terms like manmade. Quantitatively, 3D-AVS recognizes 671 unique categories on the validation set of nuScenes [4], significantly surpassing nuScenes's original 16 categories. Left: Vocabulary for a single scene, Right: Vocabulary for the entire dataset. + +with correct semantic labels. Failing to recognize atypical objects or road users poses a significant risk to the perception model's adaptability to diverse real-life scenarios. + +The development of Vision-Language Models (VLMs) strengthens the connection between vision and language modalities and promotes progress in multi-modal tasks, such as zero-shot image classification [39], image search and retrieval [40], image captioning [34], video understanding [51], and open-vocabulary learning [49]. Open-vocabulary learning methods often utilize pre-trained VLMs to find the correspondence between visual entities and a semantic vocabulary, thereby creating the potential to detect any category of interest [49, 66]. However, these methods rely on human-specified queries, and thus cannot dynamically recognize all semantic entities in a scene. Conversely, predefining everything is neither scalable nor practical in a dynamic world, as it is impossible to anticipate all the categories the model may encounter in advance.
This shortcoming severely limits the real-life applicability of existing methods, as newly encountered object categories could still be unknown to the model, or humans may simply be unaware of them. + +In this work, we propose 3D-AVS, a framework that automatically recognizes objects, generates a vocabulary for them and segments LiDAR points. We evaluate our method on indoor and outdoor datasets [4, 14, 43] and introduce a metric, TPSS, to assess the model performance based on semantic consistency in CLIP [39] space. Figure 1 compares the same segmenter, namely OpenScene [38], with different vocabularies. 3D-AVS generates convincing semantic classes as well as accurate point-wise segmentations. Moreover, when pre-defined categories are general and ambiguous, e.g. man-made, 3D-AVS recognizes the semantically more precise categories, e.g. building and signboard. + +Our contributions can be summarized as follows: 1) we introduce auto-vocabulary segmentation for point clouds, aiming to label all points using a rich and scene-specific vocabulary. Unlike methods that rely on predefined vocabularies, we address an unknown vocabulary setting by dynamically generating a vocabulary per input; 2) we propose 3D-AVS, a framework that automatically identifies objects, either through an image-free point-based captioner or an off-the-shelf image-based captioner; 3) we propose a point captioner for 3D-AVS-LiDAR that decodes text from point-based CLIP features, achieving image independence and enhanced object diversity through a sparse masked attention pooling (SMAP) module; and 4) we introduce the Text-Point Semantic Similarity score, a novel CLIP-based, annotation-free metric that evaluates semantic consistency, accounting for synonyms, hierarchies, and similarity in unknown vocabularies, enabling scalable auto-vocabulary evaluation without human input. + +# 2. Related Work + +Open-Vocabulary Segmentation (OVS). OVS aims to perform segmentation based on a list of arbitrary text queries.
CLIP [39] achieves this in 2D by aligning vision and language in a shared latent space. However, no comparable large-scale point cloud dataset exists for similar training in 3D. Additionally, captions in point cloud datasets are typically much sparser. Therefore, existing methods usually freeze the text encoder and image encoder, and align point features to the vision-language feature space [8, 36, 38, 55, 64]. ULIP [55] distils vision-language knowledge into a point encoder via contrastive learning on text-image-point triplets. CLIP2Scene [8] adopts self-supervised learning, aligning point-text features using spatial-temporal cues. OpenScene [38] supervises the point encoder with CLIP-based image features through point-pixel projection. While these OVS approaches show promising results, they require user-defined categories as prompts. Conversely, our approach automatically generates categories that potentially appear in the scene without any human in the loop. + +Auto-Vocabulary Segmentation (AVS). AVS differs from OVS in that it segments entities directly from perceptual data rather than relying on a human-defined vocabulary as input. Relevant target categories are directly inferred from the image - usually without any additional training, finetuning, data sourcing or annotation effort. Zero-Guidance Segmentation (ZeroSeg) [42] achieves this by using clustered DINO [5] embeddings to obtain binary object masks. These masks are used to guide the attention of CLIP, yielding embeddings more accurately targeted to individual segments, and a trained large language model is tasked to output texts closest to said embeddings. While this requires switching between three different latent representations, AutoSeg [70] proposed a more direct approach based on BLIP [26] embeddings only. They introduced a procedure in which multi-scale BLIP embeddings are enhanced through clustering, alignment and denoising.
The embeddings are then captioned using BLIP's decoder and parsed into a noun set used by an OVS model for segmentation. CaSED [13] retrieves captions from an external database and then integrates parsed texts with different segmentation methods. Despite these attempts in the 2D domain, AVS in the 3D domain remains unexplored. Concurrently and independently, Meng et al. [32] have proposed vocabulary-free 3D instance segmentation and a method, PoVo, for this task. While PoVo first obtains 3D clusters and then matches the generated semantic categories to the clusters, our work focuses more on target category generation and seamless integration with existing OVS methods. + +Challenges of AVS evaluation. AVS presents additional challenges linked to evaluation. Since generated categories can be open-ended and outside of the fixed dataset vocabulary, one needs to bridge the gap between the two to assess the segmentation performance. ZeroSeg [42] relies on subjective assessment. In AutoSeg [70], the LLM-based mapper, LAVE, is introduced to address this challenge. However, the mapping targets are typically limited in size, causing the auto-generated categories - often more semantically rich and precise - to be discarded. To overcome these limitations, we propose the TPSS metric, which enables the evaluation of the generated categories while preserving their open-ended nature. + +Captioning 2D and 3D Data. Captioning is the process of generating a concise and meaningful description from data modalities such as images, videos or point clouds. Notable works in 2D combine image-based templates with extracted attributes [23, 60], combine deep learning models like convolutional neural networks with RNN, LSTM or transformer-based generators [47, 52, 62], or leverage pre-trained vision-language embeddings such as CLIP or BLIP [11, 26, 28, 34]. BLIP [26], known for its effective but somewhat generic captions, often focuses only on the 2-3 most prominent entities in an image.
BCC [70] addresses this limitation by enhancing BLIP tokens through unsupervised semantic clustering in the latent space, enabling cluster-wise captioning and resulting in more comprehensive and detailed captions. More recently, xGen-MM (BLIP-3) [56] was introduced, building on BLIP with two improvements: an expanded and more diverse set of training data, and a scalable vision token sampler for flexible input resolutions. While this task is broadly explored in the 2D domain, it is yet to be solved in the 3D domain. Existing approaches focus on describing a single object, e.g. CAD models [55, 57] and scanned shapes [16, 30, 54, 57, 68], or dense contextual indoor scenarios [6, 9, 10, 18, 20, 48, 68], but fail to caption sparse outdoor scenes due to point sparsity and the lack of colour information. LidarCLIP [17] encodes a sparse point cloud to a CLIP feature vector and then decodes it to a caption via ClipCap [35]. However, LidarCLIP only provides a global caption per scene, leading to limited coverage of semantic entities. Instead, our proposed point captioner copes with flexible receptive fields and offers a controllable number of captions at various granularities. + +# 3. Method + +# 3.1. Preliminaries + +CLIP and CLIP-aligned encoder. CLIP [39] is widely regarded as aligning visual and text features well, as evidenced by its superior performance on vision-language tasks. It comprises a text encoder $h_{\mathrm{tx}}$ and an image encoder $h_{\mathrm{im}}$ , both of which map a data modality, e.g. text and image, to a vision-language latent space, also known as the CLIP space. Many works [12, 15, 25, 53, 63] increase the output resolution of the CLIP image encoder, yielding high-resolution features $h_{\mathrm{im}}^{\mathrm{hr}}$ while preserving alignment within the original CLIP space.
Furthermore, some 3D methods [17, 36, 38, 55] distill features from $h_{\mathrm{im}}$ or its high-resolution variant $h_{\mathrm{im}}^{\mathrm{hr}}$ into 3D backbones, yielding a CLIP-aligned 3D encoder $h_{\mathrm{pt}}$. In this paper, we leverage such aligned 3D encoders and bypass the time-consuming training process whenever possible. + +Problem Definition. Given a point cloud $\mathbf{P} = \{p_n\}_{n=1}^N \in \mathbb{R}^{N \times 3}$ with $N$ points, the aim is to assign a semantic class label $l \in \mathbb{S}$ to every point, where $\mathbb{S}$ indicates a vast semantic space. Unlike closed-set or open-vocabulary segmentation, for which the vocabulary is known either via a user-specified prompt or through pre-defined dataset labels, the class set in auto-vocabulary segmentation is unknown and automatically generated for each input scene. + +# 3.2. 3D Auto-Vocabulary Segmentation + +This section introduces 3D-AVS, for which an overview of its major components is shown in Fig. 2. Given a point cloud and a set of corresponding images, 3D-AVS first utilizes a point captioner and an image captioner to describe points and images in detail. The generated captions are parsed in the Caption2Tag module, resulting in a list of tags indicating semantic entities. Eventually, each point is assigned a semantic tag, forming segmentation results. These key components are elaborated in the following paragraphs. + +Scene Captioning. A key step of our approach is the auto-generation of a vocabulary for the given scene, which is performed by a scene captioner that is based either on input images or on the input point cloud. Image captioning is a well-explored task with a variety of accessible multi-modal large language models (MLLMs) [26, 56, 65].
We adopt xGen-MM [56] as the image captioner because of its architectural flexibility and enhanced semantic coverage. Given a set of $K$ images $\mathbf{I} \in \mathbb{R}^{K \times H \times W \times 3}$ capturing a scene, and an instruction prompt (details in supplementary material), the image captioner generates a list of captions + +$$ +\mathbf{D} = \left\{\boldsymbol{d}_{\mathrm{im}}^{(k)} \in \mathbb{R}^{w_k} \mid k = 1, \dots, K \right\} \tag{1} +$$ + +where $w_{k}$ is the number of words in the caption for the $k$-th image. To ensure a diverse enough set of coherent captions, we opt for beam search in the generation process. Implementation details are in the supplementary material. Following caption generation, each caption is parsed and validated with Caption2Tag, as described in the section below. + +LiDAR point cloud captioning remains an underexplored area in existing research despite the potential of such captions for applications. While images collected alongside LiDAR point clouds can be used to generate a target vocabulary, relying solely on images proves inadequate under challenging conditions such as low light or adverse weather, where visual data becomes unreliable. To address this, we introduce a novel Point Captioner trained via transfer learning, which provides captions directly from color-independent LiDAR data. Our approach, detailed in + +![](images/ab772892f26336ea9cd715b248b012a94739b031f3466173296eb78670456e59.jpg) +Figure 2. Overview of 3D-AVS. A point cloud and corresponding images are fed to the point captioner and image captioner, respectively, to generate captions. Then, Caption2Tag excludes irrelevant words in the captions. The remaining nouns are passed to a text encoder and eventually assigned to points through a segmenter. The dashed lines indicate that the entire image branch is optional. The point captioner is the only trainable component in 3D-AVS.
Note that the example point caption is generated based on observing the green points. + +Sec. 3.3, takes a point cloud $\mathbf{P}$ as input and outputs captions $d_{\mathrm{pt}}$. Unlike image captioning, which requires extensive contextual information and sophisticated vision models to produce detailed captions, the Point Captioner provides robust descriptions by relying solely on geometric features. This color independence is particularly beneficial in low-visibility environments, such as nighttime scenes, where image-based captioning often falls short. Combining both modalities ultimately yields the best results, uniting the diversity of image captions with the resilience of point-based captions. + +Text Parsing. Captions generated by the image and point captioner are scene-specific sentences in natural language, which we then parse into individual object nouns for semantic segmentation. To this end, we filter the sentences for (compound) nouns (i.e. general entities) and proper nouns (i.e. named entities) using spaCy [19] and transform them to their singular form through lemmatization. Lastly, we verify each category against the WordNet dictionary, resulting in a set of $M$ scene-specific tags, denoted as $\mathbf{L} = \{l_m\}_{m=1}^M$. + +Segmentation. The proposed pipeline separates the vocabulary generation and segmentation, enabling seamless integration with an open-vocabulary point segmenter. The segmenter consists of three encoders, namely a text encoder $h_{\mathrm{tx}}: \mathbb{R}^1 \to \mathbb{R}^C$, a high-resolution image encoder $h_{\mathrm{im}}^{\mathrm{hr}}: \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{H \times W \times C}$ and a point encoder $h_{\mathrm{pt}}: \mathbb{R}^{N \times 3} \to \mathbb{R}^{N \times C}$, that are pre-aligned with the CLIP vision-language latent space.
Following the inference procedure of CLIP [39], namely similarity-based label assignment, we first compute the embeddings as follows: + +$$ +\mathbf{E}_{\mathrm{tx}} = \left\{e_m\right\}_{m=1}^{M} \leftarrow h_{\mathrm{tx}}(\mathbf{L}) \tag{2} +$$ + +$$ +\mathbf{F}_{\mathrm{im}} = \left\{f_k\right\}_{k=1}^{K} \leftarrow h_{\mathrm{im}}(\mathbf{I}) \tag{3} +$$ + +$$ +\mathbf{F}_{\mathrm{pt}} = \left\{f_n\right\}_{n=1}^{N} \leftarrow h_{\mathrm{pt}}(\mathbf{P}) \tag{4} +$$ + +where $\mathbf{E}_{\mathrm{tx}}$, $\mathbf{F}_{\mathrm{im}}$ and $\mathbf{F}_{\mathrm{pt}}$ indicate text embeddings, image features, and point features. $e_m \in \mathbb{R}^{1 \times C}$, $f_k \in \mathbb{R}^{H \times W \times C}$ and $f_{n} \in \mathbb{R}^{1 \times C}$ represent per-label, per-image and per-point features. Then, the image features are lifted to 3D, and each point visible in the images is assigned a pixel feature. In other words, given a point, we calculate its 2D coordinates by the point-to-pixel mapping $\Gamma: \mathbb{R}^3 \to \mathbb{R}^2$ and then copy the corresponding pixel feature to the point, denoted as $f_{n}^{\mathrm{im}} \in \mathbb{R}^{1 \times C}$. Eventually, each point is assigned a semantic label as follows: + +$$ +\hat{l}_n = \underset{m}{\operatorname{argmax}} \left(\max \left(\operatorname{SIM}\left(f_n, e_m\right) \,||\, \operatorname{SIM}\left(f_n^{\mathrm{im}}, e_m\right)\right)\right) \tag{5} +$$ + +where $\hat{l}_n$ denotes the predicted label for point $p_n$, $\mathrm{SIM}(\cdot, \cdot)$ is a similarity metric, for which we employ the dot product, producing a tensor $\in \mathbb{R}^{1 \times M}$, and $||$ indicates concatenation when image features are available. $\max(\cdot)$ takes a tensor $\in \mathbb{R}^{2 \times M}$ as input, performs a column-wise maximum operation, and outputs a tensor $\in \mathbb{R}^{1 \times M}$. + +# 3.3.
Point Captioner + +Inspired by LidarCLIP [17], we develop the Point Captioner, which first encodes points to the CLIP latent space and then decodes CLIP features to captions. However, LidarCLIP only provides a global caption per point cloud, leading to limited coverage of semantic entities. Therefore, we propose a sparse masked attention pooling (SMAP) that can increase the receptive field and output a controllable number of feature vectors, making it possible to train the network with a varying number of images. We detail the training stage, the inference stage and the SMAP in the following paragraphs. + +Training. The training of the Point Captioner is essentially a 2D-to-3D distillation that transfers knowledge from the 2D vision foundation model to the 3D backbone. We utilize the CLIP image encoder [39] $h_{\mathrm{im}}^{\mathrm{clip}}: \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{1 \times 1 \times C}$ and a CLIP-aligned point encoder $h_{\mathrm{pt}}: \mathbb{R}^{N \times 3} \to \mathbb{R}^{N \times C}$ to encode images and points. However, $h_{\mathrm{im}}^{\mathrm{clip}}(\cdot)$ outputs a global feature vector that does not match the per-point features obtained from $h_{\mathrm{pt}}(\cdot)$ . Therefore, we add SMAP to pool + +![](images/6fa58c6ede1271915835c71afc6d9ebeef49393bcdcdd15981214e02b1be04f0.jpg) +Figure 3. Point Captioner Overview. The image encoder and point encoder are pre-aligned in the CLIP latent space. During training (left), Sparse Masked Attention Pooling (SMAP) aggregates features from points visible in the image (highlighted in red) and is supervised using CLIP image features. During inference (right), neither the image nor camera intrinsic parameters are available. To address this, a group of masks is generated based solely on geometric information. The SMAP output is then decoded into a group of captions. For simplicity, only one image (left) and one sector (right) are shown. + +As shown in Fig.
3 (left), during training, a point cloud and a point-to-pixel mapping function (visualized as an image) are fed to the image-based mask generation. The output is a point-wise binary mask, where true indicates the point is visible in the image. We visualize the point mask by projecting the point to the image. The mask and the point features obtained from $h_{\mathrm{pt}}(\cdot)$ are input to SMAP. SMAP integrates features of points that are visible in the image and is supervised by the output feature of $h_{\mathrm{im}}^{\mathrm{clip}}(\cdot)$. Note that only one image is visualized in Fig. 3 for clarity, but all images corresponding to the point cloud are processed in parallel during training. + +Inference. Our goal is to generate diverse captions that comprehensively cover all semantic categories without requiring the intrinsic parameters of cameras. To achieve this, we propose a geometry-based mask generation strategy that efficiently partitions the point cloud into multiple regions, followed by individual captions for each region. Given the differences in point cloud distributions, we adopt cylindrical sector-based partitioning for outdoor scenes and square pillar-based partitioning for indoor scenes. In the remainder of this paragraph, we illustrate our approach using outdoor point clouds as an example, while details on indoor partitioning are provided in the supplementary materials. The point cloud is first transformed from a Cartesian coordinate system $\{p_n = (x_n,y_n,z_n)\}_{n = 1}^N$ to a polar coordinate system $\{p_n = (\rho_n,\varphi_n,z_n)\}_{n = 1}^N$ and then split into $T$ sectors according to its polar angle $\varphi$. The binary masks $\mathcal{B} = \{b_n^t\} \in \mathbb{R}^{N\times T}$ are obtained as follows: + +$$ +b_n^t = \left\{ \begin{array}{ll} \text{true}, & \text{if } \frac{t}{T} 2\pi \leq \varphi_n < \frac{t+1}{T} 2\pi \\ \text{false}, & \text{otherwise}. \end{array} \right.
\tag{6} +$$ + +where $t \in \{0,1,\dots,T - 1\}$. This way, SMAP generates mask-wise features that are further decoded into captions in the caption module. The merit of this method is that the number of captions is controllable by changing $T$. + +![](images/833eb2b30a2fbaa70b8202369e9025f673c4d24e331eddb65da0655e09dc6d71.jpg) +Figure 4. Sparse Masked Attention Pooling (SMAP). Given the coordinates and features of all points, a relative positional encoding (PE) is applied, followed by a residual connection. Masks are applied to the points, creating groups of point subsets. Global Average Pooling (GAP) on each subset produces a mean feature as a query. Finally, multi-head attention (MHA) is applied within each group to generate one feature per subset. + +Sparse Masked Attention Pooling (SMAP). SMAP takes as input 1) an entire point cloud with its per-point coordinates $\mathcal{C} \in \mathbb{R}^{N \times 3}$ and features $\mathcal{F} = \mathbf{F}_{\mathrm{pt}} \in \mathbb{R}^{N \times C}$, and 2) $J$ binary point-wise masks $\mathcal{B} \in \mathbb{R}^{J \times N}$, where $J = K$ during training and $J = T$ during inference. SMAP first conducts a relative positional encoding and then applies the masks to the encoded point features: + +$$ +\mathcal{F}^{\prime} = \mathcal{B} * \left(\mathcal{F} + \operatorname{PE}(\mathcal{C}, \mathcal{F})\right) \tag{7} +$$ + +where PE indicates a relative positional encoding as in [50] and $*$ denotes matrix multiplication. The masks essentially divide a point cloud into several subsets, allowing replacement. Therefore, the features $\mathcal{F}' = \{f_j'\}_{j=1}^J$, with $f_j' \in \mathbb{R}^{N_j \times C}$, have a variable length per mask. After multiplication, the features $\mathcal{F}'$ go through two paths: 1) zero-padded to the same length and then delivered to multi-head attention (MHA) as key $\mathcal{K}$ and value $\mathcal{V}$, and 2) passed to a global average pooling and then input to MHA as query $\mathcal{Q}$.
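The two-path pooling just described can be sketched as follows. This is a minimal, single-head stand-in under stated assumptions: the relative positional encoding and the learned MHA projections are omitted, and the function name `smap_pool` is our own illustration, not code from the paper:

```python
import numpy as np

def smap_pool(feats, masks):
    """Pool one feature vector per mask, SMAP-style: the subset mean (GAP)
    serves as the attention query, and scaled dot-product attention over the
    subset's point features produces the pooled output."""
    C = feats.shape[1]
    pooled = []
    for mask in masks:                    # masks: (J, N) boolean
        subset = feats[mask]              # (N_j, C), variable length per mask
        q = subset.mean(axis=0)           # GAP -> query vector (C,)
        scores = subset @ q / np.sqrt(C)  # dot-product attention logits (N_j,)
        w = np.exp(scores - scores.max())
        w /= w.sum()                      # softmax over the subset's points
        pooled.append(w @ subset)         # attention-weighted sum -> (C,)
    return np.stack(pooled)               # (J, C), one feature per mask
```

In this loop-based form each boolean mask selects a variable-length subset directly, so no zero-padding is needed; a batched implementation, as described above, pads the subsets to a common length instead.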
Eventually, we obtain pooled features $\mathcal{F}'' \in \mathbb{R}^{J \times C}$. + +# 4. Evaluation + +Auto-vocabulary segmentation introduces a novel setting without a standardized benchmark, making it challenging to compare methods directly. In this section, we discuss the challenges of evaluating this novel task and propose two strategies to evaluate segmentation accuracy and the semantic consistency between points and text labels. + +# 4.1. Challenges + +In open-vocabulary segmentation, which is a similar but simpler task, evaluation can be performed on conventional segmentation datasets by using the categories present in the annotations as pre-defined queries. However, this inherently means the model has prior knowledge of the classes it is expected to predict. In auto-vocabulary segmentation, however, no such information is available beforehand, presenting a unique challenge for evaluation. Moreover, natural language introduces ambiguities [58, 70], creating complex relationships between classes, such as synonymy, hyponymy and hypernymy. For instance, road could be labeled as drivable surface, street, or roadway, while a tire might be classified independently or as part of a wheel or vehicle. This makes it challenging to determine whether an instance is appropriately tagged with a precise semantic label. Given these nuances, evaluating the quality of generated labels and segmentation accuracy becomes complex, as the model must align with the varying language used in annotations, even when sometimes only general categories are provided in the ground truth. + +To address these challenges, we propose two solutions. Firstly, we introduce a novel, objective and annotation-independent metric in Sec. 4.2 that assesses how accurately a label - either auto-generated or selected from a fixed vocabulary - fits a given 3D point. This metric allows for flexible, any-to-any class evaluation.
Secondly, we leverage an LLM-based mapping approach to align auto-generated vocabulary classes with the ground-truth classes, enabling us to effectively evaluate both the quality of the segmentation mask and the relevance of the predicted labels (Sec. 4.3).

# 4.2. Text-Point Semantic Similarity Metric

We introduce the Text-Point Semantic Similarity (TPSS) metric, a measure independent of dataset annotations and subjective assessment. TPSS draws inspiration from inference with CLIP [39], where the best label out of a set of target classes $\{m_0,\dots,m_M\}$ is assigned to an image:

$$
\hat{l} = \underset{m}{\operatorname{argmax}} \left(\operatorname{SIM}\left(f^{\mathrm{im}}, e_{m}\right)\right) \tag{8}
$$

where $\hat{l}$ represents the predicted label, $f^{\mathrm{im}}$ is the image feature, and $e_m$ denotes the text embedding for class $m$. This equation identifies the label whose text embedding is closest to the given image feature in latent space, indicating the highest semantic alignment within CLIP's language space. The TPSS metric employs a similar approach, comparing pairs of individual point features with text features in this aligned space. This enables evaluation of how well any label corresponds to a specific point based on semantic similarity, making TPSS ideal for assessing both dynamic and fixed vocabularies. For further illustration, consider a scenario where a LiDAR point belongs to an object outside the nuScenes official classes, such as a "trash bin", and is thus annotated as "background". If our method predicts "garbage can" for this point, it should not be penalized for not predicting "background", as its prediction is semantically closer to "trash bin". TPSS accounts for such cases, evaluating the predicted label based on the object's visual appearance rather than the annotation setting or potential bias.
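In practice, this amounts to a max-over-labels cosine similarity per point, averaged over the scene. A minimal NumPy sketch, assuming the feature matrices have already been produced by a CLIP text encoder and a CLIP-aligned point encoder (not the authors' code):

```python
import numpy as np

def tpss(point_feats, text_embs):
    """Text-Point Semantic Similarity over one scene.

    point_feats: (N, C) CLIP-aligned per-point features.
    text_embs:   (M, C) CLIP text embeddings of the generated labels.
    """
    P = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    E = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sim = P @ E.T             # (N, M) cosine similarity of every point-label pair
    s_n = sim.max(axis=1)     # best-matching label per point
    return float(s_n.mean())  # average point-wise score over the scene
```

Because only the best-matching label counts for each point, a semantically precise label set raises the score even when it does not match the annotation vocabulary word-for-word.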
Formally, let $\mathbf{P} = \{p_n\}_{n=1}^N$ be a point cloud with $N$ points and $\mathbf{L} = \{l_m\}_{m=1}^M$ be a set of $M$ unique semantic labels generated for this point cloud. The text embeddings $\mathbf{E}$ and the point features $\mathbf{F}$ are obtained as follows:

$$
\mathbf{E} = \left\{e_{m}\right\}_{m=1}^{M} \leftarrow g_{\mathrm{tx}}(\mathbf{L}) \tag{9}
$$

$$
\mathbf{F} = \left\{f_{n}\right\}_{n=1}^{N} \leftarrow g_{\mathrm{pt}}(\mathbf{P}) \tag{10}
$$

where $g_{\mathrm{tx}}(\cdot)$ and $g_{\mathrm{pt}}(\cdot)$ are the frozen CLIP text encoder [39] and a CLIP-aligned point encoder, respectively. The TPSS score is calculated as follows:

$$
S_{n} = \max_{m} \left(\operatorname{SIM}\left(f_{n}, e_{m}\right)\right) \tag{11}
$$

$$
\operatorname{TPSS}(\mathbf{P}, \mathbf{L}, g_{\mathrm{tx}}, g_{\mathrm{pt}}) = \underset{n}{\operatorname{mean}}(S_{n}) \tag{12}
$$

where $S_{n}$ is the point-wise similarity score for point $n$. $\operatorname{TPSS}(\mathbf{P}, \mathbf{L}, g_{\mathrm{tx}}, g_{\mathrm{pt}})$ measures the text-point semantic similarity between the point cloud $\mathbf{P}$ and the label set $\mathbf{L}$. TPSS is encoder-agnostic as long as $g_{\mathrm{pt}}$ and $g_{\mathrm{tx}}$ are aligned. However, to reliably quantify which label set aligns better with a given point cloud, the point encoder and text encoder must remain unchanged across comparisons.

# 4.3. Mapping Auto-Vocabulary to Fixed Vocabulary

While TPSS effectively measures semantic similarity within the embedding space, evaluating the quality of the resulting segmentations is crucial for meaningful assessment. This requires establishing a correspondence between the open-ended classes and the ground-truth classes. To achieve this, we employ an evaluation scheme that leverages an LLM-based mapper, inspired by the LLM-based Auto-Vocabulary Evaluator (LAVE) [70].
LAVE maps each unique auto-vocabulary category to a fixed ground-truth class in the dataset. After segmenting the LiDAR point cloud using auto-vocabulary categories, each classification is updated according to this mapping. For example, points labeled as sedan are reclassified under the car category. This mapping enables evaluation of segmentation quality using the widely accepted mean Intersection-over-Union (mIoU) metric based on fixed-vocabulary categories, facilitating comparison with prior methods. Our evaluation framework extends LAVE by integrating mappings with GPT-4o and SBERT [41]. While we provide detailed results of all methods in the supplementary material, GPT-4o is used throughout the main experiments due to its superior mapping accuracy compared to both LAVE and SBERT.

Table 1. TPSS on the validation sets of nuScenes [4] and ScanNet [14]. The two datasets provide 16 and 20 official categories, respectively. OpenScene [38] extends the nuScenes label set by manually defining 43 sub-categories. 3D-AVS outperforms these human-defined categories on both datasets, demonstrating its ability to generate a semantically more precise label set.

| Label Set | Human-involved | nuScenes [4] | ScanNet [14] |
| --- | --- | --- | --- |
| Official label set | ✓ | 7.39 | 3.44 |
| Extended label set [38] | ✓ | 8.70 | - |
| 3D-AVS-Image | ✗ | 8.78 | 3.49 |
| 3D-AVS-LiDAR | ✗ | 8.80 | 3.71 |
| 3D-AVS | ✗ | 9.65 | 3.78 |

Figure 5. TPSS on nuScenes subsets with different light conditions. LiDAR-only 3D-AVS performs better during night and rainy scenes, suggesting its robustness across difficult conditions.

# 5. Experiments

# 5.1. Experimental Setup

Our method is evaluated on the nuScenes [4], ScanNet [14] and ScanNet200 [43] datasets. nuScenes [4] is a comprehensive real-world dataset for autonomous driving research, capturing diverse urban driving scenarios from Boston and Singapore. To increase the spatial density, we aggregate LiDAR points over a 0.5-second interval, focusing on the dataset's LiDAR segmentation benchmark with 16 manually annotated categories. Given the homogeneity often found in autonomous driving scenarios, we also assess 3D-AVS on ScanNet [14] and ScanNet200 [43]. The ScanNet dataset is an indoor dataset with 20 annotated classes. ScanNet200 updates the annotations of ScanNet with more and finer-grained categories, i.e., 200 categories, while keeping the input point clouds unchanged. Due to space constraints, we refer to the supplementary material for implementation details, such as details on the image captioner, segmenter and SMAP.

Table 2. IoU comparison on nuScenes (NUS) [4], ScanNet (SN) [14] and ScanNet200 (SN200) [43]. We employ LAVE [70] to map auto-classes from an Unknown Vocabulary (UV) to the official categories.

| Method | Unknown Vocabulary | Label Set | NUS [4] | SN [14] | SN200 [43] |
| --- | --- | --- | --- | --- | --- |
| CLIP2Scene [8] | ✗ | Official | 20.8 | 25.1 | - |
| ConceptFusion [21] | ✗ | Official | - | 33.3 | 8.8 |
| OpenMask3D [45] | ✗ | Official | - | 34.0 | 10.3 |
| HICL [22] | ✗ | Official | 26.8 | 33.5 | - |
| AdaCo [69] | ✗ | Official | 31.2 | - | - |
| CNS [7] | ✗ | Official | 33.5 | 26.8 | - |
| OpenScene [38] | ✗ | Official | 30.1 | 47.0 | 11.7 |
| Diff2Scene [67] | ✗ | Official | - | 48.6 | 14.2 |
| 3D-AVS (Ours) | ✓ | I+L | 36.2 | 40.5 | 14.6 |

# 5.2. Label Set Comparison

We compare the generated label set with the fixed, human-defined vocabulary classes in Tab. 1. OpenScene [38] manually creates a more fine-grained vocabulary of 43 categories for the nuScenes [4] dataset (originally 16 categories), boosting TPSS on the dataset from 7.39 to 8.70. Although the performance gain is impressive, Table 1 demonstrates that 3D-AVS-generated labels are more semantically consistent with point clouds than manually defined labels, as 3D-AVS outperforms the predefined categories on both the nuScenes [4] and ScanNet [14] datasets. Additionally, Table 1 demonstrates that combining text generation from both camera and LiDAR inputs, as done in 3D-AVS, improves text-point semantic similarity. This advantage stems from 3D-AVS's ability to adapt to scenes where one modality struggles. For instance, the image captioner often faces challenges in night scenes due to limited color information, while the point captioner continues to accurately describe relevant objects. This is further reflected in Fig. 5, which shows that the point captioner proves especially useful in visually challenging scenes where the image captioner falls short.

# 5.3. Segmentation Comparison

For quantitative comparison, we employ LAVE [70] to map all generated novel categories back to predefined categories. Next, we calculate segmentation metrics, namely mean IoU (mIoU), on the validation sets of nuScenes [4], ScanNet [14], and ScanNet200 [43].
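As a reference for how this metric is computed, here is a minimal per-class IoU sketch. It is a simplified stand-in for the standard protocol, not the exact evaluation code (benchmarks differ in how they handle ignored labels and absent classes):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union over classes present in pred or gt.

    pred, gt: (N,) integer class labels per LiDAR point.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                 # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with `pred = [0, 0, 1, 1]` and `gt = [0, 1, 1, 1]`, class 0 scores 1/2 and class 1 scores 2/3, giving an mIoU of 7/12.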
![](images/3cced8f675e3a1b7c214232c306ea9578edbfd3bb3c2001f0c2c1add86767c7a.jpg)

![](images/d7deeb669020543c8cf72c3a7fbc592a3952337fb5016e510bc352a80998abf0.jpg)
(a) Reference images.
(b) Pre-defined vocabulary.

![](images/9ed8441d07226301c9c2677d736427c61e15ff606e1b72a48778e04585e2d8a6.jpg)
(c) 3D-AVS (Ours)
Figure 6. Qualitative comparison between inputting the pre-defined vocabulary and the 3D-AVS-generated vocabulary to the OpenScene [38] segmenter. The (a) six-view images are presented for better scene understanding. While the general and ambiguous pre-defined vocabulary leads to large-area errors (b), 3D-AVS segments regions with precise class names, e.g. plant (blue box), gate (green box), road, sidewalk, staircase, building and glass door (bottom-up in red box). These points are annotated as vegetation, drivable surface, sidewalk and manmade in the original dataset (not presented here) but are misclassified as sidewalk and barrier in (b).

Note that 3D-AVS does not have any access to the predefined categories during testing, which makes the segmentation task much harder.

Outdoor dataset. Table 2 shows that 3D-AVS generates better segmentation results on nuScenes, confirming the effectiveness of 3D-AVS's open-ended recognition capabilities. The segmentation performance mainly benefits from automatically generated categories for the ambiguous nuScenes categories, such as driveable surface, terrain, and manmade, achieving mIoU of 68.2, 41.4, and 55.4, respectively, substantially outperforming OpenScene [38] (see details in the supplementary material). Such an increase is expected, as 3D-AVS is able to generate much more specific names for these overly general categories, which can easily introduce noise. Figure 6 highlights some of these generated categories, such as manmade being correctly recognized as staircase, building and glass door.

Indoor datasets. Table 2 shows that 3D-AVS achieves a lower mIoU on ScanNet [14] compared to using a fixed vocabulary.
This is likely due to the extensive range and variety of objects, where the generated labels must be mapped to a small and coarse-grained set of 20 dataset categories. The state-of-the-art (SOTA) performance on ScanNet200 [43] further supports this argument. Notably, the predictions of 3D-AVS remain identical on ScanNet and ScanNet200, as the input data are the same; the only difference lies in the evaluation vocabulary: mapping to 20 coarse categories in ScanNet versus 200 fine-grained categories in ScanNet200. This shift in evaluation granularity introduces a more challenging task while allowing for a more faithful and detailed assessment of segmentation performance. 3D-AVS achieves state-of-the-art results on ScanNet200, underscoring its effectiveness in open-ended 3D segmentation tasks.

# 5.4. Ablation Study

Ablation studies are conducted on the image captioner, point captioner and LAVE mapping to validate our design choices and hyperparameters. The corresponding results are provided in the supplementary material.

# 6. Conclusion

In this work, we presented 3D-AVS, the first method for auto-vocabulary LiDAR point segmentation, eliminating the need for human-defined target classes. In suboptimal image captioning conditions, our point captioner can capture missing semantics based on geometric information. To assess the quality of the generated vocabularies in relation to segmentations, we further proposed the TPSS metric. Our experiments show that our model's segmentations are semantically more aligned with the data than its annotations and achieve competitive masking accuracy. We believe 3D-AVS advances scalable open-ended learning for LiDAR point segmentation without a human in the loop.

# 7. Acknowledgements

This work was financially supported by TomTom, the University of Amsterdam and the allowance of Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.
Fatemeh Karimi Nejadasl was financed by the University of Amsterdam Data Science Centre. This work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-7940.

# References

[1] Llama Team at Meta. Llama 3: Advancing open-source large language models. arXiv preprint arXiv:2407.21783, 2024. 13, 14
[2] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyril Stachniss, and Jurgen Gall. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In ICCV, 2019. 1
[3] Shubhankar Borse, Ying Wang, Yizhe Zhang, and Fatih Porikli. Inverseform: A loss function for structured boundary-aware segmentation. In CVPR, 2021. 1
[4] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In CVPR, 2020. 1, 2, 7, 12, 13, 14
[5] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021. 2
[6] Dave Zhenyu Chen, Ronghang Hu, Xinlei Chen, Matthias Nießner, and Angel X. Chang. Unit3d: A unified transformer for 3d dense captioning and visual grounding. In ICCV, 2023. 3
[7] Runnan Chen, Youquan Liu, Lingdong Kong, Nenglun Chen, Xinge Zhu, Yuexin Ma, Tongliang Liu, and Wenping Wang. Towards label-free scene understanding by vision foundation models. In NeurIPS, 2023. 7
[8] Runnan Chen, Youquan Liu, Lingdong Kong, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao, and Wenping Wang. Clip2scene: Towards label-efficient 3d scene understanding by clip. In CVPR, 2023. 2, 7
[9] Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, and Tao Chen. LL3DA: Visual interactive instruction tuning for omni-3d understanding, reasoning, and planning. In CVPR, 2024.
3
[10] Yilun Chen, Shuai Yang, Haifeng Huang, Tai Wang, Ruiyuan Lyu, Runsen Xu, Dahua Lin, and Jiangmiao Pang. Grounded 3d-llm with referent tokens. arXiv preprint arXiv:2405.10370, 2024. 3
[11] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2023. 3
[12] Seokju Cho, Heeseong Shin, Sunghwan Hong, Anurag Arnab, Paul Hongsuck Seo, and Seungryong Kim. Cat-seg: Cost aggregation for open-vocabulary semantic segmentation. In CVPR, 2024. 3
[13] Alessandro Conti, Enrico Fini, Massimiliano Mancini, Paolo Rota, Yiming Wang, and Elisa Ricci. Vocabulary-free image classification and semantic segmentation. arXiv preprint arXiv:2404.10864, 2024. 2
[14] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. 2, 7, 8, 12, 13, 14, 15
[15] Golnaz Ghiasi, Xiuye Gu, Yin Cui, and Tsung-Yi Lin. Scaling open-vocabulary image segmentation with image-level labels. In ECCV, 2022. 3, 12
[16] Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, and Pheng-Ann Heng. Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding, generation, and instruction following. arXiv preprint arXiv:2309.00615, 2023. 3
[17] Georg Hess, Adam Tonderski, Christoffer Petersson, Kalle Åström, and Lennart Svensson. Lidarclip or: How i learned to talk to point clouds. In WACV, 2024. 3, 4, 12
[18] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. In NeurIPS, 2023. 3
[19] Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python, 2020.
https://github.com/explosion/spaCy. 4
[20] Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, and Zhou Zhao. Chat-3d v2: Bridging 3d scene and large language models with object identifiers. arXiv preprint arXiv:2312.08168, 2023. 3
[21] Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, Joshua B. Tenenbaum, Celso Miguel de Melo, Madhava Krishna, Liam Paull, Florian Shkurti, and Antonio Torralba. Conceptfusion: Open-set multimodal 3d mapping. In RSS, 2023. 7
[22] Xin Kang, Lei Chu, Jiahao Li, Xuejin Chen, and Yan Lu. Hierarchical intra-modal correlation learning for label-free 3d semantic segmentation. In CVPR, 2024. 7
[23] Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. Babytalk: Understanding and generating simple image descriptions. TPAMI, 2013. 3
[24] Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. PointPillars: Fast encoders for object detection from point clouds. In CVPR, 2019. 1
[25] Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and Rene Ranftl. Language-driven semantic segmentation. In ICLR, 2022. 3
[26] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022. 2, 3, 12
[27] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In ECCV, 2022. 1
[28] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. 3
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. 12
[30] Tiange Luo, Chris Rockwell, Honglak Lee, and Justin Johnson.
Scalable 3d captioning with pretrained models. In NeurIPS, 2023. 3
[31] Jiageng Mao, Minzhe Niu, Chenhan Jiang, Hanxue Liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Jie Yu, Hang Xu, and Chunjing Xu. One million scenes for autonomous driving: Once dataset. In NeurIPS, 2021. 1
[32] Guofeng Mei, Luigi Riz, Yiming Wang, and Fabio Poiesi. Vocabulary-free 3d instance segmentation with vision and language assistant. In 3DV, 2025. 2
[33] Purnendu Mishra and Kishor Sarawadekar. Polynomial learning rate policy with warm restart for deep neural network. In TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON), pages 2087-2092, 2019. 12
[34] Ron Mokady, Amir Hertz, and Amit H. Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 2, 3, 12
[35] Ron Mokady, Amir Hertz, and Amit H. Bermano. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021. 3
[36] Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, and Dragomir Anguelov. Unsupervised 3d perception with 2d vision-language distillation for autonomous driving. In ICCV, 2023. 2, 3
[37] OpenAI. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. 14
[38] Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi, Marc Pollefeys, and Thomas Funkhouser. Openscene: 3d scene understanding with open vocabularies. In CVPR, 2023. 2, 3, 7, 8, 12, 13
[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021. 1, 2, 3, 4, 6, 12
[40] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In ICML, 2021. 2
[41] Nils Reimers and Iryna Gurevych.
Sentence-bert: Sentence embeddings using siamese bert-networks. In EMNLP, 2019. 7, 13, 14
[42] Pitchaporn Rewatbowornwong, Nattanat Chathee, Ekapol Chuangsuwanich, and Supasorn Suwajanakorn. Zero-guidance segmentation using zero segment labels. In ICCV, 2023. 2
[43] David Rozenberszki, Or Litany, and Angela Dai. Language-grounded indoor 3d semantic segmentation in the wild. In ECCV, 2022. 2, 7, 8, 13
[44] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Yu Zhang, Jonathon Shlens, Zhifeng Chen, and Dragomir Anguelov. Scalability in perception for autonomous driving: Waymo open dataset. In CVPR, 2020. 1
[45] Ayca Takmaz, Elisabetta Fedele, Robert W. Sumner, Marc Pollefeys, Federico Tombari, and Francis Engelmann. Openmask3d: Open-vocabulary 3d instance segmentation. In NeurIPS, 2023. 7
[46] Andrew Tao, Karan Sapra, and Bryan Catanzaro. Hierarchical multi-scale attention for semantic segmentation. arXiv preprint arXiv:2005.10821, 2020. 1
[47] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 3
[48] Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang, and Zhou Zhao. Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes. arXiv preprint arXiv:2308.08769, 2023. 3
[49] Jianzong Wu, Xiangtai Li, Shilin Xu, Haobo Yuan, Henghui Ding, Yibo Yang, Xia Li, Jiangning Zhang, Yunhai Tong, Xudong Jiang, Bernard Ghanem, and Dacheng Tao. Towards open vocabulary learning: A survey. TPAMI, 2024. 2
[50] Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, and Hengshuang Zhao. Point transformer v3: Simpler, faster, stronger. In CVPR, 2024.
5, 13
[51] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In EMNLP, 2021. 2
[52] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. 3
[53] Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. Side adapter network for open-vocabulary semantic segmentation. In CVPR, 2023. 3
[54] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In ECCV, 2024. 3
[55] Le Xue, Mingfei Gao, Chen Xing, Roberto Martín-Martín, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, and Silvio Savarese. Ulip: Learning a unified representation of language, images, and point clouds for 3d understanding. In CVPR, 2023. 2, 3
[56] Le Xue, Manli Shu, Anas Awadalla, Jun Wang, An Yan, Senthil Purushwalkam, Honglu Zhou, Viraj Prabhu, Yutong Dai, Michael S Ryoo, Shrikant Kendre, Jieyu Zhang, Can Qin, Shu Zhang, Chia-Chih Chen, Ning Yu, Juntao Tan, Tulika Manoj Awalgaonkar, Shelby Heinecke, Huan Wang, Yejin Choi, Ludwig Schmidt, Zeyuan Chen, Silvio Savarese, Juan Carlos Niebles, Caiming Xiong, and Ran Xu. xgen-mm (blip-3): A family of open large multimodal models. arXiv preprint, 2024. 3, 12
[57] Le Xue, Ning Yu, Shu Zhang, Junnan Li, Roberto Martin-Martin, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, and Silvio Savarese. Ulip-2: Towards scalable multimodal pre-training for 3d understanding. In CVPR, 2024. 3
[58] Apurwa Yadav, Aarshil Patel, and Manan Shah. A comprehensive review on resolving ambiguities in natural language processing. AI Open, 2:85–92, 2021. 6
[59] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection.
Sensors, 18(10):3337, 2018. 1
[60] Benjamin Z. Yao, Xiong Yang, Liang Lin, Mun Wai Lee, and Song-Chun Zhu. I2t: Image parsing to text description. Proceedings of the IEEE, 2010. 3
[61] Tianwei Yin, Xingyi Zhou, and Philipp Krähenbühl. Center-based 3d object detection and tracking. In CVPR, 2021. 1
[62] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In CVPR, 2016. 3
[63] Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen. Convolutions die hard: Open-vocabulary segmentation with single frozen convolutional clip. In NeurIPS, 2023. 3
[64] Yihan Zeng, Chenhan Jiang, Jiageng Mao, Jianhua Han, Chaoqiang Ye, Qingqiu Huang, Dit-Yan Yeung, Zhen Yang, Xiaodan Liang, and Hang Xu. CLIP$^2$: Contrastive language-image-point pretraining from real-world point cloud data. In CVPR, 2023. 2
[65] Youcai Zhang, Xinyu Huang, Jinyu Ma, Zhaoyang Li, Zhaochuan Luo, Yanchun Xie, Yuzhuo Qin, Tong Luo, Yaqian Li, Shilong Liu, et al. Recognize anything: A strong image tagging model. arXiv preprint arXiv:2306.03514, 2023. 3, 12
[66] Xingcheng Zhou, Mingyu Liu, Bare Luka Zagar, Ekim Yurtsever, and Alois C. Knoll. Vision language models in autonomous driving and intelligent transportation systems. arXiv preprint arXiv:2310.14414, 2023. 2
[67] Xiaoyu Zhu, Hao Zhou, Pengfei Xing, Long Zhao, Hao Xu, Junwei Liang, Alexander Hauptmann, Ting Liu, and Andrew Gallagher. Open-vocabulary 3d semantic segmentation with text-to-image diffusion models. In ECCV, 2024. 7
[68] Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pre-trained transformer for 3d vision and text alignment. In ICCV, 2023. 3
[69] Pufan Zou, Shijia Zhao, Weijie Huang, Qiming Xia, Chenglu Wen, Wei Li, and Cheng Wang. Adaco: Overcoming visual foundation model noise in 3d semantic segmentation via adaptive label correction. In AAAI, 2025. 7
[70] Osman Ülger, Maksymilian Kulicki, Yuki Asano, and Martin R. Oswald.
Auto-vocabulary semantic segmentation. arXiv preprint arXiv:2312.04539, 2024. 2, 3, 6, 7, 13, 14 \ No newline at end of file diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/images.zip b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..849b4551e23b8b351f7d6a0035a55a100ec942bd --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2836a211e89efdf99eb07ffa0dd94a862ddf98825658c566c77ac8997ede0ca +size 496018 diff --git a/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/layout.json b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..81f198bfa49d85208995f60effad188d247793a7 --- /dev/null +++ b/CVPR/2025/3D-AVS_ LiDAR-based 3D Auto-Vocabulary Segmentation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:693f79f4faf078d3aa7508358ad584b81856e2be204fb7976f89a25655c7031d +size 382257 diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_content_list.json b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..15efebe4faae6793f899b14347b94967ec184c64 --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a4c9e3bc1dd2cdd8d416c48f97db31fb495650647bc43aa7a5cfb708fa7a6d0 +size 89470 diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better 
Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_model.json b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c8b7c160030ded3297326c7c7ccfa4c1be065e7d --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ee7d714557d057d9f78e4858e4c4bb2fba1a7da5fa40df5640d6484521d4d6b +size 116871 diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_origin.pdf b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4ac235a8bb835a3a5ef6d414d79f9a9fc7708862 --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/85529eff-9044-4d08-8d4c-7fd264bba320_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10ba83ac81e3091446ae27d701191b1e10f49fd334d3b8b4314c14c8e0ce64e0 +size 1626320 diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/full.md b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c6ad977e8c8899b10771af3cfbbae82546c138a9 --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/full.md @@ -0,0 +1,332 @@ +# 3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination + +Jianing 
Yang*, Xuweiyi Chen*, Nikhil Madaan†, Madhavan Iyengar, Shengyi Qian*, David F. Fouhey*, Joyce Chai

$\clubsuit$ University of Michigan $\clubsuit$ New York University

https://3d-grand.github.io/

![](images/1273fa7c9672d386b1d0cbe0f30e7db29093dbc6e9ada64e5e1061d5fa33f3fa.jpg)

![](images/22051d7413e199b9d0a8255fd4530ffada890d4ee2b04d3c16cb66da2bee21d8.jpg)

![](images/da37a6a815f83da922d8ca09d9e2191c53ecdadd67f4dd74644d1823a7eacbeb.jpg)

(Teaser panels: 40K 3D rooms; 6.2M densely-grounded 3D-text pairs; 3D-POPE existence questions such as "Are there any plants in this room?" (A: Yes) and "Have you noticed any refrigerators in the room?" (A: No); charts of grounding accuracy and hallucination rate.)

Figure 1. We introduce 3D-GRAND, a large-scale, densely grounded 3D-text dataset, and 3D-POPE, a 3D-LLM hallucination benchmark. Training on 3D-GRAND improves grounding accuracy and reduces hallucinations.

# Abstract

The integration of language and 3D perception is crucial for embodied agents and robots that comprehend and interact with the physical world. While large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, their adaptation to 3D environments (3D-LLMs) remains in its early stages. A primary challenge is the lack of large-scale datasets with dense grounding between language and 3D scenes. We introduce 3D-GRAND, a pioneering large-scale dataset comprising 40,087 household scenes paired with 6.2 million densely-grounded scene-language instructions. Our results show that instruction tuning with 3D-GRAND significantly enhances grounding capabilities and reduces hallucinations in 3D-LLMs. As part of our contributions, we propose a comprehensive benchmark, 3D-POPE, to systematically evaluate hallucination in 3D-LLMs, enabling fair comparisons of models.
Our experiments highlight a scaling effect between dataset size and 3D-LLM performance, emphasizing the importance of large-scale 3D-text datasets for embodied AI research. Our results also demonstrate early signals for effective sim-to-real transfer, indicating that models trained on large synthetic data can perform well on real-world 3D scans. Through 3D-GRAND and 3D-POPE, we aim to equip the embodied AI community with resources and insights that lead to more reliable and better-grounded 3D-LLMs.

# 1. Introduction

Embodied Artificial Intelligence (EAI) represents a frontier in robotics and machine learning. In EAI, the integration of perception, language, and action within physical spaces is crucial for developing intelligent systems capable of meaningfully navigating and interacting with their environments. Central to this vision is the concept of grounding language in the physical world [6, 9]. Grounding connects abstract linguistic constructs to concrete objects in three-dimensional space, thereby enabling robots and intelligent agents to effectively understand human languages and manipulate their surroundings.

Recent advances in Large Language Models (LLMs) have greatly benefited Embodied AI. LLMs demonstrate exceptional capabilities in understanding language instructions [52, 66], perceiving the environment [3, 41, 46, 74, 83], and planning detailed actions [7, 35]. The primary inputs to LLMs, other than language, have been 2D images, categorizing these models as 2D-LLMs. The significant advancements in 2D-LLMs can be mainly attributed to their training on extensive vision-language datasets [60, 84]: with billions of image-text pairs, these datasets have been instrumental in enhancing the models' understanding of visual content and its contextual relevance to text, providing the foundational data needed to train models that excel at integrating vision and language.
Despite some progress in equipping LLMs to understand 3D scenes (3D-LLMs) [11, 28, 30, 31, 55, 68, 85], these models are limited by the scarcity of paired 3D scenes and text. In this work, we introduce 3D-GRAND, a new million-scale dataset designed for densely-grounded 3D instruction tuning. Compared to SceneVerse [37], 3D-GRAND has much denser annotations of objects grounded to linguistic entities, providing a foundation for the language grounding that is critical for robotic tasks.

Research on 2D-LLMs indicates that grounding language to 2D contexts mitigates hallucination in language models [5, 40, 53, 56, 76, 80], thus enhancing the reliability and interpretability of generated responses. While 2D grounding has been extensively explored, extending these principles to 3D environments remains underdeveloped. This situation raises two critical questions: (1) To what extent does hallucination occur in 3D-LLMs? (2) Can densely-grounded data mitigate hallucination for 3D-LLMs? These questions underscore the community's need for an evaluation benchmark specifically designed for 3D-LLMs and for the construction of a large-scale, 3D-grounded dataset.

To quantify hallucination in 3D-LLMs, this paper further introduces 3D-POPE (3D Polling-based Object Probing Evaluation). 3D-POPE provides a comprehensive and standardized protocol for evaluating hallucination that enables systematic assessment and facilitates fair comparisons across 3D-LLMs, enhancing our understanding of model capabilities regarding object hallucination. Specifically, we pose existence questions to 3D-LLMs and evaluate their responses, as shown in Fig. 1.

Our 3D-GRAND includes 40K household scenes paired with 6.2 million scene-language instructions, featuring dense phrase-to-object grounding. We conduct rigorous human evaluations to ensure the dataset's quality.
Our experiments with models trained on 3D-GRAND highlight the dataset's effectiveness in enhancing grounding and reducing hallucination for 3D-LLMs. We highlight the effectiveness of incorporating 3D-GRAND in Fig. 1 and introduce each task category with examples in Fig. 2.

The contributions of this work include: (1) 3D-GRAND, the first million-scale, densely-grounded 3D-text dataset for grounded 3D instruction tuning. 3D-GRAND includes 40K household scenes paired with 6.2M densely-grounded scene-language instructions. (2) 3D-POPE, a suite of benchmarks and metrics that systematically evaluate hallucination, enabling fair comparisons of future 3D-LLMs in terms of object hallucination. (3) Findings on hallucination, grounding, and scaling that can guide future research, including: (a) training with 3D-GRAND significantly reduces hallucinations, particularly when the data is densely grounded; (b) densely grounded instruction tuning significantly enhances the grounding capabilities of 3D-LLMs; (c) scaling densely grounded data consistently improves grounding accuracy and reduces hallucination; and (d) models transfer from sim to real, providing an early signal for a low-cost and sustainable path of scaling synthetic 3D data to help on real-world tasks.

# 2. Related Work

Injecting 3D into LLMs. Recent advances in large language models (LLMs) have inspired work extending their capabilities to 3D environments, leading to the development of 3D-LLMs [11, 55, 74, 85]. Notable works include 3D-LLM [28], which integrates 3D point clouds and features into LLMs to enable tasks like captioning, question answering, and navigation. LEO [31] excels as an embodied multi-modal generalist agent in perception, grounding, reasoning, planning, and action in 3D environments, showing the potential of 3D-LLMs in understanding and interacting with the physical world. The most relevant work is Chat-3Dv2 [30, 68], which grounds generated scene captions to objects in 3D scenes.
However, Chat-3Dv2's dataset is limited to one type of 3D-text task (scene captioning) and contains only 705 captions from a subset of ScanNet scenes. In 3D-GRAND, we expand this concept by diversifying the 3D-text tasks and increasing the data to million scale. Our results show promising data-scaling effects and sim-to-real transfer, paving the way for future large-scale 3D-LLM training.

Object Hallucination of VLMs. While 2D VLMs have achieved impressive performance, they are prone to hallucinating objects that do not exist in the provided images, a problem known as object hallucination [12, 16, 58]. Several methods have been suggested to mitigate object hallucination, such as integrating an external object detector [79], applying visually grounded instruction tuning [76, 80] or reinforcement learning [25, 63], performing iterative refinement [82], and adapting the decoding strategy [33]. To quantify and mitigate this issue, several benchmarks have been proposed. CHAIR (Caption Hallucination Assessment with Image Relevance) [58] measures the frequency of hallucinated objects in image captions by comparing the objects
| Dataset | Which part is grounded? | Densely grounded? | Language source | # 3D scenes | # Language pairs |
|---|---|---|---|---|---|
| ReferIt3D [2] | obj-refer | ✗ | Human, Template | 0.7K | 125K |
| ScanRefer [10] | obj-refer | ✗ | Human | 0.7K | 51K |
| Scan2Cap [13] | obj-refer | ✗ | Human | 0.7K | 51K |
| ScanEnts3D [1] | obj-refer | ✓ | Human | 0.7K | 84K |
| PhraseRefer [78] | obj-refer | ✓ | Human | 0.7K | 170K |
| ScanQA [4] | answer | ✗ | Human | 0.7K | 41K |
| SQA3D [48] | question | ✗ | Human | 0.65K | 33.4K |
| 3DVQA [23] | ✗ | ✗ | Template | 0.7K | 500K |
| CLEVR3D [72] | ✗ | ✗ | Template | 8.7K | 171K |
| 3DMV-VQA [27] | ✗ | ✗ | Template | 4.1K | 55K |
| EmbodiedScan [67] | ✗ | ✗ | Template | 3.4K | 970K |
| 3DMIT [44] | ✗ | ✗ | LLM | 0.7K | 75K |
| M3DBench [42] | obj-refer, question | ✗ | LLM | 0.7K | 327K |
| 3D-DenseOG [34] | scene | ✓ | Human | 0.7K | 51K |
| 3D-LLM [28] | obj-refer | ✗ | LLM | 0.9K | 200K |
| LL3DA [11] | question, answer | question | Template, LLM | 0.9K | 200K |
| Chat3D-v2 [30] | scene | ✓ | Human, LLM | 0.7K | 0.7K |
| 3D-VisTA [85] | question | ✗ | Template, LLM | 3K | 278K |
| LEO [31] | question | ✗ | LLM | 3K | 579K |
| SceneVerse [37] | obj-refer | ✗ | Template, LLM | 62K | 2.5M |
| 3D-GRAND | scene, obj-refer, question, answer | ✓ | Template, LLM | 40K | 6.2M |
mentioned against the ground-truth annotations. POPE (Polling-based Object Probing Evaluation) [43] assesses a VLM's ability to identify the presence or absence of objects through yes/no probing questions. However, these studies primarily focus on 2D image-text datasets like COCO [45]. In contrast, object hallucination in 3D-LLMs remains largely unexplored. Our work addresses this gap by introducing 3D-POPE, a comprehensive benchmark for evaluating object hallucination in 3D-LLMs. To the best of our knowledge, this is the first object-hallucination benchmark for 3D-LLMs.

Grounding Datasets for 3D-LLMs. In 2D, large-scale datasets with grounding information have been instrumental in vision-language research. Notable examples include RefCOCO [77], which provides referring expressions for objects in COCO images [45]. Additionally, 2D-LLMs [40, 53, 56, 71, 76] have been trained with densely-grounded web-crawled image-text pairs. In 3D, there is growing interest in creating datasets that pair 3D scenes with textual annotations [1, 13, 34, 78]. ScanRefer [10] pioneered this effort by contributing a dataset of ScanNet [15] scenes with referring expressions. Table 1 summarizes the efforts made by the community. However, these datasets have limited grounding annotations and often focus on a single task, such as referring-expression comprehension or visual question answering. In contrast, our proposed dataset, 3D-GRAND, stands out by providing 6.2 million densely-grounded scene-language instructions across a diverse set of 3D-text tasks and 40,087 household scenes. This enables a wide range of grounding tasks and facilitates the development of more reliable and better-grounded 3D-LLMs.

Of recent datasets, SceneVerse [37] is the most similar to ours: both are million-scale 3D grounding datasets. A key difference is that SceneVerse provides sparse grounding, while 3D-GRAND is densely grounded. For instance, SceneVerse grounds "This is a big cotton sofa.
It is between the window and the wooden table." to a single entity (e.g., [sofa-3]). Instead, 3D-GRAND grounds every noun phrase in the caption: "big cotton sofa", "it", "window", and "wooden table" each map to an entity.

To clarify the differences between approaches and datasets, we define grounding granularity as follows (an example is shown in Appendix A):

- Paragraph-level set-to-set grounding: many sentences in a long paragraph, each containing several object nouns, are linked to a set of 3D objects without clear associations from specific sentences or noun phrases to objects.
- Session-level many-to-one grounding: multiple sentences in one session, where each can describe several objects (targets and landmarks), are associated with one 3D object.
- Noun-level one-to-one grounding: each noun phrase in each sentence is explicitly matched with one 3D object.

Table 1. Comparison with existing 3D scene datasets with language annotations. 3D-GRAND is the largest language-grounded dataset.
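To make the noun-level one-to-one granularity concrete, the record below sketches how a densely grounded caption might be stored as a phrase-to-object map. The field names and object IDs are our own illustration, not the dataset's actual schema:

```python
# Hypothetical densely grounded annotation: every noun phrase in the
# caption is linked to exactly one object ID in the 3D scene
# (noun-level one-to-one grounding).
annotation = {
    "caption": "This is a big cotton sofa. It is between the window "
               "and the wooden table.",
    "grounding": {
        "big cotton sofa": "sofa-3",
        "it": "sofa-3",          # coreference resolves to the same object
        "window": "window-1",
        "wooden table": "table-2",
    },
}

# Sparse (session-level many-to-one) grounding, by contrast, keeps only
# the single target entity for the whole description:
sparse_grounding = {"target": "sofa-3"}

# Dense grounding links strictly more phrases than sparse grounding does.
assert len(annotation["grounding"]) > len(sparse_grounding)
```

Every landmark ("window", "wooden table") is grounded alongside the target, which is what enables the fine-grained supervision discussed above.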
| | Scene Caption | Object Reference | QA |
|---|---|---|---|
| SceneVerse | Paragraph-level set-to-set grounding | Session-level many-to-one grounding | No grounding |
| 3D-GRAND | Noun-level one-to-one grounding | Noun-level one-to-one grounding | Noun-level one-to-one grounding |
Table 2. Comparison of grounding granularity in SceneVerse and 3D-GRAND.

Additionally, the language annotations of 3D-GRAND are more trustworthy and of higher quality. Hallucination is one of the most common failure modes of LLMs [32, 43, 58]. In 3D-GRAND, we use a hallucination filter to check for and delete any annotations with hallucinated object IDs. This is not possible for SceneVerse
| Sampling | Model | Precision | Recall | F1 Score | Accuracy | Yes (%) |
|---|---|---|---|---|---|---|
| Random | Random Baseline | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 |
| Random | 3D-LLM [28] | 50.03 | 99.88 | 66.67 | 50.07 | 99.81 |
| Random | 3D-VisTA [85] | 50.12 | 53.58 | 51.79 | 49.66 | 53.95 |
| Random | LEO [31] | 51.95 | 77.65 | 62.25 | 52.91 | 74.73 |
| Random | Ours 0-shot (Grounding) | 93.34 | 84.25 | 88.56 | 89.12 | 45.13 |
| Popular | Random Baseline | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 |
| Popular | 3D-LLM [28] | 49.97 | 99.88 | 66.61 | 49.94 | 99.94 |
| Popular | 3D-VisTA [85] | 47.40 | 51.88 | 49.54 | 49.49 | 52.30 |
| Popular | LEO [31] | 48.30 | 77.65 | 59.55 | 47.27 | 80.38 |
| Popular | Ours 0-shot (Grounding) | 73.05 | 84.28 | 78.26 | 76.59 | 57.69 |
| Adversarial | Random Baseline | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 |
| Adversarial | 3D-LLM [28] | 49.97 | 99.88 | 66.61 | 49.94 | 99.94 |
| Adversarial | 3D-VisTA [85] | 48.28 | 54.39 | 51.15 | 51.14 | 52.99 |
| Adversarial | LEO [31] | 48.47 | 77.98 | 59.78 | 47.52 | 80.45 |
| Adversarial | Ours 0-shot (Grounding) | 69.86 | 84.21 | 76.37 | 73.95 | 60.26 |
Table 3. 3D-POPE benchmark results for evaluating hallucination in 3D language models. Random Baseline refers to a model that randomly predicts "yes" or "no" with 50% probability, given the 1:1 positive/negative sample ratio in the dataset.

since they have pure language output. 3D-GRAND is also quality-checked by humans.

# 3. 3D-POPE: A Benchmark for Evaluating Hallucination in 3D-LLMs

To systematically evaluate the hallucination behavior of 3D-LLMs, we introduce the 3D Polling-based Object Probing Evaluation (3D-POPE) benchmark. 3D-POPE is designed to assess a model's ability to accurately identify the presence or absence of objects in a given 3D scene.

Dataset. To facilitate the 3D-POPE benchmark, we curate a dedicated dataset from ScanNet [15], utilizing the semantic classes from ScanNet200 [59]. Specifically, we use the ScanNet validation set as the foundation for evaluating 3D-LLMs on the 3D-POPE benchmark.

Benchmark design. 3D-POPE consists of triples comprising a 3D scene, a posed question, and a binary answer ("Yes" or "No") indicating the presence or absence of an object (Fig. 1, middle). For balance, we maintain a 1:1 ratio of existent to nonexistent objects when constructing these triples. For negative samples (i.e., nonexistent objects), we use three distinct sampling strategies designed to challenge the model's robustness and assess its susceptibility to different levels of object hallucination:

- Random Sampling: nonexistent objects are randomly selected from the set of objects not present in the 3D scene.
- Popular Sampling: we select the top-$k$ most frequent objects not present in the 3D scene, where $k$ equals the number of objects currently in the scene.
- Adversarial Sampling: for each positively identified object in the scene, we rank objects that are not present and have not been used as negative samples by their frequency of co-occurrence with the positive object in the training dataset.
The highest-ranking co-occurring object is selected as the adversarial sample. This selection differs from the original POPE [43] in order to avoid adversarial samples mirroring popular samples, as indoor scenes often contain similar objects.

Metrics. 3D-POPE reports Precision, Recall, F1 Score, Accuracy, and Yes (%). Precision and Recall assess the model's ability to correctly affirm the presence of objects and to identify their absence, respectively. Precision is particularly important, as it reflects the proportion of nonexistent objects affirmed by the 3D-LLM. The F1 Score, combining Precision and Recall, balances the two and serves as the primary evaluation metric. Accuracy measures the proportion of correctly answered questions across both "Yes" and "No" cases. Additionally, the Yes (%) metric reports the proportion of "Yes" responses, revealing the model's tendency toward object hallucination.

Leaderboard. We establish a public leaderboard for the 3D-POPE benchmark, allowing researchers to submit their 3D-LLM results and compare performance against other state-of-the-art models. The leaderboard reports the evaluation metrics for each model under the three sampling strategies, providing a transparent and standardized way to assess the hallucination behavior of 3D-LLMs.

# 4. 3D-GRAND: 3D Ground Anything Dataset

We introduce 3D-GRAND, a large-scale, densely-grounded 3D-text dataset designed for grounded 3D instruction tuning. We describe the data collection process, dataset statistics, and the unique features that make 3D-GRAND a valuable resource for advancing research in 3D-LLMs.

3D scene collection. The majority of 3D-text research is currently based on ScanNet scenes collected from real camera scans, which are limited in scale. However, recent advancements have produced numerous synthetic data generation pipelines [18, 19, 21, 22, 26, 38, 39, 49, 50, 54, 62, 64, 75].
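As an aside, the 3D-POPE metrics defined in Section 3 reduce to standard binary-classification quantities over yes/no answers. The helper below is our own minimal sketch, not the benchmark's reference implementation:

```python
def pope_metrics(preds, labels):
    """Compute 3D-POPE-style metrics from binary yes/no answers.

    preds, labels: lists of booleans (True = "Yes", i.e. object exists).
    """
    tp = sum(p and l for p, l in zip(preds, labels))          # correct "Yes"
    fp = sum(p and not l for p, l in zip(preds, labels))      # hallucinated "Yes"
    fn = sum(not p and l for p, l in zip(preds, labels))      # missed object
    tn = sum(not p and not l for p, l in zip(preds, labels))  # correct "No"
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "accuracy": (tp + tn) / len(labels),
        "yes_rate": (tp + fp) / len(labels),  # proportion of "Yes" answers
    }

# A model that always answers "Yes" on a balanced (1:1) split gets
# 50% precision and 100% recall -- the failure mode Table 3 shows for 3D-LLM.
m = pope_metrics([True] * 4, [True, False, True, False])
assert m["precision"] == 0.5 and m["recall"] == 1.0 and m["yes_rate"] == 1.0
```

This is why precision, rather than recall, is the telltale metric for object hallucination.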
Given the scalability of these synthetic data generation pipelines, we explore the potential of using synthetic 3D scenes to enhance 3D-text understanding. Synthetic data offers significant advantages, such as lower cost and greater accessibility, making it an attractive alternative. If models trained on simulated 3D-text data can effectively transfer to real-world 3D scenes, the research community stands to benefit immensely.

Thus, we curate a diverse collection of 40,087 high-quality 3D indoor scenes from the 3D-FRONT [24] and Structured3D [81] datasets, chosen for their large quantities of synthetic indoor scenes with professionally designed layouts. The collection includes various room types, such as living rooms, bedrooms, kitchens, office spaces, and conference rooms. We process these 3D scenes to generate per-room 3D point clouds. Details on point cloud rendering and cleaning are provided in the Appendix.

Densely-grounded text annotation. Densely-grounded text means that every noun phrase mentioning an object is associated with a 3D object in the scene, as illustrated in Figure 2. Such annotations are difficult to obtain. Early work such as ScanEnts3D [1] relied on hiring professional human annotators. The authors
This finding is in accordance with recent studies [20, 65] reporting LLMs can be human-level annotators. The accuracy of LLM-annotation provides one motivation for considering LLMs as densely grounding annotation tool. + +The second, and perhaps more critical, motivation is the scalability of annotation. While we can potentially scale up 3D scenes using synthetic data generation pipelines, manual annotation is both costly and time-consuming, especially for complex tasks like densely grounding annotation. To put the time/money cost in perspective, for the data we annotated in this paper, we estimate that obtaining the same annotations with human annotator would cost at least $539,000 and require 5.76 years (no eat, no sleep) worth of work from a professional annotator (earning minimum wage of$ 10.67 per hour). In contrast, using LLMs (GPT4 [52]), we achieve the same results for $3,030 within 2 days, representing a 178x reduction in cost and a 1051x reduction in time. At the time of writing, the cost and time further decreases by 50% to $1,500 and 1 day, with the introduction of GPT-4o [51]. + +As previously discussed, using humans to annotate 3D scenes can be an exhaustive process. Meanwhile, 2D-LLMs demonstrate remarkable capabilities in understanding visual inputs and generating language, making them well-suited for creating high-quality, grounded language annotations. However, due to the hallucination issues and data issues in 2D-LLMs, aggregating information across images, even those originating from the same scene, is not feasible yet. + +In contrast, Large Language Models (LLMs) excel at understanding structural data and generating diverse and fluent language [52]. They have demonstrated capabilities in spatial reasoning [8], solving both elementary and sophisticated math problems [36, 70]. To address the limitations of 2D-LLMs when annotate 3D scenes, we leverage the strengths of LLMs. 
By integrating detailed, accurate information into a reliable scene graph, we provide LLMs with the data they need to reason effectively and generate precise annotations.

The key steps of our pipeline for obtaining densely-grounded annotations of any synthetic 3D scene are:

- 3D Model to 2D Image. In the 3D-FRONT dataset, each object is sourced from 3D-FUTURE [24], which provides a ground-truth 2D image for each object. For the Structured3D dataset, individual images for each object are not available, so we use set-of-mark prompting [73], circling each object to be annotated in red in the images.
- 2D Image to Attributes. We use GPT-4V to generate detailed language annotations for each 2D object image, including attributes such as name, color, finish, and texture. The naming is open-vocabulary rather than class-agnostic.
- List of Attributes to Scene Graph. We structure each object's annotations into a JSON-based scene graph that captures the relationships and attributes of objects within the scene. Because this scene graph is derived from synthetic data, we can guarantee its correctness.
- Scene Graph to Generated Annotations. From the scene graph, we produce 3D-Grounded Object Reference, 3D-Grounded Scene Description, and 3D-Grounded QA annotations using GPT-4 [52] with various prompts, which we show in the appendix.
- Generated Annotations to Processed Annotations. After acquiring raw annotations, we apply hallucination filters and template augmentation for the phrase tags to remove low-quality annotations and augment the generated ones.

![](images/cc940228c40d5899b9a9c2432c0f782eb6c34fbdd1392202058894a561f58be4.jpg)
Figure 3. 3D-GRAND Data Curation Pipeline.

| Annotation Source | Error Rate |
|---|---|
| ScanEnts3D (AMT) | 16% |
| ScanEnts3D (Professional) | <5% |
| 3D-GRAND (LLM, GPT-4) | 5.6-8.2% |

Table 4. Error-rate comparison between ScanEnts3D and 3D-GRAND annotations. (AMT = Amazon Mechanical Turk)

With this pipeline, we generate a diverse range of 3D vision-language understanding tasks, as shown in Figure 2. At a high level, these tasks fall into:

- 3D-Grounded Object Reference: given a 3D scene and an object of interest, the 3D-LLM must generate a description that uniquely identifies the target object. The description includes text and grounding information, not only for the target object but also for any landmark objects mentioned. This task is conceptually similar to Visual Grounding, Scene-aware Object Captioning, and Dense Captioning in 2D vision-language research.
- 3D-Grounded Scene Description: given a 3D scene, the 3D-LLM generates a description that captures the salient aspects of the environment. The description includes both text and grounding information, linking the language to specific objects or regions in the scene.
- 3D-Grounded QA: given a 3D scene and a question about the environment, the 3D-LLM generates an answer that is grounded in the scene. Both the question and answer include text and grounding information, ensuring that the 3D-LLM's responses are contextually relevant and accurate.

Human quality check. Table 4 shows results from extensive human quality checks on 5,100 generated annotations, comparing the error rates of 3D-GRAND and previous datasets such as ScanEnts3D [1]. The results show that large lan
+ +Dataset highlights. 3D-GRAND has several features that distinguish it from existing 3D-language datasets: (1). Large-scale: With 40,087 scenes and 6.2 million annotations, 3D-GRAND is the largest 3D-language dataset to date, providing ample data for training and evaluating 3D-LLMs. (2). Dense grounding: Unlike recent million-scale datasets like SceneVerse, which lack grounded language annotations, each language annotation in 3D-GRAND is densely grounded to specific objects or regions within the 3D scenes, facilitating fine-grained language understanding and generation. (3). Diverse language tasks: 3D-GRAND supports many grounded language tasks, including object reference, spatial reasoning, and scene understanding, making it a comprehensive benchmark for evaluating 3D-LLMs. (4). High-quality annotations: We utilize a hallucination filter to mitigate hallucination of the language annotations in 3D-GRAND. They are also human-evaluated to ensure the quality. + +These unique features establish 3D-GRAND as a valuable resource for advancing research in 3D-LLMs and embodied AI. By providing a large-scale, densely-grounded 3D-text dataset, 3D-GRAND enables the development and evaluation of more capable and reliable 3D-LLMs that can effectively understand and interact with the physical world. + +# 5. Experiments + +We present our experimental setup, including baselines, datasets, and implementation details. We then report the results of our approach, denoted as 3D-GRAND on ScanRefer [10] and the 3D-POPE benchmarks, demonstrating the effectiveness in improving grounding and reducing halluci + +nation. Finally, we report an ablation study analyzing the impact of components of our model and training strategy. + +# 5.1. Experimental Setup + +Model. Our proposed model is based on Llama-2 [66]. 
The input is an object-centric context: a scene graph with each object's category, centroid (x, y, z), and extent (width, height, depth), along with the text instruction and user query. During training, we use ground-truth centroids and extents; for inference, we use bounding boxes predicted by Mask3D [61]. Examples of inputs/outputs and details of the model can be found in the supplementary material.

Baselines. We compare our 3D-GRAND model against the following baselines: 3D-LLM [28], LEO [31], and 3D-VisTA [85]. Each model, along with the specific checkpoint used to obtain the results, is documented in the appendix.

Datasets. We evaluate our model on 3D-POPE and ScanRefer. 3D-POPE is our newly introduced benchmark for evaluating object hallucination in 3D-LLMs, described in Section 3. For ScanRefer, we use the validation split, which contains 9,508 natural-language descriptions of 2,068 objects in 141 ScanNet [15] scenes.

Metrics. For the ScanRefer benchmark, we use the official evaluation metrics, Accuracy@0.25IoU and Accuracy@0.5IoU. For the 3D-POPE benchmark, we report accuracy, precision, recall, F1 score, and "Yes" rate under the three sampling strategies described in Section 3.

Implementation Details. The 3D-GRAND model is a LoRA-finetuned [29] Llama-2. We use DeepSpeed ZeRO-2 [57] and FlashAttention [17] to save GPU memory and speed up training. The model is trained in BF16 precision on 12 NVIDIA A40 GPUs with a combined batch size of 96 and a learning rate of 2e-4. We use AdamW [47] with a weight decay of 0.01 and a cosine learning-rate scheduler. We train the model for 10k steps, which takes $\approx 48$ hours.

# 5.2. Results on 3D-POPE

We first evaluate these approaches on 3D-POPE, with results in Table 3. 3D-LLM [28] produces a "yes" response to almost any question. 3D-VisTA [85] performs similarly to the random baseline.
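For concreteness, the object-centric context described in Sec. 5.1 might be serialized along the lines sketched below. The field names and textual format are our own guess; the authors' exact prompt is in their supplementary material:

```python
# Hypothetical object-centric context: each object contributes an ID,
# a category, a centroid (x, y, z), and an extent (width, height, depth).
objects = [
    {"id": "sofa-3", "category": "sofa",
     "centroid": (1.2, 0.4, 0.5), "extent": (2.0, 0.9, 0.8)},
    {"id": "table-2", "category": "table",
     "centroid": (2.5, 0.4, 0.4), "extent": (1.1, 1.1, 0.7)},
]

def serialize_scene(objects):
    """Flatten the scene graph into one line of text per object."""
    lines = []
    for o in objects:
        c, e = o["centroid"], o["extent"]
        lines.append(
            f"<{o['id']}> {o['category']} "
            f"centroid=({c[0]:.1f}, {c[1]:.1f}, {c[2]:.1f}) "
            f"extent=({e[0]:.1f}, {e[1]:.1f}, {e[2]:.1f})"
        )
    return "\n".join(lines)

context = serialize_scene(objects)
assert "<sofa-3> sofa" in context
assert "extent=(1.1, 1.1, 0.7)" in context
```

At training time such lines would come from ground-truth boxes; at inference time, from Mask3D proposals.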
LEO [31] tends to answer "yes" frequently, but its precision indicates an object hallucination rate similar to the random baseline. In our evaluation, the model trained on 3D-GRAND achieves markedly better performance, with 93.34% precision and 89.12% accuracy under random sampling. Our model struggles more on the harder Popular and Adversarial splits, which demonstrates the effectiveness and rigor of 3D-POPE as a benchmark. Moreover, we emphasize that our model never encountered ScanNet during training. More analysis of 3D hallucination can be found in the supplementary material.

# 5.3. Results on ScanRefer

We report results on ScanRefer in Table 5. There are a few important observations:
| Model | Generative 3D-LLM? | Never Seen ScanNet? | Acc@0.25 | Acc@0.5 |
|---|---|---|---|---|
| *Non-LLM based* | | | | |
| ScanRefer | ✗ | ✗ | 37.3 | 24.3 |
| MVT | ✗ | ✗ | 40.8 | 33.3 |
| 3DVG-Trans | ✗ | ✗ | 45.9 | 34.5 |
| ViL3DRel | ✗ | ✗ | 47.9 | 37.7 |
| M3DRef-CLIP | ✗ | ✗ | 51.9 | 44.7 |
| *Non-Generative 3D-LLMs* | | | | |
| 3D-VisTA (zero-shot) | ✗ | ✓ | 33.2 | 29.6 |
| SceneVerse (zero-shot) | ✗ | ✓ | 35.2 | 31.1 |
| *Generative 3D-LLMs* | | | | |
| 3D-LLM | ✓ | ✗ | 30.3 | - |
| LLM-Grounder | ✓ | ✓ | 17.1 | 5.3 |
| 3D-GRAND (Ours) | ✓ | ✓ | 38.0 | 27.4 |
Table 5. ScanRefer results for the visual grounding capability of 3D-LLMs. 3D-GRAND achieves the best zero-shot performance among 3D-LLMs, providing signals for sim-to-real transfer.

Our 3D-LLM trained on 3D-GRAND data achieves the best Acc@0.25 among all generative 3D-LLMs. Notably, our model surpasses the previous best-performing model, 3D-LLM, by 7.7% on Acc@0.25IoU. We emphasize that our model, unlike 3D-LLM, never saw ScanNet scenes during training (zero-shot) and is trained only on synthetic 3D scenes rather than real scans. These results therefore provide a promising early signal that sim-to-real transfer can be achieved via our densely-grounded, large-scale dataset.

Our generative 3D-LLM (one that a user can chat with) performs better than or on par with non-generative 3D-LLMs such as 3D-VisTA and SceneVerse. In the past, generative 3D-LLMs were usually significantly outperformed by non-generative ones, as the latter typically sacrifice the ability to chat in exchange for specialized model designs, such as producing scores for each object candidate; such designs are closer to traditional non-LLM-based specialized models. Here, we observe that the gap between the two modeling choices is closing with the help of large-scale, densely-grounded data like 3D-GRAND.

Our model is deliberately a primarily text-based model (Sec. 5.1) to demonstrate the effectiveness of the dataset: little visual information passes from the mask proposals to the LLM, in contrast to more sophisticated models in which 3D object embeddings better represent visual information. Thus 3D-GRAND has more potential to be unlocked in the future.

# 5.4. Ablation Study

To better understand the components of our 3D-LLM, we conduct an ablation study on ScanRefer and 3D-POPE.

Grounding tokens. Table 6 shows results of our model with different types of grounding methods. We also show results on 3D-POPE in Table 7.
In general, the model has a worse grounding performance and more hallucinations without grounding tokens. "Ground First" and "Ground Later" refer to whether the dense grounding (grounding every single object mentioned) of the object reference query happens + +
| Method | Det. | Unique Acc@0.25 | Unique Acc@0.5 | Multiple Acc@0.25 | Multiple Acc@0.5 | Overall Acc@0.25 | Overall Acc@0.5 |
|---|---|---|---|---|---|---|---|
| Best IoU (upper bound) | Mask3D (Top100) | 93.7 | 66.8 | 91.6 | 70.7 | 92.4 | 69.2 |
| Best IoU (upper bound) | Mask3D (Top40) | 81.2 | 58.7 | 80.7 | 62.4 | 80.9 | 61.0 |
| Non-grounded Model | Mask3D (Top40) | 51.8 | 33.1 | 21.3 | 17.9 | 34.2 | 24.3 |
| Grounded Model (ground later) | Mask3D (Top40) | 50.4 | 32.4 | 26.0 | 20.5 | 36.3 | 25.5 |
| Grounded Model (ground first) | Mask3D (Top40) | 54.4 | 36.4 | 26.0 | 20.8 | 38.0 | 27.4 |
| Best IoU (upper bound) | GT | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| Non-grounded Model | GT | 90.8 | 90.8 | 26.0 | 26.0 | 53.4 | 53.4 |
| Grounded Model | GT | 91.0 | 91.0 | 32.1 | 32.1 | 57.0 | 57.0 |
![](images/cb78211dc21f735e1f6d0163d4938ab562b9cdc01accf5577dced4725ba75b39.jpg)
![](images/e6048ddf82045c1a6727a13dcffe23c6e8ff9213ac95a0a07a73742f4912e2c1.jpg)
![](images/1ab7701989ec9587d887fc1b5680342854b545147701ec600f52fa62bff25a05.jpg)
Figure 4. Data scaling analysis of zero-shot, sim-to-real grounding capability and hallucination. Grounding performance (left two subfigures) consistently improves as data scales up, and the model trained with densely-grounded data exhibits better grounding capability than the model trained without it. Additionally (right subfigure), the model hallucinates less when exposed to more data from 3D-GRAND. Here, the hallucination rate is calculated as (1 - Precision) on 3D-POPE.

Table 6. Ablation study on grounding accuracy (%) on ScanRefer: training with densely-grounded data significantly improves grounding accuracy, particularly when multiple distractor objects of the same category are present in the room.
| 3D-POPE | Model | Precision |
|---|---|---|
| Random | 3D-GRAND | 93.34 |
| Random | w/o grounding tokens | (-1.38) |
| Popular | 3D-GRAND | 73.05 |
| Popular | w/o grounding tokens | (-2.68) |
| Adversarial | 3D-GRAND | 69.86 |
| Adversarial | w/o grounding tokens | (-2.38) |
+ +Table 7. Ablation on 3D-POPE. Without the grounding tokens, 3D-GRAND hallucinates more. + +before or after the model outputs the final answer for the referring expression. The former functions like a chain-of-thought reasoning process [69], which is likely why the performance increases compared to the latter. See Appendix for details. + +Mask3D proposals. Finally, we show the upper bound of our approach in Table 6, with Mask3D proposals. Due to the LLM context length, we only use top-40 proposals. + +# 5.5. Data Scaling and Sim-to-Real Transfer + +We present results in Figure 4. Our model is trained on synthetic 3D scenes from 3D-FRONT and Structured3D [24, 81], and evaluated on real-world 3D scans from ScanNet [15]. Grounding performance consistently improves and the hallucination rate drops as the densely-grounded data scales up. Notably, our model trained on densely grounded data scales better than the same model trained without such data. These findings pave the way for scaling 3D-text understanding using synthetic scenes obtained from simulation, which are cheaper and easier to obtain. + +# 6. Conclusion + +We introduce 3D-GRAND, a large-scale, densely-grounded 3D-text dataset designed for grounded 3D instruction tuning, and 3D-POPE, a comprehensive benchmark for evaluating object hallucination in 3D-LLMs. Through extensive experiments, we demonstrated the effectiveness of our dataset on 3D-LLMs in improving grounding and reducing hallucination, achieving state-of-the-art performance on the ScanRefer and 3D-POPE benchmarks. Our ablation analysis demonstrated the importance of densely-grounded instruction tuning, data scaling laws, and effective sim-to-real transfer in developing high-performing 3D-LLMs. We hope our findings can spark further research and innovation in this field, leading to the development of more advanced and capable 3D-LLMs for a wide range of applications. + +# 7.
Acknowledgement + +This work is generously supported by NSF IIS-1949634, NSF SES-2128623, and has benefited from the Microsoft Accelerate Foundation Models Research (AFMR) grant program. + +# References + +[1] Ahmed Abdelreheem, Kyle Olszewski, Hsin-Ying Lee, Peter Wonka, and Panos Achlioptas. Scanents3d: Exploiting phrase-to-3d-object correspondences for improved visio-linguistic models in 3d scenes. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3524–3534, 2024. 3, 4, 6 +[2] Panos Achlioptas, Ahmed Abdelreehem, Fei Xia, Mohamed Elhoseiny, and Leonidas J. Guibas. ReferIt3D: Neural listeners for fine-grained 3d object identification in real-world scenes. In 16th European Conference on Computer Vision (ECCV), 2020. 3 +[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716-23736, 2022. 2 +[4] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19129-19139, 2022. 3 +[5] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023. 2 +[6] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. Experience grounds language, 2020. 1 +[7] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. 
Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. 2 +[8] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023. 5 +[9] Khyathi Raghavi Chandu, Yonatan Bisk, and Alan W Black. Grounding 'grounding' in NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4283-4305, Online, 2021. Association for Computational Linguistics. 1 +[10] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural + +language. 16th European Conference on Computer Vision (ECCV), 2020. 3, 6 +[11] Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, and Tao Chen. Ll3da: Visual interactive instruction tuning for omni-3d understanding, reasoning, and planning. arXiv preprint arXiv:2311.18651, 2023. 2, 3 +[12] Xuweiyi Chen, Ziqiao Ma, Xuejun Zhang, Sihan Xu, Shengyi Qian, Jianing Yang, David F. Fouhey, and Joyce Chai. Multi-object hallucination in vision-language models, 2024. 2 +[13] Zhenyu Chen, Ali Gholami, Matthias Nießner, and Angel X Chang. Scan2cap: Context-aware dense captioning in rgb-d scans. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3193-3203, 2021. 3 +[14] Kevin Crowston. Amazon mechanical turk: A research tool for organizations and information systems scholars. In *Shaping the Future of ICT Research*. Methods and Approaches, pages 210–221, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. 5 +[15] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017. 3, 4, 7, 8 +[16] Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, and Pascale Fung. Plausible may not be faithful: Probing object hallucination in vision-language pre-training, 2023. 2 +[17] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. In International Conference on Learning Representations (ICLR), 2024. 7, 1 +[18] Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, and Ali Farhadi. RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. In CVPR, 2020. 4 +[19] Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Jordi Salvador, Kiana Ehsani, Winson Han, Eric Kolve, Ali Farhadi, Aniruddha Kembhavi, and Roozbeh Mottaghi. Proc-THOR: Large-Scale Embodied AI Using Procedural Generation. In NeurlPS, 2022. Outstanding Paper Award. 4 +[20] Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, and Lidong Bing. Is gpt-3 a good data annotator?, 2023. 5 +[21] Kiana Ehsani, Winson Han, Alvaro Herrasti, Eli VanderBilt, Luca Weihs, Eric Kolve, Aniruddha Kembhavi, and Roozbeh Mottaghi. ManipulaTHOR: A Framework for Visual Object Manipulation. In CVPR, 2021. 4 +[22] Epic Games. Unreal engine. 4 +[23] Yasaman Etesam, Leon Kochiev, and Angel X Chang. 3dvqa: Visual question answering for 3d environments. In 2022 19th Conference on Robots and Vision (CRV), pages 233-240. IEEE, 2022. 3 +[24] Huan Fu, Bowen Cai, Lin Gao, Ling-Xiao Zhang, Jiaming Wang, Cao Li, Qixun Zeng, Chengyue Sun, Rongfei Jia, Binqiang Zhao, et al. 3d-front: 3d furnished rooms with + +Layouts and semantics. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10933-10942, 2021. 4, 5, 8 +[25] Anisha Gunjal, Jihan Yin, and Erhan Bas. 
Detecting and preventing hallucinations in large vision language models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 18135-18143, 2024. 2 +[26] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nießner. Text2room: Extracting textured 3d meshes from 2d text-to-image models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7909-7920, 2023. 4 +[27] Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B Tenenbaum, and Chuang Gan. 3d concept learning and reasoning from multi-view images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9202-9212, 2023. 3 +[28] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36:20482-20494, 2023. 2, 3, 4, 7 +[29] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. 7, 1 +[30] Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, and Zhou Zhao. Chat-3d v2: Bridging 3d scene and large language models with object identifiers. arXiv preprint arXiv:2312.08168, 2023. 2, 3 +[31] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. In ICML, 2024. 2, 3, 4, 7 +[32] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, 2023. 3, 1 +[33] Qidong Huang, Xiaoyi Dong, Pan Zhang, Bin Wang, Conghui He, Jiaqi Wang, Dahua Lin, Weiming Zhang, and Nenghai Yu. 
Opera: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 2 +[34] Wencan Huang, Daizong Liu, and Wei Hu. Dense object grounding in 3d scenes. Proceedings of the 31st ACM International Conference on Multimedia, 2023. 3, 1 +[35] Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, and Li Fei-Fei. Voxposer: Composable 3d value maps for robotic manipulation with language models. arXiv preprint arXiv:2307.05973, 2023. 2 +[36] Shima Imani, Liang Du, and Harsh Shrivastava. Math-prompter: Mathematical reasoning using large language models, 2023. 5 +[37] Baoxiong Jia, Yixin Chen, Huangyue Yu, Yan Wang, Xuesong Niu, Tengyu Liu, Qing Li, and Siyuan Huang. Sceneverse: + +Scaling 3d vision-language learning for grounded scene understanding. arXiv preprint arXiv:2401.09340, 2024. 2, 3 +[38] Arthur Juliani, Vincent-Pierre Berges, Ervin Teng, Andrew Cohen, Jonathan Harper, Chris Elion, Chris Goy, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. Unity: A general platform for intelligent agents, 2020. 4 +[39] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv, 2017. 4 +[40] Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. Lisa: Reasoning segmentation via large language model. arXiv preprint arXiv:2308.00692, 2023. 2, 3 +[41] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2 +[42] Mingsheng Li, Xin Chen, Chi Zhang, Sijin Chen, Hongyuan Zhu, Fukun Yin, Gang Yu, and Tao Chen. M3dbench: Let's instruct large models with multi-modal 3d prompts. 
arXiv preprint arXiv:2312.10763, 2023. 3 +[43] Yifan Li, Yifan Du, Kun Zhou, Jinping Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023. 3, 4, 1 +[44] Zeju Li, Chao Zhang, Xiaoyan Wang, Ruilong Ren, Yifan Xu, Ruifei Ma, and Xiangde Liu. 3dmit: 3d multi-modal instruction tuning for scene understanding. arXiv preprint arXiv:2401.03201, 2024. 3 +[45] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014. 3 +[46] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. 2 +[47] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019. 7, 1 +[48] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. In International Conference on Learning Representations, 2023. 3 +[49] Manolis Savva*, Abhishek Kadian*, Oleksandr Maksymets*, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. 4 +[50] Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David Hoeller, Jia Lin Yuan, Ritvik Singh, Yunrong Guo, Hammad Mazhar, Ajay Mandlekar, Buck Babich, Gavriel State, Marco Hutter, and Animesh Garg. Orbit: A unified simulation framework for interactive robot learning environments. IEEE Robotics and Automation Letters, 8(6): 3740-3747, 2023. 4 + +[51] OpenAI. Hello gpt-4o, 2024. 5 +[52] OpenAI. Gpt-4 technical report, 2024. 
2, 5 +[53] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023. 2, 3 +[54] Xavi Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Ruslan Partsey, Jimmy Yang, Ruta Desai, Alexander William Clegg, Michal Hlavac, Tiffany Min, Theo Gervet, Vladimir Vondrus, Vincent-Pierre Berges, John Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, and Roozbeh Mottaghi. Habitat 3.0: A co-habitat for humans, avatars and robots, 2023. 4 +[55] Zhangyang Qi, Ye Fang, Zeyi Sun, Xiaoyang Wu, Tong Wu, Jiaqi Wang, Dahua Lin, and Hengshuang Zhao. Gpt4point: A unified framework for point-language understanding and generation, 2023. 2 +[56] Hanoona Rasheed, Muhammad Maaz, Sahal Shaji, Abdelrahman Shaker, Salman Khan, Hisham Cholakkal, Rao M Anwer, Erix Xing, Ming-Hsuan Yang, and Fahad S Khan. Glamm: Pixel grounding large multimodal model. arXiv preprint arXiv:2311.03356, 2023. 2, 3 +[57] Jeff Rasley, Samyam Rajbhandari, Olatunjri Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020. 7, 1 +[58] Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035-4045, Brussels, Belgium, 2018. Association for Computational Linguistics. 2, 3, 1 +[59] David Rozenberszki, Or Litany, and Angela Dai. Language-grounded indoor 3d semantic segmentation in the wild. In Proceedings of the European Conference on Computer Vision (ECCV), 2022. 
4 +[60] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Laion-5b: An open large-scale dataset for training next generation image-text models, 2022. 2 +[61] Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, and Bastian Leibe. Mask3d: Mask transformer for 3d semantic instance segmentation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 8216-8223. IEEE, 2023. 7 +[62] Jonas Schult, Sam Tsai, Lukas Hollein, Bichen Wu, Jialiang Wang, Chih-Yao Ma, Kunpeng Li, Xiaofang Wang, Felix Wimbauer, Zijian He, Peizhao Zhang, Bastian Leibe, Peter Vajda, and Ji Hou. Controlroom3d: Room generation using semantic proxy rooms, 2023. 4 +[63] Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu + +Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525, 2023. 2 +[64] Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 4 +[65] Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansoresh Karami, Jundong Li, Lu Cheng, and Huan Liu. Large language models for data annotation: A survey, 2024. 
5, 6 +[66] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, MarieAnne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023. 2, 7 +[67] Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, et al. Embodiedscan: A holistic multimodal 3d perception suite towards embodied ai. arXiv preprint arXiv:2312.16170, 2023. 3 +[68] Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang, and Zhou Zhao. Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes. arXiv preprint arXiv:2308.08769, 2023. 2 +[69] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. 8 +[70] Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, and Chi Wang. 
An empirical study on challenging math problem solving with gpt-4, 2023. 5 +[71] Jiarui Xu, Xingyi Zhou, Shen Yan, Xiuye Gu, Anurag Arnab, Chen Sun, Xiaolong Wang, and Cordelia Schmid. Pixel Aligned Language Models. arXiv preprint arXiv: 2312.09237, 2023. 3 + +[72] Xu Yan, Zhihao Yuan, Yuhao Du, Yinghong Liao, Yao Guo, Zhen Li, and Shuguang Cui. Clevr3d: Compositional language and elementary visual reasoning for question answering in 3d real-world scenes. arXiv preprint arXiv:2112.11691, 2021. 3 +[73] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v, 2023. 5 +[74] Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In ICRA, 2024. 2 +[75] Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, et al. Holodeck: Language guided generation of 3d embodied ai environments. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), pages 20-25. IEEE/CVF, 2024. 4 +[76] Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, and Yinfei Yang. Ferret: Refer and ground anything anywhere at any granularity. In The Twelfth International Conference on Learning Representations, 2023. 2, 3 +[77] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 69-85. Springer, 2016. 3 +[78] Zhihao Yuan, Xu Yan, Zhuo Li, Xuhao Li, Yao Guo, Shuguang Cui, and Zhen Li. Toward explainable and fine-grained 3d grounding through referring textual phrases. arXiv preprint arXiv:2207.01821, 2022. 
3 +[79] Bohan Zhai, Shijia Yang, Xiangchen Zhao, Chenfeng Xu, Sheng Shen, Dongdi Zhao, Kurt Keutzer, Manling Li, Tan Yan, and Xiangjun Fan. Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption. arXiv preprint arXiv:2310.01779, 2023. 2 +[80] Yichi Zhang, Ziqiao Ma, Xiaofeng Gao, Suhaila Shakiah, Qiaozi Gao, and Joyce Chai. Groundhog: Grounding large language models to holistic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 2 +[81] Jia Zheng, Junfei Zhang, Jing Li, Rui Tang, Shenghua Gao, and Zihan Zhou. Structured3d: A large photo-realistic dataset for structured 3d modeling. In Proceedings of The European Conference on Computer Vision (ECCV), 2020. 4, 8 +[82] Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. Analyzing and mitigating object hallucination in large vision-language models. In The Twelfth International Conference on Learning Representations, 2024. 2 +[83] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 2 + +[84] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus of images interleaved with text, 2023. 2 +[85] Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pre-trained transformer for 3d vision and text alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2911-2921, 2023. 
2, 3, 4, 7 \ No newline at end of file diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/images.zip b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..94009bd1b229d1044756e36f4f3d8caf72235c3b --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f91ea3373fecb1ab9532552ca1129368343086aaa6f29f6b097f8aeb9adc3b4 +size 626537 diff --git a/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/layout.json b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2ebcd6a30483b281aae699e0e0169a8ef52bca60 --- /dev/null +++ b/CVPR/2025/3D-GRAND_ A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cfcf47dfbf355d18747624c722c799d74e479cce171d9c980736a6f0d905517 +size 378997 diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_content_list.json b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8b4ec957cd6ecbe39495f1ab127f5568469dfb6b --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4923798bed750dc45f4fe3daffb2cbc21b1bc22f11c71356f7884315ed646c45 +size 92415 diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian 
Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_model.json b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..55d06b8e013d1092c95227d8f3340bf286343327 --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3076da908474779ef61157066e6041f2a01308e9898306a01e61c9e1f57f5247 +size 116305 diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_origin.pdf b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d2f89bbe06ec467ec07f49b5d82043a3f09dad27 --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/c8fc89b6-7a4d-4610-af82-1a446e24a6b9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ad0f1494b5d14e4a82dea533e318b719e7972221b8c6d37ef16bc8ec251ac6e +size 5746782 diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/full.md b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/full.md new file mode 100644 index 0000000000000000000000000000000000000000..70d9c55c37b71725e8b84d5495655eea83ba0123 --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/full.md @@ -0,0 +1,438 @@ +# 3D-GSW: 3D Gaussian Splatting for Robust Watermarking + +Youngdong Jang$^{1}$ Hyunjie Park$^{1}$ Feng Yang$^{2}$ Heeju Ko$^{1}$ Euijin Choo$^{3}$ Sanggil Kim$^{1*}$ + +$^{1}$ Korea University $^{2}$ Google DeepMind $^{3}$ University of Alberta + +# Abstract + +As 3D Gaussian Splatting (3D-GS) gains significant attention and its commercial usage increases, the need for watermarking technologies to prevent unauthorized use of
the 3D-GS models and rendered images has become increasingly important. In this paper, we introduce a robust watermarking method for 3D-GS that secures copyright of both the model and its rendered images. Our proposed method remains robust against distortions in rendered images and model attacks while maintaining high rendering quality. To achieve these objectives, we present Frequency-Guided Densification (FGD), which removes 3D Gaussians based on their contribution to rendering quality, enhancing real-time rendering and the robustness of the message. FGD utilizes the Discrete Fourier Transform to split 3D Gaussians in high-frequency areas, improving rendering quality. Furthermore, we employ a gradient mask for 3D Gaussians and design a wavelet-subband loss to enhance rendering quality. Our experiments show that our method embeds the message in the rendered images invisibly and robustly against various attacks, including model distortion. Our method achieves superior performance in both rendering quality and watermark robustness while improving real-time rendering efficiency. Project page: https://kuai-lab.github.io/cvpr20253dgsw/ + +# 1. Introduction + +3D representation has been at the center of computer vision and graphics. Such technology plays a pivotal role in various applications and industries, e.g., movies, games, and the Metaverse industry. Since Neural Radiance Field [31] (NeRF) has shown great success in 3D representation due to photo-realistic rendering quality, it has been at the forefront of 3D content creation. + +Recently, 3D Gaussian Splatting [15] (3D-GS) has gained attention for its real-time rendering performance and high rendering quality, compared to other radiance field methods [6, 9, 31, 34]. 3D-GS is an explicit representation + +![](images/a8c2cd071afcab6fdad754eac458e3204c0f9597c7099d0b10d55afe35117a6f.jpg) +Figure 1. The unauthorized use of the 3D Gaussian Splatting model.
Our method ensures that the watermark remains detectable even in distorted images and under model attacks. + +that uses trainable 3D Gaussians. This explicit property enhances the capability of 3D-GS to generate 3D assets. Due to these properties, 3D-GS has been a transformative 3D representation. + +While 3D-GS has advanced and practical usage has increased, it raises concerns about the unauthorized use of its 3D assets. Therefore, attempts have been made to develop digital watermarking for radiance fields to address this problem, such as WaterRF [11], which integrates watermark embedding into the rendering process. However, this method presents several challenges when applied directly to 3D-GS. First, achieving high-fidelity rendering requires redundant 3D Gaussians, which leads to substantial memory and storage overheads, especially for large-scale scenes. This also makes embedding a large watermark payload computationally expensive, significantly increasing processing time. Second, many 3D Gaussians have minimal impact on the rendered image, which makes it difficult to embed the watermark in them robustly. + +To address these issues, we propose Frequency-Guided Densification (FGD) to reduce the number of 3D Gaussians and ensure both real-time rendering and robust message embedding. FGD consists of two phases. In the first phase, we remove 3D Gaussians based on their contribution to the rendering quality. The remaining 3D Gaussians, which significantly impact the rendered image, enable robust message embedding. In the second phase, we utilize two properties to enhance rendering quality: 1) smaller 3D Gaussians have minimal impact on the rendered image [17]; 2) the human visual system is less sensitive to high-frequency areas [18]. To identify high-frequency areas, we apply the Discrete Fourier Transform (DFT) to the rendered image in a patch-wise manner and measure the high-frequency intensity.
After that, 3D Gaussians with strong high-frequency intensity are split into smaller ones to ensure the rendering quality. + +Furthermore, significant changes to the parameters of 3D-GS, which are optimized for high rendering quality, lead to substantial variations in the rendered output. To minimize these adjustments, we utilize a gradient mask derived from the pre-trained parameters, transmitting smaller gradients to 3D-GS during optimization. In this way, the rendering quality is not significantly decreased. To further enhance rendering quality, we design a wavelet-subband loss. Since we split 3D Gaussians in high-frequency areas, the wavelet-subband loss enhances the local structure by leveraging only high-frequency components. + +Our experimental results show that our method effectively fine-tunes 3D-GS to embed the watermark into the rendered images from all viewpoints. We also evaluate the robustness of our method under various attacks, including image distortion and model attacks. We compare our method with other methods [11, 20] and demonstrate that it outperforms other state-of-the-art radiance field watermarking methods across all metrics. Our main contributions are summarized as follows: + +- We propose frequency-guided densification, which effectively removes 3D Gaussians without compromising rendering quality, enabling robust message embedding in the rendered image while enhancing real-time rendering. +- We propose a gradient mask mechanism that minimizes gradients to preserve similarity to the pre-trained 3D-GS and maintain high rendering quality. +- We introduce a wavelet-subband loss to enhance rendering quality, particularly in high-frequency areas. +- The proposed method achieves superior performance and demonstrates robustness against various types of attacks, including both image and model distortions. + +# 2. Related Work + +# 2.1.
3D Gaussian Splatting

Recently, 3D Gaussian Splatting (3D-GS) [15] has brought a paradigm shift in the radiance field by introducing an explicit representation and differentiable point-based splatting methods, allowing for real-time rendering. 3D-GS has been applied to various research areas, including 3D reconstruction [14, 23, 27, 47], dynamic scenes [10, 28, 45, 48], avatars [19, 21, 33, 40] and generation [7, 22, 24, 25]. Its capability and efficiency have made 3D-GS widely used, positioning it at the forefront of 3D asset generation. As adoption grows across various applications, ensuring the integrity and reliability of generated content has become increasingly important. Therefore, the copyright protection of 3D-GS-generated assets has become an essential aspect.

# 2.2. Frequency Transform

The Discrete Fourier Transform (DFT) has played a crucial role in signal processing and image processing. Recent research [12, 13, 37, 49] has applied the DFT to images and leveraged frequency signals to improve model performance and analyze images. Baig [1] utilizes the DFT to estimate the quality of blurred images globally. Rao [36] leverages this ability of the DFT to acquire global information about images. According to these studies, the DFT can efficiently analyze global information in images. Since we need to analyze global frequency signal strength across image patches, we use the DFT to transform the rendered images in a patch-wise manner.
For the radiance field, previous works [11, 26, 39, 46] show the compatibility between the radiance field and DWT. Leveraging these advantages, we utilize DWT to compute loss functions between high-frequency local information, thereby enhancing rendering quality. + +# 2.3. Steganography and Digital Watermarking + +Steganography is employed to maintain the confidentiality of information by embedding it invisibly within digital assets. Recently, there has been growing interest in applying steganography to the radiance field [4, 8, 20]. StegaNeRF [20] fine-tunes the pre-trained radiance fields model to invisibly embed images into the rendered image. For 3D-GS, GS-hider [51] invisibly embeds 3D scenes and images into point clouds. + +Digital watermarking protects digital assets by identifying the copyrights. The main difference lies in the priority of data embedding. The primary goal of digital watermarking is robustness, ensuring that embedded data can be detected even after distortions, whereas steganography prioritizes invisibility. To achieve robustness, the traditional watermarking methods [2, 38, 41, 42] have utilized DWT, embedding into the subbands of DWT. HiDDeN [52] is the end-to-end + +![](images/9f041040a0d4b19141badc282b10b8ea5a2c39f46d4f1e36cd5959d7e45ee0cd.jpg) +Figure 2. 3D-GSW Overview. Before fine-tuning 3D-GS, Frequency-Guided Densification (FGD) removes 3D Gaussians based on their contribution to the rendering quality and splits 3D Gaussians in high-frequency areas into smaller ones. We also construct a gradient mask based on the parameters of an FGD-processed 3D-GS. During the fine-tuning, we apply the Discrete Wavelet Transform (DWT) to the rendered image for robustness, using the low frequency as input to a pre-trained message decoder. For rendering quality, we design a wavelet-subbands loss that utilizes only high-frequency subbands. Finally, 3D-GS is optimized through the $\mathcal{L}_{\text {total }}$ . 
deep learning watermarking method, which embeds the robust message by adding a noise layer. For radiance field watermarking, CopyRNeRF [29] explores embedding the message into the image rendered from an implicit NeRF. WateRF [11] enhances both high rendering quality and robustness of watermarks through the DWT. In this paper, we introduce a robust digital watermarking method for 3D-GS.

# 3. Method

# 3.1. Preliminary

3D-GS [15] represents the 3D world with a set of 3D Gaussian primitives, each defined as:

$$
G(\mathbf{x}; \mu, \Sigma) = e^{-\frac{1}{2}(\mathbf{x} - \mu)^{\mathrm{T}} \Sigma^{-1} (\mathbf{x} - \mu)} \tag{1}
$$

where the mean $\mu$ and covariance $\Sigma$ determine the spatial distribution. To render these primitives onto an image plane, each 3D Gaussian is projected into 2D pixel space and forms a 2D Gaussian primitive $\hat{G}$ via the projective transform and its Jacobian evaluated at $\mu$. The 2D Gaussian primitives are depth-ordered, rasterized, and alpha-blended using the transmittance $T_{i}$ as a weight to form an image:

$$
I_{\pi}[x, y] = \sum_{i \in N_{\mathcal{G}}} c_{i} \alpha_{i} T_{i}, \quad \text{where } T_{i} = \prod_{j=1}^{i-1} (1 - \alpha_{j}) \tag{2}
$$

$$
\alpha_{i} = \sigma_{i} \hat{G}_{i}^{\pi}([x, y]; \hat{\mu}, \hat{\Sigma}) \tag{3}
$$

where $\pi$, $c_{i}$, $\sigma_{i}$, and $\alpha_{i}$ are the viewpoint, color, opacity, and density of each Gaussian primitive evaluated at each pixel. $N_{\mathcal{G}}$ denotes the set of depth-ordered 2D Gaussian primitives that are present in the selected viewpoint.

# 3.2. Fine-tuning 3D Gaussian Splatting

As shown in Fig. 2, we fine-tune the pre-trained 3D Gaussian Splatting (3D-GS) $\mathcal{G}_o$ into $\mathcal{G}_w$ to ensure that the rendered images from all viewpoints contain a binary message $M = (m_1,\dots,m_N)\in \{0,1\}^N$ .
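For illustration, the per-pixel compositing of Eqs. 2–3 can be sketched as follows. This is a minimal NumPy sketch over one pixel with the depth ordering and per-pixel densities assumed to be given; the actual rasterizer is tile-based and GPU-accelerated.

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Alpha-blend depth-ordered 2D Gaussian primitives at one pixel (Eq. 2).

    colors: (N, 3) array of per-Gaussian RGB colors c_i (front to back).
    alphas: (N,) array of per-Gaussian densities alpha_i at this pixel.
    Returns sum_i c_i * alpha_i * T_i, where the transmittance
    T_i = prod_{j<i} (1 - alpha_j) accumulates front-to-back occlusion.
    """
    transmittance = 1.0
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += c * a * transmittance
        transmittance *= (1.0 - a)
    return pixel
```

A fully opaque front primitive (alpha = 1) drives the transmittance to zero, so primitives behind it contribute nothing to the pixel.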
To achieve this, we utilize a pre-trained message decoder, HiDDeN [52], denoted as $D_{m}$ . Before fine-tuning, to enhance robustness, we employ Frequency-Guided Densification (FGD) to remove 3D Gaussians with minimal impact on the rendered image and split 3D Gaussians in high-frequency areas (see Sec. 3.3 and Sec. 3.4). After that, we construct a gradient mask based on the FGD-processed 3D-GS $\mathcal{G}_o^\prime$ (see Sec. 3.5) to ensure high rendering quality. In the fine-tuning process, $\mathcal{G}_w$ renders an image $I_w\in \mathbb{R}^{3\times H\times W}$ . $I_w$ is transformed into the wavelet subbands $\{LL_l,LH_l,HL_l,HH_l\}$ , where $l$ denotes the DWT level and L and H denote the low- and high-frequency components, respectively. Following previous work [11], we choose the $LL_2$ subband as the input to $D_{m}$ and decode the message $M^{\prime} = D_{m}(LL_{2})$ , ensuring efficient and robust message embedding. Additionally, we employ the high-frequency subbands for the proposed wavelet-subband loss. Further details are provided in the following sections.

# 3.3. Measure Contribution of Rendering Quality

The pre-trained 3D-GS includes redundant 3D Gaussians to ensure high-quality rendering. Because 3D Gaussians with minimal impact on rendering quality also carry part of the message, the message tends to be weakly embedded in the rendered image. To address this limitation, we remove 3D Gaussians with minimal impact on the rendered image before the fine-tuning process.
Inspired by error-based densification [5], we measure the contribution of each 3D Gaussian to the rendering quality using the auxiliary loss function $L_{\pi}^{aux}$ with a new color parameter set $C'$ for the viewpoint $\pi$ :

$$
L_{\pi}^{aux} := \frac{\sum_{x, y \in Pix} \mathcal{E}_{\pi}[x, y]\, I_{\pi}^{c'}[x, y]}{H \times W} \tag{4}
$$

$$
\mathcal{E}_{\pi} = \left| I_{\pi}^{c'} - I_{\pi}^{gt} \right| \tag{5}
$$

where $I_{\pi}^{c'} \in \mathbb{R}^{3 \times H \times W}$ and $I_{\pi}^{gt} \in \mathbb{R}^{3 \times H \times W}$ denote the image rendered with $C'$ and the ground truth, respectively. We replace the parameters $C$ with $C'$ only when $\mathcal{G}_o$ renders $I_{\pi}^{c'}$ , and set all of its values to zeros. During the backward process, the gradients of the auxiliary loss with respect to $C'$ are derived as follows:

$$
V_{\pi} = \frac{\partial L_{\pi}^{aux}}{\partial C'} = \sum_{x, y \in Pix} \mathcal{E}_{\pi}[x, y]\, w_{\pi} \tag{6}
$$

$$
w_{\pi} = \sum_{i \in N_{\mathcal{G}}} c_{i} \alpha_{i} T_{i} \tag{7}
$$

where $c_{i}$ , $\alpha_{i}$ and $T_{i}$ denote the color, the density, and the transmittance of each 3D Gaussian, respectively. We utilize $V_{\pi} \in \mathbb{R}^{N_{\mathcal{G}} \times 3}$ as the contribution to rendering quality at viewpoint $\pi$ , as it reflects each 3D Gaussian's contribution to the color of the rendered image.

# 3.4. Frequency-Guided Densification (FGD)

Our method aims to embed the message $M$ robustly into the rendered image while ensuring fast embedding and real-time rendering speed without a decrease in rendering quality. To achieve these objectives, we propose Frequency-Guided Densification (FGD), which removes 3D Gaussians that have minimal impact on the rendered image and splits 3D Gaussians in high-frequency areas into smaller ones.
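The contribution measure of Eqs. 4–7, used in FGD's first phase, can be sketched as follows. This is a simplified NumPy sketch assuming the per-pixel blending weights $\alpha_i T_i$ of each Gaussian are available from the rasterizer; the array names are illustrative only.

```python
import numpy as np

def gaussian_contributions(errors, blend_weights):
    """Per-Gaussian contribution to rendering quality (cf. Eq. 6).

    errors:        (H, W) per-pixel reconstruction error E_pi (Eq. 5).
    blend_weights: (N, H, W) blending weight alpha_i * T_i of each of the
                   N Gaussians at every pixel (zero where it is not visible).
    Returns an (N,) score: the error-weighted sum of blending weights.
    """
    return (blend_weights * errors[None, :, :]).sum(axis=(1, 2))

def keep_mask(contributions, threshold=1e-8):
    # Gaussians at or below the threshold are pruned before fine-tuning;
    # Sec. 4 reports removing those with contribution under 1e-8 (~28%).
    return contributions > threshold
```

A Gaussian that never contributes to a visible pixel receives a zero score and is removed by `keep_mask`.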
FGD consists of two phases to achieve these goals. First, the pre-trained $\mathcal{G}_o$ renders the image $I_{\pi}^{c'}$ from all viewpoints, and we derive $V_{\pi}$ from the rendered images. Based on $V_{\pi}$ , we remove 3D Gaussians that have negligible impact on the rendering quality. Second, since large scenes require substantial memory, the images rendered by the pruned 3D-GS are divided into patches $P \in \mathbb{R}^{3 \times M \times N}$ to improve memory efficiency. To identify the patches with strong high-frequency signals, FGD utilizes the Discrete Fourier Transform (DFT) for global frequency analysis. The DFT is defined as follows:

$$
F[u, v] = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f[m, n]\, e^{-j 2\pi \left(\frac{u}{M} m + \frac{v}{N} n\right)} \tag{8}
$$

where $f$ and $F$ denote the spatial-domain pixel value at the spatial-domain image coordinate $(m,n)$ and the frequency-domain pixel value at the frequency-domain image coordinate $(u,v)$ , respectively. We transform the spatial-domain patch $P$ into the frequency domain using the DFT, revealing a complete spectrum of frequency components, i.e., $\tilde{P} = \Re(F(P)) \in \mathbb{R}^{3 \times U \times V}$ . The transformed patch $\tilde{P}$ undergoes a Hadamard product $\odot$ with a mask $Q \in \mathbb{R}^{3 \times U \times V}$ , designed to emphasize high-frequency signals, and the high-frequency intensity $E$ is computed as follows:

$$
Q[u, v] = \left(\frac{2u - U}{U}\right)^{2} + \left(\frac{2v - V}{V}\right)^{2} \tag{9}
$$

$$
E = \frac{\sum_{u, v} (\tilde{P} \odot Q)_{uv}}{U \times V} \tag{10}
$$

where $(u,v)$ indexes the $U \times V$ frequency grid. We select the top $K\%$ of patches based on $E$ and track the 3D Gaussians from the chosen patches. Based on $V_{\pi}$ , we choose the 3D Gaussians that have less impact on the image and split them into smaller ones to enhance rendering quality.
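The patch scoring of Eqs. 8–10 can be sketched as below. The text does not fully specify whether the spectrum is centered, so this sketch takes the real part as written and applies an `fftshift` (an assumption) so that the radial mask $Q$ is smallest at the zero frequency and largest at the outer, high-frequency bands.

```python
import numpy as np

def high_frequency_intensity(patch):
    """Score an image patch by its high-frequency energy (Eqs. 8-10).

    patch: (3, U, V) spatial-domain patch.
    Returns the scalar intensity E; the top-K% patches by E mark the
    high-frequency areas whose 3D Gaussians are split.
    """
    _, U, V = patch.shape
    # Real part of the 2D DFT (Eq. 8), shifted so the zero frequency sits
    # at the center, where the radial mask Q below is smallest.
    spec = np.fft.fftshift(np.fft.fft2(patch), axes=(-2, -1)).real
    u = np.arange(U)[:, None]
    v = np.arange(V)[None, :]
    Q = ((2 * u - U) / U) ** 2 + ((2 * v - V) / V) ** 2   # Eq. 9
    return float((spec * Q).sum() / (U * V))              # Eq. 10
```

A constant patch has all its energy at the zero frequency, where $Q = 0$, so it scores zero, while a rapidly alternating patch scores high.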
Therefore, we effectively reduce the number of 3D Gaussians to enhance rendering speed and maintain high rendering quality. With intensive optimization of the 3D Gaussians that significantly impact rendering quality, a robust message can be embedded.

# 3.5. Gradient Mask for 3D Gaussian Splatting

Since the FGD-processed 3D-GS $\mathcal{G}_o^\prime$ renders high-quality images, we must embed the message without compromising rendering quality. To achieve this, we further reduce the gradient magnitude during fine-tuning to minimize changes in the parameters $\theta$ of $\mathcal{G}_o^\prime$ . The parameters $\theta$ consist of position $\mu$ , color $c$ , opacity $\sigma$ , rotation $r$ , and scale $s$ .

While StegaNeRF [20] uses a gradient mask to modify the gradient, applying this method to 3D-GS is challenging due to the zero values in its parameters. To avoid dividing by zero and to further reduce the gradient magnitude so that parameter changes are minimized, we incorporate an exponential function into the mask calculation. To reduce the gradient size of the parameter $\theta$ for each 3D Gaussian, the gradient mask $z\in \mathbb{R}^{N_{\mathcal{G}_o'}}$ is calculated as follows:

$$
w = \frac{1}{e^{|\theta|^{\beta}}}, \quad z = \frac{w}{\sum_{i=1}^{N_{\mathcal{G}_{o}^{\prime}}} w_{i}} \tag{11}
$$

where $i$ indexes the 3D Gaussians and $\beta > 0$ controls the strength of gradient manipulation. We calculate the mask $z$ for each of the parameters $c, \sigma, r$ and $s$ . The gradient of the position parameter $\mu$ remains close to zero, so we apply the gradient mask to all parameters except $\mu$ .
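A minimal sketch of the mask in Eq. 11 for one parameter vector (e.g. opacity); the exponential keeps the weight finite even for zero-valued parameters:

```python
import numpy as np

def gradient_mask(theta, beta=4.0):
    """Per-Gaussian gradient mask z of Eq. 11.

    theta: (N,) values of one parameter (c, sigma, r, or s) per Gaussian.
    beta:  strength of gradient manipulation (Sec. 4.1 uses beta = 4).
    """
    w = np.exp(-np.abs(theta) ** beta)   # w = 1 / e^{|theta|^beta}
    return w / w.sum()                   # normalize over all N Gaussians

# During fine-tuning the incoming gradient is shrunk elementwise:
#   masked_grad = grad * gradient_mask(theta)
```

Gaussians with larger parameter magnitudes receive smaller mask values, so their parameters change less during fine-tuning.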
During the fine-tuning, the gradient is masked as $\frac{\partial\mathcal{L}_{total}}{\partial\theta}\odot z$ , where $\mathcal{L}_{total}$ is Eq. 16 and $\odot$ denotes the Hadamard product. Since only small gradients are transmitted to 3D-GS, our gradient mask enables message embedding while preserving high rendering quality.

![](images/4f2de578939048d84279b701b2fef7fce92c87cb46826d6d8f06e7f51c3485ea.jpg)
Figure 3. Rendering quality comparison of each baseline with our method. We double the scale of the difference map. Our method outperforms others in bit accuracy and rendering quality, using 32-bit messages for the qualitative results.

# 3.6. Losses

We model the objective of 3D-GS watermarking by optimizing: 1) the reconstruction loss, 2) the LPIPS loss [50], 3) the wavelet-subband loss, and 4) the message loss. For the reconstruction loss $\mathcal{L}_{rec}$ , we measure the difference between the original image $I_{o}$ and the watermarked image $I_{w}$ .
We employ the loss function $\mathcal{L}_1$ :

$$
\mathcal{L}_{rec} = \mathcal{L}_{1}\left(I_{w}, I_{o}\right) \tag{12}
$$

For the LPIPS loss, $\mathcal{L}_{lpips}$ , we evaluate the perceptual similarity between the feature maps of $I_{o}$ and $I_{w}$ . This loss is typically computed by extracting feature maps from a pre-trained network $f(x)$ :

$$
\mathcal{L}_{lpips} = \sum_{l} \omega_{l} \cdot \mathbb{E}\left[\left(f\left(I_{w}\right) - f\left(I_{o}\right)\right)^{2}\right] \tag{13}
$$

where $l$ and $\omega_{l}$ denote the layer index of the pre-trained network and the learned scaling factor, respectively.

Since we modify 3D Gaussians in the high-frequency areas, we design a wavelet-subband loss $\mathcal{L}_w$ to further enhance the rendering quality of high-frequency areas. Since the DWT effectively analyzes local details using several subbands, we only employ the high-frequency subbands $\{LH_l, HL_l, HH_l\}$ to improve the rendering quality while embedding the message. To utilize $\mathcal{L}_w$ , $I_o$ is transformed into the wavelet subbands $\{LL_l^{gt}, LH_l^{gt}, HL_l^{gt}, HH_l^{gt}\}$ . We employ the loss function $\mathcal{L}_1$ :

$$
\mathcal{L}_{w} = \sum_{l} \sum_{S} \mathcal{L}_{1}\left(S_{l}, S_{l}^{gt}\right), \quad \text{where } S \in \{LH, HL, HH\} \tag{14}
$$

For the message loss $\mathcal{L}_m$ , we employ a sigmoid function to confine the extracted message $M'$ within the range of $[0, 1]$ .
The message loss is the binary cross-entropy between the fixed message $M$ and the sigmoid $sg(M')$ :

$$
\mathcal{L}_{m} = -\sum_{i=1}^{N}\left(M_{i} \cdot \log sg\left(M_{i}^{\prime}\right) + \left(1 - M_{i}\right) \cdot \log\left(1 - sg\left(M_{i}^{\prime}\right)\right)\right) \tag{15}
$$

Finally, 3D-GS is optimized with the total loss, which is the weighted sum of all losses:

$$
\mathcal{L}_{total} = \lambda_{rec} \mathcal{L}_{rec} + \lambda_{lpips} \mathcal{L}_{lpips} + \lambda_{w} \mathcal{L}_{w} + \lambda_{m} \mathcal{L}_{m} \tag{16}
$$

# 4. Experiments

# 4.1. Experimental Setting

Dataset & Pre-trained 3D-GS. We use Blender [31], LLFF [30] and Mip-NeRF 360 [3], which are considered standard in the NeRF [31] and 3D-GS [15] literature. Following the conventional NeRF [31] and 3D-GS [15] protocols, we compare results on 25 scenes from the full Blender, LLFF, and Mip-NeRF 360 datasets.

Baseline. We compare our method (3D-GSW) with three strategies for fairness: 1) StegaNeRF [20]: the steganography method for NeRF models, which embeds an image into the rendered image. We add three linear layers to the watermark decoder to enable message decoding. Additionally, to apply the mask of StegaNeRF [20], we set the zero-valued parameters of 3D-GS to a small value of $10^{-4}$ . 2) WateRF [11] + 3D-GS [15]: currently the state-of-the-art watermarking method for NeRF models. 3) 3D-GSW without FGD: our method with the FGD module removed.

Implementation Details. Our method is trained on a single A100 GPU. The training is completed within 2 to 10 epochs. The number of iterations per epoch equals the number of training viewpoints in the dataset.
We use Adam [16] to optimize 3D-GS. For the decoder, we pre-train the HiDDeN [52] decoder for bits $\in \{32, 48, 64\}$ and freeze its parameters during our fine-tuning process. We set $\lambda_{rec} = 1$ , $\lambda_{lpips} = 0.2$ , $\lambda_w = 0.3$ , and $\lambda_m = 0.4$ in our experiments. We remove 3D Gaussians whose contribution $V_{\pi}$ is under $10^{-8}$ . Also, we set the patch size $|P| = 16$ , $K = 1\%$ , and $\beta = 4$ . Our experiments are conducted with five different seeds.

Evaluation. We consider three important aspects of watermarks: 1) Invisibility: We evaluate invisibility using the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Learned Perceptual Image Patch Similarity (LPIPS) [50]. 2) Robustness: We investigate robustness by measuring bit accuracy under various distortions. The following distortions for message extraction are considered: Gaussian noise ( $\sigma = 0.1$ ), rotation (randomly selected between $+\pi/6$ and $-\pi/6$ ), scaling (75% of the original), Gaussian blur ( $\sigma = 0.1$ ), crop (40% of the original), JPEG compression (50% quality), and a combination of Gaussian noise, crop, and JPEG compression. Furthermore, we consider distortions of the core model, such as removing 3D Gaussians (20%), cloning 3D Gaussians (20%), and adding Gaussian noise ( $\sigma = 0.1$ ) to the parameters of 3D-GS. 3) Capacity: We explore the bit accuracy across various message lengths, denoted as $M_b \in \{32, 48, 64\}$ .

![](images/ca58b941af933cd3c445376de9ede89b1b421a9ac90911c792f01d7fc18eed40.jpg)
Figure 4. Rendering quality comparison for 32-bit, 48-bit, and 64-bit messages, showing the differences ( $\times 2$ ) between the watermarked image and the original image. Since the manipulated areas are high-frequency areas, where the human eye is less sensitive, the image rendered with our method looks more realistic and natural.

<table>
<tr><td rowspan="2">Methods</td><td colspan="4">32 bits</td><td colspan="4">48 bits</td><td colspan="4">64 bits</td></tr>
<tr><td>Bit Acc ↑</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Bit Acc ↑</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Bit Acc ↑</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr>
<tr><td>StegaNeRF [20]+3D-GS [15]</td><td>93.15</td><td>32.68</td><td>0.953</td><td>0.049</td><td>89.43</td><td>32.72</td><td>0.954</td><td>0.048</td><td>85.27</td><td>30.66</td><td>0.925</td><td>0.092</td></tr>
<tr><td>WateRF [11]+3D-GS [15]</td><td>93.42</td><td>30.49</td><td>0.956</td><td>0.050</td><td>84.16</td><td>29.92</td><td>0.951</td><td>0.053</td><td>75.10</td><td>25.81</td><td>0.883</td><td>0.108</td></tr>
<tr><td>3D-GSW without FGD</td><td>94.60</td><td>34.27</td><td>0.975</td><td>0.047</td><td>86.69</td><td>30.46</td><td>0.896</td><td>0.074</td><td>82.49</td><td>28.22</td><td>0.893</td><td>0.077</td></tr>
<tr><td><b>3D-GSW (Ours)</b></td><td><b>97.37</b></td><td><b>35.08</b></td><td><b>0.978</b></td><td><b>0.043</b></td><td><b>93.72</b></td><td><b>33.31</b></td><td><b>0.970</b></td><td><b>0.045</b></td><td><b>90.45</b></td><td><b>32.47</b></td><td><b>0.967</b></td><td><b>0.049</b></td></tr>
</table>

Table 1. Bit accuracy and quantitative comparison of rendering quality with the baselines. We show the results for 32, 48, and 64 bits. The results are the average over the Blender, LLFF, and Mip-NeRF 360 datasets. The best performances are highlighted in bold.

# 4.2. Experimental results

Rendering Quality and Bit Accuracy. In this section, we compare the rendering quality and bit accuracy with those of other methods. As shown in Fig. 3, our method is the most similar to the original and achieves high bit accuracy and rendering quality.
In particular, since real-world scenes have complex structures, it is difficult to render them similarly to the original. From Fig. 3, while other methods have difficulty balancing the rendering quality and bit accuracy, our method achieves a good balance. Tab. 1 shows that our method ensures rendering quality and bit accuracy across all datasets compared to other methods.

Capacity of Message. Since bit accuracy, rendering quality, and capacity have a trade-off relationship, we explore this trade-off with message bit lengths $\{32,48,64\}$ . As shown in Tab. 1, bit accuracy and rendering quality show a consistent decline as the message length increases. However, our method maintains a good balance between the invisibility and capacity of the message and outperforms the other methods as the message length becomes longer. Additionally, the performance gap over the variant without FGD widens with the message length. This shows that FGD is effective for embedding large messages. From Fig. 4, our method guarantees a good balance between bit accuracy and rendering quality.

Robustness against image distortion. This section assesses the robustness of our method in situations where the rendered images are subjected to post-processing, which can modify the embedded message within the rendered image. We evaluate the bit accuracy of the rendered images containing the message under various distortions. Tab. 2 shows that the other methods cannot guarantee robustness. In particular, the steganography method is vulnerable to all attacks. Additionally, 3D-GSW without FGD, which does not remove 3D Gaussians, does not fully address robustness when embedding messages into the rendered image. In contrast, our method ensures robustness against all distortions by removing the 3D Gaussians that interfere with robustness.

Robustness against 3D-GS distortion.
Since the purpose of our method is to protect both the rendered image and the core model, it is essential to consider the potential scenario of direct manipulation of the core model in cases of unauthorized model usage. To manipulate 3D-GS, we directly add Gaussian noise to the 3D-GS parameters. Additionally, we randomly remove and clone 3D Gaussians. As shown in Tab. 3, our method is robust against 3D-GS distortion, outperforming the other methods. Furthermore, FGD robustly embeds the message into the rendered image even when the model is distorted.

<table>
<tr><td rowspan="2">Methods</td><td colspan="8">Bit Accuracy (%) ↑</td></tr>
<tr><td>No Distortion</td><td>Gaussian Noise (σ = 0.1)</td><td>Rotation (±π/6)</td><td>Scaling (75%)</td><td>Gaussian Blur (σ = 0.1)</td><td>Crop (40%)</td><td>JPEG Compression (50% quality)</td><td>Combined (Crop, Gaussian Blur, JPEG)</td></tr>
<tr><td>StegaNeRF [20]+3D-GS [15]</td><td>93.15</td><td>54.48</td><td>67.22</td><td>73.98</td><td>73.84</td><td>75.87</td><td>73.28</td><td>76.71</td></tr>
<tr><td>WateRF [11]+3D-GS [15]</td><td>93.42</td><td>77.99</td><td>81.64</td><td>84.50</td><td>87.21</td><td>84.49</td><td>81.88</td><td>64.87</td></tr>
<tr><td>3D-GSW without FGD</td><td>92.64</td><td>80.42</td><td>68.66</td><td>84.81</td><td>78.91</td><td>76.97</td><td>82.71</td><td>84.67</td></tr>
<tr><td><b>3D-GSW (Ours)</b></td><td><b>97.37</b></td><td><b>83.84</b></td><td><b>87.94</b></td><td><b>94.64</b></td><td><b>96.01</b></td><td><b>92.86</b></td><td><b>92.65</b></td><td><b>90.84</b></td></tr>
</table>

Table 2. Quantitative evaluation of robustness under various attacks compared to baseline methods. The results are the average of the Blender, LLFF, and Mip-NeRF 360 datasets. We conduct experiments using 32-bit messages. The best performances are highlighted in bold.

<table>
<tr><td rowspan="2">Methods</td><td colspan="4">Bit Accuracy (%) ↑</td></tr>
<tr><td>No Distortion</td><td>Adding Gaussian Noise (σ = 0.1)</td><td>Removing 3D Gaussians (20%)</td><td>Cloning 3D Gaussians (20%)</td></tr>
<tr><td>StegaNeRF [20]+3D-GS [15]</td><td>93.15</td><td>61.82</td><td>60.24</td><td>75.56</td></tr>
<tr><td>WateRF [11]+3D-GS [15]</td><td>93.42</td><td>73.85</td><td>80.58</td><td>82.32</td></tr>
<tr><td>3D-GSW without FGD</td><td>92.64</td><td>73.20</td><td>87.99</td><td>87.27</td></tr>
<tr><td><b>3D-GSW (Ours)</b></td><td><b>97.37</b></td><td><b>89.11</b></td><td><b>96.87</b></td><td><b>95.99</b></td></tr>
</table>

Table 3. Robustness under model distortion. We show the results for 32-bit messages. The best performances are highlighted in bold.

![](images/15336e2df0b7343d2cc07f536d5972d664c6c40b4dd7191d7c3c236d10103296.jpg)
Figure 5. Rendering quality comparisons of the full method (ours), without FGD, without the gradient mask, without the wavelet-subband loss, and the base model. All images have a 32-bit message embedded.

# 4.3. Ablation study

FGD, Gradient mask, and Wavelet-subband loss. In this section, we evaluate the effectiveness of FGD, the gradient mask, and the wavelet-subband loss. We remove each component from our method and compare the rendering quality and the bit accuracy. Fig. 5 and Tab. 4 show the results when each component is removed. First, we remove the FGD module from our method. In this case, our method tends to show a slight decrease in bit accuracy. Fig. 6 shows that FGD effectively adjusts 3D Gaussians in high-frequency areas, resulting in a quality that is nearly identical to the original. Second, without the gradient mask and wavelet-subband loss, our method performs poorly in preserving rendering quality. When all components are absent, our method fails to maintain an appropriate trade-off between bit accuracy and rendering quality, leading to a significant decrease in rendering quality. These results show the importance of each component in achieving a good balance between rendering quality and bit accuracy.

![](images/66b550ec6b3ee9f13bf264802956f5a830de1e8ff9636ec987b9bf2b8402452c.jpg)
Figure 6. Qualitative result of applying FGD. We analyze the effect of FGD on the rendered image. Through FGD, we effectively control 3D Gaussians in the high-frequency area.

<table>
<tr><td colspan="3">Methods</td><td colspan="4">Ours (3D-GSW)</td></tr>
<tr><td>FGD</td><td>Mask</td><td>\( \mathcal{L}_{wavelet} \)</td><td>Bit Acc (%) ↑</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr>
<tr><td>-</td><td>-</td><td>-</td><td>96.50</td><td>29.96</td><td>0.951</td><td>0.072</td></tr>
<tr><td>✓</td><td>✓</td><td>-</td><td>96.16</td><td>33.56</td><td>0.971</td><td>0.052</td></tr>
<tr><td>✓</td><td>-</td><td>✓</td><td>96.37</td><td>33.26</td><td>0.967</td><td>0.054</td></tr>
<tr><td>-</td><td>✓</td><td>✓</td><td>94.60</td><td>34.27</td><td>0.975</td><td>0.047</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td><b>97.37</b></td><td><b>35.08</b></td><td><b>0.978</b></td><td><b>0.043</b></td></tr>
</table>

Table 4. Quantitative ablation study of 3D-GSW, showing that the best results are achieved when all components are combined. Results are shown for 32-bit messages.

<table>
<tr><td>Subband</td><td>Bit Acc ↑</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr>
<tr><td>\( LL, LH, HL, HH \)</td><td>96.01</td><td>34.93</td><td>0.977</td><td>0.048</td></tr>
<tr><td>\( LH, HL, HH \)</td><td><b>97.37</b></td><td><b>35.08</b></td><td><b>0.978</b></td><td><b>0.043</b></td></tr>
</table>

Table 5. Ablation study on subband selection for the wavelet-subband loss. Results represent the average score across the Blender, LLFF, and Mip-NeRF 360 datasets using 32-bit messages.

Wavelet-subband loss. Increasing the performance of both bit accuracy and rendering quality is challenging. To address this challenge, we design the wavelet-subband loss. Since we modify 3D Gaussians in high-frequency areas, we utilize only the high-frequency subbands $\{LH, HL, HH\}$ to further ensure the rendering quality of those areas. Tab. 4 and Fig. 5 show that the wavelet-subband loss effectively enhances rendering quality. Additionally, Tab. 5 shows that using only high-frequency subbands results in higher rendering quality with high bit accuracy.

![](images/11be972041f1f96d507f9a8cf98d7ba1734bcd18d2f63636a6427180374177ec.jpg)
Figure 7. Qualitative comparison of the proposed gradient mask effect. For objects without a background, our method effectively adjusts 3D Gaussian parameters to prevent rendering beyond the object's boundary, preserving the original quality.

Gradient mask for 3D-GS. Before the fine-tuning process, the pre-trained 3D-GS already has high rendering quality. Because of this property, a large change in the 3D-GS parameters can decrease rendering quality. When a gradient is propagated to a parameter, the gradient mask of our method ensures that the transmitted gradient is smaller than that of previous methods. Our gradient mask controls gradient transmission and minimizes parameter changes, thereby preserving rendering quality. Fig. 7 shows that our gradient mask (with the exponential) enhances rendering quality more effectively than the previous method (without the exponential).

Control the number of 3D Gaussians. In this section, we present more details about the effect of controlling the number of 3D Gaussians.
In the first phase of Frequency-Guided Densification (FGD), we derive the contribution to rendering quality, $V_{\pi}$ , for each 3D Gaussian. Fig. 8 shows that removing 3D Gaussians with a contribution below $10^{-8}$ (removing $28.13\%$ ) has minimal impact on rendering quality and slightly increases bit accuracy. However, when FGD removes more than $50\%$ , both the bit accuracy and the LPIPS performance degrade. From the experimental results, removing approximately $28\%$ of the 3D Gaussians preserves high bit accuracy and rendering quality.

![](images/e5a557f21b94b5fbe992ee8b1d6d9733ea1509fa35a634a95abcd8733147b470.jpg)
Figure 8. The impact of 3D Gaussian removal based on the contribution to rendering quality. Declining 3D Gaussians refers to reducing the number of 3D Gaussians. The results are shown for 32-bit messages.

<table>
<tr><td>Methods</td><td>Fine-tune ↓</td><td>FPS ↑</td><td>Storage ↓</td></tr>
<tr><td>3D-GS [15]</td><td>-</td><td>56.65</td><td>833.89 MB</td></tr>
<tr><td>StegaNeRF [20]+3D-GS [15]</td><td>58h 56m</td><td>56.65</td><td>833.89 MB</td></tr>
<tr><td>WateRF [11]+3D-GS [15]</td><td>6h 47m</td><td>56.65</td><td>833.89 MB</td></tr>
<tr><td><b>3D-GSW (Ours)</b></td><td><b>21m 03s</b></td><td><b>72.68</b></td><td><b>640.21 MB</b></td></tr>
</table>

Table 6. Results on the large-scale Mip-NeRF 360 dataset for 64-bit messages. All scores are averaged across the Mip-NeRF 360 scenes, with 'Fine-tune' referring to the time to embed the message. For a fair comparison, we utilize the pre-trained models provided by 3D-GS [15]. The first row is the performance of the pre-trained models.

Comparison of time and storage. The advantages of 3D-GS are its high rendering quality and real-time rendering. However, the pre-trained 3D-GS contains redundant 3D Gaussians to achieve high-quality results, leading to storage capacity issues and increasing the time required for message embedding. Furthermore, other methods render twice during the fine-tuning process, resulting in inefficient embedding time for 3D-GS. To address these issues, we remove 3D Gaussians without a decrease in rendering quality. Tab. 6 shows that our method reduces the storage of 3D-GS and the message embedding time. In particular, our method enhances real-time rendering. Notably, since the other methods maintain the number of 3D Gaussians, they follow the FPS and storage of the pre-trained 3D-GS [15].

# 5. Conclusion

We introduce a robust watermarking method for 3D Gaussian Splatting (3D-GS), developing a novel densification method, Frequency-Guided Densification (FGD), which ensures real-time rendering speed and robustness while improving rendering quality. We propose a gradient mask to ensure high rendering quality and introduce a wavelet-subband loss to enhance the rendering quality of high-frequency areas. Our experiments show that our method reliably embeds the message and is more robust against model distortion than other methods. Our method provides a strong foundation for exploring the broader implications and challenges of 3D-GS watermarking. It underscores the potential of advanced watermarking techniques to address ownership and security issues in the context of a rapidly evolving 3D industry.
In future work, we aim to extend our approach to embed multi-modal data, further broadening its applications and enhancing its adaptability and utility across a wide range of domains. + +Limitations. Since our method requires a pre-trained decoder, the decoder must be trained before messages can be embedded. Fortunately, a decoder only needs to be trained once per message length; once trained, it can be reused for all subsequent embeddings of that length. + +# Acknowledgements + +This research was supported by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024 (Research on neural watermark technology for copyright protection of generative AI 3D content, RS-2024-00348469, $47\%$ ; International Collaborative Research and Global Talent Development for the Development of Copyright Management and Protection Technologies for Generative AI, RS-2024-00345025, $42\%$ ), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00521602, $10\%$ ), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II190079, Artificial Intelligence Graduate School Program (Korea University), $1\%$ ), and the Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City. + +# References + +[1] Md Amir Baig, Athar A Moinuddin, Ekram Khan, and M Ghanbari. Dft-based no-reference quality assessment of blurred images. Multimedia Tools and Applications, 81(6): 7895-7916, 2022. 2 +[2] Mauro Barni, Franco Bartolini, and Alessandro Piva. Improved wavelet-based watermarking through pixel-wise masking. IEEE transactions on image processing, 10(5): 783-791, 2001.
2 +[3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470-5479, 2022. 5 +[4] Monsij Biswal, Tong Shao, Kenneth Rose, Peng Yin, and Sean McCarthy. Steganerv: Video steganography using implicit neural representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 888-898, 2024. 2 +[5] Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. Revising densification in gaussian splatting, 2024. 4 +[6] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision, pages 333-350. Springer, 2022. 1 +[7] Zilong Chen, Feng Wang, Yikai Wang, and Huaping Liu. Text-to-3d using gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21401-21412, 2024. 2 +[8] Weina Dong, Jia Liu, Lifeng Chen, Wenquan Sun, and Xiaozhong Pan. Stega4nerf: cover selection steganography for neural radiance fields. Journal of Electronic Imaging, 33(3): 033031-033031, 2024. 2 +[9] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501-5510, 2022. 1 +[10] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Sc-gs: Sparse-controlled gaussian splatting for editable dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4220-4230, 2024. 2 +[11] Youngdong Jang, Dong In Lee, MinHyuk Jang, Jong Wook Kim, Feng Yang, and Sangpil Kim. Waterf: Robust watermarks in radiance fields for protection of copyrights, 2024.
1, 2, 3, 5, 6, 7, 8 +[12] Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang, Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, and Jinming Duan. Fourier-net: Fast image registration with band-limited deformation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1015-1023, 2023. 2 +[13] Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Focal frequency loss for image reconstruction and synthesis. In Proceedings of the IEEE/CVF international conference on computer vision, pages 13919-13929, 2021. 2 +[14] Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, and Yuexin Ma. Gaussian-shader: 3d gaussian splatting with shading functions for reflective surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5322-5332, 2024. 2 +[15] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):1-14, 2023. 1, 2, 3, 5, 6, 7, 8 +[16] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5 +[17] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3d gaussian representation for radiance field. arXiv preprint arXiv:2311.13681, 2023. 2 +[18] Jong-Seok Lee and Touradj Ebrahimi. Perceptual video compression: A survey. IEEE Journal of selected topics in signal processing, 6(6):684-697, 2012. 2 +[19] Jiahui Lei, Yufu Wang, Georgios Pavlakos, Lingjie Liu, and Kostas Daniilidis. Gart: Gaussian articulated template models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19876-19887, 2024. 2 +[20] Chenxin Li, Brandon Y Feng, Zhiwen Fan, Panwang Pan, and Zhangyang Wang. Steganerf: Embedding invisible information within neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 441-453, 2023.
2, 4, 6, 7, 8 +[21] Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu. Animatable gaussians: Learning pose-dependent gaussian maps for high-fidelity human avatar modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19711-19722, 2024. 2 +[22] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3d generation via interval score matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6517-6526, 2024. 2 +[23] Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, et al. Vastgaussian: Vast 3d gaussians for large scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5166-5175, 2024. 2 +[24] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8576-8588, 2024. 2 +[25] Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, and Ziwei Liu. Humangaussian: Text-driven 3d human generation with gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6646-6657, 2024. 2 +[26] Ange Lou, Benjamin Planche, Zhongpai Gao, Yamin Li, Tianyu Luan, Hao Ding, Terrence Chen, Jack Noble, and Ziyan Wu. Darenerf: Direction-aware representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5031-5042, 2024. 2 +[27] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654-20664, 2024. 2 +[28] Zhicheng Lu, Xiang Guo, Le Hui, Tianrui Chen, Min Yang, Xiao Tang, Feng Zhu, and Yuchao Dai. 3d geometry-aware deformable gaussian splatting for dynamic view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8900-8910, 2024. 2 +[29] Ziyuan Luo, Qing Guo, Ka Chun Cheung, Simon See, and Renjie Wan. Copyrnerf: Protecting the copyright of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22401-22411, 2023. 3 +[30] Ben Mildenhall, Pratul P Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 38(4):1-14, 2019. 5 +[31] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 1, 5 +[32] S Kother Mohideen, S Arumuga Perumal, and M Mohamed Sathik. Image de-noising using discrete wavelet transform. International Journal of Computer Science and Network Security, 8(1):213-216, 2008. 2 +[33] Arthur Moreau, Jifei Song, Helisa Dhamo, Richard Shaw, Yiren Zhou, and Eduardo Pérez-Pellitero. Human gaussian splatting: Real-time rendering of animatable avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 788-798, 2024. 2 +[34] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG), 41(4):1-15, 2022. 1 +[35] Varad A Pimpalkhute, Rutvik Page, Ashwin Kothari, Kishor M Bhurchandi, and Vipin Milind Kamble. Digital image noise estimation using dwt coefficients.
IEEE transactions on image processing, 30:1962-1972, 2021. 2 +[36] R Radha Kumari, V Vijaya Kumar, and K Rama Naidu. Deep learning-based image watermarking technique with hybrid dwt-svd. The Imaging Science Journal, pages 1-17, 2023. 2 +[37] Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. Advances in neural information processing systems, 34:980-993, 2021. 2 +[38] MS Raval and PP Rege. Discrete wavelet transform based multiple watermarking scheme. In TENCON 2003. Conference on Convergent Technologies for Asia-Pacific Region, pages 935-938. IEEE, 2003. 2 +[39] Daniel Rho, Byeonghyeon Lee, Seungtae Nam, Joo Chan Lee, Jong Hwan Ko, and Eunbyung Park. Masked wavelet representation for compact neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20680-20690, 2023. 2 +[40] Zhijing Shao, Zhaolong Wang, Zhuang Li, Duotun Wang, Xiangru Lin, Yu Zhang, Mingming Fan, and Zeyu Wang. Splattingavatar: Realistic real-time human avatars with mesh-embedded gaussian splatting. arXiv preprint arXiv:2403.05087, 2024. 2 +[41] Mark J Shensa et al. The discrete wavelet transform: wedding the a trous and mallat algorithms. IEEE Transactions on signal processing, 40(10):2464-2482, 1992. 2 +[42] Peining Tao and Ahmet M Eskicioglu. A robust multiple watermarking scheme in the discrete wavelet transform domain. In Internet Multimedia Management Systems V, pages 133-144. SPIE, 2004. 2 +[43] Chunwei Tian, Menghua Zheng, Wangmeng Zuo, Bob Zhang, Yanning Zhang, and David Zhang. Multi-stage image denoising with the wavelet transform. Pattern Recognition, 134:109050, 2023. 2 +[44] Una Tuba and Dejan Zivkovic. Image denoising by discrete wavelet transform with edge preservation. In 2021 13th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), pages 1-4. IEEE, 2021. 
2 +[45] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20310-20320, 2024. 2 +[46] Muyu Xu, Fangneng Zhan, Jiahui Zhang, Yingchen Yu, Xiaoqin Zhang, Christian Theobalt, Ling Shao, and Shijian Lu. Wavenerf: Wavelet-based generalizable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18195-18204, 2023. 2 + +[47] Zhiwen Yan, Weng Fei Low, Yu Chen, and Gim Hee Lee. Multi-scale 3d gaussian splatting for anti-aliased rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20923–20931, 2024. 2 +[48] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20331-20341, 2024. 2 +[49] Hu Yu, Jie Huang, Feng Zhao, Jinwei Gu, Chen Change Loy, Deyu Meng, Chongyi Li, et al. Deep fourier up-sampling. Advances in Neural Information Processing Systems, 35: 22995-23008, 2022. 2 +[50] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. 5, 6 +[51] Xuanyu Zhang, Jiarui Meng, Runyi Li, Zhipei Xu, Yongbing Zhang, and Jian Zhang. Gs-hider: Hiding messages into 3d gaussian splatting. arXiv preprint arXiv:2405.15118, 2024.2 +[52] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657–672, 2018. 
2, 3, 6 \ No newline at end of file diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/images.zip b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..306115e49c81295a753a13b6bd360b91503053fc --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4677af49f272aee1688d75a3ccf943531b29ee1ca103d65d4c3789631fccacf +size 680008 diff --git a/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/layout.json b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3a61a0f99d338437c5304215cfae6b50cb946c90 --- /dev/null +++ b/CVPR/2025/3D-GSW_ 3D Gaussian Splatting for Robust Watermarking/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edc950fcf896fb88eb7c20fc6bef823101c5592300c5bab1011ec48ec0d652ff +size 538032 diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_content_list.json b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..878fa2df26344ff958729dca2f71bb7dbdc2edbd --- /dev/null +++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46a42900fdee2c85a68eec6a64c457d12dad2d6cc188cd200e7b0f9821acdf4b +size 70876 diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_model.json b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_model.json new file mode 100644 index 0000000000000000000000000000000000000000..96ebd89b8a21113aa14aca72855bce2d7cc6beb8 --- /dev/null 
+++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd90eafcb981f656db1e49e1e8754b8729d015250d01287848ba7d14e33f07a9 +size 86002 diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_origin.pdf b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..748589701d0768570316c58bdc00906b9661026b --- /dev/null +++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/756891c7-db71-4b8d-abcc-7c843cd22944_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51f44718321f76cf0308a48dc48c633e10f3b7c39ab8ee5f7389c37b400a8566 +size 7242337 diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/full.md b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b48dc5e6d35fae530e1d654fc10becf0adcbfc31 --- /dev/null +++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/full.md @@ -0,0 +1,300 @@ +# 3D-HGS: 3D Half-Gaussian Splatting + +Haolin Li + +Jinyang Liu + +Mario Sznaier + +Octavia Camps + +{li.haoli, liu.jiyan, m.sznaier, o.camps} @ northeastern.edu Northeastern University Boston, MA + +# Abstract + +Photo-realistic image rendering from 3D scene reconstruction has advanced significantly with neural rendering techniques. Among these, 3D Gaussian Splatting (3D-GS) outperforms Neural Radiance Fields (NeRFs) in quality and speed but struggles with shape and color discontinuities. We propose 3D Half-Gaussian (3D-HGS) kernels as a plug-and-play solution to address these limitations. Our experiments show that 3D-HGS enhances existing 3D-GS methods, achieving state-of-the-art rendering quality without compromising speed. More demos and code are available at https://lihaolin88.github.io/CVPR-2025-3DHGS. + +# 1. 
Introduction + +The pursuit of photo-realistic and real-time rendering of 3D scenes is a core research focus in both academic and industrial sectors, with wide-ranging applications including virtual reality [12], media production [18], autonomous driving [28, 32], and extensive scene visualization [14, 16, 22]. Traditionally, meshes and point clouds have been the preferred methods for 3D scene representations due to their explicit compatibility with fast GPU/CUDA-based rasterization techniques. However, these methods often result in reconstructions of lower quality and renderings plagued by various artifacts. In contrast, recent advancements in Neural Radiance Fields (NeRF) [17] introduced continuous scene representations leveraging Multi-Layer Perceptron architectures (MLP). This approach optimizes novel-view synthesis through volumetric ray-marching techniques, providing significantly more realistic renderings. However, NeRF methods are characterized by their slow speed [5]. + +![](images/3e81d7ffb184f56bf2a5f3403e38b073dbf8b7820bfc32767d52f1e5987456fd.jpg) +(a) 3D Gaussian kernels + +![](images/173bf243cc2ba62ea2b0575df1654bfee31d5f1c959dfbb75c46c8ee6a395a63.jpg) +(b) 3D Half-Gaussian kernels +Figure 1. Illustration of the 3D-GS kernels and the proposed 3D Half-Gaussian kernels, where each half of the kernel is allowed to have different opacity parameters. + +Recently, 3D Gaussian splatting (3D-GS) [9] has emerged as a state-of-the-art approach, outperforming existing methods in terms of both rendering quality and speed. The concept of using 3D Gaussians to parameterize a scene dates back to the early 2000s [34, 35]. This technique models a 3D scene with a collection of 3D Gaussian reconstruction kernels parameterized by 59 parameters representing location, scale, orientation, color, and opacity of the kernel. Initially, each Gaussian is derived from a point in a Structure from Motion (SfM) reconstruction. 
These parameters are subsequently refined through a dual process involving the minimization of image rendering loss and adaptive kernel density adjustment. The efficiency of 3D-GS is enhanced by a GPU-optimized, tile-based rasterization method, enabling real-time rendering of complex 3D scenes. Employing 3D Gaussians as reconstruction kernels simplifies the volumetric rendering process and facilitates the integration of a low-pass filter within the kernel. This + +![](images/0a796eb92b480c129a378dc5914e6122424ad00798c87f5c58648592908402e3.jpg) + +![](images/4ca03a16332589e3199373844088e91694917d80fa5a89e5cc4da028763d4239.jpg) +(a) Five Gaussians fitting a square + +![](images/6e3a8f7165f7c664f653f7cc5236b67728329480a0902b9f969b23171ec7e87b.jpg) +(b) Three HGs fitting a square + +![](images/2933d37ef3f8a3bf0bdd83882bde918a0d944219a724829adb314de89016a9f3.jpg) +(c) Gaussian and Half-Gaussian in spatial domain +(d) Gaussian and Half-Gaussian in Frequency domain +Figure 2. Comparison of Half-Gaussian and Gaussian Kernels fitting a square function and their Fourier Transforms. (a): fitting a square function with 5 Gaussian kernels, and (b): fitting a square with 4 Half-Gaussian kernels. When approximating sharp edges, the Half-Gaussian kernels achieve a lower error loss (1.85) compared to Gaussian kernels (2.97). Figures (c) and (d) illustrate the Gaussian and Half-Gaussian kernels in both the spatial and frequency domains, where the Half-Gaussian demonstrates a higher bandwidth than the Gaussian kernel, indicating its superior ability to capture high-frequency components. + +addition effectively mitigates aliasing during the resampling of scene data to screen space [33, 35]. + +While 3D-GS benefits from employing 3D Gaussians, using a Gaussian kernel can be inefficient and lead to inaccuracies when modeling discontinuous functions, which are common in 3D scenes at object boundaries and texture-rich areas. As Fig. 
2(a) shows, using Gaussian kernels leads to a Gibbs phenomenon, with high peaks overshooting and undershooting the function value and non-vertical transitions (i.e. blurry edges). + +![](images/f6bc0f937795b551b0592a8a2815efc6d9124475bc0229b6bb2fc86c25dc5807.jpg) +Figure 3. Performance (PSNR $\uparrow$ ) versus rendering speed for several state-of-the-art methods [1, 9, 10, 15, 29] with Gaussian kernels and the proposed half-Gaussian kernels on the Mip-NeRF360 dataset [1]. In all cases, using half-Gaussian kernels resulted in significant PSNR improvements, with similar or better rendering speed than the corresponding 3D Gaussian-based method. + +To address this problem, we propose the use of 3D-Half-Gaussian Splatting (3D-HGS), which employs a 3D-Half Gaussian as a novel reconstruction kernel. As seen in Fig. 2(b), using this kernel significantly reduces the Gibbs oscillatory effect and better fits the vertical discontinuities. This can be explained by the fact that Half-Gaussians have a higher content at higher frequencies than full Gaussians do (Fig. 2 (c) and (d)). + +A pair of 3D half-Gaussians can be easily represented by adding the parameters of a vector normal to a plane splitting a 3D Gaussian through its center, and allowing each half to have different opacity values (Fig. 4(b)). By introducing this plane, the kernel can efficiently capture high-frequency information at discontinuities, while preserving the essential characteristics of the original 3D Gaussian kernel (Fig. 2). This preservation is facilitated by the symmetric pairing of 3D Half-Gaussians, which also allows seamless representation of full 3D Gaussians (see for example the center Gaussian in Fig. 2(b)). + +Thus, the proposed 3D-Half-Gaussian kernel preserves the key parameters in the original 3D-GS and provides the capability to be easily applied to existing 3D Gaussian kernel-based methods as a plug-and-play kernel. Our experiments (see Fig.
3 and section 4.2) show that using the proposed 3D Half-Gaussian as the reconstruction kernel achieves state-of-the-art (SOTA) rendering quality on MipNeRF-360, Tanks&Temples, and Deep Blending datasets, with similar or better rendering speed than the corresponding 3D Gaussian-based methods. + +Our main contributions are: (1) We introduce a novel plug-and-play reconstruction kernel, the 3D Half-Gaussian, designed to enhance the performance of 3D Gaussian Splatting (3D-GS). (2) Our proposed kernel achieves state-of-the-art novel view synthesis performance across multiple datasets without compromising rendering frame rate. (3) We demonstrate the versatility and effectiveness of our kernel by applying it to other state-of-the-art methods, showcasing its broad applicability. Our code will be available on our GitHub project page. + +# 2. Related Work + +3D reconstruction and Novel View Synthesis (NVS) have long been key goals in computer vision. NeRFs [17] have significantly advanced NVS, enabling highly realistic image synthesis from new viewpoints. More recently, 3D-GS has set new state-of-the-art benchmarks in this field. This section reviews both historical and recent developments in 3D reconstruction and NVS, with a detailed analysis of 3D-GS, its methods, achievements, and impact on the field. + +# 2.1. Novel View Synthesis + +Before the advent of NeRFs, Multi-View Stereo (MVS) [20] and Structure from Motion (SfM) were commonly utilized for reconstructing 3D scenes from multiple viewpoint images. MVS relies on feature extraction from diverse images to correlate viewpoints and produce a final reconstruction, typically represented as a colored mesh grid or point cloud. However, due to its reliance solely on image features, achieving high-quality reconstructions can be challenging. SfM [19] employs multi-view images to generate a point cloud representing the 3D scene.
In contrast to MVS, SfM excels in accurately estimating camera poses for different images, while MVS provides more precise 3D estimations of the scene. Notably, SfM's simultaneous estimation of camera positions and point clouds makes it a preferred preprocessing step in recent advanced NVS methods. + +NeRFs [17] stand as a significant milestone in NVS, demonstrating remarkable achievements in tasks such as image and view synthesis alongside scene reconstruction. Unlike previous methods, a NeRF represents the 3D scene using a radiance field, treating the entire scene as a continuous function parameterized by position. + +3D-GS [9] is a recently introduced NVS technique. This method boasts a remarkable reduction in both training and rendering times, surpassing NeRF methods. Diverging from its predecessors, 3D-GS does not rely on training neural networks or any type of network architecture. Instead, it initiates from a point cloud. Unlike treating each point discretely, 3D-GS conceptualizes them as 3D Gaussian entities, each possessing a unique size and orientation, and spherical harmonics to depict their color. Subsequently, during the splatting stage, these 3D Gaussians are projected onto a 2D plane, with their appearances accumulated to generate renderings for a given viewing angle. + +# 2.2. Splatting methods + +The original concept of splatting was introduced by Westover [25, 26] and improved by Zwicker et al. [33-35]. Recently, the 3D-GS technique has achieved great success in photo-realistic neural rendering [9, 27]. + +Although 3D-GS has attained state-of-the-art performance in terms of rendering speed and quality, opportunities for further enhancements remain. Various studies [2, 3] have introduced modifications to the original framework. For instance, Mip-splatting [29] limits the frequency of the 3D representation to no more than half the maximum sampling frequency, as determined by the training images.
Analytic-Splatting [13] employs a logistic function to approximate the Gaussian cumulative distribution function, thus refining each pixel's intensity response to minimize aliasing. Similarly, SA-GS [21] adapts a 2D low-pass filter at test time based on rendering resolution and camera distance. Scaffold-GS [15] employs voxel grids for initializing 3D scenes and utilizes Multi-Layer Perceptrons (MLPs) to constrain and learn voxel features. Following a similar paradigm, SAGS [23] incorporates a graph neural network to capture structural relationships between individual Gaussians, preserving geometric consistency across neighboring regions during rendering. FreGS [30] introduces frequency-domain regularization to the rendered 2D images to enhance the recovery of high-frequency details. 2D-GS [8] aligns 3D scenes with 2D Gaussian kernels to improve surface normal representations. MCMC-GS [10] applies Stochastic Gradient Langevin Dynamics (SGLD) to iteratively refine Gaussian positions. Lastly, GES [6] employs generalized exponential kernels to reduce the memory required to store 3D information. + +The two papers most closely related to ours are 2D-GS [8] and GES [6], as they also focus on switching the reconstruction kernels. However, GES did not change the rasterizer but learned a scaling factor for each of the points to approximate different kernels. This method faces difficulties in complex 3D scenes. On the other hand, instead of performing volumetric rendering as in the original 3D-GS, 2D-GS performs surface rendering. Our method still learns a volumetric rendering of scenes to retain rendering performance while enabling accurate modeling of discontinuous functions.
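The modeling advantage behind this choice — a pair of half-Gaussians collapses to a plain Gaussian when both opacities agree, and produces a sharp jump at the splitting point when they differ (cf. Fig. 2) — can be illustrated with a minimal 1D sketch. This is our illustration, not the authors' code; the function names are ours:

```python
import numpy as np

def gaussian_1d(x, mu=0.0, sigma=1.0):
    # Unnormalized Gaussian; as in 3D-GS, the normalizing constant is folded into the opacity.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def half_gaussian_pair(x, mu=0.0, sigma=1.0, alpha_left=1.0, alpha_right=1.0):
    # Two half-Gaussians split at mu, each with its own opacity value.
    g = gaussian_1d(x, mu, sigma)
    return np.where(x < mu, alpha_left * g, alpha_right * g)

x = np.linspace(-3, 3, 601)
# Equal opacities: the pair collapses to an ordinary (scaled) Gaussian.
same = half_gaussian_pair(x, alpha_left=0.8, alpha_right=0.8)
# Different opacities: a jump of height (alpha_right - alpha_left) * g(mu) at mu,
# which is what lets the kernel fit discontinuities without Gibbs-style ringing.
edge = half_gaussian_pair(x, alpha_left=0.1, alpha_right=0.9)
```

With `alpha_left == alpha_right` the profile equals `0.8 * gaussian_1d(x)` exactly, matching the point made in the method section that 3D-GS is a special case of the half-Gaussian pair.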
+ +![](images/aa4f3fec077eb3a7feca78c4b00f3c27588217b4de12df76efee0e7bbafebd70.jpg) + +![](images/5df128fa82f814c17f3b1fc66720c5ebfbb951a9dbd2bfa27338af7dc8352619.jpg) +(a) The training and rendering pipeline for 3D Half-Gaussian +(b) 3D Half-Gaussian pair +(c) Mapping the Half-Gaussian to image plane in the ray space +Figure 4. Illustration of the 3D-HG kernel, and the mapping of a pair of 3D Half-Gaussians to a 2D image. + +# 3. Method + +3D-GS [9] models 3D scenes using 3D Gaussian kernels [34, 35]. However, these kernels frequently encounter difficulties in accurately modeling 2D/3D discontinuous functions, which are prevalent in 3D scenes at edges, corners of objects, and texture-rich areas. The inefficiencies of the 3D Gaussian kernel in representing shape and color discontinuities can compromise the model's effectiveness. To overcome these limitations, we propose the use of a Half-Gaussian kernel for novel view synthesis, as illustrated in Fig. 4. Section 3.1 presents an overview of the foundational concepts relevant to 3D Gaussian Splatting. In Section 3.2, we introduce the 3D half-Gaussian kernel, followed by an in-depth description of the 3D-HGS rasterizer in Section 3.3. Finally, Section 3.4 provides a detailed account of the splatting process. + +# 3.1. Preliminaries + +3D-GS [9] represents 3D scenes with parameterized anisotropic 3D Gaussian kernels [34, 35]. It starts from an initial set of 3D Gaussians located at a set of sparse points obtained using a Structure-from-Motion (SfM) step. Then, the 3D Gaussians are mapped to 2D images through a GPU-optimized, tile-based rasterization. The parameters of these Gaussians are optimized, pruned, and added based on a loss function comparing the rendered images and ground-truth images. + +Each of the 3D Gaussians is parameterized by its position (mean) $\mu$ , covariance-related scaling matrix, opacity $\alpha$ , and spherical harmonics coefficients for color $c$ .
A 3D elliptical Gaussian $G_{\Sigma}(\mathbf{x})$ centered at a point $\mu$ with covariance matrix $\pmb{\Sigma}$ is given by: + +$$ +G _ {\boldsymbol {\Sigma}} (\mathbf {x} - \mu) = \frac {1}{(2 \pi) ^ {3 / 2} | \boldsymbol {\Sigma} | ^ {\frac {1}{2}}} e ^ {- \frac {1}{2} (\mathbf {x} - \mu) ^ {T} \boldsymbol {\Sigma} ^ {- 1} (\mathbf {x} - \mu)} \tag {1} +$$ + +where $\mathbf{x}$ and $\mu$ are the column vectors $[x,y,z]^T$ and $[\mu_x,\mu_y,\mu_z]^T$ , respectively, and $\pmb{\Sigma}$ is a positive definite $3\times 3$ matrix. The covariance under the world coordinate system $\pmb{\Sigma}^w$ is further parameterized by a scaling matrix $S$ and a rotation matrix $R$ to maintain its positive definite property: + +$$ +\boldsymbol {\Sigma} ^ {\boldsymbol {w}} = \boldsymbol {R} \boldsymbol {S} \boldsymbol {S} ^ {T} \boldsymbol {R} ^ {T} \tag {2} +$$ + +Given a viewing transformation $W$ and the Jacobian of the affine approximation of the projective transformation $J$ , the covariance matrix $\pmb{\Sigma}$ in the camera coordinate system is given by: + +$$ +\boldsymbol {\Sigma} = J W \boldsymbol {\Sigma} ^ {w} W ^ {T} J ^ {T} \tag {3} +$$ + +Since the opacity parameter $\alpha$ is learned as part of the rendering function, we can merge the constant term $\frac{1}{(2\pi)^{3/2} |\boldsymbol{\Sigma}|^{\frac{1}{2}}}$ into $\alpha$ . Integrating a normalized 3D Gaussian along one coordinate axis results in a normalized 2D Gaussian.
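As an unofficial sketch of Eqs. 2 and 3, the world-space covariance can be assembled from a scale vector and a rotation and then mapped into camera space. Representing $R$ by a quaternion follows the public 3D-GS implementation and is an assumption here, not something stated in this text:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion (w, x, y, z).
    Assumption: quaternion storage as in the public 3D-GS code; the text only names a rotation matrix R."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def world_covariance(scale, q):
    """Eq. 2: Sigma^w = R S S^T R^T, positive semi-definite by construction."""
    R, S = quat_to_rot(q), np.diag(scale)
    return R @ S @ S.T @ R.T

def camera_covariance(sigma_w, W, J):
    """Eq. 3: Sigma = J W Sigma^w W^T J^T, with W the (3x3) viewing rotation
    and J the Jacobian of the affine approximation of the projection."""
    return J @ W @ sigma_w @ W.T @ J.T

# Identity rotation: the world covariance is just the squared scales on the diagonal.
sigma_w = world_covariance([0.5, 1.0, 2.0], [1.0, 0.0, 0.0, 0.0])
sigma_cam = camera_covariance(sigma_w, np.eye(3), np.eye(3))
```

The 2D covariance actually used for splatting is then the top-left $2\times 2$ block of `sigma_cam`, as the preliminaries go on to state.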
Thus, 3D Gaussians $G(\mathbf{x})$ can be efficiently transformed to 2D Gaussians $\hat{G}(\hat{\mathbf{x}})$ on the image plane using a ray coordinate system representation [33]:

$$
\int_{\mathbb{R}} G_{\boldsymbol{\Sigma}}(\mathbf{x} - \mu)\, dz = \hat{G}_{\hat{\boldsymbol{\Sigma}}}(\hat{\mathbf{x}} - \hat{\mu}) \tag{4}
$$

where $\hat{\mathbf{x}} = [x,y]^T$, $\hat{\mu} = [\mu_x,\mu_y]^T$, and the covariance $\hat{\Sigma}$ is easily obtained by taking the top-left $2\times 2$ sub-matrix of the transformed $\boldsymbol{\Sigma}$:

$$
\boldsymbol{\Sigma} = \left( \begin{array}{ccc} a & b & c \\ b & d & e \\ c & e & f \end{array} \right) \Leftrightarrow \left( \begin{array}{cc} a & b \\ b & d \end{array} \right) = \hat{\boldsymbol{\Sigma}} \tag{5}
$$

Finally, the pixel values on the 2D image are obtained by $\alpha$-blending:

$$
C = \sum_{i \in \mathcal{N}} c_{i}\sigma_{i} \prod_{j=1}^{i-1}(1 - \sigma_{j}), \quad \sigma_{i} = \alpha_{i}\hat{G}(\hat{\boldsymbol{x}} - \hat{\boldsymbol{\mu}}) \tag{6}
$$

where $\hat{\mathbf{x}}$ is the queried pixel position and $\mathcal{N}$ is the set of sorted 2D Gaussians associated with $\hat{\mathbf{x}}$.

# 3.2. 3D Half-Gaussian Kernel

In this section, we provide a detailed description of the proposed kernel. We begin by defining the Half-Gaussian kernel and describing its parameterization.

We propose to use 3D Half-Gaussian kernels, in which each half can have a different opacity value, $\alpha_{1}$ and $\alpha_{2}$. The Half-Gaussians are obtained by splitting a 3D Gaussian with a plane through its center. Note that this representation includes the 3D-GS representation as the special case $\alpha_{1} = \alpha_{2}$. However, incorporating a planar surface into the kernel makes it possible to represent sharp changes, such as edges and textures, as well as planar surfaces, more accurately.
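As a toy illustration of the kernel formalized just below (Eq. 7): each member of a pair is the full Gaussian masked to one side of the plane $\mathbf{n}^T(\mathbf{x}-\mu) \ge 0$, and the complementary half is obtained by flipping $\mathbf{n}$; the per-half opacities $\alpha_1, \alpha_2$ are only applied later, during rendering. All values here are made up:

```python
import numpy as np

def gaussian3d(x, mu, Sigma_inv):
    """Unnormalized 3D Gaussian (the 1/((2*pi)^{3/2}|Sigma|^{1/2}) term is folded into alpha)."""
    d = x - mu
    return np.exp(-0.5 * d @ Sigma_inv @ d)

def half_gaussian3d(x, mu, Sigma_inv, n):
    """Unnormalized 3D Half-Gaussian: the full kernel masked to n^T (x - mu) >= 0."""
    d = x - mu
    return gaussian3d(x, mu, Sigma_inv) if n @ d >= 0 else 0.0

mu = np.zeros(3)
Sigma_inv = np.linalg.inv(np.diag([1.0, 2.0, 0.5]))
n = np.array([0.0, 0.0, 1.0])                 # split along the z = 0 plane

x = np.array([0.3, -0.2, 0.4])                # a point strictly on the n side
g  = gaussian3d(x, mu, Sigma_inv)
h1 = half_gaussian3d(x, mu, Sigma_inv,  n)    # "upper" half
h2 = half_gaussian3d(x, mu, Sigma_inv, -n)    # complementary half (flipped normal)
# The two halves tile the full kernel (up to the measure-zero boundary plane).
assert h1 == g and h2 == 0.0
```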
As seen below, the increase in the number of learned parameters is very modest: the 3D normal of the splitting plane and one additional opacity term. Furthermore, the plane normal can be stored in the normal field, which is available but unused in the original 3D-GS kernel implementation. Thus, 3D-HGS effectively increases the memory requirements by only an extra opacity coefficient, while it increases the computational cost by only a multiplication with a scaling factor.

Formally, a 3D Half-Gaussian kernel is described by:

$$
HG_{\boldsymbol{\Sigma}}(\mathbf{x} - \boldsymbol{\mu}) = \left\{ \begin{array}{ll} e^{-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})} & \mathbf{n}^{T}(\mathbf{x} - \boldsymbol{\mu}) \geq 0 \\ 0 & \mathbf{n}^{T}(\mathbf{x} - \boldsymbol{\mu}) < 0 \end{array} \right. \tag{7}
$$

where $\mathbf{n} = [n_1, n_2, n_3]^T$ is the normal of the splitting plane through the full Gaussian center, represented as a column vector; the complementary Half-Gaussian is obtained by simply flipping the sign of the normal. The integration of a 3D Half-Gaussian along the $z$ axis yields a result similar to Eq. 4, except for a scaling factor related to the normal of the splitting plane:

$$
\int_{\mathbf{n}^{T}(\mathbf{x} - \mu) \geq 0} HG_{\boldsymbol{\Sigma}}(\mathbf{x} - \mu)\, dz = \frac{1}{2} I(x, y)\, \hat{G}_{\hat{\boldsymbol{\Sigma}}}(\hat{\mathbf{x}} - \hat{\boldsymbol{\mu}}) \tag{8}
$$

where

$$
I(x, y) = \operatorname{erfc}\left(-\frac{(n_{1}x + n_{2}y)/n_{3} + \mu_{z|xy}}{\sqrt{2}\,\sigma_{z|xy}}\right) \tag{9}
$$

![](images/e9a87157aeb86ad0f5f88be622a92614713934d4ca143091fbecdf022a8f567a.jpg)
Figure 5. (Left) Histograms of the 3DGS opacity values and of the 3D-HGS mean opacity values of corresponding halves, trained on the Bonsai scene.
The 3D-HGS mean opacity values cluster in a lower range than those of 3DGS, implying that treating both halves identically while rendering increases the number of kernels involved in each tile, slowing down the overall process. (Right) Histogram of the differences between the two opacity values of a Half-Gaussian pair, normalized by the maximum opacity value within each kernel. Over $75\%$ of the 3D-HGS kernels have normalized opacity differences over 0.5, highlighting that each Half-Gaussian often represents a distinct effective area in the rendering space.

Here, $\mu_{z|xy}$ and $\sigma_{z|xy}$ denote the mean and standard deviation of the distribution of $z$ conditioned on given $x, y$, i.e., $z \mid x, y \sim \mathcal{N}(\mu_{z|xy}, \sigma_{z|xy})$.

Eq. 8 and Eq. 9 give a closed-form solution to calculate the integral of a 3D Half-Gaussian. The right-hand side of Eq. 8 consists of two factors: $I(x,y)$, a scaling factor related to the learned normal of the splitting plane, and $\hat{G}_{\hat{\boldsymbol{\Sigma}}}(\hat{\mathbf{x}} - \hat{\mu})$, which remains the same 2D Gaussian as in Eq. 4. For a detailed derivation, please refer to the supplementary material.

# 3.3. 3D Half-Gaussian Rasterization

For rasterization, we follow a process similar to 3D-GS, adapted to the proposed 3D Half-Gaussian reconstruction kernel. For enhanced parameter efficiency, we jointly represent pairs of Half-Gaussians: the parameters for mean, rotation, scaling, and color are shared between the two halves, while we learn the orientation of the splitting plane and two opacity parameters, one for each side of the splitting plane.
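The closed form in Eqs. 8–9 can be sanity-checked numerically: along a ray at $(x, y)$, the fraction of the kernel's line integral kept by the half mask should equal $\frac{1}{2}I(x,y)$. A sketch with a made-up covariance and normal (assuming $n_3 > 0$ and centered coordinates):

```python
import numpy as np
from math import erfc, sqrt

Sigma = np.array([[1.0, 0.2, 0.1],
                  [0.2, 1.5, 0.3],
                  [0.1, 0.3, 0.8]])
n = np.array([0.4, -0.3, 1.0])            # splitting-plane normal, n3 > 0
xy = np.array([0.7, -0.4])                # a ray position (centered coordinates)

# Conditional stats of z | x, y for a zero-mean Gaussian with covariance Sigma.
A = Sigma[2, :2] @ np.linalg.inv(Sigma[:2, :2])
mu_c = A @ xy
sigma_c = sqrt(Sigma[2, 2] - A @ Sigma[:2, 2])

# Eq. 9: closed-form scaling factor.
I = erfc(-((n[0]*xy[0] + n[1]*xy[1]) / n[2] + mu_c) / (sqrt(2.0) * sigma_c))

# Numerical reference: integrate the kernel along z with / without the half mask.
Sigma_inv = np.linalg.inv(Sigma)
z = np.linspace(-10.0, 10.0, 20001)
pts = np.column_stack([np.broadcast_to(xy, (z.size, 2)), z])
vals = np.exp(-0.5 * np.einsum('ij,jk,ik->i', pts, Sigma_inv, pts))
mask = pts @ n >= 0                       # n^T x >= 0, i.e. z >= -(n1 x + n2 y)/n3
ratio = vals[mask].sum() / vals.sum()     # fraction of the line integral kept
assert abs(ratio - 0.5 * I) < 1e-3        # matches Eq. 8's (1/2) I(x, y) factor
```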
Thus, the volumetric alpha blending for each pixel on the image can be expressed as:

$$
C = \sum_{i \in \mathcal{N}} c_{i} H\hat{G}_{i}(\hat{\mathbf{x}} - \hat{\mu}_{i}) \prod_{j=1}^{i-1}\left(1 - H\hat{G}_{j}(\hat{\mathbf{x}} - \hat{\mu}_{j})\right) \tag{10}
$$

where $\mathcal{N}$ is the sorted 3D Half-Gaussian set for the pixel and $H\hat{G}(\hat{\mathbf{x}} - \hat{\mu})$ is the integration of both halves of a 3D Half-Gaussian pair:

$$
H\hat{G}(\hat{\mathbf{x}} - \hat{\mu}) = \frac{1}{2}\left\{ 2\alpha_{2} + \left(\alpha_{1} - \alpha_{2}\right) I(x, y) \right\} \hat{G}_{\hat{\Sigma}}(\hat{\mathbf{x}} - \hat{\mu}) \tag{11}
$$

![](images/86a43c8cd3b23363a26d1fa8e41148e9242a8137681c64e9023687210b253e18.jpg)
Figure 6. Left: Method for calculating the bounding rectangle of a Half-Gaussian. Middle: Visualization when one half of the Gaussian is transparent. Right: Visualization when each side of the Gaussian has distinct opacity levels.

![](images/5197e65d39fb049c546241b3bdfd771a556d9b4ddd27eabed63d1da0d892090c.jpg)

![](images/e7269cd4b10e21b61b9daea2a9b56928621e78956a9384216dd2f63504da60f7.jpg)

# 3.4. 3D Half-Gaussian Splatting

The 3D Half-Gaussian kernel has a splitting plane along with distinct opacity values for each half (Fig. 4b). Unlike conventional splatting methods, which compute the projection shape of a Gaussian as a whole, the Half-Gaussian kernel requires specialized handling.

Naively applying standard splatting uniformly across both halves is inefficient since, in general, a large number of kernels have one of their halves fully transparent. This is supported by Fig. 5, where the left plot shows that the average opacity values per Half-Gaussian kernel cluster in a lower range than when using the full Gaussian kernel, and the right plot shows that over $75\%$ of the kernels have normalized opacity differences greater than 0.5.
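Per pixel, Eq. 11 collapses a Half-Gaussian pair to one scalar weight, which Eq. 10 composites front to back; a minimal sketch with made-up values (note that the pair reduces to a plain 3D-GS kernel when $\alpha_1 = \alpha_2$):

```python
def pair_weight(g_hat, I_xy, alpha1, alpha2):
    """Eq. 11: opacity-blended value of one Half-Gaussian pair at a pixel."""
    return 0.5 * (2.0 * alpha2 + (alpha1 - alpha2) * I_xy) * g_hat

def composite(colors, weights):
    """Eq. 10: front-to-back alpha blending over the depth-sorted set N."""
    C, T = 0.0, 1.0                       # accumulated color, transmittance
    for c, w in zip(colors, weights):
        C += c * w * T
        T *= 1.0 - w
    return C

# With alpha_1 == alpha_2 the I(x, y) term cancels: plain 3D-GS blending.
assert abs(pair_weight(0.7, 0.3, 0.4, 0.4) - 0.4 * 0.7) < 1e-12

# Two hypothetical sorted pairs: (2D Gaussian value, I(x, y), alpha_1, alpha_2).
weights = [pair_weight(0.9, 1.2, 0.8, 0.1), pair_weight(0.6, 0.4, 0.5, 0.5)]
pixel = composite([1.0, 0.5], weights)
assert 0.0 <= pixel <= 1.0
```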
Therefore, treating both halves identically while rendering would increase the number of kernels involved in each tile, slowing down the overall process.

Thus, we propose an efficient 3D Half-Gaussian splatting technique that independently calculates the valid region for each half, as illustrated in Fig. 6. Initially, we project the ellipsoidal splitting surface onto the 2D image plane, defining the inner rectangle (relative to the projected center) of the projected Half-Gaussian (green dashed rectangle in Fig. 6). To determine the outer bound, inspired by [4], we compute the height and width of the tangent edges relative to the ellipse's center (blue dashed lines in Fig. 6), thereby establishing the limits of the valid region. This region is encapsulated by the minimal bounding rectangle that fully encloses the projected splitting ellipse along with its tangent edges (red rectangle in Fig. 6).

In practice, if one side of the Half-Gaussian is entirely transparent, only the opaque half is splatted, as depicted in Fig. 6, middle. When both halves contribute, the valid region is determined by the outer bounding tangent edges and the projected splitting ellipse (Fig. 6, right).

# 4. Experiments

# 4.1. Experimental Setup

Datasets and Metrics. Following the published literature, we tested our design on 11 (indoor and outdoor) scenes from various datasets: 7 scenes from the Mip-NeRF360 dataset [1], 2 scenes from Tanks & Temples [11], and 2 scenes from Deep Blending [7].

Consistent with prior studies, we use PSNR, SSIM [24], and LPIPS [31] to measure the performance on each dataset. We provide the averaged metrics over all scenes for each dataset in the main paper and give the full quantitative results for each scene in the Appendix. We also report rendering times and model size.

Baselines.
To evaluate the general improvement brought by using a Half-Gaussian kernel in neural rendering, we ran experiments based on four baseline methods: vanilla-3D-GS [9] and three of its extensions Scaffold-GS [15], Mip-Splatting [29], and GS-MCMC [10], where we replaced the reconstruction kernel with the Half-Gaussian kernel. We denote our models as 3D-HGS, Scaffold-HGS, Mip-Splatting-HGS, and HGS-MCMC, respectively. We compared performance against state-of-the-art 3D reconstruction methods: 3D-GS [9], Mip-NeRF [1], 2D-GS [8], FreGS [30], Scaffold-GS [15], GES [6], Mip-Splatting [29], and GS-MCMC [10]. + +Reconstruction Kernels. We compared the rendering performance of the half-Gaussian kernel against three other reconstruction kernels: the original 3D-GS kernel, the 2D Gaussian kernel (2D-GS) and the generalized exponential kernels (GES), which were proposed in [8] and [6], respectively. For a fair comparison, we used the same loss function and training iterations as the original 3D-GS. For details on the implementation of different kernels, please refer to the supplementary material. + +Implementation. For our implementation, we utilize the three (unused) normal parameters in 3D-GS to represent the normal vector of the splitting plane. Additionally, we learn one opacity for each half of the Gaussian. This results in memory increasing by only one additional parameter for each reconstruction kernel compared to the 3D-GS. The forward and backward passes of the rasterizer are modified based on the vanilla 3D-GS and Eq. 11. For 3D-HGS and Mip-Splatting-HGS, we adhered to the training settings and hyperparameters used in [9], and [29], setting the learning rate for the normal vector at 0.3 for the Deep Blending dataset and 0.003 for the other datasets. For the implementation of Scaffold-HGS, we did not increase the number of parameters in each Gaussian. 
Following the referenced paper, we employed an MLP to learn the normal vector based on the feature vector of each voxel and doubled the width of the output layer of the opacity MLP to accommodate two opacity values. The rasterizer was also modified based on Eq. 11. For HGS-MCMC, we adopt the hyperparameters and training settings from [10], with the exception that we set the opacity threshold for Gaussian relocation to 0.015 for the Playroom scene. Additional training details are provided in the supplementary material. All experiments were conducted using an NVIDIA RTX 3090 GPU.

Table 1. Quantitative comparison to the SOTA methods on real-world datasets.

| Dataset | Mip-NeRF360 | | | Tanks&Temples | | | Deep Blending | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | PSNR↑ | SSIM↑ | LPIPS↓ | PSNR↑ | SSIM↑ | LPIPS↓ | PSNR↑ | SSIM↑ | LPIPS↓ |
| Mip-NeRF [1] | 29.23 | 0.844 | 0.207 | 22.22 | 0.759 | 0.257 | 29.40 | 0.901 | 0.245 |
| 2D-GS [8] | 28.98 | 0.867 | 0.185 | 23.43 | 0.845 | 0.181 | 29.70 | 0.902 | 0.250 |
| Fre-GS [30] | 27.85 | 0.826 | 0.209 | 23.96 | 0.841 | 0.183 | 29.93 | 0.904 | 0.240 |
| GES [6] | 28.69 | 0.857 | 0.206 | 23.35 | 0.836 | 0.198 | 29.68 | 0.901 | 0.252 |
| 3D-GS [9] | 28.88 | 0.870 | 0.182 | 23.60 | 0.847 | 0.181 | 29.41 | 0.903 | 0.243 |
| 3D-HGS (Ours) | 29.66 (+0.78) | 0.873 | 0.178 | 24.45 (+0.85) | 0.857 | 0.169 | 29.76 (+0.35) | 0.905 | 0.242 |
| Scaffold-GS [15] | 28.95 | 0.848 | 0.220 | 23.96 | 0.853 | 0.177 | 30.21 | 0.906 | 0.254 |
| Scaffold-HGS (Ours) | 29.25 (+0.30) | 0.867 | 0.186 | 24.42 (+0.46) | 0.859 | 0.162 | 30.36 (+0.15) | 0.910 | 0.240 |
| Mip-Splatting [29] | 29.39 | 0.880 | 0.162 | 23.75 | 0.857 | 0.157 | 29.46 | 0.903 | 0.243 |
| Mip-Splatting-HGS (Ours) | 29.88 (+0.49) | 0.881 | 0.160 | 24.53 (+0.78) | 0.865 | 0.145 | 29.61 (+0.15) | 0.901 | 0.241 |
| GS-MCMC [10] | 29.89 | 0.900 | 0.190 | 24.29 | 0.860 | 0.190 | 29.67 | 0.890 | 0.320 |
| HGS-MCMC (Ours) | 30.13 (+0.24) | 0.886 | 0.158 | 25.08 (+0.77) | 0.841 | 0.144 | 29.80 (+0.13) | 0.898 | 0.245 |

![](images/aa0c0573bc4334a9e60088e026e2cd05c4201de86a7a05f21d19e3fa82632def.jpg)
Figure 7. Fitting Error. Top-left: Frame from the Bonsai scene. Top-right: Fitting error using 3DGS and 3DHGS, along the red column. Bottom: Fitting error on the entire image, shown as a normalized heat map in the range [0,1] for 3DGS and 3DHGS.

# 4.2. Results Analysis

Quantitative results are presented in Figs. 3 and 7, and in Tabs. 1 and 2. Further details are provided in the supplementary material. Across all datasets and methods, the integration of the 3D Half-Gaussian kernel yields substantial gains, establishing it as a superior choice for enhancing 3D novel view synthesis accuracy.

Fig. 7 shows plots of the grayscale fitting for the Bonsai scene along an image column containing both smooth surface regions and sharp transitions, along with the corresponding fitting-error map for the entire image. The plots compare the proposed Half-Gaussian kernel and standard 3D Gaussian Splatting (3DGS) against the ground truth. Consistent with the results in Fig. 2, they illustrate that the Half-Gaussian kernel achieves a more accurate fit than 3DGS, especially in areas with sharp transitions.

Table 1 shows that, by incorporating the proposed Half-Gaussian kernel, 3D-HGS, Scaffold-HGS, Mip-Splatting-HGS, and MCMC-HGS achieved SOTA performance across multiple datasets, consistently outperforming their respective baselines. This improvement underscores the effectiveness of the 3D Half-Gaussian kernel as a robust choice for novel view synthesis.
On the Mip-NeRF360 dataset, our proposed kernel enables 3D-HGS, Scaffold-HGS, Mip-Splatting-HGS, and MCMC-HGS to surpass baseline PSNRs by 0.78, 0.30, 0.49, and 0.24, respectively, with MCMC-HGS achieving a new state of the art. For the Tanks and Temples dataset, these methods see PSNR improvements of 0.85, 0.46, 0.78, and 0.77, again with MCMC-HGS setting a new benchmark. Lastly, on the Deep Blending dataset, our methods outperform their baselines by 0.35, 0.15, 0.15, and 0.13 PSNR, with Scaffold-HGS achieving state-of-the-art performance.

Table 2. Comparison of rendering speed and storage memory between the proposed Half-Gaussian kernel-based methods and traditional Gaussian kernel-based methods.
| Dataset | Mip-NeRF360 | | Tanks&Temples | | Deep Blending | |
| --- | --- | --- | --- | --- | --- | --- |
| Method | FPS↑ | Mem↓ | FPS↑ | Mem↓ | FPS↑ | Mem↓ |
| 3D-GS [9] | 115 | 762 | 149 | 429 | 104 | 668 |
| 3D-HGS (Ours) | 125 | 694 | 160 | 437 | 126 | 641 |
| Scaffold-GS [15] | 120 | 173 | 120 | 77 | 129 | 55 |
| Scaffold-HGS (Ours) | 118 | 180 | 115 | 84 | 136 | 53 |
| GS-MCMC [10] | 75 | 732 | 133 | 438 | 90 | 969 |
| HGS-MCMC (Ours) | 72 | 743 | 139 | 445 | 92 | 980 |
| Mip-Splatting [29] | 76 | 970 | 117 | 569 | 91 | 843 |
| Mip-Splatting-HGS (Ours) | 83 | 883 | 121 | 566 | 102 | 808 |
Finally, using Half-Gaussian kernels does not significantly impact the rendering speed and memory requirements, as shown in Fig. 3, Table 2, and in the ablation study in the supplemental material.

![](images/f35731194c0a7c97452412800d066535baa99acb9f5030bf50178c062bf84650.jpg)
Figure 8. Qualitative comparison between 3D-GS and 3D-HGS.

Table 3. PSNR scores $\uparrow$ of the rendering performance with different reconstruction kernels. Red indicates performance improvement, while blue denotes performance decline.

| Kernels | Tanks&Temples | | | Deep Blending | | | Mip-NeRF360 | | | | | | | | AVG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Truck | Train | Avg | Playroom | DrJohnson | Avg | Bicycle | Garden | Kitchen | Stump | Room | Counter | Bonsai | Avg | |
| 3D-GS | 25.30 | 21.91 | 23.60 | 29.98 | 29.09 | 29.53 | 25.21 | 27.30 | 30.80 | 26.56 | 31.40 | 28.70 | 32.20 | 28.88 | 28.04 |
| 2D-GS | 25.14 | 21.70 | 23.42 | 30.18 | 29.12 | 29.65 | 25.02 | 27.14 | 31.33 | 26.57 | 31.37 | 28.97 | 32.33 | 28.94 | 28.06 |
| GES | 24.94 | 21.73 | 23.34 | 30.29 | 29.35 | 29.82 | 24.87 | 27.07 | 31.07 | 26.17 | 31.17 | 28.75 | 31.97 | 28.72 | 27.94 |
| 3D-HGS | 26.25 | 22.65 | 24.45 | 30.20 | 29.30 | 29.76 | 25.25 | 27.50 | 32.22 | 26.64 | 32.52 | 29.84 | 33.52 | 29.66 | 28.70 |

![](images/89913a37d2fc80da8aa701552a470f43e87564a213bef857297badd9574c43ac.jpg)
Figure 9. Comparison of Depth Images and Normal Maps. We visualize the depth maps alongside normals estimated from these maps. Our method demonstrates superior performance, capturing finer details in the bench structure and the surface textures of pots.

Qualitative results are shown in Figs. 8, 9, and in the supplemental material. In Fig. 8 it can be observed that our method delivers better performance on fine-scale details (e.g., T&T-Truck, Mip360-Garden, Mip360-Bonsai, Mip360-Room), high-frequency textures (e.g., T&T-Train, Mip360-Counter), light rendering (e.g., Mip360-Garden, DB-DrJohnson), and shadow areas (e.g., T&T-Train, Mip360-Bonsai, DB-DrJohnson). Fig. 9 provides an example of a generated depth image and the corresponding estimated surface normals, which were produced using the method described in [8]. The proposed method better captures the finer details in the bench structure and the surface texture of the pots.

Different Reconstruction Kernels. Table 3 provides the PSNR score for each 3D scene, the average score for each dataset, and the total average score when using the different kernels. In this experiment, we use 3D-GS as the baseline, with improved and degraded results highlighted in red and blue, respectively. Compared to the original 3D Gaussian kernel, the 3D Half-Gaussian kernel shows consistent improvement across all 3D scenes. While other kernels demonstrate superiority in some 3D scenes, they exhibit drawbacks in others. Overall, we achieved the best average performance across all 3D scenes.

# 5. Conclusion

We introduced Half-Gaussian Splatting, a plug-and-play method for accurate 3D view synthesis. Our approach uses two distinct opacities in each Gaussian, allowing precise rendering control without increasing inference time. This design achieves SOTA performance across multiple datasets and integrates seamlessly into most Gaussian-splatting architectures without structural changes.
We validated the effectiveness of our approach across four baseline methods, consistently improving accuracy. To rigorously assess its advantages, we compared it against other kernel modification techniques, confirming Half-Gaussian Splatting as a highly effective choice for 3D splatting-based methods. + +# References + +[1] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855–5864, 2021. 2, 6, 7 +[2] Guikun Chen and Wenguan Wang. A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890, 2024. 3 +[3] Anurag Dalal, Daniel Hagen, Kjell G Robbersmyr, and Kristian Muri Knausgård. Gaussian splatting: 3d reconstruction and novel view synthesis, a review. arXiv preprint arXiv:2405.03417, 2024. 3 +[4] Guofeng Feng, Siyan Chen, Rong Fu, Zimu Liao, Yi Wang, Tao Liu, Zhilin Pei, Hengjie Li, Xingcheng Zhang, and Bo Dai. Flashgs: Efficient 3d gaussian splatting for large-scale and high-resolution rendering. arXiv preprint arXiv:2408.07967, 2024. 6 +[5] Kyle Gao, Yina Gao, Hongjie He, Dening Lu, Linlin Xu, and Jonathan Li. Nerf: Neural radiance field in 3d vision, a comprehensive review. arXiv preprint arXiv:2210.00379, 2022. 1 +[6] Abdullah Hamdi, Luke Melas-Kyriazi, Guocheng Qian, Jinjie Mai, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, and Andrea Vedaldi. Ges: Generalized exponential splattng for efficient radiance field rendering. arXiv preprint arXiv:2402.10128, 2024.3,6,7 +[7] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (ToG), 37(6):1-15, 2018. 6 +[8] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. 
arXiv preprint arXiv:2403.17888, 2024. 3, 6, 7, 8 +[9] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42 (4):1-14, 2023. 1, 2, 3, 4, 6, 7 +[10] Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Yang-Che Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, and Kwang Moo Yi. 3d gaussian splitting as markov chain monte carlo. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 2, 3, 6, 7 +[11] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (ToG), 36 (4):1-13, 2017. 6 +[12] Zhe Li, Zerong Zheng, Lizhen Wang, and Yebin Liu. Animatable gaussians: Learning pose-dependent gaussian maps for high-fidelity human avatar modeling. arXiv preprint arXiv:2311.16096, 2023. 1 +[13] Zhihao Liang, Qi Zhang, Wenbo Hu, Ying Feng, Lei Zhu, and Kui Jia. Analytic-splatting: Anti-aliased 3d gaussian splatting via analytic integration. arXiv preprint arXiv:2403.11056, 2024. 3 + +[14] Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, et al. Vastgaussian: Vast 3d gaussians for large scene reconstruction. arXiv preprint arXiv:2402.17427, 2024. 1 +[15] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654-20664, 2024. 2, 3, 6, 7 +[16] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7210-7219, 2021. 
1 +[17] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 1, 3 +[18] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. Dreamgaussian4d: Generative 4d gaussian splatting. arXiv preprint arXiv:2312.17142, 2023. 1 +[19] Johannes L Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4104-4113, 2016. 3 +[20] Steven M Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06), pages 519-528. IEEE, 2006. 3 +[21] Xiaowei Song, Jv Zheng, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, and Hao Zhao. Saags: Scale-adaptive gaussian splatting for training-free antialIASing. arXiv preprint arXiv:2403.19615, 2024. 3 +[22] Haithem Turki, Deva Ramanan, and Mahadev Satyanarayanan. Mega-nerf: Scalable construction of large-scale nerfs for virtual fly-throughs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12922–12931, 2022. 1 +[23] Evangelos Ververas, Rolandos Alexandros Potamias, Jifei Song, Jiankang Deng, and Stefanos Zafeiriou. Sags: Structure-aware 3d gaussian splatting. arXiv preprint arXiv:2404.19149, 2024. 3 +[24] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6 +[25] Lee Westover. Interactive volume rendering. In Proceedings of the 1989 Chapel Hill workshop on Volume visualization, pages 9-16, 1989. 3 +[26] Lee Westover. Footprint evaluation for volume rendering. 
In Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 367-376, 1990. 3 + +[27] Tong Wu, Yu-Jie Yuan, Ling-Xiao Zhang, Jie Yang, Yan-pei Cao, Ling-Qi Yan, and Lin Gao. Recent advances in 3d gaussian splatting. arXiv preprint arXiv:2403.11134, 2024. 3 +[28] Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, and Sida Peng. Street gaussians for modeling dynamic urban scenes. arXiv preprint arXiv:2401.01339, 2024. 1 +[29] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splattering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19447-19456, 2024. 2, 3, 6, 7 +[30] Jiahui Zhang, Fangneng Zhan, Muyu Xu, Shijian Lu, and Eric Xing. Fregs: 3d gaussian splatting with progressive frequency regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21424-21433, 2024. 3, 6, 7 +[31] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6 +[32] Cheng Zhao, Su Sun, Ruoyu Wang, Yuliang Guo, Jun-Jun Wan, Zhou Huang, Xinyu Huang, Yingjie Victor Chen, and Liu Ren. TcIc-gs: Tightly coupled lidar-camera gaussian splatting for surrounding autonomous driving scenes. arXiv preprint arXiv:2404.02410, 2024. 1 +[33] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa volume splatting. In Proceedings Visualization, 2001. VIS'01., pages 29-538. IEEE, 2001. 2, 3, 4 +[34] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Surface splatting. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 371-378, 2001. 
1, 4 +[35] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa splatting. IEEE Transactions on Visualization and Computer Graphics, 8(3):223-238, 2002. 1, 2, 3, 4 \ No newline at end of file diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/images.zip b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..89065764bd43513cd6898ebe3d8ed32c1a6c62a8 --- /dev/null +++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24d36c796a528df44281e85ed884c402a2a661fcdfe500bc611d260edc4618ca +size 799989 diff --git a/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/layout.json b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..84832f56aba5914dd61425304b24985f98c3e8b0 --- /dev/null +++ b/CVPR/2025/3D-HGS_ 3D Half-Gaussian Splatting/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ef739ca2c5f4412edf395f637e8408e071703578bac6f93e42c08f2fe83d6c6 +size 321452 diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_content_list.json b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6946bde0f93e4ec9fd67b221d61f44d74e032979 --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:455baa6275a3dde1b3f9d964c9e7084914a66166671720974bdbe161785ab268 +size 81871 diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_model.json 
b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c4f4d7b31a2690b93d5f81ff65ca584a16de693d --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a298b209ca5ecf26b4c15dbadc449d43fe8eecad9d340b905f18120c6f5b5df1 +size 102956 diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_origin.pdf b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..113715752dd101aaf2402405c61cba70a8cd7331 --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/39b23d12-60ca-4604-960c-965e629e5821_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07a6fe50b466910d3282644b086ab99041c8deabb5933fc8545470218f5ffaae +size 1391450 diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/full.md b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c1f3a2929f90ac87e1efc06fcf7d1fb0548e8ef3 --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/full.md @@ -0,0 +1,286 @@ +# 3D-LLaVA: Towards Generalist 3D LMMs with Omni Superpoint Transformer + +Jiajun Deng $^{1}$ , Tianyu He $^{2}$ , Li Jiang $^{3}$ , Tianyu Wang $^{4}$ , Feras Dayoub $^{1}$ , Ian Reid $^{1,4}$ $^{1}$ Australian Institute for Machine Learning, The University of Adelaide + $^{2}$ Microsoft Research + $^{3}$ The Chinese University of Hong Kong, Shenzhen 
+ $^{4}$ Mohamed bin Zayed University of AI + +# Abstract + +Current 3D Large Multimodal Models (3D LMMs) have shown tremendous potential in 3D-vision-based dialogue and reasoning. However, how to further enhance 3D LMMs to achieve fine-grained scene understanding and facilitate flexible human-agent interaction remains a challenging problem. In this work, we introduce 3D-LLaVA, a simple yet highly powerful 3D LMM designed to act as an intelligent assistant in comprehending, reasoning, and interacting with the 3D world. Unlike existing top-performing methods that rely on complicated pipelines—such as offline multi-view feature extraction or additional task-specific heads—3D-LLaVA adopts a minimalist design with integrated architecture and only takes point clouds as input. At the core of 3D-LLaVA is a new Omni Superpoint Transformer (OST), which integrates three functionalities: (1) a visual feature selector that converts and selects visual tokens, (2) a visual prompt encoder that embeds interactive visual prompts into the visual token space, and (3) a referring mask decoder that produces 3D masks based on text description. This versatile OST is empowered by the hybrid pretraining to obtain perception priors and leveraged as the visual connector that bridges the 3D data to the LLM. After performing unified instruction tuning, our 3D-LLaVA reports impressive results on various benchmarks. The code and model will be released at https://github.com/djiajunustc/3D-LLaVA. + +# 1. Introduction + +Recent advancements in Large Language Models (LLMs) [4, 43, 49, 55, 66] have reshaped the paradigm of artificial intelligence, positioning language as a universal interface for general-purpose reasoning and interaction. Building on this progress, 2D Large Multi-modal Models (LMMs) [2, 18, 39, 40, 42] have emerged, integrating images and texts to support a wide range of vision-language tasks. 
In a further step to extend these capabilities to 3D, 3D LMMs [11, 23] have huge potential to unlock a series of real-world applications, such as autonomous vehicles, household robots, and augmented reality, where robust reasoning, precise 3D scene comprehension, and seamless human-agent interaction are of great significance.

![](images/08aa4d70023737f54df49cc41353a03b06e2ad802cf9e98a41d287c432253419.jpg)
Figure 1. An intuitive comparison between 3D-LLaVA and other SoTA 3D LMMs (the performance of LEO on ScanQA is omitted here since its setting is different). Our 3D-LLaVA achieves the best results among the competitors on most of the benchmarks.

It is a non-trivial problem to empower 3D LMMs with these desired properties. Despite notable progress within the 3D vision and language community towards 3D dialogue and reasoning, existing 3D LMMs still rely on an extra prompt encoder [40] or on offline region proposal generation and feature extraction [23, 25] to enable interaction with both visual and textual prompts. Such extra modules and offline preprocessing result in a complex pipeline, which complicates deployment and limits accessibility.

Furthermore, an effective 3D vision and language assistant should extend beyond simply generating text output; it should also be capable of grounding open-ended language expressions within a 3D scene and accurately segmenting the corresponding 3D masks. However, current referring-based 3D point cloud segmentation methods typically align and fuse text embeddings into a specialized segmentation model without LLMs. An exception is SegPoint [21], which utilizes the reasoning capabilities of LLMs to improve referring 3D point segmentation. Nonetheless, it still depends on additional modules to achieve precise segmentation, and has not demonstrated its effectiveness in other 3D vision-language tasks such as VQA and captioning.
To overcome these limitations, we present 3D-LLaVA, a generalist 3D LMM that streamlines the pipeline while maintaining strong performance across diverse 3D tasks. In contrast to prior works that assemble multiple models or extract features in an offline manner, 3D-LLaVA bridges interactive 3D vision dialogue and point-level 3D scene comprehension in an integrated, shared architecture, eliminating the need for auxiliary modules and complicated steps. Particularly, as compared in Figure 1, the most distinguishing part of 3D-LLaVA is its novel visual connector, namely the Omni Superpoint Transformer (OST). What makes it distinctive is how we use it as a shared module for multiple purposes and how we pretrain it together with the 3D scene encoder.

Existing 3D LMMs generally follow the trend of the 2D domain and leverage an MLP projector [40] or Q-Former [39] as the visual connector, both of which are single-function modules designed to transform vision features into token embeddings aligned with the language semantic space. On the contrary, OST is a versatile module built on the superpoint representation that plays multiple roles in our 3D-LLaVA. Specifically, in addition to feature enhancement and projection, OST has the following functions: (1) Visual Feature Selector. OST selectively retains visual tokens, distinguishing between foreground and background superpoints. This helps highlight the informative parts of a complex 3D scene and manages computational overhead by reducing the number of tokens to be further processed by the LLM. (2) Visual Prompt Encoder. 3D-LLaVA does not involve an additional visual prompt encoder. When the user interacts with 3D-LLaVA through a visual prompt (such as a clicking point, a box, or a mask), OST plays the role of a visual prompt encoder, mapping the visual prompt to the same embedding space as the visual features; the result is then appended to the language token embeddings as input to the LLM.
(3) Mask Decoder. Instead of requiring an additional segmentation module for grounding language expressions onto 3D point clouds, OST directly generates 3D masks, keeping the model streamlined and self-contained.

Moreover, at the pretraining stage, OST is connected to the 3D scene encoder and jointly pre-trained with the hybrid supervision of instance segmentation and 2D-to-3D knowledge distillation. Here, the 2D feature is extracted from multi-view images with the visual encoder of a 2D LMM, i.e., LLaVA-1.5 [42], and lifted to 3D by the geometric correspondence [46] between the point cloud and the pixels. Such a pretraining scheme, on the one hand, instills perception priors into our model and, on the other hand, takes the well-aligned 2D data as a bridge to facilitate the alignment between 3D visual embeddings and language embeddings.

We conduct end-to-end instruction tuning over various tasks and then benchmark our 3D-LLaVA on five popular 3D vision and language understanding datasets. As shown in Figure 1, our method achieves state-of-the-art performance on all of these datasets. Remarkably, we achieve $92.6\%$ CiDEr on the competitive ScanQA dataset, improving the previous best result by an absolute $4.9\%$ CiDEr score.

To summarize, we make three-fold contributions:

- We propose 3D-LLaVA, a generalist 3D LMM that unifies multiple tasks through the Omni Superpoint Transformer, streamlining the framework.
- We present a new perspective that a versatile visual connector can be leveraged to remove the task-specific modules added to the 3D LMM, making the model more elegant and integrated.
- We benchmark the proposed method on different datasets, demonstrating its great potential to be a powerful baseline in this field.

# 2. Related Work

3D Vision & Language Understanding.
In recent years, there has been tremendous progress in understanding 3D scenes from natural language, where the language provides contextual knowledge and queries of user intentions to allow seamless interaction between humans and models. These works can be broadly categorized into four main tasks: 3D grounding [1, 7, 29, 60, 64], which localizes specific objects within the 3D scene according to given textual queries; 3D referring segmentation [21, 27, 28, 48, 58], which predicts a point-wise mask for the described object; 3D captioning [9, 10, 13, 31, 32, 34, 61], which densely localizes objects in a 3D scene and describes them with natural language; and 3D question answering [3, 44, 45], which answers given textual questions about the 3D scene.

Although achieving great success on certain tasks, the above methods fall short in generalizing across different 3D understanding tasks. Motivated by this, recent efforts have also been dedicated to designing pre-training schemes [33, 67] or unified models [6, 8] for various tasks like 3D grounding, captioning, and question answering. Despite these models achieving impressive improvements in handling diverse 3D scene tasks, their reliance on task-specific heads and limited reasoning capabilities constrain their flexibility for broader, general-purpose applications.

3D Large Multimodal Model. The huge success of Large Language Models (LLMs) [5, 15, 22, 55] has fueled the demand for a versatile interface that can handle various modalities beyond language. In response to this demand, Large Multimodal Models (LMMs) have been developed to comprehend instructions that span vision and language [2, 40, 42, 54, 63]. PointLLM [59] integrates the object-level point cloud into an LLM by constructing a joint embedding space among 3D points and text, enabling it to explain 3D shapes with language. 3D-LLM [23] extends the 2D LMM into the 3D scene, improving the capability of 3D spatial reasoning by introducing positional embeddings and location tokens. LL3DA [11] develops a Q-Former [39] to bridge the 3D point cloud, visual prompt, and language instruction. Grounded 3D-LLM [12] involves referent tokens and employs contrastive learning to unify grounding with textual response generation. SegPoint [21] attempts to unify semantic segmentation and referring segmentation with an LLM. Agent3D-Zero [62] leverages the 2D LMM to first observe from the bird's-eye view and then selects the informative viewpoints for further zero-shot 3D scene understanding. Scene-LLM [20] lifts multi-view image features into 3D space, and follows the two-stage training scheme [40] to perform 3D vision and language alignment.

![](images/252480fd6dd9ff9eecc7a0ec7a55c409d2190bbfb932f3e667c02646ce598e17.jpg)
Figure 2. An overview of the 3D-LLaVA framework. Given an input point cloud, a language instruction, and an optional visual prompt, 3D-LLaVA generates text output from the LLM and produces 3D masks with the Omni Superpoint Transformer (OST). The 3D feature out of the Sparse 3D U-Net is clustered into superpoints with Superpoint Pooling. The Visual Sampler is a parameter-free module that samples point features corresponding to the visual prompt $X_P$. The Omni Superpoint Transformer takes both the superpoint features and the visual prompt feature as input and produces the visual feature embedding $Z_V$ and the visual prompt embedding $Z_P$, followed by a projection layer $W_V$ to obtain the token embeddings $H_V$ and $H_P$. Once the LLM outputs a special segmentation token, i.e., [SEG], the hidden state linked to the [SEG] token is sent to another projection layer $W_S$ and then input as a segmentation query to the frozen OST to generate segmentation masks.
Chat-Scene [25] proposes to achieve precise object referencing and grounding by incorporating object identifiers into the 3D LMM and fusing offline-extracted 2D and 3D instance-level features.

# 3. Approach

The overall framework of 3D-LLaVA is illustrated in Figure 2. It is a generalist 3D LMM, capable of conducting 3D vision-centric dialogue, interacting seamlessly with flexible visual and textual prompts, and grounding open-ended language descriptions into 3D point cloud masks. In this section, we first introduce the model architecture of the 3D scene encoder (Section 3.1) and the Omni Superpoint Transformer (Section 3.2). Then, in Section 3.3, we elaborate on the details of each step in our pipeline. Finally, in Section 3.4, we introduce the training scheme.

# 3.1. 3D Scene Encoder

Given the point cloud input $\mathbf{X}_{\mathbf{V}} \in \mathbb{R}^{N \times 6}$, where $N$ is the number of points and the 6 channels represent the coordinates $\{x, y, z\}$ and the color information $\{r, g, b\}$, we first convert the points into voxels based on their 3D coordinates. After obtaining the voxels, a Sparse 3D U-Net [16] is leveraged as the scene encoder to extract point cloud features. The Sparse 3D U-Net is a U-Net-like architecture that consists of sparse convolutional layers. Its output has the same number of voxels as the input, resulting in an excessively large voxel count that is not feasible for the following steps. One option to reduce the number of points is to perform farthest point sampling [47]. However, the sampling operation inevitably causes information loss. In contrast, we follow [35, 36, 53] and apply average pooling over superpoints, which are generated with a bottom-up clustering algorithm [38]. The superpoint pooling reduces the number of 3D vision embeddings to hundreds or a few thousand, depending on the complexity of the 3D scene.

# 3.2. Omni Superpoint Transformer

The architecture of the proposed Omni Superpoint Transformer (OST) is shown in Figure 3 (a). Notably, the basic block of a conventional segmentation Transformer typically includes a cross-attention layer, a self-attention layer, and a feed-forward network, where the cross-attention layer abstracts information from the source features into the object queries. Although OST can perform segmentation, it is composed primarily without cross-attention layers: the superpoint features act as both the queries and the source features (keys and values) in OST. This adjustment keeps the correspondence between the output embeddings of OST and the lifted 2D features, facilitating effective 2D-to-3D feature distillation during the pretraining phase. Additionally, to guide the superpoint queries towards relevant entities, we replace the standard self-attention layer with a distance-adaptive self-attention layer [41], which introduces a bias term based on the distances between superpoints. The pairwise attention between the $i$-th superpoint query and the $j$-th superpoint query is computed as:

$$
\mathrm{Attn}\left(Q_{i}, K_{j}, V_{j}\right) = \mathrm{Softmax}\left(\frac{Q_{i} K_{j}^{T}}{\sqrt{C}} - \sigma \cdot D\right) V_{j}, \tag{1}
$$

where $Q, K, V$ are the query, key, and value of the attention module, $C$ is the channel dimension of the embedding, $\sigma$ is a learnable parameter conditioned on the query, and $D$ indicates the Euclidean distance between the centroids of the two superpoints.

There are three heads on top of OST: a mask head, a classification head, and an alignment head. The mask head transforms each query embedding into a mask prediction kernel, which is then applied to generate a binary mask prediction by performing a dot product with the input superpoint features of OST [30, 52, 53]. The classification head predicts the category of the segmentation mask by outputting the logit of each category.
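For reference, the distance-adaptive attention of Eq. (1) can be sketched in NumPy as follows. This is a minimal single-head sketch with hypothetical shapes; the learnable, query-conditioned $\sigma$ and the multi-head structure of the actual layer are simplified to a fixed per-query vector.

```python
import numpy as np

def distance_adaptive_attention(x, centroids, wq, wk, wv, sigma):
    """Self-attention over superpoint features with a distance bias (Eq. 1).

    x:         (n, c) superpoint features, serving as query, key, and value source
    centroids: (n, 3) superpoint centroid coordinates
    wq/wk/wv:  (c, c) projection matrices
    sigma:     (n,) per-query scale of the distance penalty (learnable in the paper)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    c = x.shape[1]
    # pairwise Euclidean distances D between superpoint centroids
    diff = centroids[:, None, :] - centroids[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    # logits = Q K^T / sqrt(C) - sigma * D, followed by a row-wise softmax
    logits = q @ k.T / np.sqrt(c) - sigma[:, None] * d
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With $\sigma = 0$ this reduces to standard self-attention, while a large $\sigma$ suppresses attention to distant superpoints, so each query concentrates on its spatial neighborhood.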
The output of the alignment head is denoted as $Z_{V}$ in Figure 2. It is further leveraged to obtain the visual tokens of the LLM.

# 3.3. Details in Pipeline

Visual Feature Selection. Although superpoint pooling reduces the query number of OST, directly using all superpoints as input visual tokens of the LLM would still result in a very long sequence. To alleviate this issue, after obtaining $Z_{V}$ from OST, we only keep the superpoints with the top-K objectness scores. The objectness score of each superpoint query is defined as its highest score among the foreground categories.

Visual Prompt Encoding. A generalist 3D LMM is supposed to be interacted with through both language instructions and visual prompts. Common visual prompts include a clicking point, a bounding box, or a binary mask. A straightforward approach to encode these prompts is to use a prompt encoder composed of several linear layers [11], designed to project the prompt (e.g., coordinates of clicking points or bounding boxes) into an embedding that aligns with the same semantic space as visual or language tokens. However, we find that this type of prompt encoder is challenging to optimize, as it lacks explicit information indicating which areas are targeted by the prompt.

In contrast, as shown in Figure 3 (b), 3D-LLaVA introduces a parameter-free visual sampler to encode the visual prompt $X_{P}$ and reuses OST as a visual prompt encoder to generate the corresponding visual prompt embedding $Z_{P}$, ensuring that the prompt is embedded in the same space as the visual features.

![](images/c21146896a8cbcd27093a80ea0226eb62897c22e80f3cabad2d0861234c38ee6.jpg)
Figure 3. An illustration of (a) the architecture of the Omni Superpoint Transformer, and (b) the paradigm of the visual sampler.
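The visual feature selection step described above can be sketched as follows. This is a hedged illustration: the layout of the classification logits, with the background as the last class, is an assumption rather than something the paper specifies.

```python
import numpy as np

def select_visual_tokens(z_v, class_logits, k=100):
    """Keep the k superpoint embeddings with the highest objectness scores.

    z_v:          (n, c) alignment-head embeddings of the superpoint queries
    class_logits: (n, num_classes + 1) classification-head logits; the
                  background class is assumed to be last (an assumption)
    The objectness of a query is its highest foreground-category probability.
    """
    shifted = class_logits - class_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    objectness = probs[:, :-1].max(axis=-1)  # best foreground score per query
    keep = np.argsort(-objectness)[:k]       # indices of the top-k queries
    return z_v[keep], keep
```

The paper keeps 100 such tokens by default (see Section 4.2), trading scene coverage against the LLM's sequence length.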
For a clicking point prompt, the visual sampler obtains the prompt feature through three-nearest-neighbor (three-NN) interpolation [47], which first finds the three nearest points and computes the prompt feature by weighted interpolation. If the prompt is a box or a mask, the visual sampler groups the points within the prompt and applies average pooling to generate the prompt feature. This prompt feature is then appended to the superpoint features and input to OST. Here, we leverage a masked attention strategy between the superpoint features and the prompt feature. Specifically, we set the attention mask from the superpoints to the prompt to negative infinity. This prevents the prompt feature from influencing the superpoints, allowing it only to sample the relevant visual information. Similar to the visual feature embedding $Z_{V}$, the prompt embedding $Z_{P}$ is output by the alignment head.

Projection. After obtaining the top-K superpoint-based visual feature embedding $Z_{V}$ and the visual prompt embedding $Z_{P}$, we apply a projection layer $W_{V}$ to transform them into the language embedding tokens $H_{V}$ and $H_{P}$. The projection layer consists of two linear layers with a GELU activation between them.

Instruction Formulation. We present the typical language instruction in the bottom-right part of Figure 2. There are two kinds of placeholders in the instruction: “” and “”. The text instruction except for the placeholders is tokenized into the text token embedding $H_{T}$. After tokenization, we replace “” with the visual token embedding $H_{V}$, and replace “” with the prompt token embedding $H_{P}$.

Mask Decoding. When the instruction prompts 3D-LLaVA to perform referring segmentation [1, 7, 37], the LLM outputs a [SEG] token in its text response. Once this token is detected, we extract the last hidden state of the token before the [SEG] token.
This hidden state, $H_{S}$, is then fed into the projection layer $W_{S}$ to generate the segmentation query.

In our method, we leverage the frozen OST to predict the segmentation mask of the referred object. Similar to the paradigm of using OST as the visual prompt encoder, the segmentation query is concatenated with the superpoint queries to form the input of OST. We apply a masked attention strategy to prevent information flow from the segmentation query to the superpoints. Since the segmentation query lacks coordinate information, the bias term (from Equation 1) between this query and the superpoint queries is set to zero. The output kernel from the mask head that corresponds to the segmentation query is applied to the input superpoint features to generate the mask prediction.

# 3.4. Training Scheme

# Stage 1: Pre-training 3D Scene Encoder and OST.

Unlike the 2D domain, which has powerful and widely recognized vision foundation models such as CLIP [50], there is currently no 3D foundation model that can serve as a readily usable 3D visual encoder.

To this end, we pre-train the Sparse 3D U-Net and the OST ourselves. Specifically, we adopt hybrid supervision that combines a vision-centric task, i.e., instance segmentation, with 2D-to-3D knowledge distillation:

$$
L_{Pre} = L_{Cls} + L_{Mask} + L_{KD}, \tag{2}
$$

where $L_{Cls}$ represents the cross-entropy loss for multi-category classification, $L_{Mask}$ includes the binary cross-entropy loss and Dice loss for mask prediction, and $L_{KD}$ denotes the knowledge distillation loss, which includes mean squared error and cosine similarity losses.

For instance segmentation, we leverage the annotations from ScanNet200 [51] as the training data. For 2D-to-3D knowledge distillation, we follow OpenScene [46], which first extracts multi-view 2D features and then lifts the 2D features onto 3D points by the correspondence between 3D point clouds and 2D pixels.
The lifted 2D features are pooled into each superpoint to generate the target features. Here we leverage the visual encoder of LLaVA-1.5-7B [42], i.e., CLIP-ViT-L, to extract the teacher 2D features.

Table 1. Dataset statistics for joint instruction tuning.
| Dataset | Task | Size |
| --- | --- | --- |
| ScanRefer | referring segmentation | 37K |
| Nr3D | referring segmentation | 29K |
| Multi3DRefer | referring segmentation | 44K |
| ScanQA | visual question answering | 30K |
| SQA3D | visual question answering | 89K |
| Scan2Cap | dense captioning | 37K |
| Nr3D* | dense captioning | 29K |
| Total | - | 295K |
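The hybrid pretraining objective of Eq. (2) can be illustrated with the sketch below. The inner structure of $L_{KD}$ (mean squared error plus a cosine-similarity loss) follows the paper's description, but the equal weighting of the two terms is an assumption, since the inner weights are not given.

```python
import numpy as np

def kd_loss(student, teacher, eps=1e-8):
    """2D-to-3D distillation term L_KD of Eq. (2): mean squared error plus a
    cosine-similarity loss between superpoint features and the lifted 2D
    teacher features (equal weighting of the two terms is an assumption)."""
    mse = np.mean((student - teacher) ** 2)
    s = student / (np.linalg.norm(student, axis=-1, keepdims=True) + eps)
    t = teacher / (np.linalg.norm(teacher, axis=-1, keepdims=True) + eps)
    cosine = np.mean(1.0 - (s * t).sum(-1))
    return mse + cosine

def pretrain_loss(l_cls, l_mask, student, teacher):
    # L_Pre = L_Cls + L_Mask + L_KD (Eq. 2)
    return l_cls + l_mask + kd_loss(student, teacher)
```

When the student features match the teacher exactly, $L_{KD}$ vanishes and $L_{Pre}$ reduces to the segmentation terms alone.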
# Stage 2: End-to-End Instruction Tuning.

We combine various 3D vision and language understanding datasets to form our instruction-tuning data. The combined datasets include ScanRefer [7], Nr3D [1], Multi3DRefer [64], ScanQA [3], SQA3D [44], and Scan2Cap [13]. The statistics of the utilized datasets are presented in Table 1. To enrich the language annotations that describe objects in the 3D scene, we follow [25] to use Nr3D as a complement for the dense captioning task, denoted as "Nr3D*" in the table.

The instruction tuning phase jointly optimizes 3D-LLaVA for both text generation and referring segmentation. The training objective is composed as follows:

$$
L_{IFT} = L_{\text{text}} + 0.1 \times L_{\text{mask}}, \tag{3}
$$

where $L_{\text{text}}$ is the cross-entropy loss for next-token generation and $L_{\text{mask}}$ represents the mask prediction loss, which also consists of the binary cross-entropy loss and the Dice loss, the same as in the pretraining stage. Here, we multiply the mask loss by a coefficient of 0.1 for balance. We always keep the Sparse 3D U-Net, OST, and the main body of the LLM frozen. Only the visual projector, the [SEG] projector, and the LoRA [24] parameters adapted to the LLM are updated.

# 4. Experiments

# 4.1. Datasets and Metrics

Datasets. In this work, we conduct experiments on the 3D scans provided by the ScanNet dataset [17], including 1,201 scenes for training and 312 for validation. At the pretraining stage of our 3D encoder, we leverage the mask annotations from ScanNet200 [51], which extends the original ScanNet with fine-grained categories. The language annotations leveraged in the instruction tuning have been introduced in Section 3.4. After instruction tuning, we validate the effectiveness of the proposed 3D-LLaVA on the following datasets: ScanQA [3] and SQA3D [44] for question answering, ScanRefer [7] and Multi3DRefer [64] for referring segmentation, and Scan2Cap [13] for dense captioning.
Metrics. We follow the common practice to evaluate the quality of generated text responses for ScanQA and

Table 2. Performance comparison among state-of-the-art methods. "Specialist Model" means the model can be utilized to perform 3D question answering, 3D dense captioning, or referring segmentation. "Finetuned 3D LMM" indicates the model is jointly trained and then finetuned on each dataset before evaluation; we add a "*" to 3D LMMs of this kind. "3D LMM" includes models that have only been trained on multiple tasks. "PC" means point cloud and "I" means multi-view images. Please note that LEO [26]'s results on ScanQA are not compared to other methods, since LEO is in a different setting that accesses the ground-truth object related to the question.
| Method | Modality | ScanRefer (val) mIoU↑ | Multi3DRefer (val) mIoU↑ | ScanQA (val) C↑ | B-4↑ | M↑ | R↑ | SQA3D (test) EM↑ | EM-R↑ | Scan2Cap (val) C@0.5↑ | B-4@0.5↑ | M@0.5↑ | R@0.5↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Specialist Models: | | | | | | | | | | | | | |
| ScanQA [3] | PC | - | - | 64.9 | 10.1 | 13.1 | 33.3 | 46.6 | - | - | - | - | - |
| 3D-VLP [33] | PC | - | - | 67.0 | 11.2 | 13.5 | 34.5 | - | - | 54.9 | 32.3 | 24.8 | 51.5 |
| 3D-VisTA [67] | PC | - | - | 69.6 | 10.4 | 13.9 | 45.7 | 48.5 | - | 61.6 | 34.1 | 26.8 | 55.0 |
| Scan2Cap [13] | PC | - | - | - | - | - | - | 41.0 | - | 39.1 | 23.3 | 22.0 | 44.8 |
| MORE [31] | PC | - | - | - | - | - | - | - | - | 40.9 | 22.9 | 21.7 | 44.4 |
| SpaCap3D [57] | PC | - | - | - | - | - | - | - | - | 44.0 | 25.3 | 22.3 | 45.4 |
| D3Net [8] | PC | - | - | - | - | - | - | - | - | 46.1 | 30.3 | 24.4 | 51.7 |
| UniT3D [14] | PC | - | - | - | - | - | - | - | - | 46.7 | 27.2 | 21.9 | 46.0 |
| 3DJCG [6] | PC | - | - | - | - | - | - | - | - | 49.5 | 31.0 | 24.2 | 50.8 |
| Vote2Cap-DETR [9] | PC | - | - | - | - | - | - | - | - | 61.8 | 34.5 | 26.2 | 54.4 |
| TGNN [28] | PC | 27.8 | - | - | - | - | - | - | - | - | - | - | - |
| M3DRef-CLIP [64] | PC | 35.7 | 32.6 | - | - | - | - | - | - | - | - | - | - |
| X-RefSeg3D [48] | PC | 29.9 | - | - | - | - | - | - | - | - | - | - | - |
| 3D-STMN [58] | PC | 39.5 | - | - | - | - | - | - | - | - | - | - | - |
| Finetuned 3D LMMs: | | | | | | | | | | | | | |
| 3D-LLM [23] | PC+I | - | - | 69.4 | 12.0 | 14.5 | 35.7 | - | - | - | - | - | - |
| Scene-LLM [20]* | PC+I | - | - | 80.0 | 12.0 | 16.8 | 40.0 | 54.2 | - | - | - | - | - |
| LL3DA* [11] | PC | - | - | 76.8 | 13.5 | 15.9 | 37.3 | - | - | 65.2 | 36.8 | 26.0 | 55.1 |
| SegPoint [21]* | PC | 41.7 | 36.1 | - | - | - | - | - | - | - | - | - | - |
| 3D LMMs: | | | | | | | | | | | | | |
| LEO [26] | PC+I | - | - | 101.4 | 13.2 | 20.0 | 49.2 | 50.0 | 52.4 | 72.4 | 38.2 | 27.9 | 58.1 |
| Scene-LLM [20] | PC+I | - | - | 80.0 | 11.7 | 15.8 | 35.9 | 53.6 | - | - | - | - | - |
| Chat-Scene [25] | PC+I | - | - | 87.7 | 14.3 | 18.0 | 41.6 | 54.6 | 57.5 | 77.2 | 36.4 | 28.0 | 58.1 |
| Grounded 3D-LLM [12] | PC | - | - | 72.7 | 13.4 | - | - | - | - | 70.6 | 35.5 | - | - |
| 3D-LLaVA (ours) | PC | 43.3 | 42.7 | 92.6 | 17.1 | 18.4 | 43.1 | 54.5 | 56.6 | 78.8 | 36.9 | 27.1 | 57.7 |
Scan2Cap in terms of CiDEr (C), BLEU-4 (B-4), METEOR (M), and Rouge-L (R). Different from the conventional setting of ScanQA, the situated question answering dataset SQA3D has a definite answer; therefore, we leverage exact match accuracy (EM) as well as its refined version (EM-R) as the metrics. For referring segmentation, we adopt the mean intersection over union (mIoU) for evaluation.

# 4.2. Implementation Details

We pre-train our 3D visual encoder on ScanNet200 for 512 epochs under the hybrid supervision of 2D-to-3D knowledge distillation and segmentation. After obtaining the 3D visual encoder, we develop our 3D-LLaVA based on LLaVA-1.5-7B [42]. We make use of the model weights of the visual projector and LLM (Vicuna-1.5-7B [43]) from LLaVA-1.5-7B, and connect the alignment embedding out of our 3D visual encoder to the visual projector. We keep 100 superpoint features $Z_{V}$ according to their objectness scores, which are then projected to the visual token embeddings $H_{V}$. The instruction tuning is conducted on $8 \times$ NVIDIA RTX 3090 GPUs with the acceleration of the DeepSpeed toolkit. We apply LoRA [24] to the LLM and keep the main body of the LLM and the visual encoder frozen during training. The data presented in Table 1 are leveraged to perform end-to-end training for 1 epoch. We set the batch size to 2 for each GPU and update the model weights after accumulating gradients every 8 steps. The model is optimized with AdamW. A cosine annealing schedule is leveraged to update the learning rate, with the initial learning rate set to 2e-4.

# 4.3. Comparison with SoTA Models

We compare the proposed 3D-LLaVA with other models and present the results in Table 2. The models compared in this table are divided into three groups: specialist models, finetuned 3D LMMs, and 3D LMMs. A specialist model is designed to address a single kind of task; all of the specialist models in this table are without LLMs.
A finetuned 3D LMM is a 3D large multimodal model that is finetuned on each dataset. Such fine-tuning could improve the performance of the model on the corresponding dataset, but will affect its generalizability. The last kind, 3D LMMs, are large multimodal models trained on a unified dataset covering various tasks. Particularly, among all the competitors, our 3D-LLaVA is the only one that covers both the typical text generation tasks (i.e., 3D dense captioning and 3D visual question answering) and point-level understanding (i.e., 3D referring segmentation).

3D Referring Segmentation requires the model to output the 3D mask on the point cloud according to the user's language expression, which validates the capability of grounding the text description in the 3D scene. We benchmark our method and other state-of-the-art methods on both the single-target setting (ScanRefer [7]) and the variable-number setting (Multi3DRefer [64]).

![](images/cdbaa51543999f50761eea576f35207903e5163cb54746ea2d7fd57933abbc8e.jpg)
Figure 4. Different paradigms to produce visual prompt embeddings. "OST": Omni Superpoint Transformer. "P.E. Encoder": Parameter-Free Encoder.

The referring text from Multi3DRefer can correspond to one, many, or even zero objects. If multiple objects are referred to in the instruction, we follow [21] to merge the masks into a single one for evaluation. When there is no object corresponding to the referring expression, our 3D-LLaVA outputs "Sorry, I cannot find this object". In this case, since there is no [SEG] token in the response, the mask decoding pipeline is not applied, and thus all of the predicted masks are assigned as background. As shown in the table, our 3D-LLaVA reports the best result among the competitors.
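The Multi3DRefer evaluation protocol described above (union of all referred masks, then IoU against the merged ground truth) can be sketched as follows. The convention of counting an empty prediction on a zero-target sample as a perfect match is an assumption for illustration.

```python
import numpy as np

def merged_iou(pred_masks, gt_masks, num_points):
    """IoU after merging multiple referred masks into a single mask.

    pred_masks / gt_masks: lists of boolean (num_points,) arrays; an empty
    list means no predicted / no referred object in the scene.
    """
    pred = np.zeros(num_points, dtype=bool)
    for m in pred_masks:
        pred |= m
    gt = np.zeros(num_points, dtype=bool)
    for m in gt_masks:
        gt |= m
    union = (pred | gt).sum()
    if union == 0:
        # zero-target sample with no prediction: treated as a perfect match
        return 1.0
    return (pred & gt).sum() / union
```

Under this protocol, a response without a [SEG] token contributes an empty prediction, so zero-target samples are scored correctly only when the model declines to segment.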
Notably, our model achieves $43.3\%$ mIoU on ScanRefer and $42.7\%$ mIoU on Multi3DRefer, improving the previous best record of SegPoint by an absolute $1.6\%$ mIoU and $6.6\%$ mIoU, respectively.

3D Question Answering asks the model to observe the visual information of the 3D scene and give a precise response to the user's question involving some part of the scene. We compare our 3D-LLaVA with other methods on both the conventional 3D question-answering dataset ScanQA [3] and the situated question-answering dataset SQA3D [44]. As shown in Table 2, our method ranks best for CiDEr, BLEU-4, and METEOR among the methods that do not access ground-truth information about the object relevant to the question. Remarkably, compared to Grounded 3D-LLM [12], which also only uses point clouds as input, our 3D-LLaVA achieves a $19.9\%$ CiDEr improvement. When compared to the strongest competitor, Chat-Scene, our 3D-LLaVA achieves $4.9\%$ CiDEr, $2.8\%$ BLEU-4, $0.4\%$ METEOR, and $1.4\%$ Rouge-L improvements on ScanQA, respectively. On SQA3D, our 3D-LLaVA reports exact match accuracy comparable to that of Chat-Scene (54.5% vs. 54.6%). It is worth noting that Chat-Scene uses both instance-level 3D and 2D features, which rely on complicated offline preprocessing, while our 3D-LLaVA extracts superpoint features online with the OST, which is more computation-friendly.

3D Dense Captioning demands the model to describe an object and its spatial relationship to the surrounding instances within the scene. In this experiment, we follow the common practice of using the predicted mask proposals of Mask3D [52] as the visual prompts. Please note that we have
| Methods | ScanRefer (Box-Level) Acc@0.25 | Acc@0.5 |
| --- | --- | --- |
| Specialist Models: | | |
| 3D-VisTA [67] | 50.6 | 45.8 |
| ConcretNet [56] | 50.6 | 46.5 |
| 3D LMMs: | | |
| 3D-LLM [23] | 30.3 | - |
| Grounded 3D-LLM [12] | 48.6 | 44.0 |
| Chat-Scene [25] | 55.5 | 50.2 |
| 3D-LLaVA (Ours) | 51.2 | 40.6 |
Table 4. Ablation study on the paradigm to produce visual prompt embeddings. The models are compared in terms of CiDEr, BLEU-4, METEOR, and Rouge-L on Scan2Cap [13]. The indices (a), (b), and (c) in this table correspond to the paradigms depicted in Figure 4. Our default setting is highlighted with light blue.
| Visual Prompt Encoding | Scan2Cap C↑ | B-4↑ | M↑ | R↑ |
| --- | --- | --- | --- | --- |
| (a) Coordinate Projection | 68.7 | 33.9 | 26.7 | 55.1 |
| (b) Pooling | 76.8 | 36.6 | 26.9 | 57.5 |
| (c) Ours with OST | 78.8 | 36.9 | 27.1 | 57.7 |
not had access to the output of Mask3D during the training stage. Our OST works as a visual sampler that converts any prompt in the predefined formulations into the semantic space of visual features without the extra cost of finetuning. Results in the table show that our 3D-LLaVA also achieves the best performance in generating instance-level descriptions. This experiment further validates the effectiveness and scalability of the proposed 3D-LLaVA with OST.

# 4.4. Experimental Analysis

This section presents the experimental analysis of our 3D-LLaVA. Unless specified, the models evaluated in this section are trained with the same data and training scheme as the default setting introduced in the former sections.

Box-level 3D Visual Grounding. Even though it is designed for 3D referring segmentation rather than box prediction, our 3D-LLaVA can also produce box-level grounding results. Specifically, we first apply the DBSCAN algorithm [19] to the foreground mask to remove outliers, and then obtain the grounding box from the minimum and maximum coordinates of the mask. We compare the box-level grounding performance of our 3D-LLaVA with both the specialist models and other 3D LMMs in Table 3. Although our model is optimized for precise binary masks while competitors are trained to
| # Visual Tokens | ScanQA C↑ | ScanQA B-4↑ | Scan2Cap C↑ | Scan2Cap B-4↑ |
| --- | --- | --- | --- | --- |
| 50 | 91.1 | 15.9 | 74.9 | 35.9 |
| 100 | 92.6 | 17.1 | 78.8 | 36.9 |
| 200 | 92.8 | 17.1 | 78.6 | 37.2 |
| 400 | 92.3 | 16.9 | 77.7 | 36.8 |
select the best-matching proposals based on box IoU, our 3D-LLaVA achieves $51.2\%$ accuracy at an IoU threshold of 0.25, better than most of the competitors in the table. Our performance lags behind Chat-Scene [25], but our method relies on neither an extra mask proposal generator nor the fusion of image and point cloud features.

Effect of Visual Prompt Encoding. In this study, we analyze the effect of different ways to convert visual prompts into prompt embeddings (as illustrated in Figure 4). We use the box as the visual prompt in this experiment, since a mask can be converted to a box by its boundary and a clicking point is a special case of a box without area. Among the compared paradigms, our proposed strategy of reusing OST as the visual prompt encoder, i.e., method (c), achieves the best result. On the one hand, our method avoids additional learnable parameters, which are difficult to optimize together with the LLM. On the other hand, compared to (b), appending the prompt query from the parameter-free encoder to the superpoint queries enables the stack of OST encoder layers to deeply abstract the superpoint features. Method (a), which produces the prompt embedding by applying an MLP to the box coordinates, performs worst. This is because the produced prompt embedding lacks visual context, increasing the burden on the LLM in locating the corresponding region. We suppose this kind of paradigm needs more training data and more training epochs to converge.

Effect of Visual Token Number. Retaining more visual tokens leads to a rapid increase in the computational complexity of LLMs. We take this experiment to explore how many visual tokens should be exploited in our 3D-LLaVA to enable an accurate understanding of the 3D scene. As shown in Table 5, when increasing the token number from 50 to 100, the CiDEr on ScanQA and Scan2Cap improves by $1.5\%$ and $3.9\%$, respectively. However, further increasing the token count to 200 yields no substantial performance gains.
Therefore, we set the default token number to 100. + +# 4.5. Qualitative Results + +In Figure 5, we showcase several visualizations of 3D-LLaVA's performance across various 3D environments, including bedrooms, offices, and bathrooms. + +![](images/33ac805472bcc40ea7672edcb7a176d4e3c56fe11472856561ab195b6eddbfd7.jpg) +Figure 5. Visualization of 3D-LLaVA's responses on various tasks. Each of these examples includes an instruction to perform referring segmentation. In addition, the examples present the results of 3D question answering [3], 3D dense captioning [65], and situated question answering [44], respectively. When the referred object is not in the given 3D scene, the model responds with "Sorry, I cannot find this object". + +Our 3D-LLaVA model accurately interprets user instructions and demonstrates an ability to avoid false positives when the target object is absent from the 3D scene. + +# 5. Conclusion + +In this work, we introduce 3D-LLaVA, a new 3D LMM with a streamlined architecture and powerful capabilities. The core component of 3D-LLaVA is a new visual connector, the Omni Superpoint Transformer (OST), which serves as a multifunctional module for visual token selection, visual prompt encoding, and mask decoding. Taking advantage of the versatile OST, 3D-LLaVA is capable of conducting 3D vision-centric dialogue, enabling flexible interaction and grounding language expressions into 3D point cloud masks with a universal architecture. Through extensive experiments, 3D-LLaVA achieves impressive results across multiple benchmarks. Although 3D-LLaVA has made significant improvements over previous methods, the scarcity of 3D data is still the main obstacle to developing 3D LMMs. We regard data collection and configuration as the next step. + +# References + +[1] Panos Achlioptas, Ahmed Abdelreehm, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes.
In European Conference on Computer Vision, pages 422-440. Springer, 2020. 2, 5 +[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. 1, 2 +[3] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19129-19139, 2022. 2, 5, 6, 7, 8 +[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020. 1 +[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:1877-1901, 2020. 2 +[6] Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, and Dong Xu. 3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16464-16473, 2022. 2, 6 +[7] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In European Conference on Computer Vision, pages 202-221. Springer, 2020. 2, 5, 7 +[8] Dave Zhenyu Chen, Qirui Wu, Matthias Nießner, and Angel X Chang. D3net: A speaker-listener architecture for semi-supervised dense captioning and visual grounding in rgb-d scans. arXiv preprint arXiv:2112.01551, 2021. 
2, 6 +[9] Sijin Chen, Hongyuan Zhu, Xin Chen, Yinjie Lei, Gang Yu, and Tao Chen. End-to-end 3d dense captioning with vote2cap-detr. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11124-11133, 2023. 2, 6 +[10] Sijin Chen, Hongyuan Zhu, Mingsheng Li, Xin Chen, Peng Guo, Yinjie Lei, Gang Yu, Taihao Li, and Tao Chen. Vote2cap-detr++: Decoupling localization and describing for end-to-end 3d dense captioning. arXiv preprint arXiv:2309.02999, 2023. 2 +[11] Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, and Tao Chen. Ll3da: Visual interactive instruction tuning for omni-3d understanding reasoning and planning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26428-26438, 2024. 1, 3, 4, 6 + +[12] Yilun Chen, Shuai Yang, Haifeng Huang, Tai Wang, Ruiyuan Lyu, Runsen Xu, Dahua Lin, and Jiangmiao Pang. Grounded 3d-llm with referent tokens. arXiv preprint arXiv:2405.10370, 2024. 3, 6, 7 +[13] Zhenyu Chen, Ali Gholami, Matthias Nießner, and Angel X Chang. Scan2cap: Context-aware dense captioning in rgb-d scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3193-3203, 2021. 2, 5, 6, 7, 8 +[14] Zhenyu Chen, Ronghang Hu, Xinlei Chen, Matthias Nießner, and Angel X Chang. Unit3d: A unified transformer for 3d dense captioning and visual grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18109-18119, 2023. 6 +[15] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240): 1-113, 2023. 2 +[16] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3075-3084, 2019. 3 +[17] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017. 5 +[18] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. 1 +[19] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, pages 226-231, 1996. 7 +[20] Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, and Wenhan Xiong. Scene-llm: Extending language model for 3d visual understanding and reasoning. arXiv preprint arXiv:2403.11401, 2024. 3, 6 +[21] Shuting He, Henghui Ding, Xudong Jiang, and Bihan Wen. Segpoint: Segment any point cloud via large language model. In European Conference on Computer Vision, pages 349-367, 2024. 1, 2, 3, 6, 7 +[22] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. 2 +[23] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36:20482-20494, 2023. 1, 3, 6, 7 + +[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
5, 6 +[25] Haifeng Huang, Yilun Chen, Zehan Wang, Rongjie Huang, Runsen Xu, Tai Wang, Luping Liu, Xize Cheng, Yang Zhao, Jiangmiao Pang, et al. Chat-scene: Bridging 3d scene and large language models with object identifiers. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 1, 3, 5, 6, 7, 8 +[26] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. arXiv preprint arXiv:2311.12871, 2023. 6 +[27] Kuan-Chih Huang, Xiangtai Li, Lu Qi, Shuicheng Yan, and Ming-Hsuan Yang. Reason3d: Searching and reasoning 3d segmentation via large language model. arXiv preprint arXiv:2405.17427, 2024. 2 +[28] Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu. Text-guided graph neural networks for referring 3d instance segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1610-1618, 2021. 2, 6 +[29] Shijia Huang, Yilun Chen, Jiaya Jia, and Liwei Wang. Multiview transformer for 3d visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15524-15533, 2022. 2 +[30] Jitesh Jain, Jiachen Li, Mang Tik Chiu, Ali Hassani, Nikita Orlov, and Humphrey Shi. Oneformer: One transformer to rule universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2989-2998, 2023. 4 +[31] Yang Jiao, Shaoxiang Chen, Zequn Jie, Jingjing Chen, Lin Ma, and Yu-Gang Jiang. More: Multi-order relation mining for dense captioning in 3d scenes. arXiv preprint arXiv:2203.05203, 2022. 2, 6 +[32] Bu Jin, Yupeng Zheng, Pengfei Li, Weize Li, Yuhang Zheng, Sujie Hu, Xinyu Liu, Jinwei Zhu, Zhijie Yan, Haiyang Sun, et al. Tod3cap: Towards 3d dense captioning in outdoor scenes. In European Conference on Computer Vision, pages 367-384. Springer, 2024. 
2 +[33] Zhao Jin, Munawar Hayat, Yuwei Yang, Yulan Guo, and Yinjie Lei. Context-aware alignment and mutual masking for 3d-language pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10984-10994, 2023. 2, 6 +[34] Minjung Kim, Hyung Suk Lim, Soonyoung Lee, Bumsoo Kim, and Gunhee Kim. Bi-directional contextual attention for 3d dense captioning. In European Conference on Computer Vision, pages 385-401. Springer, 2024. 2 +[35] Maxim Kolodiazhnyi, Anna Vorontsova, Anton Konushin, and Danila Rukhovich. Oneformer3d: One transformer for unified point cloud segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20943-20953, 2024. 3 +[36] Xin Lai, Yuhui Yuan, Ruihang Chu, Yukang Chen, Han Hu, and Jiaya Jia. Mask-attention-free transformer for 3d instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3693-3703, 2023. 3 +[37] Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. Lisa: Reasoning segmentation via large language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9579-9589, 2024. 5 +[38] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4558-4567, 2018. 3 +[39] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. 1, 2, 3 +[40] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. 1, 2, 3 +[41] Haisong Liu, Yao Teng, Tao Lu, Haiguang Wang, and Limin Wang. Sparsebev: High-performance sparse 3d object detection from multi-camera videos.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18580-18590, 2023. 4 +[42] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024. 1, 2, 5, 6 +[43] LMSYS.org. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://lmsys.org, 2023. 1, 6 +[44] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. arXiv preprint arXiv:2210.07474, 2022. 2, 5, 7, 8 +[45] Maria Parelli, Alexandros Delitzas, Nikolas Hars, Georgios Vlassis, Sotirios Anagnostidis, Gregor Bachmann, and Thomas Hofmann. Clip-guided vision-language pre-training for question answering in 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5606-5611, 2023. 2 +[46] Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al. Openscene: 3d scene understanding with open vocabularies. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 815-824, 2023. 2, 5 +[47] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017. 3, 4 +[48] Zhipeng Qian, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun. X-refseg3d: Enhancing referring 3d instance segmentation via structured cross-modal graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4551-4559, 2024. 2, 6 +[49] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
1 + +[50] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 5 +[51] David Rozenberszki, Or Litany, and Angela Dai. Language-grounded indoor 3d semantic segmentation in the wild. In European Conference on Computer Vision, pages 125-141, 2022. 5 +[52] Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, and Bastian Leibe. Mask3d: Mask transformer for 3d semantic instance segmentation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 8216-8223. IEEE, 2023. 4, 7 +[53] Jiahao Sun, Chunmei Qing, Junpeng Tan, and Xiangmin Xu. Superpoint transformer for 3d scene instance segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2393-2401, 2023. 3, 4 +[54] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 2 +[55] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 1, 2 +[56] Ozan Unal, Christos Sakaridis, Suman Saha, and Luc Van Gool. Four ways to improve verbo-visual fusion for dense 3d visual grounding. In European Conference on Computer Vision, pages 196–213. Springer, 2024. 7 +[57] Heng Wang, Chaoyi Zhang, Jianhui Yu, and Weidong Cai. Spatiality-guided transformer for 3d dense captioning on point clouds. arXiv preprint arXiv:2204.10688, 2022. 6 +[58] Changli Wu, Yiwei Ma, Qi Chen, Haowei Wang, Gen Luo, Jiayi Ji, and Xiaoshuai Sun. 
3d-stmn: Dependency-driven superpoint-text matching network for end-to-end 3d referring expression segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5940-5948, 2024. 2, 6 +[59] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. arXiv preprint arXiv:2308.16911, 2023. 2 +[60] Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo. Sat: 2d semantics assisted training for 3d visual grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1856-1866, 2021. 2 +[61] Zhihao Yuan, Xu Yan, Yinghong Liao, Yao Guo, Guanbin Li, Shuguang Cui, and Zhen Li. X-trans2cap: Cross-modal knowledge transfer using transformer for 3d dense captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8563-8573, 2022. 2 +[62] Sha Zhang, Di Huang, Jiajun Deng, Shixiang Tang, Wanli Ouyang, Tong He, and Yanyong Zhang. Agent3d-zero: + +An agent for zero-shot 3d understanding. arXiv preprint arXiv:2403.11835, 2024. 3 +[63] Tao Zhang, Xiangtai Li, Hao Fei, Haobo Yuan, Shengqiong Wu, Shunping Ji, Chen Change Loy, and Shuicheng Yan. Omg-llava: Bridging image-level, object-level, pixel-level reasoning and understanding. arXiv preprint arXiv:2406.19389, 2024. 2 +[64] Yiming Zhang, ZeMing Gong, and Angel X Chang. Multi3drefer: Grounding text description to multiple 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15225-15236, 2023. 2, 5, 6, 7 +[65] Yufeng Zhong, Long Xu, Jiebo Luo, and Lin Ma. Contextual modeling for 3d dense captioning on point clouds. arXiv preprint arXiv:2210.03925, 2022. 8 +[66] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 
1 +[67] Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pre-trained transformer for 3d vision and text alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2911-2921, 2023. 2, 6, 7 \ No newline at end of file diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/images.zip b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2397d4ab0290b5aacf57e309f3b29bbaff7548bd --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a2805a78ed8fe06b0b7910e44711c5cdcd4860ab92827751eb393e26d825cfa +size 458712 diff --git a/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/layout.json b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5480d539fde95a3a29e4b70bf3a4fae537111c87 --- /dev/null +++ b/CVPR/2025/3D-LLaVA_ Towards Generalist 3D LMMs with Omni Superpoint Transformer/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:082978eeecc84b6af6819895ceb98c35897d03ada4d7ad1bdd257feebe45069f +size 361439 diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_content_list.json b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b4f2611a89936e5cf9655b0d3bf3f42fc224c79b --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:53cfb673aa893124166d0f3a180ef4ca79b9a2051faf96d16a064b9ae3ad7d52 +size 73087 diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_model.json b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..791e24ddad29a56b5ae64f14e6559546277042ce --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6e4145755887d5a475045ab37524d50a40442bc04b4975c7330643c6236bd18 +size 91943 diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_origin.pdf b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..77b1c11dde0502cf864836f1150cda8ec6e7170a --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/1e7b2619-295e-41d1-97fc-d9acf2fa341e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e536f6bea8afc2ac682ac34ed52209a45f8988549784f65a71958b2c3091ff12 +size 1354145 diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/full.md b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5a641ea0262637ec383e2301436ef70a22157586 --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/full.md @@ -0,0 +1,294 @@ +# 3D-MVP: 3D Multiview Pretraining for Manipulation + +Shengyi Qian $^{1,2*}$ , Kaichun Mo $^{1}$ , Valts Blukis $^{1}$ , David F. 
Fouhey $^{3}$ , Dieter Fox $^{1}$ , Ankit Goyal $^{1}$ $^{1}$ NVIDIA $^{2}$ University of Michigan $^{3}$ New York University + +https://jasonqsy.github.io/3DMVP + +# Abstract + +Recent works have shown that visual pretraining on egocentric datasets using masked autoencoders (MAE) can improve generalization for downstream robotics tasks. However, these approaches pretrain only on 2D images, while many robotics applications require 3D scene understanding. In this work, we propose 3D-MVP, a novel approach for 3D Multi-View Pretraining using masked autoencoders. We leverage the Robotic View Transformer (RVT), which uses a multi-view transformer to understand the 3D scene and predict gripper pose actions. We split RVT's multi-view transformer into a visual encoder and an action decoder, and pretrain its visual encoder using masked autoencoding on large-scale 3D datasets such as Objaverse. We evaluate 3D-MVP on a suite of virtual robot manipulation tasks and demonstrate improved performance over baselines. Our results suggest that 3D-aware pretraining is a promising approach to improve generalization of vision-based robotic manipulation policies. + +# 1. Introduction + +Building learning-based manipulation systems is challenging due to the unavailability of diverse large-scale robotics data. To address this, there has been significant interest in using computer vision techniques to learn generalizable visual representations without robotics-focused data, for example by self-supervised pre-training on image datasets. In particular, inspired by the success of masked language modeling in Natural Language Processing (NLP), several recent works have explored masked autoencoding (MAE) for visual representation learning [3, 13, 22]. MAE learns to reconstruct randomly masked patches in an input image, encouraging the model to learn high-level semantic features.
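The patch-masking step at the heart of MAE can be sketched in a few lines; the 16x16 patch size and 75% mask ratio below are common MAE defaults assumed for illustration (not values taken from this paper), and `mask_patches` is a hypothetical helper name.

```python
import numpy as np

def mask_patches(image, patch=16, mask_ratio=0.75, rng=None):
    """Split an H x W x C image into non-overlapping patches and
    randomly hide a fraction of them, MAE-style."""
    if rng is None:
        rng = np.random.default_rng(0)
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    patches = (image[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, -1))          # (num_patches, patch*patch*C)
    perm = rng.permutation(len(patches))
    n_mask = int(mask_ratio * len(patches))
    masked_idx, visible_idx = perm[:n_mask], perm[n_mask:]
    # Encoder input vs. reconstruction targets for the decoder.
    return patches[visible_idx], patches[masked_idx], masked_idx

vis, tgt, idx = mask_patches(np.zeros((224, 224, 3), dtype=np.float32))
print(vis.shape, tgt.shape)   # 49 visible and 147 masked 768-dim patches
```

The reconstruction loss is then computed only on the masked patches, which is what pushes the encoder toward high-level features rather than local copying.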
When applied to egocentric videos from human demonstrations, MAE has been shown to learn representations that generalize well to downstream robotics tasks such as object manipulation [9, 33, 42]. + +![](images/f2f949c8bdbc17d085e94aec9df6a45872edada6d3fc37c330c80d28edbc6c1e.jpg) +(a) 2D Pretraining +Figure 1. Comparison of 2D vs. 3D pretraining for manipulation. (Left): In 2D pretraining [42], the model is trained to perform MAE reconstruction on single images from interaction videos [17, 39]. The encoder is then used for downstream manipulation tasks. However, the input can only be a single 2D image due to the pretraining. (Right): We propose 3D Multi-View Pretraining (3D-MVP), which uses multiple orthogonal RGB-D views of a 3D model. The model is tasked with reconstructing the masked views by leveraging information across different perspectives, enabling it to learn more robust 3D spatial representations. This multi-view approach improves downstream performance in robot manipulation tasks by capturing richer scene understanding compared to 2D-only pretraining. It is also compatible with state-of-the-art 3D manipulation methods such as RVT [16], which take 3D inputs. + +![](images/8b02a75192ee770357fc4b693e5860d9b0f79aba89c520e11c87144bc3a6d633.jpg) +(b) 3D Multiview Pretraining (Ours) + +However, current MAE approaches for robotics pretrain only on 2D images, ignoring the 3D structure of the scene. Prior works in robotics have shown that methods that build an explicit 3D visual representation of the environment are more sample-efficient and generalize better than those with only 2D visual representations [16, 32, 40]. Hence, in this work, we explore how we can bring the benefits of visual pretraining to robot manipulation methods that reason with explicit 3D representations. + +We propose 3D-MVP, a method for 3D Multi-View Pretraining for robot manipulation. Our approach builds upon recent advances in robot manipulation.
Specifically, we use the Robotic View Transformer (RVT), a state-of-the-art 3D manipulation method [16]. RVT takes as input a point cloud of the scene and builds a 3D representation of the scene, using a set of fixed orthogonal "virtual" RGB-D images. These RGB-D images are fed through a transformer model that fuses information across views and predicts robot actions in the form of future gripper poses. + +We choose RVT over other manipulation methods that build a 3D representation of the scene (e.g., PerAct [40] and Act3D [15]), because those methods feed either voxels or point clouds into a transformer model, while RVT uses orthogonal RGB-D images. The view-based representation makes RVT a suitable candidate for MAE pretraining. + +We pretrain the multi-view representation in RVT by attaching it to a lightweight MAE decoder. We then randomly mask out a subset of the visual tokens for each view and train the model to reconstruct the multi-view RGB-D images. After the pre-training, we discard the decoder. We then fine-tune the visual encoder along with RVT's action decoder on various manipulation tasks. + +In order to learn generalizable and robust visual features, we leverage recently created large-scale 3D scene datasets, such as Objaverse and 3D-FRONT [10, 11, 14, 34]. These datasets contain high-quality 3D scans of indoor environments along with realistic textures and materials. We use these datasets to create sets of orthogonal views that are similar to the 3D representation used in RVT. These sets of orthogonal views are then used for pretraining the visual encoder in RVT. We conduct experiments to ablate the various design choices available for pre-training. Specifically, we study how masking strategies, dataset choice, and dataset size affect downstream manipulation performance. + +Finally, we evaluate 3D-MVP on the RLBench benchmark [24], a suite of manipulation tasks in a simulated environment.
We find that pretraining the RVT encoder with 3D-MVP leads to significant improvements over training from scratch or pretraining with 2D MAE. These results inform how we can advance the state of the art in robotic manipulation with the help of pretraining. We further evaluate 3D-MVP on the Colosseum benchmark [32], which tests a system's generalization across various unseen variations of manipulation tasks, such as changes in object size, color, and lighting. We find that the proposed 3D-MVP method is more robust across these variations than RVT trained from scratch. + +In summary, our contributions are three-fold. + +- We propose 3D-MVP, a novel approach for 3D multi-view pretraining using masked autoencoding on large-scale 3D datasets. +- We study how various design choices in pretraining, like masking strategy, dataset combination and sizes, affect downstream object manipulation performance. +- We demonstrate that pretraining with 3D-MVP leads to significant improvements on object manipulation tasks. We also show that 3D-MVP enables training policies that are more robust to variations such as size, texture, and lighting on the COLOSSEUM benchmark. + +We hope our work can inform future studies about pretraining for robotic applications. + +# 2. Related Work + +Our work builds upon several active areas of research, including self-supervised learning, visual pretraining for robotics, and learning robotic manipulation from demonstrations. + +Self-supervised learning. Self-supervised learning aims to learn useful representations from unlabeled data by solving pretext tasks that do not require manual annotation. Early work in this area focused on designing pretext tasks for 2D images, such as solving jigsaw puzzles [31], contrastive learning [7, 21], or joint embedding approaches [1, 2, 5, 6, 18, 45]. Most related to our work is the masked autoencoder (MAE) approach proposed by He et al.
[22], which learns to reconstruct randomly masked patches in an image. MAE has been shown to learn transferable representations for object detection and segmentation tasks. Furthermore, Bachmann et al. demonstrate that MAE pretraining can be extended to different modalities such as semantics and depth [3]. In this work, we extend the MAE approach to multi-view 3D scenes, enabling us to learn 3D-aware representations that are useful for robotic manipulation tasks. Unlike MultiMAE, which learns semantics and depth through direct supervision, 3D-MVP aims to learn a 3D-aware representation from multi-view images. + +Visual pretraining for robotics. Visual pretraining has demonstrated impressive generalization ability on computer vision tasks. Therefore, prior works have explored whether it works for robotics tasks as well. Specifically, the robotics community has trended towards learning representations using state-of-the-art self-supervised vision algorithms on diverse interaction datasets [8, 17, 39], and finetuning the network on robotics tasks [9, 28-30, 33, 37, 42]. 3D-MVP follows the same procedure. However, existing robotics pretraining approaches typically learn a 2D visual encoder (e.g., ResNet [20] or ViT [12]), which we find inferior to manipulation policies that perform explicit 3D modeling (e.g., RVT [16], Act3D [15]). Migrating a pretrained ViT to 3D manipulation policies is nontrivial, since such policies do not have a 2D visual encoder. In this paper, we propose 3D-MVP, which performs 3D-aware pretraining for 3D manipulation policies, to fill this gap. + +Learning manipulation from demonstrations. Recent work has explored using transformers for multi-task manipulation policies that predict robot actions from visual and language inputs [19, 27, 38, 40, 41]. End-to-end models like RT-1 [4], GATO [35], and InstructRL [27] directly predict 6-DoF end-effector poses but require many demonstrations to learn spatial reasoning and generalize to new scenes. To better handle 3D scenes, PerAct [40] and C2F-ARM [25] voxelize the workspace and detect the 3D voxel containing the next end-effector pose. However, precise pose prediction requires high-resolution voxels, which are computationally expensive. Recently, RVT [16] proposes a multi-view transformer that attends over point cloud features from multiple camera views to predict actions. This avoids explicit voxelization and enables faster training and inference than PerAct. Act3D [15] represents the scene as a continuous 3D feature field and samples points to featurize with attention, allowing adaptive resolution. GNFactor [44] jointly optimizes a generalizable neural field for reconstruction and a Perceiver for decision-making. In contrast, our proposed 3D-MVP learns 3D scene representations through masked autoencoding pretraining on a large dataset of 3D object models. This pretraining enables 3D-MVP to build a rich understanding of 3D geometry and semantics prior to finetuning on downstream manipulation tasks. Compared to RVT and Act3D, which train from scratch on target tasks, 3D-MVP's pretraining leads to improved performance, sample efficiency, and generalization. Unlike GNFactor, which relies on a pretrained VLM to inject semantics, 3D-MVP directly learns 3D semantic features from object models. + +# 3. Approach + +In this section, we first provide essential background on RVT, then define our method 3D-MVP, which learns 3D-aware representations for robotic manipulation using masked autoencoding on multi-view 3D scenes, and finally describe how we finetune the method on downstream manipulation tasks. Figure 2 gives an overview of our approach. + +# 3.1. Background on Robotic View Transformer (RVT) + +RVT is a state-of-the-art object manipulation method [16]. It creates an explicit 3D representation by rendering orthogonal virtual views of the scene. Please refer to Goyal et al. [16] for a full explanation.
Here, we provide a brief overview and define the notation.

RVT takes a point cloud of the robot workspace as input (Fig. 2). RVT is agnostic to the poses of the RGB-D cameras used to construct the input point cloud. For example, it can be obtained from a combination of third-person cameras around the workspace, head cameras, or wrist cameras. RVT then renders this point cloud using a set of five "virtual" cameras placed at orthogonal locations around the robot: at the top, left, right, front, and back of the robot workspace with respect to the robot. Each virtual image has 10 channels: RGB (3 channels), depth (1 channel), 3D point coordinates in the world frame (3 channels), and 3D point coordinates in the camera sensor frame (3 channels). We denote the virtual images captured from the different virtual camera poses $\{p_1,\dots,p_5\}$ as $\{I_1,\dots,I_5\}$.

These virtual images are then tokenized into $N$ patch embeddings [12], flattened into a sequence of $5N$ tokens spanning all images, and fed to a multi-view transformer. The goal of RVT's multi-view transformer is to learn a function $f_{\theta}$ that maps the virtual images as well as language instructions $L$ to the 6-DoF end-effector pose and the gripper's binary open or close state:

$$
a_{\text{pos}}, a_{\text{rot}}, a_{\text{open}} = f_{\theta}\left(L, I_{1}, p_{1}, \dots, I_{5}, p_{5}\right) \tag{1}
$$

RVT is trained end-to-end from scratch on trajectories sampled from a simulator or real robots. While RVT has shown state-of-the-art results on 3D manipulation, it does not generalize to novel objects and scenes, since it is trained from scratch and overfits to the training demonstrations. In the next section, we describe our novel approach, 3D-MVP, and how we modify and pretrain the RVT encoder using 3D-MVP to learn a generalizable representation.

# 3.2. 3D Multi-View Pretraining (3D-MVP)

Architecture change to RVT.
The key idea is to pretrain the RVT visual encoder $f_{\theta}$ using masked autoencoding on large-scale 3D scene datasets. However, RVT's multi-view transformer $f_{\theta}$ is an end-to-end model that takes language instructions as input and produces robot actions. Existing robotics data with language and actions is limited in terms of the diversity of 3D scenes, and 3D scene datasets do not typically contain robotics annotations.

To enable pretraining on 3D scene datasets, we first split the multi-view transformer $f_{\theta}$ into an input renderer $\mathcal{R}$, an encoder network $\mathcal{E}$, and an action decoder network $\mathcal{D}$. The renderer $\mathcal{R}$ maps the posed input images into the five virtual images by constructing a point cloud and rendering it from the five views:

$$
\{I_{1}, \dots, I_{5}\} = \mathcal{R}\left(I_{1}, p_{1}, \dots, I_{5}, p_{5}\right) \tag{2}
$$

The encoder $\mathcal{E}$ maps the virtual images into a latent embedding $z\in \mathbb{R}^{5N\times H}$ (where $H$ is the hidden size) and the action decoder $\mathcal{D}$ maps $z$ to the robotic action space, i.e.,

$$
a_{\text{pos}}, a_{\text{rot}}, a_{\text{open}} = \mathcal{D}(L, z), \quad z = \mathcal{E}\left(I_{1}, \dots, I_{5}\right), \tag{3}
$$

where tokenization of the virtual images into $5N$ patch embeddings is subsumed into $\mathcal{E}$. Both the encoder $\mathcal{E}$ and the decoder $\mathcal{D}$ are multi-view transformers. We keep the decoder lightweight to focus the pretraining on the encoder.

Pretraining encoder $\mathcal{E}$. Our visual pretraining focuses on learning a generalizable representation for the encoder $\mathcal{E}$. We extract point clouds from Objaverse and render each point cloud using the same five "virtual" cameras.
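To make the virtual-view rendering concrete, the sketch below orthographically projects a colored point cloud into one 10-channel virtual image (RGB, depth, world-frame xyz, camera-frame xyz, as specified above). The axis conventions, resolution, and point-splatting z-buffer are illustrative assumptions, not RVT's actual renderer.

```python
import numpy as np

# Hypothetical sketch of rendering one RVT-style virtual view. The paper
# specifies five orthogonal views and 10 channels per image (RGB, depth,
# world-frame xyz, camera-frame xyz); the axis conventions, resolution,
# and z-buffer below are illustrative assumptions.

VIEW_AXES = {  # view -> (image-plane axes, depth axis, depth sign)
    "top":   ((0, 1), 2, -1),
    "front": ((0, 2), 1, -1),
    "back":  ((0, 2), 1, +1),
    "left":  ((1, 2), 0, +1),
    "right": ((1, 2), 0, -1),
}

def render_virtual_view(points, colors, view, res=220, bound=1.0):
    """Project points in [-bound, bound]^3 into a (res, res, 10) image."""
    (u_ax, v_ax), d_ax, sign = VIEW_AXES[view]
    img = np.zeros((res, res, 10), dtype=np.float32)
    zbuf = np.full((res, res), np.inf, dtype=np.float32)
    # Map point coordinates in [-bound, bound] to pixel indices.
    uv = ((points[:, [u_ax, v_ax]] + bound) / (2 * bound) * (res - 1)).astype(int)
    uv = np.clip(uv, 0, res - 1)
    depth = sign * points[:, d_ax] + bound  # distance along the view axis
    for (u, v), d, p, c in zip(uv, depth, points, colors):
        if d < zbuf[v, u]:          # keep the point nearest to the camera
            zbuf[v, u] = d
            cam_xyz = [p[u_ax], p[v_ax], d]
            img[v, u] = np.concatenate([c, [d], p, cam_xyz])
    return img

# Toy usage: two colored points rendered from the top view.
pts = np.array([[0.0, 0.0, 0.5], [0.5, -0.5, 0.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
top_view = render_virtual_view(pts, cols, "top")
```

The five calls with the different view names would produce the image set $\{I_1,\dots,I_5\}$ that the encoder consumes.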
![](images/e371470448c9c03d21eabfae644c408f7e0287ebb34f58bcdaff5602003d6aee.jpg)

![](images/4047aacd8bbcb48414cc33fa757c80cb789a50828bb70f93905956cfb19bafb7.jpg)
Figure 2. Overview of 3D-MVP. (a) We first pretrain a multi-view 3D transformer using a masked autoencoder on multi-view RGB-D images. (b) We then finetune the pretrained multi-view 3D transformer on manipulation tasks. Since the MVT is pretrained, the learned manipulation policy generalizes better; for example, it is more robust to changes in texture, size, and lighting.

Given the 5 virtual images $\{I_1,\dots,I_5\}$, we randomly mask out a subset of the visual tokens for each view, and denote the masked inputs as $\{I_1^{\prime},\dots,I_5^{\prime}\}$. We use the encoder to extract the embedding $z$ from the masked inputs,

$$
z = \mathcal{E}\left(\{I_{1}^{\prime}, \dots, I_{5}^{\prime}\}\right) \tag{4}
$$

We use a separate, lightweight MAE decoder $\mathcal{D}_{MAE}$ to reconstruct the original images $\{I_1,\dots,I_5\}$ from the embedding $z$:

$$
\{\tilde{I}_{1}, \dots, \tilde{I}_{5}\} = \mathcal{D}_{MAE}(z) \tag{5}
$$

The encoder $\mathcal{E}$ and decoder $\mathcal{D}_{MAE}$ are trained end-to-end using a pixel-wise reconstruction loss:

$$
\mathcal{L}_{\text{recon}} = \frac{1}{5WH} \sum_{i=1}^{5} \sum_{p=1}^{W \cdot H} \left\| [I_{i}]_{(p)} - [\tilde{I}_{i}]_{(p)} \right\|_{2}^{2}, \tag{6}
$$

where $[I]_{(p)}$ indexes the image $I\in \mathbb{R}^{W\times H\times C}$ at pixel $p$. By jointly learning to reconstruct all five images and varying the masking patterns during training, we hypothesize that the encoder will learn to reason across the multiple views and extract 3D-aware features that are robust to occlusions and viewpoint changes. To inform future work, we study how various masking strategies and dataset combinations affect the final downstream performance (see Tab. 2).
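The masking and reconstruction objective above can be sketched as follows. The 0.75 masking ratio and the five views follow the paper; the linear maps standing in for the transformer encoder $\mathcal{E}$ and MAE decoder $\mathcal{D}_{MAE}$, as well as the token and hidden dimensions, are illustrative assumptions.

```python
import numpy as np

# Sketch of the multi-view masked-autoencoding objective (Eqs. 4-6).
# Linear maps stand in for the transformer encoder E and MAE decoder
# D_MAE; only the per-view token masking and the pixel-wise L2 loss
# mirror the paper. Token/hidden dimensions are illustrative.

rng = np.random.default_rng(0)
N_VIEWS, N_TOKENS, TOKEN_DIM, HIDDEN = 5, 484, 1000, 1024

def mask_views(views, mask_ratio=0.75):
    """Zero out a random subset of patch tokens independently per view."""
    keep = rng.random(views.shape[:2]) >= mask_ratio  # (n_views, n_tokens)
    masked = np.where(keep[..., None], views, 0.0)
    return masked, keep

def reconstruction_loss(originals, reconstructions):
    """Mean squared error over all pixels of all five views (Eq. 6,
    additionally averaged over channels)."""
    return float(np.mean((originals - reconstructions) ** 2))

W_enc = rng.standard_normal((TOKEN_DIM, HIDDEN)) * 0.01   # stand-in for E
W_dec = rng.standard_normal((HIDDEN, TOKEN_DIM)) * 0.01   # stand-in for D_MAE

views = rng.standard_normal((N_VIEWS, N_TOKENS, TOKEN_DIM))
masked, keep = mask_views(views)
z = masked.reshape(N_VIEWS * N_TOKENS, TOKEN_DIM) @ W_enc  # z in R^{5N x H}
recon = (z @ W_dec).reshape(views.shape)
loss = reconstruction_loss(views, recon)
```

Resampling the `keep` pattern every step is what varies the masking across training, which is the mechanism the paper hypothesizes forces cross-view reasoning.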
Implementation details. We implement the pretraining using the PyTorch library and train on 8 NVIDIA V100s. We use the Objaverse dataset for pretraining [10], which contains a total of 800K+ 3D objects with realistic textures and materials. We sample 200K high-quality 3D models for pretraining and 1,000 models for validation. We do not construct a large-scale validation set, since the validation is only qualitative; the pretrained representation is mainly evaluated on downstream manipulation tasks.

We use a patch size of $10 \times 10$ to tokenize images. For the encoder $\mathcal{E}$, we use a multi-view transformer with 8 layers, 8 attention heads, and a hidden dimension of 1024. For the decoder, we use a multi-view transformer with 2 layers and 8 attention heads. We train the model for 15 epochs using the AdamW optimizer with a learning rate of 0.0001 and a weight decay of 0.01. We use a batch size of 3 and a masking probability of 0.75.

# 3.3. Finetuning on Downstream Manipulation Tasks

To adapt the pretrained encoder $\mathcal{E}$ for a specific manipulation task, we finetune it along with the action decoder $\mathcal{D}$ on a dataset of manipulation demonstrations. A demonstration consists of tuples of virtual images, language goals, and actions. During training, we randomly sample a tuple and supervise the network to predict the action given the virtual images and the language goal.

Implementation details. For finetuning on manipulation demonstrations, we follow the standard practice [16, 32] to ensure fair comparison with baselines with 2D pretraining and without pretraining. We use 8 NVIDIA V100 (32GB)

![](images/5f48ba683bb9983ed133cfc0b4d48b04c98b2457f82a505a3d287dc01cb343c4.jpg)
Masked Input

![](images/a086de3af4484cc204c2c132d4b3e38767d609dc1ce892148407b001b66a6d00.jpg)
Prediction
Figure 3. MAE Reconstruction results on Objaverse.
Our pretrained multi-view transformer generalizes to unseen object instances and reconstructs multi-view images from their masked versions.

![](images/178fb4a5fb15f4c743806d44af6ecc7db87116ac328ffe170c06c3b506676e10.jpg)
Ground Truth

for finetuning and a single V100 for evaluation. The learning rate is 1e-4 with 2000 warmup steps and cosine learning rate decay until 1e-6. We use Lamb [43] as the optimizer and the batch size is 3. We train the model for 15 epochs on both the RLBench and COLOSSEUM training sets.

# 4. Experiments

In this section, we evaluate the effectiveness of 3D-MVP for robotic manipulation tasks, and aim to answer the following questions: (1) Does 3D-aware pretraining improve manipulation performance compared to training from scratch or 2D pretraining? (2) Does 3D-aware pretraining improve robustness to environmental variations encountered in manipulation tasks? (3) How do various design choices during pretraining affect downstream manipulation performance? To answer these questions, we evaluate 3D-MVP on two benchmarks: RLBench [24] for general manipulation performance and COLOSSEUM [32] for systematic evaluation of robustness to environmental perturbations.

# 4.1. Validating 3D Masked Autoencoding

We validate whether masked autoencoding works in our setup with multi-view images from 3D assets. Specifically, we check whether the pretrained multi-view transformer generalizes to unseen 3D assets from Objaverse [10]. We validate this qualitatively in Figure 3. We find that the pretrained 3D-MVP network achieves high-fidelity reconstructions despite $75\%$ of the input points being masked, suggesting that 3D-MVP learns meaningful 3D representations through this pretext task.

# 4.2. Results on RLBench

We then evaluate whether our proposed pretraining improves manipulation performance. The experiments are conducted on RLBench, a simulated platform [24].

Setup.
RLBench [24] is a popular simulation benchmark for learning manipulation policies. Each task requires the robot to perform a specific action, such as picking up an object, opening a drawer, or stacking blocks. We follow the simulation setup of PerAct [40] and RVT [16] and use CoppeliaSim [36] to simulate 18 RLBench tasks; the 18 tasks are the same as in PerAct and RVT. A Franka Panda robot with a parallel gripper is controlled to complete the tasks. The visual observations are captured from four noiseless RGB-D cameras positioned at the front, left shoulder, right shoulder, and wrist, with a resolution of $128 \times 128$.

Baselines. We compare 3D-MVP with the following baselines on RLBench:

(1) Image-BC [26] is an image-to-action behavior cloning approach which takes the visual observation and predicts the corresponding action. We compare with two variants which use a CNN and a ViT as the visual encoder, and call them Image-BC (CNN) and Image-BC (ViT), respectively;
(2) C2F-ARM-BC [25] is another behavior cloning approach which converts RGB-D observations to multi-resolution voxels and predicts the next key-frame action.
(3) PerAct [40]: a multi-task Perceiver transformer for robotic manipulation. The inputs are point clouds with color features, and PerAct uses a Perceiver IO network to compress them to a fixed dimension [23].
(4) RVT [16]: the same Robotic View Transformer architecture as 3D-MVP but trained from scratch on the downstream tasks. We do not compare with 2D pretraining methods since they do not work well on RLBench [32].

Metrics. We report the task success rate for each individual task, as well as the average success rate.

Results. We show quantitative results in Table 1. On average success rate, 3D-MVP outperforms the existing state-of-the-art methods, PerAct [40] and RVT [16]. It demonstrates the effectiveness of bringing visual pretraining for
| Models | Average Success | Close Jar | Drag Stick | Insert Peg | Meat off Grill | Open Drawer | Place Cups | Place Wine | Push Buttons |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image-BC (CNN) [26, 40] | 1.3 | 0.0 | 0.0 | 0.0 | 0.0 | 4.0 | 0.0 | 0.0 | 0.0 |
| Image-BC (ViT) [26, 40] | 1.3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| C2F-ARM-BC [25, 40] | 20.1 | 24.0 | 24.0 | 4.0 | 20.0 | 20.0 | 0.0 | 8.0 | 72.0 |
| PerAct [40] | 49.4 | 55.2 | 89.6 | 5.6 | 70.4 | 88.0 | 2.4 | 44.8 | 92.8 |
| RVT [16] | 62.9 | 52.0 | 99.2 | 11.2 | 88.0 | 71.2 | 4.0 | 91.0 | 100 |
| 3D-MVP (Ours) | 67.5 | 76.0 | 100 | 20.0 | 96.0 | 84.0 | 4.0 | 100 | 96.0 |

| Models | Put in Cupboard | Put in Drawer | Put in Safe | Screw Bulb | Slide Block | Sort Shape | Stack Blocks | Stack Cups | Sweep to Dustpan | Turn Tap |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image-BC (CNN) [26, 40] | 0.0 | 8.0 | 4.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 8.0 |
| Image-BC (ViT) [26, 40] | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 16.0 |
| C2F-ARM-BC [25, 40] | 0.0 | 4.0 | 12.0 | 8.0 | 16.0 | 8.0 | 0.0 | 0.0 | 0.0 | 68.0 |
| PerAct [40] | 28.0 | 51.2 | 84.0 | 17.6 | 74.0 | 16.8 | 26.4 | 2.4 | 52.0 | 88.0 |
| RVT [16] | 49.6 | 88.0 | 91.2 | 48.0 | 81.6 | 36.0 | 28.8 | 26.4 | 72.0 | 93.6 |
| 3D-MVP (Ours) | 60.0 | 100.0 | 92.0 | 60.0 | 48.0 | 28.0 | 40.0 | 36.0 | 80.0 | 96.0 |
Table 1. Results on RLBench [24]. We report the task completion success rate for 18 RLBench tasks, as well as the average success rate. 3D-MVP reaches state-of-the-art performance on the benchmark. The pretraining is mainly helpful for tasks of medium difficulty.

a manipulation policy with explicit 3D modeling. We find the improvement of 3D-MVP mainly comes from tasks of medium difficulty, such as "insert peg", "put in cupboard", and "stack blocks". If a task is so hard that PerAct/RVT cannot solve any instance of it, the pretraining does not help. If a task is so easy that PerAct/RVT has already reached more than $90\%$ success rate, the pretraining has limited room for improvement.

# 4.3. Results on COLOSSEUM

After validating that our proposed pretraining is helpful for manipulation, we further evaluate its generalization ability and robustness to environmental variations.

Benchmark. COLOSSEUM [32] is a benchmark for evaluating generalization in robotic manipulation. It contains 20 different tasks, such as hockey and empty dishwasher. Each task has 12 environmental perturbations, including changes in the color, texture, and size of objects and backgrounds, as well as lighting. The objects that can be changed include the manipulation object (MO), the receiver object (RO), and the table. Results on COLOSSEUM have also shown strong correlation with real robot experiments. Therefore, COLOSSEUM is well-suited for comprehensively evaluating the generalization ability of pretrained manipulation approaches.

Simulation setup. For simulation, we follow the original COLOSSEUM setup. We use CoppeliaSim [36] to simulate all tasks. For training, we do not add any environmental perturbations and generate 100 demonstrations for each task. At test time, we generate 12 environmental perturbations for each task. For each environmental perturbation, we generate 25 demonstrations.
For each demonstration, we repeat the generation up to 20 times if it fails. In some cases, it is not possible to generate a plausible perturbation for some scenarios; for example, we find it hard to find the right perturbation parameters for the "empty dishwasher" task. Therefore, we only report results on the settings we are able to generate, and we evaluate the baselines on exactly the same settings to ensure a fair comparison.

![](images/beb14ebcf0014fe75a0e53f35e0ffcd7f68c91467cccfcf3e337d919d7b33f23.jpg)
Figure 4. Results on COLOSSEUM [32]. We report the average task completion success rate for 12 environmental perturbations and no perturbation. Manipulation policies that do explicit 3D reasoning (RVT [16]) work significantly better than 2D pretraining approaches (MVP [42] and R3M [30]). 3D-MVP is more robust than RVT on most perturbations. $\mathrm{MO} =$ manipulation object. $\mathrm{RO} =$ receiver object.

Metrics. We also report the task completion success rate on COLOSSEUM. Instead of averaging for each task, we report the average success rate for each environmental per
| Network Architecture | Pretraining Datasets | Masking Strategy | Success Rate |
| --- | --- | --- | --- |
| 3D-MVP | Objaverse (full) [10] | RGB | 67.6 |
| 3D-MVP | Objaverse (small) [10] | RGB | 65.3 |
| 3D-MVP | Objaverse (full) [10] | All | 64.4 |
| 3D-MVP | 3D-FRONT [14] | RGB | 63.6 |
| 3D-MVP | RLBench [24] | RGB | 67.5 |
| 3D-MVP | RLBench [24] | All | 64.7 |
| 3D-MVP | None | None | 62.9 |
| RVT [16] | None | None | 62.9 |
Table 2. Ablation studies on the RLBench benchmark. We analyze the contribution of our network architecture, pretraining datasets, and the masking strategy. For each variant, we report the average task completion success rate on RLBench [24].

turbation, as this highlights how robust each method is to different perturbations.

Baselines. We compare 3D-MVP with the state-of-the-art baselines reported on COLOSSEUM, which include RVT and two 2D pretraining approaches:

(1) MVP [33, 42]: a 2D pretraining approach using MAE reconstruction losses. It is pretrained on a collection of interaction datasets, such as Ego4D [17], EpicKitchen [8], and 100DOH [39]. The pretrained encoder is then finetuned and evaluated on COLOSSEUM.
(2) R3M [30]: a 2D pretraining approach using a combination of reconstruction and contrastive losses. It is pretrained on Ego4D [17]. The pretrained encoder is then finetuned and evaluated on COLOSSEUM.
(3) RVT [16]: trained on COLOSSEUM from scratch.

Results. We show results in Figure 4. First, our method significantly outperforms the existing 2D pretraining approaches (MVP [42] and R3M [30]), indicating that existing 2D pretraining methods are not ready for complicated robotic manipulation. Compared with RVT, which is trained from scratch, our method is more robust to most perturbations. It is especially robust to changes in the texture and size of the receiver object (RO), the size of the manipulation object (MO), light color, and table color. We believe this is because the pretraining stage enables our approach to see diverse 3D objects.

# 4.4. Ablation Studies

To analyze the impact of different design choices in 3D-MVP, we conduct ablation studies on the RLBench benchmark. Table 2 shows the average success rates of 3D-MVP with different network architectures, masking strategies, and pretraining datasets. We discuss the results below.

Does the performance boost come from pretraining or network architecture?
We first validate whether the performance boost comes from the architecture change in Sec. 3.2 or from the pretraining itself. We train our model from scratch, without pretraining, and find the performance is 62.9, similar to the original RVT [16]. This indicates the performance boost comes from our proposed 3D pretraining.

Should we pretrain on object- or room-level data? The choice of pretraining dataset is typically critical for self-supervised learning [9]. In our experiments, we mainly use Objaverse [10], which is an object-centric 3D dataset. Since we mainly evaluate on tabletop manipulation, we also try room-level 3D datasets such as 3D-FRONT [14]. We conduct the experiments on RLBench, and observe that pretraining on 3D-FRONT only boosts the performance mildly, from 62.9 to 63.6. In comparison, pretraining on Objaverse boosts the performance to 67.6. We believe this is because 3D-FRONT has limited diversity and scale of objects and rooms compared with Objaverse, and the diversity and scale of the pretraining data are among the most important factors for learning generalizable representations.

Does more pretraining data help? To validate this, we sampled 18K objects from Objaverse and call this subset Objaverse (small); the full Objaverse dataset we use has 200K objects. When we pretrain the encoder on Objaverse (small), we get a 65.3 mean success rate, which is worse than using Objaverse (full). This suggests that a larger pretraining dataset helps downstream performance. If we can scale the training data to larger 3D model collections such as the full Objaverse-XL [11], that might improve the performance further.

Can we pretrain on RLBench? Instead of relying on a separate large-scale pretraining dataset such as Objaverse, we investigate whether we can pretrain directly on RLBench point clouds, since the proposed 3D pretraining is self-supervised and only requires 3D point clouds. We extract the RLBench point clouds and build a pretraining dataset.
After the pretraining, we finetune the model on training demonstrations as usual. We find it achieves an average success rate of 67.5 on the RLBench test set, which is comparable to Objaverse. However, the pretrained model suffers from generalization issues. As shown in Figure 5, the encoder has overfitted to RLBench and does not work in other environments such as COLOSSEUM.

![](images/f48c7533af11a26bffc1cf7b516ec6a4a66b8620bb7431fa5ddd23fbff9b68b5.jpg)
Figure 5. Pretraining the MAE on RLBench scenes leads to poor generalization performance. (Left): MAE reconstruction results on unseen RLBench renderings. (Right): MAE reconstruction results on Objaverse renderings. While the reconstruction is reasonable on unseen RLBench renderings, the model overfits to RLBench and does not learn a general representation.

Masking strategy. We also observe that masking RGB channels performs better than masking all channels. We hypothesize that the network finds it very challenging to reconstruct all the channels and is unable to learn better visual representations. We view our findings as similar to He et al. [22], who find that the benefits of pretraining diminish if the pretraining task becomes too difficult, such as when the masking ratio becomes very high ( $>80\%$ ).

# 5. Conclusion

In this work, we introduced 3D-MVP, a novel approach for 3D multi-view pretraining using masked autoencoders to improve the performance and generalization of robotic manipulation policies. By pretraining the encoder of the Robotic View Transformer (RVT) on the large-scale Objaverse 3D object dataset using masked autoencoding, we demonstrate that the learned 3D representations lead to improved sample efficiency and robustness when finetuned on downstream manipulation tasks. We evaluated our approach on two benchmarks: RLBench for general manipulation performance, and COLOSSEUM for systematic evaluation of robustness to environmental perturbations.
On RLBench, 3D-MVP outperforms state-of-the-art manipulation baselines, achieving higher success rates. On COLOSSEUM, which tests 12 axes of variation such as object color, size, texture, lighting, and more, 3D-MVP maintains higher success rates than baselines as the magnitude of perturbations increases. These results suggest that scalable 3D-aware pretraining on diverse object datasets is a promising approach to developing general-purpose robotic manipulation systems.

Limitations and future work. While 3D-MVP achieves promising results on the RLBench and COLOSSEUM benchmarks, there are several limitations that we plan to address in future work. First, the current version of 3D-MVP uses a fixed set of camera viewpoints and does not explicitly reason about occlusions and spatial relationships between objects. In future work, we plan to explore more advanced 3D representations, such as neural radiance fields, that can handle arbitrary camera viewpoints and model the 3D structure of the scene more explicitly. Second, the current version of 3D-MVP assumes that the scene, robot, and objects follow quasi-static dynamics, and does not handle dynamic interactions between the robot and the environment. In the future, we plan to explore techniques for learning action-conditional representations that can predict the effect of the robot's actions on the 3D scene. Third, the current version of 3D-MVP requires a small amount of labeled data for each downstream task. In the future, we plan to explore how to enable 3D-MVP to generalize to novel manipulation tasks on which it has not been finetuned.

Social impacts. The development of more generalized and robust robotic manipulation systems enabled by 3D-MVP has the potential for significant societal impact. On the positive side, such systems could automate many repetitive or dangerous manual labor tasks, improving worker safety and productivity.
Assistive robots could also improve quality of life for the elderly and people with disabilities. However, the increased automation from these systems may also displace some jobs, disproportionately impacting workers with lower levels of education and technical skills. Additionally, the datasets used for pretraining these models, like Objaverse, may encode biases that could be reflected in the downstream robotic system's behavior.

Acknowledgment. We thank Ishika Singh and Jiafei Duan for their help with COLOSSEUM, and Jesse Zhang for his help with some baseline results.

# References

[1] Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Mike Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In European Conference on Computer Vision, pages 456-473. Springer, 2022.
[2] Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15619-15629, 2023.
[3] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. Multimae: Multi-modal multi-task masked autoencoders. In ECCV, 2022.
[4] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[5] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in neural information processing systems, 33:9912-9924, 2020.
[6] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.
Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650-9660, 2021.
[7] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020.
[8] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018.
[9] Sudeep Dasari, Mohan Kumar Srirama, Unnat Jain, and Abhinav Gupta. An unbiased look at datasets for visuo-motor pre-training. In Conference on Robot Learning, 2023.
[10] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13142-13153, 2023.
[11] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objaverse-xl: A universe of 10M+ 3d objects. Advances in Neural Information Processing Systems, 36, 2024.
[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[13] Christoph Feichtenhofer, Yanghao Li, Kaiming He, et al. Masked autoencoders as spatiotemporal learners. NeurIPS, 2022.
[14] Huan Fu, Bowen Cai, Lin Gao, Ling-Xiao Zhang, Jiaming Wang, Cao Li, Qixun Zeng, Chengyue Sun, Rongfei Jia, Binqiang Zhao, et al.
3d-front: 3d furnished rooms with layouts and semantics. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10933-10942, 2021. +[15] Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, and Katerina Fragkiadaki. Act3d: Infinite resolution action detection transformer for robotic manipulation. arXiv preprint arXiv:2306.17817, 2023. +[16] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. In Conference on Robot Learning, pages 694–710. PMLR, 2023. +[17] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. +[18] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020. +[19] Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia Pinel, Makarand Tapaswi, Ivan Laptev, and Cordelia Schmid. Instruction-driven history-aware policies for robotic manipulations. In Conference on Robot Learning, pages 175-187. PMLR, 2023. +[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. +[21] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738, 2020. 
+[22] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16000-16009, 2022. +[23] Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. Perceiver io: A general architecture for structured inputs & outputs. arXiv preprint arXiv:2107.14795, 2021. +[24] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020. +[25] Stephen James, Kentaro Wada, Tristan Laidlow, and Andrew J Davison. Coarse-to-fine q-attention: Efficient learn + +ing for visual robotic manipulation via discretisation. In CVPR, 2022. +[26] Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, and Chelsea Finn. Bc-z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991-1002. PMLR, 2022. +[27] Hao Liu, Lisa Lee, Kimin Lee, and Pieter Abbeel. Instruction-following agents with jointly pre-trained vision-language models. 2022. +[28] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022. +[29] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Tingfan Wu, Jay Vakil, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? Advances in Neural Information Processing Systems, 36, 2024. +[30] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3m: A universal visual representation for robot manipulation. 
arXiv preprint arXiv:2203.12601, 2022. +[31] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European conference on computer vision, pages 69-84. Springer, 2016. +[32] Wilbert Pumacay, Ishika Singh, Jiafei Duan, Ranjay Krishna, Jesse Thomason, and Dieter Fox. The colosseum: A benchmark for evaluating generalization for robotic manipulation. arXiv preprint arXiv:2402.08191, 2024. +[33] Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, and Trevor Darrell. Real-world robot learning with masked visual pre-training. In Conference on Robot Learning, pages 416-426. PMLR, 2023. +[34] Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al. Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. arXiv preprint arXiv:2109.08238, 2021. +[35] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022. +[36] Eric Rohmer, Surya PN Singh, and Marc Freese. V-rep: A versatile and scalable robot simulation framework. In 2013 IEEE/RSJ international conference on intelligent robots and systems, pages 1321-1326. IEEE, 2013. +[37] Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, and Pieter Abbeel. Multi-view masked world models for visual robotic manipulation. In International Conference on Machine Learning, pages 30613-30632. PMLR, 2023. +[38] Nur Muhammad Shafiullah, Zichen Cui, Ariuntuya Arty Altanzaya, and Lerrel Pinto. Behavior transformers: Cloning + +$k$ modes with one stone. Advances in neural information processing systems, 35:22955-22968, 2022. +[39] Dandan Shan, Jiaqi Geng, Michelle Shu, and David Fouhey. 
Understanding human hands in contact at internet scale. In CVPR, 2020. +[40] Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. In Conference on Robot Learning, pages 785–799. PMLR, 2023. +[41] Anthony Simeonov, Ankit Goyal, Lucas Manuelli, Lin Yen-Chen, Alina Sarmiento, Alberto Rodriguez, Pulkit Agrawal, and Dieter Fox. Shelving, stacking, hanging: Relational pose diffusion for multi-modal rearrangement. arXiv preprint arXiv:2307.04751, 2023. +[42] Tete Xiao, Ilija Radosavovic, Trevor Darrell, and Jitendra Malik. Masked visual pre-training for motor control. arXiv preprint arXiv:2203.06173, 2022. +[43] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962, 2019. +[44] Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, and Xiaolong Wang. Gnfactor: Multi-task real robot learning with generalizable neural feature fields. In Conference on Robot Learning, pages 284-301. PMLR, 2023. +[45] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer. arXiv preprint arXiv:2111.07832, 2021. 
\ No newline at end of file diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/images.zip b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..25a691fdd0975264f6b8edf31194e828dfb48446 --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98fcda59340a15d36e5b2054a0180396f56b6eb8ae19da3ded2a6d8484420b92 +size 488654 diff --git a/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/layout.json b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a8b0b6748b30ca94141e327aca99cdf415d89ebb --- /dev/null +++ b/CVPR/2025/3D-MVP_ 3D Multiview Pretraining for Manipulation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6150e084be4376385d58bafbb5ab0b154cf06fe554a144ef2d99f478c27717fb +size 336784 diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_content_list.json b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..24e0025ab85420eefa545995d1f2044e967dabe6 --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c1a0b12a49ae1cba1f45f8fbc55d89501e1f4082cc76a9f4ab3a593e8914030 +size 74457 diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_model.json b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..d3963694d3ee2ce5a3eadc0c4709ee949251f077 --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6a258fb1ae770bba94da16ed6c7b3b7f2f234e64bea53af6a189bd9725a64e5 +size 90827 diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_origin.pdf b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5aadcdb439a6af7ed8e906a5cae64f26c45b5e4b --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/d0002279-be38-4de0-a167-db2dd5083a0c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b479cd0ca716383e9edf6e0d10bfe25b131d2865f835434c7a5b5a0302e6d373 +size 1458478 diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/full.md b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..34c7595ba22c484d50ae5e30cbd30681ffea4d74 --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/full.md @@ -0,0 +1,269 @@ +# 3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning + +Yuncong Yang $^{1*}$ , Han Yang $^{2*}$ , Jiachen Zhou $^{3}$ , Peihao Chen $^{1}$ , Hongxin Zhang $^{1}$ , Yilun Du $^{4}$ , Chuang Gan $^{1,5}$ + +1UMass Amherst, 2CUHK, 3Columbia University + +$^{4}$ MIT, $^{5}$ MIT-IBM Watson AI Lab + +yuncongyang@umass.edu + +https://umass-embodied-agi.github.io/3D-Mem/ + +![](images/ce176a3123bac6f9ceee58b2ef420b6415e409d21e4b9c373ab32c8c6f5e4c36.jpg) +Figure 1. 
With 3D-Mem, explored regions are represented by a set of Memory Snapshots capturing clusters of co-visible objects, i.e., the objects observable in a single image observation, along with their spatial relationships and background context, as shown in the bottom-left example. Unexplored regions are represented by navigable frontiers along with image observations, referred to as Frontier Snapshots. + +# Abstract + +Constructing compact and informative 3D scene representations is essential for effective embodied exploration and reasoning, especially in complex environments over extended periods. Existing representations, such as object-centric 3D scene graphs, oversimplify spatial relationships by modeling scenes as isolated objects with restrictive textual relationships, making it difficult to address queries requiring nuanced spatial understanding. Moreover, these representations lack natural mechanisms for active explo + +ration and memory management, hindering their application to lifelong autonomy. In this work, we propose 3D-Mem, a novel 3D scene memory framework for embodied agents. 3D-Mem employs informative multi-view images, termed Memory Snapshots, to capture rich visual information of explored regions. It further integrates frontier-based exploration by introducing Frontier Snapshots—glimpses of unexplored areas—enabling agents to make decisions by considering both known and potential new information. To support lifelong memory in active exploration settings, we present an incremental construction pipeline for 3D-Mem, as well as a memory retrieval technique for memory man + +agement. Experimental results on three benchmarks demonstrate that 3D-Mem significantly enhances agents' exploration and reasoning capabilities in 3D environments, highlighting its potential for advancing applications in embodied AI. + +# 1. 
Introduction + +Embodied agents operating in complex 3D environments require robust and lifelong scene memory to store rich visual information about the environment, supporting effective exploration and reasoning over extended periods. Currently, there are two main streams of scene representations. The first stream focuses on object-centric representations, particularly 3D scene graphs [8, 36], which represent scenes using nodes for objects and edges for inter-object relationships. Another stream directly uses dense 3D representations, such as point clouds [4, 5, 14, 42] or neural fields [15, 23, 35], filling 3D space with dense visual features for querying. + +However, these scene representations exhibit significant limitations when employed as scene memory for embodied agents. Object-centric representations like 3D scene graphs tend to oversimplify 3D scenes by modeling them as individual objects and quantizing inter-object relationships into restrictive textual descriptions. This approach results in the loss of critical spatial information, making it difficult to answer questions that require an understanding of spatial relationships. As illustrated in the bottom-left example from Figure 1, when attempting to answer the question "Is there enough room to place a coffee table in front of the armchair?" using a 3D scene graph, the only spatial information we can access consists of the numerical 3D bounding boxes of the objects and the textual descriptions of the spatial relationships among them. It is challenging to measure unoccupied space or determine oriented spatial relationships such as "in front of" based solely on this information. On the other hand, dense 3D representations like point clouds or neural fields [10, 12, 38, 40] are computationally expensive and lack scalability as the scene grows during the agent's exploration.
Meanwhile, unlike vision-language models (VLMs) and large language models (LLMs) that effectively reason over images and text, current foundation models lack sufficient reasoning capabilities for dense 3D modalities due to limited training data. Additionally, neither type of existing scene representation can model unexplored regions, thus failing to support agents in active exploration, which restrains agents from leveraging their scene memory to expand their knowledge and achieve their goals for embodied tasks. + +To address these challenges, we introduce 3D-Mem, a more capable 3D scene memory for embodied agents. 3D-Mem employs a set of informative multi-view images, termed Memory Snapshots, to encompass the explored regions of a 3D scene. Each memory snapshot captures all objects visible in that snapshot along with their surroundings. Our intuition is that a snapshot image alone is sufficient to capture rich visual information of a region. For example, revisiting the challenge illustrated in Figure 1, a memory snapshot clearly shows that there is sufficient space "in front of" the armchair to place a coffee table. With the recent advancements in VLMs, these models can extract such spatial information from images [2, 22], much like humans do through intuitive observations.
By maintaining these frontier snapshots, the agent can choose either to complete tasks based on its accumulated knowledge or navigate to unexplored regions for new information, as illustrated in the embodied question-answering tasks in Figure 1. Meanwhile, representing both explored and unexplored regions through multi-view images allows us to better leverage the decision-making capability of VLMs, enhancing the agent's ability to reason and plan effectively. + +Additionally, as agents in lifelong settings continuously explore and expand their knowledge, the 3D scene memory system must operate effectively as the memory grows. To achieve this, 3D-Mem supports real-time incremental memory aggregation. To efficiently manage the ever-expanding scene memory, we introduce Prefiltering as an effective memory retrieval mechanism to select relevant memory during decision-making. This framework enables the agent to perform continuous exploration and navigation over extended periods without excessive computational burdens. + +Extensive experiments and superior performance on three benchmarks demonstrate that 3D-Mem significantly enhances agents' capabilities in reasoning and lifelong exploration within 3D environments. + +Our contributions can be summarized as follows: + +- We introduce 3D-Mem, a compact scene memory that constructs informative multi-view snapshot images to capture diverse and robust information among co-visible objects and their surroundings in 3D scenes. +- By introducing frontier snapshots to include unexplored regions, 3D-Mem enables agents to actively explore and acquire new information. +- We incorporate 3D-Mem with incremental memory aggregation and prefiltering strategies that enable agents to expand their knowledge and adapt over extended periods, + +supporting lifelong learning in 3D environments. + +# 2. Related Work + +3D Scene Representations. 
Recent works [24, 32, 39, 42] have focused on establishing dense 3D representations by grounding 2D representations captured by Vision-Language Foundation Models (e.g., CLIP [26], BLIP [19], SEEM [44]) into 3D scenes, which showcase impressive results on tasks such as open-vocabulary object segmentation and language-guided object grounding [9]. However, such representations are limited due to high resource consumption and the inability to support dynamic updates. 3D scene graphs address these limitations by formulating the scene as a compact graph, where nodes represent objects and edges encode inter-object relationships as textual descriptions [1, 7, 8, 18, 36], enabling real-time establishment and dynamic updates for hierarchical scene representations [13, 29, 37]. While such object-centric representations have demonstrated effectiveness in various tasks, they remain constrained by oversimplifying inter-object relationships into restrictive text descriptions. To tackle this challenge, our work leverages a set of informative memory snapshots to visually capture spatial and semantic relationships among objects, offering a more sophisticated understanding of the scene. + +VLM for Exploration and Reasoning. Vision-Language Models (VLMs) have shown promising results in solving embodied exploration and reasoning tasks by leveraging commonsense reasoning and internet-scale knowledge. Existing exploration approaches can be divided into two categories. The former directly employs consecutive observations together with instructions as input, requiring the VLM to predict the next-step action [43], while the latter grounds the exploration target in the 3D scene through visual prompting, establishing a semantic map to guide the exploration process [21, 28, 33, 41]. However, both approaches are constrained by their memory representations. For the former, vanilla past observations can only serve as short-term memory.
For the latter, their semantic maps are target-specific and cannot be generalized to future tasks. To address these limitations, our work introduces the first lifelong and target-agnostic scene memory that can be seamlessly integrated with VLMs for further reasoning, stepping closer to the ultimate goal of lifelong autonomy. + +Topological Mapping. Besides the 3D scene representations mentioned above, prior 2D topological mapping methods like TSGM [17] and RoboHop [6] provide valuable context. They construct navigation graphs from images and objects but do not fully capture all objects or their interrelationships. In contrast, 3D-Mem clusters co-visible objects to represent inter-object spatial relationships, enabling tasks such as embodied Q&A that extend beyond route planning. Moreover, its efficient memory retrieval mechanism scales to lifelong exploration, leveraging VLMs for advanced reasoning and decision-making. + +# 3. Approach + +3D-Mem contains two types of snapshots: Memory Snapshots and Frontier Snapshots. A memory snapshot represents a cluster of objects and their surroundings in the explored regions, while a frontier snapshot represents an unexplored region along with its exploration direction. During exploration, we construct the set of memory snapshots from a stream of RGB-D images with poses and maintain frontier snapshots using frontier algorithms and occupancy maps. Given an objective and the set of memory and frontier snapshots, a VLM agent iteratively selects the most promising frontier snapshot and moves toward it, continually updating both sets of snapshots while making decisions, until the objective is achieved based on the current memory snapshots. + +First, in Section 3.1, we introduce how the set of memory snapshots is constructed from a series of RGB-D observations with poses. Next, in Section 3.2, we extend this algorithm to incremental construction in active exploration settings.
By integrating frontier snapshots, we enable the agent to actively conduct frontier-based exploration. We also design methods to efficiently retrieve memory as it scales up. Finally, in Section 3.3, we detail how the agent utilizes the constructed memory and frontier snapshots for exploration and reasoning. + +Algorithm 1 Co-Visibility Clustering for Memory Snapshots +1: Initial clusters $\mathcal{C} = \{\mathcal{O}\}$ +2: Memory snapshot set $S = \varnothing$ +3: All frame candidates $\mathcal{I}$ +4: Score function $\mathcal{F}$ +5: while $\mathcal{C}$ is not empty do +6: $\mathcal{O}^* = \arg \max_{\mathcal{O}\in \mathcal{C}}|\mathcal{O}|$ +7: $\mathcal{I}^* = \{I|I\in \mathcal{I},\mathcal{O}^*\subseteq \mathcal{O}_I\}$ +8: if $\mathcal{I}^*$ is not empty then +9: $I^{*} = \arg \max_{I\in \mathcal{I}^{*}}\mathcal{F}(I)$ +10: $S^{*} = \langle \mathcal{O}^{*},I^{*}\rangle$ +11: $S = S\cup \{S^{*}\}$ +12: else +13: Use K-Means to split $\mathcal{O}^*$ into two clusters $\mathcal{O}^* = \mathcal{O}_1^*\cup \mathcal{O}_2^*$ based on 2D horizontal positions $(x,y)$ +14: $\mathcal{C} = \mathcal{C}\cup \{\mathcal{O}_1^*,\mathcal{O}_2^*\}$ +15: end if +16: $\mathcal{C} = \mathcal{C}\setminus \{\mathcal{O}^*\}$ +17: end while +18: +19: while exists $S_{j},S_{k}\in S$ such that $I_{S_j} = I_{S_k}$ do +20: $S = S\setminus \{S_j,S_k\}$ +21: $S_{l} = \langle \mathcal{O}_{S_{j}}\cup \mathcal{O}_{S_{k}},I_{S_{j}}\rangle$ +22: $S = S\cup \{S_l\}$ +23: end while +24: return $S$ + +# 3.1. 3D-Mem Construction + +Inspired by the idea that an image itself inherently contains rich and robust information to represent a small area of a scene, we propose a novel approach that utilizes a set of multi-view snapshot images to cover all informative areas of a scene. Instead of the object-centric representation proposed by ConceptGraph [8], in which only object-level visual features are stored and managed, we propose using one snapshot image to represent a cluster of co-visible objects within that image, namely a Memory Snapshot.
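The greedy loop of Algorithm 1 can be sketched in Python; the data structures and helper names below are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Tuple

@dataclass(frozen=True)
class Frame:
    """A frame candidate: an image id, its co-visible object ids, and the
    summed detection confidence (used only as a tie-breaker)."""
    image_id: int
    objects: FrozenSet[int]
    confidence: float = 0.0

def covisibility_clustering(
    all_objects: FrozenSet[int],
    frames: List[Frame],
    split_fn: Callable[[FrozenSet[int]], List[FrozenSet[int]]],
) -> List[Tuple[FrozenSet[int], Frame]]:
    clusters = [all_objects]   # C, initialized to the full object set {O}
    snapshots = []             # S, pairs <O*, I*>
    while clusters:
        cluster = max(clusters, key=len)       # largest unsettled cluster
        clusters.remove(cluster)
        covering = [f for f in frames if cluster <= f.objects]
        if covering:
            # F(I) = |O_I|, ties broken by summed detection confidence
            best = max(covering, key=lambda f: (len(f.objects), f.confidence))
            snapshots.append((cluster, best))
        else:
            clusters.extend(split_fn(cluster))  # e.g. K-Means with K = 2
    # merge snapshots that share the same frame into one memory snapshot
    merged = {}
    for objs, frame in snapshots:
        prev, _ = merged.get(frame.image_id, (frozenset(), frame))
        merged[frame.image_id] = (prev | objs, frame)
    return list(merged.values())
```

A `split_fn` that runs 2-means over the objects' horizontal $(x, y)$ positions corresponds to line 13 of the pseudocode; termination follows because every object appears in at least one frame, so singleton clusters are always coverable.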
This approach not only encapsulates foreground object-level details but also captures contextual room-level cues embedded in the background, providing a more holistic representation of the scene. + +# 3.1.1. Memory Snapshot Formulation + +Specifically, given a set of $N$ image observations $\mathcal{I}^{obs} = \{I_1^{obs}, I_2^{obs}, \dots, I_N^{obs}\}$, where each $I_i^{obs}$ consists of an RGB-D image and its pose, we aim to construct a set of memory snapshots, a minimal subset of $\mathcal{I}^{obs}$, to cover all salient observed objects. We first follow ConceptGraph [8] to perform a series of object segmentation, spatial transformation, matching, and merging steps, resulting in an object set that contains all detected objects from the observations, $\mathcal{O} = \{o_1, o_2, \dots, o_M\}$, where each object is characterized by an object category, a detection confidence, and a 3D location. Meanwhile, we obtain a set of frame candidates $\mathcal{I} = \{I_1, I_2, \dots, I_N\}$, where each $I_i = \langle I_i^{obs}, \mathcal{O}_{I_i} \rangle$ consists of the image observation $I_i^{obs}$ together with the set of all detected objects $\mathcal{O}_{I_i}$, i.e., all co-visible objects in $I_i^{obs}$. + +We define $S$ as a set of memory snapshots $\{S_1, S_2, \dots, S_K\}$ of size $K \leq N$, where each memory snapshot $S_k = \langle \mathcal{O}_{S_k}, I_{S_k} \rangle$ is characterized by a frame candidate $I_{S_k} \in \mathcal{I}$ and a cluster of objects $\mathcal{O}_{S_k}$, a subset of all detected objects $\mathcal{O}_{I_{S_k}}$ in the image $I_{S_k}^{obs}$. Therefore, the image $I_{S_k}^{obs}$ serves as a shared visual feature among $\mathcal{O}_{S_k}$.
Since $S$ needs to cover the whole object set $\mathcal{O}$, and each object $o_j$ needs to be uniquely mapped to one memory snapshot $S_k$ (although it may still be visible in other memory snapshots), we require $\mathcal{O}_{S_1} \cup \mathcal{O}_{S_2} \cup \dots \cup \mathcal{O}_{S_K} = \mathcal{O}$ and $\mathcal{O}_{S_i} \cap \mathcal{O}_{S_j} = \emptyset$ for all $S_i \neq S_j \in S$. + +# 3.1.2. Co-Visibility Clustering + +To acquire the desired set of memory snapshots, we design Co-Visibility Clustering, based on classical hierarchical clustering [30], to split $\mathcal{O}$ into clusters, each of which is a subset of the detected object set $\mathcal{O}_{I_i}$ of a certain frame candidate $I_{i}$. As detailed in the pseudocode in Algorithm 1, we define a cluster set $\mathcal{C}$ composed of all unsettled object clusters that have not yet been matched with observations, initialized to contain the full object set $\{\mathcal{O}\}$, and the memory snapshot set $\mathcal{S}$, initialized to $\varnothing$. Each time, we pick the largest unsettled cluster $\mathcal{O}^*$ from $\mathcal{C}$ and search through all frame candidates for the capable candidates $\mathcal{I}^{*}$, those whose detected object lists contain $\mathcal{O}^*$ as a subset. When such candidates exist, we rank them with a score function $\mathcal{F}$ and pick the top-ranked frame candidate $I^{*}$ to create a new memory snapshot $S^{*} = \langle \mathcal{O}^{*}, I^{*} \rangle$ and add it to $S$. In practice, we choose $\mathcal{F}(I_i) = |\mathcal{O}_{I_i}|$ to select the frame candidate that contains the most objects, and when more than one frame candidate ties for the largest object count, we choose the one with the highest sum of detection confidences.
If there is no feasible frame candidate, we then use K-Means to further divide $\mathcal{O}^*$ into two subclusters $\mathcal{O}_1^*$ and $\mathcal{O}_2^*$ based on the 2D horizontal positions of the objects, and add them to $\mathcal{C}$. We repeat the above process until no clusters remain in $\mathcal{C}$. Note that the process is guaranteed to terminate, since every object has been captured in at least one observation. Ultimately, after all objects have been assigned to corresponding snapshots, memory snapshots sharing the same frame candidate are merged to achieve the final compact memory representation $S$. + +In each memory snapshot, not only is the visual information of each object stored, but the spatial relationships between objects and the room-level information are also provided by visual cues in the background. With the increasing perception abilities of VLMs, such multi-view image representations can provide richer and more robust visual information for VLMs to complete difficult tasks. + +# 3.2. 3D-Mem with Frontier-based Exploration + +We adapt the memory construction algorithm introduced above to the frontier-based exploration pipeline [28], where we introduce the frontier snapshot as an extension of the frontier (Section 3.2.1). In a frontier-based exploration episode, an agent is initialized in an unknown scene and explores the environment step by step. At each step, the agent receives a series of observations, including RGB, depth, and pose, which are used to update the frontier and memory snapshots (Section 3.2.2). As the 3D scene memory grows during exploration, decision-making becomes computationally heavy. Therefore, we introduce Prefiltering as an efficient memory retrieval mechanism (Section 3.2.3). + +# 3.2.1.
Introduction of Frontier Snapshot + +In a common frontier-based exploration framework, an occupancy map is kept to record the explored regions, defined as the nearby areas along the agent's trajectory, and the unexplored regions, defined as navigable but yet-to-be-explored areas. A frontier is then defined to represent such an unexplored region that could be further explored. In this work, we extend this concept by using a snapshot to represent a frontier, similar to memory snapshots. We define a Frontier Snapshot $F = \langle r,p,I^{obs}\rangle$, consisting of the unexplored region $r$ it represents, a navigable location $p$, and an image observation $I^{obs}$ from the agent's position toward that unexplored region. Therefore, the frontier and memory snapshots, which are all raw images, can be directly used as visual input for the VLMs. More details about frontier-based exploration are provided in Appendix 9. + +![](images/97a994a3432b6392b7542107ed82e636835f3aed354d533a2004fc998dd3c6e6.jpg) +Figure 2. The memory aggregation process of 3D-Mem. At each step $t$, the object set $\mathcal{O}_t$ is first updated using the object-wise update pipeline introduced in Section 3.1. The newly detected objects and the updated existing objects are then jointly clustered into new memory snapshots using co-visibility clustering (Algorithm 1), which are used to update the memory snapshot set $S_t$. + +# 3.2.2. Incremental Construction of 3D-Mem + +Throughout the exploration process, the scene memory is dynamically and incrementally constructed. At each exploration step, the agent observes its surroundings and updates the scene memory and frontiers. At step $t$, we denote the current object set as $\mathcal{O}_t$, the frontier set as $\mathcal{F}_t$, the memory snapshot set as $S_t$, and the frame candidate set as $\mathcal{I}_t$, all of which are initialized as $\varnothing$ at the beginning of the episode. + +Object Update.
As illustrated in Figure 2, at each time step $t$, the agent first captures $N$ egocentric views $\mathcal{I}^{obs} = \{I_1^{obs}, I_2^{obs}, \dots, I_N^{obs}\}$, which are used to extract the object set $\mathcal{O}$ and the frame candidate set $\mathcal{I}$ as in Section 3.1. Specifically, a threshold "max_dist" is set to ensure that only objects within a certain distance from the agent are added to the scene graph, as a memory snapshot should only represent objects from a local area. It is important to note that the object set $\mathcal{O}$ detected in these egocentric views may contain both newly identified objects and those already present in the previous set $\mathcal{O}_{t-1}$. Subsequently, the full object set and frame candidate set are updated as $\mathcal{O}_t = \mathcal{O}_{t-1} \cup \mathcal{O}$ and $\mathcal{I}_t = \mathcal{I}_{t-1} \cup \mathcal{I}$, respectively. + +Memory Snapshot Update. We implement the co-visibility clustering from Section 3.1 incrementally. At each time step $t$, instead of performing clustering on the entire object set $\mathcal{O}_t$, we focus on clustering objects related to $\mathcal{O}$, the objects detected from the egocentric views at this step. In $\mathcal{O}$, some objects may have already been assigned to specific memory snapshots in $S_{t-1}$. We refer to those memory snapshots as $S_{prev} = \{S | S \in S_{t-1}, \mathcal{O}_S \cap \mathcal{O} \neq \emptyset\}$. All objects from $S_{prev}$, along with the newly detected objects in $\mathcal{O}$, are used as input for clustering, denoted as $\mathcal{O}_{input}$. Then, the memory snapshot set is updated as $\mathcal{S}_t = (\mathcal{S}_{t-1} \setminus \mathcal{S}_{prev}) \cup \text{Cluster}(\mathcal{O}_{input}, \mathcal{I}_t)$. + +Frontier Snapshot Update. At each step $t$, an existing frontier from $\mathcal{F}_{t-1}$ may be modified if the unexplored region it represents has been updated, or it may be removed if the region has been fully explored.
Additionally, new frontiers may be introduced. For each newly added or modified frontier, a new snapshot is taken to update its image representation. Then, $\mathcal{F}_{t-1}$ is updated to $\mathcal{F}_t$. More implementation details are in Appendix 11. + +# 3.2.3. Memory Retrieval with Prefiltering + +During exploration, as we incrementally construct memory snapshots, the large volume of memory can make the agent's decision-making inefficient and introduce noise unrelated to the objectives. Therefore, we design a memory retrieval mechanism called Prefiltering. Since most memory snapshots are irrelevant to a given instruction and processing them consumes substantial computational resources without meaningful benefit, we present the VLM agent with the question along with all object categories in $\mathcal{O}_t$. The VLM agent then outputs the relevant object categories ranked by importance, retaining only the top $K$ categories, where $K$ is a hyperparameter. Memory snapshots that do not contain any of these $K$ selected categories are filtered out. Figure 3 illustrates Prefiltering in an embodied question-answering task, showing how the original set of memory snapshots is reduced based on relevance. This technique significantly reduces resource consumption, allowing us to include images directly within the prompt and enhancing the agent's efficiency. + +![](images/997cff518efbf11209daeda40a5cef34bbf6106e5491bf697b7b6f41c7d4702d.jpg) +Figure 3. 3D-Mem as visual input for the VLM in embodied question answering. The VLM first retrieves relevant memory snapshots with prefiltering, then utilizes the frontier snapshots and memory snapshots to perceive the scene and reason about the embodied questions. + +# 3.3. Exploration and Reasoning with 3D-Mem + +With the updated frontier and memory snapshots, we can directly leverage the perception and reasoning capabilities
of large VLMs, as the image nature of frontier and memory snapshots makes them easily interpretable. This is a core advantage of 3D-Mem over other 3D representations such as point clouds, as recent VLMs, trained on internet-scale data, often have stronger visual capabilities than specially fine-tuned models such as 3D-LLM [10]. A pseudo-code overview of the whole exploration and reasoning framework is provided in Figure 12 in the Appendix. + +Versatility in Task Applications. 3D-Mem is versatile and can be applied to various tasks. In the case of embodied question answering (illustrated in Figure 3), the VLM agent decides whether to explore a frontier or answer the question based on the current memory snapshots. If the agent chooses a frontier, it provides a rationale for exploring that direction; otherwise, it directly answers the question, concluding the exploration episode. In object navigation tasks, where the agent aims to find a specific object, we augment each memory snapshot with image crops of the objects it contains. The agent then selects an object from one of the memory snapshots to navigate toward. Detailed experiments on these two tasks are presented in Sections 4.1 and 4.3, with the complete prompts provided in Appendix 15. + +Navigation Strategy. If the VLM agent chooses a frontier $F_{i}$ to explore, the agent directly navigates toward the frontier's location $p_{i}$ from its current pose. The agent stops either when it has moved a maximum distance of $1.0m$ from its original pose or when it arrives within $0.5m$ of $p_{i}$. We assume a collision-free planner is available to guide the agent along the shortest path within the explored region between the two locations. Experimentally, we adopt the pathfinder provided by Habitat-sim to compute such paths and move the agent incrementally at each time step. This strategy is similar to that of Explore-EQA [28], which also uses VLMs to guide exploration. + +# 4.
Experiments + +As a form of 3D scene memory, 3D-Mem stores rich and compact visual information, serving as an effective framework for a lifelong agent to explore and reason about a 3D scene. To comprehensively evaluate 3D-Mem, we begin with Active Embodied Question Answering (Section 4.1), where the scene is initially unknown. This assessment tests 3D-Mem's overall performance in scenarios that require both embodied exploration and reasoning. Next, we examine 3D-Mem's efficiency in representing 3D scene information through Episodic Memory Embodied Question Answering (Section 4.2). In this evaluation, the scene scan of the ground-truth region is provided and no exploration is needed for question answering. Following this, we evaluate 3D-Mem on GOAT-Bench (Section 4.3), a multi-modal lifelong navigation benchmark, to demonstrate 3D-Mem's effectiveness as a lifelong memory system. Finally, we conduct a series of ablation studies to determine key components and hyperparameter choices in Appendix 13. + +Since 3D-Mem is a versatile scene memory, we adapt it to different benchmarks in slightly different ways, as detailed in Appendix 10. We also include more discussion and qualitative analysis in Appendix 7. + +# 4.1. Active Embodied Question Answering + +On the A-EQA [22] benchmark (Table 1), we evaluate 3D-Mem's ability to dynamically construct scene representations for exploration and reasoning given complex questions. More implementation details are in Appendix 10.1.
| Method | LLM-Match ↑ | LLM-Match SPL ↑ |
| --- | --- | --- |
| *Blind LLMs* | | |
| GPT-4* | 35.5 | N/A |
| GPT-4o† | 35.9 | N/A |
| *Question-Agnostic Exploration* | | |
| CG Scene-Graph Captions* | 34.4 | 6.5 |
| SVM Scene-Graph Captions* | 34.2 | 6.4 |
| LLaVA-1.5 Frame Captions* | 38.1 | 7.0 |
| Multi-Frame*† | 41.8 | 7.5 |
| *VLM Exploration* | | |
| Explore-EQA† | 46.9 | 23.4 |
| CG w/ Frontier Snapshots† | 47.2 | 33.3 |
| 3D-Mem (Ours)† | 52.6 | 42.0 |
| Human Agent* | 85.1 | N/A |
+ +Table 1. Experiments on A-EQA. "CG" denotes ConceptGraphs. Methods with * are reported from OpenEQA [22], and with † are evaluated on the 184-question subset. + +Benchmark. A-EQA consists of 557 questions drawn from 63 scenes in HM3D [27]. Due to resource limitations, our evaluation focuses on a subset of 184 questions, as mentioned in the OpenEQA benchmark [22]. We also include the evaluation of 3D-Mem on the full set in Appendix 6. + +The open-vocabulary and open-ended questions in A-EQA encompass diverse daily tasks such as object recognition, functional reasoning, and spatial understanding. For each question, an agent is initialized at a specific location and is required to explore the scene to gather the necessary information for answering the question. + +Metrics. Following OpenEQA, we employ LLM-Match and LLM-Match SPL for quantitative evaluation. We first rate each predicted answer from 1 to 5 using GPT-4 to compare ground-truth and predicted answers. Given the predicted answers, LLM-Match, which measures the answer accuracy, is calculated as the average score for each question, mapped to a 0-100 scale. LLM-Match SPL, which measures the exploration efficiency, is then calculated by weighting the LLM-Match score by exploration path length. For the questions where the VLM Exploration methods failed to provide an answer, we ask GPT-4o to directly guess an answer without visual inputs, setting the SPL to 0.0. + +Baselines. For baselines that use VLM for exploration, we mainly compare 3D-Mem with Explore-EQA [28] and ConceptGraph [8] w/ frontier snapshots. We adapt Explore-EQA for open-ended questions by halting exploration and answering the question with the egocentric view once the VLM's confidence in the question exceeds a predetermined threshold. We integrate ConceptGraph into our exploration pipeline by replacing memory snapshots with object image crops, while maintaining other settings the same, including prefiltering and how answers are obtained. 
We adopt GPT-4o as the VLM, accessed directly through the OpenAI API. Besides the active-exploration methods above, we also include other simple baselines implemented by OpenEQA. The question-agnostic exploration baselines use frontier exploration, independent of the question, to obtain an episodic memory of image frames. These frames are subsequently used to prompt VLMs directly (Multi-Frame), to generate frame captions as prompts for LLMs (LLaVA-1.5 Frame Captions), or to construct textual scene-graph representations using ConceptGraphs (CG) and Sparse Voxel Maps (SVM) to prompt LLMs. Additionally, blind LLM experiments are included, where the LLM is tasked with answering questions without any visual information. Note that the Multi-Frame baseline uses 75 frames for each question and is evaluated on the 184-question subset; the other baselines from OpenEQA are evaluated on the full 557-question set.

Analysis. As shown in Table 1, 3D-Mem significantly outperforms previous methods in both accuracy and efficiency. The major takeaways are as follows: 1) The superior performance in open-ended embodied question answering highlights the advantages of using snapshots as a memory format, which store richer and more flexible visual information for the VLM to address complex questions. In contrast, object-based memory systems, which use either image crops or language captions to represent objects and spatial relationships, are less robust when handling diverse questions, as they rely on rigid object-level features. 2) The multi-frame VLM implemented by OpenEQA also achieves inferior results, despite using a similar multi-view image representation, because this baseline selects episodic frames linearly and includes excessive repetitive or irrelevant information for the questions. In contrast, 3D-Mem creates an average of 10.94 memory snapshots from 39.76 egocentric observations per episode; after prefiltering with $K = 10$, only 3.26 memory snapshots are retained as direct input for the VLM. These results demonstrate the compactness and efficiency of 3D-Mem as scene memory.

| Methods | Avg. Frames | LLM-Match |
| --- | --- | --- |
| Blind LLM* | 0 | 35.5 |
| CG Captions* | 0 | 34.4 |
| SVM Captions* | 0 | 34.2 |
| Frame Captions* | 0 | 38.1 |
| Multi-Frame | 3.0 | 48.1 |
| 3D-Mem (Ours) | 3.1 | 57.2 |
| Human | Full | 86.8 |

Table 2. Experiments on EM-EQA: frame efficiency and performance. Methods denoted by * use GPT-4 to generate answers, as reported in OpenEQA.

![](images/58680ddd9ff75891978269ba828c67e83f385b37bde1bf5fce356f16898dc92c.jpg)
Figure 4. Frame efficiency of 3D-Mem on EM-EQA: LLM-Match score vs. average number of frames, for 3D-Mem and Multi-Frame both using GPT-4o.

# 4.2. Episodic-Memory Embodied Q&A

We evaluate the representation capability of 3D-Mem on EM-EQA [22] to further demonstrate 1) the effectiveness of image memory compared to captions, and 2) the compact and informative nature of our method. More implementation details are in Appendix 10.2.

Benchmark. EM-EQA is an Embodied Q&A benchmark that contains over 1600 questions from 152 ScanNet [3] and HM3D [27] scenes. The open-vocabulary and open-ended questions in EM-EQA encompass diverse daily tasks such as object recognition, functional reasoning, and spatial understanding. For each question, a trajectory comprising RGB-D observations and the corresponding camera poses at each step is provided, offering the contextual information needed to answer the question.

Baselines.
We compare against language-only scene representations, including ConceptGraphs captions, Sparse Voxel Map captions, and frame captions. We also compare against Multi-Frame, which directly processes 2 to 6 linearly sampled frames using GPT-4o.

Analysis. From the results, we observe that: 1) As shown in Table 2, both 3D-Mem and Multi-Frame significantly outperform methods that rely on captions to represent a 3D scene, while using only approximately three frames. This demonstrates the effectiveness of using a set of images to represent a 3D scene and highlights the limitations of 3D scene-graph captions in addressing complex queries involving relationships between objects. 2) In both Table 2 and Figure 4, 3D-Mem surpasses Multi-Frame in frame efficiency, underscoring the compact and informative nature of our proposed 3D scene memory.

# 4.3. GOAT-Bench

On GOAT-Bench [16] (Table 3), we evaluate 3D-Mem's effectiveness as a lifelong memory system that facilitates efficient exploration and reasoning. More implementation details are in Appendix 10.3.

Benchmark. GOAT-Bench is a multimodal lifelong navigation benchmark in which an agent is tasked with sequentially navigating to several objects in an unknown scene, with each target described by either a category name (e.g., microwave), a language description (e.g., the microwave on the kitchen cabinet near the fridge), or an image of the target object. Due to the large size of GOAT-Bench and resource limitations, we assess a 1/10-size subset of the "Val Unseen" split, consisting of one exploration episode for each of the 36 scenes and totaling 278 navigation subtasks. For reference, we also include the evaluation of 3D-Mem on the full test set in Appendix 6.

Metrics. GOAT-Bench employs Success Rate and Success weighted by Path Length (SPL), similar to the A-EQA benchmark. A navigation subtask is deemed successful if the agent's final location is within 1 meter of the navigation goal.
SPL is the success score weighted by exploration distance.

Baselines. Similar to the experiments on A-EQA, we compare 3D-Mem with the Explore-EQA [28] and ConceptGraphs [8] baselines. Due to implementation differences in Explore-EQA, we introduce an additional success criterion for this baseline: a subtask is considered successful if the target object is visible in the final observation. This extra criterion leverages ground-truth grounding, thereby strengthening the baseline. To demonstrate the effectiveness of 3D-Mem's lifelong memory, we include another baseline (3D-Mem w/o memory) in which we clear the constructed scene memory after each subtask. We also directly include baselines implemented in GOAT-Bench; however, these baselines are simple RNN-based models trained via reinforcement learning, which causes their performance to lag behind the baselines we implemented.

Analysis. As shown in Table 3, 3D-Mem achieves the highest scores in both accuracy and efficiency. Our major observations are as follows: 1) Even though GOAT-Bench is an object-based navigation benchmark well-suited to the ConceptGraphs setting, 3D-Mem still outperforms ConceptGraphs w/ frontier snapshots. This can be attributed to 3D-Mem's multi-view image representation, which captures more comprehensive information, making it easier to match the diverse target descriptions in GOAT-Bench. 2) Compared with the original 3D-Mem, the performance of 3D-Mem w/o memory declines for both the GPT-4o and LLaVA-7B models, particularly in efficiency (SPL), indicating that 3D-Mem is beneficial as a memory system for lifelong learning. 3) Explore-EQA, which uses a traditional value map for each subtask to indicate regions of interest, also performs worse, as it lacks a mechanism to memorize information about explored regions.
4) In terms of efficiency, after each episode, 3D-Mem generates an average of 16.58 memory snapshots from 91.37 egocentric observations, and after prefiltering with $K = 10$ , only 4.66 memory snapshots are kept for each query. + +
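The SPL numbers reported here and in Section 4.1 follow the standard success-weighted-by-path-length form; the sketch below is our own illustration of that aggregation (hypothetical names, not the benchmark's implementation, and for A-EQA the binary success is replaced by the LLM-Match score):

```python
def spl(successes, shortest_lengths, actual_lengths):
    # Success weighted by Path Length: mean over episodes of
    # S_i * l_i / max(p_i, l_i), where S_i is the success indicator,
    # l_i the shortest-path length and p_i the agent's actual path
    # length, reported on a 0-100 scale.
    total = sum(
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, actual_lengths)
    )
    return 100.0 * total / len(successes)

# One success along a 10 m shortest path travelled in 20 m, one failure:
print(spl([1, 0], [10.0, 8.0], [20.0, 9.0]))  # 25.0
```

This makes explicit why SPL rewards both reaching the goal and reaching it along a short path: a success along a long detour is discounted, and a failure contributes zero.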
| Method | Success Rate ↑ | SPL ↑ |
| --- | --- | --- |
| *GOAT-Bench Baselines* | | |
| Modular GOAT* | 24.9 | 17.2 |
| Modular CLIP on Wheels* | 16.1 | 10.4 |
| SenseAct-NN Skill Chain* | 29.5 | 11.3 |
| SenseAct-NN Monolithic* | 12.3 | 6.8 |
| *Open-Sourced VLM Exploration* | | |
| 3D-Mem w/o memory† | 40.6 | 14.6 |
| 3D-Mem (Ours)† | 49.6 | 29.4 |
| *GPT-4o Exploration* | | |
| Explore-EQA† | 55.0 | 37.9 |
| CG w/ Frontier Snapshots† | 61.5 | 45.3 |
| 3D-Mem w/o memory† | 58.6 | 38.5 |
| 3D-Mem (Ours)† | 69.1 | 48.9 |
+ +Table 3. Experiments on GOAT-Bench. Evaluated on the "Val Unseen" split. "CG" denotes ConceptGraphs. Methods denoted by * are from GOAT-Bench, and with † are evaluated on the subset. + +# 5. Conclusion + +We present 3D-Mem, a 3D scene memory framework that uses a set of informative multi-view snapshot images to store robust visual information of a 3D scene. With the integration of the frontier-based exploration framework, 3D-Mem allows the agent to either leverage the memory of explored regions to solve tasks or further explore the scene to expand its knowledge. With its incremental construction and efficient memory retrieval mechanism, 3D-Mem serves as an effective memory system for lifelong agents. Extensive experiments demonstrate the significant advantages of 3D-Mem over traditional scene representations. + +# References + +[1] Iro Armeni, Zhi-Yang He, JunYoung Gwak, Amir R Zamir, Martin Fischer, Jitendra Malik, and Silvio Savarese. 3d scene graph: A structure for unified semantics, 3d space, and camera. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5664-5673, 2019. 3 +[2] Boyuan Chen, Zhuo Xu, Sean Kirmani, Brain Ichter, Dorsa Sadigh, Leonidas Guibas, and Fei Xia. Spatialvlm: Endowing vision-language models with spatial reasoning capabilities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14455-14465, 2024. 2 +[3] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017. 7, 3 +[4] Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, and Xiaojuan Qi. Pla: Language-driven open-vocabulary 3d scene understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7010-7019, 2023. 2 +[5] Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, and Xiaojuan Qi. 
Lowis3d: Language-driven open-world instance-level 3d scene understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 2 +[6] Sourav Garg, Krishan Rana, Mehdi Hosseinzadeh, Lachlan Mares, Niko Suenderhauf, Feras Dayoub, and Ian Reid. Robohop: Segment-based topological map representation for open-world visual navigation. arXiv, 2023. 3, 9 +[7] Paul Gay, James Stuart, and Alessio Del Bue. Visual graphs from motion (vgfm): Scene understanding with object geometry reasoning. In Computer Vision-ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part III 14, pages 330–346. Springer, 2019. 3 +[8] Qiao Gu, Ali Kuwajerwala, Sacha Morin, Krishna Murthy Jatavallabhula, Bipasha Sen, Aditya Agarwal, Corban Rivera, William Paul, Kirsty Ellis, Rama Chellappa, Chuang Gan, Celso Miguel de Melo, Joshua B. Tenenbaum, Antonio Torralba, Florian Shkurti, and Liam Paull. Conceptgraphs: Open-vocabulary 3d scene graphs for perception and planning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 5021-5028, 2024. 2, 3, 4, 7, 8 +[9] Yining Hong, Yilun Du, Chunru Lin, Josh Tenenbaum, and Chuang Gan. 3d concept grounding on neural fields. Advances in Neural Information Processing Systems, 35:7769-7782, 2022. 3 +[10] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. arXiv, 2023. 2, 6 +[11] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. + +Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 8 +[12] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. arXiv preprint arXiv:2311.12871, 2023. 2 +[13] Nathan Hughes, Yun Chang, and Luca Carlone. 
Hydra: A real-time spatial perception system for 3d scene graph construction and optimization. arXiv preprint arXiv:2201.13360, 2022. 3 +[14] Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, et al. Conceptfusion: Open-set multimodal 3d mapping. arXiv preprint arXiv:2302.07241, 2023. 2, 10 +[15] Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19729-19739, 2023. 2 +[16] Mukul Khanna*, Ram Ramrakhya*, Gunjan Chhablani, Sriram Yenamandra, Theophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, and Roozbeh Mottaghi. Goat-bench: A benchmark for multi-modal life-long navigation. In CVPR, 2024. 8 +[17] Nuri Kim, Obin Kwon, Hwiyeon Yoo, Yunho Choi, Jeongho Park, and Songhawi Oh. Topological Semantic Graph Memory for Image Goal Navigation. In CoRL, 2022. 3, 9 +[18] Ue-Hwan Kim, Jin-Man Park, Taek-Jin Song, and Jong-Hwan Kim. 3-d scene graph: A sparse and semantic representation of physical environments for intelligent agents. IEEE transactions on cybernetics, 50(12):4921-4933, 2019. 3 +[19] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International conference on machine learning, pages 12888-12900. PMLR, 2022. 3 +[20] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 7, 8 +[21] Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, and Dhruv Batra. Zson: Zero-shot object-goal navigation using multimodal goal embeddings. Advances in Neural Information Processing Systems, 35:32340-32352, 2022. 
3
+[22] Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, Pranav Putta, Sriram Yenamandra, Mikael Henaff, Sneha Silwal, Paul Mcvay, Oleksandr Maksymets, Sergio Arnaud, Karmesh Yadav, Qiyang Li, Ben Newman, Mohit Sharma, Vincent Berges, Shiqi Zhang, Pulkit Agrawal, Yonatan Bisk, Dhruv Batra, Mrinal Kalakrishnan, Franziska Meier, Chris Paxton, Sasha Sax, and Aravind Rajeswaran. OpenEQA: Embodied question answering in the era of foundation models. In Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 2, 6, 7
+[23] Kirill Mazur, Edgar Sucar, and Andrew J Davison. Feature-realistic neural fusion for real-time, open set scene understanding. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 8201-8207. IEEE, 2023. 2
+[24] Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al. Openscene: 3d scene understanding with open vocabularies. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 815-824, 2023. 3
+[25] Xavi Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Ruslan Partsey, Jimmy Yang, Ruta Desai, Alexander William Clegg, Michal Hlavac, Tiffany Min, Theo Gervet, Vladimir Vondrus, Vincent-Pierre Berges, John Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, and Roozbeh Mottaghi. Habitat 3.0: A co-habitat for humans, avatars and robots, 2023. 7, 8
+[26] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 3
+[27] Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al.
Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. arXiv preprint arXiv:2109.08238, 2021. 6, 7, 8 +[28] Allen Z Ren, Jaden Clark, Anushri Dixit, Masha Itkina, Anirudha Majumdar, and Dorsa Sadigh. Explore until confident: Efficient exploration for embodied question answering. arXiv preprint arXiv:2403.15941, 2024. 3, 4, 6, 7, 8 +[29] Antoni Rosinol, Andrew Violette, Marcus Abate, Nathan Hughes, Yun Chang, Jingnan Shi, Arjun Gupta, and Luca Carlone. Kimera: From slam to spatial perception with 3d dynamic scene graphs. The International Journal of Robotics Research, 40(12-14):1510-1546, 2021. 3 +[30] Sergio M Savaresi and Daniel L Boley. On the performance of bisecting k-means and pddp. In Proceedings of the 2001 SIAM International Conference on Data Mining, pages 1-14. SIAM, 2001. 4 +[31] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. 7, 8 +[32] Nur Muhammad Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, and Arthur Szlam. Clip-fields: Weakly supervised semantic fields for robotic memory. arXiv preprint arXiv:2210.05663, 2022. 3 +[33] Dhruv Shah, Michael Robert Equi, Błazej Osiński, Fei Xia, Brian Ichter, and Sergey Levine. Navigation with large language models: Semantic guesswork as a heuristic for planning. In Conference on Robot Learning, pages 2683-2699. PMLR, 2023. 3 +[34] Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, + +Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. 
Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 7, 8
+[35] Nikolaos Tsagkas, Oisin Mac Aodha, and Chris Xiaoxuan Lu. Vl-fields: Towards language-grounded neural implicit spatial representations. arXiv preprint arXiv:2305.12427, 2023. 2
+[36] Johanna Wald, Helisa Dhamo, Nassir Navab, and Federico Tombari. Learning 3d semantic scene graphs from 3d indoor reconstructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3961-3970, 2020. 2, 3
+[37] Shun-Cheng Wu, Johanna Wald, Keisuke Tateno, Nassir Navab, and Federico Tombari. Scenegraphfusion: Incremental 3d scene graph prediction from rgb-d sequences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7515-7525, 2021. 3
+[38] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In European Conference on Computer Vision, pages 131-147. Springer, 2025. 2
+[39] Kashu Yamazaki, Taisei Hanyu, Khoa Vo, Thang Pham, Minh Tran, Gianfranco Doretto, Anh Nguyen, and Ngan Le. Open-fusion: Real-time open-vocabulary 3d mapping and queryable scene representation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 9411-9417. IEEE, 2024. 3
+[40] Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 7694-7701. IEEE, 2024. 2
+[41] Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher. Vlfm: Vision-language frontier maps for zero-shot semantic navigation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 42-48. IEEE, 2024. 3
+[42] Junbo Zhang, Runpei Dong, and Kaisheng Ma.
Clip-fo3d: Learning free open-world 3d scene representations from 2d dense clip. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2048–2059, 2023. 2, 3 +[43] Jiazhao Zhang, Kunyu Wang, Rongtao Xu, Gengze Zhou, Yicong Hong, Xiaomeng Fang, Qi Wu, Zhizheng Zhang, and Wang He. Nvid: Video-based vlm plans the next step for vision-and-language navigation. arXiv preprint arXiv:2402.15852, 2024. 3 +[44] Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. Segment everything everywhere all at once. Advances in Neural Information Processing Systems, 36, 2024. 3 \ No newline at end of file diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/images.zip b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5fc062e6d6f6493e4332b131a1ca0391765692bf --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50ca9279c475272dd379c8edf822c82c2791044e43f3fa62e438051f8f4d1ca2 +size 370414 diff --git a/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/layout.json b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5a7f86d8942affcc1bd05d5bf00ad84488ee814e --- /dev/null +++ b/CVPR/2025/3D-Mem_ 3D Scene Memory for Embodied Exploration and Reasoning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35095028c8fd6b902245044a26d8ef744c0179b6b2fb26cd0ca95a9cf4dc1f29 +size 373214 diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_content_list.json b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation 
for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..60b8f30fdb63ec0a50dadc3a28c7c2c9438a5083 --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3431e231cc82737679590e8c03d3a44c8802a9a73cb3df5640487072fd1e174e +size 72757 diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_model.json b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e2b33af3161e2606ff88223281a810a58a823f7b --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1944d108f1fdf1848c89984b320b394c739b44a61d10ae955728531cfd0f146 +size 92006 diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_origin.pdf b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ee26fdab933d3c60eccb349a9001ebc2094dc349 --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/4284846f-ef79-4d62-9c0f-2fa490a19f60_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cac3ef5520cff40ff67b695ea024a9a9bc825ba3031ed9ff31d8718f08c934f +size 2273938 diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 
3D Mapping/full.md b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/full.md new file mode 100644 index 0000000000000000000000000000000000000000..822b1912cb2cee6c7be40a2194503b9db4face5a --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/full.md @@ -0,0 +1,286 @@ +# 3D-SLNR: A Super Lightweight Neural Representation for Large-scale 3D Mapping + +Chenhui Shi $^{1,2}$ Fulin Tang $^{1\dagger}$ Ning An $^{3,4}$ Yihong Wu $^{1,2\dagger}$ + +$^{1}$ Institute of Automation, Chinese Academy of Sciences + +$^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences + +$^{3}$ China Coal Research Institute $^{4}$ State Key Laboratory of Intelligent Coal Mining and Strata Control + +shichenhui2022@ia.ac.cn,tangfulin2015@ia.ac.cn,ning.an@ccteg-bigdata.com,yhwu@nlpr.ia.ac.cn + +# Abstract + +We propose 3D-SLNR, a new and ultra-lightweight neural representation with outstanding performance for large-scale 3D mapping. The representation defines a global signed distance function (SDF) in near-surface space based on a set of band-limited local SDFs anchored at support points sampled from point clouds. These SDFs are parameterized only by a tiny multi-layer perceptron (MLP) with no latent features, and the state of each SDF is modulated by three learnable geometric properties: position, rotation, and scaling, which make the representation adapt to complex geometries. Then, we develop a novel parallel algorithm tailored for this unordered representation to efficiently detect local SDFs where each sampled point is located, allowing for real-time updates of local SDF states during training. Additionally, a prune-and-expand strategy is introduced to enhance adaptability further. The synergy of our low-parameter model and its adaptive capabilities results in an extremely compact representation with excellent expressiveness. 
Extensive experiments demonstrate that our method achieves state-of-the-art reconstruction performance with less than 1/5 of the memory footprint compared with previous advanced methods.

# 1. Introduction

This paper aims to reconstruct large-scale scenes from point clouds, ranging from indoor and outdoor scenes to city-scale environments. This is a key research topic in computer vision and graphics and holds great promise for applications such as self-driving, robotics, and mixed reality. However, the task presents significant challenges, as larger scene scales introduce more complex geometry and higher memory demands. Thus, an expressive yet highly compact map representation is desirable.

Previous approaches have extensively explored optimal scene representations. Traditional approaches usually model scenes directly as discrete distance grids [28, 42] or predefined basis functions [13, 15], which are then converted into 3D fields through techniques like truncated signed distance function (TSDF) fusion [26] or finite element analysis [13, 15]. To enable scalability for large scenes, various sparse data structures [9, 12, 14, 28, 42] are widely employed. However, capturing detailed geometry with these methods necessitates a large allocation of data nodes, resulting in a substantial increase in memory usage. Building on the above traditional concepts, many learning-based methods [20, 37, 38, 48, 52] represent scenes as continuous and differentiable implicit fields using neural networks, often incorporating sparse feature grids [22, 25, 40] to facilitate scalability and expressiveness. Some methods [11, 46] reconstruct scenes using data-driven neural kernels, offering greater expressiveness than the basis functions used in traditional methods [13, 15].
These methods mitigate the need for extensive data node allocation but require numerous high-dimensional latent features to capture intricate geometric details, introducing additional memory overhead. Recent point-based learning methods [30, 35, 47] show great potential for lightweight representation due to their inherent adaptivity; however, they still rely heavily on latent features, leaving this adaptivity underexploited. In summary, prior methods often prioritize expressiveness at the expense of memory efficiency.

![](images/060be77f697393ddc45e8cc5da79867a8418bf207b7da949e8244a24ea25a87d.jpg)
Figure 1. Our method defines a global SDF based on a set of band-limited local SDFs anchored at support points. It reconstructs more consistent and smoother meshes in a kilometer-scale scene from the KITTI-360 dataset with less than $1/5$ of the memory consumption (MC, in megabytes) for map representation, compared to previous advanced methods.

In this paper, we present 3D-SLNR, a new and super lightweight representation with exceptional expressiveness for large-scale 3D mapping. A visualization is provided in Fig. 1. Our method combines principles from neural implicit representations and basis functions while fully exploiting the adaptive nature of point-based representations. Specifically, we define a continuous and differentiable global SDF using a set of band-limited local SDFs anchored at support points sampled from point clouds. Unlike previous feature-dependent methods, we parameterize these local SDFs with only a small MLP, effectively serving as a learnable basis SDF, which significantly reduces the number of model parameters. To enhance expressiveness, we control the state of each local SDF by three learnable geometric properties (position, rotation, and scaling), which allow our representation to adapt flexibly to complex geometries.
Furthermore, we introduce a novel parallel algorithm to efficiently detect the local SDFs in which each sampled point is located, enabling real-time updates of the states of local SDFs during training. This capability is critical to making our unordered representation applicable to large-scale mapping. In addition, we propose a prune-and-expand strategy that removes redundant support points while allocating more points to under-reconstructed regions, further boosting adaptability. The above innovations make our representation extremely compact and strongly expressive. The main contributions of this paper are summarized as follows:

- A new and ultra-lightweight neural representation is proposed for large-scale 3D mapping that is feature-independent and highly expressive.
- A novel parallel local SDF detection algorithm tailored for the proposed representation is developed to enable real-time updates during training.
- A prune-and-expand strategy is introduced to enhance adaptability by dynamically controlling point allocation.
- Extensive experiments demonstrate that our method achieves state-of-the-art reconstruction performance with less than 1/5 of the memory footprint compared with previous advanced methods.

# 2. Related work

Traditional 3D reconstruction. Poisson reconstruction [13, 15] is one of the most popular methods. The central step is solving a Poisson equation to convert discrete points into a continuous indicator field. Specifically, it employs B-spline basis functions anchored at multi-resolution grid nodes [14] to fit the indicator function, followed by solving for the weights of each basis function. However, these basis functions are predefined, limiting their expressiveness. In complex scenes, they tend to fit well only in very localized regions, requiring a substantial increase in data nodes to improve expressiveness, which in turn leads to a large memory footprint.
Another prominent approach is TSDF fusion-based reconstruction [26], which fuses per-frame TSDFs into a global representation. By leveraging sparse data structures such as octrees [9, 41], hash tables [12, 28, 29], and VDBs [42], reconstruction in large-scale scenes has been achieved. Nevertheless, significant numbers of data nodes still need to be assigned to accommodate intricate geometries. In contrast, our expressiveness comes from adaptivity and the learnable basis SDF, avoiding excessive memory usage.

Neural implicit reconstruction. With the advent of seminal works such as Occupancy networks [23], DeepSDF [31], and NeRF [24], reconstruction methods based on neural implicit representations have rapidly emerged. Early approaches [1, 7, 27, 43, 50] only use single-MLP architectures to represent scenes; however, the limited capacity of MLPs renders them inadequate for reconstructing large-scale scenes. To address this, a plethora of methods [17, 18, 44, 49] adopt sparse feature grids [25, 40] to enhance model capacity and improve geometric fidelity. Subsequently, methods like SHINE-Mapping [52], along with [37], [48], [6], [38] and [32], demonstrate effective large-scale reconstruction utilizing ranging data. Furthermore, NKSR [11] reconstructs large-scale scenes using data-driven neural kernels [46] that are more expressive than the predefined basis functions employed in Poisson methods [13, 15]. Despite these advancements, existing methods for large-scale reconstruction heavily rely on numerous high-dimensional latent features, resulting in substantial memory overhead. In contrast, our approach eliminates the dependence on such latent features, significantly reducing memory consumption.

![](images/1cc869dae9146609e50aa7b810325cd3d80e3f7383d3176b2e42d5949b98523c.jpg)
Figure 2. Overview of the proposed large-scale 3D mapping approach. (a) We construct a spatial hash grid near geometric surfaces and sample support points from the scene point cloud. (b) A global SDF constrained in the hash grid is initialized by setting up local SDFs anchored at these support points. (c) We employ our parallel algorithm to efficiently detect local SDFs where each sampled point is located. (d) The 3D coordinate of the sampled point is mapped from the world coordinate system to local coordinate systems defined by geometric parameters of the detected local SDFs. Through a tiny MLP, these local coordinates are converted into an SDF value. (e) We compute the SDF label for the point sampled in the hash grid and construct the loss function for training. (f) During training, a prune-and-expand strategy is performed at regular iterations.

Point-based reconstruction. Surfel [33] is a classical point-based representation that consists of a set of oriented 2D disks attached to geometric surfaces. It has been widely adopted by 3D mapping methods [3, 5, 36, 45] due to its compactness and adaptability. However, this representation is discrete and unordered, limiting its ability to model continuous surfaces. Recent methods, such as Point-SLAM [35] and PIN-SLAM [30], represent scenes as continuous SDFs based on neural points, exhibiting promising results for 3D reconstruction. Yet, these methods still heavily depend on latent features and fail to fully leverage the natural adaptability of point-based representations. Approaches [10, 51] based on 3D Gaussian splatting (3DGS) [16] adopt efficient splatting and densification algorithms to ensure adaptive updates of point-based Gaussians; nevertheless, these algorithms are primarily designed for real-time image rendering, and their memory footprint escalates significantly during training. Our approach can be considered a point-based continuous representation but does not rely on latent features.
Moreover, the introduced local SDF detection algorithm and prune-and-expand strategy are specifically developed for our representation, ensuring sufficient adaptability and efficiency.

# 3. Method

An overview of our approach is shown in Fig. 2. First, a spatial hash grid near geometric surfaces is constructed [37] and support points are sampled from the scene point cloud generated by accumulating posed point cloud frames. Then, we set up local SDFs anchored at these support points, creating an initial global SDF constrained in the near-surface space by the hash grid. Subsequently, our parallel algorithm is employed to efficiently detect the local SDFs at each sampled point in the hash grid. After that, we can obtain the SDF value of each sampled point through the geometric parameters of these detected local SDFs and a small MLP. Finally, we compute the SDF labels and construct the loss function for training. In the training process, a prune-and-expand strategy is performed at regular iterations. After training, we can extract meshes from the optimized global SDF by the marching cubes algorithm [21]. In this section, we first describe our map representation in Sec. 3.1, and then introduce the local SDF detection algorithm in Sec. 3.2, followed by the prune-and-expand strategy in Sec. 3.3, and finally, the training process in Sec. 3.4.

# 3.1. Map representation

Our map representation, 3D-SLNR, defines a global SDF $\mathcal{F}^g$ that is composed of a set of $N$ local SDFs:

$$
\mathcal{F}^{g} = \left\{ F_{i}^{l} = \left(\mathbf{x}_{i}, \mathbf{r}_{i}, \mathbf{s}_{i}, F_{\theta}\right) \mid i = 1, \dots, N \right\}, \tag{1}
$$

where each local SDF $F_{i}^{l}$ is derived from a band-limited basis SDF $F_{\theta}$ with a radius of $m = 3$, parameterized by a tiny MLP with parameters $\theta$.
Each local SDF is anchored at a support point with position $\mathbf{x}_i \in \mathbb{R}^3$ and shaped by a rotation vector $\mathbf{r}_i \in \mathbb{R}^3$ and a scaling vector $\mathbf{s}_i \in \mathbb{R}^3$. Therefore, the learnable parameters are $\{\mathbf{x}_i, \mathbf{r}_i, \mathbf{s}_i, \theta\}$, not including any latent features.

Given a sampled point $\pmb{p}$ in the world coordinate system and its $K$ detected local SDFs $\{F_j^p \mid j = 1, \dots, K\}$, we can query the SDF value of $\pmb{p}$ as presented in Fig. 2(d). First, we transform $\pmb{p}$ from the world coordinate system to the local coordinate system of the basis SDF through the geometric parameters of these detected local SDFs:

$$
\boldsymbol{p}_{j}^{l} = \frac{\mathbf{R}_{j}^{-1}(\boldsymbol{p} - \mathbf{x}_{j})}{\exp(\mathbf{s}_{j})}, \tag{2}
$$

where $\mathbf{R}_j \in \mathbb{R}^{3\times 3}$ is the rotation matrix converted from the rotation vector $\mathbf{r}_j$, and the exponentiation of $\mathbf{s}_j$ ensures that the scales of local SDFs are always greater than 0. Then, we can query the local SDF values from the MLP-parameterized basis SDF:

$$
f_{j}^{\boldsymbol{p}} = F_{\theta}\left(\boldsymbol{p}_{j}^{l}\right). \tag{3}
$$

Finally, the SDF value of $\pmb{p}$ can be obtained by a weighted average:

$$
S = \sum_{j=1}^{K} \frac{w_{j}}{\sum_{k=1}^{K} w_{k}} f_{j}^{\boldsymbol{p}}, \tag{4}
$$

where $w_{j}$ is a weight factor calculated by:

$$
w_{j} = \exp\left(-\left\|\boldsymbol{p}_{j}^{l}\right\|_{2}^{2}\right). \tag{5}
$$

The above formulation is continuous and differentiable for all parameters. As a result, we can update the distribution and shapes of these local SDFs during training, which endows our representation with high expressiveness and adaptivity.
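The query pipeline of Eqs. (2)-(5) is straightforward to sketch. Below is a minimal NumPy illustration with stand-ins for what the paper learns: the tiny MLP $F_\theta$ is replaced by a plane $q \mapsto q_z$ (matching the plane initialization described later), and the rotation vector is converted to a matrix with Rodrigues' formula; all function names are ours, not the paper's.

```python
import numpy as np

def rotvec_to_matrix(r):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def query_sdf(p, xs, rs, ss, basis_sdf):
    """SDF value at world point p from its K detected local SDFs."""
    vals, weights = [], []
    for x_j, r_j, s_j in zip(xs, rs, ss):
        R = rotvec_to_matrix(r_j)
        p_local = R.T @ (p - x_j) / np.exp(s_j)      # Eq. (2); R^{-1} = R^T
        vals.append(basis_sdf(p_local))              # Eq. (3)
        weights.append(np.exp(-p_local @ p_local))   # Eq. (5)
    w = np.asarray(weights)
    return float((w / w.sum()) @ np.asarray(vals))   # Eq. (4)

plane = lambda q: q[2]  # stand-in for the learned basis SDF F_theta
S = query_sdf(np.array([0.0, 0.0, 0.1]),
              xs=[np.zeros(3), np.array([0.2, 0.0, 0.0])],
              rs=[np.zeros(3)] * 2,
              ss=[np.zeros(3)] * 2,
              basis_sdf=plane)  # both local planes agree -> S = 0.1
```

With identity rotations and unit scales ($\exp(0) = 1$) both local SDFs report the same height, so the weighted average returns it unchanged; when local SDFs disagree, the weights of Eq. (5) favor the ones whose support point is closest to $\pmb{p}$.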
According to the descriptions above, our approach can be viewed as a basis function-based neural implicit representation method. Compared to the predefined bases in traditional methods [13, 15], our basis SDF is data-driven and thus more expressive. In comparison to the feature-derived kernels in learning-based methods [11, 46], ours is feature-independent and extremely compact. The proposed approach is also a point-based method, but more adaptive than previous methods [30, 35, 47].

Initialization. We construct a spatial hash grid following [37], constraining the global SDF and point sampling to the near-surface space. The scene point cloud is voxel-downsampled with a voxel size of $s_v$ to get the initial support points $\mathbf{x}^{init}$, which are also the initial positions of the local SDFs. The scaling vectors $\mathbf{s}^{init}$ of the local SDFs are initialized as $[\ln(s_v), \ln(s_v), \ln(s_v)]^T$. We perform principal component analysis (PCA) on neighboring points to estimate the normals $\mathbf{n}$ of the support points and correct these normals toward the data acquisition sensor. Then, the initial rotation vectors $\mathbf{r}^{init}$ are derived by:

$$
\mathbf{r}^{\text{init}} = \alpha \cdot \mathbf{r}_{b}, \quad \alpha = \angle\left(\mathbf{z}_{\text{axis}}, \mathbf{n}\right), \quad \mathbf{r}_{b} = \mathbf{z}_{\text{axis}} \times \mathbf{n}, \tag{6}
$$

where $\mathbf{z}_{axis} = [0,0,1]^T$ and $\angle(\cdot, \cdot)$ denotes the angle between two vectors. This makes $\mathbf{n}$ the $z$-axis of the local coordinate system of each local SDF. The basis SDF $F_{\theta}^{init}$ is initialized to an implicit plane.

![](images/47c21c31e9beeae4392137033761fa8767417514ab01a7bcdedddbdd62061779.jpg)
Figure 3.
In an iteration, our parallel local SDF detection algorithm (a) first selects candidate local SDFs according to the current observation range, (b) then constructs a grid index suitable for parallel retrieval in the bounding box determined by these candidates, and finally detects local SDFs in the voxel where each sampled point is located in parallel.

# 3.2. Parallel local SDF detection for sampled points

Fast detection of the local SDFs where each sampled point is located is vital for the proposed unordered representation to enable large-scale 3D mapping. The common practice is to leverage special data structures like R-trees [2, 8], but their construction speed fails to meet the real-time demands of local SDF updates. Existing point-based methods, such as Point-NeRF [47] and Point-SLAM [35], utilize inefficient nearest-neighbor searching, limiting them to small scenes. Although PIN-SLAM [30] enhances neighborhood querying through a hash index, it sacrifices the inherent adaptivity of point-based representations.

The recently rising 3DGS [16] has inspired us: it employs a highly efficient rasterizer for 3D Gaussians to achieve real-time rendering. Its core mechanism involves parallel detection of 2D Gaussians at the locations of regular image pixels in 2D screen space. In contrast, we focus on the parallel detection of 3D local SDFs at the locations of unordered sampled points in 3D space. Drawing on the similarity between these processes, we introduce a novel parallel algorithm specifically tailored for our representation. The algorithm is implemented using CUDA. We start by simplifying local SDFs as oriented bounding boxes determined by the geometric parameters $\{\mathbf{x},\mathbf{r},\mathbf{s}\}$ and the radius $m$, followed by selecting candidate local SDFs according to the current observation range of a sensor, such as a LiDAR or a depth camera, as shown in Fig. 3(a). Then, a regular grid is built covering these candidates.
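The grid-index construction used in this section (simplify each local SDF to a box, emit one instance per intersected voxel, sort instances by voxel key, record per-voxel ranges) can be sketched on the CPU as follows; the paper's version is a CUDA kernel, and all names below are illustrative:

```python
import numpy as np

def build_grid_index(boxes_min, boxes_max, origin, voxel):
    """Emit (voxel_key, sdf_id) instances for each candidate's bounding box,
    sort them by key, and record the [start, end) range of each voxel."""
    lo = np.floor((boxes_min - origin) / voxel).astype(int)
    hi = np.floor((boxes_max - origin) / voxel).astype(int)
    instances = []
    for sdf_id in range(len(boxes_min)):
        for i in range(lo[sdf_id, 0], hi[sdf_id, 0] + 1):
            for j in range(lo[sdf_id, 1], hi[sdf_id, 1] + 1):
                for k in range(lo[sdf_id, 2], hi[sdf_id, 2] + 1):
                    instances.append(((i, j, k), sdf_id))
    instances.sort(key=lambda inst: inst[0])  # sort instances by voxel key
    ranges = {}                               # voxel key -> (first, last + 1)
    for idx, (key, _) in enumerate(instances):
        start, _ = ranges.get(key, (idx, idx))
        ranges[key] = (start, idx + 1)
    return instances, ranges

def detect(p, origin, voxel, instances, ranges):
    """Local SDFs registered in the voxel containing sampled point p."""
    key = tuple(np.floor((p - origin) / voxel).astype(int))
    start, end = ranges.get(key, (0, 0))
    return [sdf_id for _, sdf_id in instances[start:end]]
```

Because the per-point lookup touches only one voxel's contiguous slice of the sorted array, every sampled point can be processed independently, which is what makes the GPU version embarrassingly parallel.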
Subsequently, we instantiate each local SDF by assigning keys to the candidates according to the IDs of the intersected grid voxels. After that, these instances are sorted by their keys and stored in an array. We identify the first and last instances intersecting each grid voxel in the sorted array. At this stage, a grid index suitable for parallel retrieval is established, as illustrated in Fig. 3(b). Finally, for each sampled point in the observation range, we only need to detect the local SDFs in the voxel where the sampled point is located. As shown in Fig. 3(b), local SDF detection is performed for four sampled points in parallel. An overview of our algorithm is presented in Suppl. 1.

# 3.3. Prune-and-expand strategy

![](images/96371cbcbce0ee73bcb76b4c032b30396b4a9db360b144215237661cee508ab2.jpg)
Figure 4. Prune-and-expand strategy.

In this section, we introduce a prune-and-expand strategy to further enhance the adaptivity of our representation. First, we observe that the support points extracted from the scene point clouds can contain significant noise, such as that introduced by dynamic objects, and some may deviate from the geometric surface during training. This compromises the accuracy of the reconstruction. Therefore, during training we remove the support points whose absolute SDF values exceed a threshold $d_{th}$, as presented in Fig. 4(b). However, the subsequent problem is that removing support points leads to a considerable loss of local SDFs, which hampers the completeness of the reconstruction. To address this, we aim to generate new local SDFs in the underfitting regions. We find that the local SDFs in these areas exhibit significant gradients $\nabla \mathbf{x}_e$ on planes (named $e$) orthogonal to the normals of the support points, as illustrated in Fig. 4(a). If the norms $|\nabla \mathbf{x}_e|$ exceed a threshold $x_{th}$ (set to $2 \times 10^{-4}$), new local SDFs will be created, as shown in Fig.
4(b). When the maximum scaling value $s_e$ on the $e$-plane is less than a threshold $s_{th}$ (set to $0.8s_v$), we clone a new local SDF along $\nabla \mathbf{x}_e$; when $s_e$ is larger than $s_{th}$, we split the local SDF into two smaller ones. The above process produces significantly different results from the densification strategy employed in 3DGS [16]. The densification of 3DGS leads to an extreme increase in Gaussians, whereas ours tends to remove more outlier local SDFs, resulting in reduced memory consumption (Tab. 4).

# 3.4. Training

As described in Sec. 3.1, a spatial hash grid has been constructed near the geometric surfaces. Then, we sample $M$ points in the hash grid along rays [37], as shown in Fig. 2(e). Given a sampled point $\pmb{p}$, an SDF value $S$ can be queried from our representation. Subsequently, we obtain the SDF label $S_{gt}$ according to the distance from the sampled point to the endpoint along the ray. Finally, the loss function is defined as follows:

$$
L = \begin{cases} |S - S_{gt}| + \lambda \left(\left\|\nabla_{\boldsymbol{p}} S\right\|_{2} - 1\right)^{2}, & \text{if } |S_{gt}| < tr \\ |S - tr|, & \text{otherwise} \end{cases} \tag{7}
$$

where $tr$ is a truncation distance and $\lambda$ is a weight factor set to 0.02. In effect, the loss function makes our representation a truncated signed distance function (TSDF). During training, the prune-and-expand is performed every 200 iterations. For more efficient backward computation, we derive the gradients of Eq. 2 and implement them with CUDA (see Suppl. 2 for derivation details).

# 4. Experiments

Settings. We compare our approach with a wide range of 3D mapping methods, including VDB-Fusion [42], SPSR [13], NKSR [11], SHINE-Mapping [52] and PIN-SLAM [30], in large-scale indoor and outdoor scenes. We adopt the commonly used metrics [39] named accuracy (Acc.) in centimeters $(cm)$, completeness (Comp.) in $cm$, precision (Prec.)
in percent $(\%)$, recall in $\%$, and F-score in $\%$, where Prec., recall, and F-score are computed with a $10cm$ error threshold in indoor scenes and a $20cm$ error threshold in outdoor scenes. We also evaluate the memory consumption for map representation (MC) in megabytes (MB). All experiments are conducted on a single RTX 4090 GPU. More setting details are presented in Suppl. 3.

# 4.1. Evaluation in indoor scenes

We randomly select five scenes from a large-scale indoor RGB-D dataset, Matterport3D [4], as the experimental data. Tab. 1 presents the quantitative results on the dataset, where the values are averages of the results assessed in these scenes. Fig. 5 shows the results evaluated in a three-story loft (see Suppl. 4 for more results). The experimental results demonstrate that our approach achieves state-of-the-art 3D reconstruction performance in large-scale indoor scenes with less than $1/6$ of the memory consumption for map representation. Compared to NKSR and SHINE-Mapping, which depend on high-dimensional latent features, our approach eliminates this reliance, significantly reducing the memory footprint. Although VDB-Fusion and SPSR do not rely on latent features, they need to allocate a large number of data nodes to enhance expressiveness, leading to a substantial increase in memory usage. In contrast, the expressiveness of our method comes from the adaptivity of our point-based representation and the data-driven basis SDF, avoiding excessive memory consumption.

![](images/77946946a09e58b1231e1845aa58a2f1009e36b8a0452896ba0b2397eb3b3102.jpg)
VDB-Fusion (F-score: 87.4, MC: 67.4)

![](images/8bd83e36a2cbdc5b0fdbcfb0aff86d59c7240cceae4c6b669317a672fd413579.jpg)
SPSR (F-score: 91.8, MC: 87.0)

![](images/02600a57eb1805df9c7a9819c18750c50baf9234fc971067efbd4ee9d41b2134.jpg)
NKSR (F-score: 92.6, MC: 97.9)

![](images/a8aa9a55f9d7226210d0e5b53f7762c8ac1ca4c7597db0542dae2dd303fbe7ee.jpg)
SHINE-Mapping (F-score: 92.8, MC: 145.6)

![](images/6b8b77ff9e9749bbd6112750581b91d0fac6188c03ae6eb95a756a4f9ffd9b03.jpg)
Ours (F-score: 92.8, MC: 10.9)

![](images/e5e3c1ffb5f86af0cc48a7c5f69f634d3bd5057b2543d187581f5939661a2c40.jpg)
ground truth mesh

Figure 5. Experimental results evaluated in a three-story loft (about $11.8m \times 21.8m \times 11.7m$) in the Matterport3D dataset. The reconstructed meshes are colored by surface normals.

| Method | Acc.↓ | Comp.↓ | Prec.↑ | Recall↑ | F-Score↑ | MC↓ |
|---|---|---|---|---|---|---|
| VDB-Fusion | 4.6 | 5.4 | 90.3 | 92.5 | 91.3 | 112.5 |
| SPSR | 5.0 | 3.8 | 91.1 | 96.8 | 93.8 | 80.5 |
| NKSR | 4.7 | 3.8 | 92.1 | 96.5 | 94.2 | 128.9 |
| SHINE-Mapping | 4.9 | 3.7 | 90.1 | 95.8 | 92.8 | 167.4 |
| Ours | 4.3 | 3.7 | 92.3 | 96.5 | 94.4 | 13.7 |

Table 1. Quantitative results of different methods evaluated on the Matterport3D dataset.

# 4.2. Evaluation in outdoor scenes

We evaluate the 3D mapping performance of our approach in large-scale outdoor scenes on three LiDAR datasets, including the Newer College dataset [34], a self-collected street-level dataset (see Suppl. 3.1 for a detailed description), and the KITTI-360 dataset [19]. The quantitative results listed in Tab. 2 demonstrate that our approach achieves state-of-the-art mapping performance (F-score) in the Newer College dataset while utilizing less than $1/5$ of the memory footprint for map representation compared to the previous advanced methods. The best mapping performance (F-score) mainly benefits from the high accuracy, as Acc. and Prec. indicate. Fig. 6 shows the qualitative results evaluated in the scene of Math Institute in the Newer College dataset. VDB-Fusion, SPSR, and NKSR introduce artifacts associated with dynamic objects, as highlighted in the black dashed circles. SHINE-Mapping exhibits notable robustness against dynamic elements, resulting in cleaner meshes. Our method shares this advantage with SHINE-Mapping, while concurrently eliminating inconsistent meshes and achieving improved accuracy, as evidenced by the error maps. PIN-SLAM is sensitive to noise and yields uneven surfaces, which are also incomplete, as shown in the orange dashed box. In comparison, ours delivers a smoother result, primarily through the full utilization of the adaptability inherent in point-based representations.

Fig. 7 shows the results evaluated in a street scene in the street-level dataset. In this experiment, we adjust the memory footprint of other methods to be as close to that of ours as possible. VDB-Fusion produces many fragmented meshes. The outputs of SPSR and NKSR tend to be overly smooth, resulting in a loss of crucial geometric details. This is particularly evident for NKSR, which shows an obvious decrease in F-score. PIN-SLAM has an F-score close to ours, but its produced mesh is too noisy.
While SHINE-Mapping can generate a visually plausible result, it significantly sacrifices accuracy, as indicated by the error maps. Our method achieves the best reconstruction performance due to its compact representation and excellent expressiveness. See Suppl. 4.2 for more results.

The qualitative results evaluated on the KITTI-360 dataset are shown in Fig. 1. In the kilometer-level scene, our method continues to deliver the best mapping performance with less than $1/5$ of the memory footprint. The meshes generated by SPSR suffer from bulging, mainly due to its sensitivity to the accuracy of point normals, which tend to be poorly estimated from the noisy point cloud. NKSR yields extremely redundant meshes. VDB-Fusion introduces artifacts of dynamic objects. SHINE-Mapping and PIN-SLAM produce uneven road surfaces. In contrast, our method is more robust to normal noise and dynamic objects and produces more consistent and smoother results. See Suppl. 4.2 for more results.

Quad:

| Method | Acc.↓ | Comp.↓ | Prec.↑ | Recall↑ | F-Score↑ | MC↓ |
|---|---|---|---|---|---|---|
| VDB-Fusion | 11.8 | 8.4 | 85.9 | 96.9 | 91.1 | 20.7 |
| SPSR | 11.2 | 9.7 | 87.8 | 96.1 | 91.8 | 39.6 |
| NKSR | 12.3 | 8.9 | 86.1 | 96.7 | 91.1 | 368.5 |
| SHINE-Mapping | 11.6 | 7.8 | 85.6 | 97.5 | 91.2 | 15.8 |
| PIN-SLAM | 13.3 | 8.8 | 82.7 | 94.6 | 88.3 | 18.8 |
| Ours | 9.7 | 9.7 | 89.4 | 96.3 | 92.7 | 3.3 |

Math Institute:

| Method | Acc.↓ | Comp.↓ | Prec.↑ | Recall↑ | F-Score↑ | MC↓ |
|---|---|---|---|---|---|---|
| VDB-Fusion | 12.1 | 13.5 | 79.8 | 88.3 | 83.8 | 43.9 |
| SPSR | 11.6 | 31.8 | 80.6 | 85.5 | 83.0 | 112.3 |
| NKSR | 11.9 | 13.3 | 80.2 | 89.8 | 84.7 | 408.7 |
| SHINE-Mapping | 12.9 | 9.9 | 77.0 | 92.5 | 84.0 | 33.8 |
| PIN-SLAM | 16.6 | 12.7 | 68.1 | 84.4 | 75.4 | 42.9 |
| Ours | 11.2 | 20.3 | 82.4 | 88.5 | 85.3 | 8.0 |

Table 2. Quantitative results of different methods evaluated on the Newer College dataset (top: Quad, bottom: Math Institute). The range of Quad is about $52.5m \times 71.3m \times 21.1m$ and the range of Math Institute is about $103.1m \times 109.0m \times 19.0m$.

![](images/324e8083e0e216fc7fb8b15eca1827df91631f06e0cfb3500d426164c4ba55d7.jpg)
Figure 6. Qualitative results evaluated in the scene of Math Institute in the Newer College dataset. The reconstructed meshes are colored by surface normals. The error maps show the error distribution with the ground truth point cloud as a reference, where the redder areas mean larger errors up to $20cm$.

![](images/9f97f5df09aa1acfb025de2700a54b5bfa2d65f4d19a4d858a7b5820aca3b708.jpg)
Figure 7. Experimental results evaluated in a street scene (about $235m$ long) in the street-level dataset. The reconstructed meshes are colored by surface normals. The error maps show the error distribution with the ground truth point cloud as a reference, where the redder areas mean larger errors up to $20cm$.

Running time. As listed in Tab. 3, for 100 thousand sampled points, our parallel local SDF detection algorithm achieves millisecond-level running efficiency, and our CUDA-improved backward computation is twice as fast as the native backward computation implemented in PyTorch, which facilitates the real-time updates of local SDFs.

| Process | local SDF detection | Backward (CUDA) | Total | Backward (PyTorch) |
|---|---|---|---|---|
| Time | 8 ms | 26 ms | 75 ms | 51 ms |

Table 3. The running time of the main modules in one iteration during training, evaluated in Math Institute in the Newer College dataset for 100 thousand sampled points.

# 4.3. Ablation study

Geometric parameters. As shown in Table 4, the update of geometric parameters significantly enhances mapping performance. With the sequential introduction of each geometric parameter, the F-score progressively improves, mainly due to a marked increase in precision. This incorporation fully leverages the adaptability of point-based representations, resulting in substantial performance gains in 3D mapping. These improvements are also attributed to our parallel local SDF detection algorithm, which enables the real-time updates of these geometric parameters.

| Method | Prec.↑ | Recall↑ | F-Score↑ | MC↓ |
|---|---|---|---|---|
| w/o geo | 77.8 | 87.8 | 82.5 | 9.9 |
| +rot | 80.0 | 90.0 | 84.7 | 9.9 |
| +rot+scale | 81.5 | 88.8 | 85.0 | 9.9 |
| +rot+scale+pos | 82.3 | 88.4 | 85.2 | 9.9 |
| Ours (+geo+pe) | 82.4 | 88.5 | 85.3 | 8.0 |

Table 4. Quantitative ablation results for geometric parameters and the prune-and-expand strategy evaluated in Math Institute. "geo" denotes all geometric parameters, "rot" denotes rotation parameters, "scale" denotes scaling parameters, "pos" denotes position parameters, and "pe" denotes prune-and-expand.

Prune-and-expand. As shown in Table 4, our prune-and-expand strategy provides a boost to mapping performance in terms of both accuracy and completeness while concurrently reducing memory consumption. The reason is that the remove operation deletes most of the outlier support points, and the clone and split operations complete the under-reconstructed regions, as presented in Fig. 8. This further enhances adaptability and expressiveness. The reduction in memory footprint results from the fact that more support points are removed than added, which significantly differs from the densification results of 3DGS [16]. According to the above results, our prune-and-expand strategy can be viewed as a form of parameter regularization.

![](images/c4b347582b0c737dd9aaee3f523c85af40bc7aa0605557a74a1ad814aaea52df.jpg)

![](images/0d8d668ae7475e98e460d53c51f35260825957f54c7b508a7701d6f84ce0d99d.jpg)
w/o prune-and-expand
w prune-and-expand
Figure 8. Qualitative ablation results for prune-and-expand evaluated in Math Institute. The left shows the error maps of support points, where the bluer points mean larger errors up to $20cm$. The right shows the close-ups of the reconstructed meshes.

Support point initialization. As illustrated in Fig. 9, we initialize support points by voxel-downsampling the scene point cloud with the voxel size $s_v = 1.0m$, $0.3m$, and $0.2m$ respectively. The memory footprint increases in this order.

![](images/6ca7d137354c38cea4e7cd7aa155d0910936596441d8a52a5cc83528a20f8d38.jpg)
Figure 9. Ablation results for support point initialization evaluated in Quad.

Nevertheless, the F-score shows a trend of increasing and then decreasing. When the number of parameters is low, underfitting results will be produced. As shown in Fig.
9(a), the wall is too smooth and the ground has holes. When the number of parameters is high, the results overfit: as shown in Fig. 9(c), the reconstructed wall is too noisy. When the number of parameters is moderate, the desired results can be generated, as shown in Fig. 9(b).

# 5. Conclusion and future work

In this paper, we have presented 3D-SLNR, a super-lightweight neural representation with excellent performance for large-scale 3D mapping. The representation introduces a new map structure that leverages a set of band-limited local SDFs to define a continuous and differentiable global SDF near the geometric surfaces. Each local SDF is determined only by the MLP-parameterized basis SDF and three geometric parameters, eliminating the need for latent features and significantly reducing the number of parameters. Furthermore, we introduce a novel parallel local SDF detection algorithm tailored for our representation to ensure the real-time updates of the local SDFs. This allows our method to fully exploit its adaptivity, enhancing expressiveness. Additionally, we develop a prune-and-expand strategy to remove outlier support points and refine under-reconstructed regions, functioning as a form of parameter regularization and further improving overall mapping performance. Our method achieves state-of-the-art 3D mapping performance with an exceptionally low memory footprint, spanning various large-scale environments, from indoor and outdoor scenes to kilometer-level landscapes.

In future work, we will further optimize the running efficiency and integrate the representation into a simultaneous localization and mapping (SLAM) system for online city-scale reconstruction. In addition, exploring open-scene 3D generation based on our representation presents a potential research direction.

Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No.
62202468, Beijing Science and Technology Plan Project under Grant Z231100007123005, and Beijing Natural Science Foundation under Grant L241012.

# References

[1] Dejan Azinović, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nießner, and Justus Thies. Neural rgb-d surface reconstruction. In Proceedings of the IEEE/CVF Conference on CVPR, pages 6290-6301, 2022. 2
[2] Norbert Beckmann, Hans-Peter Kriegel, Ralf Schneider, and Bernhard Seeger. The $r^*$-tree: An efficient and robust access method for points and rectangles. In Proceedings of the 1990 ACM SIGMOD international conference on Management of data, pages 322-331, 1990. 4
[3] Jens Behley and Cyril Stachniss. Efficient surfel-based slam using 3d laser range data in urban environments. In Robotics: Science and Systems, page 59, 2018. 3
[4] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. 5
[5] Xieyuanli Chen, Andres Milioto, Emanuele Palazzolo, Philippe Giguere, Jens Behley, and Cyril Stachniss. Suma++: Efficient lidar-based semantic slam. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4530-4537. IEEE, 2019. 3
[6] Junyuan Deng, Qi Wu, Xieyuanli Chen, Songpengcheng Xia, Zhen Sun, Guoqing Liu, Wenxian Yu, and Ling Pei. Nerf-loam: Neural implicit representation for large-scale incremental lidar odometry and mapping. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8218-8227, 2023. 2
[7] Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, and Xiaowei Zhou. Neural 3d scene reconstruction with the manhattan-world assumption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5511-5520, 2022. 2
[8] Antonin Guttman. R-trees: A dynamic index structure for spatial searching.
In Proceedings of the 1984 ACM SIGMOD international conference on Management of data, pages 47-57, 1984. 4
[9] Armin Hornung, Kai M Wurm, Maren Bennewitz, Cyril Stachniss, and Wolfram Burgard. Octomap: An efficient probabilistic 3d mapping framework based on octrees. Autonomous Robots, 34:189-206, 2013. 1, 2
[10] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 3
[11] Jiahui Huang, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler, and Francis Williams. Neural kernel surface reconstruction. In Proceedings of the IEEE/CVF Conference on CVPR, pages 4369-4379, 2023. 1, 2, 4, 5
[12] Olaf Kähler, Victor Adrian Prisacariu, Carl Yuheng Ren, Xin Sun, Philip Torr, and David Murray. Very high frame rate volumetric integration of depth images on mobile devices. IEEE Transactions on Visualization and Computer Graphics, 21(11):1241-1250, 2015. 1, 2
[13] Michael Kazhdan and Hugues Hoppe. Screened poisson surface reconstruction. ACM Transactions on Graphics (ToG), 32(3):1-13, 2013. 1, 2, 4, 5
[14] Misha Kazhdan and Hugues Hoppe. An adaptive multi-grid solver for applications in computer graphics. In Computer Graphics Forum, pages 138-150. Wiley Online Library, 2019. 1, 2
[15] Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proceedings of the fourth Eurographics symposium on Geometry processing, 2006. 1, 2, 4
[16] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 3, 4, 5, 8
[17] Hai Li, Xingrui Yang, Hongjia Zhai, Yuqian Liu, Hujun Bao, and Guofeng Zhang. Vox-surf: Voxel-based implicit surface representation. IEEE Transactions on Visualization and Computer Graphics, 30(3):1743-1755, 2022.
2
[18] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8456-8465, 2023. 2
[19] Yiyi Liao, Jun Xie, and Andreas Geiger. Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2d and 3d. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3):3292-3310, 2022. 6
[20] Jianheng Liu and Haoyao Chen. Towards large-scale incremental dense mapping using robot-centric implicit neural representation. In 2024 IEEE ICRA, pages 4045-4051. IEEE, 2024. 1
[21] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. In Seminal graphics: pioneering efforts that shaped the field, pages 347-353. 1998. 3
[22] Julien N. P. Martel, David B. Lindell, Connor Z. Lin, Eric R. Chan, Marco Monteiro, and Gordon Wetzstein. Acorn: Adaptive coordinate networks for neural scene representation. ACM Trans. Graph. (SIGGRAPH), 40(4), 2021. 1
[23] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4460-4470, 2019. 2
[24] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 2
[25] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (TOG), 41(4):1-15, 2022. 1, 2
[26] Richard A Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon.
Kinectfusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE international symposium on mixed and augmented reality, pages 127-136. IEEE, 2011. 1, 2
[27] Michael Oechsle, Songyou Peng, and Andreas Geiger. Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In Proceedings of the IEEE/CVF ICCV, pages 5589-5599, 2021. 2
[28] Helen Oleynikova, Zachary Taylor, Marius Fehr, Roland Siegwart, and Juan Nieto. Voxblox: Incremental 3d euclidean signed distance fields for on-board mav planning. In 2017 IEEE/RSJ IROS, pages 1366-1373. IEEE, 2017. 1, 2
[29] Yue Pan, Yves Kompis, Luca Bartolomei, Ruben Mascaro, Cyril Stachniss, and Margarita Chli. Voxfield: Non-projective signed distance fields for online planning and 3d reconstruction. In 2022 IEEE/RSJ International Conference on IROS, pages 5331-5338. IEEE, 2022. 2
[30] Yue Pan, Xingguang Zhong, Louis Wiesmann, Thorbjörn Posewsky, Jens Behley, and Cyril Stachniss. Pin-slam: Lidar slam using a point-based implicit neural representation for achieving global map consistency. arXiv preprint arXiv:2401.09101, 2024. 1, 3, 4, 5
[31] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165-174, 2019. 2
[32] Minseong Park, Suhan Woo, and Euntai Kim. Decomposition of neural discrete representations for large-scale 3d mapping. In ECCV, pages 360-376. Springer, 2024. 2
[33] Hanspeter Pfister, Matthias Zwicker, Jeroen Van Baar, and Markus Gross. Surfels: Surface elements as rendering primitives. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 335-342, 2000. 3
[34] Milad Ramezani, Yiduo Wang, Marco Camurri, David Wisth, Matias Mattamala, and Maurice Fallon.
The newer college dataset: Handheld lidar, inertial and vision with ground truth. In 2020 IEEE/RSJ International Conference on IROS, pages 4353-4360. IEEE, 2020. 6
[35] Erik Sandström, Yue Li, Luc Van Gool, and Martin R Oswald. Point-slam: Dense neural point cloud-based slam. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18433-18444, 2023. 1, 3, 4
[36] Thomas Schops, Torsten Sattler, and Marc Pollefeys. Bad slam: Bundle adjusted direct rgb-d slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 134-144, 2019. 3
[37] Chenhui Shi, Fulin Tang, Yihong Wu, Xin Jin, and Gang Ma. Accurate implicit neural mapping with more compact representation in large-scale scenes using ranging data. IEEE Robotics and Automation Letters, 2023. 1, 2, 3, 4, 5
[38] Shuangfu Song, Junqiao Zhao, Kai Huang, Jiaye Lin, Chen Ye, and Tiantian Feng. N$^3$-mapping: Normal guided neural non-projective signed distance fields for large-scale 3d mapping. IEEE Robotics and Automation Letters, 2024. 1, 2
[39] Jiaming Sun, Yiming Xie, Linghao Chen, Xiaowei Zhou, and Hujun Bao. Neuralrecon: Real-time coherent 3d reconstruction from monocular video. In Proceedings of the IEEE/CVF conference on CVPR, pages 15598-15607, 2021. 5
[40] Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler. Neural geometric level of detail: Real-time rendering with implicit 3d shapes. In Proceedings of the IEEE/CVF Conference on CVPR, pages 11358-11367, 2021. 1, 2
[41] Emanuele Vespa, Nikolay Nikolov, Marius Grimm, Luigi Nardi, Paul HJ Kelly, and Stefan Leutenegger. Efficient octree-based volumetric slam supporting signed-distance and occupancy mapping. IEEE Robotics and Automation Letters, 3(2):1144-1151, 2018. 2
[42] Ignacio Vizzo, Tiziano Guadagnino, Jens Behley, and Cyril Stachniss.
Vdbfusion: Flexible and efficient tsdf integration of range sensor data. Sensors, 22(3):1296, 2022. 1, 2, 5
[43] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689, 2021. 2
[44] Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian Theobalt, and Lingjie Liu. Neus2: Fast learning of neural implicit surfaces for multi-view reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3295-3306, 2023. 2
[45] Thomas Whelan, Stefan Leutenegger, Renato F Salas-Moreno, Ben Glocker, and Andrew J Davison. Elasticfusion: Dense slam without a pose graph. In Robotics: science and systems, page 3. Rome, Italy, 2015. 3
[46] Francis Williams, Zan Gojcic, Sameh Khamis, Denis Zorin, Joan Bruna, Sanja Fidler, and Or Litany. Neural fields as learnable kernels for 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18500-18510, 2022. 1, 2, 4
[47] Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, and Ulrich Neumann. Point-nerf: Point-based neural radiance fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5438-5448, 2022. 1, 4
[48] Dongyu Yan, Xiaoyang Lyu, Jieqi Shi, and Yi Lin. Efficient implicit neural reconstruction using lidar. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 8407-8414. IEEE, 2023. 1, 2
[49] Xingrui Yang, Hai Li, Hongjia Zhai, Yuhang Ming, Yuqian Liu, and Guofeng Zhang. Vox-fusion: Dense tracking and mapping with voxel-based neural implicit representation. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 499-507. IEEE, 2022. 2
[50] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces.
Advances in Neural Information Processing Systems, 34:4805-4815, 2021. 2 +[51] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on CVPR, pages 19447-19456, 2024. 3 +[52] Xingguang Zhong, Yue Pan, Jens Behley, and Cyril Stachniss. Shine-mapping: Large-scale 3d mapping using sparse hierarchical implicit neural representations. In 2023 IEEE ICRA, pages 8371-8377. IEEE, 2023. 1, 2, 5 \ No newline at end of file diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/images.zip b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1011d81bb9192b33b7c14692ae5297f52e17b5b6 --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91759d2eb0d2f46e7287b5e9c493ebcc23bb616de7fb71b97fed10649fed2fef +size 801855 diff --git a/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/layout.json b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..30d77ed0b6958311de3a80265c348e7bfa151602 --- /dev/null +++ b/CVPR/2025/3D-SLNR_ A Super Lightweight Neural Representation for Large-scale 3D Mapping/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c68d77067d83a3d1e4fec12003e1a1e4fb48a5771b3c230386f7f03716bf8331 +size 368242 diff --git a/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_content_list.json b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_content_list.json new file mode 
100644 index 0000000000000000000000000000000000000000..aa91e372965ab0678ebe7ae55a0b1a58b7ea5564 --- /dev/null +++ b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecbc2488626cb2a885d0f207efe8491f1bcc3ba2ce4b497150a88925f0930462 +size 88476 diff --git a/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_model.json b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..02d42a889801ca3456adea888d24f9be3ef4a7ab --- /dev/null +++ b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:670df029231f53c9d449e2dab4d939e467cf3c68698fe3805bed80da7a6ce810 +size 119110 diff --git a/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_origin.pdf b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..01884cbd10afe08008191a2c500868d7c6399f34 --- /dev/null +++ b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/3f4f27a6-0bf6-4821-b4d9-5b5287d3f5c3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecdce19a11a63629531a50ec0ba0c8ecd9b51ee2b7295d7f5e03a12e506b55e5 +size 4334832 diff --git a/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/full.md b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7378217edeb5bde786953790d4caf63ed9ba4920 
--- /dev/null +++ b/CVPR/2025/3DEnhancer_ Consistent Multi-View Diffusion for 3D Enhancement/full.md @@ -0,0 +1,403 @@
# 3DENHANCER: Consistent Multi-View Diffusion for 3D Enhancement

Yihang Luo Shangchen Zhou† Yushi Lan Xingang Pan Chen Change Loy† S-Lab, Nanyang Technological University https://yihangluo.com/projects/3DEnhancer

![](images/45d58563bc7bb49b5acc24162afe9c02da505aeebce03e7373ba33076a62d5cc.jpg)
Zero-1-to-3

![](images/99fb7e742bd713c1eee785072df9a8408acb417a291edac45e6ad7d0158c5876.jpg)
"A painting of sunflowers"

![](images/002d29f895dd1e160d31e8cb7435eaf6f8d2c11b4696b5ee20358b69b567fedb.jpg)
SyncDreamer

![](images/4a7e7066d7b318d76ab9c485ce3de42a4252505a7326cfce01a6e76a490a9119.jpg)
"Cleanrot armor"

![](images/4853447f196c8e0b9575638d0481c1ee4aabb2ea85f6ba1566ecc5b5e6535b5f.jpg)
MVDream

![](images/11fb2ad045442085935f9db7dbdcab8c34d8b7bd53a34c561c9a1f7da029392c.jpg)
"A bulldog with a pirate hat"

![](images/c0cbd23b19360796de6b5f3fcd023dd50633f609b43751d80f235a7a420fddf2.jpg)
Era3D

![](images/18a17d70486d0175fc1f5b6401cdb2c6e670cc2bf0aeaf7151ab66412baedc67.jpg)
"A stone"

![](images/57dde6a4ecad7a4a8d5e7b4a5941578794da575e45454e4fbb9b2478f7545372.jpg)
Coarse MV Input (w/ masks)

![](images/88a9a90722f9ba7b7449930c3bec30686dab77870595678fe17763025d137407.jpg)

![](images/bd05b499fa08fdf7d8e38af7ab700d7fe37ed4270af225eb1a9a339cf3beba5a.jpg)
Coarse MV Input (w/ masks)

![](images/c4871617bd1cf4effc0a1006e7c3f8fb4b8b0c5ce780ecb2213f7b23c42a03c2.jpg)
MV Inpainting
![](images/16b2ebb47ce540ea24fc22d4f08a5dadb59d6313203595bec08342eec75ee561.jpg)
Coarse MV Input

![](images/a5be1b349d216e742eb0c1272462e9bde626c18822c216801ccd8accf51672b9.jpg)
MV Inpainting
Noise Level $\delta$
$\delta = 0$
Figure 1. Our proposed 3DEnhancer showcases excellent capabilities in enhancing multi-view images generated by various models. As shown in (a), it can significantly improve texture details, correct texture errors, and enhance consistency across views. Beyond enhancement, as illustrated in (b), 3DEnhancer also supports texture-level editing, including regional inpainting, and adjusting texture enhancement strength via noise level control. (Zoom-in for best view)

![](images/ab1400652b9c645387675eed83dabd9a60aa82d30bed3411348a045bca77ec5f.jpg)
$\delta = 100$

![](images/6470dae824f9eeef589a8b5e7e999cecbe98c25835068e0f434df2d40ad4f5c2.jpg)
$\delta = 200$

# Abstract

Despite advances in neural rendering, due to the scarcity of high-quality 3D datasets and the inherent limitations of multi-view diffusion models, view synthesis and 3D model generation are restricted to low resolutions with suboptimal multi-view consistency. In this study, we present a novel 3D enhancement pipeline, dubbed 3DENHANCER, which employs a multi-view latent diffusion model to enhance coarse 3D inputs while preserving multi-view consistency. Our method includes a pose-aware encoder and a diffusion-based denoiser to refine low-quality multi-view images, along with data augmentation and a multi-view attention module with epipolar aggregation to maintain consistent, high-quality 3D outputs across views. Unlike existing video-based approaches, our model supports seamless multi-view enhancement with improved coherence across diverse viewing angles. Extensive evaluations show that 3DENHANCER significantly outperforms existing methods, boosting both multi-view enhancement and per-instance 3D optimization tasks.

# 1. Introduction

The advancements in generative models [19, 24] and differentiable rendering [44] have paved the way for a new research field known as neural rendering [60]. In addition to pushing the boundaries of view synthesis [32], the generation and editing of 3D models [13, 25, 30, 34-36, 39, 54, 81, 84] have become achievable. These methods are trained on large-scale 3D datasets, e.g., the Objaverse dataset [14], enabling fast and diverse 3D synthesis.

Despite these advances, several challenges remain in 3D generation. A key limitation is the scarcity of high-quality 3D datasets; unlike the billions of high-resolution image and video datasets available [50], current 3D datasets [15] are limited to a much smaller scale [47]. Another limitation is the dependency on multi-view (MV) diffusion models [53, 54]. Most state-of-the-art 3D generative models [58, 71] follow a two-stage pipeline: first, generating multi-view images conditioned on images or text [54, 65], and then reconstructing 3D models from these generated views [27, 58]. Consequently, the low-quality results and view inconsistency issues of multi-view diffusion models [54] restrict the quality of the final 3D output. Besides, existing novel view synthesis methods [32, 44] usually require dense, high-resolution input views for optimization, making 3D content creation challenging when only low-resolution sparse captures are available.

In this study, we address these challenges by introducing a versatile 3D enhancement framework, dubbed 3DENHANCER, which leverages a text-to-image diffusion model as the 2D generative prior to enhance general coarse 3D inputs. The core of our proposed method is a multi-view latent diffusion model (LDM) [48] designed to enhance coarse 3D inputs while ensuring multi-view consistency.
Specifically, the framework consists of a pose-aware image encoder that encodes low-quality multi-view renderings into latent space and a multi-view-based diffusion denoiser that refines the latent features with view-consistent blocks. The enhanced views are then either used as input for multi-view reconstruction or directly serve as reconstruction targets for optimizing the coarse 3D inputs.

To achieve practical results, we introduce diverse degradation augmentations [69] to the input multi-view images, simulating the distribution of coarse 3D data. In addition, we incorporate efficient multi-view row attention [28, 36] to ensure consistency across multi-view features. To further reinforce coherent 3D textures and structures under significant viewpoint changes, we also introduce near-view epipolar aggregation modules, which directly propagate corresponding tokens across near views using epipolar-constrained feature matching [11, 18]. These carefully designed strategies effectively contribute to achieving high-quality, consistent multi-view enhancement.

The most relevant works to our study are 3D enhancement approaches using video diffusion models [52, 74].

While video super-resolution (SR) models [76] can also be adapted for 3D enhancement, several challenges make them less suitable for use as generic 3D enhancers. First, these methods are limited to enhancing 3D model reconstructions through per-instance optimization, whereas our approach can seamlessly enhance 3D outputs by integrating multi-view enhancement into the existing two-stage 3D generation frameworks (e.g., from "MVDream [54] $\rightarrow$ LGM [58]" to "MVDream $\rightarrow$ 3DENHANCER $\rightarrow$ LGM"). Second, video models often struggle with long-term consistency and fail to correct generation artifacts in 3D objects under significant viewpoint variations.
Besides, video diffusion models based on temporal attention [3] face limitations in handling long videos due to memory and speed constraints. In contrast, our multi-view enhancer models texture correspondences across various views both implicitly and explicitly, by utilizing multi-view row attention and near-view epipolar aggregation, leading to superior view consistency and higher efficiency.

In summary, we present 3DENHANCER, a novel framework for generic 3D enhancement using multi-view denoising diffusion. Our contributions include a robust data augmentation pipeline and hybrid view-consistent blocks that integrate multi-view row attention and near-view epipolar aggregation modules to promote view consistency. Compared to existing enhancement methods, our multi-view 3D enhancement framework is more versatile and supports texture refinement. We conduct extensive experiments on both multi-view enhancement and per-instance optimization tasks to evaluate the model's components. Our proposed pipeline significantly improves the quality of coarse 3D objects and consistently surpasses existing alternatives.

# 2. Related Work

3D Generation with Multi-view Diffusion. The success of 2D diffusion models [24, 57] has inspired their application to 3D generation. Score distillation sampling (SDS) [46, 70] distills 3D from a 2D diffusion model but faces challenges like expensive optimization, mode collapse, and the Janus problem. More recent methods propose learning 3D via a two-stage pipeline: multi-view image generation [43, 53, 54, 71] and feed-forward 3D reconstruction [27, 58, 75]. Though yielding promising results, their performance is bounded by the quality of the multi-view generative models, including violations of strict view consistency [39] and the failure to scale up to higher resolution [53]. Recent work has focused on developing more 3D-aware attention operations, such as epipolar attention [28, 61] and row-wise attention [36].
However, we find that enforcing strict view consistency remains challenging when relying solely on attention-based operations. + +Image and Video Super-Resolution. Image and video SR aim to improve visual quality by upscaling low-resolution content to high resolution. Research in this field has evolved from focusing on pre-defined single degradations [5, 6, 9, + +12, 37, 38, 66, 67, 86, 88] (e.g., bicubic downsampling) to addressing unknown and complex degradations [7, 69, 82] in real-world scenarios. To tackle real-world enhancement, some studies [7, 69, 82, 90] introduce effective degradation pipelines that simulate diverse degradations for data augmentation during training, significantly boosting performance in handling real-world cases. To achieve photorealistic enhancement, recent work has integrated various generative priors to produce detailed textures, including StyleGAN [4, 68, 79], codebook [8, 89], and the latest diffusion models [64, 91]. For instance, StableSR [64] leverages the pretrained image diffusion model, i.e., Stable Diffusion (SD) [48], for image enhancement, while Upscale-A-Video [91] further extends the diffusion model for video upscaling. Video SR networks commonly employ recurrent frame fusion [67, 87], optical flow-guided propagation [5-7, 38] or temporal attention [91] to enhance temporal consistency across adjacent frames. However, due to large spatial misalignments from viewpoint changes, these methods face challenges in establishing long-range correspondences across multi-view images, making them unsuitable for multi-view fusion for 3D. In this study, we focus on exploiting a image diffusion model to achieve robust 3D enhancement while preserving view consistency. + +3D Texture Enhancement. With the rapid advancement of 3D generative models [2, 13, 34, 35, 84], attention is paid to further improve 3D generation quality through a cascade 3D enhancement module. 
Meta 3D Gen [1, 2] proposes a UV space enhancement model to achieve sharper textures. However, training the UV-specific enhancement model requires spatially continuous UV maps, which are limited in both quantities [14] and qualities [29]. Intex [59] and SyncMVD [41] also employ UV space for generating and enhancing 3D textures. However, these techniques are specifically designed for 3D mesh with UV coordinates, making them unsuitable for other 3D representations like 3DGS [32]. Unique3D [72] and CLAY [84] apply 2D enhancement module RealESRGAN [69] directly to the generated multi-view outputs. Though straightforward, this approach risks compromising 3D consistency across the multi-view results. MagicBoost [78] introduces a 3D refinement pipeline but relies on computationally expensive SDS optimization. Deceptive-NeRF/3DGS [40] uses an image diffusion model to generate high-quality pseudoobservations for novel views but requires a few accurately captured sparse views as key inputs. SuperGaussian [52] and 3DGS-Enhancer [74] propose to enhance 3D through 2D video generative priors [3, 76]. These pre-trained video models struggle to maintain long-range consistency under large viewpoint variations, making them less effective at fixing texture errors in multi-view generation. + +# 3. Methodology + +A common pipeline in current 3D generation involves an image-to-multiview stage [65], followed by multiview-to + +3D [58] generation that converts these multi-view images into a 3D object. However, due to limitations in resolution and view consistency [39], the resulting 3D outputs often lack high-quality textures and detailed geometry. The proposed multi-view enhancement network, 3DENHANCER, aims at improving the quality of 3D representations. Our motivation is that if we can obtain high-quality and view-consistent multi-view images, then the quality of 3D generation can be correspondingly enhanced. + +As illustrated in Fig. 
2, our framework employs a Diffusion Transformer (DiT) based LDM [10, 45] as the backbone. We incorporate a pose-aware encoder and view-consistent DiT blocks to ensure multi-view consistency, allowing us to leverage the powerful multi-view diffusion models to enhance both coarse multi-view images and 3D models. The enhanced multi-view images can improve the performance of pre-trained feed-forward 3D reconstruction models, e.g., LGM [58], as well as optimize a coarse 3D model through iterative updates. + +Preliminary: Multi-view Diffusion Models. LDM [24, 48, 62] is designed to acquire a prior distribution $p_{\theta}(\mathbf{z})$ within the perceptual latent space, whose training data is the latent obtained from the trained VAE encoder $\mathcal{E}_{\phi}$ . By training to predict a denoised variant of the noisy input $\mathbf{z}_t = \alpha_t \mathbf{z} + \sigma_t \boldsymbol{\epsilon}$ at each diffusion step $t$ , $\boldsymbol{\epsilon}_{\Theta}$ gradually learns to denoise from a standard Normal prior $\mathcal{N}(\mathbf{0}, \mathbf{I})$ by solving a reverse SDE [24]. + +Similarly, multi-view diffusion generation models [54, 77] consider the joint distribution of multi-view images $\mathcal{X} = \{\mathbf{x}_1,\dots ,\mathbf{x}_N\}$ , where each set of $\mathcal{X}$ contains RGB renderings $\mathbf{x_v}\in \mathbb{R}^{H\times W\times 3}$ from the same 3D asset given viewpoints $\mathcal{C} = \{\pi_1,\ldots ,\pi_N\}$ . The latent diffusion process is identical to diffusing each encoded latent $\mathbf{z} = \mathcal{E}_{\phi}(\mathbf{x})$ independently with the shared noise schedule: $\mathcal{Z}_t = \{\alpha_t\mathbf{z} + \sigma_t\epsilon \mid \mathbf{z}\in \mathcal{Z}\}$ . 
Formally, given the multi-view data $\mathcal{D}_{mv}:= \{\mathcal{X},\mathcal{C},y\}$ , the corresponding diffusion loss is defined as: + +$$ +\mathcal {L} _ {M V} (\theta , \mathcal {D} _ {m v}) = \mathbb {E} _ {\mathcal {Z}, y, \pi , t, \epsilon} \left[ \| \epsilon - \epsilon_ {\Theta} (\mathcal {Z} _ {t}; y, \pi , t) \| _ {2} ^ {2} \right], \tag {1} +$$ + +where $y$ is the optional text or image condition. + +# 3.1. Pose-aware Encoder + +Given the posed multi-view images $\mathcal{X}$ , we add controllable noise to the images as an augmentation to enable controllable refinement, as described later in Sec. 3.3. To further inject camera condition for each view $v$ , we follow the prior work [35, 55, 58, 77], and concatenate Plücker coordinates $\mathbf{r}_{\mathbf{v}}^{i} = (\mathbf{d}^{i}, \mathbf{o}^{i} \times \mathbf{d}^{i}) \in \mathbb{R}^{6}$ with image RGB values $\mathbf{x}_{\mathbf{v}}^{i} \in \mathbb{R}^{3}$ along the channel dimension. Here, $\mathbf{o}^{i}$ and $\mathbf{d}^{i}$ are the ray origin and ray direction for pixel $i$ from view $\mathbf{v}$ , and $\times$ denotes the cross product. We then send the concatenated results to a trainable pose-aware multi-view encoder $\mathcal{E}_{\psi}$ , whose outputs are injected into the pre-trained DiT through a learnable copy [83]. + +![](images/9b4b44050c6d923e4298c15548ae8784afa681b8cae56918529ee3bcb8765a6b.jpg) + +![](images/9763a31509ccdceb5f64458147078829bc837f6b5f7b2e9b424b35585a7728c7.jpg) +Figure 2. An overview of 3DENHANCER. By harnessing generative priors, 3DENHANCER adapts a text-to-image diffusion model to a multi-view framework aimed at 3D enhancement. It is compatible with multi-view images generated by models like MVDream [54] or those rendered from coarse 3D representations like NeRFs [44] and 3DGS [32]. 
Given LQ multi-view images along with their corresponding camera poses, 3DENHANCER aggregates multi-view information within a DiT [45] framework using row attention and epipolar aggregation modules, improving visual quality while preserving consistency across views. Furthermore, the model supports texture-level editing via text prompts and adjustable noise levels, allowing users to correct texture errors and control the enhancement strength.

![](images/061419b69f6c20b745a906b76f288a18ceae032a519149d9d1d40094a333bef7.jpg)

# 3.2. View-Consistent DiT Block

The main challenge of 3D enhancement is achieving precise view consistency across generated 2D multi-view images. Multi-view diffusion methods commonly rely on multi-view attention layers to exchange information across different views, aiming to generate multi-view-consistent images. A prevalent approach is extending self-attention to all views, known as dense multi-view attention [43, 54]. While effective, this method significantly raises both computational demands and memory requirements. To further enhance the effectiveness and efficiency of inter-view aggregation, we introduce two efficient modules in the DiT blocks: multi-view row attention and near-view epipolar aggregation, as shown in Fig. 2.

Multi-view Row Attention. To enhance the noisy input views to higher resolution, e.g., $512 \times 512$, efficient multi-view attention is required to facilitate cross-view information fusion. Considering the epipolar constraints [21], the 3D correspondences across views always lie on the epipolar line [28, 61]. Since our diffusion denoising is performed on $16 \times$ downsampled features [10], and typical multi-view settings often involve elevation angles around $0^{\circ}$, we assume that horizontal rows approximate the epipolar line. Therefore, we adopt a specialized form of epipolar attention, namely multi-view row attention [36], enabling efficient information exchange among multi-view features.
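Under the row approximation of the epipolar line, multi-view row attention amounts to self-attention over the tokens that share a row index across all views. The NumPy sketch below is illustrative only; the function name, shapes, and the omission of learned query/key/value projections are our assumptions, not the paper's implementation:

```python
import numpy as np

def multi_view_row_attention(feats):
    """Self-attention restricted to same-row tokens across all views.

    feats: (V, H, W, C) multi-view feature maps. For each row y, the V*W
    tokens with row index y attend to one another, approximating the
    epipolar line Y = v for roughly horizontal, gravity-aligned cameras.
    """
    V, H, W, C = feats.shape
    out = np.empty_like(feats)
    for y in range(H):
        row = feats[:, y].reshape(V * W, C)           # row-y tokens of all views
        scores = row @ row.T / np.sqrt(C)             # scaled dot-product
        scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
        out[:, y] = (attn @ row).reshape(V, W, C)
    return out
```

Compared with dense multi-view attention, where each attention matrix covers all $V \cdot H \cdot W$ tokens, each matrix here covers only $V \cdot W$ tokens, which is where the memory savings come from.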
Specifically, the input cameras are chosen to look at the object with their $Y$ axes aligned with the gravity direction and their viewing directions approximately horizontal (i.e., the pitch angle is generally level, with no significant deviation). This case is visualized in Fig. 2: for a coordinate $(u,v)$ in the attention feature space of one view, the corresponding epipolar line in the attention feature space of other views can be approximated as $Y = v$. This enables the extension of self-attention layers calculated on tokens within the same row across multiple views to learn 3D correspondences. As ablated in Tab. 4, the multi-view row attention can efficiently encourage view consistency with minor memory consumption.

Near-view Epipolar Aggregation. Though multi-view attention can effectively facilitate view consistency, we observe that the attention-only operation still struggles with accurate correspondences across views. To address this issue, we incorporate explicit feature aggregation among neighboring views to ensure multi-view consistency. Specifically, given the output features $\{\mathbf{f}_{\mathbf{v}}\}_{\mathbf{v} = 1}^{N}$ from the multi-view row attention layers for each posed multi-view input, we propagate features by finding near-view correspondences with epipolar line constraints. Formally, for the feature map $\mathbf{f}_{\mathbf{v}}$ corresponding to the posed image $\mathbf{x}_{\mathbf{v}}$, we calculate its correspondence map $M_{\mathbf{v},\mathbf{k}}$ with the near views $\mathbf{k}$ as follows:

$$
M_{\mathbf{v},\mathbf{k}}[i] = \underset{j,\; j^{\top} F i = 0}{\arg\min}\, D\left(\mathbf{f}_{\mathbf{v}}[i], \mathbf{f}_{\mathbf{k}}[j]\right), \tag{2}
$$

where $D$ denotes the cosine distance, and $\mathbf{k} \in \{\mathbf{v} - 1, \mathbf{v} + 1\}$ represents the two nearest neighbor views of the given pose.
Here, $i$ and $j$ are indices of the spatial locations in the feature maps, $F$ is the fundamental matrix relating the two views $\mathbf{v}$ and $\mathbf{k}$, and the index $j$ lies on the epipolar line in the view $\mathbf{k}$, subject to the constraint $j^{\top}Fi = 0$. We then obtain the aggregated feature map $\widetilde{\mathbf{f}}_{\mathbf{v}}$ of the view $\mathbf{v}$ by linearly combining features of correspondences from the two nearest views via:

$$
\widetilde{\mathbf{f}}_{\mathbf{v}}[i] = w \cdot \mathbf{f}_{\mathbf{v}-1}\left[M_{\mathbf{v},\mathbf{v}-1}[i]\right] + (1 - w) \cdot \mathbf{f}_{\mathbf{v}+1}\left[M_{\mathbf{v},\mathbf{v}+1}[i]\right], \tag{3}
$$

where $w$ represents the weight to combine the features of the two nearest views. The calculation of $w$ uses a hybrid fusion strategy, which ensures that the weight assignment accounts for both the physical camera distance and the token feature similarity (see the Appendix Sec. A.3). As the feature aggregation process is non-differentiable, we adopt the straight-through estimator $\mathrm{sg}[\cdot]$ of VQVAE [63] to facilitate gradient back-propagation in the token space. Near-view epipolar aggregation explicitly propagates tokens from neighboring views, which greatly improves view consistency. However, due to substantial view changes, the corresponding tokens may not be available, leading to unexpected artifacts during token replacement. To address this, we fuse the feature $\mathbf{f}_{\mathbf{v}}$ of the current view with the feature $\widetilde{\mathbf{f}}_{\mathbf{v}}$ from near-view epipolar aggregation by averaging them with equal weight (0.5 each). This effectively combines multi-view row attention and near-view epipolar aggregation, thereby enhancing view consistency both implicitly and explicitly.

This approach is similar to token-space editing methods like TokenFlow [18] and DGE [11].
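To make Eqs. (2)-(3) concrete, the sketch below aggregates one view from its two neighbors. As a simplification, the epipolar search of Eq. (2) is restricted to the same feature row (the row approximation already used for row attention), and the hybrid fusion weight of Appendix Sec. A.3 is replaced by a fixed scalar `w`; all names and shapes here are illustrative, not the paper's implementation:

```python
import numpy as np

def near_view_aggregate(f_prev, f_v, f_next, w=0.5):
    """Eq. (2): per-token nearest neighbor (cosine distance) on the matching
    row of each adjacent view; Eq. (3): blend the two matches with weight w,
    then average with the current view's feature (the 0.5 fusion).

    f_prev, f_v, f_next: (H, W, C) feature maps of views v-1, v, v+1.
    """
    def match(f_src, f_dst):
        # cosine similarity between every same-row token pair
        a = f_src / np.linalg.norm(f_src, axis=-1, keepdims=True)
        b = f_dst / np.linalg.norm(f_dst, axis=-1, keepdims=True)
        sim = np.einsum('hwc,hxc->hwx', a, b)   # (H, W, W)
        idx = sim.argmax(axis=-1)               # = argmin of cosine distance
        return np.take_along_axis(f_dst, idx[..., None], axis=1)

    f_tilde = w * match(f_v, f_prev) + (1.0 - w) * match(f_v, f_next)
    return 0.5 * (f_v + f_tilde)                # fuse implicit + explicit paths
```

With identical neighboring views, each token's best match is itself and the aggregation leaves the features unchanged; the interesting behavior appears when the neighbors differ.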
However, we propose a trainable version that considers both geometric and feature similarity for effective feature fusion.

# 3.3. Multi-view Data Augmentation

Our goal is to train a versatile and robust enhancement model that performs well on low-quality multi-view images from diverse data sources, such as those generated by image-to-3D models or rendered from coarse 3D representations. To achieve this, we carefully design a comprehensive data augmentation pipeline to expand the distribution of distortions in our base training data, bridging the domain gap between training and inference.

Texture Distortion. To emulate the low-quality textures and local inconsistencies found in synthesized multi-view images, we employ a texture degradation pipeline commonly used in 2D enhancement [69, 91]. This pipeline randomly applies downsampling, blurring, noise, and JPEG compression to degrade the image quality.

Texture Deformation and Camera Jitter. As in LGM [58], we introduce grid distortion to simulate texture inconsistencies in multi-view images and apply camera jitter augmentation to introduce variations in the conditional camera poses of multi-view inputs.

Color Shift. We also observe color variations in corresponding regions between multi-view images generated by image-to-3D models. By randomly applying color changes to some image patches, we encourage the model to produce results with consistent colors. In addition, renderings from a coarse 3DGS sometimes result in a grayish overlay or ghosting artifacts, akin to a translucent mask. To simulate this effect, we randomly apply a semi-transparent object mask to the image, allowing the model to learn to remove the overlay and improve 3D visual quality.

Noise-level Control. To control the enhancement strength, we apply noise augmentation by adding controllable noise to the input multi-view images. This noise augmentation process is similar to the forward diffusion process in diffusion models.
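As an illustration of the augmentation steps above, a toy degradation pipeline might look as follows. All operators and parameters here are simplified stand-ins for the paper's actual pipeline (which, e.g., also uses JPEG compression, grid distortion, and camera jitter); `noise_level` plays the role of the controllable noise strength:

```python
import numpy as np

def degrade(img, noise_level=0.1, rng=None):
    """Toy LQ-image synthesis: blur -> 2x down/up-sampling -> patch color
    shift -> diffusion-style controllable noise. `img` is float in [0, 1]
    with shape (H, W, 3) and even H, W; all parameters are illustrative."""
    rng = rng or np.random.default_rng(0)
    x = img.astype(np.float32)

    # 3x3 box blur via shifted averages (emulates low-quality textures)
    shifts = [np.roll(np.roll(x, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    x = np.mean(shifts, axis=0)

    # 2x downsample, then nearest-neighbour upsample (resolution loss)
    x = x[::2, ::2].repeat(2, axis=0).repeat(2, axis=1)

    # random per-channel gain on one patch (simulates cross-view color shift)
    h, w = x.shape[:2]
    y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    x[y0:y0 + h // 2, x0:x0 + w // 2] *= rng.uniform(0.8, 1.2, size=3)

    # controllable noise, analogous to a forward diffusion step
    eps = rng.standard_normal(x.shape).astype(np.float32)
    x = np.sqrt(1.0 - noise_level) * x + np.sqrt(noise_level) * eps
    return np.clip(x, 0.0, 1.0)
```

Larger `noise_level` values inject more noise for the model to remove, mirroring how the noise-level control exposes a trade-off between detail generation and input fidelity.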
This approach can further enhance the model's robustness in handling unseen artifacts [91].

# 3.4. Inference for 3D Enhancement

We present two ways to utilize our 3DENHANCER for 3D enhancement:

- The proposed method can be directly applied to the generation results of existing multi-view diffusion models [26, 39, 42, 54], and the enhanced output then serves as the input to multi-view 3D reconstruction models [22, 58, 71, 80]. Given the enhanced multi-view inputs with sharper textures and view-consistent geometry, our method directly improves the quality of existing multi-view-to-3D reconstruction frameworks.
- Our method can also directly enhance a coarse 3D model through iterative optimization. Specifically, given an initial coarse 3D reconstruction $\mathcal{M}$ and a set of viewpoints $\{\pi_{\mathbf{v}}\}_{\mathbf{v} = 1}^{N}$, we first render the corresponding views $\mathcal{X} = \{\mathbf{x_v}\}_{\mathbf{v} = 1}^N$, where $\mathbf{x_v} = \operatorname{Rend}(\mathcal{M}, \pi_{\mathbf{v}})$ is obtained with the corresponding rendering techniques [32, 44]. Let $\mathcal{X}' = \{\mathbf{x_v}'\}_{\mathbf{v} = 1}^N$ be the enhanced multi-view images; we then update the 3D model $\mathcal{M}$ by supervising it with $\mathcal{X}'$ as

$$
\mathcal {M} ^ {\prime} = \underset {\mathcal {M}} {\arg \min } \sum_ {\mathbf {v} = 1} ^ {N} \mathcal {L} \left(\mathbf {x} _ {\mathbf {v}} ^ {\prime}, \operatorname {Rend} \left(\mathcal {M}, \pi_ {\mathbf {v}}\right)\right). \tag {4}
$$

Following previous methods that reconstruct 3D from synthesized 2D images [17, 73], we use a mixture of $\mathcal{L}_1$ and $\mathcal{L}_{\mathrm{LPIPS}}$ [85] for robust optimization. In practice, unlike iterative dataset updates (IDU) [20], we find that inferring the enhanced views $\mathcal{X}'$ once already yields high-quality results. More implementation details and results for this part are provided in the Appendix Sec. D.2.

# 4. Experiments

# 4.1. Datasets and Implementation

Datasets. For training, we use the Objaverse dataset [14], specifically leveraging G-buffer Objaverse [47], which provides diverse renderings of Objaverse instances. We construct LQ-HQ view pairs following the augmentation pipeline outlined in Sec. 3.3 and then split the dataset into separate training and test sets. Overall, approximately $400\mathrm{K}$ objects are used for training. For each object, we randomly sample 4 input views with azimuth angles ranging from $0^{\circ}$ to $360^{\circ}$ and elevation angles between $-5^{\circ}$ and $30^{\circ}$.

For evaluation, we use a test set containing 500 objects from different categories within our synthesized Objaverse datasets. We further evaluate our model on a zero-shot in-the-wild dataset by selecting images from the GSO dataset [16], image diffusion model outputs [48], and web-sourced content. These images are then processed using several novel view synthesis methods [36, 39, 42, 54] to create our in-the-wild test set, containing a total of 400 instances.

![](images/87f6b1dcef9307e39d493c4f9f08cee227edd616ae2b482f5b0f9b80e00ac69f.jpg)
Input

![](images/e29f8fe6d76ccdb920d5659a312cda2aa7a311118e04f5c8ac7f6a33de8f9585.jpg)
GT

![](images/ce26c25e3f1edaf01fa5338627dead4791268dba4a42fccb0a9cc3d4f2285496.jpg)
RealESRGAN

![](images/cdd2c4dc12ee39243f04d900116bb2fae2dec7d6c615727ef87c755285a6593c.jpg)
StableSR

![](images/636cb025c860f437d8366bddb451a9bf00cbd3b2291b53c25b2d42f57fdd0bd2.jpg)
RealBasicVSR

![](images/4d4d0947fd26ef02fd988708df8191d52ed6cc533c64ffc94c1f7078b71752d1.jpg)
Upscale-A-Video

![](images/1e1dc345e3b4dece262bffb0d8c703eea849151b1458a37eaf51cdb46709bd8e.jpg)
Ours
Figure 3. Qualitative comparisons of enhancing multi-view synthesis on the Objaverse synthetic dataset. As can be seen, only 3DENHANCER can correct flawed textures and recover missing textures with view consistency.

Implementation Details.
We employ PixArt- $\Sigma$ [10], an efficient DiT model, as our backbone. Our model is trained on images with a resolution of $512 \times 512$. The AdamW optimizer [33] is used with a fixed learning rate of 2e-5. Training is conducted over 10 days on 8 Nvidia A100-80G GPUs with a batch size of 256. For inference, we employ a DDIM sampler [56] with 20 steps and set the Classifier-Free Guidance (CFG) [23] scale to 4.5.

Baselines. To assess the effectiveness of our approach, we adopt two image enhancement models, Real-ESRGAN [69] and StableSR [64], along with two video enhancement models, RealBasicVSR [7] and Upscale-A-Video [91], as our baselines. For a fair comparison, we further fine-tune all these methods on the Objaverse dataset to minimize potential data domain discrepancies. During inference, since Real-ESRGAN, RealBasicVSR, and Upscale-A-Video by default produce images upscaled by a factor of $\times 4$, we resize their outputs to a uniform resolution of $512 \times 512$ for comparison.

Metrics. We evaluate the effectiveness of our methods on

Table 1. Quantitative comparisons of enhancing multi-view synthesis on the Objaverse synthetic dataset. The best and second-best results are marked in red and blue, respectively.
| Method | PSNR↑ | SSIM↑ | LPIPS↓ |
| --- | --- | --- | --- |
| Input | 26.15 | 0.9056 | 0.1257 |
| Real-ESRGAN [69] | 26.02 | 0.9185 | 0.0877 |
| StableSR [64] | 25.12 | 0.8914 | 0.1130 |
| RealBasicVSR [7] | 26.21 | 0.9212 | 0.0888 |
| Upscale-A-Video [91] | 25.57 | 0.8937 | 0.1153 |
| 3DEnhancer (Ours) | 27.53 | 0.9265 | 0.0626 |
+ +two tasks: multi-view synthesis enhancement and 3D reconstruction improvement. We employ standard metrics including PSNR, SSIM, and LPIPS [85] on our synthetic dataset, along with non-reference metrics including FID [51], Inception Score [49], and MUSIQ [31] on the in-the-wild dataset. For FID computation, we use the rendered images from Objaverse to represent the real distribution. + +# 4.2. Comparisons + +Enhancing Multi-view Synthesis. The output images from multi-view synthesis models often lack texture details or exhibit inconsistencies across views, as shown in Fig. 1. To demonstrate that 3DENHANCER can correct flawed textures and recover missing textures, we provide quantitative results on both the Objaverse synthetic dataset and the inthe-wild dataset in Tab. 1 and Tab. 2, respectively. Qualitative comparisons on both test sets are presented in Fig. 3 and Fig. 4. As can be seen, our method outperforms oth- + +![](images/57e6eadaac57439d88e9ac0d46ff7e3455f655f82cf83a224156c66d76083d74.jpg) + +![](images/b08ab8dd29355edce1b67808e5083da8b0caac09940d64846d53d5daeb6a72eb.jpg) + +![](images/88ab8d3c2177ccd575694f97ec24e999f467da7914bb8a7b7c1ed5e24b30b153.jpg) + +![](images/2297bc66cb21eccc6d276713ede5016bf346ebf459568fd2b4ba3d3601ec70c5.jpg) + +![](images/a772aa8ba9d9b4e36535bf4c3f50edb043adbbe709e2c4e73c0d7926b4608ae1.jpg) + +![](images/db8f915a56bfae6926da485c1f0a1844e850334c17f7d14778825eb8275273cc.jpg) + +![](images/9dcb64c0b65170588948241c9d46cf650e23a65b669228e021c06e93995b953f.jpg) + +![](images/1796b4211e93f3251cdf1e6b2c58ebd5b95948de635a1e36a25555ceaaf90a2e.jpg) + +![](images/d9d19f4d03bcfc04f6eba2b3fc9e5ba009d2db6ad512eeb9f40941985977a6c2.jpg) +Input Image + +![](images/84be3cbe0006bdf54635ba9c90367899596b69b16d365615e071e7c036f6b432.jpg) +Generated MV + +![](images/63dd294d4e451db60b71da5ba859e26b2ce62c3e860828377239c820aed97c19.jpg) +RealBasicVSR + +![](images/36f69508270f792878f91908fb0c8e2781f66a8b1190429bdab8882b7b7c5ac7.jpg) +Upscale-A-Video 
Figure 4. Qualitative comparisons of enhancing multi-view synthesis with RealBasicVSR [7] and Upscale-A-Video [91] on the in-the-wild dataset. Upon visual inspection, 3DENHANCER yields sharp and consistent textures with intact semantics, such as the eyes of the girl.

![](images/bd2497c39bcee34a20bc15d8835a7c392cac1325c1132a5c49d031f52728403b.jpg)
3DEnhancer (Ours)

Table 2. Quantitative comparisons of enhancing multi-view synthesis on the in-the-wild dataset.
| Method | MUSIQ↑ | FID↓ | IS↑ |
| --- | --- | --- | --- |
| Generated MV | 52.77 | 112.12 | 7.68 ± 0.86 |
| Real-ESRGAN [69] | 72.47 | 114.25 | 7.31 ± 0.89 |
| StableSR [64] | 70.43 | 111.53 | 7.59 ± 0.97 |
| RealBasicVSR [7] | 74.07 | 128.30 | 7.09 ± 0.87 |
| Upscale-A-Video [91] | 71.73 | 114.81 | 7.75 ± 0.96 |
| 3DEnhancer (Ours) | 73.32 | 108.40 | 7.93 ± 1.11 |
ers across most metrics. While RealBasicVSR achieves a higher MUSIQ score on the in-the-wild dataset, it fails to generate visually plausible images, as shown in Fig. 4. The image enhancement models Real-ESRGAN and StableSR can recover textures to some extent in individual views, but they fail to maintain consistency across multiple views. Video enhancement models, such as RealBasicVSR and Upscale-A-Video, also fail to correct texture distortions effectively. For example, both models fail to generate smooth facial textures in the first example shown in Fig. 4. In contrast, our method generates more natural and consistent details across views.

Enhancing 3D Reconstruction. In this section, we present 3D reconstruction comparisons based on views rendered from the 3DGS generated by LGM [58]. Quantitative comparisons are shown in Tab. 3, where our method outperforms previous approaches. For qualitative evaluation, we visualize the results of two video enhancement models, RealBasicVSR and Upscale-A-Video. As shown in Fig. 5, these baselines suffer from a lack of

Table 3. Quantitative comparisons of enhancing 3D reconstruction on the in-the-wild dataset.
| Method | MUSIQ↑ | FID↓ | IS↑ |
| --- | --- | --- | --- |
| Input | 41.04 | 77.54 | 8.98 ± 0.65 |
| Real-ESRGAN [69] | 65.25 | 74.29 | 8.29 ± 0.35 |
| StableSR [64] | 65.71 | 72.98 | 9.51 ± 0.78 |
| RealBasicVSR [7] | 65.70 | 74.58 | 7.77 ± 0.48 |
| Upscale-A-Video [91] | 64.05 | 74.12 | 9.77 ± 0.79 |
| 3DEnhancer (Ours) | 66.04 | 71.78 | 9.96 ± 0.96 |
multi-view consistency, leading to artifacts such as the misaligned teeth in the first skull example and the ghosting around Mario's hand. In contrast, our model maintains consistency and produces high-quality texture details. We further demonstrate that our approach can optimize coarse differentiable representations. As shown in Fig. 6, our method is capable of refining low-resolution Gaussians [52]. More details and results on refining coarse Gaussians are provided in the Appendix Sec. D.2.

Table 4. Ablation study of cross-view modules.
| Exp. | Multi-view Attn. | Epipolar Agg. | PSNR↑ | SSIM↑ | LPIPS↓ |
| --- | --- | --- | --- | --- | --- |
| (a) | | | 25.11 | 0.9067 | 0.081 |
| (b) | | ✓ | 25.95 | 0.9147 | 0.072 |
| (c) | ✓ | | 26.92 | 0.9226 | 0.0642 |
| (d) | ✓ | ✓ | 27.53 | 0.9265 | 0.0626 |
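For reference, PSNR — the fidelity metric reported in Tab. 1 and Tab. 4 — is computed from the mean squared error between a prediction and its ground truth; a minimal NumPy version for images in $[0, 1]$:

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For instance, a uniform error of 0.1 on a unit-range image gives $10\log_{10}(1/0.01) = 20$ dB.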
# 4.3. Ablation Study

Effectiveness of Cross-View Modules. To evaluate the effectiveness of our proposed cross-view modules, we ablate two modules: multi-view row attention and near-view epipolar aggregation. As shown in Tab. 4, removing either module results in worse textures between views. The visual comparison in Fig. 7 also validates this observation. Without the multi-view row attention module, the model fails to produce smooth textures, as shown in Fig. 7 (b). Without the epipolar aggregation module, reduced texture consistency is observed, as depicted in Fig. 7 (c).

![](images/6a5e52e2c5bb01354e1af568e420906cc9a5d54d177c0ce0adbf7c117849acef.jpg)
Figure 5. Qualitative comparisons of enhancing 3D reconstruction given generated multi-view images on the in-the-wild dataset. Multi-view models produce low-quality, view-inconsistent outputs, leading to flawed 3D reconstructions. Existing methods fail to correct texture artifacts, while our method produces both geometrically accurate and visually appealing results.

![](images/5416fd55469cdf417e9087e02c9a6413634ab20aade0816f2ab59707fb688d51.jpg)
Figure 6. Low-resolution GS optimization with 3DENHANCER.

![](images/48f56b74ddd453c3b7f2094f1745703693f257b8e9d4c7f3a3c0faea48c1cc73.jpg)
Figure 7. Effectiveness of cross-view modules.

Besides, the epipolar constraint is essential for preventing the model from learning textures from incorrect regions in other views and contributes to the overall consistency. As demonstrated in Fig. 8, without the epipolar constraint, the texture of the top part of the flail is incorrectly aggregated from the grip in the other view, resulting in inconsistency across views.

![](images/0a2ad7dfb0ab438d65d48b81709321b30b64dbff4f9a1c0db7bcb45edc0eddf3.jpg)
Figure 8. Comparisons of enhancing multi-view images with and without epipolar aggregation. The red line denotes the epipolar line corresponding to the circled area, while the dotted arrow indicates the corresponding area from one view to another.

Effectiveness of Noise Level. As shown in Fig. 1, our model can generate diverse textures by adjusting noise levels. Low noise levels generally result in outputs with blurred details, while high noise levels produce sharper, more detailed textures. However, high noise levels may also reduce the fidelity of the input images.

# 5. Conclusion

In conclusion, this work presents a novel 3D enhancement framework that leverages a view-consistent latent diffusion model to improve the quality of coarse multi-view images. Our approach introduces a versatile pipeline combining data augmentation with multi-view attention and epipolar aggregation modules, which effectively enforce view consistency and refine textures across multi-view inputs. Extensive experiments and ablation studies demonstrate the superior performance of our method in achieving high-quality, consistent 3D content, significantly outperforming existing alternatives. This framework establishes a flexible and powerful solution for generic 3D enhancement, with broad applications in 3D content generation and editing.

Acknowledgement. This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also supported by Singapore MOE AcRF Tier 2 (MOE-T2EP20221-0011) and the National Research Foundation, Singapore, under its NRF Fellowship Award (NRF-NRFF16-2024-0003).

# References

[1] Raphael Bensadoun, Yanir Kleiman, Idan Azuri, Omri Harosh, Andrea Vedaldi, Natalia Neverova, and Oran Gafni. Meta 3D TextureGen: Fast and consistent texture generation for 3D objects. arXiv preprint arXiv:2407.02430.
3 +[2] Raphael Bensadoun, Tom Monnier, Yanir Kleiman, Filippos Kokkinos, Yawar Siddiqui, Mahendra Kariya, Omri Harosh, Roman Shapovalov, Benjamin Graham, Emilien Garreau, et al. Meta 3D Gen. arXiv preprint arXiv:2407.02599, 2024. 3 +[3] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable Video Diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2, 3 +[4] Kelvin C.K. Chan, Xintao Wang, Xiangyu Xu, Jinwei Gu, and Chen Change Loy. GLEAN: Generative latent bank for large-factor image super-resolution. In CVPR, 2021. 3 +[5] Kelvin C.K. Chan, Xintao Wang, Ke Yu, Chao Dong, and Chen Change Loy. BasicVSR: The search for essential components in video super-resolution and beyond. In CVPR, 2021. 2, 3 +[6] Kelvin CK Chan, Shangchen Zhou, Xiangyu Xu, and Chen Change Loy. Improving video super-resolution with enhanced propagation and alignment. In CVPR, 2022. 2 +[7] Kelvin CK Chan, Shangchen Zhou, Xiangyu Xu, and Chen Change Loy. Investigating tradeoffs in real-world video super-resolution. In CVPR, 2022. 3, 6, 7 +[8] Chaofeng Chen, Shangchen Zhou, Liang Liao, Haoning Wu, Wenxiu Sun, Qiong Yan, and Weisi Lin. Iterative token evaluation and refinement for real-world super-resolution. In ACM MM, 2023. 3 +[9] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In CVPR, 2021. 2 +[10] Junsong Chen, Chongjian Ge, Enze Xie, Yue Wu, Lewei Yao, Xiaozhe Ren, Zhongdao Wang, Ping Luo, Huchuan Lu, and Zhenguo Li. PixArt- $\Sigma$ : Weak-to-strong training of diffusion transformer for 4K text-to-image generation. arXiv preprint arXiv:2403.04692, 2024. 3, 4, 6 +[11] Minghao Chen, Iro Laina, and Andrea Vedaldi. DGE: Direct gaussian 3D editing by consistent multi-view editing. arXiv preprint arXiv:2404.18929, 2024. 
2, 5
[12] Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, and Chao Dong. Activating more pixels in image super-resolution transformer. In CVPR, 2023. 3
[13] Yongwei Chen, Yushi Lan, Shangchen Zhou, Tengfei Wang, and Xingang Pan. SAR3D: Autoregressive 3D object generation and understanding via multi-scale 3D VQVAE. arXiv preprint arXiv:2411.16856, 2024. 2, 3
[14] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3D objects. CVPR, 2023. 2, 3, 5
[15] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, Eli VanderBilt, Aniruddha Kembhavi, Carl Vondrick, Georgia Gkioxari, Kiana Ehsani, Ludwig Schmidt, and Ali Farhadi. Objaverse-XL: A universe of 10M+ 3D objects. In NeurIPS, 2024. 2
[16] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3D scanned household items. In ICRA, 2022. 6
[17] Ruiqi Gao*, Aleksander Holynski*, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul P. Srinivasan, Jonathan T. Barron, and Ben Poole*. CAT3D: Create anything in 3D with multi-view diffusion models. NeurIPS, 2024. 5
[18] Michal Geyer, Omer Bar-Tal, Shai Bagon, and Tali Dekel. TokenFlow: Consistent diffusion features for consistent video editing. In ICLR, 2024. 2, 5
[19] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. 2
[20] Ayaan Haque, Matthew Tancik, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-NeRF2NeRF: Editing 3D scenes with instructions. In CVPR, 2023. 5
[21] R. I. Hartley and A. Zisserman.
Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004. 4 +[22] Zexin He and Tengfei Wang. OpenLRM: Open-source large reconstruction models. https://github.com/3DTopia/OpenLRM, 2023.5 +[23] Jonathan Ho. Classifier-free diffusion guidance. In NeurIPS, 2021. 6 +[24] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. 2, 3 +[25] Lukas Hollein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, and Matthias Nießner. Viewdiff: 3d-consistent image generation with text-to-image models. In CVPR, 2024. 2 +[26] Fangzhou Hong, Zhaoxi Chen, Yushi Lan, Liang Pan, and Ziwei Liu. EVA3D: Compositional 3D human generation from 2d image collections. In ICLR, 2022. 5 +[27] Yicong Hong, Kai Zhang, Jiuming Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In ICLR, 2024. 2 +[28] Zehuan Huang, Hao Wen, Junting Dong, Yaohui Wang, Yangguang Li, Xinyuan Chen, Yan-Pei Cao, Ding Liang, Yu Qiao, Bo Dai, et al. EpiDiff: Enhancing multi-view synthesis via localized epipolar-constrained diffusion. In CVPR, 2024. 2, 4 +[29] Jpcy. Jpcy/xatlas: Mesh parameterization / uv unwrapping library. 3 + +[30] Heewoo Jun and Alex Nichol. Shap-E: Generating conditional 3D implicit functions. arXiv preprint arXiv:2305.02463, 2023. 2 +[31] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. MUSIQ: Multi-scale image quality transformer. In ICCV, 2021. 6 +[32] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3D gaussian splatting for real-time radiance field rendering. ACM TOG, 42(4):1-14, 2023. 2, 3, 4, 5 +[33] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6 +[34] Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. 
GaussianAnything: Interactive point cloud latent diffusion for 3D generation. 2, 3
[35] Yushi Lan, Fangzhou Hong, Shuai Yang, Shangchen Zhou, Xuyi Meng, Bo Dai, Xingang Pan, and Chen Change Loy. LN3Diff: Scalable latent neural fields diffusion for speedy 3D generation. In ECCV, 2024. 3
[36] Peng Li, Yuan Liu, Xiaoxiao Long, Feihu Zhang, Cheng Lin, Mengfei Li, Xingqun Qi, Shanghang Zhang, Wenhan Luo, Ping Tan, et al. Era3D: High-resolution multiview diffusion using efficient row-wise attention. NeurIPS, 2024. 2, 4, 6
[37] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. SwinIR: Image restoration using Swin Transformer. In ICCV, 2021. 3
[38] Jingyun Liang, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jiezhang Cao, Kai Zhang, Radu Timofte, and Luc Van Gool. Recurrent video restoration transformer with guided deformable attention. In NeurIPS, 2022. 3
[39] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In CVPR, 2023. 2, 3, 5, 6
[40] Xinhang Liu, Jiaben Chen, Shiu-hong Kao, Yu-Wing Tai, and Chi-Keung Tang. Deceptive-NeRF/3DGS: Diffusion-generated pseudo-observations for high-quality sparse-view reconstruction. In ECCV, 2024. 3
[41] Yuxin Liu, Minshan Xie, Hanyuan Liu, and Tien-Tsin Wong. Text-guided texturing by synchronized multi-view diffusion. arXiv preprint arXiv:2311.12891, 2023. 3
[42] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. In ICLR, 2024. 5, 6
[43] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3D: Single image to 3D using cross-domain diffusion. In CVPR, 2024. 2, 4
[44] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.
NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 2, 4, 5 +[45] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 3, 4 +[46] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. *ICLR*, 2022. 2 + +[47] Lingteng Qiu, Guanying Chen, Xiaodong Gu, Qi Zuo, Mutian Xu, Yushuang Wu, Weihao Yuan, Zilong Dong, Liefeng Bo, and Xiaoguang Han. Richdreamer: A generalizable normal-depth diffusion model for detail richness in text-to-3d. In CVPR, 2024. 2, 5 +[48] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 2, 3, 6 +[49] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NeurIPS, 2016. 6 +[50] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5B: An open large-scale dataset for training next generation image-text models. In NeurIPS, 2022. 2 +[51] Maximilian Seitzer. pytorch-fid: FID Score for PyTorch. https://github.com/mseitzer/pytorch-fid, 2020. Version 0.3.0.6 +[52] Yuan Shen, Duygu Ceylan, Paul Guerrero, Zexiang Xu, Niloy J. Mitra, Shenlong Wang, and Anna Fruhstuck. SuperGaussian: Repurposing video models for 3D super resolution. In ECCV, 2024. 2, 3, 7 +[53] Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023. 2 +[54] Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In ICLR, 2024. 
2, 3, 4, 5, 6 +[55] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. NeurIPS, 2021. 3 +[56] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In ICLR, 2021. 6 +[57] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. 2 +[58] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view gaussian model for high-resolution 3D content creation. In ECCV, 2024. 2, 3, 5, 7 +[59] Jiaxiang Tang, Ruijie Lu, Xiaokang Chen, Xiang Wen, Gang Zeng, and Ziwei Liu. Intex: Interactive text-to-texture synthesis via unified depth-aware inpainting. arXiv preprint arXiv:2403.11878, 2024. 3 +[60] Anju Tewari, Otto Fried, Justus Thies, Vincent Sitzmann, S. Lombardi, Z Xu, Tanaba Simon, Matthias Nießner, Edgar Tretschk, L. Liu, Ben Mildenhall, Pranatharthi Srinivasan, R. Pandey, Sergio Orts-Escalano, S. Fanello, M. Guang Guo, Gordon Wetzstein, J y Zhu, Christian Theobalt, Manju Agrawala, Donald B. Goldman, and Michael Zollhöfer. Advances in neural rendering. Computer Graphics Forum, 41, 2021. 2 + +[61] Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, and Johannes Kopf. Consistent view synthesis with pose-guided diffusion models. In CVPR, 2023. 2, 4 +[62] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In NeurIPS, 2021. 3 +[63] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In NeurIPS, 2017. 5 +[64] Jianyi Wang, Zongsheng Yue, Shangchen Zhou, Kelvin C. K. Chan, and Chen Change Loy. Exploiting diffusion prior for real-world image super-resolution. In IJCV, 2024. 3, 6, 7 +[65] Peng Wang and Yichun Shi. 
ImageDream: Image-prompt multi-view diffusion for 3D generation. arXiv preprint arXiv:2312.02201, 2023. 2, 3
[66] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: Enhanced super-resolution generative adversarial networks. In ECCVW, 2018. 3
[67] Xintao Wang, Kelvin C.K. Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In CVPRW, 2019. 3
[68] Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan. Towards real-world blind face restoration with generative facial prior. In CVPR, 2021. 3
[69] Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In ICCVW, 2021. 2, 3, 5, 6, 7
[70] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In NeurIPS, 2023. 2
[71] Zhengyi Wang, Yikai Wang, Yifei Chen, Chendong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, and Jun Zhu. CRM: Single image to 3D textured mesh with convolutional reconstruction model. In ECCV, 2024. 2, 5
[72] Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, Hanyang Wang, Yating Hu, Yueqi Duan, and Kaisheng Ma. Unique3D: High-quality and efficient 3D mesh generation from a single image. arXiv preprint arXiv:2405.20343, 2024. 3
[73] Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, and Aleksander Holynski. ReconFusion: 3D reconstruction with diffusion priors. In CVPR, 2024. 5
[74] Xi Liu, Chaoyi Zhou, and Siyu Huang. 3DGS-Enhancer: Enhancing unbounded 3D Gaussian splatting with view-consistent 2D diffusion priors. NeurIPS, 2024. 2, 3
[75] Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan.
# 3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting

Qi Wu $^{1,*}$ , Janick Martinez Esturo $^{1,*}$ , Ashkan Mirzaei $^{1,2}$ , Nicolas Moenne-Loccoz $^{1}$ , Zan Gojcic $^{1}$

$^{1}$ NVIDIA, $^{2}$ University of Toronto

https://research.nvidia.com/labs/toronto-ai/3DGUT
![](images/c1c37e5aeef3cd70a1763bccfc5ea10587a2aea3d0aa06c52d9814a539680b20.jpg)
Ground Truth / Trained w/ Undistorted Views / Trained w/ Original Views

Figure 1. We extend 3D Gaussian Splatting (3DGS) to support nonlinear camera projections and secondary rays for simulating phenomena such as reflections and refractions. By replacing EWA splatting rasterization with the Unscented Transform, our approach retains real-time efficiency while accommodating complex camera effects like rolling shutter. (Left) A comparison of our model trained on undistorted views vs. the original distorted fisheye views, showing that training on the full set of pixels improves visual quality. (Right) Two synthetic objects, a reflective sphere and a refractive statue, inserted into a scene reconstructed with our model.

![](images/10d23bddabc46db1c473c305e98019623f6c7510fc97417881a509c645ffcdfb.jpg)
Secondary Effects

# Abstract

3D Gaussian Splatting (3DGS) enables efficient reconstruction and high-fidelity real-time rendering of complex scenes on consumer hardware. However, due to its rasterization-based formulation, 3DGS is constrained to ideal pinhole cameras and lacks support for secondary lighting effects. Recent methods address these limitations by tracing the particles instead, but this comes at the cost of significantly slower rendering. In this work, we propose 3D Gaussian Unscented Transform (3DGUT), replacing the EWA splatting formulation with the Unscented Transform, which approximates the particles through sigma points that can be projected exactly under any nonlinear projection function. This modification enables trivial support of distorted cameras with time-dependent effects such as rolling shutter, while retaining the efficiency of rasterization. Additionally, we align our rendering formulation with that of tracing-based methods, enabling the secondary ray tracing required to represent phenomena such as reflections and refraction within the same 3D representation.
The source code is available at: https://github.com/nv-tlabs/3dgrut.

# 1. Introduction

Multiview 3D reconstruction and novel view synthesis is a classical problem in computer vision, for which several scene representations have been proposed in recent years, including points [22, 40], surfaces [5, 39, 53], and volumetric fields [33, 35, 50, 52]. Most recently, driven by 3D Gaussian Splatting [18] (3DGS), volumetric particle-based representations have gained significant popularity due to their high visual fidelity and fast rendering speeds. The core idea of 3DGS is to model scenes as an unstructured collection of fuzzy 3D Gaussian particles, each defined by its location, scale, rotation, opacity, and appearance. These particles can be rendered differentiably in real time via rasterization, allowing their parameters to be optimized through a re-rendering loss function.

The high frame rates of 3DGS, especially compared to volumetric ray-marching methods, can largely be attributed to the efficient rasterization of particles. However, this reliance on rasterization also imposes some inherent limitations. The EWA splatting formulation [57] does not support highly distorted cameras with complex time-dependent effects such as rolling shutter. Additionally, rasterization cannot simulate the secondary rays required for representing phenomena like reflection, refraction, and shadows.

Instead of rasterization, recent works have proposed to render the volumetric particles using ray tracing [7, 30, 34]. While this mitigates the shortcomings of rasterization, it does so at the expense of significantly reduced rendering speed, even when the tracing formulation is heavily optimized for semi-transparent particles [34]. In this work, we instead aim to overcome the above limitations of 3DGS while remaining in the realm of rasterization, thereby maintaining high rendering rates.
To this end, we seek answers to the following two questions:

What makes 3DGS ill-suited to represent distorted cameras and rolling shutter? To project 3D Gaussian particles onto the camera image plane, 3DGS relies on an EWA splatting formulation that requires computing the Jacobian of the non-linear projection function. This leads to approximation errors, even for perfect pinhole cameras, and the errors become progressively worse with increasing distortion [14]. Moreover, it is unclear how to even represent time-dependent effects such as rolling shutter within the EWA splatting formulation.

Instead of approximating the non-linear projection function, we draw inspiration from the classical literature on the Unscented Kalman Filter [16] and approximate the 3D Gaussian particles using a set of carefully selected sigma points. These sigma points can be projected exactly onto the camera image plane by applying an arbitrarily complex projection function to each point, after which a 2D Gaussian can be re-estimated from them in the form of an Unscented Transform (UT) [12]. Apart from better approximation quality, UT is derivative-free and completely avoids the need to derive Jacobians for different camera models (Fig. 1 left). Moreover, complex effects such as rolling shutter distortions can directly be represented by transforming each sigma point with a different extrinsic matrix.

Can we align the rasterization rendering formulation with that of ray tracing? The rendering formulations mainly differ in terms of: (i) determining which particles contribute to which pixels, (ii) the order in which the particles are intersected, and (iii) how the particles are evaluated. To align the representations, we therefore follow 3DGRT [34] and evaluate the Gaussian particle response in 3D, while sorting the particles in an order similar to Radl et al. [37].
While small differences persist, this provides us with a representation that can be both rasterized and ray-traced, enabling the secondary rays required to simulate phenomena like refraction and reflection (Fig. 1 right).

In summary, we propose 3D Gaussian Unscented Transform (3DGUT), where our main contributions are:

- We derive a rasterization formulation that approximates the 3D Gaussian particles instead of the non-linear projection function. This simple change enables us to extend 3DGS to arbitrary camera models and to support complex time-dependent effects such as rolling shutter.
- We align the rendering formulation with 3DGRT, which allows us to render the same representation with rasterization and ray tracing, supporting phenomena such as refraction and reflections.

On multiple datasets, we demonstrate that our formulation leads to comparable rendering rates and image fidelity to 3DGS, while offering greater flexibility and outperforming dedicated methods on datasets with distorted cameras.

# 2. Related Work

Neural Radiance Fields. Neural Radiance Fields (NeRFs) [33] have transformed the field of novel view synthesis by modeling scenes as an emissive volume encoded within a coordinate-based neural network. These networks can be queried at any spatial location to return the volume density and view-dependent radiance. Novel views are synthesized by sampling the network along camera rays and accumulating radiance through volumetric rendering. While the original formulation [33] utilized a large, global multi-layer perceptron (MLP), subsequent work has explored more efficient scene representations, including voxel grids [27, 42, 45], triplanes [3], low-rank tensors [4], and hash tables [35]. Despite these advances, even highly optimized NeRF implementations [35] still struggle to achieve real-time inference rates due to the computational cost of ray marching.
To accelerate inference, several efforts have focused on converting the radiance fields into more efficient representations, such as meshes [5, 53], hybrid surface-volume representations [44, 47, 49, 51], and sparse volumes [8, 9, 38]. However, these approaches generally require a cumbersome two-step pipeline: first training a conventional NeRF model and then baking it into a more performant representation, which further increases the training time and complexity.

Volumetric Particle Representations. Differentiable rendering via alpha compositing has also been explored in combination with volumetric particles, such as spheres [23]. More recently, 3D Gaussian Splatting [18] replaced spheres with fuzzy anisotropic 3D Gaussians. Instead of ray marching, these explicit volumetric particles can be rendered through highly efficient rasterization, achieving competitive results in terms of quality and efficiency. Due to its simplicity and flexibility, 3DGS has inspired numerous follow-up works focusing on improving memory efficiency [24, 29, 31], developing better densification and pruning heuristics [20, 54], enhancing surface representation [10, 11], and scaling up to large scenes [19, 26, 28]. However, while rasterization is very efficient, it also introduces trade-offs, such as being limited to perfect pinhole cameras. Prior work has attempted to work around these limitations and support complex camera models such as fisheye cameras [25] or rolling shutter [43], but these works still require a dedicated formulation for each camera type and exhibit quality degradation with increased complexity and distortion of the camera models [14].

In response, recent works have explored replacing rasterization entirely and instead rendering the 3D Gaussians using ray tracing [7, 30, 34]. Ray tracing inherently supports complex camera models and enables secondary effects like shadows, refraction, and reflections through secondary rays.
However, this comes with a substantial decrease in rendering efficiency: even the most optimized ray-tracing methods are still 3-4 times slower than rasterization [34].

In this work, we instead propose a generalized approach for efficiently handling complex camera models within the rasterization framework, thereby preserving its computational efficiency. Additionally, we unify our rendering formulation with that of ray tracing, enabling a hybrid rendering technique within the same representation.

Unscented Transform. Computing the statistics of a random variable that has undergone a transformation is one of the fundamental tasks in the fields of estimation and optimization. When the transformation is non-linear, however, no closed-form solution exists, so several approximations have been proposed. The simplest and perhaps most widely used approach is to linearize the non-linear transformation using a first-order Taylor approximation. However, the local linearity assumption is often violated, and derivation of the Jacobian matrix is non-trivial and error-prone. The Unscented Transform (UT) [16, 17] was proposed to address these limitations. The key idea of UT is to approximate the distribution of the random variable using a set of Sigma points that can be transformed exactly, after which they can be used to re-estimate the statistics of the random variable in the target domain. Originally, UT was devised for filtering-based state estimation [16, 48], but it has since found applications in computer vision [2, 15]. Notably, UT has even been explored in the context of novel-view synthesis [2], where it was used to estimate the ray frustum from samples that match its first and second moments.

# 3. Preliminaries

We provide a short review of the 3D Gaussian parametrization, volumetric particle rendering, and EWA splatting.

3D Gaussian Splatting Representation: Kerbl et al.
[18] represent scenes using an unordered set of 3D Gaussian particles whose response function $\rho : \mathbb{R}^3 \to \mathbb{R}$ is defined as

$$
\rho(\boldsymbol{x}) = \exp\left(-\frac{1}{2}(\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x} - \boldsymbol{\mu})\right), \tag{1}
$$

where $\boldsymbol{\mu} \in \mathbb{R}^3$ denotes the particle's position and $\boldsymbol{\Sigma} \in \mathbb{R}^{3 \times 3}$ its covariance matrix. To ensure that $\boldsymbol{\Sigma}$ remains positive semi-definite during gradient-based optimization, it is decomposed into a rotation matrix $\boldsymbol{R} \in \mathrm{SO}(3)$ and a scaling matrix $\boldsymbol{S} \in \mathbb{R}^{3\times 3}$, such that

$$
\boldsymbol{\Sigma} = \boldsymbol{R}\boldsymbol{S}\boldsymbol{S}^T\boldsymbol{R}^T. \tag{2}
$$

![](images/957fdab1be084237b4478feb9efbfc7c743282a330ffc49022941d31aca98e9a.jpg)

Figure 2. When projecting a Gaussian particle from 3D space onto the camera image plane, Monte Carlo sampling (left) provides the most accurate estimate but is costly to compute. The EWA splatting formulation used in [18] approximates the projection function via linearization, which requires a dedicated Jacobian $J$ for each camera model and leads to approximation errors with increasing distortion. The Unscented Transform instead approximates the particle with Sigma points that can be projected exactly and from which the 2D conic can then be estimated.

In practice, both $\boldsymbol{R}$ and $\boldsymbol{S}$ are stored as vectors—a quaternion $\boldsymbol{q} \in \mathbb{R}^4$ for the rotation and a vector $\boldsymbol{s} \in \mathbb{R}^3$ for the scaling. Each particle is also associated with an opacity coefficient, $\sigma \in \mathbb{R}$, and a view-dependent parametric radiance function $\phi_{\beta}(\boldsymbol{d}) : \mathbb{R}^3 \to \mathbb{R}^3$, with $\boldsymbol{d}$ the incident ray direction, which is in practice represented using spherical harmonics functions of order $m = 3$.
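The parametrization above (Eqs. (1)-(2)) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the paper's CUDA code; the function names are ours.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def covariance(q, s):
    """Sigma = R S S^T R^T (Eq. 2); s holds the per-axis scales."""
    R = quat_to_rotmat(q)
    S = np.diag(s)
    return R @ S @ S.T @ R.T

def response(x, mu, Sigma):
    """Gaussian particle response rho(x) of Eq. (1)."""
    d = x - mu
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))
```

Because $\boldsymbol{\Sigma}$ is built from a rotation and squared scales, it is positive semi-definite by construction, which is exactly why 3DGS optimizes $\boldsymbol{q}$ and $\boldsymbol{s}$ rather than $\boldsymbol{\Sigma}$ directly.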
Determining the Particle Response: Within the 3DGS rasterization framework, the 3D particles first need to be projected onto the camera image plane in order to determine their contributions to the individual pixels. To this end, 3DGS follows [57] and computes a covariance matrix $\boldsymbol{\Sigma}^{\prime} \in \mathbb{R}^{2\times 2}$ for a projected Gaussian in image coordinates via a first-order approximation as

$$
\boldsymbol{\Sigma}^{\prime} = \boldsymbol{J}_{[:2,:3]} \boldsymbol{W} \boldsymbol{\Sigma} \boldsymbol{W}^T \boldsymbol{J}_{[:2,:3]}^T, \tag{3}
$$

where $\boldsymbol{W} \in \mathrm{SE}(3)$ transforms the particle from the world to the camera coordinate system, and $\boldsymbol{J} \in \mathbb{R}^{3 \times 3}$ denotes the Jacobian matrix of the affine approximation of the projective transformation, which is obtained by considering the linear terms of its Taylor expansion. The Gaussian response of a particle $i$ for a position $\boldsymbol{x} \in \mathbb{R}^3$ can then be computed in 2D from its projection on the image plane $\boldsymbol{v}_{\boldsymbol{x}} \in \mathbb{R}^2$ as

$$
\rho_i(\boldsymbol{x}) = \exp\left(-\frac{1}{2}\left(\boldsymbol{v}_{\boldsymbol{x}} - \boldsymbol{v}_{\boldsymbol{\mu}_i}\right)^T {\boldsymbol{\Sigma}_i^{\prime}}^{-1} \left(\boldsymbol{v}_{\boldsymbol{x}} - \boldsymbol{v}_{\boldsymbol{\mu}_i}\right)\right), \tag{4}
$$

where $\boldsymbol{v}_{\boldsymbol{\mu}_i} \in \mathbb{R}^2$ denotes the projected mean of the particle.
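For the special case of an ideal pinhole camera, the linearized projection of Eq. (3) can be sketched as below. The particle is assumed to already be in camera coordinates (i.e., $\boldsymbol{W}\boldsymbol{\Sigma}\boldsymbol{W}^T$ has been applied), and the focal lengths `fx`, `fy` are our own illustrative parameters; the Jacobian would have to be re-derived for every other camera model, which is precisely the limitation the paper targets.

```python
import numpy as np

def ewa_project(mu_cam, Sigma_cam, fx, fy):
    """First-order (EWA-style) projection of a 3D Gaussian onto the image
    plane of a pinhole camera (Eq. 3, truncated to the first two rows).
    mu_cam: particle mean in camera coordinates (z > 0).
    Sigma_cam: 3x3 covariance already transformed into camera coordinates.
    Returns the projected 2D mean and the 2x2 covariance Sigma'."""
    x, y, z = mu_cam
    v_mu = np.array([fx * x / z, fy * y / z])
    # Jacobian of (fx*x/z, fy*y/z) with respect to (x, y, z).
    J = np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])
    return v_mu, J @ Sigma_cam @ J.T
```

Note that the result is exact only where the projection is locally linear; the neglected higher-order Taylor terms are the source of the errors discussed in Sec. 4.1.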
Volumetric Particle Rendering: The color $\boldsymbol{c} \in \mathbb{R}^3$ of a camera ray $\boldsymbol{r}(\tau) = \boldsymbol{o} + \tau\boldsymbol{d}$ with origin $\boldsymbol{o} \in \mathbb{R}^3$ and direction $\boldsymbol{d} \in \mathbb{R}^3$ can be rendered from the above volumetric particle representation using numerical integration

$$
\boldsymbol{c}(\boldsymbol{o}, \boldsymbol{d}) = \sum_{i=1}^{N} \boldsymbol{c}_i(\boldsymbol{d}) \, \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j), \tag{5}
$$

where $N$ denotes the number of particles that contribute to the given ray and the opacity $\alpha_i \in \mathbb{R}$ is defined as $\alpha_i = \sigma_i \rho_i(\boldsymbol{o} + \tau\boldsymbol{d})$ for any $\tau \in \mathbb{R}^{+}$.

# 4. Method

Our aim is to extend the 3DGS [18] and 3DGRT [34] methods by developing a formulation that:

- accommodates highly distorted cameras and time-dependent camera effects, such as rolling shutter,
- unifies the rendering formulation to allow the same reconstructions to be rendered using either splatting or tracing, enabling hybrid rendering with traced secondary rays,

all while preserving the efficiency of rasterization. We begin by detailing our approach to bypassing the linearization steps of 3DGS [18] in Sec. 4.1, followed by an approach to evaluate the particles in order and directly in 3D (Sec. 4.2). The former enables support for complex camera models, while the latter aligns the rendering formulation with 3DGRT [34].

# 4.1. Unscented Transform

As illustrated in Fig. 2, the EWA splatting formulation used in 3DGS for projecting 3D Gaussian particles onto the camera image plane relies on the linearization of the affine approximation of the projective transform (Eq. (3)).
This approach, however, has several notable limitations: (i) it neglects higher-order terms in the Taylor expansion, leading to projection errors even with perfect pinhole cameras [14], and these errors increase with camera distortion; (ii) it requires deriving a new Jacobian for each specific camera model (e.g., the equidistant fisheye model in [25]), which is cumbersome and error-prone; (iii) it necessitates representing the projection as a single function, which is particularly challenging when accounting for time-dependent effects such as rolling shutter.

To overcome these limitations, we leverage the idea of the Unscented Transform (UT) and propose to instead approximate the volumetric $N$-dimensional particle using a set of carefully selected Sigma points. Generally, $2N + 1$ points are required to match at least the first three moments of the target distribution. Consider the 3D Gaussian scene representation described in Sec. 3, where particles are characterized by their position $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$; the Sigma points $\mathcal{X} = \{\boldsymbol{x}_i\}_{i=0}^{6}$ are then defined as

$$
\boldsymbol{x}_i = \begin{cases} \boldsymbol{\mu} & \text{for } i = 0 \\ \boldsymbol{\mu} + \sqrt{(3+\lambda)\boldsymbol{\Sigma}}_{[i]} & \text{for } i = 1, 2, 3 \\ \boldsymbol{\mu} - \sqrt{(3+\lambda)\boldsymbol{\Sigma}}_{[i-3]} & \text{for } i = 4, 5, 6 \end{cases} \tag{6}
$$

using the available factorization Eq. (2) of the covariance to read off the matrix square root.

Their corresponding weights $\mathcal{W} = \{w_i\}_{i=0}^{6}$ are given as

$$
w_i^{\mu} = \begin{cases} \frac{\lambda}{3+\lambda} & \text{for } i = 0 \\ \frac{1}{2(3+\lambda)} & \text{for } i = 1, \dots, 6 \end{cases} \tag{7}
$$

$$
w_i^{\Sigma} = \begin{cases} \frac{\lambda}{3+\lambda} + (1 - \alpha^2 + \beta) & \text{for } i = 0 \\ \frac{1}{2(3+\lambda)} & \text{for } i = 1, \dots, 6 \end{cases} \tag{8}
$$

where $\lambda = \alpha^2(3+\kappa) - 3$, $\alpha$ is a hyperparameter that controls the spread of the points around the mean, $\kappa$ is a scaling parameter typically set to 0, and $\beta$ is used to incorporate prior knowledge about the distribution [48].

Each Sigma point can then be independently projected onto the camera image plane using the non-linear projection function $\boldsymbol{v}_{x_i} = g(\boldsymbol{x}_i)$. The 2D conic can subsequently be approximated as the weighted posterior sample mean and covariance matrix of the Gaussian:

$$
\boldsymbol{v}_{\boldsymbol{\mu}} = \sum_{i=0}^{6} w_i^{\mu} \boldsymbol{v}_{x_i} \tag{9}
$$

$$
\boldsymbol{\Sigma}^{\prime} = \sum_{i=0}^{6} w_i^{\Sigma} \left(\boldsymbol{v}_{x_i} - \boldsymbol{v}_{\boldsymbol{\mu}}\right)\left(\boldsymbol{v}_{x_i} - \boldsymbol{v}_{\boldsymbol{\mu}}\right)^{\mathrm{T}} \tag{10}
$$

With the 2D conic computed, we can apply the same tiling and culling procedures as proposed by [18, 37] to determine which particles influence which pixels. As described in the following section, our particle response evaluation does not depend on the 2D conic. Instead, UT acts only as an acceleration structure to efficiently determine the particles that contribute to each pixel, thus avoiding the need to compute a backward pass through the non-linear projection function.

# 4.2. Evaluating Particle Response

Once the Gaussian particles contributing to each pixel have been identified, we need to determine how to evaluate their response. Following 3DGRT [34], we evaluate particles directly in 3D by using a single sample located at the point of maximum particle response along a given ray.
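The Sigma-point construction of Eq. (6) and the UT re-estimation of Eqs. (9)-(10) can be sketched in NumPy as follows. This is a didactic reimplementation (not the paper's CUDA kernels), using the $N = 3$ case with the default $\alpha = 1$, $\beta = 2$, $\kappa = 0$ reported in Sec. 4.4; `g` can be any projection function, which is the whole point of the method.

```python
import numpy as np

def sigma_points_and_weights(mu, R, S, alpha=1.0, beta=2.0, kappa=0.0):
    """Sigma points of a 3D Gaussian (Eq. 6) with UT weights (Eqs. 7-8).
    Uses the factorization Sigma = (R S)(R S)^T to read off a square root."""
    n = 3
    lam = alpha**2 * (n + kappa) - n
    A = np.sqrt(n + lam) * (R @ S)  # columns are scaled sqrt-covariance axes
    pts = [mu] + [mu + A[:, i] for i in range(n)] + [mu - A[:, i] for i in range(n)]
    w_mu = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_cov = w_mu.copy()
    w_mu[0] = lam / (n + lam)
    w_cov[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.stack(pts), w_mu, w_cov

def unscented_project(pts, w_mu, w_cov, g):
    """Project each Sigma point with an arbitrary function g and re-estimate
    the 2D mean and covariance (Eqs. 9-10)."""
    v = np.stack([g(p) for p in pts])
    v_mu = (w_mu[:, None] * v).sum(axis=0)
    d = v - v_mu
    cov = sum(w_cov[i] * np.outer(d[i], d[i]) for i in range(len(pts)))
    return v_mu, cov
```

A useful sanity check: for a *linear* projection the UT estimate is exact, recovering the mean $\boldsymbol{M\mu}$ and covariance $\boldsymbol{M\Sigma M}^T$, while for non-linear projections it matches the true moments up to higher order than the Jacobian-based linearization.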
A comparison between 3DGS's 2D conic response evaluation method and our 3D response evaluation method is provided in Fig. 3. Specifically, we compute the distance $\tau_{\mathrm{max}} = \operatorname{argmax}_{\tau} \rho(\boldsymbol{o} + \tau\boldsymbol{d})$, which maximizes the particle response along the ray $\boldsymbol{r}(\tau)$, as

$$
\tau_{\max} = \frac{(\boldsymbol{\mu} - \boldsymbol{o})^T \boldsymbol{\Sigma}^{-1}\boldsymbol{d}}{\boldsymbol{d}^T\boldsymbol{\Sigma}^{-1}\boldsymbol{d}} = \frac{-\boldsymbol{o}_g^T\boldsymbol{d}_g}{\boldsymbol{d}_g^T\boldsymbol{d}_g}, \tag{11}
$$

where $\boldsymbol{o}_g = \boldsymbol{S}^{-1}\boldsymbol{R}^T(\boldsymbol{o} - \boldsymbol{\mu})$ and $\boldsymbol{d}_g = \boldsymbol{S}^{-1}\boldsymbol{R}^T\boldsymbol{d}$.

![](images/516920289b5a446940979009d10291acc6f24f5b4c1e6a3c7bf5a7047b6d2d35.jpg)

Figure 3. For a given ray, 3DGS [18] evaluates the response of the Gaussian particle in 2D after the projection onto the camera image plane. This requires backpropagation through the (approximated) projection function. Instead, we follow [34] and evaluate particles in 3D at the point of the maximum response along the ray.

Unlike 3DGS, which performs particle evaluations in 2D, our approach avoids propagating gradients through the projection function, thereby avoiding its approximations and mitigating potential numerical instabilities. Due to limited space, we provide the derivation of the numerically stable backward pass in the Supplementary Material, Sec. B.

# 4.3. Sorting Particles

The proposed volumetric rendering formulation, i.e., both the rendering equation Eq. (5) and the particle evaluation Eq. (11), is equivalent to the one used in 3DGRT. However, while 3DGRT is able to collect the hit particles in their exact $\tau_{\mathrm{max}}$ order along the ray thanks to a dedicated acceleration structure [36], 3DGS sorts them globally for each tile.
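Combining Eq. (11) with the compositing sum of Eq. (5), a single ray can be rendered by evaluating each particle at its $\tau_{\mathrm{max}}$ and blending front to back. The sketch below (our own NumPy illustration, not the paper's implementation) shows both pieces:

```python
import numpy as np

def tau_max(o, d, mu, R, S):
    """Ray parameter of maximum Gaussian response along r(tau) = o + tau*d
    (Eq. 11), computed in the particle's canonical frame via
    o_g = S^-1 R^T (o - mu) and d_g = S^-1 R^T d."""
    Sinv_Rt = np.linalg.inv(S) @ R.T
    o_g = Sinv_Rt @ (o - mu)
    d_g = Sinv_Rt @ d
    return float(-(o_g @ d_g) / (d_g @ d_g))

def composite(colors, alphas):
    """Front-to-back alpha compositing (Eq. 5):
    c = sum_i c_i * a_i * prod_{j<i} (1 - a_j),
    with particles assumed already sorted by tau_max."""
    c = np.zeros(3)
    T = 1.0  # accumulated transmittance
    for c_i, a_i in zip(colors, alphas):
        c += T * a_i * np.asarray(c_i, dtype=float)
        T *= (1.0 - a_i)
    return c
```

Sorting particles by their `tau_max` before calling `composite` reproduces, per ray, the ordering that 3DGRT obtains exactly and that the tile-based sort only approximates.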
In order to get a better approximation of the $\tau_{\mathrm{max}}$ order, we propose to use the multi-layer alpha blending (MLAB) approximation [41], following [37]. It consists of storing the per-ray $k$-farthest hit particles (typically using $k = 16$) in a buffer. The closest hits that cannot be stored in the buffer are incrementally alpha-blended until the transmittance of the blended part vanishes.

As an alternative, the hybrid transparency (HT) blending strategy [32] has recently been used for splatting Gaussian particles [13]. Instead of storing the $k$-farthest hit particles and incrementally blending the closest hits, HT stores the $k$-closest and incrementally blends the farthest hits. This makes it possible to recover the exact $k$-closest hit particles, but requires going through all particles, which may be prohibitively slow without dedicated optimizations and heuristics.

# 4.4. Implementation and Training

We build on the work of [18, 34] and implemented our method in PyTorch, using custom CUDA kernels for the compute-intensive parts. Additionally, we employ the advanced culling strategies proposed by Radl et al. [37]. Unless otherwise specified, we adopt all parameters from 3DGS [18] to ensure a fair comparison and keep them consistent across all evaluations.

Similar to [34], we do not have access to 2D screen-space gradients, so we follow 3DGRT [34] and replace them with the 3D positional gradients divided by half of the distance to the camera, and perform densification and pruning every 300 iterations. For the UT, we set $\alpha = 1.0$, $\beta = 2.0$, and $\kappa = 0.0$ in all evaluations. We train our model for 30k iterations using the weighted sum of the L2 loss $\mathcal{L}_2$ and the perceptual loss $\mathcal{L}_{\mathrm{SSIM}}$, such that $\mathcal{L} = \mathcal{L}_2 + 0.2\mathcal{L}_{\mathrm{SSIM}}$.

# 5. Experiments and Ablations

In this section, we first evaluate the proposed approach on standard novel-view synthesis benchmark datasets [1, 21], analyzing both quality and speed. We additionally evaluate our method on an indoor dataset captured with fisheye cameras [55], as well as an autonomous driving dataset captured using distorted cameras with a rolling shutter effect [46]. Ablation studies on key design choices and additional details on experiments and implementation are provided in the Supplementary Material.

Model Variants. In the following evaluation, we refer to two variants of our method. We use Ours to denote the version that extends 3DGS [18] with the UT formulation (Sec. 4.1) and particle evaluation in 3D (Sec. 4.2). The second variant, Ours (sorted), additionally uses the per-ray sorting strategy detailed in Sec. 4.3 that leads to unification with 3DGRT [34].

Metrics. We evaluate the perceptual quality of the novel views using peak signal-to-noise ratio (PSNR), learned perceptual image patch similarity (LPIPS), and structural similarity (SSIM) metrics. To assess performance, we measure the time required for rendering a single image, excluding any overhead from data storage or visualization. For all evaluations, we use the datasets' default resolutions and report frames per second (FPS) measured on a single NVIDIA RTX 6000 Ada GPU.

Baselines. There have been many follow-up works that improve or extend 3DGS in different aspects [7, 13, 20, 29, 56]. Many of these improvements are compatible with our approach, so we limit our comparison to the original 3DGS [18] and StopThePop [37] as the representative splatting methods, along with 3DGRT [34] and EVER [30] as volumetric particle tracing methods that natively support distorted cameras and secondary lighting effects.
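Of the metrics above, PSNR has a closed form that is easy to reproduce; SSIM and LPIPS require reference implementations. As a minimal sketch (our own helper, assuming images normalized to $[0, 1]$):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float(10.0 * np.log10(max_val**2 / mse))
```

For example, a uniform error of 0.1 on a $[0, 1]$ image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB, which helps calibrate the roughly 27-29 dB scores reported in the tables below.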
![](images/14ec4d52a8f4e54802dc6c644ed650a83ff4033f38ce11af542fd2e7a211d86a.jpg)
Ground Truth

![](images/fb83d2d0abeb9376695dddabaab3796b94164fda8d2fb280bb5e57c7247b281e.jpg)
3DGS

![](images/9a31d390918c5fcb690890b5dcb0279379495f8871f17a0559018175dde9023d.jpg)
3DGRT

![](images/909ae760fed13c0edd01c6d1a1fa64ccf7cd84dd5f971feecdc38d2399c8dcc6.jpg)
EVER

![](images/0c20bfb76ba13cadb134a354ecca3985c7324b718d34268d9d4d3312c0604629.jpg)
StopThePop

![](images/8b30d879c398ad774501e0f30649cf9b07ac10c2ed39b5d22efdd31693cace41.jpg)
Ours

Figure 4. Qualitative comparison of our novel-view synthesis results against the baselines on the MipNeRF360 dataset [1].

Table 1. Quantitative results of our approach and baselines on the MipNeRF360 [1] and Tanks & Temples [21] datasets.

| Method | Complex Cameras | Without Popping | MipNeRF360 PSNR↑ | MipNeRF360 SSIM↑ | MipNeRF360 LPIPS↓ | MipNeRF360 FPS↑ | Tanks & Temples PSNR↑ | Tanks & Temples SSIM↑ | Tanks & Temples LPIPS↓ | Tanks & Temples FPS↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| ZipNeRF [2] | ✓ | / | 28.54 | 0.828 | 0.219 | 0.2 | / | / | / | / |
| 3DGS [18] | ✗ | ✗ | 27.26 | 0.803 | 0.240 | 347 | 23.64 | 0.837 | 0.196 | 476 |
| Ours | ✓ | ✗ | 27.26 | 0.810 | 0.218 | 265 | 23.21 | 0.841 | 0.178 | 277 |
| StopThePop [37] | ✗ | ✓ | 27.14 | 0.804 | 0.235 | 340 | 23.15 | 0.837 | 0.189 | 482 |
| 3DGRT [34] | ✓ | ✓ | 27.20 | 0.818 | 0.248 | 52 | 23.20 | 0.830 | 0.222 | 190 |
| EVER [30] | ✓ | ✓ | 27.51 | 0.825 | 0.233 | 36 | / | / | / | / |
| Ours (sorted) | ✓ | ✓ | 27.26 | 0.812 | 0.215 | 200 | 22.90 | 0.844 | 0.172 | 272 |

Table 2. Detailed timings on the MipNeRF360 [1] dataset.

| Timings in ms | Preprocess | Duplicate | Sort | Render | Total |
|---|---|---|---|---|---|
| 3DGS [18] | 0.59 | 0.34 | 0.55 | 1.27 | 2.88 |
| Ours | 1.34 | 0.31 | 0.33 | 1.61 | 3.77 |
| StopThePop [37] | 0.57 | 0.27 | 0.14 | 1.83 | 2.94 |
| 3DGRT [34] | / | / | / | 19.24 | 19.24 |
| Ours (sorted) | 1.24 | 0.47 | 0.24 | 2.85 | 4.98 |

On the dataset captured with fisheye cameras, we compare our method to FisheyeGS [25], which extended 3DGS to fisheye cameras by deriving the Jacobian of the equidistant fisheye camera model. In addition to volumetric particle-based methods, we also compare our approach to the state-of-the-art NeRF method ZipNeRF [2].

# 5.1. Novel View Synthesis Benchmarks

MipNeRF360 [1] is the most popular novel-view synthesis benchmark, consisting of nine large-scale outdoor and indoor scenes. Following prior work, we used the images downsampled by a factor of four for the outdoor scenes, and by a factor of two for the indoor scenes. To enable comparison with other splatting methods, we use the rectified images provided by Kerbl et al. [18].

Tab. 1 depicts the quantitative comparison, while the qualitative comparison on selected scenes is provided in Fig. 4. As anticipated, on this dataset with perfect pinhole inputs, both Ours and Ours (sorted) achieve comparable perceptual quality to other splatting and tracing methods. In terms of inference runtime (Tab. 1), our method achieves comparable frame rates to 3DGS [18], while greatly outperforming all other methods that support complex cameras, at more than 265 FPS; the closest competitor, 3DGRT [34], achieves 52 FPS.

Tanks & Temples [21] contains two large-scale outdoor scenes where the camera circulates around a prominent object (Truck and Train). Both scenes include lighting variations, and the Truck scene also contains transient objects that should ideally be ignored by reconstruction methods. Tab. 1 depicts the quantitative comparison, while the qualitative results are provided in the Supplementary Material.

Scannet++ [55] is a large-scale indoor dataset captured with a fisheye camera at a resolution of $1752 \times 1168$ pixels. For our evaluation, we use the same six scenes as

![](images/a970c810dfdad416cb126e5c3dcecc3d7dd0a8fd529eb589eda139375de00b42.jpg)
Ground Truth / Fisheye-GS / Ours (sorted)

Figure 5.
Comparison of our renderings against Fisheye-GS [25] on scenes from the Scannet++ dataset [55].
+
+Table 3. When evaluated on a dataset acquired with equidistant fisheye cameras, our general method outperforms [25], which derives the linearization for this specific camera model. Undistortion removes large parts of the original images and results in under-observed regions [18]. Results marked with $\dagger$ are taken from [25].
+
+| Method \ Metric | PSNR↑ | SSIM↑ | LPIPS↓ | N. Gaussians↓ |
+| --- | --- | --- | --- | --- |
+| 3DGS† | 22.76 | 0.798 | / | 1.31M |
+| FisheyeGS† [25] | 27.86 | 0.897 | / | 1.25M |
+| FisheyeGS [25] | 28.15 | 0.901 | 0.261 | 1.07M |
+| Ours (sorted) | 29.11 | 0.910 | 0.252 | 0.38M |
+
+FisheyeGS [25] and follow the same pre-processing steps. Specifically, we convert the images to an equidistant fisheye camera model to match the requirements of [25].2
+
+On this dataset, we compare Ours to FisheyeGS [25] and 3DGS [18]. The results for the latter are taken from [25], where they were obtained by: (i) undistorting the training images and training with the official 3DGS [18] implementation, and (ii) rendering equidistant fisheye test views from that representation using the FisheyeGS [25] formulation. This setting is unfavorable for 3DGS [18], as significant portions of the images are lost during undistortion, but it highlights the problem of being limited to perfect pinhole cameras. The quantitative comparison is shown in Tab. 3 and qualitative results are provided in Fig. 5. Ours significantly outperforms FisheyeGS [25] across all perceptual metrics, while using less than half the particles (0.38M vs. 1.07M). This result underscores the flexibility and potential of our approach. Although FisheyeGS [25] derives a Jacobian for this particular camera model (which limits its applicability even to similar models, e.g., fisheye with distortions), it still underperforms our simple formulation, which can be trivially applied to any camera model.
+
+![](images/5d801755d5be349a6f7069d9a4d67c18d6b1ff5fe17c132417a29b4e5e94fecf.jpg)
+Figure 6. Qualitative comparison of our novel-view synthesis results against 3DGRT on the Waymo dataset [46].
+
+![](images/e9c6ec08b7bd3cf582271641362eb176afa6dc981bfe639ac8ea7cce448ba64a.jpg)
+
+![](images/2b35066519ff02305a26e154a8555e239eafb97b33d2204264dc7a406642d2c3.jpg)
+
+![](images/9bdf0d997cbb03d27a05532f83574972db6ee79774ec03f8997aca4deb7dd07c.jpg)
+Figure 7. Multiple frame tiles $f_{i}$ of a single solid box rendered by a left- and right-panning rolling shutter camera with a top-to-bottom shutter direction illustrate this time-dependent sensor effect (data from [34]).
While ray-tracing-based methods like 3DGRT naturally support compensating for these time-dependent effects (a), traditional splatting methods struggle to model these (c), whereas our UT-based splatting formulation faithfully incorporates the sensor's motion into the projection formulation and recuperates the true undistorted geometry (b).
+
+![](images/c80fac75e7862740d4972419c496feb847f21aa500bcfbe14d4774500f3bf559.jpg)
+
+![](images/ff1603878e332876f190b62a155aa15ee57155244af3868a49cfb9f4644295da.jpg)
+
+Waymo [46]. is a large-scale autonomous driving dataset captured using distorted cameras with rolling shutter. We follow 3DGRT [34] and select 9 scenes with no dynamic objects to ensure accurate reconstructions. Fig. 6 shows qualitative results. Ours (sorted) can faithfully represent complex cameras mounted on a moving platform and reaches performance comparable to 3DGRT [34]. More results are provided in the Supplementary Material.
+
+# 6. Applications
+
+3DGUT also enables novel applications and techniques that were previously unattainable with particle scene representations within a rasterization framework.
+
+# 6.1. Complex cameras
+
+Distorted Camera Models. Projection of particles using UT enables 3DGUT not only to train with distorted cameras, but also to render different camera models with varying distortion from scenes that were trained using perfect pinhole camera inputs (Fig. 9, top row).
+
+Rolling Shutter. Apart from the modeling of distorted cameras, 3DGUT can also faithfully incorporate the camera motion into the projection formulation, hence offering support for time-dependent camera effects such as rolling shutter, which are commonly encountered in the fields of autonomous driving and robotics. Although optical distortion can be addressed with image rectification3, incorporating the time-dependency of the projection function into the linearization framework is highly non-trivial. To illustrate the impact of rolling shutter on various reconstruction methods, in Fig. 7 we use the synthetic dataset provided by Moenne-Loccoz et al. [34], where the motion of the camera and the shutter time are provided.
+
+![](images/fec97fcc80a082e31713212a8ac8e83aa9cc4d8e28424283f93d036dbd7221d9.jpg)
+Figure 8. Scenes trained with different methods and rendered using 3DGRT [34]. Our method is the most consistent with the tracing approach, allowing for seamless hybrid rendering with splatting for primary and tracing for secondary rays.
+
+![](images/00a524c7778022d0f156faaa2263d98f4cabf1d29ceeb6facb3b8de6434ba4b9.jpg)
+
+# 6.2. Secondary rays and lighting effects
+
+Aligning the representation with 3DGRT [34]. The rendering formulations of 3DGS and 3DGRT mainly differ in terms of (i) determining which particles contribute to which pixels, (ii) the order of particle evaluation, and (iii) the computation of the particles' response. In Secs. 4.2 and 4.3, our goal was to reduce these differences to arrive at a common 3D representation that can be both rasterized and traced. Fig. 8 shows the comparison of 3D representations trained with different methods and evaluated with 3DGRT [34]. While some discrepancies naturally remain, Ours (sorted) achieves much better alignment with 3DGRT than StopThePop or 3DGS.
+
+Secondary rays. Aligning our rendering formulation to 3DGRT [34] enables hybrid rendering by rasterizing the primary and tracing the secondary rays within the same representation. Specifically, we first compute all intersections of the primary rays with the scene, then render these primary rays using rasterization and discard Gaussian hits that fall behind a ray's closest intersection. Next, we compute and trace the secondary rays using 3DGRT. This hybrid rendering method allows us to achieve complex visual effects, such as reflections and refractions, that would otherwise only be possible with ray tracing.
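The primary/secondary split just described is a three-step control flow. Below is a minimal sketch of that flow, where `intersect`, `rasterize_up_to`, and `trace` are hypothetical callbacks standing in for the intersection query, the splatting renderer, and the 3DGRT-style ray tracer; this is an illustration of the pipeline's structure, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    t: float          # ray parameter of the closest surface intersection
    reflective: bool  # whether a secondary (reflection) ray should be spawned

def render_hybrid(primary_rays, intersect, rasterize_up_to, trace):
    """Sketch of the hybrid splatting/tracing pipeline.

    For each primary ray:
    1. find its closest scene intersection,
    2. rasterize Gaussians, discarding hits behind that intersection,
    3. trace a secondary ray with the ray tracer where needed.
    """
    colors = []
    for ray in primary_rays:
        hit = intersect(ray)                 # step 1: closest intersection
        color = rasterize_up_to(ray, hit.t)  # step 2: splat the primary segment
        if hit.reflective:                   # step 3: secondary bounce
            color = color + trace(ray, hit)
        colors.append(color)
    return colors
```

The key point is that steps 1-2 use the fast rasterizer for every pixel, while the expensive tracer in step 3 only runs for rays that actually spawn a secondary bounce.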
+
+![](images/c4ae350d7e948c4a97dc6a71173368e909854a77f221360acf0393bc22e019cd.jpg)
+
+![](images/af5eafea4d7b0403db60fd6d3aea8ba41260ac14026cbaaee248bce10403f58e.jpg)
+Figure 9. Illustration of the effects unlocked by our method. Top-left: rendering an image with rolling shutter. Top-right: applying a strong lens distortion. Bottom: hybrid splatting/tracing rendering. Primary rays are splatted using our method, while secondary rays are traced using 3DGRT [34]. This hybrid formulation allows us to simulate refraction (left) and reflections (right).
+
+![](images/0d4a54cd328df2989dcbf364d106bb27551925a0205e9fcb7f12fb3840dfbca9.jpg)
+
+![](images/cba4cd85e9970b4c530f4ed9a073b457fbcc2752f1003973fc62e80bb6824ec4.jpg)
+
+# 7. Discussion
+
+We proposed a simple idea: replace the linearization of the non-linear projection function in 3DGS [18] with the Unscented Transform. This modification enables us to seamlessly generalize 3DGS to distorted cameras, support time-dependent effects such as rolling shutter, and align our rendering formulation with 3DGRT [34]. The latter enables us to perform hybrid rendering and unlock secondary rays for lighting effects.
+
+Limitations and Future Work. Our method is significantly more efficient than ray-tracing-based methods [7, 30, 34], but it is still marginally slower than [18] (see details in Tab. 2). While our formulation is more general, the UT evaluation and the added complexity of evaluating particles in 3D impact rendering times. Additionally, although UT permits exact projection of sigma points under arbitrary distortions, the resulting projected shape deviates from a 2D Gaussian in the case of large distortions. This degrades the approximation of which particles contribute to which pixels. Finally, as our method still uses a single point to evaluate each primitive, it is currently unable to render overlapping Gaussians accurately. Approaches such as EVER [30] may offer promising directions for addressing this limitation.
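The substitution at the heart of this discussion, propagating a Gaussian through the nonlinear projection via sigma points rather than a Jacobian, is the classic Unscented Transform [16, 48]. The sketch below uses the standard Julier-Uhlmann weighting with a single `kappa` parameter; the paper's exact sigma-point parameterization may differ, so treat this as a generic reference implementation.

```python
import numpy as np

def unscented_transform(mu, cov, f, kappa=1.0):
    """Propagate a Gaussian N(mu, cov) through a nonlinear map f.

    Instead of linearizing f with its Jacobian (as EWA-style splatting
    does for the camera projection), we evaluate f exactly at 2n+1
    sigma points and recover the output mean and covariance from them.
    """
    n = mu.shape[0]
    # Columns of L are the offsets of the sigma points from the mean.
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])            # exact evaluations of f
    mean = (w[:, None] * ys).sum(axis=0)
    diff = ys - mean
    new_cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0)
    return mean, new_cov
```

For an affine map `f(x) = A @ x + b` the transform is exact (mean `A mu + b`, covariance `A cov A^T`), which is a convenient sanity check; the benefit over linearization only appears for genuinely nonlinear projections such as distorted or rolling-shutter cameras.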
Looking ahead, we hope that this work could inspire new research, particularly in fields like autonomous driving and robotics, where training and rendering with distorted cameras is essential. Our alignment with 3DGRT [34] also opens interesting opportunities for future research in inverse rendering and relighting. + +# 8. Acknowledgements + +We thank our colleagues Riccardo De Lutio, Or Perel, and Nicholas Sharp for their help in setting up experiments and for their valuable insights that helped us improve this work. + +# References + +[1] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. CVPR, 2022. 5, 6, 1, 2, 3 +[2] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Zip-nerf: Anti-aliased grid-based neural radiance fields. ICCV, 2023. 3, 6 +[3] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In CVPR, 2022. 2 +[4] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022. 2 +[5] Zhiqin Chen, Thomas Funkhouser, Peter Hedman, and Andrea Tagliasacchi. Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. In The Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 1, 2 +[6] Ziyu Chen, Jiawei Yang, Jiahui Huang, Riccardo de Lutio, Janick Martinez Esturo, Boris Ivanovic, Or Litany, Zan Gojcic, Sanja Fidler, Marco Pavone, Li Song, and Yue Wang. Omnire: Omni urban scene reconstruction. arXiv preprint arXiv:2408.16760, 2024. 2 +[7] Jorge Condor, Sebastien Speierer, Lukas Bode, Aljaz Bozic, Simon Green, Piotr Didyk, and Adrian Jarabo. 
Don't Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media, 2024. 2, 3, 5, 8 +[8] Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lucic, Richard Szeliski, and Jonathan T. Barron. Smerf: Streamable memory efficient radiance fields for real-time large-scene exploration, 2023. 2 +[9] Stephan J Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. Fastnerf: High-fidelity neural rendering at 200fps. arXiv preprint arXiv:2103.10380, 2021. 2 +[10] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. CVPR, 2024. 2 +[11] Antoine Guédon and Vincent Lepetit. Gaussian frosting: Editable complex radiance fields with real-time rendering. ECCV, 2024. 2 +[12] Fredrik Gustafsson and Gustaf Hendeby. Some relations between extended and unscented kalman filters. IEEE Transactions on Signal Processing, 60(2):545-555, 2012. 2 +[13] Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, and Marcus Magnor. Efficient perspective-correct 3d gaussian splatting using hybrid transparency, 2024. 5 +[14] Letian Huang, Jiayang Bai, Jie Guo, Yuanqi Li, and Yanwen Guo. On the error analysis of 3d gaussian splatting and an optimal projection strategy. arXiv preprint arXiv:2402.00752, 2024. 2, 3, 4 + +[15] Faris Janjos, Lars Rosenbaum, Maxim Dolgov, and J. Marius Zöllner. Unscented autoencoder, 2023. 3 +[16] Simon J. Julier and Jeffrey K. Uhlmann. New extension of the kalman filter to nonlinear systems. In *Defense, Security, and Sensing*, 1997. 2, 3 +[17] Simon J Julier, Jeffrey K Uhlmann, and Hugh F Durrant-Whyte. A new approach for filtering nonlinear systems. In Proceedings of 1995 American Control Conference-ACC'95, pages 1628-1632. IEEE, 1995. 
3
+[18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), 2023. 1, 2, 3, 4, 5, 6, 7, 8
+[19] Bernhard Kerbl, Andreas Meuleman, Georgios Kopanas, Michael Wimmer, Alexandre Lanvin, and George Drettakis. A hierarchical 3d gaussian representation for real-time rendering of very large datasets. ACM Transactions on Graphics, 43(4), 2024. 2
+[20] Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Jeff Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, and Kwang Moo Yi. 3d gaussian splatting as markov chain monte carlo. arXiv preprint arXiv:2404.09591, 2024. 2, 5
+[21] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 36(4), 2017. 5, 6, 3, 4
+[22] Georgios Kopanas, Julien Philip, Thomas Leimkuhler, and George Drettakis. Point-based neural rendering with per-view optimization. Computer Graphics Forum (Proceedings of the Eurographics Symposium on Rendering), 40(4), 2021. 1
+[23] Christoph Lassner and Michael Zollhofer. Pulsar: Efficient sphere-based neural rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1440-1449, 2021. 2
+[24] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3d gaussian representation for radiance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21719-21728, 2024. 2
+[25] Zimu Liao, Siyan Chen, Rong Fu, Yi Wang, Zhongling Su, Hao Luo, Linning Xu, Bo Dai, Hengjie Li, Zhilin Pei, et al. Fisheye-gs: Lightweight and extensible gaussian splatting module for fisheye cameras. arXiv preprint arXiv:2409.04751, 2024. 2, 4, 6, 7
+[26] Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, and Wenming Yang.
Vastgaussian: Vast 3d gaussians for large scene reconstruction. In CVPR, 2024. 2 +[27] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. NeurIPS, 2020. 2 +[28] Yang Liu, He Guan, Chuanchen Luo, Lue Fan, Junran Peng, and Zhaoxiang Zhang. Citygaussian: Real-time high-quality large-scale scene rendering with gaussians, 2024. 2 +[29] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d + +gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654-20664, 2024. 2, 5 +[30] Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T. Barron, and Yinda Zhang. Ever: Exact volumetric ellipsoid rendering for real-time view synthesis, 2024. 2, 3, 5, 6, 8 +[31] Saswat Subhajyoti Mallick, Rahul Goel, Bernhard Kerbl, Francisco Vicente Carrasco, Markus Steinberger, and Fernando De La Torre. Taming 3dgs: High-quality radiance fields with limited resources, 2024. 2 +[32] Marilena Maule, João Comba, Rafael Torchelsen, and Rui Bastos. Hybrid transparency. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, page 103-118, New York, NY, USA, 2013. Association for Computing Machinery. 5 +[33] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 1, 2 +[34] Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Ricardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, and Zan Gojcic. 3d gaussian ray tracing: Fast tracing of particle scenes. ACM Transactions on Graphics and SIGGRAPH Asia, 2024. 2, 3, 4, 5, 6, 7, 8, 1 +[35] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. 
ACM Trans. Graph., 41(4):102:1-102:15, 2022. 1, 2
+[36] Steven G. Parker, James Bigler, Andreas Dietrich, Heiko Friedrich, Jared Hoberock, David Luebke, David McAllister, Morgan McGuire, Keith Morley, Austin Robison, and Martin Stich. Optix: A general purpose ray tracing engine. ACM Trans. Graph., 29(4), 2010. 5
+[37] Lukas Radl, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, and Markus Steinberger. StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering. ACM Transactions on Graphics, 43(4), 2024. 2, 4, 5, 6, 3
+[38] Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, and Peter Hedman. Merf: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes. SIGGRAPH, 2023. 2
+[39] Gernot Riegler and Vladlen Koltun. Free view synthesis. In European Conference on Computer Vision, 2020. 1
+[40] Darius Rückert, Linus Franke, and Marc Stamminger. Adop: Approximate differentiable one-pixel point rendering. ACM Transactions on Graphics (ToG), 41(4):1-14, 2022. 1
+[41] Marco Salvi and Karthikeyan Vaidyanathan. Multi-layer alpha blending. Proceedings of the 18th meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2014. 5
+[42] Sara Fridovich-Keil and Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In CVPR, 2022. 2
+
+[43] Otto Seiskari, Jerry Yilammi, Valtteri Kaatrasalo, Pekka Rantalankila, Matias Turkulainen, Juho Kannala, and Arno Solin. Gaussian splatting on the move: Blur and rolling shutter compensation for natural camera motion, 2024. 2
+[44] Gopal Sharma, Daniel Rebain, Kwang Moo Yi, and Andrea Tagliasacchi. Volumetric rendering with baked quadrature fields. arXiv preprint arXiv:2312.02202, 2023. 2
+[45] Cheng Sun, Min Sun, and Hwann-Tzong Chen.
Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In CVPR, 2022. 2 +[46] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Yu Zhang, Jonathon Shlens, Zhifeng Chen, and Dragomir Anguelov. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5, 7, 2, 4 +[47] Haithem Turki, Vasu Agrawal, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Deva Ramanan, Michael Zollhöfer, and Christian Richardt. Hybridnerf: Efficient neural rendering via adaptive volumetric surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19647-19656, 2024. 2 +[48] Eric A Wan and Rudolph Van Der Merwe. The unscented kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 adaptive systems for signal processing, communications, and control symposium (Cat. No. 00EX373), pages 153-158. IEEE, 2000. 3, 4 +[49] Ziyu Wan, Christian Richardt, Aljaz Božić, Chao Li, Vijay Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan, et al. Learning neural duplex radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8307-8316, 2023. 2 +[50] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. NeurIPS, 2021. 1 +[51] Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, and Zan Gojcic. Adaptive shells for efficient neural radiance field rendering. 
ACM Transactions on Graphics (TOG), 42(6):1-15, 2023. 2
+[52] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. 1
+[53] Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, and Ben Mildenhall. Bakedsdf: Meshing neural sdfs for real-time view synthesis. arXiv, 2023. 1, 2
+[54] Zongxin Ye, Wenyu Li, Sidun Liu, Peng Qiao, and Yong Dou. Absgs: Recovering fine details for 3d gaussian splatting, 2024. 2
+
+[55] Chandan Yeshwanth, Yueh-Cheng Liu, Matthias Nießner, and Angela Dai. Scannet++: A high-fidelity dataset of 3d indoor scenes. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 5, 6, 7
+[56] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19447-19456, 2024. 5
+[57] Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa splatting. IEEE Transactions on Visualization and Computer Graphics, 8(3):223-238, 2002.
1, 3 \ No newline at end of file diff --git a/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/images.zip b/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..87463cc07247bca1bb1f8bc8e4dfa1e9e88d2bcf --- /dev/null +++ b/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c5075247b4b5c89a474734f7fb701a35f0a8c6cbc7349a6a104a2d458a7017f +size 750809 diff --git a/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/layout.json b/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..43d50e2f6eafdc1b66cb61958f5057ed839a7971 --- /dev/null +++ b/CVPR/2025/3DGUT_ Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9393d6d282a026ca5053a0e0f3f04ce094f3bb2374cf4f53e44b2f6046b491e4 +size 418929 diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_content_list.json b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9088373a74e22a914bab06ff39e4e3b1cbdd3f0c --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd5c8051f4f25adc969efeb80e62af9f596b235ee7c289701f511469112175f2 +size 85854 diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation 
via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_model.json b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_model.json new file mode 100644 index 0000000000000000000000000000000000000000..41f7b9b06fd7f9eb82dc36abec365e9cab147e29 --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d85fe6d67980d0e07f5bf158c00013014f5002b9ba095a5b3a02cbe1e40648f1 +size 108409 diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_origin.pdf b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d1cb8e76e860c6db24edc4c086b0c2e01fe53f22 --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/75fbc8e4-4797-48af-9122-0c9fec5fabaf_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2c505ac797a5fbbcc08044d8b378ab307df7484a77672cc4fcdc3b316cfe784 +size 6131381 diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/full.md b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5277d4f94d4b1a6a171c599336bf274063a21c06 --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/full.md @@ -0,0 +1,363 @@ +# 3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion + +Zhaoxi Chen $^{1}$ Jiaxiang Tang $^{2}$ Yuhao Dong $^{1,3}$ Ziang Cao $^{1}$ Fangzhou Hong $^{1}$ Yushi Lan $^{1}$ Tengfei Wang $^{3}$ Haozhe 
Xie $^{1}$
+
+Tong Wu $^{3,4}$ Shunsuke Saito Liang Pan $^{3}$ Dahua Lin $^{3,4\boxtimes}$ Ziwei Liu $^{1\boxtimes}$ $^{1}$ S-Lab, Nanyang Technological University $^{2}$ Peking University
+ $^{3}$ Shanghai AI Laboratory $^{4}$ The Chinese University of Hong Kong
+
+https://3dtopia.github.io/3DTopia-XL/
+
+![](images/aae0ac26bd9cabe98985dc213e98cb8f903122d4d36cfd6cdade490d2342db3c.jpg)
+Figure 1. 3DTopia-XL generates high-quality 3D assets with smooth geometry and spatially varied textures and materials. The output asset (GLB mesh) can be seamlessly ported into graphics engines for physically-based rendering.
+
+# Abstract
+
+The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed, geometric fidelity, and the lack of assets for physically based rendering (PBR). In this paper, we introduce 3DTopia-XL, a scalable native 3D generative model designed to overcome these limitations. 3DTopia-XL leverages a novel primitive-based 3D representation, PrimX, which encodes detailed shape, albedo, and material fields into a compact tensorial format, facilitating the modeling of high-resolution geometry with PBR assets. On top of the novel representation, we propose a generative framework based on the Diffusion Transformer (DiT), which comprises 1) Primitive Patch Compression and 2) Latent Primitive Diffusion. 3DTopia-XL learns to generate high-quality 3D assets from textual or visual inputs. Extensive qualitative and quantitative evaluations demonstrate that 3DTopia-XL significantly outperforms existing methods in generating high-quality 3D assets with fine-grained textures and materials, efficiently bridging the quality gap between generative models and real-world applications.
+
+# 1.
Introduction
+
+High-quality 3D assets are essential for various real-world applications like films, gaming, and virtual reality. However, creating high-quality 3D assets involves extensive manual labor and expertise. This further fuels the demand for automatic 3D content creation techniques, which generate 3D assets from visual or textual inputs using 3D generative models.
+
+Fortunately, rapid progress has been witnessed in the field of 3D generative models recently. Existing state-of-the-art techniques fall into three categories. 1) Methods based on Score Distillation Sampling (SDS) [37, 44] lift 2D diffusion priors into a 3D representation by per-scene optimization. However, these methods suffer from time-consuming optimization, poor geometry, and multifaceted inconsistency. 2) Methods based on sparse-view reconstruction [14, 54] leverage large models to regress 3D assets from single- or multi-view images. Most of these methods are built upon the triplane-NeRF [3] representation. However, due to the triplane's parameter inefficiency, the valid parameter space in those models is limited to low resolutions, leading to relatively low-quality 3D assets. Moreover, reconstruction-based models also suffer from a low-diversity problem, as they are deterministic methods. 3) Native 3D generative models [23, 58] aim to model the probabilistic distribution of 3D assets, generating 3D objects given input conditions. Yet, few of them are capable of generating high-quality 3D objects with Physically Based Rendering (PBR) assets, i.e., geometry, texture, and material packed into a GLB file.
+
+To address the limitations above, we propose 3DTopia-XL, a high-quality native 3D generative model for 3D assets at scale. Our key idea is scaling the powerful diffusion transformer [36] on top of a novel primitive-based 3D representation.
At the core of 3DTopia-XL is an efficient 3D representation, PrimX, which encodes the shape, albedo, and material of a textured mesh in a compact $N \times D$ tensor, enabling the modeling of high-resolution geometry with PBR assets. Specifically, we anchor $N$ primitives to positions sampled on the mesh surface. Each primitive is a tiny voxel, parameterized by its 3D position, a global scale factor, and a corresponding spatially varied payload for SDF, RGB, and material. Note that the proposed representation differentiates itself from the shape-only representation M-SDF [58] in that PrimX encodes shape, color, and material in a unified way. It also supports efficient differentiable rendering, opening up the potential to learn not only from 3D data but also from image collections. Moreover, we carefully design an initialization and fine-tuning strategy that enables PrimX to be rapidly tensorized from a textured mesh (GLB file), which is ten times faster than the triplane under the same setting.
+
+Thanks to the tensorial and compact PrimX, we scale 3D generative modeling using latent primitive diffusion with Transformers, where we treat each 3D object as a set of primitives. Specifically, the proposed 3D generation framework consists of two modules. 1) Primitive Patch Compression uses a 3D VAE for spatial compression of each individual primitive to obtain latent primitive tokens; and 2) Latent Primitive Diffusion leverages Diffusion Transformers (DiT) [36] to model the global correlation of latent primitive tokens for generative modeling. The significant efficiency of the proposed representation allows us to achieve high-resolution generative training within a clean and unified framework, without super-resolution to upscale the underlying 3D representation or post-hoc optimization-based mesh refinement.
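The $N \times D$ packing described above (per-primitive position, scale, and voxel payload flattened into one row) can be sketched as follows. The 6-channel payload split (1 SDF + 3 albedo + 2 material channels) and the per-primitive voxel resolution are our assumptions for illustration; the paper's exact channel layout may differ.

```python
import numpy as np

def pack_primx(positions, scales, payloads):
    """Pack N primitives into a flat N x D tensor.

    positions: (N, 3) anchor points sampled on the mesh surface
    scales:    (N, 1) per-primitive global scale factors
    payloads:  (N, a, a, a, C) tiny voxels; we assume C = 6 channels
               (1 SDF + 3 albedo RGB + 2 material), so D = 3 + 1 + C * a**3.
    """
    n = positions.shape[0]
    return np.concatenate([positions, scales, payloads.reshape(n, -1)], axis=1)

# e.g. N = 2048 primitives with a = 8 voxels per side:
x = pack_primx(
    np.zeros((2048, 3)), np.ones((2048, 1)), np.zeros((2048, 8, 8, 8, 6))
)
print(x.shape)  # (2048, 3076): D = 3 + 1 + 6 * 8**3
```

This row-per-primitive layout is what makes the representation "rapidly tensorizable": a set-of-primitives object becomes an ordinary 2D tensor that a Transformer can consume token by token.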
+
+In addition, we carefully design algorithms for high-quality 3D PBR asset extraction from PrimX, to ensure reversible transformations between PrimX and textured meshes. An issue for most 3D generation models [49, 54] is that they use vertex coloring to represent the object's texture, leading to a significant quality drop when exporting their generation results into mesh format. Thanks to the high-quality surface modeled by the Signed Distance Field (SDF) in PrimX, we propose to extract the 3D shape with zero-level contouring and sample texture and material values in a high-resolution UV space. This leads to high-quality asset extraction with considerably fewer vertices, ready to be packed into GLB format for downstream tasks.
+
+Extensive experiments are conducted both qualitatively and quantitatively to evaluate the effectiveness of our method in text-to-3D and image-to-3D tasks. Moreover, we conduct extensive ablation studies to motivate our design choices for a better efficiency-quality tradeoff in the context of generative modeling with PrimX. In conclusion, we summarize our contributions as follows: 1) We propose a novel 3D representation, PrimX, for high-quality 3D content creation, which is efficient, tensorial, and renderable. 2) We introduce a scalable generative framework, 3DTopia-XL, tailored for generating high-quality 3D assets with high-resolution geometry, texture, and materials. 3) We present practical techniques for asset extraction from the 3D representation that avoid a quality gap. 4) We demonstrate the superior quality and impressive applications of 3DTopia-XL for image-to-3D and text-to-3D tasks.
+
+# 2. Related Work
+
+Sparse-view Reconstruction Models. Recent advancements have focused on deterministic reconstruction methods that regress 3D assets from single- or multi-view images.
Large Reconstruction Model (LRM) [10, 14] has shown that end-to-end training of a triplane-NeRF [3] regression model scales well to large datasets and can be highly generalizable. Although it significantly accelerates generation, the generated 3D assets still exhibit relatively low quality due to representation inefficiency, and, as a deterministic method, it suffers from a low-diversity problem. Subsequent works have extended this method to improve generation quality. For example, using multi-view images [1, 15, 20, 43, 47, 49, 52, 54] generated by 2D diffusion models as the input can effectively enhance visual quality. However, the generative capability is actually provided by the frontend multi-view diffusion models [22, 25, 29, 42], which cannot produce multi-view images with accurate 3D consistency. Another direction is to use more efficient 3D representations such as Gaussian Splatting [4, 17, 45, 56, 59, 62] and triangular meshes [21, 27, 50, 61, 66]. However, few of them can generate high-quality PBR assets with sampling diversity.

Native 3D Diffusion Models. Similar to 2D diffusion models for image generation, efforts have been made to train 3D native diffusion models [7, 24, 39, 65] for conditional 3D generation. However, unlike the universal image representation in 2D, there are many different choices of 3D representation. Voxel-based methods [32] can be directly extended from 2D methods, but they are constrained by demanding memory usage and struggle to scale up to high-resolution data. Point-cloud-based methods [33, 34] are memory-efficient and can adapt to large-scale datasets, but they can hardly represent the watertight, solid surfaces of 3D assets. Implicit representations such as triplane-NeRF offer a better balance between memory and quality [2, 5, 7, 9, 16, 26, 35, 48]. There are also methods based on other representations such as meshes and primitives [6, 28, 55, 57, 58].
However, these methods still struggle with generalization or with producing high-quality assets. Recent methods attempt to adapt latent diffusion models to 3D [13, 19, 23, 46, 51, 60, 63, 64]. These methods first train a 3D compression model such as a VAE to encode 3D assets into a more compact form, which allows the diffusion model to train more effectively and show strong generalization. However, they either suffer from low-resolution results or are incapable of modeling PBR materials. In this paper, we propose a new 3D latent diffusion model on a novel representation, PrimX, which can be efficiently computed from a textured mesh and unpacked into high-resolution geometry with PBR materials.

# 3. Methodology

# 3.1. PrimX: An Efficient Representation for Shape, Texture, and Material

Before diving into details, we outline the following design principles for a 3D representation in the context of high-quality, large-scale 3D generative models: 1) Parameter-efficient: provides a good trade-off between approximation error and parameter count; 2) Rapidly tensorizable: can be efficiently transformed into a tensor, which facilitates generative modeling with modern neural architectures; 3) Differentiably renderable: compatible with a differentiable renderer, enabling learning from both 3D and 2D data.

Given the aforementioned principles, we propose a novel primitive-based 3D representation, namely PrimX, which represents the 3D shape, texture, and material of a textured mesh as a compact $N \times D$ tensor. It can be efficiently computed from a textured mesh (in GLB format) and directly rendered into 2D images via a differentiable rasterizer.

# 3.1.1. Definition

Preliminaries. Given a textured 3D mesh, we denote its 3D shape as $\mathcal{S} \subset \mathbb{R}^3$, where $\mathbf{x} \in \mathcal{S}$ are spatial points inside the occupancy of the shape, and $\mathbf{x} \in \partial \mathcal{S}$ are the points on the shape's boundary, i.e. the shape's surface.
We model the 3D shape via its SDF:

$$
F_{\mathcal{S}}^{\mathrm{SDF}}(\mathbf{x}) = \operatorname{sign}(\mathbf{x}, \mathcal{S}) \cdot d(\mathbf{x}, \partial \mathcal{S}), \tag{1}
$$

where $d(\mathbf{x}, \partial \mathcal{S}) = \min_{\mathbf{y} \in \partial \mathcal{S}} \|\mathbf{x} - \mathbf{y}\|_2$ is the distance to the surface, $\operatorname{sign}(\mathbf{x}, \mathcal{S}) = 1$ when $\mathbf{x} \in \mathcal{S}$, and $\operatorname{sign}(\mathbf{x}, \mathcal{S}) = -1$ when $\mathbf{x} \notin \mathcal{S}$. Moreover, given the neighborhood of the surface, $\mathcal{U}(\partial \mathcal{S}, \delta) = \{\mathbf{x} \mid d(\mathbf{x}, \partial \mathcal{S}) < \delta\}$, the space-varying color and material functions of the target mesh are defined as:

$$
F_{\mathcal{S}}^{\mathrm{RGB}}(\mathbf{x}) = \operatorname{sign}(\mathbf{x}, \mathcal{U}) \cdot C(\mathbf{x}), \quad F_{\mathcal{S}}^{\mathrm{Mat}}(\mathbf{x}) = \operatorname{sign}(\mathbf{x}, \mathcal{U}) \cdot \rho(\mathbf{x}), \tag{2}
$$

where $\operatorname{sign}(\mathbf{x}, \mathcal{U}) = 1$ when $\mathbf{x} \in \mathcal{U}$, and $\operatorname{sign}(\mathbf{x}, \mathcal{U}) = 0$ when $\mathbf{x} \notin \mathcal{U}$. Here $C(\mathbf{x}): \mathbb{R}^3 \to \mathbb{R}^3$ and $\rho(\mathbf{x}): \mathbb{R}^3 \to \mathbb{R}^2$ are the texture sampling functions that fetch albedo and material (metallic and roughness) from UV-aligned texture maps given the 3D point $\mathbf{x}$. Eventually, all shape, texture, and material information of a 3D mesh can be parameterized by the volumetric function $F_{\mathcal{S}} = (F_{\mathcal{S}}^{\mathrm{SDF}} \oplus F_{\mathcal{S}}^{\mathrm{RGB}} \oplus F_{\mathcal{S}}^{\mathrm{Mat}}): \mathbb{R}^{3} \to \mathbb{R}^{6}$, where $\oplus$ denotes concatenation.

PrimX Representation. We aim to approximate $F_{\mathcal{S}}$ with a neural volumetric function $F_{\mathcal{V}}: \mathbb{R}^{3} \to \mathbb{R}^{6}$ parameterized by an $N \times D$ tensor $\mathcal{V}$.
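As an aside, the sign convention of Eqs. (1)-(2) can be illustrated with a toy analytic sphere standing in for a mesh; note that under the paper's convention the SDF is positive inside the shape (this snippet is an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def sdf_sphere(x, radius=1.0):
    """Eq. (1) for an analytic unit sphere: sign(x, S) * d(x, dS).
    Per the paper's convention, sign is +1 inside and -1 outside."""
    r = np.linalg.norm(x, axis=-1)
    dist_to_surface = np.abs(r - radius)      # d(x, dS)
    sign = np.where(r <= radius, 1.0, -1.0)   # sign(x, S)
    return sign * dist_to_surface

pts = np.array([[0.0, 0.0, 0.0],   # center: deep inside
                [2.0, 0.0, 0.0],   # outside
                [0.5, 0.0, 0.0]])  # halfway to the surface
print(sdf_sphere(pts))  # [ 1.  -1.   0.5]
```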
For efficiency, our key insight is to define $F_{\mathcal{V}}$ as a set of $N$ volumetric primitives distributed on the surface of the mesh:

$$
\mathcal{V} = \left\{ \mathcal{V}_{k} \right\}_{k=1}^{N}, \quad \text{where } \mathcal{V}_{k} = \left\{ \mathbf{t}_{k}, s_{k}, \boldsymbol{X}_{k} \right\}. \tag{3}
$$

Each primitive $\mathcal{V}_k$ is a tiny voxel with a resolution of $a^3$, parameterized by its 3D position $\mathbf{t}_k \in \mathbb{R}^3$, a global scale factor $s_k \in \mathbb{R}^+$, and a corresponding spatially varied feature payload $\boldsymbol{X}_k \in \mathbb{R}^{a \times a \times a \times 6}$ within the voxel. Note that the payload in PrimX could be spatially varied features of any dimension. Our instantiation here is a six-channel local grid $\boldsymbol{X}_k = \{\boldsymbol{X}_k^{\mathrm{SDF}}, \boldsymbol{X}_k^{\mathrm{RGB}}, \boldsymbol{X}_k^{\mathrm{Mat}}\}$ that parameterizes SDF, RGB color, and material, respectively.

Inspired by Yariv et al. [58], where mosaic voxels are globally weighted to get a smooth surface, the approximation of a textured mesh is then defined as a weighted combination of primitives:

$$
F_{\mathcal{V}}(\mathbf{x}) = \sum_{k=1}^{N} \left[ w_{k}(\mathbf{x}) \cdot \mathcal{I}\left( \boldsymbol{X}_{k}, \left( \mathbf{x} - \mathbf{t}_{k} \right) / s_{k} \right) \right], \tag{4}
$$

where $\mathcal{I}(\boldsymbol{X}_{k}, \cdot)$ denotes trilinear interpolation over the voxel grid $\boldsymbol{X}_{k}$, evaluated at the local coordinate $(\mathbf{x} - \mathbf{t}_{k}) / s_{k}$. The weighting function $w_{k}(\mathbf{x})$

![](images/7b353f4d82b215fe569640900d59aa22b38a2cc406d1ef96c66a7d313023a495.jpg)
Figure 2. Illustration of PrimX. We propose to represent the 3D shape, texture, and material of a textured mesh as a compact $N\times D$ tensor (Sec. 3.1.1). We anchor $N$ primitives to the positions sampled on the mesh surface.
Each primitive $\mathcal{V}_k$ is a tiny voxel with a resolution of $a^3$, parameterized by its 3D position $\mathbf{t}_k\in \mathbb{R}^3$, a global scale factor $s_k\in \mathbb{R}^+$, and a corresponding spatially varied payload $\boldsymbol{X}_k\in \mathbb{R}^{a\times a\times a\times 6}$ for SDF, RGB, and material. This tensorial representation can be rapidly computed from a textured mesh within 1.5 minutes (Sec. 3.1.2).

of each primitive is defined as:

$$
w_{k}(\mathbf{x}) = \frac{\hat{w}_{k}(\mathbf{x})}{\sum_{j=1}^{N} \hat{w}_{j}(\mathbf{x})}, \tag{5}
$$

$$
\text{s.t.} \quad \hat{w}_{k}(\mathbf{x}) = \max\left(0, 1 - \left\| \frac{\mathbf{x} - \mathbf{t}_{k}}{s_{k}} \right\|_{\infty}\right). \tag{6}
$$

Once the payload of the primitives is determined, we can leverage a highly efficient differentiable renderer to turn PrimX into 2D images. Specifically, given a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ with camera origin $\mathbf{o}$ and ray direction $\mathbf{d}$, the corresponding pixel value $I$ is solved by the following integral:

$$
I = \int_{t_{\min}}^{t_{\max}} F_{\mathcal{V}}^{\mathrm{RGB}}(\mathbf{r}(t)) \frac{dT(t)}{dt} \, dt, \tag{7}
$$

where $T(t)$ is an exponential function of the SDF field representing the opacity field:

$$
T(t) = \int_{t_{\min}}^{t} \exp\left[ -\left( \frac{F_{\mathcal{V}}^{\mathrm{SDF}}(\mathbf{r}(u))}{\alpha} \right)^{2} \right] du. \tag{8}
$$

The hyperparameter $\alpha = 0.005$ controls the variance of the opacity field during this conversion.

To wrap up, the learnable parameters of a textured 3D mesh modeled by PrimX are the primitive positions $\mathbf{t} \in \mathbb{R}^{N \times 3}$, primitive scales $s \in \mathbb{R}^{N \times 1}$, and voxel payloads $\mathbf{X} \in \mathbb{R}^{N \times a^3 \times 6}$ for SDF, albedo, and material.
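As a side illustration, the tent-shaped blending weights of Eqs. (5)-(6) can be sketched in a few lines of numpy; the primitive centers and scales below are made-up toy values, not surface-sampled ones:

```python
import numpy as np

def primitive_weights(x, t, s):
    """Eqs. (5)-(6): un-normalized tent weight from the L-infinity distance
    in each primitive's local frame, then normalized over all primitives."""
    local = (x - t) / s[:, None]                               # (N, 3) local coords
    w_hat = np.maximum(0.0, 1.0 - np.abs(local).max(axis=1))   # Eq. (6)
    total = w_hat.sum()
    return w_hat / total if total > 0 else w_hat               # Eq. (5)

t = np.array([[0.05, 0.0, 0.0],    # toy primitive centers
              [0.50, 0.50, 0.50],
              [-0.08, 0.02, 0.0]])
s = np.array([0.1, 0.1, 0.1])      # toy scales
w = primitive_weights(np.zeros(3), t, s)
print(w.round(3))  # the far-away primitive gets zero weight; the rest sum to 1
```

Outside all primitives every $\hat{w}_k$ is zero, so $F_{\mathcal{V}}$ has compact support around the surface, which is exactly why primitives must cover the boundary neighborhood $\mathcal{U}$.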
Therefore, each textured mesh can be represented as a compact $N \times D$ tensor, where $D = 3 + 1 + a^3 \times 6$ by concatenation.

PBR Asset Extraction. Once PrimX is constructed, it encodes all geometry and appearance information of the target mesh within the $N \times D$ tensor. Now, we introduce our efficient algorithm to convert PrimX back into a textured mesh in GLB file format. For geometry, we can easily extract the corresponding 3D shape with the Marching Cubes algorithm [30] on the zero level set of $F_{\mathcal{V}}^{\mathrm{SDF}}$. For PBR texture maps, we first perform UV unwrapping in a high-resolution UV space $(1024 \times 1024)$. Then, we get sampling points in 3D and query $\{F_{\mathcal{V}}^{\mathrm{RGB}}, F_{\mathcal{V}}^{\mathrm{Mat}}\}$ to get the corresponding albedo and material values. Note that we mask the UV space to get the indices of valid vertices for efficient queries. Moreover, we dilate the UV texture maps and inpaint the dilated region with the nearest neighbors of existing textures, ensuring albedo and material maps blend smoothly outwards for anti-aliasing. Finally, we pack geometry, UV mapping, albedo, and material maps into a GLB file, which is ready for graphics engines and various downstream tasks.

# 3.1.2. Computing PrimX from Textured Mesh

In this section, we introduce our efficient fitting algorithm that computes PrimX from the input textured mesh in a short period of time, so that it is scalable to large-scale datasets for generative modeling. Given a textured 3D mesh $F_{\mathcal{S}}$, our goal is to compute PrimX such that $F_{\mathcal{V}}(\mathbf{x}) \approx F_{\mathcal{S}}(\mathbf{x})$ for all $\mathbf{x} \in \mathcal{U}(\partial \mathcal{S}, \delta)$. Our key insight is that the fitting process can be efficiently achieved via a good initialization followed by lightweight finetuning.

Initialization.
We assume all textured meshes are provided in GLB format, which contains triangular meshes, texture and material maps, and corresponding UV mappings. The vertices of the target mesh are first normalized to the unit cube. To initialize the positions of the primitives, we first apply uniform random sampling on the mesh surface to get $\hat{N}$ candidate initial points. Then, we perform farthest point sampling on this candidate point set to get $N$ valid initial positions for all primitives. This two-step initialization of positions ensures good coverage of $F_{\mathcal{V}}$ over the boundary neighborhood $\mathcal{U}$ while preserving high-frequency shape details as much as possible. Then, we compute the L2 distance from each primitive to its nearest neighbor and take this value as the initial scale factor for each primitive.

To initialize the payload of the primitives, we first compute candidate points in global coordinates using the initialized positions $\mathbf{t}_k$ and scales $s_k$ as $\mathbf{t}_k + s_k\mathbf{I}$ for each primitive,

![](images/c5d5ee4ea0ef58e753131062672075e8102ce31c1f1e6f46d1938025e304ad78.jpg)
Figure 3. Overview of 3DTopia-XL. As a native 3D diffusion model, 3DTopia-XL is built upon a novel 3D representation, PrimX (Sec. 3.1). This compact and expressive representation encodes the shape, texture, and material of a textured mesh efficiently, which allows modeling high-resolution geometry with PBR assets. Furthermore, this tensorial representation facilitates our patch-based compression using a primitive patch VAE (Sec. 3.2). We then use our novel latent primitive diffusion (Sec. 3.3) for 3D generative modeling, which operates the diffusion and denoising process on the set of latent PrimX, naturally compatible with Transformer-based neural architectures.

where $\mathbf{I}$ is the unit local voxel grid with a resolution of $a^3$. To initialize the SDF value, we query the SDF function converted from the 3D shape at each candidate point, i.e.
$\pmb{X}_k^{\mathrm{SDF}} = F_{\mathcal{S}}^{\mathrm{SDF}}(\mathbf{t}_k + s_k\pmb{I})$. Notably, it is non-trivial to obtain a robust conversion from an arbitrary 3D shape to a volumetric SDF function. Our implementation is based on efficient ray marching with a bounding volume hierarchy, which works well with non-watertight topology. To initialize the color and material values, we sample the corresponding albedo colors and material values from UV space using the geometric functions $F_{\mathcal{S}}^{\mathrm{RGB}}$ and $F_{\mathcal{S}}^{\mathrm{Mat}}$. Specifically, we compute the closest face and corresponding barycentric coordinates for each candidate point on the mesh, then interpolate UV coordinates and sample from the texture maps to get the value.

Finetuning. Although the initialization above offers a fairly good estimate of $F_{\mathcal{S}}$, a rapid finetuning process can further decrease the approximation error via gradient descent. Specifically, we optimize the well-initialized PrimX with a regression-based loss on SDF, albedo, and material values:

$$
\begin{aligned}
\mathcal{L}(\mathbf{x}; \mathcal{V}) = \; & \lambda_{\mathrm{SDF}} \left\| F_{\mathcal{S}}^{\mathrm{SDF}}(\mathbf{x}) - F_{\mathcal{V}}^{\mathrm{SDF}}(\mathbf{x}) \right\|_1 \\
& + \lambda \left( \left\| F_{\mathcal{S}}^{\mathrm{RGB}}(\mathbf{x}) - F_{\mathcal{V}}^{\mathrm{RGB}}(\mathbf{x}) \right\|_1 + \left\| F_{\mathcal{S}}^{\mathrm{Mat}}(\mathbf{x}) - F_{\mathcal{V}}^{\mathrm{Mat}}(\mathbf{x}) \right\|_1 \right),
\end{aligned} \tag{9}
$$

for all $\mathbf{x} \in \mathcal{U}$, where $\lambda_{\mathrm{SDF}}$ and $\lambda$ are loss weights. We employ a two-stage finetuning strategy: we optimize with $\lambda_{\mathrm{SDF}} = 10$ and $\lambda = 0$ for the first $1k$ iterations, and with $\lambda_{\mathrm{SDF}} = 0$ and $\lambda = 1$ for the second $1k$ iterations. More details are provided in the supplementary document.

# 3.2. Primitive Patch Compression

In this section, we introduce our lightweight patch-based compression on individual primitives, which turns 3D primitives into latent tokens for efficient generative modeling.

We use a variational autoencoder [18] (VAE) operating on local voxel patches, which compresses the payload of each primitive into latent tokens, i.e. $F_{\mathrm{ae}}: \mathbb{R}^D \to \mathbb{R}^d$. Specifically, the autoencoder $F_{\mathrm{ae}}$ consists of an encoder $E$ and a decoder $D$ built with 3D convolutional layers. The encoder $E$ has a downsampling rate of 48, compressing the voxel payload $\boldsymbol{X}_k \in \mathbb{R}^{a^3 \times 6}$ into the voxel latent $\hat{\boldsymbol{X}}_k \in \mathbb{R}^{(a/2)^3 \times 1}$. We train $F_{\mathrm{ae}}$ with a reconstruction loss:

$$
\mathcal{L}_{\mathrm{ae}}(\boldsymbol{X}; E, D) = \mathbb{E}\left[ \left\| \boldsymbol{X}_{k} - D(E(\boldsymbol{X}_{k})) \right\|_2 + \lambda_{\mathrm{kl}} \mathcal{L}_{\mathrm{kl}}(\boldsymbol{X}_{k}, E) \right], \tag{10}
$$

where $\lambda_{\mathrm{kl}}$ is the weight for KL regularization over the latent space. Note that unlike other works on 2D/3D latent diffusion models [40, 63] that perform global compression over all patches, our VAE compresses each local primitive patch independently and defers the modeling of global semantics and inter-patch correlation to the diffusion model. Once the VAE is trained, we can compress the raw PrimX as $\mathcal{V}_k = \{\mathbf{t}_k, s_k, E(\boldsymbol{X}_k)\}$. This leads to a low-dimensional parameter space for the diffusion model, $\mathcal{V} \in \mathbb{R}^{N \times d}$, where $d = 3 + 1 + (a / 2)^3$. In practice, this compact parameter space allows significantly more model parameters given a fixed computational budget, which is the key to scaling up 3D generative models in high resolution.

# 3.3. Latent Primitive Diffusion

On top of PrimX (Sec. 3.1) and the corresponding VAE (Sec.
3.2), the problem of 3D object generation is then converted to learning the distribution $p(\mathcal{V})$ over large-scale datasets. Our goal is to train a diffusion model [12] that takes random noise $\mathcal{V}^T$ and conditions $\mathbf{c}$ as input and predicts PrimX samples. Note that the target space for denoising is $\mathcal{V}^T \in \mathbb{R}^{N \times d}$, where $d = 3 + 1 + (a/2)^3$.

Specifically, the diffusion model learns to denoise $\mathcal{V}^T \sim \mathcal{N}(0, \mathbf{I})$ through denoising steps $\{\mathcal{V}^{T-1}, \dots, \mathcal{V}^0\}$ given the conditional signal $\mathbf{c}$. As a set of primitives, PrimX is naturally compatible with Transformer-based architectures, where we treat each primitive as a token. Moreover, the permutation equivariance of PrimX removes the need for any positional encoding in Transformers.

Our largest latent primitive diffusion model $g_{\Phi}$ is a 28-layer transformer, with cross-attention layers to incorporate conditional signals, self-attention layers for modeling inter-primitive correlations, and adaptive layer normalization to inject timestep conditions. The model $g_{\Phi}$ learns to predict at timestep $t$ given the input condition signal:

$$
g_{\Phi}\left(\mathcal{V}^{t}, t, \mathbf{c}\right) = \left\{ \mathrm{AdaLN}\left[ \mathrm{SelfAttn}\left( \mathrm{CrossAttn}\left( \mathcal{V}^{t}, \mathbf{c}, \mathbf{c} \right) \right), t \right] \right\}^{28}, \tag{11}
$$

where $\mathrm{CrossAttn}(\mathbf{q}, \mathbf{k}, \mathbf{v})$ denotes the cross-attention layer with query, key, and value as input, $\mathrm{SelfAttn}(\cdot)$ denotes the self-attention layer, and $\mathrm{AdaLN}(\cdot, t)$ denotes adaptive layer normalization layers that inject timestep conditioned

![](images/4b2f64c44d1057ec523297874ee101972afc2d00cccf57cb38f0970484fde1fa.jpg)
Figure 4. Evaluations of different 3D representations. We evaluate the effectiveness of different representations in fitting the ground truth's shape, texture, and material (right).
All representations are constrained to a budget of $1.05\mathrm{M}$ parameters. PrimX achieves the highest fidelity in terms of geometry and appearance while also offering significantly better runtime efficiency (Table 1).

Table 1. Quantitative evaluations of different 3D representations. We evaluate the approximation error of different representations for shape, texture, and material. All representations adhere to a parameter budget of $1.05\mathrm{M}$. PrimX shows the best fitting quality, especially for geometry (also shown in Figure 4), while having the fastest fitting runtime. The top three techniques are highlighted in red, orange, and yellow, respectively.
| Representation | Runtime | CD $\times 10^{-4}$ ↓ | PSNR-$F^{\mathrm{SDF}}$ ↑ | PSNR-$F^{\mathrm{RGB}}$ ↑ | PSNR-$F^{\mathrm{Mat}}$ ↑ |
|---|---|---|---|---|---|
| MLP | 14 min | 4.502 | 40.73 | 21.19 | 13.99 |
| MLP w/ PE | 14 min | 4.638 | 40.82 | 21.78 | 12.75 |
| Triplane | 16 min | 9.678 | 39.88 | 18.28 | 16.46 |
| Dense Voxels | 10 min | 7.012 | 41.70 | 20.01 | 15.98 |
| PrimX | 1.5 min | 1.310 | 41.74 | 21.86 | 16.50 |
modulation into the cross-attention, self-attention, and feed-forward layers. Moreover, we employ the pre-normalization scheme [53] for training stability. For noise scheduling, we use 1,000 discrete noise steps with a cosine scheduler during training. We opt for "v-prediction" [41] with Classifier-Free Guidance (CFG) [11] as the training objective for better conditional generation quality and faster convergence:

$$
\mathcal{L}_{\mathrm{diff}}(\Phi) = \mathbb{E}_{t \sim [1, T], \mathcal{V}^{0}, \mathcal{V}^{t}} \left[ \left\| \left( \sqrt{\bar{\alpha}_{t}} \, \epsilon - \sqrt{1 - \bar{\alpha}_{t}} \, \mathcal{V}^{0} \right) - g_{\Phi}\left( \mathcal{V}^{t}, t, \bar{\mathbf{c}}(b) \right) \right\|_{2}^{2} \right], \tag{12}
$$

where $\epsilon$ is the noise sampled from a Gaussian distribution, $\bar{\alpha}_t = \prod_{i=0}^t (1 - \beta_i)$, and $\beta_t$ comes from our cosine beta scheduler. Here, $b \sim \mathcal{B}(p_0)$ is a random variable sampled from a Bernoulli distribution, taking the values 0 and 1 with probabilities $p_0$ and $1 - p_0$, respectively. The condition signal under CFG is defined as $\bar{\mathbf{c}}(b) = b \cdot \mathbf{c} + (1 - b) \cdot \varnothing$, where $\varnothing$ is the learnable embedding for unconditional generation.

# 4. Experiments

# 4.1. Representation Evaluation

Evaluation Protocol. We first evaluate different designs of 3D representations in the context of 3D generative modeling. Our evaluation principles focus on two aspects: 1) runtime from GLB mesh to the representation, and 2) approximation error for shape, texture, and material given a fixed computational budget. Given 30 GLB meshes randomly sampled from our training dataset, we take the average fitting time till convergence as the runtime, measured on an A100 GPU. For geometry quality, we evaluate the Chamfer Distance (CD) and the Peak Signal-to-Noise Ratio (PSNR) of SDF values between ground truth meshes and fittings.
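As a reference for the geometry metric, here is a minimal sketch of one common symmetric Chamfer Distance formulation; the paper does not spell out its exact variant, so treat this as an assumption:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point sets, in the
    mean-of-squared-nearest-neighbor form (one standard convention;
    the paper's exact variant is not specified)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise distances
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))                      # identical sets -> 0.0
print(chamfer_distance(a, a + [0.1, 0.0, 0.0]))    # small shift -> 0.02
```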
For appearance quality, we evaluate the PSNR of albedo and material values. All metrics are computed on a set of 500k points sampled near the shape surface.

Baselines. Given our final hyperparameters of PrimX, where $N = 2048$ and $a = 8$, we fix the number of parameters of all representations to $2048 \times 8^3 \approx 1.05\mathrm{M}$ for comparison. We compare five alternative representations: 1) MLP: a pure Multi-Layer Perceptron with 3 layers and 1024 hidden dimensions; 2) MLP w/ PE: the MLP baseline with Positional Encoding (PE) [31] applied to the input coordinates; 3) Triplane [3]: three orthogonal 2D planes with a resolution of $128 \times 128$ and 16 channels, followed by a two-layer MLP decoder with 512 hidden dimensions; 4) Dense Voxels: a dense 3D voxel grid with a resolution of $100 \times 100 \times 100$; 5) Sparse Points: 2048 latent points followed by a three-layer MLP as the decoder. All methods are trained with the same objective (Eq. 9) and point sampling strategy as ours.

Results. Quantitative results are presented in Table 1, which shows that PrimX achieves the least approximation error among all methods, especially for geometry (indicated by CD). Besides the best quality, the proposed representation demonstrates significant runtime efficiency, with nearly 7 times faster convergence compared with the second best, making it scalable to large-scale datasets. Figure 4 shows qualitative comparisons. MLP-based methods have periodic artifacts, especially in geometry. Triplane and dense voxels yield bumpy surfaces and grid artifacts. In contrast, PrimX produces the best quality, with smooth geometry and fine-grained details like the tapering beard.

# 4.2. Image-to-3D Generation

Comparison Methods. We run evaluations against two types of methods: 1) sparse-view reconstruction models, and 2) image-conditioned diffusion models.
+ +![](images/6834e03ff360d3f9a03dc4f92119d2121c6a3774b65c7c73aab8358c9cc66201.jpg) +HDRI for Rendering + +![](images/07bf48da46757f4ca1884c70530a56104ec53f463d6b6329477777ec04a3e07f.jpg) + +![](images/78c21e03685ef300f3891f75bb02b34cb2845535726b56f02312c3db7745a271.jpg) + +![](images/25840295fa4fae42b00f0f63c88ca21b5ee7e77057ddd0f00fc80ca1cb97a29a.jpg) + +![](images/198ea8298f6b8343d6b46d21f2e9f16bb5b41a16196d2dd0960744776c26621f.jpg) + +![](images/e7842bd02c464598a8c5ce25fd17cdf389089acbb4fc8fff04ceeeb26937486d.jpg) + +![](images/71ba81eede00b27efaa1f34b816543378b784785238b74475c7756ec324404bf.jpg) +HDRI for Rendering + +![](images/42d217974da91e4e5d5d7e12088f570eec8759c93413bdc860360442da07b13d.jpg) + +![](images/89bd158419b9f98b4a6c2e21bb314b4582ed5085139b12db63f728038a78daf4.jpg) + +![](images/a8fc5b6353ff067fbac52326570970e52bbc5353c035ac9086c7164eeb9de4db.jpg) + +![](images/c52f591e2803a534e31fa1e8f8642d439700c53da45c6aa23622029d81f1af06.jpg) + +![](images/50962628682a4adc71f143fa85573ebc4dbab55bf6b367a19ff56e773cbbdbf1.jpg) + +![](images/230830bff556f3512e2be492fc95b88f68debf938151abca7a551536f04ba211.jpg) +HDRI for Rendering + +![](images/7a27dd1a53d8e5cf710fa0cf901fc927f110c6146a5756c58f75e6ddb03b184f.jpg) +Input Image +LGM +InstantMesh +Real3D +CRM +CraftsMan +ShapE +LN3Diff +Figure 5. Image-to-3D comparisons. For each method, we take the textured mesh predicted from the input image into Blender and render it with the target environment map. We compare our single-view conditioned model with sparse-view reconstruction models and image-conditioned diffusion models. 3DTopia-XL achieves the best visual and geometry quality. Thanks to our capability to generate spatially varied PBR assets shown on the rightmost, our generated mesh can also produce vivid reflectance with specular highlights and glossiness. 
![](images/99233948d9c180625d478404cae9a3d4f3b6d37053779e04191e94e93780e15c.jpg)
Ours (Metallic / Roughness shown on the rightmost)

The reconstruction-based models, like LGM [45], InstantMesh [54], Real3D [15], and CRM [49], are deterministic methods that learn to reconstruct 3D objects from four or six input views. They enable single-view-to-3D synthesis by leveraging pretrained diffusion models [22, 42] to generate multiple views from the single input image. However, those methods may suffer from multi-view inconsistency caused by the frontend 2D diffusion models. The feed-forward diffusion models, like CraftsMan [23], Shap-E [16], and LN3Diff [19], are probabilistic methods that learn to generate 3D objects given input image conditions. All of the methods above model only shape and color without considering roughness and metallic, while our method produces those assets with sampling diversity. To fairly compare quality in real-world use cases, we place the output mesh of each method into Blender [8] and render it with an environment map. For methods that cannot produce PBR materials, we assign the default diffuse material.

Results. Figure 5 shows qualitative results. Existing reconstruction-based models fail to produce good results, as they suffer from multi-view inconsistency and cannot support spatially varied materials. Moreover, most reconstruction models are built upon the triplane representation, which is not parameter-efficient. This downside limits the spatial resolution of the underlying 3D representation, leading to the bumpy surfaces indicated by the rendered normals. On the other hand, existing 3D diffusion models fail to generate objects that are visually aligned with the input condition. While CraftsMan is the only method with surface quality comparable to ours, it is only capable of generating 3D shapes without textures and materials. In contrast, 3DTopia-XL achieves the best visual and geometry quality among all methods. Thanks to our capability to generate spatially varied PBR assets, our generated meshes produce vivid reflectance with specular highlights even under harsh environmental illumination.

Table 2. Analysis of number ($N$) and resolution ($a$) of PrimX.

| # Primitives | Resolution | # Parameters | PSNR-$F^{\mathrm{SDF}}$ ↑ | PSNR-$F^{\mathrm{RGB}}$ ↑ | PSNR-$F^{\mathrm{Mat}}$ ↑ |
|---|---|---|---|---|---|
| $N$ = 64 | $a^3 = 32^3$ | 2.10M | 61.05 | 22.18 | 18.10 |
| $N$ = 256 | $a^3 = 16^3$ | 1.05M | 59.05 | 23.50 | 18.61 |
| $N$ = 512 | $a^3 = 8^3$ | 0.26M | 59.57 | 22.58 | 18.50 |
| $N$ = 512 | $a^3 = 16^3$ | 2.10M | 62.89 | 23.92 | 18.21 |
| $N$ = 2048 | $a^3 = 8^3$ | 1.05M | 62.52 | 24.23 | 18.53 |

# 4.3. Text-to-3D Generation

We conduct quantitative evaluations against native text-to-3D generative models, which achieve text-to-3D generation without text-to-multiview diffusion models. Given a set of unseen text prompts, we take the CLIP Score [38] as the evaluation metric, following previous work [13, 16]. We mainly compare two methods with open-source implementations: Shap-E [16] and 3DTopia [13]. As shown in Table 4, our method achieves better alignment between the input text and renderings of the generated asset. We defer the qualitative results to the supplementary.

![](images/5ba94a70fade7dc4e6e4a950ac377d595d73c80711a7372243f7df2f8e9de124.jpg)
Figure 6. Ablation studies of the number and resolution of primitives. Our final setting $(N = 2048, a = 8)$ has the optimal approximation quality of ground truth, especially for fine-grained details like thin rotor blades.

![](images/5b84465d8f06bb7ef9ac4d5ffd0038abfb1a456f99e6ba3106282e7009b6a82c.jpg)

# 4.4. Further Analysis

Number and Resolution of Primitives. As a structured and serialized 3D representation, the number of primitives and the resolution of each primitive are two critical factors for the efficiency-quality tradeoff in PrimX.
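As a quick sanity check on Table 2, the parameter budgets follow from counting $N \times a^3$ payload cells (ignoring the small per-primitive position and scale overhead, matching the paper's $2048 \times 8^3 \approx 1.05\mathrm{M}$ budget convention):

```python
# Reproduce the "# Parameters" column of Table 2 by counting payload cells
# (N primitives x a^3 voxels; the +4 per-primitive position/scale overhead
# is ignored, as in the paper's 2048 x 8^3 ~ 1.05M budget).
configs = [(64, 32), (256, 16), (512, 8), (512, 16), (2048, 8)]
for n, a in configs:
    print(f"N={n:4d}, a={a:2d} -> {n * a**3 / 1e6:.2f}M")
```

Running this reproduces 2.10M, 1.05M, 0.26M, 2.10M, and 1.05M, i.e. the budgets listed in the table.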
More and larger primitives often lead to better approximation quality. However, this results in a longer set length and deeper feature dimensions, making long-context attention computation inefficient and the diffusion model harder to train. Therefore, we explore the impact of the number and resolution of primitives under different parameter budgets. We evaluate the PSNR of SDF, albedo, and material values on 500k points sampled near the surface. As shown in Table 2 and Figure 6, given a fixed parameter count, a larger set of compact primitives leads to better approximations. Furthermore, a longer sequence increases the GFLOPs of the DiT, which also leads to better generation quality, as shown in the supplementary. Therefore, we use a large set of primitives with a relatively small local resolution.

Patch Compression Rate. The spatial compression rate of our primitive patch-based VAE (Sec. 3.2) is also an important design choice. Empirically, a higher compression rate leads to a more efficient latent diffusion model but risks information loss. Therefore, we analyze different compression rates given two set lengths, $N = 256$ and $N = 2048$, with the same parameter count of PrimX. We measure the PSNR between the VAE's output and input on 1k random samples. Table 3 shows the results, where the final choice of $N = 2048$ with compression rate $f = 48$ achieves the optimal VAE reconstruction. The setting with $N = 256$, $f = 48$ has the same compression rate but lower reconstruction quality and a higher-resolution latent space, which we find hampers the convergence of the

Table 3. Analysis of different compression rates for VAE. The scalar $f$ stands for the compression rate between the input to the VAE and the latent code (i.e. the ratio between the 2nd and 3rd columns of the table).

Table 4. Text-to-3D Evaluations. We evaluate the CLIP Score between input prompts and front-view renderings of output 3D assets.
| # Primitives | VAE input | Latent | $f$ | PSNR ↑ |
|---|---|---|---|---|
| $N$ = 256 | $6 \times 16^3$ | $6 \times 4^3$ | 64 | 22.92 |
| $N$ = 256 | $6 \times 16^3$ | $1 \times 4^3$ | 384 | 19.80 |
| $N$ = 256 | $6 \times 16^3$ | $1 \times 8^3$ | 48 | 23.33 |
| $N$ = 2048 | $6 \times 8^3$ | $1 \times 2^3$ | 384 | 18.48 |
| $N$ = 2048 | $6 \times 8^3$ | $1 \times 4^3$ | 48 | 24.51 |
+ +
| Methods | CLIP Score ↑ |
|---|---|
| Shap-E | 21.98 |
| 3DTopia | 22.54 |
| Ours | 24.33 |
+ +![](images/c254a4a20fb395a13305b8c57b9e15c5ae5289daf2a75974e6a2a3277e638ced.jpg) +Figure 7. 3D Generative Applications. a) 3D Inpainting for image-to-3D and b) Interpolation between text-to-3D samples. + +latent primitive diffusion model $g_{\Phi}$ + +Inpainting and Interpolation. We show 3D generative applications like 3D inpainting and interpolation in Figure 7, showing the unique property of 3D native diffusion model compared to reconstruction methods. + +Besides the ablation studies above, we also analyze 1) the model scaling, 2) the sampling diversity, 3) PrimX initialization, 4) user study, 5) more comparisons and 6) more visual results, which are deferred to the supplementary due to the space limit. + +# 5. Conclusion + +We present 3DTopia-XL, a native 3D diffusion model for PBR asset generation given textual or visual inputs. Central to our approach is PrimX, an innovative primitive-based 3D representation that is parameter-efficient, tensorial, and renderable. It encodes shape, albedo, and material into a compact $N \times D$ tensor, enabling the modeling of high-resolution geometry with PBR assets. On top of PrimX, we propose Latent Primitive Diffusion for scalable 3D generative models, together with practical techniques to export PBR assets ready for graphics pipelines. + +Acknowledgement. This study is supported by the National Key R&D Program of China No.2022ZD0160102. This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-08-019), the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012, MOE-T2EP20223-0002), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). + +# References + +[1] Mark Boss, Zixuan Huang, Aaryaman Vasishta, and Varun Jampani. 
Sf3d: Stable fast 3d mesh reconstruction with uv-unwrapping and illumination disentanglement, 2024. 3 +[2] Ziang Cao, Fangzhou Hong, Tong Wu, Liang Pan, and Ziwei Liu. Large-vocabulary 3d diffusion model with transformer. arXiv preprint arXiv:2309.07920, 2023. 3 +[3] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123-16133, 2022. 2, 6 +[4] Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, and Andreas Geiger. Lara: Efficient large-baseline radiance fields. arXiv preprint arXiv:2407.04699, 2024. 3 +[5] Hansheng Chen, Jiatao Gu, Anpei Chen, Wei Tian, Zhuowen Tu, Lingjie Liu, and Hao Su. Single-stage diffusion nerf: A unified approach to 3d generation and reconstruction. arXiv preprint arXiv:2304.06714, 2023. 3 +[6] Zhaoxi Chen, Fangzhou Hong, Haiyi Mei, Guangcong Wang, Lei Yang, and Ziwei Liu. Primdiffusion: Volumetric primitives diffusion for 3d human generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. 3 +[7] Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alexander G Schwing, and Liang-Yan Gui. Sdfusion: Multimodal 3d shape completion, reconstruction, and generation. In CVPR, pages 4456-4465, 2023. 3 +[8] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. 7 +[9] Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Oğuz. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. 3 +[10] Zexin He and Tengfei Wang. Openlrm: Open-source large reconstruction models. https://github.com/3DTopia/OpenLRM, 2023. 2 +[11] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
6 +[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 33:6840-6851, 2020. 5 + +[13] Fangzhou Hong, Jiaxiang Tang, Ziang Cao, Min Shi, Tong Wu, Zhaoxi Chen, Tengfei Wang, Liang Pan, Dahua Lin, and Ziwei Liu. 3dtopia: Large text-to-3d generation model with hybrid diffusion priors. arXiv preprint arXiv:2403.02234, 2024. 3, 8 +[14] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. arXiv preprint arXiv:2311.04400, 2023. 2 +[15] Hanwen Jiang, Qixing Huang, and Georgios Pavlakos. Real3d: Scaling up large reconstruction models with real-world images, 2024. 3, 7 +[16] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3, 7, 8 +[17] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ToG, 42(4):1-14, 2023. 3 +[18] Diederik P Kingma. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 5 +[19] Yushi Lan, Fangzhou Hong, Shuai Yang, Shangchen Zhou, Xuyi Meng, Bo Dai, Xingang Pan, and Chen Change Loy. Ln3diff: Scalable latent neural fields diffusion for speedy 3d generation. arXiv preprint arXiv:2403.12019, 2024. 3, 7 +[20] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. arXiv preprint arXiv:2311.06214, 2023. 3 +[21] Mengfei Li, Xiaoxiao Long, Yixun Liang, Weiyu Li, Yuan Liu, Peng Li, Xiaowei Chi, Xingqun Qi, Wei Xue, Wenhan Luo, et al. M-lrm: Multi-view large reconstruction model. arXiv preprint arXiv:2406.07648, 2024. 3 +[22] Peng Li, Yuan Liu, Xiaoxiao Long, Feihu Zhang, Cheng Lin, Mengfei Li, Xingqun Qi, Shanghang Zhang, Wenhan Luo, Ping Tan, et al. 
Era3d: High-resolution multiview diffusion using efficient row-wise attention. arXiv preprint arXiv:2405.11616, 2024. 3, 7 +[23] Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan, and Xiaoxiao Long. Craftsman: High-fidelity mesh generation with 3d native generation and interactive geometry refiner. arXiv preprint arXiv:2405.14979, 2024. 2, 3, 7 +[24] Yuhan Li, Yishun Dou, Xuanhong Chen, Bingbing Ni, Yilin Sun, Yutian Liu, and Fuzhen Wang. 3dqd: Generalized deep 3d shape prior via part-discretized diffusion process, 2023. 3 +[25] Minghua Liu, Ruoxi Shi, Linghao Chen, Zhuoyang Zhang, Chao Xu, Xinyue Wei, Hansheng Chen, Chong Zeng, Jiayuan Gu, and Hao Su. One-2-3-45++: Fast single image to 3d objects with consistent multi-view generation and 3d diffusion. arXiv preprint arXiv:2311.07885, 2023. 3 +[26] Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Zexiang Xu, Hao Su, et al. One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. arXiv preprint arXiv:2306.16928, 2023. 3 +[27] Minghua Liu, Chong Zeng, Xinyue Wei, Ruoxi Shi, Linghao Chen, Chao Xu, Mengqi Zhang, Zhaoning Wang, Xiaoshuai + +Zhang, Isabella Liu, Hongzhi Wu, and Hao Su. Meshformer: High-quality mesh generation with 3d-guided reconstruction model. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. 3 +[28] Zhen Liu, Yao Feng, Michael J Black, Derek Nowrouzezahrai, Liam Paull, and Weiyang Liu. Meshdiffusion: Score-based generative 3d mesh modeling. arXiv preprint arXiv:2303.08133, 2023. 3 +[29] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3d: Single image to 3d using cross-domain diffusion. arXiv preprint arXiv:2310.15008, 2023. 3 +[30] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. 
In Seminal graphics: pioneering efforts that shaped the field, pages 347-353, 1998. 4 +[31] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 6 +[32] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. In CVPR, pages 4328-4338, 2023. 3 +[33] Charlie Nash and Christopher KI Williams. The shape variational autoencoder: A deep generative model of part-segmented 3d objects. In Computer Graphics Forum, pages 1-12. Wiley Online Library, 2017. 3 +[34] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 3 +[35] Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc Van Gool, and Sergey Tulyakov. Autodecoding latent 3d diffusion models. arXiv preprint arXiv:2307.05445, 2023. 3 +[36] William Peebles and Saining Xie. Scalable diffusion models with transformers. arXiv preprint arXiv:2212.09748, 2022. 2 +[37] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. 2 +[38] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763. PMLR, 2021. 8 +[39] Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, and Francis Williams. Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 3 +[40] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 5 + +[41] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512, 2022. 6 +[42] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 3, 7 +[43] Yawar Siddiqui, Tom Monnier, Filippos Kokkinos, Mahendra Kariya, Yanir Kleiman, Emilien Garreau, Oran Gafni, Natalia Neverova, Andrea Vedaldi, Roman Shapovalov, et al. Meta 3d assetgen: Text-to-mesh generation with high-quality geometry, texture, and pbr materials. arXiv preprint arXiv:2407.02445, 2024. 3 +[44] Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. arXiv preprint arXiv:2309.16653, 2023. 2 +[45] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. Lgm: Large multi-view gaussian model for high-resolution 3d content creation. arXiv preprint arXiv:2402.05054, 2024. 3, 7 +[46] Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, and Baining Guo. Volumediffusion: Flexible text-to-3d generation with efficient volumetric encoder. arXiv preprint arXiv:2312.11459, 2023. 3 +[47] Peng Wang, Hao Tan, Sai Bi, Yinghao Xu, Fujun Luan, Kalyan Sunkavalli, Wenping Wang, Zexiang Xu, and Kai Zhang. Pf-lrm: Pose-free large reconstruction model for joint pose and shape prediction. arXiv preprint arXiv:2311.12024, 2023. 3 +[48] Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, et al. Rodin: A generative model for sculpting 3d digital avatars using diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4563-4573, 2023. 
3 +[49] Zhengyi Wang, Yikai Wang, Yifei Chen, Chengdong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, and Jun Zhu. Crm: Single image to 3d textured mesh with convolutional reconstruction model. arXiv preprint arXiv:2403.05034, 2024. 2, 3, 7 +[50] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality mesh. arXiv preprint arXiv:2404.12385, 2024. 3 +[51] Shuang Wu, Youtian Lin, Feihu Zhang, Yifei Zeng, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. arXiv preprint arXiv:2405.14832, 2024. 3 +[52] Desai Xie, Sai Bi, Zhixin Shu, Kai Zhang, Zexiang Xu, Yi Zhou, Soren Pirk, Arie Kaufman, Xin Sun, and Hao Tan. Lrm-zero: Training large reconstruction models with synthesized data. arXiv preprint arXiv:2406.09371, 2024. 3 +[53] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524-10533. PMLR, 2020. 6 + +[54] Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan. Instantmesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models. arXiv preprint arXiv:2404.07191, 2024. 2, 3, 7 +[55] Xiang Xu, Joseph Lambourne, Pradeep Jayaraman, Zhengqing Wang, Karl Willis, and Yasutaka Furukawa. Brepgen: A b-rep generative diffusion model with structured latent geometry. ACM Transactions on Graphics (TOG), 43 (4):1-14, 2024. 3 +[56] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. Grm: Large gaussian reconstruction model for efficient 3d reconstruction and generation. arXiv preprint arXiv:2403.14621, 2024. 3 +[57] Xingguang Yan, Han-Hung Lee, Ziyu Wan, and Angel X Chang. 
An object is worth 64x64 pixels: Generating 3d object via image diffusion. arXiv preprint arXiv:2408.03178, 2024.3 +[58] Lior Yariv, Omri Puny, Natalia Neverova, Oran Gafni, and Yaron Lipman. Mosaic-sdf for 3d generative models. arXiv preprint arXiv:2312.09222, 2023. 2, 3 +[59] Xuanyu Yi, Zike Wu, Qiuhong Shen, Qingshan Xu, Pan Zhou, Joo-Hwee Lim, Shuicheng Yan, Xinchao Wang, and Hanwang Zhang. Mvgamba: Unify 3d content generation as state space sequence modeling. arXiv preprint arXiv:2406.06367, 2024. 3 +[60] Biao Zhang, Jiapeng Tang, Matthias Niessner, and Peter Wonka. 3dshape2vecset: A 3d shape representation for neural fields and generative diffusion models. arXiv preprint arXiv:2301.11445, 2023. 3 +[61] Chubin Zhang, Hongliang Song, Yi Wei, Yu Chen, Jiwen Lu, and Yansong Tang. Geolrm: Geometry-aware large reconstruction model for high-quality 3d gaussian generation. arXiv preprint arXiv:2406.15333, 2024. 3 +[62] Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. Gs-lrm: Large reconstruction model for 3d gaussian splatting. arXiv preprint arXiv:2404.19702, 2024. 3 +[63] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. arXiv preprint arXiv:2406.13897, 2024. 3, 5 +[64] Zibo Zhao, Wen Liu, Xin Chen, Xianfang Zeng, Rui Wang, Pei Cheng, Bin Fu, Tao Chen, Gang Yu, and Shenghua Gao. Michelangelo: Conditional 3d shape generation based on shape-image-text aligned latent representation. arXiv preprint arXiv:2306.17115, 2023. 3 +[65] Xin-Yang Zheng, Hao Pan, Peng-Shuai Wang, Xin Tong, Yang Liu, and Heung-Yeung Shum. Locally attentional sdf diffusion for controllable 3d shape generation. ACM Transactions on Graphics (SIGGRAPH), 42(4), 2023. 3 +[66] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. 
Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. arXiv preprint arXiv:2312.09147, 2023. 3 \ No newline at end of file diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/images.zip b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f8441c4c7184a3c7c7cc4d997a280c39970814dd --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a0622f0e7c19a6646e18b7940baa6c62a09c0bcb16d5d343c77747b70d047a9 +size 748118 diff --git a/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/layout.json b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..87ea430e490ecb1b9355c8c1d067d40a59f35825 --- /dev/null +++ b/CVPR/2025/3DTopia-XL_ Scaling High-quality 3D Asset Generation via Primitive Diffusion/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9967e56ffed3c356a4de4154817d786f700aaa2acfa27fe43176b290cd60e9b9 +size 489545 diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_content_list.json b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..69b8351c900b191cfec8aaa12d2ab4c1ce7fff69 --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_content_list.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:0aea5263cfcd94c78ad6b66641fc090c001db8f0a70b70a60116d1b49b9a217a +size 79366 diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_model.json b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2da2f253665afe29130d6a6032427be52b43565e --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e69924b467c722bc29e8471a59e78d81f7b3d707c774914c5a6832f60d935a9e +size 99953 diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_origin.pdf b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..55e7e04b9974e6cda1b684549f161903a71262b8 --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/402df206-80ec-4c4d-b554-9a26536c90ad_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0dca7fa525d283acc2c9234ec93615964c011af32abf9db81ca3d7e7d851e11f +size 4658546 diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/full.md b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..036719ad3dfcc19dc655f18314b51ae74e910e05 --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/full.md 
@@ -0,0 +1,306 @@ +# 4D LangSplat: 4D Language Gaussian Splatting via Multimodal Large Language Models + +Wanhua Li $^{1, *}$ , Renping Zhou $^{1,2, *}$ , Jiawei Zhou $^{3}$ , Yingwei Song $^{1,4}$ , Johannes Herter $^{1,5}$ , Minghan Qin $^{2}$ , Gao Huang $^{2}$ , Hanspeter Pfister $^{1}$ + +$^{1}$ Harvard University $^{2}$ Tsinghua University $^{3}$ Stony Brook University $^{4}$ Brown University $^{5}$ ETH Zürich + +Project page: https://4d-langsplat.github.io/ + +![](images/03c34551186079501d8f52a1126de2260113591b72bd7b2305dbb9ca65e22122.jpg) +Figure 1. Visualization of the learned language features of our 4D LangSplat. We observe that 4D LangSplat effectively learns dynamic semantic features that change over time, such as the gradual diffusion of coffee shown in the first two rows, and the "chicken" toggling between open and closed states in the latter two rows. Additionally, our semantic field captures consistent features for semantics that remain unchanged over time, with the clear object boundaries in the visualization demonstrating the precision of our semantic field. + +# Abstract + +Learning 4D language fields to enable time-sensitive, open-ended language queries in dynamic scenes is essential for many real-world applications. While LangSplat successfully grounds CLIP features into 3D Gaussian representations, achieving precision and efficiency in 3D static scenes, it lacks the ability to handle dynamic 4D fields as CLIP, designed for static image-text tasks, cannot capture temporal dynamics in videos. Real-world environments are inherently dynamic, with object semantics evolving over time. Building a precise 4D language field necessitates obtaining pixel-aligned, object-wise video features, which current vision models struggle to achieve. To address these challenges, we propose 4D LangSplat, which learns 4D language fields to handle time-agnostic or time-sensitive open-vocabulary queries in dynamic scenes efficiently.
4D LangSplat bypasses learning the language field from vision features and instead learns directly from text generated from object-wise video captions via Multimodal Large Language Models (MLLMs). Specifically, we propose a multimodal object-wise video prompting method, consisting of visual and text prompts that guide MLLMs to generate detailed, temporally consistent, high-quality captions for objects throughout a video. These captions are encoded using a Large Language Model into high-quality sentence embeddings, which then serve as pixel-aligned, object-specific feature supervision, facilitating open-vocabulary text queries through shared embedding spaces. Recognizing that objects in 4D scenes exhibit smooth transitions across states, we further propose a status deformable network to model these continuous changes over time effectively. Our results across multiple benchmarks demonstrate that 4D LangSplat attains precise and efficient performance for both time-sensitive and time-agnostic open-vocabulary queries. + +# 1. Introduction + +The ability to construct a language field [17, 34] that supports open-vocabulary queries holds significant promise for various applications such as robotic navigation [13], 3D scene editing [19], and interactive virtual environments [29]. Due to the scarcity of large-scale 3D datasets with rich language annotations, current methods [17, 29, 37] leverage pre-trained models like CLIP [35] to extract pixel-wise features, which are then mapped to 3D spaces. Among them, LangSplat [34] has received increasing attention due to its efficiency and accuracy: it grounds the precise masks generated by the Segment Anything Model (SAM) [18], together with CLIP features, into 3D Gaussians, achieving an accurate and efficient 3D language field by leveraging 3D Gaussian Splatting (3D-GS) [16]. LangSplat supports open-vocabulary queries at various semantic levels by learning the three SAM-defined semantic levels. + +Nothing endures but change.
Real-world 3D scenes are rarely static, and they continuously change and evolve. To enable open-vocabulary queries in dynamic 4D scenes, it is crucial to consider that target objects may be in motion or transformation. For instance, querying a scene for "dog" in a dynamic environment may involve the dog running, jumping, or interacting with other elements. Beyond spatial changes, users may also want time-related queries, such as "running dog", which should only respond during the time segments when the dog is indeed running. Therefore, supporting time-agnostic and time-sensitive queries within a 4D language field is essential for realistic applications. + +A straightforward approach to extending LangSplat to a 4D scene is to learn a deformable Gaussian field [26, 54, 57] with CLIP features. However, it cannot model the dynamic, time-evolving semantics, as CLIP, designed for static image-text matching [10, 24], struggles to capture temporal information such as state changes, actions, and object conditions [43, 44]. Learning a precise 4D language field would require pixel-aligned, object-level video features as the 2D supervision to capture the spatiotemporal semantics of each object in a scene, yet current vision models [53, 55] predominantly extract global, video-level features. One could extract features by cropping objects of interest and then obtaining patch features, but this inevitably includes background information, leading to imprecise semantic features [34]. Removing the background and extracting vision features only from the foreground object with accurate object masks leads to ambiguity in distinguishing between object and camera motion, since only the precise foreground objects are visible without a reference to the background context. These issues pose significant challenges for building an accurate and efficient 4D language field.
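The "running dog" example can be made concrete: a time-sensitive query reduces to scoring each frame's rendered object feature against the query embedding and keeping only the time segments that score high enough. A minimal sketch, assuming pre-computed per-frame features and a fixed cosine threshold (both illustrative simplifications, not the paper's actual relevancy computation):

```python
import numpy as np

def time_sensitive_query(frame_feats: np.ndarray,
                         query_feat: np.ndarray,
                         threshold: float = 0.3) -> np.ndarray:
    """Return indices of frames that should respond to the query.

    frame_feats: (T, D) per-frame features rendered for one object.
    query_feat:  (D,) embedding of a query such as "running dog".
    """
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    sims = f @ q  # cosine similarity per frame
    return np.flatnonzero(sims > threshold)
```

A time-agnostic query like "dog" would instead be matched against features that stay constant over time, responding in every frame where the object is present.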
+ +To address these challenges, we propose 4D LangSplat, which constructs a precise and efficient 4D Language Gaussian field to support time-agnostic and time-sensitive open-vocabulary queries. We first train a 4D Gaussian Splatting (4D-GS) [54] model to reconstruct the RGB scene, which is represented by a group of Gaussian points and a deformable decoder defining how each Gaussian point changes its location and shape over time. Our 4D LangSplat then enhances each Gaussian in 4D-GS with two language fields, where one learns time-invariant semantic fields with CLIP features, as done in LangSplat, and the other learns a time-varying semantic field to capture the dynamic semantics. The time-invariant semantic field encodes semantic information that does not change over time, such as "human", "cup", and "dog". They are learned with CLIP features on three SAM-defined semantic levels. + +For the time-varying semantic field, instead of learning from vision features, we propose to directly learn from textual features to capture temporally dynamic semantics. Recent years have witnessed huge progress [31, 41] in Multimodal Large Language Models (MLLMs), which take multimodal input, including image, video, and text, and generate coherent responses. Encouraged by the success of MLLMs, we propose a multimodal object-wise video prompting method that combines visual and text prompts to guide MLLMs in generating detailed, temporally consistent, high-quality captions for each object throughout a video. We then encode these captions using a large language model (LLM) to extract sentence embeddings, creating pixel-aligned, object-level features that serve as supervision for the 4D language field. Recognizing the smooth transitions exhibited by objects across states in 4D scenes, we further introduce a status deformable network to model these continuous state changes effectively over time.
Our network captures the gradual transitions across object states, enhancing the model's temporal consistency and improving its handling of dynamic scenes. Figure 1 visualizes the learned time-varying semantic field. Our experiments across multiple benchmarks validate that 4D LangSplat achieves precise and efficient results, supporting both time-agnostic and time-sensitive open-vocabulary queries in dynamic, real-world environments. + +In summary, our contributions are threefold: + +- We introduce 4D LangSplat for open-vocabulary 4D spatial-temporal queries. To the best of our knowledge, we are the first to construct 4D language fields with object textual captions generated by MLLMs. +- To model the smooth transitions across states for objects in 4D scenes, we further propose a status deformable network to capture continuous temporal changes. +- Experimental results show that our method attains state-of-the-art performance for both time-agnostic and time-sensitive open-vocabulary queries. + +# 2. Related Work + +3D Gaussian Splatting. 3D-GS [16] is a powerful volumetric rendering technique that has gained attention for its real-time, high-quality rendering ability. It represents complex surfaces and scenes by projecting 3D Gaussian distributions into 2D image space. It has been widely used for many applications such as human reconstruction [22, 36], 3D editing [6, 49], mesh extraction [12, 48], and autonomous driving [61, 63]. Recent work [1, 26, 27, 57], including 4D Gaussian Splatting (4D-GS) [54], has extended Gaussian Splatting to 4D by introducing deformable fields, allowing for dynamic scenes where Gaussian parameters evolve over time to capture both spatial and temporal transformations. However, 4D-GS primarily focuses on visual fidelity rather than semantic understanding, which limits its applicability in open-vocabulary language queries. + +3D Language Field.
Early work [19, 47] typically grounds 2D foundation model features [5, 20, 35] into a neural radiance field (NeRF) [30]. For example, Distilled Feature Fields (DFFs) propose to distill CLIP-LSeg [20] into NeRF for semantic scene editing. LERF [17] proposes to distill CLIP [35] features into NeRF to support open-vocabulary 3D querying. With the emergence of 3D-GS, many methods [38, 58, 59, 62] adopt 3D-GS as the 3D scene representation and lift 2D foundation model features into 3D Gaussians. Among them, LangSplat [34] attains precise and efficient language fields due to the introduction of SAM masks. By incorporating multiple levels of semantic granularity, LangSplat effectively supports open-vocabulary queries across whole objects, parts, and subparts. Although significant advances have been made in 3D language fields, 4D language fields for dynamic scenes remain largely unexplored, which is the focus of this paper. + +Multimodal Large Language Models. The remarkable success of LLMs [2, 9, 45, 46] has shown their ability to perform new tasks [25] following human instructions. Based on LLMs, the research on MLLMs [3, 28, 32] explores the possibility of multimodal chat ability [14], which represents a significant step forward in integrating visual and textual modalities for complex scene understanding. MLLMs usually employ a vision encoder to extract visual features and learn a connector to align visual features with LLMs. The recent models [8, 21, 52] demonstrate remarkable capabilities in generating coherent captions from multimodal inputs, including images and videos. In this paper, we propose to utilize the powerful multimodal processing ability of MLLMs to convert video data into object-level captions, which are then used to train a 4D language field. + +# 3. Method + +# 3.1. Preliminaries + +3D Gaussian Splatting. In 3D-GS [16], a scene is represented as a set of 3D Gaussian points.
Each pixel in 2D images is computed by blending $N$ sorted 3D Gaussian points that overlap the pixel: + +$$ +C = \sum_{i=1}^{N} c_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j), \tag{1} +$$ + +where $c_i$ and $\alpha_i$ are the color and density of the $i$-th Gaussian. + +LangSplat. Building upon 3D-GS, LangSplat [34] grounds 2D CLIP features into 3D Gaussians. To obtain a precise field, SAM is used to obtain accurate object masks, and CLIP features are then extracted from the masked objects. LangSplat adopts feature splatting to train the 3D language field: + +$$ +\boldsymbol{F} = \sum_{i=1}^{N} \boldsymbol{f}_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j), \tag{2} +$$ + +where $\boldsymbol{f}_i$ represents the language feature of the $i$-th Gaussian and $\boldsymbol{F}$ is the rendered embedding in 2D images. + +4D Gaussian Splatting. 4D-GS [54] extends 3D-GS to dynamic scenes by introducing a deformable Gaussian field. Here, Gaussian parameters, including position, rotation, and scaling factor, are allowed to vary over time: + +$$ +\left(\mathcal{X}^{\prime}, r^{\prime}, s^{\prime}\right) = \left(\mathcal{X} + \Delta \mathcal{X}, r + \Delta r, s + \Delta s\right), \tag{3} +$$ + +where $\mathcal{X}$, $r$, and $s$ represent the position, rotation, and scaling parameters, respectively. $\Delta \mathcal{X}$, $\Delta r$, and $\Delta s$ denote the corresponding deformable networks, which are implemented by lightweight MLPs. The HexPlane [4, 11] representation is used to obtain rich 3D Gaussian features. + +A straightforward approach to adapting LangSplat for 4D scenes is to extend its static 3D language Gaussian field with a deformable Gaussian field, as done in 4D-GS. However, this approach faces significant limitations due to the nature of CLIP features. CLIP [35] is designed primarily for
Recent research [43, 44, 51] further confirms that it struggles with understanding state changes, actions, object conditions, and temporal context. For a precise and accurate 4D language field, it is essential to obtain pixel-aligned, object-level features that track temporal semantics with fine-grained detail for each object in a scene. However, existing vision models [53, 55] primarily offer global, video-level features that overlook specific object-level information, making it difficult to represent spatiotemporal semantics at the object level. While cropping objects and obtaining patch-based features is possible, this includes background information, leading to inaccurate language fields. Further cropping objects with accurate masks makes it difficult for vision models to distinguish between object movement and camera motion, as there is no background reference. + +# 3.2. 4D LangSplat Framework + +To address these challenges, we introduce 4D LangSplat, which constructs accurate and efficient 4D language fields to support both time-sensitive and time-agnostic open-vocabulary queries in dynamic scenes. We first reconstruct the 4D dynamic RGB scene using 4D-GS [54]. In this stage, the RGB scene is represented by a set of deformable Gaussian points, each with parameters that adjust over time to capture object movement and shape transformations within the scene. Building on the learned 4D-GS model, we extend each Gaussian point with language embeddings to learn 4D language fields. To further capture temporal and spatial details, and to handle both time-sensitive and time-agnostic queries effectively, we simultaneously construct two types of semantic fields: a time-agnostic semantic field and a time-varying semantic field. The time-agnostic semantic field focuses on capturing semantic information that does not change over time. 
Although objects in the scene are dynamic, they still exhibit attributes that remain constant across time, such as static properties of entities like "dog", "human", and other objects within the environment. This semantic field emphasizes spatial details of these time-agnostic semantics. Conversely, the time-varying semantic field captures temporally dynamic semantics, such as "a running dog", emphasizing semantic transitions over time. + +For the time-agnostic semantic field, we still use CLIP features and lift them to 4D space, as they are sufficient for capturing time-agnostic semantics. Specifically, we learn a static language embedding for each deformable Gaussian point in the 4D-GS model. Similar to LangSplat, we utilize SAM's hierarchical segmentation masks, learning three distinct time-agnostic semantic fields corresponding to the three levels of semantic granularity provided by SAM. Although each Gaussian point's position and shape dynamically change over time, its semantic feature remains static. + +These static embeddings ensure spatial accuracy while focusing on stable semantic information derived from CLIP features. On the other hand, to learn the time-varying semantic field, we propose a novel approach that bypasses the limitations of vision-based feature supervision. Instead, visual data is converted into object-level captions by leveraging MLLMs. These captions are then encoded using an LLM to extract sentence embeddings, which are used as pixel-aligned, object-level features for training the semantic field. To effectively model the smooth, continuous transitions of Gaussian points between a limited set of states, we further introduce a status deformable network to enhance reconstruction quality. The framework of training time-varying 4D fields is illustrated in Figure 2. + +# 3.3. 
Multimodal Object-Wise Video Prompting + +Constructing a high-quality, dynamic 4D semantic field requires detailed, pixel-aligned object-level features that capture time-evolving semantics in video data. However, obtaining these fine-grained visual features is challenging due to the limitations of current vision models in distinguishing object-level details over time. To overcome this, we propose converting video segments into object-wise captions and extracting sentence embeddings from these captions to serve as precise, temporally consistent features. + +Advances in MLLMs like GPT-4o [32], LLaVA-OneVision [21], and Qwen2-VL [52] enable high-quality language generation from multimodal inputs. These models process video, image, and text inputs to generate temporally consistent responses. Leveraging these capabilities, we propose a multimodal object-wise video prompting method, which combines visual and textual prompts to guide the MLLM in generating temporally consistent, object-specific, high-quality captions across video frames, encapsulating both spatial and temporal details. + +Formally, let $V = \{I_1, I_2, \ldots, I_T\}$ be a video segment of $T$ frames. For each frame, we apply SAM [18] in conjunction with DEVA tracking [7] to segment objects and maintain consistent object identities over time. This process yields temporally consistent masks for the $n$ objects present in the video, denoted as $\{M_1, M_2, \ldots, M_n\}$, where each mask $M_i$ represents a specific object tracked across frames. Each frame $I_t$ is segmented with the object masks at time step $t$: $\{M_{1,t}, M_{2,t}, \ldots, M_{n,t}\}$. + +To effectively generate instance-wise, object-specific captions while preserving the broader scene context, we need to guide the MLLM through precise prompting. Our goal is for the MLLM to generate captions focused solely on the target object without introducing details of other objects.
However, the presence of other objects as background reference remains essential; without this context, the MLLM may lose track of spatial relationships and environmental context, which are critical for understanding the action and status of the target object. + +![](images/b8e2e55383dca0e4a822d46a27557e4357ec518154190d73e6afb8a4a4591498.jpg) + +![](images/9883c1852e9b20b4d8c636747eb4157dfee67aad9af44c4a5f545cfece444d5e.jpg) +Figure 2. The framework of constructing a time-varying semantic field in 4D LangSplat. We first use multimodal object-wise prompting to convert a video into pixel-aligned object-level caption features. Then, we learn a 4D language field with a status deformable network. + +Thus, our approach employs prompting techniques to direct the MLLM's attention to each object, enabling region-specific captioning that maintains overall scene awareness. Inspired by recent visual prompting progress [39, 40, 56], we first use visual prompts to highlight the object of interest. Specifically, we build a visual prompt $\mathcal{P}_{i,t}$ for each object $i$ in frame $I_{t}$: + +$$ +\mathcal{P}_{i,t} = \operatorname{Contour}\left(M_{i,t}\right) \cup \operatorname{Gray}\left(M_{i,t}\right) \cup \operatorname{Blur}\left(M_{i,t}\right), \tag{4} +$$ + +where $\operatorname{Contour}(M_{i,t})$ highlights $M_{i,t}$ with a red contour, $\operatorname{Gray}(M_{i,t})$ converts the non-object area to grayscale, and $\operatorname{Blur}(M_{i,t})$ applies a Gaussian blur to the background pixels. This prompt preserves essential background information while ensuring focus on the object of interest, improving the MLLM's attention to the relevant target. + +For temporal coherence, we first generate a high-level video-level motion description for object $i$, denoted $\mathcal{D}_i$, which summarizes the motion dynamics over the $T$ frames.
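A rough, NumPy-only sketch of how a prompt image in the style of Eq. (4) could be assembled is shown below. The contour, grayscale, and box-blur operators here are simplified stand-ins for the actual implementation (which the text does not specify), and the function names are illustrative:

```python
import numpy as np

def box_blur(img, k=7):
    # crude wrap-around box blur, standing in for the Gaussian blur of Eq. (4)
    out = np.zeros_like(img, dtype=float)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / k**2

def visual_prompt(frame, mask):
    """Sketch of P_{i,t}: red contour + grayscale + blurred background.

    frame: (H, W, 3) float RGB image; mask: (H, W) bool object mask.
    """
    # background: luminance grayscale, then blurred
    gray = frame @ np.array([0.299, 0.587, 0.114])
    bg = box_blur(gray)[..., None].repeat(3, axis=-1)
    out = np.where(mask[..., None], frame, bg)
    # contour: mask pixels whose 4-neighbourhood leaves the mask
    eroded = mask.copy()
    for ax, sh in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        eroded &= np.roll(mask, sh, axis=ax)
    out[mask & ~eroded] = [1.0, 0.0, 0.0]  # highlight the boundary in red
    return out
```

The object's interior pixels are kept intact, its boundary is drawn in red, and everything outside the mask becomes a blurred grayscale background, mirroring the three cues of Eq. (4).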
This description is derived by prompting the MLLM with the entire video sequence $V$ to capture object motion and interactions, defined as: + +$$ +\mathcal{D}_i = \operatorname{MLLM}\left(\left\{\mathcal{P}_{i,1}, \dots, \mathcal{P}_{i,T}\right\}, \mathcal{T}_{\text{video}}, V\right), \tag{5} +$$ + +where $\mathcal{T}_{\text{video}}$ denotes the textual prompt that instructs the MLLM to generate video-level motion descriptions based on the visual prompts. This description $\mathcal{D}_i$ is then used as context for generating frame-specific captions. For each frame $I_{t}$, we combine $\mathcal{D}_i$ with the visual prompt $\mathcal{P}_{i,t}$ to generate a time-specific caption $C_{i,t}$, capturing both the temporal and contextual details for object $i$ in frame $I_{t}$: + +$$ +C_{i,t} = \operatorname{MLLM}\left(\mathcal{D}_i, \mathcal{P}_{i,t}, \mathcal{T}_{\text{frame}}, V_t\right), \tag{6} +$$ + +where $\mathcal{T}_{\text{frame}}$ denotes the textual prompt that instructs the MLLM to generate an object caption describing the object's current action and status at a specific time step. + +Each caption $C_{i,t}$ provides semantic information for object $i$ at time $t$. To encode this semantic data into features for training the 4D language field, we extract a sentence embedding $e_{i,t}$ for each caption $C_{i,t}$. As LLMs exhibit strong ability to process free-form text [42, 45], we further propose to utilize them to extract the sentence embeddings. Specifically, an LLM fine-tuned for sentence-embedding tasks [50] is used to extract the features. This design choice allows our model to respond effectively to open-vocabulary queries, as the embeddings are generated in a shared language space that aligns with natural language queries.
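Procedurally, Eqs. (5) and (6) amount to one video-level call followed by $T$ frame-level calls. The sketch below shows only that control flow; the `mllm` callable is a placeholder, since the paper does not specify a model interface, and all names here are illustrative:

```python
from typing import Callable, List, Sequence

def caption_object(mllm: Callable[..., str],
                   prompts: Sequence,   # visual prompts P_{i,1..T}, Eq. (4)
                   frames: Sequence,    # video frames I_1..I_T
                   t_video: str,        # textual prompt T_video
                   t_frame: str) -> List[str]:
    """Two-stage captioning for one object: Eq. (5), then Eq. (6) per frame."""
    d_i = mllm(prompts, t_video, frames)      # video-level motion description
    return [mllm(d_i, p, t_frame, f)          # frame-level caption C_{i,t}
            for p, f in zip(prompts, frames)]

# toy stand-in MLLM that records how it was called
calls = []
def fake_mllm(*args):
    calls.append(len(args))
    return f"caption#{len(calls)}"

caps = caption_object(fake_mllm, ["P1", "P2", "P3"], ["I1", "I2", "I3"],
                      "Tv", "Tf")
```

With three frames, the stub is invoked once with the whole sequence and three more times with the shared description as context, matching the two-stage structure of the method.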
Thus, for every pixel $(x,y) \in M_{i,t}$ within object $i$'s mask in frame $I_t$, the feature $F_{x,y,t}$ is given by: + +$$ +\boldsymbol{F}_{x,y,t} = \boldsymbol{e}_{i,t}, \tag{7} +$$ + +where the embeddings $F_{x,y,t}$ serve as 2D supervision for the time-varying semantic field, providing pixel-aligned, object-wise features across frames. + +# 3.4. Status Deformable Network + +With this 2D semantic feature supervision available, we use it to train a 4D field. A straightforward approach, analogous to the method used in 4D-GS, would be to directly learn a deformation field $\Delta f$ for the semantic features of deformable Gaussian points. However, this allows the semantic features of each Gaussian point to change to any arbitrary semantic state, increasing the learning complexity and compromising the temporal consistency of the features. In real-world dynamic scenes, each Gaussian point typically exhibits a gradual transition between a limited set of semantic states. For instance, an object like a person may transition smoothly among a finite set of actions (e.g., standing, walking, running), rather than shifting to entirely unrelated semantic states. To model these smooth transitions and maintain a stable 4D semantic field, we propose a status deformable network that restricts the Gaussian point's semantic features to evolve within a predefined set of states. + +Specifically, we represent the semantic feature of a Gaussian point $i$ at any time $t$ as a linear combination of $K$ state prototype features, $\{S_{i,1}, S_{i,2}, \ldots, S_{i,K}\}$, where each state captures a specific, distinct semantic meaning.
The semantic feature $f_{i,t}$ of Gaussian point $i$ at time $t$ is: + +$$ +\boldsymbol{f}_{i,t} = \sum_{k=1}^{K} w_{i,t,k} \boldsymbol{S}_{i,k}, \tag{8} +$$ + +where $w_{i,t,k}$ denotes the weighting coefficient of state $k$ at time $t$, with $\sum_{k=1}^{K} w_{i,t,k} = 1$. This linear combination ensures that each Gaussian point's semantic features transition gradually between the predefined states. + +To determine the appropriate weighting coefficients $w_{i,t,k}$ for each Gaussian point over time, we employ an MLP decoder $\phi$. This MLP takes as input the spatial-temporal features from HexPlane [4] and predicts weighting coefficients that reflect the temporal progression of semantic states. The MLP decoder $\phi$ and the per-Gaussian states $\{S_{i,1}, S_{i,2}, \dots, S_{i,K}\}$ are jointly trained. This design ensures that the status deformable network adapts to both the spatial and temporal context, enabling smooth, consistent transitions among semantic states. + +# 3.5. Open-vocabulary 4D Querying + +After training, 4D LangSplat enables both time-agnostic and time-sensitive open-vocabulary queries. For time-agnostic queries, we utilize only the time-agnostic semantic field. We first render a feature image and then compute the relevance score [17] between this rendered feature image and the query. Following the post-processing strategy in LangSplat [34], we obtain the segmentation mask for each frame from the relevance score maps. + +For time-sensitive queries, we combine both the time-agnostic and time-sensitive semantic fields. First, the time-agnostic semantic field is used to derive an initial mask for each frame, following the same procedure described above. This mask identifies where the queried object or entity exists, irrespective of time.
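To make the convex state mixture of Eq. (8) concrete, the toy NumPy sketch below blends $K$ per-Gaussian state prototypes with softmax weights. In the actual method the decoder $\phi$ consumes HexPlane spatio-temporal features; random logits stand in for its output here, so this is an assumption-laden illustration rather than the real network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def status_feature(prototypes, logits):
    """Eq. (8): semantic feature f_{i,t} as a convex combination of the K
    state prototypes S_{i,k}; softmax makes the weights sum to one."""
    w = softmax(logits)        # w_{i,t,k}, one weight per state
    return w @ prototypes, w   # blended (D,) feature and the weights

rng = np.random.default_rng(0)
K, D = 3, 8                                  # e.g. K = 3 states, as in the ablation
S = rng.normal(size=(K, D))                  # per-Gaussian prototypes S_{i,k}
f_t, w = status_feature(S, rng.normal(size=K))
```

Because the weights are non-negative and sum to one, the blended feature always stays inside the convex hull of the prototypes, which is exactly the restriction the status deformable network imposes.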
To refine the query to the specific time segments where the queried term is active (e.g., an action occurring within a particular timeframe), we calculate the cosine similarity between the time-sensitive semantic field on the initial mask region and the query text. This similarity is computed for each frame within the masked region to determine when the time-sensitive characteristics of the query term are most strongly represented. Using the mean cosine similarity across the entire video as a threshold, we identify the frames that exceed this threshold, indicating the relevant time segments. The spatial mask obtained with the time-agnostic field is retained as the final mask prediction for the identified time segments. + +This combination of time-agnostic and time-sensitive semantic fields enables accurate and efficient spatiotemporal querying, allowing 4D LangSplat to capture both the persistent and dynamic characteristics of objects in the scene. + +# 4. Experiment + +# 4.1. Setup + +Datasets. We conduct evaluations on two widely adopted datasets: HyperNeRF [33] and Neu3D [23]. Given the absence of semantic segmentation annotations for dynamic scenes in these datasets, we perform manual annotations to facilitate evaluation. More details regarding this process are provided in the Appendix. + +Implementation Details. All experiments are conducted on a single Nvidia A100 GPU. For extracting CLIP features, we use the OpenCLIP ViT-B/16 model. For dynamic semantics, we leverage the Qwen2-VL-7B model as the backbone MLLM to generate time-varying captions, and use e5-mistral-7b [50] to encode them into embeddings. Following LangSplat [34], we also train an autoencoder to compress the feature dimension: the CLIP and text features are compressed to 3 and 6 dimensions, respectively. + +Baselines.
Due to the absence of publicly available models for 4D language feature rendering, we use several 3D language feature rendering methods as baselines for evaluating time-agnostic querying, including LangSplat [34] and Feature-3DGS [62]. We also incorporate segmentation-based techniques, such as Gaussian Grouping [58], to assess semantic mask generation quality in our approach. Inspired by Segment Any 4D Gaussians [15], we enhance Gaussian Grouping to adapt to dynamic scenes. + +Given the lack of dynamic language field rendering methods, we consider two additional baselines besides LangSplat for time-sensitive querying: Deformable CLIP and Non-Status Field. Deformable CLIP only utilizes the time-agnostic semantic fields of our method, which first trains a 4D-GS model to learn dynamic RGB fields, and then learns static CLIP fields on these pre-trained RGB fields. The Non-Status Field method utilizes both the time-agnostic semantic field and the time-sensitive semantic field of our method while removing the status deformable network. Instead, it directly learns a deformation field $\Delta f$ . + +Metrics. For time-agnostic querying, we evaluate performance using mean accuracy (mAcc) and mean intersection over union (mIoU), calculated across all frames + +
Methodamericanochickchickensplit-jellyespressoAverage
Acc(%)vIoU(%)Acc(%)vIoU(%)Acc(%)vIoU(%)Acc(%)vIoU(%)Acc(%)vIoU(%)
LangSplat [34]45.1923.1653.2618.2073.5833.0844.0316.1554.0122.65
Deformable CLIP60.5739.9652.1742.7789.6275.2844.8520.8661.8044.72
Non-Status Field83.6559.5994.5686.2891.5078.4678.6047.9587.5868.57
Ours89.4266.0796.7390.6295.2883.1481.8949.2090.8372.26
+ +![](images/717584c316858010cf331873b31ffcfc8b9d426829e873576eae37a4045aa4e9.jpg) +Figure 3. Visualization of time-sensitive querying results between Deformable CLIP and ours. The bottom row depicts the cosine similarity across frames, rescaled to (0,1) for direct comparison, while the horizontal bars indicate frames identified as relevant time segments. We observed that the CLIP-based method cannot understand dynamic semantics correctly, while our method recognizes them. + +![](images/b088e8c3df13a656ef62a5ace6473dc0f05b48f94a38fcc0a909600d3ef74526.jpg) + +Table 1. Quantitative comparisons of time-sensitive querying on the HyperNeRF [33] dataset. + +
MethodHyperNeRFNeu3D
mIoUmAccmIoUmAcc
Feature-3DGS [62]36.6374.0234.9687.12
Gaussian Grouping [58]50.4980.9249.9395.05
LangSplat [34]74.9297.7261.4991.89
Ours82.4898.0185.1198.32
+ +Table 2. Quantitative comparisons of time-agnostic querying on the HyperNeRF [33] and Neu3D [23] datasets (Numbers in %). + +
BlurGrayContourΔsimVideoImageΔsim
0.330.14
2.151.01
3.323.32
+ +Table 3. Comparisons of Visual prompts. +Table 4. Comparisons of Text prompts. + +
K23456
Acc (%)94.5697.8295.6594.5694.56
vIoU (%)88.0591.9389.1188.9886.28
+ +Table 5. Results for different state numbers on chick_chicken. + +in the test set. For time-sensitive querying, we evaluate temporal performance using an accuracy metric, defined as $\mathrm{Acc} = n_{\mathrm{correct}} / n_{\mathrm{all}}$ , where $n_{\mathrm{correct}}$ and $n_{\mathrm{all}}$ represent the number of correctly predicted frames and the total frames in the test set, respectively. To assess segmen + +tation quality, we adopt the metric from [60] and define $\mathrm{vIoU} = \frac{1}{|S_u|}\sum_{t\in S_i}\mathrm{IoU}(\hat{s}_t,s_t)$ , where $\hat{s}_t$ and $s_t$ are the predicted and ground truth masks at time $t$ , and $S_{u}$ and $S_{i}$ are the sets of frames in the union and intersection. + +# 4.2. Main Results + +Time-Agnostic Querying. Table 2 shows our results on two datasets. Our approach achieves the highest mIoU and mAcc scores, demonstrating strong segmentation performance across both datasets. In contrast, other methods struggle to capture object movement and shape changes, leading to worse performance on dynamic objects. + +Time-Sensitive Querying. We perform dynamic querying on the HyperNeRF dataset, with Acc and vIoU results presented in Table 1. Our approach outperforms not only the LangSplat method but also the Deformable CLIP and Non-Status Field approaches. Specifically, our method achieves accuracy improvements of $29.03\%$ and $3.25\%$ and vIoU gains of $28.04\%$ and $4.19\%$ , respectively. Our approach introduces a multimodal object-wise video prompting method that surpasses traditional CLIP-based techniques. In comparison to Deformable CLIP, our time-varying semantic field effectively integrates spatial and temporal information. This ensures fluidity and coherence in semantic state transitions, underscoring the importance of MLLM video prompting (Section 3.3). Additionally, when compared to the Non-Status Field method, our approach + +![](images/1aaff0e5519ffa173bbeefd9f61748b81acd324def641636dc5b840aae7c240a.jpg) +Figure 4. 
Comparison of time-sensitive query masks. We compare time-sensitive query masks between Deformable CLIP and ours. The CLIP-based method fails to identify time segments accurately, especially at the demarcation points during state transitions. + +highlights the significance of status modeling by introducing a status deformable network (Section 3.4), which enhances the model's capability to handle complex, evolving states and further solidifies the robustness and versatility of our method in capturing nuanced dynamics. + +Visualization. To demonstrate our learned time-sensitive language field, we apply PCA to reduce the dimensionality of the learned semantic features, producing a 3D visualization as shown in Figure 1. Our method better captures the dynamic semantic features of objects and renders consistent features accurately. In Figure 3, we illustrate the change in query-frame similarity scores over time for time-sensitive queries, comparing our approach to a CLIP-based method. As shown, CLIP, which is optimized for static image-text alignment, struggles to capture the most relevant time segments within dynamic video semantics, whereas our method successfully identifies these segments. In Figure 4, we present specific query masks. We observe that the CLIP-based approach fails to accurately capture time segments, especially at transition points in object states. For example, CLIP cannot reliably detect subtle transitions, such as when a cookie has just cracked or when a glass cup has started dripping coffee. In contrast, our method effectively identifies these nuanced changes, demonstrating its capability to handle dynamic state transitions accurately. + +# 4.3. Ablation Studies + +Multimodal Prompting. We evaluate the quality of generated captions using different combinations of textual and visual prompting methods.
To quantify this, we define a metric $\Delta_{\mathrm{sim}} = \overline{score}_{\mathrm{pos}} - \overline{score}_{\mathrm{neg}}$, where $\overline{score}_{\mathrm{pos}}$ and $\overline{score}_{\mathrm{neg}}$ denote the average cosine similarity between query and caption features, encoded by the e5 model, for positive and negative samples, respectively. A higher $\Delta_{\mathrm{sim}}$ indicates a stronger distinction between positive and negative examples, suggesting that the generated captions more effectively capture the spatiotemporal dynamics and semantic features of objects in the scene. Table 3 shows that utilizing all three visual prompting strategies maximizes the MLLM's focus on target objects. As shown in Table 4, incorporating pre-generated video-level motion descriptions results in a $0.87\%$ improvement, and further adding image prompts enables more accurate descriptions. + +State Numbers. Table 5 shows the ablation results for the state number $K$. We observe that an appropriate increase in $K$ leads to better results, with $K = 3$ achieving the best performance; we therefore adopt $K = 3$ in our experiments. + +# 5. Conclusion + +We present 4D LangSplat, a novel approach to constructing a dynamic 4D language field that supports both time-agnostic and time-sensitive open-vocabulary queries within evolving scenes. Our method leverages MLLMs to produce high-quality, object-specific captions that capture temporally consistent semantics across video frames. This enables 4D LangSplat to overcome the limitations of traditional vision-feature-based approaches, which struggle to generate precise, object-level features in dynamic contexts. By incorporating multimodal object-wise video prompting, we obtain pixel-aligned language embeddings as training supervision. Furthermore, we introduce a status deformable network, which enforces smooth, structured transitions across limited object states.
Our experimental results across multiple benchmarks demonstrate that 4D LangSplat achieves state-of-the-art performance in dynamic scenarios. + +# Acknowledgements + +The work is supported in part by the National Key R&D Program of China under Grant 2024YFB4708200 and National Natural Science Foundation of China under Grant U24B20173, and in part by US NIH grant R01HD104969. + +# References + +[1] Jeongmin Bae, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh. Per-gaussian embedding-based deformation for deformable 3d gaussian splatting. arXiv preprint arXiv:2404.03613, 2024. 3 +[2] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023. 3 +[3] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 1(2):3, 2023. 3 +[4] Ang Cao and Justin Johnson. Hexplane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 130-141, 2023. 3, 6 +[5] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650-9660, 2021. 3 +[6] Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21476-21485, 2024. 3 +[7] Ho Kei Cheng, Seoung Wug Oh, Brian Price, Alexander Schwing, and Joon-Young Lee. Tracking anything with decoupled video segmentation. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1316–1326, 2023. 4 +[8] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 3 +[9] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with $90\%$ * chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2(3):6, 2023. 3 +[10] Tong Ding, Wanhua Li, Zhongqi Miao, and Hanspeter Pfister. Tree of attributes prompt learning for vision-language models. arXiv preprint arXiv:2410.11201, 2024. 2 +[11] Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbek Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: + +Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12479-12488, 2023. 3 + +[12] Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, and Yu-Kun Lai. Mesh-based gaussian splatting for real-time large-scale deformation. arXiv preprint arXiv:2402.04796, 2024.3 + +[13] Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard. Visual language maps for robot navigation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 10608-10615. IEEE, 2023. 2 + +[14] Gao Huang. Dynamic neural networks: advantages and challenges. National Science Review, 11(8):nwae088, 2024. 3 +[15] Shengxiang Ji, Guanjun Wu, Jiemin Fang, Jiazhong Cen, Taoran Yi, Wenyu Liu, Qi Tian, and Xinggang Wang. Segment any 4d gaussians. arXiv preprint arXiv:2407.04504, 2024.6 +[16] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. 
Graph., 42(4):139:1-139:14, 2023. 2, 3 +[17] Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19729-19739, 2023. 2, 3, 6 +[18] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015-4026, 2023. 2, 4 +[19] Sosuke Kobayashi, Eiichi Matsumoto, and Vincent Sitzmann. Decomposing nerf for editing via feature field distillation. Advances in Neural Information Processing Systems, 35:23311-23330, 2022. 2, 3 +[20] Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and René Ranftl. Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546, 2022. 3 +[21] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024. 3, 4 +[22] Mengtian Li, Shengxiang Yao, Zhifeng Xie, and Keyu Chen. Gaussianbody: Clothed human reconstruction via 3d gaussian splatting. arXiv preprint arXiv:2401.09720, 2024. 3 +[23] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, et al. Neural 3d video synthesis from multi-view video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5521-5531, 2022. 6, 7 +[24] Wanhua Li, Xiaoke Huang, Zheng Zhu, Yansong Tang, Xiu Li, Jie Zhou, and Jiwen Lu. Ordinalclip: Learning rank prompts for language-guided ordinal regression. Advances in Neural Information Processing Systems, 35:35313-35325, 2022. 2 + +[25] Wanhua Li, Zibin Meng, Jiawei Zhou, Donglai Wei, Chuang Gan, and Hanspeter Pfister.
Socialgpt: Prompting llms for social relation reasoning via greedy segment optimization. In Advances in Neural Information Processing Systems, pages 2267-2291, 2024. 3 +[26] Zhan Li, Zhang Chen, Zhong Li, and Yi Xu. Spacetime gaussian feature splatting for real-time dynamic view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8508-8520, 2024. 2, 3 +[27] Youtian Lin, Zuozhuo Dai, Siyu Zhu, and Yao Yao. Gaussian-flow: 4d reconstruction with dynamic 3d gaussian particle. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21136-21145, 2024. 3 +[28] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024. 3 +[29] Kunhao Liu, Fangneng Zhan, Jiahui Zhang, Muyu Xu, Yingchen Yu, Abdulmotaleb El Saddik, Christian Theobalt, Eric Xing, and Shijian Lu. Weakly supervised 3d open-vocabulary segmentation. Advances in Neural Information Processing Systems, 36:53433-53456, 2023. 2 +[30] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 3 +[31] OpenAI. Gpt-4v. https://openai.com/index/gpt-4v-system-card/, 2023.2 +[32] OpenAI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024.3,4 +[33] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Ricardo MartinBrualla, and Steven M Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228, 2021. 6, 7 +[34] Minghan Qin, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. Langsplat: 3d language gaussian splatting. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20051-20060, 2024. 2, 3, 6, 7 +[35] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 3 +[36] Zhijing Shao, Zhaolong Wang, Zhuang Li, Duotun Wang, Xiangru Lin, Yu Zhang, Mingming Fan, and Zeyu Wang. Splattingavatar: Realistic real-time human avatars with mesh-embedded gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1606–1616, 2024. 3 +[37] William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, and Phillip Isola. Distilled feature fields enable + +few-shot language-guided manipulation. In Conference on Robot Learning, pages 405-424. PMLR, 2023. 2 +[38] Jin-Chuan Shi, Miao Wang, Hao-Bin Duan, and Shao-Hua Guan. Language embedded 3d gaussians for open-vocabulary scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5333-5343, 2024. 3 +[39] Aleksandar Shtedritski, Christian Rupprecht, and Andrea Vedaldi. What does clip know about a red circle? visual prompt engineering for vlms. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11987-11997, 2023. 5 +[40] Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. Reclip: A strong zero-shot baseline for referring expression comprehension. arXiv preprint arXiv:2204.05991, 2022. 5 +[41] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024. 
2 +[42] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. 5 +[43] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024. 2, 4 +[44] Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. Eyes wide shut? exploring the visual shortcomings of multimodal llms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9568-9578, 2024. 2, 4 +[45] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 3, 5 +[46] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 3 +[47] Vadim Tschernezki, Iro Laina, Diane Larlus, and Andrea Vedaldi. Neural feature fusion fields: 3d distillation of self-supervised 2d image representations. In 2022 International Conference on 3D Vision (3DV), pages 443-453. IEEE, 2022. 3 +[48] Joanna Waczyńska, Piotr Borycki, Sławomir Tadeja, Jacek Tabor, and Przemysław Spurek. Games: Mesh-based adapting and modification of gaussian splatting. arXiv preprint arXiv:2402.01459, 2024. 3 +[49] Junjie Wang, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, and Qi Tian. Gaussianeditor: Editing 3d gaussians delicately + +with text instructions. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20902-20911, 2024. 3 +[50] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. Improving text embeddings with large language models. arXiv preprint arXiv:2401.00368, 2023. 5, 6 +[51] Mengmeng Wang, Jiazheng Xing, and Yong Liu. Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472, 2021. 4 +[52] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 3, 4 +[53] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Intervid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 2, 4 +[54] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20310-20320, 2024. 2, 3, 4 +[55] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. Videoclip: Contrastive pre-training for zero-shot video-text understanding. arXiv preprint arXiv:2109.14084, 2021. 2, 4 +[56] Lingfeng Yang, Yueze Wang, Xiang Li, Xinlong Wang, and Jian Yang. Fine-grained visual prompting. Advances in Neural Information Processing Systems, 36, 2024. 5 +[57] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20331-20341, 2024. 
2, 3 +[58] Mingqiao Ye, Martin Danelljan, Fisher Yu, and Lei Ke. Gaussian grouping: Segment and edit anything in 3d scenes. In European Conference on Computer Vision, pages 162-179. Springer, 2025. 3, 6, 7 +[59] Justin Yu, Kush Hari, Kishore Srinivas, Karim El-Refai, Adam Rashid, Chung Min Kim, Justin Kerr, Richard Cheng, Muhammad Zubair Irshad, Ashwin Balakrishna, et al. Language-embedded gaussian splats (legs): Incrementally building room-scale representations with a mobile robot. arXiv preprint arXiv:2409.18108, 2024. 3 +[60] Zhu Zhang, Zhou Zhao, Yang Zhao, Qi Wang, Huasheng Liu, and Lianli Gao. Where does it exist: Spatio-temporal video grounding for multi-form sentences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10668-10677, 2020. 7 +[61] Hongyu Zhou, Jiahao Shao, Lu Xu, Dongfeng Bai, Weichao Qiu, Bingbing Liu, Yue Wang, Andreas Geiger, and Yiyi Liao. Hugs: Holistic urban 3d scene understanding via gaussian splatting. In Proceedings of the IEEE/CVF Conference + +on Computer Vision and Pattern Recognition, pages 21336-21345, 2024. 3 +[62] Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, and Achuta Kadambi. Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21676-21685, 2024. 3, 6, 7 +[63] Xiaoyu Zhou, Zhiwei Lin, Xiaojun Shan, Yongtao Wang, Deqing Sun, and Ming-Hsuan Yang. Drivinggaussian: Composite gaussian splatting for surrounding dynamic autonomous driving scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21634-21643, 2024. 
3 \ No newline at end of file diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/images.zip b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5ee1ca4f57d5c16f5850c89fb3a8b81599fc8283 --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e71ade5cba8064a37fb031a7b79391769101c1389ed70b0aae339bf75737746a +size 700580 diff --git a/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/layout.json b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..92c81dae7c930a823589a86e52560ea1e5c7980c --- /dev/null +++ b/CVPR/2025/4D LangSplat_ 4D Language Gaussian Splatting via Multimodal Large Language Models/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7b045dc937a9858e6fc85c4ce1773278eac09193d7bf694aa046659efeb7a3b +size 395745 diff --git a/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_content_list.json b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..53d92e8fe94e261a69dfbc2f7eaf7ef242b82049 --- /dev/null +++ b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01af380162eb5b7eb7274adcfeeabe9ac61e304adcf352c758a4b1d4dae4c4bc +size 80154 diff --git a/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular 
Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_model.json b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f15614fc1c3031128f34e9964f582d423de826c9 --- /dev/null +++ b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cca89497c2526ed6cc99cdd37bcc98de180046e5e7cbd20bdb5cb8068c63b88 +size 98585 diff --git a/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_origin.pdf b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..de845bf47e0dcb7a58047891de0228d4b94292db --- /dev/null +++ b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/bec555e7-c847-4ed7-801a-a559d1ce7b48_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:827d47571aab4cda26538f3015914e20862afff443d884d58589e5cb23f8c245 +size 5822980 diff --git a/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/full.md b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fb766e90320c5d70f84853b479ae4511016228a3 --- /dev/null +++ b/CVPR/2025/4D-Fly_ Fast 4D Reconstruction from a Single Monocular Video/full.md @@ -0,0 +1,321 @@ +# 4D-Fly: Fast 4D Reconstruction from a Single Monocular Video + +Diankun Wu $^{1}$ , Fangfu Liu $^{1}$ , Yi-Hsin Hung $^{1}$ , Yue Qian $^{2}$ , Xiaohang Zhan $^{2}$ , Yueqi Duan $^{1\dagger}$ $^{1}$ Tsinghua University + $^{2}$ Tencent + +![](images/629ad016abfd180edca186ebdbd3e4a10a5df15698adaa28d63edce3a09cbdad.jpg) +Figure 1. 
In this work, we propose 4D-Fly for fast reconstruction of 4D scenes from monocular videos in minutes. Compared to previous methods, our approach achieves a $20\times$ speed-up while maintaining comparable or superior reconstruction quality. + +# Abstract + +4D reconstruction from a single monocular video is an important but challenging task due to its inherent underconstrained nature. While most existing 4D reconstruction methods focus on multi-camera settings, they suffer from the limited multi-view information in monocular videos. Recent studies have attempted to mitigate the ill-posed problem by incorporating data-driven priors as additional supervision. However, they require hours of optimization to align the splatted 2D feature maps of explicit Gaussians with various priors, which limits their range of applications. To address this time-consuming issue, we propose 4D-Fly, an efficient and effective framework for reconstructing the 4D scene from a monocular video (hundreds of frames within 6 minutes), more than $20 \times$ faster and even achieving higher quality than previous optimization methods. Our key insight is to unleash the explicit property of Gaussian primitives and directly apply data priors to them. Specifically, we build a streaming 4D reconstruction paradigm that includes: propagating existing Gaussians to the next timestep with an anchor-based strategy, expanding the 4D scene map with the canonical Gaussian map, and an efficient 4D scene optimization process to further improve visual quality and motion accuracy. Extensive experiments demonstrate the superiority of our 4D-Fly over state-of-the-art methods in terms of speed and quality. Project page: https://diankunwu.github.io/4D-Fly/. + +# 1. Introduction + +In the field of computer vision and graphics, scene reconstruction has been a crucial task with a broad range of applications, including virtual reality [15], autonomous navigation [1, 54], and beyond [3, 4, 24, 25, 29].
While recent works [18, 30] have made significant progress in modeling static scenes, dynamic scene reconstruction remains a substantial challenge due to the geometry-motion interplay and spatiotemporally sparse observations [12, 37, 44]. + +Recent advancements in this area are largely driven by 3D Gaussian Splatting (3DGS) [18], which has emerged as a state-of-the-art approach for 3D reconstruction. By representing a static scene with a collection of 3D Gaussian primitives and splatting them onto the image plane, 3DGS enables efficient and photorealistic rendering, thereby facilitating both training and inference. Building upon 3DGS, several studies have extended it to the 4D domain by learning a deformation field [12, 23, 46, 50], optimizing the trajectories of Gaussians [9, 28, 36], or directly modeling Gaussians in 4D [8, 49]. These methods are primarily designed for multi-camera settings, which provide sufficient view information and thus facilitate accurate scene geometry recovery. Despite promising results through an analysis-by-synthesis framework, they struggle to reconstruct scenes effectively from only a single monocular video, which is an inherently ill-posed problem with limited multi-view information. + +More recently, some researchers [19, 35, 38, 44, 45] have sought to address the under-determined monocular video setting by incorporating a set of data-driven priors to regularize the optimization process. To achieve this, they need to splat various features of Gaussians (e.g., depths, segmentations, and frame-to-frame offsets) and align them to a 2D feature plane. However, since the supervision signal originates solely from the 2D projection plane, optimizing the 3D Gaussians to satisfy multiple feature constraints results in a highly non-convex problem, making the optimization process computationally intensive and time-consuming.
Specifically, these methods often take several hours to reconstruct a video with 100 frames, limiting their practicality for large-scale applications. + +In this paper, we propose 4D-Fly, a more efficient and effective framework for reconstructing 4D scenes from a single monocular video. Our key insight is to unleash the explicit nature of Gaussian primitives. Since 3D Gaussian primitives are explicitly defined in 3D space, we can lift their attributes directly based on prior knowledge of the 4D space (e.g., depths and tracks) rather than solely relying on gradient-based optimization processes. Based on this insight, we design our framework in a streaming manner, achieving better control and faster speed over the 4D scene reconstruction informed by the priors. Specifically, we maintain a global 4D scene map throughout the reconstruction process. For each timestep, we first propagate existing dynamic Gaussian primitives to the next timestep according to the new observation. Rather than optimization, we design an anchor-based strategy to complete this process by lifting 2D tracking and depth priors. After that, we identify the unconstructed areas in both foreground and background based on an unconstructed mask and add new Gaussians according to the Canonical Gaussian Map (CGM). Finally, we utilize a fast optimization process to further improve the scene quality and motion accuracy. An overview of 4D-Fly is shown in Figure 1. + +In summary, our main contributions are: + +- We propose 4D-Fly, an efficient and effective framework to achieve fast 4D scene reconstruction from only a monocular video. +- We design our framework in a streaming manner to unleash the explicit nature of Gaussian primitives and speed up the reconstruction process, which includes: anchor-based Gaussian propagation, 4D scene map expansion by
+ +- We conduct extensive experiments on novel view synthesis and point tracking tasks, demonstrating that our method can reconstruct scene geometry and 3D motion from a single monocular video with comparable quality, achieving at least a $20 \times$ speed-up over existing methods. + +# 2. Related Work + +Dynamic Scene Reconstruction with NeRF. Dynamic scene reconstruction and novel view synthesis have been long-standing problems. One line of research extends neural radiance fields (NeRF) [30] to handle dynamic scenes. To adapt static NeRF to dynamic 3D scenes, researchers have proposed various methods, including using time-conditioned latent codes [20, 21], incorporating explicit deformation fields [31-33], or factorizing the four-dimensional spatio-temporal domain into two-dimensional feature planes [2, 10]. In casual monocular settings, Li et al. [22] and Tian et al. [41] explore combining NeRF with image-based rendering, while Wang et al. [42] use a diffusion prior to regularize the optimization of four-dimensional NeRF. However, achieving practical rendering speeds with these methods remains challenging due to the time-consuming volumetric rendering process. Unlike these methods, our framework employs explicit 3D Gaussian Splatting as the basic representation, thus benefiting from its real-time rendering and enhanced control over the reconstruction process. + +Dynamic Scene Reconstruction with 3DGS. Recently, 3D Gaussian Splatting (3DGS) [18] has emerged as a powerful 3D representation for 3D scene reconstruction. To model dynamic scenes with 3DGS, some works [12, 23, 46, 50] leverage time-conditioned deformation networks, such as MLPs and Hexplane [2], to warp a canonical set of Gaussians for each timestep. Another line of works [9, 28, 36] focuses on learning 3D Gaussian trajectories over time by sequentially optimizing per-Gaussian offsets between frames. These methods often exhibit impressive tracking of scene geometry. 
Meanwhile, there are also some works [8, 49] that try to model Gaussians in 4D (i.e., 3D space and time) directly. Although these methods can capture spatiotemporal dynamics, they typically assume a multi-camera or pseudo-monocular setting (see [11]), limiting their applicability. More recently, some works [19, 26, 35, 44, 45] focus on the more ill-posed and under-constrained monocular setting and often use data-driven priors (e.g., depths, segmentations, and optical flows) to regularize the optimization process. However, these methods typically require hours to complete due to the inefficient optimization required to align the projections of explicit Gaussians with various priors. Our method also uses data-driven priors as additional information but applies them directly to Gaussians, resulting in a more efficient approach. + +![](images/9e0077cb5e16cbc0f2a023ea07d25a108043a028d2cd4c9b323c0d4451007eba.jpg) +Figure 2. Pipeline of 4D-Fly. Our method takes as input a casually captured video with the camera intrinsics and extrinsics of each frame, aiming to reconstruct the dynamic 3D scene and the underlying motion of every point. We represent the underlying 4D scene as a global 4D scene map $\mathcal{G}$ and construct it in a streaming way. Assume we have constructed $\mathcal{G}^{1\rightarrow t}$. For the new frame at timestep $t + 1$ , we first compute its monocular depth map, segmentation mask, and 2D tracks using off-the-shelf models. Then, we extend the 4D scene map using: (1) anchor-based propagation to propagate existing dynamic Gaussians to the next timestep (Section 3.2); (2) 4D scene expansion with a canonical Gaussian map (Section 3.3); and (3) fast optimization for both foreground and background (Section 3.4). After training over the entire sequence, our 4D scene map allows for novel view rendering at any queried timestep and can also be used for point tracking.
Early work such as Particle Video [34] modeled video motion using particles to refine optical flow estimates for long-term consistency and occlusion handling. Subsequent methods [7, 13, 53] improved particle motion estimates and matching models. CoTracker [16] further enhanced accuracy by exploiting correlations between all image tracks using transformer networks. More recently, approaches [28, 43, 47] lift 2D points into 3D space to better handle occlusions and maintain temporal coherence. As a byproduct of dynamic scene reconstruction, our model can also achieve point tracking by utilizing the motion trajectories of 3D Gaussian primitives. + +# 3. Method + +In this section, we present the framework of 4D-Fly. Given a casually captured video with $T$ frames $\{\mathbf{I}_t\}_{t=1}^T$ along with camera intrinsics $\mathbf{K}_t \in \mathbb{R}^{3 \times 3}$ and extrinsics $\mathbf{E}_t \in \mathbb{S}\mathbb{E}(3)$ of each frame $\mathbf{I}_t$ , our goal is to reconstruct the dynamic 3D (i.e. also called 4D) scene within the monocular video. Rather than the time-consuming global Gaussian optimization framework from recent work [35, 44], we explore a novel streaming 4D reconstruction paradigm that can better unleash the data-driven priors (e.g., depth maps $\{\mathbf{D}_t\}$ , segmentation of foreground $\{\mathbf{M}_t\}$ , and 2D tracks $\{\mathbf{U}_{t \to t'}\}$ ) to align with the explicit property of Gaussian Splatting. + +Specifically, we maintain a 4D scene map $\mathcal{G}^{1\rightarrow t}$ to represent the 4D scene constructed by the observation from 1 to $t$ .As a new frame $\mathbf{I}_{t + 1}$ comes, we first guide existing dynamic Gaussian primitives propagating to the next time step with anchor-based strategy, and obtain $\mathcal{G}_{init}^{1\to t + 1}$ (Sec + +tion 3.2). Then we build a canonical Gaussian map (CGM) for $\mathbf{I}_{t + 1}$ and incorporate new Gaussians to $\mathcal{G}_{init}^{1\rightarrow t + 1}$ based on CGM (Section 3.3). 
Finally, we further improve the quality of the current 4D scene map $\mathcal{G}^{1\rightarrow t + 1}$ with a fast component-specific optimization strategy in a few steps (Section 3.4). Before detailing our method, we first review the 3DGS [18] and introduce the formulation of the 4D scene map. We show a schematic of our pipeline in Figure 2 and the overall algorithm in Algorithm 1. + +# 3.1. Scene Representation + +Preliminaries of 3DGS. 3D Gaussian Splitting (3DGS) [18] represents a 3D scene as a collection of anisotropic 3D Gaussian primitives $\mathcal{G} = \{G_i\}_{i=0}^M$ . Each Gaussian is parameterized by its position (mean) $\pmb{\mu} \in \mathbb{R}^3$ , rotation $\mathbf{R} \in \mathbb{SO}(3)$ , scale $\mathbf{S} \in \mathbb{R}^3$ , opacity $o \in \mathbb{R}$ , and spherical harmonics coefficients $\mathbf{sh} \in \mathbb{R}^k$ . The influence of each Gaussian primitive in 3D space is defined by + +$$ +G _ {i} (\mathbf {x}) = e ^ {- \frac {1}{2} \left(\mathbf {x} - \boldsymbol {\mu} _ {i}\right) ^ {T} \boldsymbol {\Sigma} ^ {- 1} \left(\mathbf {x} - \boldsymbol {\mu} _ {i}\right)}, \tag {1} +$$ + +where $\mathbf{x}$ represents an arbitrary position in 3D space and $\pmb{\Sigma} = \mathbf{R}\pmb{\mathbb{S}}\pmb{\mathbb{S}}^T\mathbf{R}^T$ denotes the positive semi-definite covariance matrix of the Gaussian. + +To render an image, each 3D Gaussian is projected onto the image plane as a 2D Gaussian $G_{i}^{\prime}$ parameterized by $\pmb{\mu}^{\prime} \in \mathbb{R}^{2}$ and $\pmb{\Sigma}^{\prime} \in \mathbb{R}^{2 \times 2}$ following the process as described in [55]. 
These 2D Gaussians can then be efficiently rasterized into an RGB image via alpha compositing: + +$$ +\hat {\mathbf {I}} (\mathbf {p}) = \sum_ {i \in H (\mathbf {p})} \mathbf {c} _ {i} \alpha_ {i} \prod_ {j = 1} ^ {i - 1} (1 - \alpha_ {j}) \tag {2} +$$ + +Algorithm 1 The Framework of 4D-Fly +1: Input: A video with $T$ frames $\{\mathbf{I}_t\}_{t = 1}^T$ , camera intrinsics $\{\mathbf{K}_t\}_{t = 1}^T$ , +camera extrinsics $\{\mathbf{E}_t\}_{t = 1}^T$ +2: Prior Preparation: monocular depth maps $\{\mathbf{D}_t\}_{t = 1}^T$ , foreground segmentation $\{\mathbf{M}_t\}_{t = 1}^T$ , and 2D tracks $\{\mathbf{U}_{t\rightarrow t + 1}\}_{t = 1}^{T - 1}$ +3: Initialization: $\mathcal{G}_{init}^{1\to 1}\gets \emptyset$ ; $\lambda_1,\lambda_2$ are hyperparameters. +4: for $t = 0,\dots,T - 1$ do +5: if $t > 0$ then +6: $\mathcal{G}_{init}^{1\to t + 1}\gets$ Propagation $(\mathcal{G}^{1\to t},\mathbf{D}_{t + 1},\mathbf{U}_{t\to t + 1})$ +7: $\hat{S} (\mathbf{p}),\hat{\mathbf{D}} (\mathbf{p})\gets$ RenderSD $(\mathcal{G}_{init}^{1\to t + 1},t,\mathbf{E}_{t + 1},\mathbf{K}_{t + 1})$ +8: $\hat{M} (\mathbf{p})\gets (\hat{S} (\mathbf{p}) < \lambda_1)\wedge (\hat{D} (\mathbf{p}) - D(\mathbf{p}) > \lambda_2)$ for all $\mathbf{p}$ +9: else +10: $\hat{M} (\mathbf{p})\gets 1$ for all $\mathbf{p}$ +11: end if +12: $CGM(\mathbf{p})\gets$ BuildCGM $(\mathbf{I}_{t + 1},\mathbf{D}_{t + 1},\mathbf{M}_{t + 1},\mathbf{K}_{t + 1},\mathbf{E}_{t + 1})$ +13: $\mathcal{G}^{1\to t + 1}\gets \mathcal{G}_{init}^{1\to t + 1}\cup \{\mathcal{CGM}(\mathbf{p})|\hat{M} (\mathbf{p}) = 1\}$ +14: for $i = 1,\ldots ,n_{1}$ do +15: Foreground $(\mathcal{G}^{1\to t + 1},\mathbf{I}_{t + 1},\mathbf{D}_{t + 1},\mathbf{M}_{t + 1},\mathbf{E}_{t + 1},\mathbf{K}_{t + 1})$ +16: end for +17: for $i = 1,\dots,n_{2}$ do +18: Sample $t_i\sim \{1,\dots,t + 1\}$ +19: Background $(\mathcal{G}^{1\to t + 1},\mathbf{I}_{t_i + 1},\mathbf{D}_{t_i + 1},\mathbf{M}_{t_i + 1},\mathbf{E}_{t_i + 1},\mathbf{K}_{t_i + 1})$ +20: end for +21: end for + +where $\alpha_{i} =
o_{i}\cdot \exp \left(-\frac{1}{2} (\mathbf{p} - \boldsymbol{\mu}^{\prime})^{T}\boldsymbol{\Sigma}^{\prime -1}(\mathbf{p} - \boldsymbol{\mu}^{\prime})\right)$ , $\mathbf{c}_i$ is the view-dependent color computed by $\mathbf{sh}_i$ , and $H(\mathbf{p})$ is the set of Gaussians that intersect the ray originating from the pixel $\mathbf{p}$ . The entire process is differentiable, allowing parameters of the Gaussian primitives to be directly optimized. + +Formulation of 4D Scene Map. To represent the 4D scene, we adopt a trajectory-based representation similar to [35]. Specifically, we define the dynamic Gaussian primitive incorporated into the 4D scene map $\mathcal{G}^{1\rightarrow t}$ at timestep $s_i$ as $G_{d,i}^{s_i} = (\pmb {\mu}_i,\mathbf{R}_i,\mathbf{S}_i,o_i,\mathbf{c}_i,\Delta \pmb {\mu}_i,\Delta \mathbf{c}_i,s_i)$ . We further define $(\pmb {\mu}_i,\mathbf{R}_i,\mathbf{S}_i,o_i,\mathbf{c}_i,s_i)$ as the canonical state of the Gaussian, which is established in Section 3.3 and remains fixed after timestep $s_i$ . The dynamics of the Gaussian are represented by the parameters $\Delta \pmb {\mu}_i\in \mathbb{R}^{(t - s_i)\times 3}$ and $\Delta \mathbf{c}_i\in \mathbb{R}^{(t - s_i)\times 3}$ , which define the position and color offsets of each Gaussian from its canonical state at subsequent timesteps. Note that we allow for color offsets in Gaussians because, in practice, we observe that the color of the same point in a scene can undergo slight changes over time (e.g., due to lighting variations or reflections). We apply a regularization term to control this variation, preventing color adjustments from replacing actual translations (see Section 3.4). + +We use a set of dynamic Gaussians $\mathcal{G}_d^{1\to t} = \{G_{d,i}^{s_i}\}_{i = 1}^{M_d}$ to model the dynamic component of $\mathcal{G}^{1\rightarrow t}$ and maintain a set of persistent static Gaussians $\mathcal{G}_s^{1\rightarrow t} = \{G_{s,i}\}_{i = 1}^{M_s}$ to represent the static component.
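As a hypothetical sketch (field and method names are illustrative, not from the paper's code), the per-Gaussian record described above can be organized so that the canonical state stays frozen after insertion while per-timestep offsets accumulate alongside it:

```python
from dataclasses import dataclass, field
import numpy as np

# Hypothetical layout for one dynamic Gaussian in the trajectory-based
# representation: canonical state frozen at insertion timestep s_i; only
# position/color offsets (relative to the canonical state) are stored for
# later timesteps.

@dataclass
class DynamicGaussian:
    mu: np.ndarray       # canonical position, shape (3,)
    scale: float         # isotropic scale, as in the paper's implementation
    opacity: float
    color: np.ndarray    # canonical RGB, shape (3,)
    s_i: int             # insertion timestep of this Gaussian
    d_mu: list = field(default_factory=list)   # offsets Δμ for t > s_i
    d_c: list = field(default_factory=list)    # offsets Δc for t > s_i

    def position_at(self, t: int) -> np.ndarray:
        """Canonical position plus the stored offset for timestep t."""
        if t <= self.s_i:
            return self.mu
        return self.mu + self.d_mu[t - self.s_i - 1]
```

Storing offsets relative to the fixed canonical state (rather than chained frame-to-frame deltas) matches the definition of $\Delta \pmb{\mu}_i$ above and keeps any timestep's position recoverable in one addition.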
Therefore, the overall 4D scene map is represented as $\mathcal{G}^{1\rightarrow t} = \mathcal{G}_d^{1\rightarrow t}\cup \mathcal{G}_s^{1\rightarrow t}$ . In our implementation, we constrain both dynamic and static Gaussians to be isotropic to prevent overfitting to a single view in the under-constrained monocular setting. In the following, we detail how we construct it frame by frame. + +# 3.2. Anchor-based Gaussian Propagation + +Assume that a 4D scene map $\mathcal{G}^{1\rightarrow t}$ has been constructed from observations of timesteps 1 to $t$ . Given a new frame $\mathbf{I}_{t + 1}\in \mathbb{R}^{H\times W\times 3}$ with additional data-driven priors, we first determine the position offset $\Delta \pmb{\mu}_{i,t + 1}^{init}$ for every dynamic Gaussian in $\mathcal{G}_d^{1\rightarrow t}$ , thus propagating them to the next timestep. While existing methods primarily focus on learning it through a prior-guided global [35, 44] or per-frame [28, 36] optimization process which consumes hours of reconstruction time, we explore the potential of directly guiding the motion of Gaussians with 2D tracks. However, the process is not trivial because (1) there is no direct one-to-one mapping between pixel-level derived 2D tracks and 3D Gaussian primitives; and (2) only visible regions of the scene are tracked, so motion in occluded areas cannot be captured through 2D tracks. To overcome this, we introduce an anchor-based Gaussian propagation strategy with data-driven and physical motion priors.
First, we lift the queried pixels $\{\mathbf{p}_j\}$ (i.e., sampled in the foreground area of $\mathbf{I}_t$ ) of 2D tracks $\{\mathbf{U}_{t\to t + 1}\}$ into 3D space to build a set of anchor points $\{\mathbf{x}_j\}$ : + +$$ +\mathbf {x} _ {j} = \mathcal {F} \left(\mathbf {p} _ {j}, \mathbf {D} _ {t} \left(\mathbf {p} _ {j}\right)\right), \tag {3} +$$ + +where $\mathcal{F}$ is the unprojection function and $\mathbf{D}_t(\mathbf{p}_j)$ represents the depth of pixel $\mathbf{p}_j$ derived from the depth prior $\mathbf{D}_t$ . Then we compute the 3D motion of each anchor point from timestep $t$ to timestep $t + 1$ : + +$$ +\mathbf {a} _ {j} = \mathcal {F} \left(\mathbf {U} _ {t \rightarrow t + 1} (\mathbf {p} _ {j}), \mathbf {D} _ {t + 1} \left(\mathbf {U} _ {t \rightarrow t + 1} (\mathbf {p} _ {j})\right)\right) - \mathbf {x} _ {j}. \tag {4} +$$ + +The anchor points and their motions collectively form a motion field from timestep $t$ to timestep $t + 1$ . We assume the 3D motion is locally rigid and compute the 3D motion of Gaussian $G_{d,i}^{s_i}$ based on the 3D motion of its nearby anchor points as: + +$$ +\Delta \boldsymbol {\mu} _ {i, t + 1} ^ {i n i t} = \Delta \boldsymbol {\mu} _ {i, t} + \sum_ {\mathbf {x} _ {j} \in \mathcal {N} (\boldsymbol {\mu} _ {i})} \mathrm {S o f t m a x} (- \| \mathbf {x} _ {j} - \boldsymbol {\mu} _ {i} \|) \mathbf {a} _ {j}, \tag {5} +$$ + +where $\mathcal{N}(\pmb{\mu}_i)$ is the set of $K$ nearest anchor points of $G_{d,i}^{s_i}$ computed with the KNN algorithm. After this step, all dynamic Gaussians in the 4D scene map $\mathcal{G}^{1\rightarrow t}$ are propagated to timestep $t + 1$ guided by both data-driven and physical motion priors, while the static Gaussians remain unchanged as they persist across frames. We denote the 4D scene map after this step as $\mathcal{G}_{init}^{1\rightarrow t + 1}$ . + +# 3.3. Expand 4D Scene Map with CGM + +Given the 4D scene map $\mathcal{G}_{init}^{1\rightarrow t + 1}$ , we aim to incorporate new Gaussian primitives to model regions that are not observed from timestep 1 to timestep $t$ but observed at timestep $t + 1$ . We first identify the mask of unreconstructed areas $\hat{M} (\mathbf{p})$ based on the density map $\hat{S} (\mathbf{p})$ and depth map $\hat{D} (\mathbf{p})$ [17], which are obtained by rendering $\mathcal{G}_{init}^{1\rightarrow t + 1}$ with $\mathbf{E}_{t + 1}$ and $\mathbf{K}_{t + 1}$ : + +$$ +\hat {S} (\mathbf {p}) = \sum_ {i \in H (\mathbf {p})} \alpha_ {i} \prod_ {j = 1} ^ {i - 1} (1 - \alpha_ {j}), \tag {6} +$$ + +$$ +\hat {D} (\mathbf {p}) = \sum_ {i \in H (\mathbf {p})} d _ {i} \alpha_ {i} \prod_ {j = 1} ^ {i - 1} (1 - \alpha_ {j}). \tag {7} +$$ + +Then we compute $\hat{M} (\mathbf{p})$ by: + +$$ +\hat {M} (\mathbf {p}) = \left(\hat {S} (\mathbf {p}) < \lambda_ {1}\right) \wedge \left(\hat {D} (\mathbf {p}) - D (\mathbf {p}) > \lambda_ {2}\right). \tag {8} +$$ + +The first term of Equation 8 indicates areas where $\mathcal{G}_{init}^{1\rightarrow t + 1}$ is too sparse, and the second term indicates new geometry in front of the currently constructed geometry. Both regions should be constructed based on the observation at timestep $t + 1$ . + +Based on $\hat{M}(\mathbf{p})$ , we determine which pixels in $\mathbf{I}_{t+1}$ should be added to $\mathcal{G}_{init}^{1 \rightarrow t+1}$ . For static settings, unprojection-based methods offer a simple yet effective approach to lift these pixels into Gaussian primitives [17]. However, for dynamic Gaussians, once added into $\mathcal{G}_{init}^{1 \rightarrow t+1}$ , only their offsets can be optimized, with other canonical properties remaining fixed in subsequent timesteps (see Section 3.4). This makes it crucial for dynamic Gaussians to have a well-defined canonical state.
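In array form, the unreconstructed-area test of Eq. (8) is a pair of elementwise comparisons. A minimal sketch (the function name and threshold values are placeholders, not the paper's settings):

```python
import numpy as np

# Sketch of Eq. (8): a pixel receives new Gaussians when the rendered
# density S_hat is too sparse AND the rendered depth D_hat lies behind the
# prior depth D by more than a margin lam2.

def unreconstructed_mask(S_hat, D_hat, D, lam1=0.5, lam2=0.1):
    """S_hat, D_hat, D: (H, W) arrays; returns a boolean (H, W) mask."""
    return (S_hat < lam1) & (D_hat - D > lam2)
```

Pixels where the mask is true are exactly those lifted into new Gaussians via the canonical Gaussian map described next.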
Unfortunately, the canonical state provided by unprojection-based methods is often too coarse to determine an accurate motion. To address this, we propose to construct a canonical Gaussian map $CGM(\mathbf{p})$ before adding Gaussians to $\mathcal{G}_{init}^{1 \rightarrow t+1}$ . The $CGM(\mathbf{p})$ defines a one-to-one mapping between pixels and Gaussians, i.e., $G_{\mathbf{p}} = CGM(\mathbf{p})$ represents the Gaussian corresponding to pixel $\mathbf{p}$ . To construct the CGM, we first use unprojection-based methods to initialize each $G_{\mathbf{p}}$ and then run an optimization process supervised by + +$$ +\mathcal {L} = \sum_ {F \in \left\{\mathbf {I}, D, M \right\}} \lambda_ {F} \mathcal {L} _ {F} + \lambda_ {\text {align}} \mathcal {L} _ {\text {align}}, \tag {9} +$$ + +where $\mathcal{L}_{\mathbf{I}}$ is the $L_{1}$ loss between $\mathbf{I}_{t + 1}$ and $\hat{\mathbf{I}}_{CGM}$ , which is obtained by rendering $\{G_p\}_{p = 1}^P$ with $\mathbf{E}_{t + 1}$ and $\mathbf{K}_{t + 1}$ . Similar definitions apply to $\mathcal{L}_M$ and $\mathcal{L}_D$ . $\mathcal{L}_{align}$ is a regularization term to keep Gaussian $G_{\mathbf{p}}$ aligned with $\mathbf{p}$ : + +$$ +\mathcal {L} _ {\text {align}} = \sum_ {p = 1} ^ {P} \left\| \Pi \left(\boldsymbol {\mu} _ {G _ {p}}\right) - \mathbf {p} \right\| _ {2} ^ {2}, \tag {10} +$$ + +where $\Pi$ is the perspective projection transformation with respect to $\mathbf{E}_{t + 1}$ and $\mathbf{K}_{t + 1}$ . Once the Canonical Gaussian Map is constructed, the Gaussian set $\{CGM(\mathbf{p})\mid \hat{M} (\mathbf{p}) = 1\}$ is added to the scene map to model the areas unconstructed in previous timesteps. + +# 3.4. Fast 4D Scene Optimization + +Building upon the reliable initialization $\mathcal{G}^{1\rightarrow t + 1}$ from the previous stages, we further enhance the visual quality and motion accuracy of $\mathcal{G}^{1\rightarrow t + 1}$ using a carefully designed optimization strategy.
Since $\mathcal{G}^{1\rightarrow t + 1}$ is mostly constructed, this step primarily focuses on correcting noise introduced by the data-driven priors and physics-based assumptions. The optimization process is therefore highly efficient, requiring less than 2.5 seconds per frame in our streaming framework.

Optimization for Dynamic Gaussians. We only optimize the position offsets $\Delta \boldsymbol{\mu}_{i,t + 1}$ and color offsets $\Delta \mathbf{c}_{i,t + 1}$ of dynamic Gaussians. The optimization process is supervised by

$$
\mathcal{L}_d = \sum_{F \in \{\mathbf{I}, D, M\}} \lambda_{d,F} \mathcal{L}_{d,F} + \lambda_{\text{reg,rig}} \mathcal{L}_{\text{reg,rig}} + \lambda_{\text{reg,col}} \mathcal{L}_{\text{reg,col}}, \tag{11}
$$

where $\mathcal{L}_{d,\mathbf{I}}$, $\mathcal{L}_{d,D}$, and $\mathcal{L}_{d,M}$ are the $L_1$ distances between the rendered and ground-truth images, depths, and foreground masks at timestep $t + 1$, respectively. $\mathcal{L}_{\text{reg,rig}}$ is a regularization term based on both physical motion priors and the motion learned in Section 3.2, which is essential for maintaining the correct motion of Gaussians. Specifically, we apply a rigidity constraint among nearby Gaussians that exhibit similar motion trends during the optimization process. To achieve this, we first define the initial velocity of dynamic Gaussian $G_{d,i}^{s_i}$ based on the offsets computed in Section 3.2:

$$
\mathbf{v}_{i, t + 1}^{\text{init}} = \Delta \boldsymbol{\mu}_{i, t + 1}^{\text{init}} - \Delta \boldsymbol{\mu}_{i, t}. \tag{12}
$$

A Gaussian $G_j$ that maintains rigidity with Gaussian $G_i$ should be close to it not only spatially but also in terms of $\mathbf{v}_{i,t+1}^{\text{init}}$, which reflects the motion trend of a Gaussian as determined by the 2D track prior model.
(Consider an animal walking: its legs move alternately, so certain points on the inner sides of the legs are close in space, yet their velocities are opposite.) Thus, we define $\mathcal{L}_{reg,rig}$ by

$$
\mathcal{L}_{reg,rig} = \frac{1}{k_2 \left|\mathcal{G}_d\right|} \sum_{G_i \in \mathcal{G}_d} \sum_{G_j \in \mathcal{N}_i} \mathcal{L}_{i,j}^{rig}, \tag{13}
$$

where $\mathcal{N}_i$ is the set of neighbor Gaussians defined by both spatial distance and initial-velocity distance. To obtain $\mathcal{N}_i$, we first use the $k$-nearest-neighbor algorithm to compute $\mathcal{N}_i' = k_1 NN(G_i)$ over all dynamic Gaussians with spatial distance as the metric. Then we compute $\mathcal{N}_i = k_2 NN(G_i)$, with $k_2 < k_1$, within $\mathcal{N}_i'$ using initial-velocity distance as the metric. $\mathcal{L}_{i,j}^{rig}$ is the rigidity loss between $G_i$ and $G_j$, defined by

$$
\mathcal{L}_{i,j}^{rig} = w_{i,j} \left\| \left(\boldsymbol{\mu}_{j,t} - \boldsymbol{\mu}_{i,t}\right) - \left(\boldsymbol{\mu}_{j,t+1} - \boldsymbol{\mu}_{i,t+1}\right) \right\|_2, \tag{14}
$$

where $w_{i,j}$ is a weighting factor for the Gaussian pair based on both spatial distance and velocity distance:

$$
w_{i,j} = \exp\left(-\lambda_{\boldsymbol{\mu}} \left\| \boldsymbol{\mu}_{i,t} - \boldsymbol{\mu}_{j,t} \right\|_2 - \lambda_{\mathbf{v}} \left\| \mathbf{v}_{i,t}^{init} - \mathbf{v}_{j,t}^{init} \right\|_2\right). \tag{15}
$$

![](images/82ab785a753fcdb4da13835e7afa0914ef5e7d595ebd68eb2b11c7d849369563.jpg)
Figure 3. Qualitative Comparison on the DyCheck iPhone Dataset. We present the qualitative comparison between our method and baseline methods for novel-view synthesis on the iPhone dataset. The "Train-view" column displays the training view at the same time step as the validation views shown in the other columns. Time is averaged per 100 frames.
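The two-stage neighbor search and pair weighting of Equations 13–15 can be sketched as follows. This is a brute-force numpy illustration; $k_1$, $k_2$, and the $\lambda$ values are placeholders rather than the paper's settings.

```python
import numpy as np

def rigidity_neighbors(mu, v_init, k1=20, k2=8):
    """Two-stage selection for Eq. (13): k1-NN by spatial distance,
    then k2-NN (k2 < k1) among those by initial-velocity distance."""
    n = mu.shape[0]
    d_mu = np.linalg.norm(mu[:, None] - mu[None, :], axis=-1)
    np.fill_diagonal(d_mu, np.inf)              # exclude self-matches
    nbrs = np.argsort(d_mu, axis=1)[:, :k1]     # spatial k1-NN -> N_i'
    out = np.empty((n, k2), dtype=int)
    for i in range(n):
        d_v = np.linalg.norm(v_init[nbrs[i]] - v_init[i], axis=-1)
        out[i] = nbrs[i][np.argsort(d_v)[:k2]]  # velocity k2-NN -> N_i
    return out

def rigidity_weight(mu_i, mu_j, v_i, v_j, lam_mu=2.0, lam_v=2.0):
    """Eq. (15): pair weight from spatial and velocity distances."""
    return np.exp(-lam_mu * np.linalg.norm(mu_i - mu_j)
                  - lam_v * np.linalg.norm(v_i - v_j))
```

The velocity filter is what separates the two "inner leg" points in the example above: they pass the spatial $k_1$-NN stage but are discarded by the velocity-based $k_2$-NN stage.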
$\mathcal{L}_{\text{reg,col}}$ is a regularization term that keeps the color of dynamic Gaussians consistent over time, defined by

$$
\mathcal{L}_{\text{reg,col}} = \frac{1}{|\mathcal{G}_d|} \sum_{G_i \in \mathcal{G}_d} \|\mathbf{c}_{i,t+1} - \mathbf{c}_{i,t}\|_2. \tag{16}
$$

During the dynamic Gaussian optimization process, only the dynamic Gaussians $\{G_{d,i}^{s_i}\}$ receive gradients; the static Gaussians $\{G_{s,i}\}$ are not optimized.

Optimization for Static Gaussians. The static Gaussians $\{G_{s,i}\}$ persist across timesteps. To prevent the optimization for timestep $t + 1$ from disrupting the reconstruction results from timesteps 1 to $t$, we randomly sample frames from 1 to $t + 1$ for optimization. The optimization process is supervised by the $L_1$ distances between the rendered and ground-truth images, depths, and foreground masks at the sampled timestep.

# 4. Experiments

# 4.1. Implementation Details

Data-driven Priors. For videos captured in the wild, we use DepthCrafter [14] to generate depth maps and Track Anything [48] to obtain foreground segmentation masks. Subsequently, we sample a dense grid (every four pixels) within the foreground and compute the 2D tracks of these points using TAPIR [6]. Finally, we employ Droid-SLAM [39] to estimate the camera parameters.

Initialization for Canonical Gaussian Map. We initialize $\boldsymbol{\mu}_i$ by lifting pixels to 3D using the depth map and the camera projection matrix. The rotation $\mathbf{R}$ is represented by a quaternion fixed to the identity to satisfy the isotropic assumption. The Gaussian scale is set to be isotropic and is initialized from the depth value and camera intrinsics, i.e., $s_i = 2d / (f_x + f_y)$. We initialize the opacity $o_i$ by applying the sigmoid function to 1, i.e., $o_i = \mathrm{Sigmoid}(1)$. As

Table 1.
Quantitative comparisons for novel view synthesis on the DyCheck iPhone [11] dataset and the NVIDIA Dynamic Scenes [51] dataset. "Time" denotes the average time required to process 100 frames on each dataset, measured in minutes, while "FPS" represents the average frame rate during test rendering. For the DyCheck iPhone dataset, metrics are reported using a co-visibility mask. For the NVIDIA Dynamic Scenes dataset, all pixels are evaluated. + +
DyCheck iPhone:

| Method | Time ↓ | FPS ↑ | mSSIM ↑ | mLPIPS ↓ | mPSNR ↑ |
|---|---|---|---|---|---|
| T-NeRF [11] | >1200 | <1 | 0.55 | 0.56 | 15.72 |
| HyperNeRF [32] | >1200 | <1 | 0.58 | 0.53 | 15.83 |
| 4D Gaussians [46] | 22.1 | 76 | 0.39 | 0.73 | 13.11 |
| Dynamic GM [35] | 187.6 | 172 | 0.59 | 0.44 | 15.79 |
| Shape of Motion [44] | 109.4 | 39.2 | 0.62 | 0.43 | 16.42 |
| 4D-Fly (Ours) | 5.3 | 163 | 0.60 | 0.37 | 17.03 |

NVIDIA Dynamic Scenes:

| Method | Time ↓ | FPS ↑ | mSSIM ↑ | mLPIPS ↓ | mPSNR ↑ |
|---|---|---|---|---|---|
| T-NeRF [11] | >1200 | <1 | 0.59 | 0.17 | 20.76 |
| HyperNeRF [32] | >1200 | <1 | 0.57 | 0.18 | 20.05 |
| 4D Gaussians [46] | 21.3 | 81 | 0.48 | 0.38 | 17.69 |
| Dynamic GM [35] | 156.7 | 175 | 0.66 | 0.15 | 22.36 |
| Shape of Motion [44] | 102.2 | 40.3 | 0.63 | 0.15 | 22.07 |
| 4D-Fly (Ours) | 5.1 | 169 | 0.69 | 0.14 | 22.52 |
+ +pixels are lifted into Gaussians, we assign the segmentation label of each pixel to the corresponding Gaussian primitive, which remains fixed thereafter. + +Optimization and Time. We use the Adam optimizer in our optimization process. For each frame, our framework performs a total of 200 optimization iterations: 50 iterations to build the canonical Gaussian map (Section 3.3), 50 iterations to optimize the background, and 100 iterations to optimize the foreground (Section 3.4). Detailed learning rate settings are provided in the supplementary material. For a sequence of 100 frames at $960 \times 720$ resolution, our method takes approximately 5.3 minutes to complete the training process. Our rendering speed is around 160 fps. + +# 4.2. Datasets and Evaluation Metrics + +We evaluate our framework quantitatively and qualitatively on the DyCheck iPhone dataset [11] and NVIDIA Dynamic Scenes dataset [51]. + +DyCheck iPhone dataset. The DyCheck iPhone dataset [11] comprises 14 sequences of everyday dynamic scenes, each containing 200 to 500 training frames captured using an iPhone. For seven of these sequences, the dataset includes additional synchronized recordings from two static cameras positioned with a wide baseline, providing novel-view data for evaluating novel view synthesis tasks. Furthermore, the dataset provides metric depth obtained from LiDAR sensors, co-visible masks that indicate regions visible between training and evaluation views, and annotations of 5 to 15 keypoints at ten equally spaced time steps for each sequence. We utilize the metric depth from LiDAR sensors in all baseline methods requiring depth input. Apart from novel view synthesis, we also evaluate 3D tracking and 2D tracking based on keypoint annotations. + +NVIDIA Dynamic Scenes dataset. The NVIDIA Dynamic Scenes dataset [51] comprises sequences of 90 to 200 frames captured with a rig of 12 calibrated cameras. 
Following [35], we use seven scenes that exhibit various dynamic activities for the evaluation. Since it lacks keypoint annotations, we only use it to evaluate novel view synthesis.

Metrics. For the NVS task, we report co-visibility-masked image metrics: masked Peak Signal-to-Noise Ratio (mPSNR), masked Structural Similarity (mSSIM), and masked Learned Perceptual Image Patch Similarity (mLPIPS) [11]. Since the NVIDIA Dynamic Scenes dataset does not provide co-visibility mask annotations, we evaluate on all pixels. For 3D tracking, we use the 3D end-point error (3D EPE), $\delta_{3D}^{5}$, and $\delta_{3D}^{10}$ as our metrics [40, 44]: the Euclidean distance between the ground-truth and predicted 3D tracks at each target time step, and the percentage of points that fall within $5\,\mathrm{cm}$ and $10\,\mathrm{cm}$ of the ground-truth 3D location, respectively. For 2D tracking, we use Average Jaccard (AJ), average position accuracy ($\delta_{avg}$), and Occlusion Accuracy (OA) [5].

# 4.3. Comparisons

Novel View Synthesis. For this task, we select a comprehensive set of competitive baseline methods. Among NeRF-based approaches, we compare our method with T-NeRF [11] and HyperNeRF [32]. For Gaussian-based dynamic reconstruction methods designed for multi-view (pseudo-monocular) settings, we select 4D Gaussians [8]. Additionally, we compare our method with more recent Gaussian-based approaches that leverage various priors to regularize optimization, including Dynamic Gaussian Marbles [35] and Shape of Motion [44]. As shown in Table 1, 4D-Fly significantly outperforms NeRF-based and pseudo-monocular Gaussian-based methods in terms of mPSNR, mSSIM, and mLPIPS on both the DyCheck iPhone dataset and the NVIDIA Dynamic Scenes dataset. Notably, our method demonstrates substantial improvements on the DyCheck iPhone dataset, which provides limited multi-view information.
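The masked image metrics and tracking metrics defined above are straightforward to compute; a minimal numpy sketch (function names and the $[0,1]$ image range are our assumptions, not the benchmark code):

```python
import numpy as np

def masked_psnr(pred, gt, mask):
    """mPSNR: PSNR computed only over co-visible pixels (mask == 1),
    for images normalized to [0, 1]."""
    mse = ((pred - gt) ** 2)[mask.astype(bool)].mean()
    return 10.0 * np.log10(1.0 / mse)

def epe_3d(pred_tracks, gt_tracks):
    """3D end-point error plus delta@5cm / delta@10cm accuracies.
    Tracks are (N, 3) point positions in meters at one target timestep."""
    err = np.linalg.norm(pred_tracks - gt_tracks, axis=-1)
    return err.mean(), (err < 0.05).mean(), (err < 0.10).mean()
```

mSSIM and mLPIPS follow the same masking pattern, with the per-pixel squared error replaced by the corresponding similarity map.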
Among Gaussian-based methods that also incorporate various priors, 4D-Fly surpasses Dynamic Gaussian Marbles and achieves comparable performance to Shape of Motion. By utilizing prior information more efficiently, 4D-Fly achieves a $35 \times$ speed-up over Dynamic Gaussian Marbles and a $20 \times$ speed-up over Shape of Motion. Figure 3 presents qualitative comparisons between our framework and baseline methods.

Table 2. Quantitative comparisons for point tracking on the DyCheck iPhone [11] dataset. We report both 3D tracking results and 2D tracking results of 4D-Fly and baseline methods. EPE, $\delta_{3D}^{5}$, and $\delta_{3D}^{10}$ are 3D tracking metrics; AJ, $\delta_{avg}$, and OA are 2D tracking metrics.

| Method | EPE ↓ | $\delta_{3D}^{5}$ ↑ | $\delta_{3D}^{10}$ ↑ | AJ ↑ | $\delta_{avg}$ ↑ | OA ↑ |
|---|---|---|---|---|---|---|
| HyperNeRF [32] | 0.19 | 26.2 | 44.6 | 10.0 | 19.3 | 55.0 |
| 4D Gaussians [46] | 0.17 | 27.2 | 46.8 | 12.5 | 29.8 | 63.9 |
| CoTracker [16] + DC [14] | 0.20 | 33.2 | 56.1 | 24.2 | 34.0 | 72.7 |
| TAPIR [7] + DC [14] | 0.11 | 38.1 | 63.2 | 27.4 | 41.3 | 67.2 |
| Shape of Motion [44] | 0.08 | 43.2 | 72.3 | 34.4 | 44.0 | 82.7 |
| 4D-Fly (Ours) | 0.08 | 40.1 | 70.3 | 34.1 | 42.1 | 85.4 |

Point Tracking. Leveraging a trajectory-based dynamic Gaussian representation, our method produces consistent 2D and 3D tracks directly from the Gaussian trajectories learned during reconstruction. To extract 2D and 3D tracks, we follow the approach detailed in [28]. We extensively compare our framework against five baseline methods. For HyperNeRF [32], we combine the learned inverse mapping with a forward mapping [11, 44] to produce 3D tracks at specific query points. For 4DGS, we compute 4D motion from the canonical positions and deformations encoded in HexPlane [2]. For Shape of Motion [44], we utilize the authors' official open-source implementation. We also evaluate our method against state-of-the-art long-range 2D tracking approaches, i.e., CoTracker [16] and TAPIR [7]. To lift the 2D tracks into 3D, we employ aligned monocular depth maps generated by DepthCrafter [14]. Table 2 shows that our 4D-Fly surpasses all baseline approaches in both 2D and 3D tracking except for Shape of Motion [44]. In our framework, we use 2D tracks from TAPIR [7] as prior information. Through our tailored fast optimization process and regularization terms, we exceed TAPIR's performance across all tracking metrics while achieving high-quality novel-view rendering. Although our method relies only on track priors from adjacent frames during training (unlike Shape of Motion, which uses priors across all frame pairs), we significantly reduce training time while attaining comparable tracking performance.
A noticeable decline is observed across all metrics, as treating background regions as dynamic prevents them from leveraging multi-view constraints to improve their geometry. We then ablate three key components of our framework: Anchor-Based Gaussian Propagation, the Canonical Gaussian Map, and Fast 4D Scene Optimization. The results show that the absence of any component leads to a decline in performance, with Fast 4D Scene Optimization and Anchor-Based Gaussian Propagation having the most significant impact on the results. Finally, we ablate the regularization terms used in the Fast 4D Scene Optimization process. Here, "w/o velocity distance" signifies using only spatial distance as the distance metric in $\mathcal{L}_{reg,rig}$.

Table 3. Ablation Study. We perform an ablation study on the components of 4D-Fly, demonstrating that each component is essential for achieving high-quality novel view synthesis.

| Setting | mSSIM ↑ | mLPIPS ↓ | mPSNR ↑ |
|---|---|---|---|
| w/o Segmentation | 0.42 | 0.50 | 15.51 |
| w/o Anchor-Based Prop. | 0.45 | 0.48 | 16.16 |
| w/o CGM | 0.52 | 0.46 | 16.32 |
| w/o Fast Optimization | 0.43 | 0.49 | 15.52 |
| w/o $\mathcal{L}_{reg,rig}$ | 0.54 | 0.47 | 16.27 |
| w/o velocity distance | 0.57 | 0.40 | 16.52 |
| w/o $\mathcal{L}_{reg,col}$ | 0.50 | 0.46 | 16.34 |
| 4D-Fly (Ours) | 0.60 | 0.37 | 17.03 |

# 5. Conclusion

We introduce 4D-Fly, a novel and efficient framework for fast 4D scene reconstruction from a single monocular video that addresses the challenges of under-constrained data and time-consuming optimization in existing methods. By exploiting the explicit nature of 3D Gaussian Splatting, our approach achieves over 20 times faster reconstruction than previous methods, with higher quality. The streaming reconstruction paradigm, including anchor-based Gaussian propagation, scene expansion with canonical Gaussian maps, and efficient optimization, significantly improves visual quality and motion accuracy. We conduct quantitative experiments on the DyCheck iPhone and NVIDIA Dynamic Scenes datasets and qualitative experiments on the DAVIS dataset, and show that 4D-Fly outperforms state-of-the-art methods in terms of speed and quality.

Limitations and Future work. While 4D-Fly achieves fast 4D reconstruction and long-range point tracking, several challenges remain. First, our method relies on prior knowledge from other vision foundation models, which may be inaccurate. Second, our method exhibits limitations in handling complex illumination variations and in modeling non-rigid deformations. In the future, exploring feed-forward approaches for robust 4D scene reconstruction and generation presents a promising research direction.

Acknowledgements. This work was supported in part by the National Natural Science Foundation of China under Grant 62206147, and in part by the 2024 Tencent AI Lab Rhino-Bird Focused Research Program.
+ +# References + +[1] Michal Adamkiewicz, Timothy Chen, Adam Caccavale, Rachel Gardner, Preston Culbertson, Jeannette Bohg, and Mac Schwager. Vision-only robot navigation in a neural radiance world. IEEE Robotics and Automation Letters, 7(2): 4606-4613, 2022. 1 +[2] Ang Cao and Justin Johnson. Hexplane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 130-141, 2023. 2, 8 +[3] Weiliang Chen, Fangfu Liu, Diankun Wu, Haowen Sun, Haixu Song, and Yueqi Duan. Dreamcinema: Cinematic transfer with free camera and 3d character. ArXiv, abs/2408.12601, 2024. 1 +[4] Anurag Dalal, Daniel Hagen, Kjell G Robbersmyr, and Kristian Muri Knausgård. Gaussian splatting: 3d reconstruction and novel view synthesis, a review. IEEE Access, 2024. 1 +[5] Carl Doersch, Ankush Gupta, Larisa Markeeeva, Adria Recasens, Lucas Smaira, Yusuf Aytar, Joao Carreira, Andrew Zisserman, and Yi Yang. Tap-vid: A benchmark for tracking any point in a video. In Advances in Neural Information Processing Systems, pages 13610-13626. Curran Associates, Inc., 2022. 7, 8 +[6] Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar, Joao Carreira, and Andrew Zisserman. Tapir: Tracking any point with per-frame initialization and temporal refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10061-10072, 2023. 6 +[7] Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar, Joao Carreira, and Andrew Zisserman. Tapir: Tracking any point with per-frame initialization and temporal refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10061-10072, 2023. 3, 8 +[8] Yuanxing Duan, Fangyin Wei, Qiyu Dai, Yuhang He, Wenzheng Chen, and Baoquan Chen. 4d gaussian splatting: Towards efficient novel view synthesis for dynamic scenes. arXiv preprint arXiv:2402.03307, 2024. 
2, 7 +[9] Bardienus P Duisterhof, Zhao Mandi, Yunchao Yao, Jia-Wei Liu, Mike Zheng Shou, Shuran Song, and Jeffrey Ichnowski. Md-splating: Learning metric deformation from 4d gaussians in highly deformable scenes. arXiv preprint arXiv:2312.00583, 2023. 2 +[10] Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12479–12488, 2023. 2 +[11] Hang Gao, Ruilong Li, Shubham Tulsiani, Bryan Russell, and Angjoo Kanazawa. Monocular dynamic view synthesis: A reality check. Advances in Neural Information Processing Systems, 35:33768-33780, 2022. 2, 7, 8, 3 +[12] Zhiyang Guo, Wengang Zhou, Li Li, Min Wang, and Houqiang Li. Motion-aware 3d gaussian splatting for efficient dynamic scene reconstruction. arXiv preprint arXiv:2403.11447, 2024. 1, 2 + +[13] Adam W Harley, Zhaoyuan Fang, and Katerina Fragkiadaki. Particle video revisited: Tracking through occlusions using point trajectories. In European Conference on Computer Vision, pages 59-75. Springer, 2022. 3 +[14] Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, and Ying Shan. Depthcrafter: Generating consistent long depth sequences for open-world videos. arXiv preprint arXiv:2409.02095, 2024. 6, 8 +[15] Ying Jiang, Chang Yu, Tianyi Xie, Xuan Li, Yutao Feng, Huamin Wang, Minchen Li, Henry Lau, Feng Gao, Yin Yang, et al. Vr-gs: A physical dynamics-aware interactive gaussian splatting system in virtual reality. In ACM SIGGRAPH 2024 Conference Papers, pages 1-1, 2024. 1 +[16] Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Co-tracker: It is better to track together. arXiv preprint arXiv:2307.07635, 2023. 
3, 8 +[17] Nikhil Keetha, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, and Jonathon Luiten. Splatam: Splat track & map 3d gaussians for dense rgb-d slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21357-21366, 2024. 5, 1 +[18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 2023. 1, 2, 3 +[19] Jiahui Lei, Yijia Weng, Adam Harley, Leonidas Guibas, and Kostas Daniilidis. Mosca: Dynamic gaussian fusion from casual videos via 4d motion scaffolds. arXiv preprint arXiv:2405.17421, 2024. 2 +[20] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, et al. Neural 3d video synthesis from multi-view video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5521-5531, 2022. 2 +[21] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6498-6508, 2021. 2 +[22] Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, and Noah Snavely. Dynibar: Neural dynamic image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4273-4284, 2023. 2 +[23] Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, and Lei Xiao. Gaufre: Gaussian deformation fields for real-time dynamic novel view synthesis. arXiv preprint arXiv:2312.11458, 2023.1,2 +[24] Fangfu Liu, Diankun Wu, Yi Wei, Yongming Rao, and Yueqi Duan. Sherpa3d: Boosting high-fidelity text-to-3d generation via coarse 3d prior. ArXiv, abs/2312.06655, 2023. 
1
[25] Fangfu Liu, Wenqiang Sun, Hanyang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, and Yueqi Duan. Reconx: Reconstruct any scene from sparse views with video diffusion model. arXiv preprint arXiv:2408.16767, 2024. 1
[26] Qingming Liu, Yuan Liu, Jiepeng Wang, Xianqiang Lv, Peng Wang, Wenping Wang, and Junhui Hou. Modgs: Dynamic gaussian splatting from casually-captured monocular videos. arXiv preprint arXiv:2406.00434, 2024. 2
[27] Jiahao Lu, Tianyu Huang, Peng Li, Zhiyang Dou, Cheng Lin, Zhiming Cui, Zhen Dong, Sai-Kit Yeung, Wenping Wang, and Yuan Liu. Align3r: Aligned monocular depth estimation for dynamic videos. arXiv preprint arXiv:2412.03079, 2024. 2, 3
[28] Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. In 2024 International Conference on 3D Vision (3DV), pages 800-809, 2024. 2, 3, 4, 8
[29] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7210-7219, 2021. 1
[30] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 1, 2
[31] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865-5874, 2021. 2
[32] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields.
arXiv preprint arXiv:2106.13228, 2021. 7, 8, 2 +[33] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318–10327, 2021. 2 +[34] Peter Sand and Seth Teller. Particle video: Long-range motion estimation using point trajectories. International journal of computer vision, 80:72-91, 2008. 3 +[35] Colton Stearns, Adam Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, and Leonidas Guibas. Dynamic gaussian marbles for novel view synthesis of casual monocular videos. arXiv preprint arXiv:2406.18717, 2024. 2, 3, 4, 7 +[36] Jiakai Sun, Han Jiao, Guangyuan Li, Zhanjie Zhang, Lei Zhao, and Wei Xing. 3dgstream: On-the-fly training of 3d gaussians for efficient streaming of photo-realistic free-viewpoint videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20675-20685, 2024. 2, 4 + +[37] Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, and Yikai Wang. Dimensionx: Create any 3d and 4d scenes from a single image with controllable video diffusion. arXiv preprint arXiv:2411.04928, 2024. 1 +[38] Yang-Tian Sun, Yi-Hua Huang, Lin Ma, Xiaoyang Lyu, YanPei Cao, and Xiaojuan Qi. Splatter a video: Video gaussian representation for versatile processing. arXiv preprint arXiv:2406.13870, 2024. 2 +[39] Zachary Teed and Jia Deng. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. In Advances in Neural Information Processing Systems, pages 16558-16569. Curran Associates, Inc., 2021. 6 +[40] Zachary Teed and Jia Deng. Raft-3d: Scene flow using rigid-motion embeddings. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8375–8384, 2021. 7 +[41] Fengrui Tian, Shaoyi Du, and Yueqi Duan. Mononerf: Learning a generalizable dynamic radiance field from monocular videos. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17903-17913, 2023. 2 +[42] Chaoyang Wang, Peiye Zhuang, Aliaksandr Siarohin, Junli Cao, Guocheng Qian, Hsin-Ying Lee, and Sergey Tulyakov. Diffusion priors for dynamic view synthesis from monocular videos. arXiv preprint arXiv:2401.05583, 2024. 2 +[43] Qianqian Wang, Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Aleksander Holynski, and Noah Snavely. Tracking everything everywhere all at once. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19795-19806, 2023. 3 +[44] Qianqian Wang, Vickie Ye, Hang Gao, Jake Austin, Zhengqi Li, and Angjoo Kanazawa. Shape of motion: 4d reconstruction from a single video. ArXiv, abs/2407.13764, 2024. 1, 2, 3, 4, 7, 8 +[45] Shizun Wang, Xingyi Yang, Qiuhong Shen, Zhenxiang Jiang, and Xinchao Wang. Gflow: Recovering 4d world from monocular video. arXiv preprint arXiv:2405.18426, 2024.2, 3 +[46] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20310-20320, 2024. 1, 2, 7, 8 +[47] Yuxi Xiao, Qianqian Wang, Shangzhan Zhang, Nan Xue, Sida Peng, Yujun Shen, and Xiaowei Zhou. Spatialtracker: Tracking any 2d pixels in 3d space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20406-20417, 2024. 3 +[48] Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, and Feng Zheng. Track anything: Segment anything meets videos. arXiv preprint arXiv:2304.11968, 2023. 6 +[49] Zeyu Yang, Hongye Yang, Zijie Pan, and Li Zhang. Real-time photorealistic dynamic scene representation and rendering with 4d gaussian splatting. arXiv preprint arXiv:2310.10642, 2023. 2 + +[50] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. 
Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20331-20341, 2024. 1, 2 +[51] Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, and Jan Kautz. Novel view synthesis of dynamic scenes with globally coherent depths from a monocular camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5336-5345, 2020. 7, 3 +[52] Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jampani, Trevor Darrell, Forrester Cole, Deqing Sun, and Ming-Hsuan Yang. Monst3r: A simple approach for estimating geometry in the presence of motion. arXiv preprint arXiv:2410.03825, 2024. 2, 3 +[53] Yang Zheng, Adam W Harley, Bokui Shen, Gordon Wetzstein, and Leonidas J Guibas. Pointodyssey: A large-scale synthetic dataset for long-term point tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19855-19865, 2023. 3 +[54] Xiaoyu Zhou, Zhiwei Lin, Xiaojun Shan, Yongtao Wang, Deqing Sun, and Ming-Hsuan Yang. Drivinggaussian: Composite gaussian splatting for surrounding dynamic autonomous driving scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21634-21643, 2024. 1 +[55] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Ewa volume splatting. In Proceedings Visualization, 2001. VIS '01., pages 29-538, 2001. 
3

# 4DGC: Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video

Qiang Hu $^{1*}$ Zihan Zheng $^{1*}$ Houqiang Zhong $^{2}$ Sihua Fu
$^{1}$ Li Song $^{2}$ Xiaoyun Zhang $^{1\dagger}$ Guangtao Zhai $^{2}$ Yanfeng Wang $^{3\dagger}$

$^{1}$ Cooperative Medianet Innovation Center, Shanghai Jiao Tong University

$^{2}$ School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University $^{3}$ School of Artificial Intelligence, Shanghai Jiao Tong University

![](images/0d9524b337ae66a9dc4aa734fbc4a4804b7619cc4439087ec1be01d2ec7a50ac.jpg)

![](images/056305ab2a3b89ca427a3e976ad7983ad8ea4cff6ee49eac9e660a98d503d35a.jpg)
Figure 1. Left: 4DGC results, showcasing flexible quality levels across various bitrates. Middle: Comparison of visual quality and bitrate with state-of-the-art methods. Right: The RD performance of our approach surpasses that of prior work (e.g. 3DGStream [61], ReRF [64], TeTriRF [69]).

![](images/9bc8fa7051e9422a4c69713083ce92d2ec78eb44f53ae4e12b61b5be36f50157.jpg)

![](images/64ab2018466d9d6f9cb151b87e1d6bf35b6815e6b43848c48810d8f50d5e6abc.jpg)

![](images/f3ada7bd7e1f7ffcf8b7bc0bcc96e093f909d684070a7cb660537f8198b5987b.jpg)

![](images/75ecd4fddbc0eb814578ffd423a9007f75af423732225ffd23d71e09242c37e4.jpg)

![](images/87ed3cd98abe55d7cdf18d9bbc92f67bda08199d5a86883c0416f141b4301b4c.jpg)

![](images/6070c19a07f642d7bbce07ac348b752a518f3295b6dffc6698d13f2a784b949a.jpg)

![](images/214d2653a00702fcc281b0a413119953c5d156e6380259814d26d001c04a8ea3.jpg)

# Abstract

3D Gaussian Splatting (3DGS) has substantial potential for enabling photorealistic Free-Viewpoint Video (FVV) experiences. However, the vast number of Gaussians and their associated attributes poses significant challenges for storage and transmission. Existing methods typically handle dynamic 3DGS representation and compression separately, neglecting motion information and the rate-distortion (RD) trade-off during training, leading to performance degradation and increased model redundancy.
To address this gap, we propose 4DGC, a novel rate-aware 4D Gaussian compression framework that significantly reduces storage size while maintaining superior RD performance for FVV. Specifically, 4DGC introduces a motion-aware dynamic Gaussian representation that utilizes a compact motion grid combined with sparse compensated Gaussians to exploit inter-frame similarities. This representation effectively handles large motions, preserving quality and reducing temporal redundancy. Furthermore, we present an end-to-end compression scheme that employs differentiable quantization and a tiny implicit entropy model to compress the motion grid and compensated Gaussians efficiently. The entire framework is jointly optimized using a rate-distortion trade-off. Extensive experiments demonstrate that 4DGC supports variable bitrates and consistently outperforms existing methods in RD performance across multiple datasets.

# 1. Introduction

Free-Viewpoint Video (FVV) enables immersive real-time navigation of scenes from any perspective, enhancing user engagement with high interactivity and realism. This makes FVV ideal for applications in entertainment, virtual reality, sports broadcasting, and telepresence. However, streaming and rendering high-quality FVV remains challenging, particularly for sequences with large motions, complex backgrounds, and extended durations. The primary difficulty lies in developing an efficient representation and compression method for FVV that supports streaming with limited bitrate while maintaining high fidelity.

Traditional approaches to FVV reconstruction have primarily relied on point cloud-based methods [22] and depth-based techniques [9], which struggle to deliver high rendering quality and realism, especially in complex scenes.
Neural Radiance Fields (NeRF) and its variants [11, 18, 20, 26, 30, 45, 54] have demonstrated impressive results in reconstructing FVV by learning continuous 3D scene representations, yet they face limitations in supporting long sequences and streaming. Recent approaches [64, 65, 69, 76, 77] address these issues by compressing explicit features of dynamic NeRF. For example, TeTriRF [69] employs a hybrid representation with tri-planes to model dynamic scenes and applies a traditional video codec to further reduce redundancy. However, these methods often suffer from slow training and rendering speeds.

Recently, 3D Gaussian Splatting (3DGS) [29] has demonstrated exceptional rendering speed and quality compared to NeRF-based approaches for static scenes. Several methods [35, 68] have attempted to extend 3DGS to dynamic settings by incorporating temporal correspondence or time-dependency, but these approaches require loading all frames into memory for training and rendering, limiting their practicality for streaming applications. 3DGStream [61] models the inter-frame transformation and rotation of 3D Gaussians as a neural transformation cache, which reduces per-frame storage requirements for FVV. However, the overall data volume remains substantial, hindering its ability to support efficient FVV transmission. Although a few studies [27, 28] have explored the compression of dynamic 3DGS, these methods struggle heavily with real-world dynamic scenes containing backgrounds, which limits their practical effectiveness. Additionally, they optimize representation and compression independently, overlooking the rate-distortion (RD) trade-off during training, which ultimately restricts compression efficiency.

In this paper, we propose 4DGC, a novel rate-aware compression method tailored for Gaussian-based FVV.
Our key idea is to explicitly model motion between adjacent frames, estimate the bitrate of the 4D Gaussian representation during training, and incorporate rate and distortion terms into the loss function to enable end-to-end optimization. This allows us to achieve a compact and compression-friendly representation with optimal RD performance. We realize this through two main innovations. First, we introduce a motion-aware dynamic Gaussian representation for inter-frame modeling within long sequences. We train the 3DGS on the first frame (keyframe) to obtain the initial reference Gaussians. For each subsequent frame, 4DGC utilizes a compact multi-resolution motion grid to estimate the rigid motion of each Gaussian from the previous frame to the current one. Additionally, compensated Gaussians are sparsely added to account for newly observed regions or objects, further enhancing the accuracy of the representation. By leveraging inter-frame similarities, 4DGC effectively reduces temporal redundancy and ensures that the representation remains compact while maintaining high visual fidelity.

Second, we propose a unified end-to-end compression scheme that efficiently encodes the initial Gaussian spherical harmonics (SH) coefficients, the motion grid, and the compensated Gaussian SH coefficients. This compression approach incorporates differentiable quantization to facilitate gradient back-propagation and a tiny implicit entropy model for accurate bitrate estimation. By optimizing the entire scheme through a rate-distortion trade-off during training, we significantly enhance compression performance. Experimental results show that our 4DGC supports variable bitrates and achieves state-of-the-art RD performance across various datasets. Compared to the SOTA method 3DGStream [61], our approach achieves approximately a $16\times$ compression rate without quality degradation, as illustrated in Fig. 1.
In summary, our key contributions include:

- We present a compact and compression-friendly 4D Gaussian representation for streamable FVV that effectively captures dynamic motion and compensates for newly emerging objects, minimizing temporal redundancy and enhancing reconstruction quality.
- We introduce an end-to-end 4D Gaussian compression framework that jointly optimizes representation and entropy models using a rate-distortion loss function, ensuring a low-entropy 4D Gaussian representation and significantly enhancing RD performance.
- Extensive experimental results on real-world datasets demonstrate that our 4DGC achieves superior reconstruction quality, bitrate efficiency, training time, and rendering speed compared to existing state-of-the-art dynamic scene compression methods.

# 2. Related Work

Dynamic Modeling with NeRF. Building on NeRF's success in static scene synthesis [5-7, 13, 43, 45, 47, 50, 55, 56], several works have extended these methods to dynamic scenes. Flow-based methods [33, 34] construct 3D features from monocular videos with impressive results, at the cost of requiring extra priors such as depth and motion for complex scenes. Deformation field methods [16, 30, 49, 54, 60] warp frames to a canonical space to capture temporal features but suffer from slow training and rendering speeds. To accelerate training and rendering, some methods [11, 18, 20, 26, 32, 51, 58, 62, 63] extend the radiance field into four dimensions using grid representations, plane-based representations, or tensor factorization. However, these methods typically suffer from poor storage efficiency and are not suitable for streaming.

Dynamic Modeling with 3DGS. Recent advancements in 3DGS [29] and its variants [12, 19, 21, 24, 25] have achieved photorealistic static scene rendering with high efficiency. However, for dynamic scenes, the per-frame 3DGS approach neglects temporal consistency, causing visual artifacts and model size growth.
Some methods [23, 35, 68, 70, 73, 74] model Gaussian attributes over time to represent + +![](images/2f37644948b0c5340193d3b0cff3389b4f353115d3838e3addc3b73c822e5ac7.jpg) +Figure 2. Illustration of the 4DGC Framework. The reconstructed Gaussians from the previous frame, $\hat{\mathbf{G}}_{t-1}$ , are retrieved from the reference buffer and combined with the input images of the current frame to facilitate learning of the motion grid $\mathbf{M}_t$ and the compensated Gaussians $\Delta \mathbf{G}_t$ through a two-stage training process. In the first stage, the motion grid and its associated entropy model are optimized. In the second stage, the compensated Gaussians are refined along with their corresponding entropy model. Both stages are supervised by a rate-distortion trade-off, employing simulated quantization and an entropy model to jointly optimize representation and compression. + +dynamic scenes as a unified model, improving quality but requiring the simultaneous loading of all data, which limits practical use in long-sequence streaming. Other methods [23, 40, 61] track Gaussian motion frame by frame, which is suitable for streaming, but the large size of each frame hinders transmission efficiency. In contrast, our approach employs a compact multi-resolution motion grid combined with per-frame Gaussian compensation, reducing temporal redundancy and enhancing reconstruction quality. + +Dynamic Scene Compression. Recent advances in deep learning-based image and video compression [1, 3, 4, 14, 31, 36, 38, 41, 42, 44, 59, 71, 72, 75] have demonstrated strong RD performance. In FVV compression, current approaches [15, 53, 57, 60, 64, 65, 69, 76, 77] primarily focus on compressing dynamic NeRF features to improve storage and transmission efficiency. Techniques like ReRF [64] and TeTriRF [69] apply traditional image/video encoding methods to dynamic scenes without end-to-end optimization, sacrificing dynamic detail and compression efficiency. 
Some approaches [76, 77] achieve end-to-end optimization but struggle with scalability in open scenes and slow rendering. For 3DGS-based methods, most [17, 39, 48, 66] focus on static scene compression, while dynamic scene techniques [27, 28] remain limited and typically support only background-free scenarios without comprehensive optimization. Our method achieves both high RD performance and fast decoding and rendering times in real-world scenarios thanks to our proposed motion-aware dynamic Gaussian representation and end-to-end joint compression.

# 3. Method

In this section, we introduce the details of the 4DGC framework. Fig. 2 illustrates the overall architecture of 4DGC. Our approach begins with a motion-aware dynamic Gaussian representation, composed of a compact motion grid and sparse compensated Gaussians (Sec. 3.1). Subsequently, we describe a two-stage method combining motion estimation and Gaussian compensation to generate this representation, effectively capturing both spatial and temporal variations (Sec. 3.2). Finally, we introduce an end-to-end compression scheme that jointly optimizes representation and entropy models, ensuring a low-entropy representation and greatly improving RD performance (Sec. 3.3).

# 3.1. Motion-aware Dynamic Gaussian Modeling

Recall that 3DGS represents scenes using a collection $\mathbf{G}$ of Gaussian primitives as an explicit representation similar to point clouds. Each Gaussian $\mathcal{G} \in \mathbf{G}$ is defined by a set of optimizable parameters $\{\mu; \mathbf{R}; \mathbf{f}; \mathbf{s}; \alpha\}$, where $\mu$ is the center location, $\mathbf{R}$ is the rotation matrix, $\mathbf{f}$ represents SH coefficients for view-dependent color $\mathbf{c}$, $\mathbf{s}$ is the scaling vector, and $\alpha$ is the opacity.
For a point $\mathbf{x}$ located within a Gaussian primitive, the spatial distribution is determined by $\mathcal{G}(\mathbf{x})$,

$$
\mathcal{G}(\mathbf{x}) = \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})\right) \tag{1}
$$

where $\boldsymbol{\Sigma} = \mathbf{R}\mathbf{S}\mathbf{S}^{T}\mathbf{R}^{T}$ with $\mathbf{S} = \operatorname{diag}(\mathbf{s})$. The rendering color $\mathbf{c}$ of a pixel is computed by alpha-blending the $N$ Gaussians overlapping the pixel in depth order:

$$
\mathbf{c} = \sum_{i \in N} \mathbf{c}_{i}\alpha_{i}^{\prime}\prod_{j=1}^{i-1}\left(1 - \alpha_{j}^{\prime}\right) \tag{2}
$$

where $\alpha_{i}^{\prime}$ is the opacity of the $i$-th Gaussian projected onto the image plane, and $\mathbf{c}_i$ is the color of the $i$-th Gaussian in the viewing direction.

![](images/a8e1c5b782277f804194864c99a12ad570f5f356cb1afa8f5ad8687715f91d4d.jpg)
Figure 3. Illustration of our motion-aware dynamic Gaussian modeling that utilizes a multi-resolution motion grid $\mathbf{M}_t$ with sparse compensated Gaussians $\Delta \mathbf{G}_t$ to exploit inter-frame similarities.

![](images/84388e86b7f979585be9411f3ce9a354f1a12fb4de11d5c6a1e82c68d4c41d17.jpg)

![](images/885cab7fa279c7a4e1eb983f39dca6d5aeca22e15c8ee1764e1270d06cd6c74d.jpg)

When extending the representation from static to dynamic scenes, a straightforward approach is stacking framewise static Gaussians to form a dynamic sequence. However, this method neglects temporal coherence, resulting in significant temporal redundancy, particularly in scenes with dense Gaussian primitives. Alternative methods [35, 68] extend Gaussians to 4D space for modeling the entire dynamic scene. Such approaches suffer from performance degradation in long sequences and are not suitable for streaming applications.
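The per-pixel blending rule of Eq. (2) can be sketched as follows (a minimal NumPy sketch, not the paper's implementation; the array layout is an assumption):

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Alpha-blend depth-sorted Gaussian contributions for one pixel (Eq. 2).

    colors: (N, 3) view-dependent colors c_i, ordered front to back.
    alphas: (N,) projected opacities alpha'_i in [0, 1].
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # running prod_{j<i} (1 - alpha'_j)
    for c_i, a_i in zip(colors, alphas):
        pixel += a_i * transmittance * c_i
        transmittance *= 1.0 - a_i
    return pixel

# Two Gaussians: the front one absorbs half the light before the second is seen.
px = composite_pixel(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                     np.array([0.5, 0.5]))
print(px.tolist())  # → [0.5, 0.25, 0.0]
```

The front-to-back loop makes the depth-ordering requirement explicit: once transmittance reaches zero, later Gaussians contribute nothing.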
To overcome these limitations, we propose a motion-aware dynamic Gaussian representation, which explicitly models and tracks motion between adjacent frames to maintain spatial and temporal coherence.

Our modeling approach employs a complete 3DGS representation as the initial Gaussians $\mathbf{G}_1$ for the first frame (keyframe). For each subsequent frame, we utilize a multi-resolution motion grid $\mathbf{M}_t$ with two shared global lightweight MLPs, $\Phi_{\mu}$ and $\Phi_{\mathbf{R}}$, to estimate the rigid motion of each Gaussian from the previous frame to the current one. This grid captures the multi-scale nature of motion, enabling precise modeling even for objects that move at varying speeds or directions. However, rigid transformation alone is insufficient for accurately representing newly emerging regions. To address this, we dynamically add sparse compensated Gaussians $\Delta \mathbf{G}_t$ to account for newly observed regions in the current frame. Finally, our 4DGC sequentially represents the dynamic scene with $N$ frames as $\mathbf{G}_1$, $\{\mathbf{M}_t,\Delta \mathbf{G}_t\}_{t = 2}^N$, $\Phi_{\mu}$, and $\Phi_{\mathbf{R}}$, as shown in Fig. 3. A major benefit of this design is that 4DGC fully exploits inter-frame similarities, reducing temporal redundancy and enhancing reconstruction quality.

# 3.2. Sequential Representation Generation

Here, we introduce our two-stage scheme, which combines motion estimation and Gaussian compensation to generate a dynamic Gaussian representation that effectively captures both spatial and temporal variations in the scene. This process begins with motion estimation, which tracks and models frame-by-frame transformations in both translation and rotation, reducing inter-frame redundancy. To address scenarios where newly emerging objects or complex motion cannot be fully captured through estimation alone, a Gaussian compensation step is applied to refine representation quality in suboptimal areas.
Together, these stages form a flexible and high-fidelity representation that improves compression efficiency and supports high-quality rendering for streamable applications. We detail each stage below. + +Motion Estimation. For each inter-frame, we load the reconstructed Gaussians of the previous frame $\hat{\mathbf{G}}_{t - 1}$ from the reference buffer, providing a stable reference for tracking transformations in the current frame. By combining these reference Gaussians with the input images of the current frame, we employ a motion grid, $\mathbf{M}_t$ , along with two shared lightweight MLPs, $\Phi_{\mu}$ and $\Phi_{\mathbf{R}}$ , to predict the translation $(\Delta \pmb{\mu}_t)$ and rotation $(\Delta \mathbf{R}_t)$ for each Gaussian. Specifically, to achieve accurate motion estimation, we use a multi-resolution motion grid $\mathbf{M}_t = \{\mathbf{M}_t^l\}_{l = 1}^L$ where $L$ denotes the number of resolution levels, to capture complex motions across various scales. For a Gaussian primitive $\pmb {\mathcal{G}}\in \hat{\mathbf{G}}_{t - 1}$ in the previous reconstructed frame, its center location $\pmb{\mu}_{t - 1}$ is mapped to multiple frequency bands $\mathbf{P}_{t - 1}$ via positional encoding: + +$$ +\mathbf {P} _ {t - 1} = \left\{\mathbf {P} _ {t - 1} ^ {l} \right\} _ {l = 1} ^ {L} = \left\{\sin \left(2 ^ {l} \pi \boldsymbol {\mu} _ {t - 1}\right), \cos \left(2 ^ {l} \pi \boldsymbol {\mu} _ {t - 1}\right) \right\} _ {l = 1} ^ {L} \tag {3} +$$ + +At each level, we use the mapped position to perform trilinear interpolation on $\mathbf{M}_t$ , producing motion features across different scales. These multi-scale features are then concatenated and fed into two lightweight MLPs, $\Phi_{\mu}$ and $\Phi_{\mathbf{R}}$ to compute translation $\Delta \mu_t$ and rotation $\Delta \mathbf{R}_t$ for each Gaussian. 
Thus, our motion estimation is formalized as follows:

$$
\Delta \boldsymbol{\mu}_{t} = \Phi_{\boldsymbol{\mu}}\left(\bigcup_{l=1}^{L}\operatorname{interp}\left(\mathbf{P}_{t-1}^{l}, \mathbf{M}_{t}^{l}\right)\right) \tag{4}
$$

$$
\Delta \mathbf{R}_{t} = \Phi_{\mathbf{R}}\left(\bigcup_{l=1}^{L}\operatorname{interp}\left(\mathbf{P}_{t-1}^{l}, \mathbf{M}_{t}^{l}\right)\right)
$$

where $\operatorname{interp}(\cdot)$ represents the grid interpolation operation.

With the translation $\Delta \boldsymbol{\mu}_t$ and rotation $\Delta \mathbf{R}_t$, transformations are applied to each Gaussian in $\hat{\mathbf{G}}_{t - 1}$, achieving smooth alignment from the previous to the current frame:

$$
\begin{array}{l} \mathbf{G}_{t}^{\prime} = \hat{\mathbf{G}}_{t-1}(\boldsymbol{\mathcal{G}} \oplus \mathbf{M}_{t}(\boldsymbol{\mathcal{G}})) \tag{5} \\ = \left\{\boldsymbol{\mathcal{G}}\left(\boldsymbol{\mu}_{t-1} + \Delta \boldsymbol{\mu}_{t}; \Delta \mathbf{R}_{t}\mathbf{R}_{t-1}; C\right) \mid \boldsymbol{\mathcal{G}} \in \hat{\mathbf{G}}_{t-1}\right\} \end{array}
$$

where $\mathbf{G}_t^{\prime}$ denotes the transformed Gaussians for the current frame, $C$ represents the fixed parameters including $\mathbf{f}_{t-1}, \mathbf{s}_{t-1}$, and $\alpha_{t-1}$, and $\oplus$ denotes the operation of updating position $\boldsymbol{\mu}$ and rotation $\mathbf{R}$ for each Gaussian $\boldsymbol{\mathcal{G}}$ in $\hat{\mathbf{G}}_{t-1}$ according to $\mathbf{M}_t$. This hierarchical approach achieves precise motion prediction across multiple scales, capturing essential transformations and effectively reducing inter-frame redundancy.

Gaussian Compensation.
Although motion estimation effectively captures the dynamics of previously observed objects within a scene, we found that relying solely on motion estimation is insufficient for achieving high-quality detail, particularly in cases involving newly emerging objects and subtle motion transformations. To address this limitation, we adopt a Gaussian compensation strategy, refining representation quality by integrating sparse compensated Gaussians $\Delta \mathbf{G}_t$ into $\mathbf{G}_t^{\prime}$ in suboptimal regions.

We first identify the suboptimal areas requiring compensation. These regions are classified into two primary types: (1) regions with significant gradient changes, typically corresponding to newly appearing objects or scene edges, and (2) larger Gaussian primitives undergoing rapid transformations, leading to multiview perspective differences. For the first type, we apply gradient thresholding, cloning the Gaussian primitives at locations where the gradient exceeds a predefined threshold $\tau_{g}$ as $\Delta \mathbf{G}_t^g$, ensuring accurate representation of newly observed elements. For the second type, involving larger Gaussians impacted by rapid transformations, we clone two additional Gaussian primitives from the original Gaussian when its motion parameters exceed specified thresholds: $\tau_{\mu}$ for translation $|\Delta \boldsymbol{\mu}_t|$ and $\tau_{R}$ for rotation $|\Delta \mathbf{R}_t|$. The scale of the two cloned Gaussians is reduced to $\frac{\mathbf{s}}{100}$ to capture detailed motion dynamics more precisely.

These newly compensated Gaussians are sampled from $\mathcal{N}(\boldsymbol{\mu}, 2\boldsymbol{\Sigma})$ around the original Gaussian and optimized in the second training stage. This Gaussian splitting process yields a fine-grained and adaptive representation, capturing complex motion patterns and enhancing continuity across frames.
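To make the two stages concrete, the sketch below queries a multi-resolution grid by trilinear interpolation and predicts per-Gaussian motion in the spirit of Eqs. (3)-(5), then applies the two compensation criteria above. The positional encoding is omitted, random linear heads stand in for $\Phi_{\mu}$ and $\Phi_{\mathbf{R}}$, and all sizes and thresholds are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def trilinear(grid, p):
    """Trilinear interpolation of an (R, R, R, F) feature grid at p in [0, 1]^3."""
    R = grid.shape[0]
    x = p * (R - 1)
    i0 = np.clip(np.floor(x).astype(int), 0, R - 2)
    w = x - i0
    out = np.zeros(grid.shape[-1])
    for corner in np.ndindex(2, 2, 2):
        weight = np.prod([w[k] if corner[k] else 1 - w[k] for k in range(3)])
        out += weight * grid[i0[0] + corner[0], i0[1] + corner[1], i0[2] + corner[2]]
    return out

# Stage 1: one feature grid per resolution level stands in for M_t^l.
levels = [rng.normal(size=(R, R, R, 4)) for R in (4, 8, 16)]
W_mu = rng.normal(size=(12, 3)) * 0.01   # stand-in for Phi_mu (translation head)
W_rot = rng.normal(size=(12, 4)) * 0.01  # stand-in for Phi_R (quaternion head)

def estimate_motion(mu_prev):
    """Concatenate per-level features at a Gaussian center and predict its motion."""
    feats = np.concatenate([trilinear(g, mu_prev) for g in levels])
    return feats @ W_mu, feats @ W_rot

# Stage 2: pick Gaussians to clone, following the two criteria of Sec. 3.2.
def needs_compensation(grad_norm, d_mu, d_rot_mag,
                       tau_g=2e-4, tau_mu=0.01, tau_r=0.05):
    by_gradient = grad_norm > tau_g                       # new objects / scene edges
    by_motion = (np.linalg.norm(d_mu, axis=1) > tau_mu) | (d_rot_mag > tau_r)
    return by_gradient | by_motion
```

Selected Gaussians would then be cloned and sampled around the originals with their scale shrunk (the paper uses $\mathbf{s}/100$) before the second optimization stage.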
Overall, this compensation strategy significantly improves detail accuracy, reduces artifacts, and ensures high-quality reconstruction of dynamic scenes.

# 3.3. End-to-end Joint Compression

We also propose an end-to-end 4D Gaussian compression framework that jointly optimizes representation and entropy models through a two-stage training process. To enable gradient back-propagation, we employ differentiable quantization, along with a compact implicit entropy model for accurate bitrate estimation of the motion grid $\mathbf{M}_t$ and compensated Gaussians $\Delta \mathbf{G}_t$. The first stage focuses on optimizing the motion grid alongside its associated entropy model, while the second stage refines the compensated Gaussians with their corresponding entropy model. Each stage is guided by a rate-distortion trade-off, ensuring a low-entropy 4D Gaussian representation and substantially improving RD performance.

Simulated Quantization & Rate Estimation. Quantization and entropy coding effectively reduce the bitrate during compression at the expense of some information loss. However, the rounding operation in quantization prevents gradient propagation, which is incompatible with end-to-end training. To address this, we implement a differentiable quantization strategy using simulated quantization noise. Specifically, uniform noise $u \sim U\left(-\frac{1}{2q}, \frac{1}{2q}\right)$ is added to simulate quantization effects with step size $q$, enabling robust training while preserving gradient flow. For rate estimation, we use a tiny and trainable implicit entropy model [8] to approximate the probability mass function (PMF) of the quantized values, $\hat{y}$. Unlike the learned entropy model in image compression [46], which is learned from large-scale training datasets, our implicit entropy model is learned on-the-fly with the corresponding 4D Gaussian representation during training.
The PMF is derived using the cumulative distribution function (CDF) as follows:

$$
P_{PMF}(\hat{y}) = P_{CDF}\left(\hat{y} + \frac{1}{2}\right) - P_{CDF}\left(\hat{y} - \frac{1}{2}\right). \tag{6}
$$

Incorporating this rate estimation into the loss function enables the network to learn feature distributions with inherently lower entropy. This effectively imposes a bitrate constraint during training while ensuring the compatibility of gradient-based optimization, balancing compression efficiency and model accuracy.

Stage 1: Motion Grid Compression. In the first training stage, we jointly optimize $\Phi_{\mu}, \Phi_{\mathbf{R}}$, the multi-resolution motion grid $\mathbf{M}_t$ and its corresponding entropy model. This process enhances motion prediction accuracy while encouraging low-entropy characteristics in $\mathbf{M}_t$. Specifically, we apply simulated quantization to discretize the motion grid, ensuring compatibility with entropy encoding. The entropy model then assigns probabilities to each quantized element based on a learned probability mass function to estimate the bitrate of the motion grid more effectively. The loss function $\mathcal{L}_{s1}$ of this stage comprises a photometric term $\mathcal{L}_{color}$ and a rate term $\mathcal{L}_{rate}$:

$$
\mathcal{L}_{s1} = \mathcal{L}_{\text{color}} + \lambda_{1}\mathcal{L}_{\text{rate}}^{ME} \tag{7}
$$

$$
\mathcal{L}_{\text{color}} = \left(1 - \lambda_{2}\right)\|\mathbf{c}_{g} - \hat{\mathbf{c}}\|_{1} + \lambda_{2}\mathcal{L}_{SSIM} \tag{8}
$$

$$
\mathcal{L}_{\text{rate}}^{ME} = -\frac{1}{N}\sum_{\hat{y} \in \hat{\mathbf{M}}_{t}}\log_{2}\left(P_{PMF}^{1}(\hat{y})\right) \tag{9}
$$

where $\mathcal{L}_{rate}^{ME}$ represents the estimated rate derived from $\hat{\mathbf{M}}_t$.
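A minimal sketch of the simulated quantization and PMF-based rate estimate of Eqs. (6) and (9); the logistic CDF and the default step size are placeholder assumptions standing in for the paper's learned implicit entropy model:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(y, q=1.0, training=True):
    """Differentiable surrogate: additive uniform noise in training, rounding at test.

    The noise interval follows the U(-1/(2q), 1/(2q)) form stated in the paper;
    with q = 1 this matches the usual half-step rounding error.
    """
    if training:
        return y + rng.uniform(-1 / (2 * q), 1 / (2 * q), size=y.shape)
    return np.round(y / q) * q

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rate_bits(y_hat, cdf=lambda v: sigmoid(v / 2.0)):
    """Mean bits per symbol: PMF(y) = CDF(y + 1/2) - CDF(y - 1/2), then -mean(log2)."""
    pmf = cdf(y_hat + 0.5) - cdf(y_hat - 0.5)
    return -np.mean(np.log2(pmf))
```

During training this rate term is weighted by $\lambda_1$ against the photometric loss; symbols that fall in the entropy model's tails receive a low probability and therefore a high estimated bitrate, which pushes the representation toward low entropy.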
$\mathbf{c}_g$ and $\hat{\mathbf{c}}$ refer to the ground truth and reconstructed colors, respectively. $\mathcal{L}_{SSIM}$ is the D-SSIM [37] metric between ground truth and the result rendered by 4DGC, and $\lambda_{2}$ serves as a weight parameter. The parameter $\lambda_{1}$ balances the trade-off between rate and distortion, thus controlling model size and reconstruction quality.

![](images/9800fc5fde000cd761261a3c8f6b6e53154fec206fd73ff1c725a1625471793f.jpg)
Figure 4. Rate-distortion curves across different datasets. Rate-distortion curves not only illustrate the superiority of our method over ReRF [64], TeTriRF [69], and 3DGStream [61], but also demonstrate the efficiency of various components within our method.

![](images/f91cf920c924c0f8d0322e58c189274512aa56178fba619e0370ae6607e0e7d8.jpg)

![](images/70cfd117320669d46780d281c7c4646e0ee531a341d5bc08deddc9d5c7d819b5.jpg)

Stage 2: Compensated Gaussians Compression. In the second training stage, we focus on optimizing the compensated Gaussians $\Delta \mathbf{G}_t$ and their entropy model to enhance detail capture and compression efficiency. While attributes like position and rotation are crucial for rendering, they require little storage. Thus, the main emphasis is on compressing the SH coefficients, which account for the largest storage cost.

Leveraging the trained motion grid $\mathbf{M}_t$ from Stage 1, we transform Gaussian primitives from the previous frame to the current frame while preserving fixed attributes like position, rotation, and scale. To enhance representation fidelity, we augment these transformed Gaussians with compensated Gaussians $\Delta \mathbf{G}_t$. The SH coefficients of $\Delta \mathbf{G}_t$ undergo simulated quantization and are processed by an implicit entropy model for accurate rate estimation. The loss function $\mathcal{L}_{s2}$ optimizes this compression process.
$$
\mathcal{L}_{s2} = \mathcal{L}_{\text{color}} + \lambda_{1}\mathcal{L}_{\text{rate}}^{MC} \tag{10}
$$

$$
\mathcal{L}_{\text{rate}}^{MC} = -\frac{1}{M}\sum_{\hat{y} \in \hat{\mathbf{f}}_{t}^{C}}\log_{2}\left(P_{PMF}^{2}(\hat{y})\right) \tag{11}
$$

where $\hat{\mathbf{f}}_t^C$ represents the quantized SH coefficients of the compensated Gaussians. This strategy is similarly applied to the initial Gaussians $\mathbf{G}_1$ in the keyframe. The joint optimization approach for representation and compression results in a compact and high-quality 4D Gaussian representation, facilitating efficient storage and transmission for FVV applications.

Once the training of the current frame is finished, we reconstruct the complete Gaussian representation for the current frame, $\hat{\mathbf{G}}_t$, as follows:

$$
\hat{\mathbf{G}}_{t} = \hat{\mathbf{G}}_{t-1}(\boldsymbol{\mathcal{G}} \oplus \hat{\mathbf{M}}_{t}(\boldsymbol{\mathcal{G}})) + \Delta \hat{\mathbf{G}}_{t} \tag{12}
$$

where $\hat{\mathbf{M}}_t$ and $\Delta \hat{\mathbf{G}}_t$ represent the reconstructed motion grid and compensated Gaussians, respectively. Finally, $\hat{\mathbf{G}}_t$ is stored in the reference buffer to facilitate the reconstruction of the next frame.

# 4. Experiments

# 4.1. Configurations

Datasets. We validate the effectiveness of 4DGC using three real-world datasets: N3DV dataset [32], MeetRoom

Table 1. Quantitative comparison on the N3DV [32] dataset. The PSNR, SSIM, size, and rendering speed are averaged over the whole 300 frames for each scene.
| Method | PSNR↑ (dB) | SSIM↑ | Size↓ (MB) | Render↑ (FPS) | Streamable / Variable-bitrate |
| --- | --- | --- | --- | --- | --- |
| K-Planes [20] | 29.91 | 0.920 | 1.0 | 0.15 | X/X |
| HyperReel [2] | 31.10 | 0.938 | 1.2 | 2.0 | X/X |
| MixVoxels [62] | 30.80 | 0.931 | 1.7 | 16.7 | X/X |
| NeRFPlayer [60] | 30.69 | 0.931 | 17.7 | 0.05 | ✓/X |
| StreamRF [30] | 30.61 | 0.930 | 7.6 | 8.3 | ✓/X |
| ReRF [64] | 29.71 | 0.918 | 0.77 | 2.0 | ✓/✓ |
| TeTriRF [69] | 30.65 | 0.931 | 0.76 | 2.7 | ✓/✓ |
| D-3DG [40] | 30.67 | 0.931 | 9.2 | 460 | ✓/X |
| 3DGStream [61] | 31.54 | 0.942 | 8.1 | 215 | ✓/X |
| Ours | 31.58 | 0.943 | 0.5 | 168 | ✓/✓ |
+ +dataset [30], and Google Immersive dataset [10]. Each dataset reserves 1 camera view for testing, with the remaining views used for training. + +Implementation. Our experimental setup includes an Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz and an RTX 3090 graphics card. We set the resolution levels of the multi-resolution motion grid to $L = 3$ . During training, the initial settings are as follows: $\lambda_{1}$ is set to 0.0003, 0.0001, 0.00005, 0.00001 to achieve different bitrates, while $\lambda_{2}$ is set to 0.2. The number of iterations for motion estimation and Gaussian compensation is set to 400 and 100, respectively. + +Metrics. To evaluate the compression performance of our method on the experimental datasets, we use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) [67] as quality metrics, along with bitrate measured in MB per frame. For comprehensive RD performance analysis, we apply Bjontegaard Delta Bit-Rate (BDBR) and Bjontegaard Delta PSNR (BD-PSNR) [52]. Rendering efficiency is assessed by measuring the frames rendered per second (FPS). + +# 4.2. Comparison + +Quantitative comparisons. To validate the effectiveness of our method, we compare it against several state-of-the-art approaches including K-Planes [20], HyperReel [2], MixVoxels [62], NeRFPlayer [60], StreamRF [30], ReRF [64], TeTriRF [69], D-3DG [40] and 3DGStream [61]. Tab. 1 shows the detailed quantitative results on the N3DV + +![](images/8a0fcb4149c917f0d7e32355521c8a02fe27a56a61469d87a295302d52653741.jpg) + +![](images/3b95e5d9e0578d198a588e05d862b4d5dd9f4b5ae59a95874d844d6c48836ec5.jpg) +GT + +![](images/57789482758d667421318ec2d7e5b73506280008824048068eeec48ef09aecf8.jpg) + +![](images/8f7d631b4e87fa3fd5e73d460c7ccc40f48efd5f07bf7de63b19e452bfdac890.jpg) +ReRF + +![](images/122b8c65090477a52461a5234991c6c90a6c74383abdf88d47e6a564e90b5504.jpg) + +![](images/4e5818bd1b005c3af74402815f87a6911c6ba3f593e6de7caa3d15cb64fe1811.jpg) +TeTriRF +Figure 5. 
Qualitative comparison on the N3DV [32] and MeetRoom [30] datasets against ReRF [64], TeTriRF [69], and 3DGStream [61]. + +![](images/bdf6e135f7cb577b61d0f7fafd75727a28f37b3281b75b7714742c8cb5dba21e.jpg) + +![](images/48cc3c7cb9ef12cb2fe10642fffb118d4c809b00e92bcf7b39d625a1a1784bb6.jpg) +3DGStream + +![](images/c37fe6afa7f047ba66fac303212c19f93006bc7e9a457b066ab7c30d44327fcf.jpg) + +![](images/4b25fd15f85b51175f6269ba5b6bfc1c784a48c39425aad5b36f92bd275eaac6.jpg) +Ours + +Table 2. Quantitative comparison on the MeetRoom dataset [30] and Google Immersive dataset [10]. + +
| Dataset | Method | PSNR↑(dB) | SSIM↑ | Size↓(MB) | Render↑(FPS) | Streamable/Variable-bitrate |
| --- | --- | --- | --- | --- | --- | --- |
| MeetRoom dataset [30] | StreamRF [30] | 26.71 | 0.913 | 8.23 | 10 | ✓/✗ |
| | ReRF [64] | 26.43 | 0.911 | 0.63 | 2.9 | ✓/✓ |
| | TeTriRF [69] | 27.37 | 0.917 | 0.61 | 3.8 | ✓/✓ |
| | 3DGStream [61] | 28.03 | 0.921 | 8.21 | 288 | ✓/✗ |
| | Ours | 28.08 | 0.922 | 0.42 | 213 | ✓/✓ |
| Google Immersive dataset [10] | StreamRF [30] | 28.14 | 0.929 | 10.24 | 8.0 | ✓/✗ |
| | ReRF [64] | 27.75 | 0.928 | 0.93 | 1.4 | ✓/✓ |
| | TeTriRF [69] | 28.53 | 0.931 | 0.83 | 2.1 | ✓/✓ |
| | 3DGStream [61] | 29.66 | 0.935 | 10.33 | 199 | ✓/✗ |
| | Ours | 29.71 | 0.935 | 0.61 | 145 | ✓/✓ |
+ +dataset. Our method outperforms the others, achieving the best reconstruction quality at the lowest bitrate. Specifically, 3DGStream [61] requires 8.1 MB to achieve a quality level comparable to our 4DGC, which needs only 0.5 MB. Although TeTriRF [69] achieves a similar bitrate, its reconstruction quality is lower due to the separate optimization of representation and compression. By contrast, our approach jointly optimizes the entire framework through a rate-distortion trade-off, which significantly enhances compression performance. To demonstrate the generality of our method, we conduct experiments on the MeetRoom and Google Immersive datasets, providing a quantitative comparison against StreamRF [30], ReRF [64], TeTriRF [69], and 3DGStream [61], as illustrated in Tab. 2. Our method still outperforms the others in PSNR, SSIM, and bitrate. + +Fig. 4 illustrates the RD curves of our 4DGC compared to ReRF [64], TeTriRF [69], and 3DGStream [61] across various sequences from the three datasets. The RD curves clearly show that our 4DGC achieves the best RD performance across a wide range of bitrates. Furthermore, we calculate the BDBR relative to ReRF [64] and TeTriRF [69], as presented in Tab. 3. On the N3DV dataset, our 4DGC achieves an average BDBR reduction of $68.59\%$ compared to TeTriRF [69]. Similar BDBR savings of $40.71\%$ and $59.99\%$ are observed on the MeetRoom and Google Immersive datasets, respectively. Against ReRF [64], our 4DGC also demonstrates significantly better RD performance. + +Tab. 4 compares the computational complexity of our 4DGC with the state-of-the-art dynamic scene compression methods, ReRF [64] and TeTriRF [69]. Our 4DGC significantly improves computational efficiency, with a training time of $0.83\mathrm{min}$ versus $42.73\mathrm{min}$ for ReRF and $1.04\mathrm{min}$ for TeTriRF. In rendering, 4DGC requires only $0.006\mathrm{s}$, vastly outperforming ReRF (0.502s) and TeTriRF (0.375s). 
For encoding and decoding, 4DGC achieves times of $0.72\mathrm{s}$ and $0.09\mathrm{s}$, respectively, surpassing both ReRF and TeTriRF. These results highlight 4DGC as a more efficient solution for FVV compression. + +Qualitative comparisons. We present a qualitative comparison with ReRF [64], TeTriRF [69], and 3DGStream [61] on the coffee_martini sequence from the N3DV dataset and the trimming sequence from the MeetRoom dataset, as shown in Fig. 5. Our approach achieves reconstruction quality comparable to 3DGStream [61] at a substantially lower bitrate, with a compression ratio exceeding $16\mathrm{x}$. Compared to ReRF [64] and TeTriRF [69], our 4DGC more effectively preserves finer details such as the head, window, bottles, and books in coffee_martini and the face, hand, plant, and scissors in trimming, which are lost in the reconstructions of these two methods. This demonstrates that our 4DGC captures dynamic scene elements accurately and maintains high-quality detail in intricate objects while achieving a highly compact model size. + +# 4.3. Ablation Studies + +We conduct three ablation studies to evaluate the effectiveness of motion estimation, Gaussian compensation, and joint optimization of representation and compression by disabling each component individually during training. + +Table 3. The BDBR and BD-PSNR results of our 4DGC and ReRF [64] when compared with TeTriRF [69] on different datasets. 
| Dataset | Method | BDBR(%)↓ | BD-PSNR(dB)↑ |
| --- | --- | --- | --- |
| N3DV [32] | ReRF | 371.10 | -0.78 |
| | Ours | -68.59 | 1.12 |
| MeetRoom [30] | ReRF | 134.69 | -0.99 |
| | Ours | -40.71 | 0.55 |
| Google Immersive [10] | ReRF | 324.91 | -0.93 |
| | Ours | -59.99 | 1.03 |
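The BDBR and BD-PSNR figures above are standard Bjontegaard metrics [52]: a polynomial is fitted to each method's rate-distortion curve, and the gap between the curves is averaged over the overlapping quality range. A minimal NumPy sketch of the BDBR computation (illustrative only; the paper relies on the reference Excel add-in of [52], and the function and variable names here are our own):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Approximate Bjontegaard Delta Bit-Rate (%) of a test codec vs. an anchor."""
    # Fit cubic polynomials of log-rate as a function of quality (PSNR).
    log_r_a = np.log(np.asarray(rate_anchor, dtype=float))
    log_r_t = np.log(np.asarray(rate_test, dtype=float))
    poly_a = np.polyfit(psnr_anchor, log_r_a, 3)
    poly_t = np.polyfit(psnr_test, log_r_t, 3)
    # Integrate both fits over the overlapping quality interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_t = np.polyval(np.polyint(poly_t), hi) - np.polyval(np.polyint(poly_t), lo)
    # Average log-rate difference -> percentage rate change at equal quality.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

A negative BDBR, such as the -68.59% of Ours versus TeTriRF on N3DV, means the test codec needs that much less bitrate on average to reach the same PSNR.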
+ +Table 4. Complexity comparison of our 4DGC method with dynamic scene compression methods, ReRF [64] and TeTriRF [69]. + +
| Method | Train(min) | Render(s) | Encode(s) | Decode(s) |
| --- | --- | --- | --- | --- |
| ReRF [64] | 42.73 | 0.502 | 3.03 | 0.28 |
| TeTriRF [69] | 1.04 | 0.375 | 0.79 | 0.31 |
| 4DGC | 0.83 | 0.006 | 0.72 | 0.09 |
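For reference, the PSNR values used as the primary quality metric throughout the quantitative comparisons follow the standard definition from mean squared error. A minimal sketch (assuming images normalized to [0, 1]; the helper name is illustrative, not from the paper):

```python
import numpy as np

def psnr(img_a, img_b, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images with values in [0, peak]."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```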
+ +Table 5. The BD-PSNR results of the ablation studies when compared with our full method on different datasets. + +
| | N3DV | MeetRoom | Google Immersive |
| --- | --- | --- | --- |
| w/o Motion Estimation | -1.86 dB | -1.13 dB | -1.62 dB |
| w/o Gaussian Compensation | -0.23 dB | -0.22 dB | -0.09 dB |
| w/o Joint Optimization | -0.28 dB | -0.34 dB | -0.40 dB |
+ +In the first study, we apply motion estimation but exclude Gaussian compensation. In the second, we omit motion estimation, training only the compensated Gaussians for each frame based on the previous frame. In the final study, we separately train the motion-aware representation and entropy models rather than optimizing them jointly. + +The RD curves of the ablation studies are shown in Fig. 4. These curves illustrate that disabling motion estimation, Gaussian compensation, or joint optimization results in reduced RD performance across various bitrates, underscoring the importance of these modules. Additionally, the negative BD-PSNR values observed in the three experiments compared to 4DGC, as shown in Tab. 5, further confirm the effectiveness of our 4DGC in compressing dynamic scenes. + +Fig. 6 illustrates a qualitative comparison of the complete 4DGC at different bitrates against its variants. When motion estimation is absent, details are added directly to the initial frame without tracking object motion, resulting in overlapping artifacts and increased temporal redundancy. The variant without Gaussian compensation struggles to capture newly appearing regions, such as fire eruptions. Moreover, the variant lacking joint optimization disregards the distribution characteristics of both the motion grid and compensated Gaussians, limiting the encoder's ability to achieve low entropy. These findings highlight the effectiveness of our motion estimation, Gaussian compensation, and joint optimization in the 4DGC. + +Furthermore, we analyze the average bit consumption for keyframes and inter-frames under different $\lambda_{1}$ configurations, as shown in Tab. 6. The significantly lower bit consumption for inter-frames demonstrates the effectiveness of our dynamic modeling in reducing inter-frame redundancy and lowering inter-frame bitrates. 
+ +![](images/af40c5e21914ae678c7524584eab708822af4734cfc4655f75ac320de2843450.jpg) +Ours (low) + +![](images/be9e90946ec32330ace08b021363c1b75b89db84cc9f80963c5ba6a9af9ca4e3.jpg) +Ours (medium) + +![](images/1440d9daa01932d8a6a837bfb171b0affe2f3c56c9b2fec80753ee44cadc6dcb.jpg) +Ours (high) + +![](images/8bc2672436da854e06cd440cc28c62c12c5061fab8e99390a16868c50c59979f.jpg) +w/o Motion Estimation +Table 6. Analysis of average bit consumption for keyframe and inter-frame with different ${\lambda }_{1}$ on the N3DV dataset. + +![](images/2645168a6d80bd32a5d73eb66409e6247009c560a2efc9923086cf39e91f1631.jpg) +w/o Gaussian Compensation + +![](images/b3b3a7e9d8b98e7d3f28ca4fe7778c3b92089bdc84de69f5c137883170b39192.jpg) +w/o Joint Optimization +Figure 6. Qualitative results of 4DGC and its variants. Excluding any module leads to lower reconstruction quality and increased bitrate. + +
| | λ1=0.00001 | λ1=0.00005 | λ1=0.0001 | λ1=0.0003 |
| --- | --- | --- | --- | --- |
| keyframe (MB) | 17.3 | 14.4 | 10.6 | 7.3 |
| inter-frame (MB) | 1.21 | 0.78 | 0.48 | 0.23 |
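The trend in Tab. 6 matches standard Lagrangian rate-distortion optimization, in which $\lambda_{1}$ weights an estimated-rate term against reconstruction distortion (a generic sketch of the trade-off, not a reproduction of the paper's exact loss terms):

```latex
\min_{\theta}\; \mathcal{L}(\theta) \;=\; D(\theta) \;+\; \lambda_{1}\, R(\theta)
```

Here $D(\theta)$ is the rendering distortion and $R(\theta)$ the estimated bit cost, so a larger $\lambda_{1}$ penalizes bits more heavily, which is consistent with both keyframe and inter-frame sizes shrinking monotonically from $\lambda_{1}=0.00001$ to $\lambda_{1}=0.0003$.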
+ +# 5. Discussion + +Limitation. As a novel and efficient rate-aware compression framework tailored for 4D Gaussian-based Free-Viewpoint Video, our method has several limitations. First, it relies on the reconstruction quality of the first frame, where poor initialization may degrade overall performance. Second, our method depends on multi-view video input and struggles with sparse-view reconstruction. Finally, the decoding speed is slower compared to rendering, which could be improved using advanced entropy decoding techniques. + +Conclusion. We propose a novel rate-aware compression framework tailored for 4D Gaussian-based Free-Viewpoint Video. Leveraging a motion-aware 4D Gaussian representation, 4DGC effectively captures inter-frame dynamics and spatial details while sequentially reducing temporal redundancy. Our end-to-end compression scheme incorporates an implicit entropy model combined with rate-distortion tradeoff parameters, enabling variable bitrates while jointly optimizing both representation and entropy model for enhanced performance. Experiments show that 4DGC not only achieves superior rate-distortion performance but also adapts to variable bitrates, supporting photorealistic FVV applications with reduced storage and bandwidth requirements in AR/VR contexts. + +# 6. Acknowledgements + +This work was supported by National Natural Science Foundation of China (62271308), STCSM (24ZR1432000, 24511106902, 24511106900, 22511105700, 22DZ2229005), 111 plan (BP0719010) and State Key Laboratory of UHD Video and Audio Production and Presentation. + +# References + +[1] Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc V Gool. Soft-to-hard vector quantization for end-to-end learning compressible representations. Advances in neural information processing systems, 30, 2017. 3 +[2] Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O'Toole, and Changil Kim. 
HyperReel: High-fidelity 6-DoF video with ray-conditioned sampling. In CVPR, 2023. 6 +[3] Johannes Balle, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. In 2016 Picture Coding Symposium (PCS), pages 1-5. IEEE, 2016. 3 +[4] Johannes Balle, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. In ICLR, 2018. 3 +[5] Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. ICCV, 2021. 2 +[6] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. CVPR, 2022. +[7] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Zip-nerf: Anti-aliased grid-based neural radiance fields. ICCV, 2023. 2 +[8] Jean Bégaint, Fabien Racapé, Simon Feltman, and Akshay Pushparaja. Compressai: a pytorch library and evaluation platform for end-to-end compression research. arXiv preprint arXiv:2011.03029, 2020. 5 +[9] Jill M Boyce, Renaud Dore, Adrian Dziembowski, Julien Fleureau, Joel Jung, Bart Kroon, Basel Salahieh, Vinod Kumar Malamal Vadakital, and Lu Yu. Mpeg immersive video coding standard. Proceedings of the IEEE, 109(9):1521-1536, 2021. 1 +[10] Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, and Paul Debevec. Immersive light field video with a layered mesh representation. 39(4):86:1-86:15, 2020. 6, 7, 8 +[11] Ang Cao and Justin Johnson. Hexplane: A fast representation for dynamic scenes. In CVPR, pages 130-141, 2023. 2 +[12] David Charatan, Sizhe Li, Andrea Tagliasacchi, and Vincent Sitzmann. pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. In arXiv, 2023. 
2 +[13] Yu Chen and Gim Hee Lee. Dbarf: Deep bundle-adjusting generalizable neural radiance fields. In CVPR, pages 24-34, 2023. 2 +[14] Zhibo Chen, Tianyu He, Xin Jin, and Feng Wu. Learning for video compression. IEEE Transactions on Circuits and Systems for Video Technology, 30(2):566-576, 2020. 3 +[15] Chenxi Lola Deng and Enzo Tartaglione. Compressing explicit voxel grid representations: fast nerfs become also small. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1236-1245, 2023. 3 +[16] Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, and Jiajun Wu. Neural radiance flow for 4d view synthesis and video processing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. 2 +[17] Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, and Zhangyang Wang. Lightgaussian: Unbounded 3d gaussian compression with $15\mathrm{x}$ reduction and $200+$ fps, 2023. 3 +[18] Jiemin Fang, Taoran Yi, Xinggang Wang, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Matthias Nießner, and Qi Tian. Fast dynamic radiance fields with time-aware neural voxels. In SIGGRAPH Asia 2022 Conference Papers. ACM, 2022. 2 +[19] Guofeng Feng, Siyan Chen, Rong Fu, Zimu Liao, Yi Wang, Tao Liu, Zhilin Pei, Hengjie Li, Xingcheng Zhang, and Bo Dai. Flashgs: Efficient 3d gaussian splatting for large-scale and high-resolution rendering, 2024. 2 +[20] Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: Explicit radiance fields in space, time, and appearance. In CVPR, pages 12479-12488, 2023. 2, 6 +[21] Zhongpai Gao, Benjamin Planche, Meng Zheng, Anwesa Choudhuri, Terrence Chen, and Ziyan Wu. 6dgs: Enhanced direction-aware gaussian splatting for volumetric rendering, 2024. 2 +[22] Danillo Graziosi, Ohji Nakagami, Satoru Kuma, Alexandre Zaghetto, Teruhiko Suzuki, and Ali Tabatabai. 
An overview of ongoing point cloud compression standardization activities: Video-based (v-pcc) and geometry-based (g-pcc). APSIPA Transactions on Signal and Information Processing, 9: e13, 2020. 1 +[23] Zhiyang Guo, Wengang Zhou, Li Li, Min Wang, and Houqiang Li. Motion-aware 3d gaussian splatting for efficient dynamic scene reconstruction. ArXiv, abs/2403.11447, 2024. 2, 3 +[24] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers. Association for Computing Machinery, 2024. 2 +[25] Lukas Hollein, Aljaž Božić, Michael Zollhöfer, and Matthias Nießner. 3dgs-lm: Faster gaussian-splatting optimization with levenberg-marquardt, 2024. 2 +[26] Mustafa Isik, Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, and Matthias Nießner. Humanrf: High-fidelity neural radiance fields for humans in motion. ACM Transactions on Graphics (TOG), 42(4), 2023. 2 +[27] Yuheng Jiang, Zhehao Shen, Yu Hong, Chengcheng Guo, Yize Wu, Yingliang Zhang, Jingyi Yu, and Lan Xu. Robust dual gaussian splatting for immersive human-centric volumetric videos. arXiv preprint arXiv:2409.08353, 2024. 2, 3 +[28] Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, and Lan Xu. Hifi4g: High-fidelity human performance rendering via compact gaussian splatting. In CVPR, pages 19734-19745, 2024. 2, 3 +[29] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42 (4), 2023. 2 +[30] Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, and Ping Tan. Streaming radiance fields for 3d video synthesis. Advances in Neural Information Processing Systems, 35: 13485-13498, 2022. 2, 6, 7, 8 +[31] Mu Li, Wangmeng Zuo, Shuhang Gu, Debin Zhao, and David Zhang. Learning convolutional networks for content-weighted image compression. 
In CVPR, pages 3214-3223, 2018. 3 +[32] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, et al. Neural 3d video synthesis from multi-view video. In CVPR, pages 5521-5531, 2022. 2, 6, 7, 8 +[33] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In CVPR, pages 6494-6504, 2021. 2 +[34] Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, and Noah Snavely. Dynibar: Neural dynamic image-based rendering. In CVPR, pages 4273-4284, 2023. 2 +[35] Zhan Li, Zhang Chen, Zhong Li, and Yi Xu. Spacetime gaussian feature splatting for real-time dynamic view synthesis. In CVPR, pages 8508-8520, 2024. 2, 4 +[36] Kai Lin, Chuanmin Jia, Xinfeng Zhang, Shanshe Wang, Siwei Ma, and Wen Gao. Dmvc: Decomposed motion modeling for learned video compression. IEEE Transactions on Circuits and Systems for Video Technology, 33(7):3502-3515, 2023. 3 +[37] Artur Loza, Lyudmila Mihaylova, Nishan Canagarajah, and David Bull. Structural similarity-based object tracking in video sequences. In 2006 9th International Conference on Information Fusion, pages 1-6, 2006. 5 +[38] Guo Lu, Wanli Ouyang, Dong Xu, Xiaoyun Zhang, Chunlei Cai, and Zhiyong Gao. Dvc: An end-to-end deep video compression framework. In CVPR, pages 10998-11007, 2019. 3 +[39] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In CVPR, pages 20654-20664, 2024. 3 +[40] Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. In 3DV, 2024. 3, 6 +[41] Haichuan Ma, Dong Liu, Ning Yan, Houqiang Li, and Feng Wu. End-to-end optimized versatile image compression with wavelet-like transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1247-1263, 2022. 
3 +[42] Jue Mao and Lu Yu. Convolutional neural network based bi-prediction utilizing spatial and temporal information in video coding. IEEE Transactions on Circuits and Systems for Video Technology, 30(7):1856-1870, 2020. 3 + +[43] Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In CVPR, 2021. 2 +[44] Xiandong Meng, Chen Chen, Shuyuan Zhu, and Bing Zeng. A new hevc in-loop filter based on multi-channel long-short-term dependency residual networks. In 2018 Data Compression Conference, pages 187-196, 2018. 3 +[45] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 2 +[46] David Minnen, Johannes Balle, and George D Toderici. Joint autoregressive and hierarchical priors for learned image compression. Advances in neural information processing systems, 31, 2018. 5 +[47] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):102:1-102:15, 2022. 2 +[48] KL Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. Compgs: Smaller and faster gaussian splatting with vector quantization. ECCV, 2024. 3 +[49] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In ICCV, pages 5865-5874, 2021. 2 +[50] Keunhong Park, Philipp Henzler, Ben Mildenhall, Jonathan T. Barron, and Ricardo Martin-Brualla. Camp: Camera preconditioning for neural radiance fields. ACM Trans. Graph., 2023. 2 +[51] Sungheon Park, Minjung Son, Seokhwan Jang, Young Chun Ahn, Ji-Yeon Kim, and Nahiyup Kang. 
Temporal interpolation is all you need for dynamic neural radiance fields. CVPR, pages 4212-4221, 2023. 2 +[52] S. Pateux and J. Jung. An excel add-in for computing Bjontegaard metric and its evolution. In VCEG Meeting, 2007. 6 +[53] Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, and Xiaowei Zhou. Representing volumetric videos as dynamic mlp maps. In CVPR, pages 4252-4262, 2023. 3 +[54] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-NeRF: Neural Radiance Fields for Dynamic Scenes. In CVPR, 2020. 2 +[55] Saskia Rabich, Patrick Stotko, and Reinhard Klein. Fpo++: Efficient encoding and rendering of dynamic neural radiance fields by analyzing and enhancing fourier plenoctrees. arXiv preprint arXiv:2310.20710, 2023. 2 +[56] Christian Reiser, Rick Szeliski, Dor Verbin, Pratul Srinivasan, Ben Mildenhall, Andreas Geiger, Jon Barron, and Peter Hedman. Merf: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes. ACM Transactions on Graphics (TOG), 42(4):1-12, 2023. 2 +[57] Daniel Rho, Byeonghyeon Lee, Seungtae Nam, Joo Chan Lee, Jong Hwan Ko, and Eunbyung Park. Masked wavelet representation for compact neural radiance fields. In CVPR, pages 20680-20690, 2023. 3 +[58] Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, and Yebin Liu. Tensor4d: Efficient neural 4d decomposition for high-fidelity dynamic reconstruction and rendering. In CVPR, pages 16632-16642, 2023. 2 +[59] Xihua Sheng, Jiahao Li, Bin Li, Li Li, Dong Liu, and Yan Lu. Temporal context mining for learned video compression. IEEE Transactions on Multimedia, 25:7311-7322, 2023. 3 +[60] Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, and Andreas Geiger. Nerfplayer: A streamable dynamic scene representation with decomposed neural radiance fields. IEEE Transactions on Visualization and Computer Graphics, 29(5):2732-2742, 2023. 
2, 3, 6 +[61] Jiakai Sun, Han Jiao, Guangyuan Li, Zhanjie Zhang, Lei Zhao, and Wei Xing. 3dgstream: On-the-fly training of 3d gaussians for efficient streaming of photo-realistic free-viewpoint videos. In CVPR, pages 20675-20685, 2024. 1, 2, 3, 6, 7 +[62] Feng Wang, Sinan Tan, Xinghang Li, Zeyue Tian, Yafei Song, and Huaping Liu. Mixed neural voxels for fast multiview video synthesis. In ICCV, pages 19649-19659, 2023. 2, 6 +[63] Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Jingyi Yu, and Lan Xu. Fourier plenoctrees for dynamic radiance field rendering in real-time. In CVPR, pages 13514-13524, 2022. 2 +[64] Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu, and Minye Wu. Neural residual radiance fields for streamably free-viewpoint videos. In CVPR, pages 76-87, 2023. 1, 2, 3, 6, 7, 8 +[65] Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, and Minye Wu. Videorf: Rendering dynamic radiance fields as 2d feature video streams, 2023. 2, 3 +[66] Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex C Kot, and Bihan Wen. Contextgs: Compact 3d gaussian splatting with anchor level context model. arXiv preprint arXiv:2405.20721, 2024. 3 +[67] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612, 2004. 6 +[68] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In CVPR, pages 20310-20320, 2024. 2, 4 +[69] Minye Wu, Zehao Wang, Georgios Kouros, and Tinne Tuytelaars. Tetrirf: Temporal tri-plane radiance fields for efficient free-viewpoint video. In CVPR, pages 6487-6496, 2024. 1, 2, 3, 6, 7, 8 +[70] Jinbo Yan, Rui Peng, Luyang Tang, and Ronggang Wang. 
4d gaussian splatting with scale-aware residual field and adaptive optimization for real-time rendering of temporally complex dynamic scenes. In ACM MM, pages 7871-7880, 2024. 2 + +[71] Ning Yan, Dong Liu, Houqiang Li, Bin Li, Li Li, and Feng Wu. Invertibility-driven interpolation filter for video coding. IEEE Transactions on Image Processing, 28(10):4912-4925, 2019. 3 +[72] R. Yang, M. Xu, Z. Wang, and T. Li. Multi-frame quality enhancement for compressed video. In CVPR, pages 6664-6673, Los Alamitos, CA, USA, 2018. 3 +[73] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101, 2023. 2 +[74] Zeyu Yang, Hongye Yang, Zijie Pan, and Li Zhang. Realtime photorealistic dynamic scene representation and rendering with 4d gaussian splatting. 2024. 2 +[75] Zhenghui Zhao, Shiqi Wang, Shanshe Wang, Xinfeng Zhang, Siwei Ma, and Jiansheng Yang. Enhanced biprediction with convolutional neural network for high-efficiency video coding. IEEE Transactions on Circuits and Systems for Video Technology, 29(11):3291-3301, 2019. 3 +[76] Zihan Zheng, Houqiang Zhong, Qiang Hu, Xiaoyun Zhang, Li Song, Ya Zhang, and Yanfeng Wang. Hpc: Hierarchical progressive coding framework for volumetric video. In ACM MM, page 7937-7946, New York, NY, USA, 2024. Association for Computing Machinery. 2, 3 +[77] Zihan Zheng, Houqiang Zhong, Qiang Hu, Xiaoyun Zhang, Li Song, Ya Zhang, and Yanfeng Wang. Jointrf: End-to-end joint optimization for dynamic neural radiance field representation and compression. In 2024 IEEE International Conference on Image Processing (ICIP), pages 3292-3298, 2024. 
2, 3 \ No newline at end of file diff --git a/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/images.zip b/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8559f14ee1b7365507eca7d443a43b217a68b2eb --- /dev/null +++ b/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87ee65a2cc58bde4eb29b2d8af1a78f49c486f62b3da49925f6590a8b067555b +size 714307 diff --git a/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/layout.json b/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..159c5f748517f12e313128468988a4daa17ea881 --- /dev/null +++ b/CVPR/2025/4DGC_ Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9de12dce04a2c20246cb622e95e544fdd8684d4e9545eda92fdcaafa2e8a1301 +size 515650 diff --git a/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_content_list.json b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d9c0b8073015b83aeb05dbbacb0a4b3c6c50d1d9 --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6d478a548c42d06680bdee4942f972cfff93a93adf162e4521fe5d53648fb19 +size 92433 diff --git a/CVPR/2025/4DTAM_ 
Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_model.json b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ecfb92eaa65e2aadc6c9502c4c4088f40c8ac343 --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13695f48fc3ad333bfa57b6d3c95590649e8c4910f4d92306678d23ad0e5fd79 +size 113229 diff --git a/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_origin.pdf b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4ead4e652ea7149b34af513c2b2731e6d4b4f7c3 --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/4cb7c164-1232-44e5-9451-158b5d7048fc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28221265fb342544ca2a574f0cae71a30de91cf076a146283a420a54972e36b0 +size 7942692 diff --git a/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/full.md b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/full.md new file mode 100644 index 0000000000000000000000000000000000000000..859d2bca199ce8c5d1ff5e57e963c47ca7ec1d94 --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/full.md @@ -0,0 +1,442 @@ +# 4DTAM: Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians + +Hidenobu Matsuki + +Gwangbin Bae + +Andrew J. 
Davison + +Dyson Robotics Laboratory, Imperial College London + +![](images/9878a3ffdcee97c8d2c71686589553463ef274f8c89d581200af22e9750c16cb.jpg) +Figure 1. 4DTAM jointly estimates camera-egomotion, appearance, geometry and scene dynamics without any template. + +![](images/39891c2e203ed09fd736dbadf23eb4b38664852ddd3eaa7a00ca98a216b5f1fe.jpg) + +![](images/e64892b810e659d120393938c10f3e26f2db579edbcc6e34b099bdc3ec7b21f1.jpg) + +![](images/d7d1035ffeb22d0b01c7f2780bb8b9a83c2c23acd8e6acbb7793ad3a75bffbd2.jpg) + +![](images/7f7b3f8ea67355554ef4e715043e40bd7ae8b3111ba9d56fcb320fcea4e68a2b.jpg) + +![](images/af91e61f7e4bce9217b6654284a997db78fca21c16368d06503b277f19f75f1b.jpg) + +# Abstract + +We propose the first 4D tracking and mapping method that jointly performs camera localization and non-rigid surface reconstruction via differentiable rendering. Our approach captures 4D scenes from an online stream of color images with depth measurements or predictions by simultaneously optimizing scene geometry, appearance, dynamics, and camera ego-motion. Although natural environments exhibit complex non-rigid motions, 4D-SLAM remains relatively underexplored due to its inherent challenges; even with 2.5D signals, the problem is ill-posed because of the high dimensionality of the optimization space. To overcome these challenges, we first introduce a SLAM method based on Gaussian surface primitives that leverages depth signals more effectively than 3D Gaussians, thereby achieving accurate surface reconstruction. To further model non-rigid deformations, we employ a warp-field represented by a multi-layer perceptron (MLP) and introduce a novel camera pose estimation technique along with surface regularization terms that facilitate spatio-temporal reconstruction. In addition to these algorithmic challenges, a significant hurdle in 4D-SLAM research is the lack of publicly available datasets with reliable ground truth and evaluation protocols. 
To address this, we present a novel synthetic dataset of everyday objects with diverse motions, leveraging large-scale object models and animation modeling. In summary, we open up modern 4D-SLAM research by introducing a novel method and evaluation protocols grounded in modern vision and rendering techniques. + +# 1. Introduction + +The world we live in has many moving elements. Rivers flow, trees sway, cookies crumble, and humans walk. Although Simultaneous Localization and Mapping (SLAM) methods which assume that most of the world is static are highly useful, embodied agents which aim to navigate and interact with their environments in the most general way should be able to operate in dynamic scenes. There are several ways to segment and ignore moving scene elements, and a SLAM system can be assembled by integrating these individual modules so that it can reconstruct the static parts of a scene and estimate camera ego-motion. However, in this work, we aim for a more comprehensive spatiotemporal (4D) reconstruction of scenes exhibiting significant dynamic motion. Our primary focus is on a unified framework that leverages the intrinsic capabilities of the underlying scene representation without heavily relying on prior assumptions about moving elements. 4D-SLAM with general scene motion is difficult primarily because of the complex and high-dimensional nature of modeling non-rigid motions (and potential topological changes) while simultaneously optimizing the pose of a moving camera. There is much more redundancy than in rigid SLAM, and some prior assumptions are needed to combat this. Another challenge lies in the lack of datasets to train and/or evaluate techniques. Recent advances in computer vision and graphics make it a good time to revisit this problem. New 3D representations (e.g. 
neural fields and Gaussian splats) allow differentiable rendering of complex 3D scenes, optimization from 2D observations, and smooth modeling of deformation fields without more specific assumptions. Also, the availability of high-quality 3D meshes on the Internet and rendering software (e.g. Blender) gives the ability to render non-rigidly moving objects with ground truth.

We present 4DTAM, a novel approach for 4D Tracking And Mapping in dynamic scenes. We use Gaussian surface primitives to represent the scene and introduce a neural warp-field represented by a multi-layer perceptron (MLP) to model continuous temporal changes. We then utilize differentiable rendering to jointly optimize the scene geometry, appearance, dynamics, and camera ego-motion from an online stream of a single RGB-D camera. This enables accurate 3D reconstruction and real-time rendering, even in the presence of complex non-rigid deformations. To facilitate future research, we also introduce a new synthetic dataset of dynamic objects. Our focus in this dataset is realistic, complex motion of scenes that are not well represented by existing deformable object models. Animated 3D meshes are rendered and the ground truth depth, surface normals, and foreground masks are extracted together with the camera poses/intrinsics. This dataset provides challenging scenarios for 4D reconstruction methods. We also release the full rendering script to allow the generation of custom 4D datasets. Our experimental results demonstrate that 4DTAM achieves good performance in both camera tracking and scene reconstruction in the presence of dynamic objects. It can handle the complex motion of articulated objects (e.g., drawers) and non-rigid objects (e.g., curtains, flags, and animals), showcasing its potential for applications in robotics, augmented reality, and other fields requiring real-time dynamic scene understanding.
We primarily use RGB-D sensor input, but also demonstrate an extension to monocular RGB streams by incorporating a monocular depth prediction network in the supplementary material.

In summary, the contributions of this paper are:

- 4DTAM, the first 4D tracking and mapping method that uses differentiable rendering and Gaussian surface primitives for dynamic environments.
- The first 2DGS [17]-based SLAM method with analytic camera pose gradients, normal initialization, and regularization to fully exploit depth signals.
- An MLP-based warp-field for modeling non-rigid scenes, complemented by a novel camera localization technique and rigidity regularization of surface Gaussians.
- A novel 4D-SLAM dataset with complex object motions, ground-truth camera trajectories, and dynamic object meshes, along with an evaluation protocol.
- Extensive evaluations demonstrating that the method achieves state-of-the-art performance.

# 2. Related Work

# 2.1. Visual SLAM

Visual SLAM has been an extensively researched field, with Dense SLAM specifically focusing on capturing detailed scene geometry [34] and semantics [30]. A central aspect of these methods lies in the choice of scene representation and the corresponding optimization framework. Dense SLAM methods based on traditional scene representations, such as volumetric Truncated Signed Distance Functions (TSDF) [22, 33, 58] or Surfels [43, 59], project 2D observations into 3D space and employ specific data fusion algorithms. While effective, these methods often fail to maintain consistency between the model and sensor observations across multiple viewpoints, posing challenges for long-term operation.

However, recent advancements in graphics hardware have facilitated the adoption of differentiable rendering frameworks, which have revolutionized inverse rendering and scene reconstruction [23, 31, 32, 36].
Differentiable rendering ensures multi-view consistency through streamlined backpropagation, enhancing scene reconstruction accuracy. Notably, 3D Gaussian Splatting (3DGS) [25] has gained attention due to its flexible resource allocation and rapid forward rendering capabilities. Initially developed for photorealistic view synthesis, recent research has extended its application to surface reconstruction [15, 64]. Enhanced methods, such as 2D Gaussian Splatting (2DGS) [17], achieve superior geometry reconstruction by reducing the Gaussian dimension and explicitly defining surface normals. These differentiable rendering representations have been applied to visual SLAM, from coordinate-based MLPs [51] to explicit voxel grids [21, 54, 61, 65], points [42], and 3D Gaussians [24, 29, 60].

# 2.2. SLAM for 4D Scene Reconstruction

3D reconstruction of dynamic scenes has been extensively studied, with notable achievements using optimization methods, even for unknown non-rigid objects observed by a single moving RGB camera [14, 52]. However, these approaches typically require batch optimization and are limited to smaller scenes. In contrast, dynamic SLAM targets incremental reconstruction and tracking of large, continuously moving scenes, ideally in real time. Most methods to date have relied on RGB-D data from moving depth cameras.

While many methods detect and exclude dynamic objects to focus on static scene reconstruction [44], full spatiotemporal reconstruction (which we refer to as $4D$-SLAM) requires more advanced solutions. For instance, tracking and reconstructing rigid moving objects separately [41] or employing parametric shape models for known semantic classes like humans or animals [26] are effective strategies. Specialized domains, such as endoscopic imaging, have utilized scene-specific priors or deformation models to handle non-rigid dynamics [28, 40].
An incremental 4D-SLAM for general dynamic scenes has remained more challenging, but has been addressed based on various regularizing assumptions and representations. DynamicFusion [35] pioneered a line of work [13, 19, 46, 47] which captures temporal evolution in the scene geometry by jointly optimizing a canonical volumetric representation (e.g., TSDF volume [35]) and a deformation field. As the solution space is extremely high-dimensional, additional constraints are often introduced to regularize the motion field [46, 47] or to align visual features [4, 19]. Recent advances in 3D representations, such as neural fields and Gaussian primitives, have opened new possibilities for dynamic scene reconstruction. Canonical radiance and motion fields can be jointly optimized via differentiable rendering, as demonstrated with NeRF [37, 39, 53] and SDF [7, 55]. For 3D Gaussians, which can explicitly represent points, motion can be estimated either through per-primitive trajectories [27] or learnable motion bases [56]. However, warp-field-based motion representation offers inherent smoothness regularization, leveraging the properties of neural fields [10, 18, 62, 63]. Most existing methods, however, rely on known camera poses or multi-camera setups to capture dense spatiotemporal observations. While DyNoMo [45] supports camera pose optimization, its 3D Gaussian representation is not suited for geometrically accurate reconstruction. In contrast, our 4DTAM framework enables 4D reconstruction using a single RGB-D camera, jointly optimizing camera poses, appearance, geometry, and dynamics, making it practical for most embodied agents.

# 2.3. Datasets for 4D Reconstruction

4D reconstruction has been studied extensively for the case of the human body. Datasets like Human3.6M [20], DeepCap [16], and ZJU-MoCap [38] capture diverse human motions under a multi-camera setup.
The cameras are fixed, synchronized, and calibrated to reduce the difficulty in establishing dense multi-view correspondences. Only a small number of datasets provide single-stream RGB-D sequences captured from a moving camera [5, 12, 46]. Recovering the camera poses is not trivial for such real-world captures, and additional post-processing (e.g. robust depth map alignment [55]) is required. Another challenge lies in ground truth acquisition. Besides the depth measurements, other ground truths (e.g., scene flow, object mask) often require manual labeling. In contrast, synthetic datasets [6, 57] provide perfect ground truth. Recent advances in open-source datasets [9] and rendering software [8] also close the synthetic-to-real domain gap significantly. To this end, we introduce a new high-quality synthetic dataset tailored for 4D reconstruction and camera pose estimation.

# 3. Method

# 3.1. 2D Gaussian Splatting

Our geometric scene representation is based on 2D Gaussian Splatting (2DGS) [17]. Unlike 3D Gaussian Splatting (3DGS), which uses blob-like splats, 2DGS functions as a stretchable surfel with explicitly defined surface normal directions. This property makes 2DGS particularly well-suited for non-rigid scene reconstruction with a single camera, where effectively handling 2.5D input signals is critical.

Each 2D Gaussian $\mathcal{G}$ is represented by its 3D mean position $\mathbf{P}_{\mu}$, rotation $\mathbf{R} \in SO(3)$, color $\mathbf{c}$, opacity $o$, and a scaling vector $\mathbf{S} \in \mathbb{R}^2$. The rotation matrix $\mathbf{R}$ is decomposed as $\mathbf{R} = [\mathbf{t}_u, \mathbf{t}_v, \mathbf{t}_w]$, where $\mathbf{t}_u$ and $\mathbf{t}_v$ represent two principal tangential vectors, and $\mathbf{t}_w$ is the normal vector, defined as $\mathbf{t}_w = \mathbf{t}_u \times \mathbf{t}_v$. For simplicity, spherical harmonics are omitted in this work.
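As a concrete illustration, a single 2D Gaussian primitive and the local-to-world transform $\mathbf{H}$ of Eqs. (1)-(2) can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation; the class name and default values are assumptions:

```python
import numpy as np

class Gaussian2D:
    """One 2D Gaussian surface primitive: mean P_mu, rotation R = [t_u, t_v, t_w],
    scaling vector (s_u, s_v), opacity o, and color c (spherical harmonics omitted)."""
    def __init__(self, p_mu, t_u, t_v, s_u, s_v, opacity=1.0, color=(1.0, 1.0, 1.0)):
        self.p_mu = np.asarray(p_mu, dtype=float)   # 3D mean position
        self.t_u = np.asarray(t_u, dtype=float)     # principal tangential vector 1
        self.t_v = np.asarray(t_v, dtype=float)     # principal tangential vector 2
        self.t_w = np.cross(self.t_u, self.t_v)     # surface normal: t_w = t_u x t_v
        self.s = np.array([s_u, s_v], dtype=float)  # scaling vector S in R^2
        self.opacity = opacity
        self.color = np.asarray(color, dtype=float)

    def H(self):
        """4x4 homogeneous transform H = [[RS, P_mu], [0, 1]] mapping a point
        (u, v, 1, 1) on the local tangent plane into world space."""
        H = np.eye(4)
        H[:3, 0] = self.s[0] * self.t_u
        H[:3, 1] = self.s[1] * self.t_v
        H[:3, 2] = 0.0                              # third column is zero: a flat surfel
        H[:3, 3] = self.p_mu
        return H

# A point (u, v) on the tangent plane maps to world space as H @ (u, v, 1, 1):
g = Gaussian2D(p_mu=[0, 0, 1], t_u=[1, 0, 0], t_v=[0, 1, 0], s_u=0.5, s_v=0.5)
p_world = g.H() @ np.array([1.0, 1.0, 1.0, 1.0])
```

The zero third column makes the surfel degenerate along its normal, which is what distinguishes a 2D Gaussian from a 3D Gaussian blob.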
The 2D Gaussian function is parameterized on the local tangent plane in world space as:

$$
P(u, v) = \mathbf{P}_{\mu} + s_{u} \mathbf{t}_{u} u + s_{v} \mathbf{t}_{v} v = \mathbf{H}(u, v, 1, 1)^{\mathrm{T}} \tag{1}
$$

$$
\text{where} \quad \mathbf{H} = \begin{bmatrix} s_{u} \mathbf{t}_{u} & s_{v} \mathbf{t}_{v} & \mathbf{0} & \mathbf{P}_{\mu} \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R}\mathbf{S} & \mathbf{P}_{\mu} \\ \mathbf{0} & 1 \end{bmatrix} \tag{2}
$$

For a point $\mathbf{u} = (u, v)$ in the tangential plane of the 2D Gaussian ($uv$ space), its projection onto the image plane is given by

$$
\mathbf{x} = (xz, yz, z, 1)^{\mathrm{T}} = \mathbf{W} P(u, v) = \mathbf{W}\mathbf{H}(u, v, 1, 1)^{\mathrm{T}} \tag{3}
$$

where $\mathbf{W} \in \mathbb{R}^{4 \times 4}$ is the transformation matrix from world space to screen space.

To avoid numerically unstable matrix inversion of $\mathbf{M} = (\mathbf{W}\mathbf{H})^{-1}$, 2DGS applies ray-splat intersection by finding the intersection of non-parallel planes (the $x$-plane and $y$-plane). The ray $\mathbf{x} = (x, y)$ is determined by the intersection of the $x$-plane $\mathbf{h}_x$ and the $y$-plane $\mathbf{h}_y$, represented as $\mathbf{h}_x = (-1, 0, 0, x)^{\mathrm{T}}$ and $\mathbf{h}_y = (0, -1, 0, y)^{\mathrm{T}}$, respectively.
In the $uv$ coordinates of the 2D Gaussian, this is expressed as:

$$
\mathbf{h}_{u} = (\mathbf{W}\mathbf{H})^{\mathrm{T}} \mathbf{h}_{x} \quad \text{and} \quad \mathbf{h}_{v} = (\mathbf{W}\mathbf{H})^{\mathrm{T}} \mathbf{h}_{y} \tag{4}
$$

The intersection point satisfies the following condition:

$$
\mathbf{h}_{u} \cdot (u, v, 1, 1)^{\mathrm{T}} = \mathbf{h}_{v} \cdot (u, v, 1, 1)^{\mathrm{T}} = 0 \tag{5}
$$

This leads to a solution for the intersection point $\mathbf{u}(\mathbf{x})$:

$$
u(\mathbf{x}) = \frac{\mathbf{h}_{u}^{2} \mathbf{h}_{v}^{4} - \mathbf{h}_{u}^{4} \mathbf{h}_{v}^{2}}{\mathbf{h}_{u}^{1} \mathbf{h}_{v}^{2} - \mathbf{h}_{u}^{2} \mathbf{h}_{v}^{1}} \quad v(\mathbf{x}) = \frac{\mathbf{h}_{u}^{4} \mathbf{h}_{v}^{1} - \mathbf{h}_{u}^{1} \mathbf{h}_{v}^{4}}{\mathbf{h}_{u}^{1} \mathbf{h}_{v}^{2} - \mathbf{h}_{u}^{2} \mathbf{h}_{v}^{1}} \tag{6}
$$

where $\mathbf{h}_{u}^{i}, \mathbf{h}_{v}^{i}$ denote the $i$-th parameters of the 4D homogeneous plane parameters.

![](images/89fc13acecd75befa4653907395b9b284b47298eb65e2aaad618c9b9f2d0e808.jpg)
Figure 2. Method overview of 4DTAM.

The 2D Gaussian at $(u, v)$ is evaluated as:

$$
\mathcal{G}(\mathbf{u}) = \exp\left(-\frac{u^{2} + v^{2}}{2}\right) \tag{7}
$$

The 2D Gaussians are sorted along the camera ray by their center depth and organized into image tiles. Per-pixel color is rendered via volumetric alpha blending:

$$
c(\mathbf{x}) = \sum_{i=1} \mathbf{c}_{i} \alpha_{i} \mathcal{G}_{i}(\mathbf{u}(\mathbf{x})) \prod_{j=1}^{i-1} \left(1 - \alpha_{j} \mathcal{G}_{j}(\mathbf{u}(\mathbf{x}))\right) \tag{8}
$$

where depth and normal can be rendered similarly.

# 3.2. Analytic Camera Pose Jacobian

One major advantage of Gaussian Splatting is its analytical formulation of gradient flow for model parameters, enabling real-time full-resolution rendering.
However, it assumes posed images as input and does not provide gradients for camera poses. To accelerate optimization, we derive the analytic Jacobian of the camera pose for 2D Gaussian Splatting and implement it using a CUDA kernel. This formulation has potential applications for a wide range of tasks involving pose estimation in surface-based Gaussian Splatting.

We use Lie algebra to derive the minimal Jacobians for the camera pose matrix from the world coordinate system to the camera's local coordinate system, defining $T_{CW} \in SE(3)$ and $\tau \in \mathfrak{se}(3)$. Since 2DGS backpropagates gradients to $\mathbf{M}^T = \mathbf{W}\mathbf{H}$ during the optimization of the 3D mean, we require the partial derivative $\frac{\partial\mathbf{M}^T}{\partial\tau}$. Let $\mathbf{K} \in \mathbb{R}^{4 \times 4}$ represent the camera projection matrix. Then, Eq. (3) is rewritten as:

$$
\mathbf{x} = \mathbf{M}^{T}(u, v, 1, 1)^{\mathrm{T}} = \mathbf{K}\mathbf{T}_{CW}\mathbf{H}(u, v, 1, 1)^{\mathrm{T}} \tag{9}
$$

Using the chain rule, the partial derivatives are computed as:

$$
\frac{\partial \mathbf{M}^{T}}{\partial \boldsymbol{\tau}} = \frac{\partial \mathbf{M}^{T}}{\partial \mathbf{W}} \frac{\partial \mathbf{W}}{\partial \boldsymbol{T}_{CW}} \frac{\partial \boldsymbol{T}_{CW}}{\partial \boldsymbol{\tau}}, \tag{10}
$$

$$
\frac{\partial \boldsymbol{T}_{CW}}{\partial \boldsymbol{\tau}} = \begin{bmatrix} \mathbf{0} & -\boldsymbol{R}_{CW,:1}^{\times} \\ \mathbf{0} & -\boldsymbol{R}_{CW,:2}^{\times} \\ \mathbf{0} & -\boldsymbol{R}_{CW,:3}^{\times} \\ \boldsymbol{I} & -\mathbf{t}_{CW}^{\times} \end{bmatrix} \tag{11}
$$

where $R_{CW} \in SO(3)$ and $\mathbf{t}_{CW} \in \mathbb{R}^3$ denote the rotation and translation parts of $T_{CW}$, respectively.
The notation $^{\times}$ represents the skew-symmetric matrix of a 3D vector, and $R_{CW,:i}$ denotes the $i$-th column of $R_{CW}$.

2DGS also renders a normal map, which can be supervised using a loss computed from the rendered normals. Let $\mathbf{n}_c$ denote the camera-space normal. The normal of a 2D Gaussian in the camera's local coordinate system is defined as:

$$
\mathbf{n}_{c} = \boldsymbol{T}_{CW} \mathbf{t}_{w} \tag{12}
$$

where $\mathbf{t}_w$ is the surface normal in the world coordinate system.

Borrowing the notation of the left Jacobian for Lie groups from [48], the partial derivative is given by:

$$
\frac{\partial \mathbf{n}_{c}}{\partial \boldsymbol{\tau}} = \frac{\mathcal{D} \mathbf{n}_{c}}{\mathcal{D} \boldsymbol{T}_{CW}} = \begin{bmatrix} \boldsymbol{I} & -\mathbf{n}_{c}^{\times} \end{bmatrix} \tag{13}
$$

Further details of the derivation are provided in the supplementary material.

# 3.3. Warp Field

To model time-varying deformations, we use a warp-field represented by a coordinate-based network [62, 63]. In our hand-held single-camera setup, the limited view coverage of dynamic objects necessitates structural priors in the motion representation. For this, we employ a compact MLP as the warp-field to estimate transitions from the canonical Gaussians following [62].
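As an illustration of such a coordinate-based warp-field, the sketch below implements a frequency positional encoding and a tiny MLP mapping encoded position and time to deformation offsets. This is a hedged numpy sketch, not the CUDA-optimized implementation [32] the system actually uses; the layer sizes, the quaternion parameterization of the rotation offset, and the random weights are assumptions for illustration:

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Frequency-based encoding gamma(x): the raw input followed by
    (sin(2^k * pi * x), cos(2^k * pi * x)) for k = 0..num_freqs-1 [31]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    feats = [x]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats)

class WarpFieldMLP:
    """Tiny MLP f_theta(gamma(x), gamma(t)) -> (dx, dr, ds): offsets for a
    canonical 2D Gaussian's position (3), rotation (4, as a quaternion) and scale (2)."""
    def __init__(self, pos_freqs=10, time_freqs=6, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * (1 + 2 * pos_freqs) + 1 * (1 + 2 * time_freqs)
        out_dim = 3 + 4 + 2
        self.W1 = rng.normal(0.0, 0.1, (hidden, in_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (out_dim, hidden))
        self.b2 = np.zeros(out_dim)
        self.pos_freqs, self.time_freqs = pos_freqs, time_freqs

    def __call__(self, x, t):
        inp = np.concatenate([positional_encoding(x, self.pos_freqs),
                              positional_encoding([t], self.time_freqs)])
        h = np.maximum(self.W1 @ inp + self.b1, 0.0)   # ReLU hidden layer
        out = self.W2 @ h + self.b2
        return out[:3], out[3:7], out[7:9]             # (dx, dr, ds)

f = WarpFieldMLP()
dx, dr, ds = f(np.array([0.1, 0.2, 0.3]), t=0.5)
```

The smoothness of the MLP with respect to its inputs is what provides the implicit regularization of the motion field mentioned in Sec. 2.2.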
Given time $t$ and center position $\boldsymbol{x}$ of 2D Gaussians in canonical space as inputs, the deformation MLP $\mathbf{f}_{\theta}$ produces offsets, which subsequently transform the canonical 2D Gaussians to the deformed space:

$$
\left(\delta \boldsymbol{x}, \delta \boldsymbol{r}, \delta \boldsymbol{s}\right) = \mathbf{f}_{\theta}\left(\gamma_{1}(\boldsymbol{x}), \gamma_{2}(t)\right) \tag{14}
$$

where $\delta \boldsymbol{x} \in \mathbb{R}^3$, $\delta \boldsymbol{r} \in SO(3)$, and $\delta \boldsymbol{s} \in \mathbb{R}^2$ denote the offsets of the 2D Gaussian's mean position, rotation, and scale, respectively, and $\gamma$ denotes the frequency-based positional encoding [31]. For deformable SLAM applications, we leverage a CUDA-optimized MLP implementation [32] to enable fast, interactive reconstruction.

# 3.4. Tracking and Mapping Framework

Our SLAM method follows the standard tracking and mapping architecture, where the tracking module is in charge of fast online camera pose estimation, while the mapping module performs a more involved joint optimization of the camera poses, geometry, and motion of selected keyframes. Further details of the hyperparameters are available in the supplementary material.

# 3.4.1. Tracking

The tracking module estimates the coarse camera pose for the latest incoming frame. This is achieved by minimizing the photometric and depth rendering errors between the sensor observation and the rendering from the deformable Gaussian model. Unlike static 3DGS SLAM methods, we estimate the camera pose relative to the warped Gaussians at the latest keyframe's timestamp $t_{kf}$, assuming the deformed scene structure at $t_{kf}$ is closest to the current state.
We define the photometric rendering loss as:

$$
L_{p} = \left\| I\left(\mathcal{G}_{\text{cano}}, \boldsymbol{T}_{CW}, t_{kf}\right) - \bar{I} \right\|_{1} \tag{15}
$$

Here $I(\mathcal{G}_{\text{cano}}, \boldsymbol{T}_{CW}, t_{kf})$ denotes a color image rendered from the canonical Gaussians $\mathcal{G}_{\text{cano}}$ at the latest keyframe timestamp $t_{kf}$ under camera pose $\boldsymbol{T}_{CW}$, and $\bar{I}$ is the observed image. Similarly, we also minimize the geometric depth error:

$$
L_{g} = \left\| D\left(\mathcal{G}_{\text{cano}}, \boldsymbol{T}_{CW}, t_{kf}\right) - \bar{D} \right\|_{1} \tag{16}
$$

Following MonoGS [29], we further optimize affine brightness parameters. Keyframes are selected every N-th frame and sent to the mapping process for further refinement.

![](images/6c18ed7ee3473afd95b99fd7ac198e0e83fb16e2a72e589b068af6f9d909524c.jpg)
Figure 3. 2D Gaussian's Surface Normal Rendering based on Different Initialization. Left: Random initialization. Right: Our initialization aligned with sensor measurement.

![](images/f831fe0c55f697678e3e1f45948a5b2d04315e5aa450a7c9514fed9a8adcea5b.jpg)

# 3.4.2. Mapping

The mapping module performs joint optimization of the camera pose, canonical Gaussians, and the warp field within a sliding window.

Gaussian Management When a new keyframe is registered, we add new Gaussians to the canonical Gaussians $\mathcal{G}_{\text{cano}}$, based on the back-projected point cloud from the RGB-D observations. Unlike 3DGS, 2DGS explicitly encodes surface normal information in its rotation vector, making it beneficial to initialize using surface normals estimated from sensor depth measurements. To achieve this, we compute the surface normals of the current depth observation by taking the finite difference of neighboring back-projected depth points and assign them as the normal vectors of the 2D Gaussians $\mathbf{t}_w$.
This is formulated as:

$$
\mathbf{t}_{w} = \frac{\nabla_{x} \mathbf{p}_{d} \times \nabla_{y} \mathbf{p}_{d}}{\left| \nabla_{x} \mathbf{p}_{d} \times \nabla_{y} \mathbf{p}_{d} \right|} \tag{17}
$$

where $\mathbf{p}_d$ denotes points back-projected from the current sensor depth observation. We store the computed normal information as a 2D image $\mathbf{N}_{\text{sensor}}$ for normal supervision. Pruning and densification parameters follow MonoGS, which effectively prunes Gaussians wrongly inserted into the canonical space due to object movement.

4D Map optimization We perform joint optimization of the camera ego-motion, appearance, geometry and scene dynamics. In a single-camera setup, the lack of spatiotemporally dense observations makes fully capturing dynamic scenes challenging, as complete spatial (xyz) coverage over time (t) is only feasible with multi-camera systems. To address this, we introduce regularization terms for both shape and motion.

In addition to photometric and depth losses, we apply a normal regularization based on sensor measurements to better align 2D Gaussians. The original 2DGS method computes normals by finite differences of rendered depth during every optimization step, leading to high computational costs; we instead propose to use normals precomputed from the depth input as supervision. This reduces computational overhead, as normals are calculated only when a new keyframe is inserted:

$$
L_{n} = \sum_{i \in h \times w} \left(1 - \mathbf{n}_{i}^{\mathrm{T}} \mathbf{N}_{\text{sensor}, i}\right) \tag{18}
$$

To constrain motion in unobserved regions, we apply an as-rigid-as-possible regularization loss $L_{ARAP}$ from [27] to the Gaussian means.
Additionally, we introduce a novel surface normal rigidity loss, constraining the 2D Gaussians' surface normals to stay similar between timesteps $t_1$ and $t_2$, preserving local surface rigidity:

$$
L_{ARAP-n} = w_{i, j} \left\| \left(\mathbf{t}_{w}\right)_{i, t_{1}}^{T} \left(\mathbf{t}_{w}\right)_{j, t_{1}} - \left(\mathbf{t}_{w}\right)_{i, t_{2}}^{T} \left(\mathbf{t}_{w}\right)_{j, t_{2}} \right\|_{1} \tag{19}
$$

where $w_{i,j}$ is a distance-based weighting factor, as in $L_{ARAP}$. We apply the ARAP regularizers between the oldest and latest keyframes in the current window.

Together with the isotropic loss $L_{iso}$ proposed in [29], we minimize the following total cost function:

$$
\begin{aligned} L_{total} = \ & \lambda_{p} L_{p} + \lambda_{g} L_{g} + \lambda_{n} L_{n} \\ & + \lambda_{iso} L_{iso} + L_{ARAP} + L_{ARAP-n} \end{aligned} \tag{20}
$$

The optimization is based on the sliding window heuristics in [11], with two additional keyframes randomly selected from the history.

Global Optimization Sliding window-based optimization prioritizes the latest frame, causing past keyframe information to degrade over time. After tracking, we can optionally perform global optimization to finalize the map, which takes less than 1 minute on an RTX 4090. During this step, the poses and number of Gaussians are fixed, and one keyframe is randomly selected per iteration. The process uses the normal consistency loss of 2DGS, ensuring global consistency despite being relatively slow.

# 3.5. Dataset Generation

We introduce Sim4D, a new synthetic dataset for 4D reconstruction. Recently, a large number of photo-realistic, animated 3D meshes have become available [2, 9]. Combined with open-source graphics software [3, 8], such meshes provide a scalable way of generating datasets for non-rigid 4D reconstruction. The data generation pipeline is illustrated in Fig. 4.
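As a concrete illustration of the regularizers in Sec. 3.4.2, the surface-normal rigidity term of Eq. (19) can be sketched as follows. This is an illustrative numpy sketch; pair selection and the distance-based weights $w_{i,j}$ follow [27] in the actual system and are simply taken as inputs here:

```python
import numpy as np

def arap_normal_loss(n_t1, n_t2, pairs, weights):
    """Surface-normal rigidity loss (Eq. 19): for each Gaussian pair (i, j),
    penalize the change in the dot product of their normals between two timesteps.
    n_t1, n_t2: (N, 3) unit normals t_w at timesteps t1 and t2.
    pairs: iterable of (i, j) index pairs; weights: distance-based w_ij per pair."""
    loss = 0.0
    for (i, j), w in zip(pairs, weights):
        dot_t1 = n_t1[i] @ n_t1[j]
        dot_t2 = n_t2[i] @ n_t2[j]
        loss += w * abs(dot_t1 - dot_t2)
    return loss

# A rigid rotation of all normals leaves pairwise dot products unchanged,
# so the loss vanishes; bending one normal relative to its neighbor is penalized.
n1 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
n2_rigid = n1 @ R.T
n2_bent = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])  # second normal bent onto the first
pairs, w = [(0, 1)], [1.0]
loss_rigid = arap_normal_loss(n1, n2_rigid, pairs, w)
loss_bent = arap_normal_loss(n1, n2_bent, pairs, w)
```

Because the loss depends only on relative normal orientations, it constrains local surface rigidity without penalizing global rigid motion.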
Meshes and background. We collected 50 high-quality, animated 3D meshes from Objaverse [9] and Sketchfab [2], all of which are under CC-BY license. The collected meshes exhibit a wide variety of motions, including non-rigid deformation and topological changes. We then place the object inside a cube and randomize the background texture. Texture maps are collected from Poly Haven [1] and are all under CC0 license.

Rendering. We render 240 to 540 frames for each object. The camera trajectories are defined along arcs of 20 degrees, and test viewpoints are defined outside of these arcs to evaluate the performance of novel-view synthesis and to quantify the accuracy of the reconstructed geometry. At each timestamp, the RGB image, ground truth depth, surface normals, and foreground mask are rendered, and the camera intrinsics/extrinsics are saved. Please refer to the supplementary material for additional details.

# 4. Evaluation

# 4.1. Experimental Setup

We extensively evaluate our non-rigid SLAM method on both synthetic and real-world datasets. Previous non-rigid RGB-D SLAM work has primarily focused on qualitative demonstrations using limited datasets, showcasing the early-stage potential of the field. To advance research, we introduce a quantitative evaluation protocol with the new Sim4D dataset. Our evaluation covers camera pose accuracy, as well as the appearance and geometric quality of the reconstructed models. Additionally, we demonstrate real-world performance using a self-captured dataset.

While designed primarily for dynamic scenes, our method is the first to leverage surface Gaussian splatting for both static SLAM and non-rigid RGB-D reconstruction. To further validate our approach, we perform a detailed quantitative component-wise ablation analysis.

Metrics and Datasets For our main non-rigid SLAM evaluation, we evaluate our method on 8 sequences from the Sim4D dataset. We first report ATE RMSE for trajectory evaluation.
To assess SLAM map quality, we report depth rendering error (L1 error) for geometry and PSNR, SSIM, and LPIPS for appearance evaluation. For Sim4D, metrics are calculated from test views (extrapolated positions across different timestamps). The estimated and ground truth trajectories are aligned on the first frame, and test view positions are queried in the ground truth trajectory's coordinate system. Details about the test viewpoints are in the supplementary material. Since SurfelWarp [13] requires explicit foreground segmentation, we collect its results only on pixels with valid reconstruction. For the static SLAM ablation, we report ATE RMSE, rendering performance, and TSDF-fused mesh metrics, following the protocol in [42]. We evaluate our method on the Replica [49] dataset and the TUM RGB-D dataset [50]. To isolate the impact of scene representation from system differences, we replaced MonoGS's representation with 2DGS while keeping all other system configurations identical. For the offline non-rigid reconstruction ablation, we report the average geometry and appearance rendering metrics on subsets of the DeepDeform [5], KillingFusion [46], and iPhone datasets [12], which are used in [55]. Numerical quantities for each sequence are available in the supplementary material. Since [55] primarily focuses on object shape completion, metrics are calculated only within the given segmentation mask. The camera pose is provided by the dataset, and pose optimization is disabled to focus solely on reconstruction performance.

![](images/b7e14a96608d14ab78f26b8d1d9d456ceaaeffbc8e3dbbe91e5f5aa1e65951d6.jpg)
Figure 4. Sim4D dataset. We create a new dataset for 4D reconstruction by rendering animated 3D meshes.

![](images/5b9c1418bcac0f52034350541358dabf53ca5d29e16ab51c43494e1e1cb770fa.jpg)

![](images/e4041a15ca3e5a32c9e4add789cec7ae75a1c5c057ec650e587bc1832e736c56.jpg)

![](images/f729d875140b51281b50d90fbbc7b29d9013664b1a7c9ba30faee641bfdfc0f0.jpg)
We perform 30,000 training iterations, which take approximately 30 minutes.

Baseline Methods For quantitative non-rigid SLAM evaluation, we compare our method with SurfelWarp [13], the only non-rigid RGB-D SLAM method with publicly available code. For component-wise ablation analysis, we compare against MonoGS [29] for static SLAM evaluation and Morpheus [55] for offline reconstruction.

Implementation Details Our SLAM system runs on a desktop equipped with an Intel Core i9-12900K (3.50GHz) processor and a single NVIDIA GeForce RTX 4090 GPU. The camera pose Jacobian for 2DGS, described in Section 3.2, is implemented using a CUDA rasterizer, similar to the other gradients in Gaussian Splatting. For real-world data capture, we used the RealSense D455.

# 4.2. Quantitative Evaluation

Table 1 compares our method with SurfelWarp [13]. Our method outperforms SurfelWarp across all metrics. To analyze this further, Fig. 5 provides qualitative visualizations and trajectory plots for the modular_vehicle sequence. Since SurfelWarp relies on a foreground mask, its reconstruction lacks scene completeness. In contrast, our method reconstructs the entire scene within a joint optimization framework, providing more comprehensive coverage. Additionally, compared to SurfelWarp's back-projection and surfel fusion scheme, our differentiable rendering-based optimization enforces multi-view consistency over time, resulting in superior camera tracking and consistent 3D reconstruction. Our method achieves camera pose estimation at approximately 1.5 fps and completes the final global optimization in 1 minute.

![](images/47f2807d4611af1588e7c4c9b7cb61726e6b6bef21e027bff53bc6eeadbde888.jpg)
Figure 5. Qualitative comparison to SurfelWarp. Left: Rendered image, Middle: Rendered normal map, Right: Estimated camera trajectory.

![](images/3f3b07aeaae9fc4848a1a53ed6604f8110a17cde727ac32bd767e88bc6e8d54f.jpg)

![](images/041fde2097608b6112b33f5e02072559a15814039f982f038f7bf91a16f17b0c.jpg)

![](images/25bfece5b46d86fcb17f4fa0074e7132a3a9a23ff6fe852eb2ac74b973c3e6d1.jpg)

![](images/29b39cb32752f0af61577b4fd7def49303de824e8d683cd4cbbba3a71060bbe3.jpg)

![](images/577afb375de2e85e95923b5191db325336ce95b71b6fa2ce3611650a0fca4951.jpg)

# 4.3. Qualitative Evaluation

Fig. 6 presents qualitative reconstruction results on real-world dynamic scenes. Our method successfully reconstructs dynamic scenes with non-rigid deformations, whereas MonoGS fails to handle such complexities.

![](images/e038e34816cad34f73bb8fb981bcb594c876c557c0b35ecab3bcac9fed02a217.jpg)
Figure 6. Qualitative Results on Real-World Dataset. Our method effectively handles dynamic objects compared to MonoGS.

![](images/eb89c9e7b133ab5f9b71f3f587f81ad7697230f4cd7820a4dcc8b93b5c83d2b9.jpg)

![](images/2a54a9b07064b5b02aea82f017b35e8f6af1e10648838e262c289ef22d5a69d6.jpg)
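For reference, the ATE RMSE trajectory metric reported in the tables below, with trajectories aligned on the first frame as described in Sec. 4.1, can be sketched as follows. This is an illustrative numpy sketch; alignment is simplified here to a first-frame translation, whereas a full implementation would align the complete SE(3) pose of the first frame:

```python
import numpy as np

def ate_rmse(est, gt):
    """ATE RMSE between estimated and ground-truth camera positions, both (N, 3),
    after translating the estimate so the first frames coincide (a simplified
    stand-in for full first-frame SE(3) alignment)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    est_aligned = est - est[0] + gt[0]            # first-frame translation alignment
    err = np.linalg.norm(est_aligned - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = np.array([[5.0, 0.0, 0.0], [6.0, 0.0, 0.0], [7.0, 0.3, 0.0]])  # offset plus drift
ate = ate_rmse(est, gt)   # only the 0.3 drift on the last frame contributes
```

The constant offset between the two trajectories is removed by the alignment, so the metric measures drift rather than the choice of world origin.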
| Method | Category | Metric | curtain | flag | mercedes | modular_vehicle | rhino | shoe_rack | water-effect | wavetoy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SurfelWarp [13] | Trajectory | ATE RMSE [cm] ↓ | 6.10 | 31.9 | 5.21 | 4.21 | 2.81 | 2.16 | 2.60 | 1.45 |
| | Geometry | L1 Depth [cm] ↓ | 49.1 | 50.8 | 5.2 | 10.3 | 44.9 | 4.25 | 46.7 | 47.5 |
| | Appearance | PSNR [dB] ↑ | 15.78 | 11.04 | 25.7 | 19.45 | 16.76 | 26.3 | 17.3 | 16.4 |
| | | SSIM ↑ | 0.468 | 0.343 | 0.779 | 0.362 | 0.188 | 0.795 | 0.325 | 0.364 |
| | | LPIPS ↓ | 0.56 | 0.659 | 0.483 | 0.638 | 0.665 | 0.397 | 0.587 | 0.555 |
| Ours | Trajectory | ATE RMSE [cm] ↓ | 0.25 | 1.00 | 0.18 | 0.31 | 0.25 | 0.18 | 0.29 | 0.32 |
| | Geometry | L1 Depth [cm] ↓ | 0.96 | 3.58 | 0.62 | 1.44 | 1.85 | 0.99 | 3.20 | 3.43 |
| | Appearance | PSNR [dB] ↑ | 28.01 | 21.01 | 32.13 | 30.59 | 24.13 | 31.7 | 27.12 | 27.10 |
| | | SSIM ↑ | 0.787 | 0.601 | 0.894 | 0.801 | 0.742 | 0.901 | 0.795 | 0.794 |
| | | LPIPS ↓ | 0.096 | 0.150 | 0.138 | 0.210 | 0.260 | 0.12 | 0.908 | 0.097 |
Table 1. Non-rigid SLAM Evaluation on Sim4D Dataset.

![](images/f4d7c0256f669d9b49fff00d993145a025b1a05d327ca449defb9a21b9d3e604.jpg)
Figure 7. 3D Reconstruction Result on Replica Office4. Left: MonoGS. Right: Ours.

![](images/bf9403676abeb30d3983b256f1c028c9cb119e8fd1443bc91ede0de2301918d4.jpg)

# 4.4. Ablation Study
| Method | Metric | r0 | r1 | r2 | o0 | o1 | o2 | o3 | o4 | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MonoGS | ATE RMSE [cm] ↓ | 0.44 | 0.32 | 0.31 | 0.44 | 0.52 | 0.23 | 0.17 | 2.25 | 0.59 |
| | Depth L1 [cm] ↓ | 3.00 | 3.47 | 4.66 | 3.10 | 6.08 | 6.15 | 4.77 | 4.94 | 4.52 |
| | Precision [%] ↑ | 39.0 | 28.8 | 28.9 | 39.7 | 15.8 | 28.0 | 32.5 | 25.5 | 29.7 |
| | Recall [%] ↑ | 44.2 | 34.5 | 32.8 | 47.6 | 24.3 | 30.0 | 35.4 | 28.5 | 34.6 |
| | F1 [%] ↑ | 41.5 | 31.4 | 30.7 | 43.3 | 19.1 | 29.0 | 33.9 | 26.9 | 31.9 |
| MonoGS-2D | ATE RMSE [cm] ↓ | 0.42 | 0.43 | 0.35 | 0.19 | 0.19 | 0.22 | 0.27 | 0.80 | 0.36 |
| | Depth L1 [cm] ↓ | 0.45 | 0.28 | 0.57 | 0.37 | 0.59 | 0.85 | 0.62 | 0.63 | 0.54 |
| | Precision [%] ↑ | 97.0 | 97.0 | 97.0 | 97.1 | 97.9 | 95.8 | 94.8 | 83.9 | 95.0 |
| | Recall [%] ↑ | 85.5 | 86.0 | 84.8 | 89.4 | 85.1 | 81.8 | 81.5 | 72.4 | 83.3 |
| | F1 [%] ↑ | 90.9 | 91.3 | 90.5 | 93.1 | 91.1 | 88.2 | 87.6 | 77.7 | 88.8 |
+ +Table 2. Static SLAM Ablation on Replica. + +
| Method | Metric | fr1/desk | fr2/xyz | fr3/office | avg. |
| --- | --- | --- | --- | --- | --- |
| MonoGS | ATE RMSE [cm] ↓ | 1.50 | 1.44 | 1.49 | 1.47 |
| | Depth L1 [cm] ↓ | 6.2 | 13.0 | 13.0 | 10.7 |
| MonoGS-2D | ATE RMSE [cm] ↓ | 1.58 | 1.20 | 1.83 | 1.57 |
| | Depth L1 [cm] ↓ | 3.00 | 2.30 | 4.30 | 3.2 |
+ +Table 3. Static SLAM Ablation on TUM + +
| Method | Metric | KillingFusion | DeepDeform | iPhone |
|---|---|---|---|---|
| Morpheus [55] | Depth L1 [cm] ↓ | 3.2 | 1.9 | 2.4 |
| | PSNR [dB] ↑ | 27.02 | 26.81 | 25.28 |
| | SSIM ↑ | 0.77 | 0.81 | 0.46 |
| | LPIPS ↓ | 0.40 | 0.38 | 0.63 |
| Ours | Depth L1 [cm] ↓ | 4.9 | 1.1 | 0.57 |
| | PSNR [dB] ↑ | 31.13 | 24.15 | 27.54 |
| | SSIM ↑ | 0.93 | 0.90 | 0.79 |
| | LPIPS ↓ | 0.13 | 0.27 | 0.26 |
Table 4. Offline Non-Rigid Reconstruction Ablation: Rendering Error Metrics on Real-world Dataset.

Static SLAM Table 2 provides the camera ATE and 3D reconstruction evaluation results. Our 2DGS-based implementation shows competitive performance, achieves the best result in 6 out of 8 sequences for camera ATE, and gives consistently better results on rendering and 3D reconstruction metrics. The reconstruction is visualized in Fig. 7, which compares the meshes generated by TSDF Fusion for MonoGS and MonoGS-2D. Table 3 provides the camera ATE and rendering metrics evaluation on the TUM dataset. Our method shows on-par camera ATE but increased geometric reconstruction quality.

![](images/4eaf174cd53cbe6153b5425596af1f173c876b91595e7f61a91b19b80d0e3394.jpg)

![](images/7435ac44f54276e7e080c7f425a9c76c2576e89dcf1c4ecd946aa6c932f67380.jpg)

![](images/166a72518799e401350d039c23916c7c5ff1ffce63ad11bb3d67ba4e30b068f9.jpg)

![](images/ecd94bae941ef134e1ca0d0d070fcc111e1d28be1c998ad767d77ac47a988e40.jpg)

Figure 8. Non-rigid Reconstruction Results. Our method flexibly models non-rigid deformations without requiring any shape templates or foreground/background separation.

![](images/f3561cd3f3ada7741e739ae17c4ff87922ecd6bf84f595ddff30e8f6bb6c0db0.jpg)

![](images/bd8d2b80ddb1d2466fa8498890b77fbdfc7adffe51b3537ece7db5ec2c34d61d.jpg)

Offline Non-rigid RGB-D Surface Reconstruction Table 4 reports offline reconstruction results, where camera poses are given. Our 2DGS+MLP deformation model shows competitive rendering performance compared to NeRF-based methods. Note that Gaussian Splatting has the additional advantage of its rendering speed. We further provide qualitative visualizations in Fig. 8.

# 5. Conclusion

We presented the first tracking and mapping method for non-rigid surface reconstruction using Surface Gaussian Splatting.
Our approach integrates a 2DGS + MLP warp-field SLAM framework with camera pose estimation and regularization, leveraging RGB-D input. To support further research, we also introduced a novel dataset for dynamic scene reconstruction with reliable ground truth. Experimental results demonstrate that our method outperforms traditional non-rigid SLAM approaches.

Limitations: Our method has primarily been tested on small-scale scenes; extending it to complex real-world scenarios may require 2D priors like point tracking or optical flow. The current implementation runs at 1.5 fps, limiting real-time use. Developing interactive dynamic scene scanning remains important future work.

# 6. Acknowledgement

Research presented in this paper has been supported by Dyson Technology Ltd. We are very grateful to members of the Dyson Robotics Lab for their advice and insightful discussions.

# References

[1] Poly haven. https://polyhaven.com/textures/fabric. Accessed: 2024-11-01. 6
[2] Sketchfab. https://sketchfab.com/. Accessed: 2024-11-01. 6
[3] Oliver Boyne. Blendersynth. https://ollieboyne.github.io/BlenderSynth, 2023. 6
[4] Aljaž Božić, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, and Matthias Nießner. Neural deformation graphs for globally-consistent non-rigid reconstruction. arXiv preprint arXiv:2012.01451, 2020. 3
[5] Aljaž Božić, Michael Zollhöfer, Christian Theobalt, and Matthias Nießner. Deepdeform: Learning non-rigid rgb-d reconstruction with semi-supervised data. 2020. 3, 7
[6] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black. A naturalistic open source movie for optical flow evaluation. In Proceedings of the European Conference on Computer Vision (ECCV), 2012. 3
[7] Hongrui Cai, Wanquan Feng, Xuetao Feng, Yan Wang, and Juyong Zhang. Neural surface reconstruction of dynamic scenes with monocular rgb-d camera. In Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), 2022. 3
[8] Blender Online Community.
Blender - a 3d modelling and rendering package, 2018. 3, 6
[9] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13142-13153, 2023. 3, 6
[10] Bardienus P Duisterhof, Zhao Mandi, Yunchao Yao, Jia-Wei Liu, Jenny Seidenschwarz, Mike Zheng Shou, Deva Ramanan, Shuran Song, Stan Birchfield, Bowen Wen, and Jeffrey Ichnowski. DeformGS: Scene flow in highly deformable scenes for deformable object manipulation. WAFR, 2024. 3
[11] J. Engel, V. Koltun, and D. Cremers. Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2017. 6
[12] Hang Gao, Ruilong Li, Shubham Tulsiani, Bryan Russell, and Angjoo Kanazawa. Monocular dynamic view synthesis: A reality check. In NeurIPS, 2022. 3, 7
[13] Wei Gao and Russ Tedrake. Surfelwarp: Efficient nonvolumetric single view dynamic reconstruction. In Proceedings of Robotics: Science and Systems (RSS), 2018. 3, 6, 7, 8
[14] R. Garg, A. Roussos, and L. Agapito. Dense variational reconstruction of non-rigid surfaces from monocular video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. 2
[15] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. 2024. 2
[16] Marc Habermann, Weipeng Xu, Michael Zollhofer, Gerard Pons-Moll, and Christian Theobalt. Deepcap: Monocular human performance capture using weak supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5052-5063, 2020. 3
[17] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In Proceedings of SIGGRAPH, 2024.
2, 3
[18] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Sc-gs: Sparse-controlled gaussian splatting for editable dynamic scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 3
[19] Matthias Innmann, Michael Zollhöfer, Matthias Nießner, Christian Theobalt, and Marc Stamminger. VolumeDeform: Real-time Volumetric Non-rigid Reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 3
[20] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013. 3
[21] M. M. Johari, C. Carta, and F. Fleuret. ESLAM: Efficient dense slam system based on hybrid representation of signed distance fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 2
[22] Olaf Kähler, Victor Adrian Prisacariu, and David W. Murray. Real-time large-scale dense 3d reconstruction with loop closure. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 2
[23] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3D mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3907-3916, 2018. 2
[24] Nikhil Keetha, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, and Jonathon Luiten. SplaTAM: Splat, track & map 3d gaussians for dense rgb-d slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 2
[25] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3D gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics (TOG), 2023. 2
[26] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black.
SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1-248:16, 2015. 3
[27] Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. 3DV, 2024. 3, 6
[28] Ruibin Ma, Rui Wang, Yubo Zhang, Stephen Pizer, Sarah K McGill, Julian Rosenman, and Jan-Michael Frahm. Rnnslam: Reconstructing the 3d colon to visualize missing regions during a colonoscopy. Medical image analysis, 72:102100, 2021. 3
[29] Hidenobu Matsuki, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison. Gaussian Splatting SLAM. 2024. 2, 5, 6, 7
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 3 +[36] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2 +[37] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. ICCV, 2021. 3 +[38] Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, and Xiaowei Zhou. Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9054–9063, 2021. 3 +[39] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-NeRF: Neural Radiance Fields for Dynamic Scenes. 3 +[40] Juan J. Gomez Rodriguez, J. M. M Montiel, and Juan D. Tardos. Nr-slam: Non-rigid monocular slam. IEEE Transactions on Robotics (T-RO), 2023. 3 +[41] Martin Rünz and Lourdes Agapito. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017. 3 +[42] Erik Sandström, Yue Li, Luc Van Gool, and Martin R. Oswald. Point-slam: Dense neural point cloud-based slam. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 2, 6 + +[43] Thomas Schöps, Torsten Sattler, and Marc Pollefeys. Bad slam: Bundle adjusted direct rgb-d slam. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2 +[44] Raluca Scona, Mariano Jaimez, Yvan R Petillot, Maurice Fallon, and Daniel Cremers. StaticFusion: Background reconstruction for dense rgb-d slam in dynamic environments. 
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2018. 2 +[45] Jenny Seidenschwarz, Qunjie Zhou, Bardienus Duisterhof, Deva Ramanan, and Laura Leal-Taixe. Dynomo: Online point tracking by dynamic online monocular gaussian reconstruction, 2024. 3 +[46] Miroslava Slavcheva, Maximilian Baust, Daniel Cremers, and Slobodan Ilic. Killingfusion: Non-rigid 3d reconstruction without correspondences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 3, 7 +[47] Miroslava Slavcheva, Maximilian Baust, and Slobodan Ilic. Sobolevfusion: 3d reconstruction of scenes undergoing free non-rigid motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 3 +[48] J. Solà, J. Deray, and D. Atchuthan. A micro Lie theory for state estimation in robotics. arXiv:1812.01537, 2018. 4 +[49] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard Newcombe. The Replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 6 +[50] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A Benchmark for the Evaluation of RGB-D SLAM Systems. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), 2012. 6 +[51] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison. iMAP: Implicit mapping and positioning in real-time. In Proceedings of the International Conference on Computer Vision (ICCV), 2021. +[52] L. Torresani, A. Hertzmann, and C. Chris Bregler. Nonrigid structure-from-motion: Estimating shape and motion with hierarchical priors. 
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 30(5), 2008. 2
[53] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, and Christian Theobalt. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. 2021. 3
[54] Hengyi Wang, Jingwen Wang, and Lourdes Agapito. Coslam: Joint coordinate and sparse parametric encodings for neural real-time slam. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 2
[55] Hengyi Wang, Jingwen Wang, and Lourdes Agapito. Morpheus: Neural dynamic 360° surface reconstruction from
Vox-fusion: Dense tracking and mapping with voxel-based neural implicit representation. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2022. 2 +[62] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. 2024. 3, 5 +[63] Zeyu Yang, Hongye Yang, Zijie Pan, Xiatian Zhu, and Li Zhang. Real-time photorealistic dynamic scene representation and rendering with 4d gaussian splatting. Proceedings of the International Conference on Learning Representations (ICLR), 2024. 3, 5 +[64] Zehao Yu, Torsten Sattler, and Andreas Geiger. Gaussian opacity fields: Efficient adaptive surface reconstruction in unbounded scenes. ACM Transactions on Graphics (TOG), 2024. 2 +[65] Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R. Oswald, and Marc Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 
2 \ No newline at end of file diff --git a/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/images.zip b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fae32e891dc328b65637321971e4a2427b367bee --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91da473b6c22cc6e4f0e9f8b4766c23fafe03e5a1a2efb986dd37cad081650ac +size 718083 diff --git a/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/layout.json b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..15de60f7a43a9bcdb5f1208d532280081f3e0b82 --- /dev/null +++ b/CVPR/2025/4DTAM_ Non-Rigid Tracking and Mapping via Dynamic Surface Gaussians/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0ca1022837410e1b9d5633b6354858d81468b0a5fbc6bdb7d712fc459d2dfb7 +size 448870 diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_content_list.json b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6841cb2426e9ddb28372a1c6ca10b6042ed199c2 --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01b5a365a214a3a78f66122a054b8291349fea743cf6b867cfc76ba17ad4444d +size 78203 diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_model.json 
b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3520f77a1c87131f0dfe8489389d67a6c7b2fdf2 --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c92e10ae2bc13178d6b95508a485b0701d91744dcb30188f8e051e215727ea0 +size 96882 diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_origin.pdf b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..883d40764e7cb9def5916f8080dbc6048d12cc51 --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/a98600ea-d152-4146-b6fe-1c8c3f931210_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b0c9a520f7fe31714b46e7d0b57d613dfc6a02d6cc18195484911a8d9e5c9e0 +size 5452485 diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/full.md b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6686e1f722d90baddfc549d84a98256a5c158dad --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/full.md @@ -0,0 +1,329 @@ +# 4Deform: Neural Surface Deformation for Robust Shape Interpolation + +Lu Sang $^{1,2}$ , Zehranaz Canfes $^{1}$ , Dongliang Cao $^{3}$ , Riccardo Marin $^{1,2}$ , Florian Bernard $^{3}$ , Daniel Cremers $^{1,2}$ + +$^{1}$ Technical University of Munich, $^{2}$ Munich Center of Machine Learning, $^{3}$ University of Bonn + 
+![](images/4e58379c9bb538ba8c88008f3131f5527ee843dd2353633cc1f978d39107cea6.jpg) +Figure 1. 4Deform takes a sparse temporal sequence of point clouds as input and generates realistic intermediate shapes. Starting from just pairs of point clouds and estimated sparse, noisy correspondences (indicated using colors in the point clouds), our method obtains realistic long-range interpolations, even for shapes with changing topology (e.g., the human-object interaction in the top row), and can generalize the interpolation results to real-world data (Kinect point clouds in the second row). Meanwhile, our method can handle non-isometrically deformed shapes (bottom left) as well as partial shapes (bottom right). + +# Abstract + +Generating realistic intermediate shapes between non-rigidly deformed shapes is a challenging task in computer vision, especially with unstructured data (e.g., point clouds) where temporal consistency across frames is lacking, and topologies are changing. Most interpolation methods are designed for structured data (i.e., meshes) and do not apply to real-world point clouds. In contrast, our approach, 4Deform, leverages neural implicit representation (NIR) to enable free topology changing shape deformation. Unlike previous mesh-based methods that learn vertex-based deformation fields, our method learns a continuous velocity field in Euclidean space. Thus, it is suitable for less structured data such as point clouds. Additionally, our method does not require intermediate-shape supervision during training; instead, we incorporate physical and geometrical constraints to regularize the velocity field. We reconstruct intermediate surfaces using a modified level-set equation, directly linking our NIR with the velocity field. 
Experiments show that our method significantly outperforms previous NIR approaches across various scenarios (e.g., noisy, partial, topology-changing, non-isometric shapes) and, for the first time, enables new applications like 4D Kinect sequence upsampling and real-world high-resolution mesh deformation.

# 1. Introduction

Inferring the dynamic 3D world from just a sparse set of discrete observations is among the fundamental goals of computer vision. These observations might come, for example, from video sequences [50], Lidar scans [22, 51], or RGB-D cameras [43, 47]. Even more challenging is the recovery of plausible motion in between such observations. Despite the relevance of this problem, only a few works have addressed it, likely because the solution requires merging concepts of camera-based reconstruction with techniques of 3D shape analysis and interpolation.

In the computer graphics literature, researchers have developed interpolation approaches for mesh representations [1, 21, 48]. These often require a dense, exact point-to-point correspondence between the respective frames [9, 11, 21], which is impractical and rare in real-world applications. Also, such methods rely on a predefined topology and do not support changes (e.g., partiality, the interaction of multiple independent components). Moreover, the recovered interpolation is defined only on the mesh surface, which limits the applicability to other data forms.

The recent advent of neural implicit fields [27, 29, 44, 46] opened the door to more flexible solutions. These methods convert the start and final meshes or point clouds into implicit representations and recover the intermediate frames by either latent-space optimization [32, 38] or deformation modeling [45, 55]. The theoretical advantage comes from the topological flexibility of implicit representations [34]. However, the latent-space-based methods generally do not consider the physical properties of the recovered intermediate shapes and therefore fail to produce reasonable interpolations such as rigid movements [32, 38]. Other methods impose physical constraints during optimization [45, 55], but they either assume ground-truth correspondences [45] or user-defined handle points [55], which makes them fail on complicated or non-isometric deformations. Tab. 1 summarizes the strengths and limitations of the different methods.

MethodsCorr.Est.Non-iso.Part.Topo.Seq.Real.
SMS [9]XXX
Neuromorph [21]XXX
LIMP [13]XXXX
NFGP [55]XXXXX
NISE [38]-XXXX
LipMLP [32]-XXX
[45]XX
Ours

Table 1. Summary of Methods Capability. We list the capabilities of previous mesh- and NIR-based methods. The column Corr. Est. indicates whether the method can estimate the correspondences (✓) or needs ground-truth correspondences as input (X); the remaining columns indicate whether a method can handle non-isometric deformation (Non-iso.), partial shape deformation (Part.), topological changes (Topo.), sequences (Seq.), and real-world data (Real.).

In this work, we address the challenging task of recovering motion between sparse keyframe shapes, relying only on coarse and incomplete correspondences. Our method begins by establishing correspondences through a matching module, followed by representing the shapes with an implicit field and modeling deformation via a velocity field. We introduce two novel loss functions to minimize distortion and stretching, ensuring physically plausible deformations. Our approach encodes shapes in a latent space, enabling both sequence representation and extrapolation of new dynamics.

For the first time, we present results that begin with imprecise correspondences obtained from standard shape matching and registration pipelines, which are not even always spatially aligned with the input points (e.g., real-world data). Our method not only outperforms state-of-the-art approaches, even when they are designed to overfit specific frame pairs, but also demonstrates versatility in real-world applications, including Kinect point cloud interpolation for human-motion interaction sequences, upsampling of real captures with partial supervision, and sequence extrapolation, as shown in Fig. 1. In summary, our contributions are:

- A data-driven framework for 3D shape interpolation that merely requires an estimated (noisy and incomplete) correspondence, and for the first time demonstrates applicability to real data such as noisy and partial point clouds.
- The derivation of two losses to prevent distortion and stretching in implicit representation, promoting desirable physical properties on the interpolated sequence.
- Experimental results confirming state-of-the-art performance on shape interpolation and the applicability to challenging downstream tasks like temporal super-resolution and action extrapolation.

Our code is available on our project page: https://4deform.github.io/.

# 2. Related Work

4D reconstruction from discrete observations involves recovering a continuous deformation space that not only aligns with the input data at specific time steps but also provides plausible intermediate reconstructions. The reconstructed sequences should preserve the input information while filling in any missing details absent in the original data, ultimately creating a complete and coherent representation of the observed sequence. This task involves shape deformation and interpolation.

# 2.1. Surface Deformation

Modeling the 3D dynamic world involves surface deformation, essential in fields like gaming, simulation, and reconstruction.
Physically plausible deformations are often needed. However, the task relies heavily on the representation of the data. + +Mesh Deformation. Mesh deformation typically involves directly adjusting vertex positions within a mesh pair, taking advantage of the inherent neighboring information. This allows mesh-based methods to incorporate physical constraints, such as As-Rigid-As-Possible (ARAP) [7, 48], area-preserving or elasticity loss [2] to constrain the deformation. While mesh deformation is well-studied [10, 11, 19-21], it requires predefined spatial discretization and fixed vertex connectivity to maintain topology. This leads to challenges with topology changes [9] and resolution limitations, as methods must process all vertices together. Consequently, techniques like LIMP [13] downsample meshes to 2,500 vertices, and others [9, 21, 41] re-mesh inputs to 5,000 vertices, with output resolution tied to these constraints. + +Implicit Field Deformation. Unlike mesh representations, implicit surface representations offer several advantages. First, neural implicit fields allow flexible spatial resolution during inference, as discretization isn't pre-fixed. Second, they can represent arbitrary topologies, making them well-suited for complex cases that mesh-based methods struggle with, such as noisy or partial observations. However, directly deforming an implicitly represented surface is challenging because surface properties aren't explicitly stored, limiting direct manipulation of surface points. This area remains + +underexplored, with only a few approaches addressing it. For instance, NFGP [55] introduces a deformation field on the top of an implicit field, constraining it physically using user-defined handle points to match target shapes. 
This pioneering work directly deforms the implicit field; however, its practical applications are limited as it only provides the starting and ending shapes, with processing times for a single shape pair extending over several hours. Other methods, such as those in [37, 38], focus on shape smoothing or morphing without targeting specific shapes. The work [45] introduced a fast, flexible framework for directly deforming the implicit field, capable of generating physically plausible intermediate shapes. However, as an optimization-based approach, it requires training separately for each shape pair and struggles with handling large deformations. + +# 2.2. Shape Interpolation + +Shape interpolation is a subset of shape deformation that involves generating intermediate shapes between a start and target shape. The interpolated sequence should reflect a physically meaningful progression, making it essential that the interpolated shapes are not only geometrically accurate but also serve to complete the narrative implied by the initial and final shapes. Therefore, we emphasize the concept of physically plausible shape interpolation, which should be a guiding principle for tasks in this area. There are two main approaches: generative models and physics-based methods. Generative models [25, 28] rely on extensive datasets to produce shapes, but their outputs can lack realism due to data dependency. Physics-based methods, like [32, 38, 45], optimize over a shape pair to generate intermediates but may have limited applications. Another way is to learn a deformation space of shapes, such as a latent space [26], and hope that generating intermediate shapes is equivalent to interpolating the latent shape [13]. However, such methods suffer from the same problem as generative models: the latent space interpolation may be far from physically plausible. To address these limitations, we propose a lightweight solution that can be trained on datasets of any size. 
Our model adopts an AutoDecoder architecture to maintain generative capability and combines physical and geometric constraints to ensure the generated results are physically plausible. + +# 3. From Implicit Surface Deformation ... + +Implicit representation of the moving surface. We represent the moving surface $S_{t}$ in the volume $\Omega \subset \mathbb{R}^3$ implicitly as the zero-crossing of a time-evolving signed distance function $\phi :\Omega \times [0,T]\to \mathbb{R}$ : + +$$ +\mathcal {S} _ {t} = \left\{\mathbf {x} \in \Omega \mid \phi (\mathbf {x}, t) = 0 \right\}. \tag {1} +$$ + +This implies that + +$$ +\phi \left(\mathcal {S} _ {t}, t\right) = 0 \quad \forall t. \tag {2} +$$ + +It follows that the total time derivative is zero, i.e. + +$$ +\frac {\mathrm {d}}{\mathrm {d} t} \phi (\mathbf {x}, t) = \partial_ {t} \phi + \mathcal {V} ^ {\top} \nabla \phi = 0, \tag {3} +$$ + +where $\mathcal{V} = \frac{d}{dt}\mathbf{x}$ denotes the velocity of the moving surface. Eq. (3) is known as the level-set equation [14, 15]. It tells us how to move the implicitly represented surface $S_{t}$ along the vector field $\mathcal{V}$ by deforming the level-set function $\phi$ . Since over time, the level-set function will generally lose its property of being a signed distance function, we add an Eikonal regularizer with a weight $\lambda_{l}$ to obtain the modified level-set equation [45], i.e. + +$$ +\partial_ {t} \phi + \mathcal {V} ^ {\top} \nabla \phi = - \lambda_ {l} \phi \mathcal {R} (\mathbf {x}, t) \quad \text {in } \Omega \times \mathcal {I}, \tag {4} +$$ + +where $\mathcal{R}(\mathbf{x},t) = -\langle (\nabla \mathcal{V})\frac{\nabla\phi}{\|\nabla\phi\|},\frac{\nabla\phi}{\|\nabla\phi\|}\rangle$ . This equation combines the level-set equation and the Eikonal constraint, freeing the level-set approach from the reinitialization process [6, 23, 45]. + +Spatial Smoothness and Volume Preservation.
To make sure the velocity field models physical movement, we can impose regularizers on it. Two popular regularizers are spatial smoothness $\mathcal{L}_s$ [18, 45] and volume preservation $\mathcal{L}_v$ [13, 20]: + +$$ +\mathcal {L} _ {s} = \int_ {\Omega} \| (- \alpha \Delta + \gamma \mathbf {I}) \mathcal {V} (\mathbf {x}, t) \| _ {l ^ {2}} d \mathbf {x}, \tag {5} +$$ + +$$ +\mathcal {L} _ {v} = \int_ {\Omega} | \nabla \cdot \mathcal {V} (\mathbf {x}, t) | \mathrm {d} \mathbf {x}. +$$ + +# 4. ... to Neural Implicit Surface Deformation + +Previous implicit or point cloud deformation methods either rely on ground truth correspondences, struggle with large deformations [45], or are limited to shape pairs [32, 38, 45], restricting their applicability. To overcome these limitations, we propose a method that first incorporates a correspondence block to obtain sparse correspondences, then handles larger deformations by establishing stronger physical constraints, and extends to temporal sequences of point clouds via an AutoDecoder architecture [26]. Given a point cloud sequence $\{\mathcal{P}_0,\mathcal{P}_1,\ldots ,\mathcal{P}_n,\ldots \}$ , we assign an initialized trainable latent vector to each point cloud, $\{\mathbf{z}_0,\mathbf{z}_1,\dots ,\mathbf{z}_n,\dots \}$ , where $\mathcal{P}_k = \{\mathbf{x}_i^k\}_{i}\subset \mathbb{R}^3$ and $\mathbf{z}_i\in \mathbb{R}^m$ . We pair the inputs as a source point cloud and a target point cloud (for convenience we label them as $\mathcal{P}_0$ and $\mathcal{P}_1$ ). We aim to generate physically plausible intermediate stages of all training pairs accordingly.
To this end, we propose to model the 4D movements using a time-varying implicit neural field of the form: + +$$ +\mathcal {S} _ {t} = \left\{\mathbf {x} \mid \phi_ {\mathbf {z}} (\mathbf {x}, t) = 0 \right\}, \quad \text {for } \mathbf {z} := \mathbf {z} _ {0} \oplus \mathbf {z} _ {1}. \tag {6} +$$ + +![](images/1ab4a18791554a4cb1f488441e0b91ba538cdc1284d92fc41478631508928812.jpg) +Figure 2. Pipeline of 4Deform: Given a temporal sequence of inputs, we initialize a latent vector for each point cloud. The network then takes pairs of point clouds $\mathcal{P}_0$ and $\mathcal{P}_1$ (with sparse correspondences), together with the concatenated latent vectors $\mathbf{z}_0$ and $\mathbf{z}_1$ , as input. At training time, we jointly optimize two neural fields: a time-varying implicit representation (Implicit Net $\phi$ ) and a velocity field (Velocity Net $\mathcal{V}$ ) with the proposed geometric and physical constraint losses. Conditioned on a time stamp $t$ , we instantly obtain a continuous time-varying signed distance function (SDF) and an offset of the input toward the target (the velocity field). + +Each shape $S_{t}$ at time $t$ is encoded by the zero-crossing of the implicit function $\phi$ . In particular, $S_{0}$ and $S_{1}$ should coincide with $\mathcal{P}_0$ and $\mathcal{P}_1$ . + +# 4.1. Correspondence Block + +Instead of relying on ground-truth correspondences, which are difficult to obtain in real-world settings, our method obtains correspondences with the state-of-the-art unsupervised non-rigid 3D shape matching method [8]. This method is based on the functional map framework [39] and follow-up learning-based approaches [17, 42]. The key ingredient of the functional map framework is that, instead of directly finding correspondences in the spatial domain, it encodes the correspondences in the compact spectral domain and is thus robust to large deformations [8]. More details about functional maps can be found in the lecture notes [40].
It is worth noting that our method is agnostic to the choice of shape-matching method. For instance, for specific types of input shapes (e.g. humans), specialized registration methods can also be utilized to obtain more accurate correspondences [3, 35, 36]. + +# 4.2. Implicit and Velocity Fields + +To ensure physically plausible intermediate stages, we model the deformation from the source point cloud to the target point cloud by tracking the point cloud path using the velocity field $\nu_{\mathbf{z}}(\mathbf{x},t)\in \mathbb{R}^3$ . We adopt the velocity field for two reasons: (i) The velocity field allows us to control the generated deformation, as it is easy to add physics-based constraints directly on the velocity to force the intermediate movement to follow certain physical laws. (ii) There is a link from the velocity field to the implicit field, as the implicit field can be seen as a macroscopic field: applying the material derivative yields the natural relationship between the velocity field $\nu_{\mathbf{z}}(\mathbf{x},t)$ and the implicit field $\phi_{\mathbf{z}}(\mathbf{x},t)$ . For simplicity, we omit the latent vector $\mathbf{z}$ in the following equations. + +We follow [45] and use the modified level-set equation to link the velocity field and the implicit field via the level-set loss + +$$ +\mathcal {L} _ {i} = \int_ {\Omega} \| \partial_ {t} \phi + \mathcal {V} \cdot \nabla \phi + \lambda_ {l} \phi \mathcal {R} (\mathbf {x}, t) \| _ {l ^ {2}} d \mathbf {x}. \tag {7} +$$ + +Since the surface normal $\mathbf{n}$ coincides with the (normalized) implicit function gradient $\nabla \phi$ , this loss offers a geometric constraint on the Velocity Net, and, as the points are moved by the Velocity Net, Eq. (7) also acts as a physical constraint from the Velocity Net on the Implicit Net, as shown in Fig. 2.
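The coupling expressed by the level-set equation can be sanity-checked numerically on a toy case. The sketch below (plain Python, illustrative only; the constant velocity `c`, the 1-D SDF `phi`, and the helper `levelset_residual` are our own hypothetical names, not the paper's code) verifies with finite differences that a translating 1-D signed distance function makes the residual $\partial_t\phi + \mathcal{V}^\top\nabla\phi$ vanish; for this rigid translation $\nabla\mathcal{V}=0$, so the Eikonal correction term $\lambda_l\phi\mathcal{R}$ is zero as well.

```python
# Toy check of the level-set coupling: a signed distance function
# phi(x, t) = x - c*t, whose zero crossing translates with constant
# velocity c, satisfies  d_t phi + V * d_x phi = 0.
# All names here are illustrative, not taken from the authors' code.

c = 0.7  # assumed constant surface velocity


def phi(x, t):
    """1-D SDF of an interface located at x = c * t."""
    return x - c * t


def levelset_residual(x, t, h=1e-5):
    """Central-difference estimate of d_t phi + V * d_x phi."""
    d_t = (phi(x, t + h) - phi(x, t - h)) / (2 * h)  # time derivative
    d_x = (phi(x + h, t) - phi(x - h, t)) / (2 * h)  # spatial gradient
    return d_t + c * d_x


print(abs(levelset_residual(0.3, 0.5)))  # ~0 up to floating-point error
```

In the method itself, the same residual is penalized over the whole domain and time interval (the loss $\mathcal{L}_i$), with automatic differentiation supplying the derivatives of the Implicit Net instead of finite differences.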
We force the Velocity Net to move points with known correspondences to their target input positions by + +$$ +\mathcal {L} _ {m} = \int_ {\Omega^ {*}} \left\| \mathbf {x} ^ {0} + \int_ {0} ^ {1} \mathcal {V} (\mathbf {x}, \tau) \mathrm {d} \tau - \mathbf {x} ^ {1} \right\| _ {l ^ {2}} \mathrm {d} \mathbf {x}, \tag {8} +$$ + +where $\Omega^{*}$ only contains points with known correspondences. + +# 4.3. Physical Deformation Loss + +A key challenge in deforming implicit neural fields with external velocity fields is ensuring physically realistic movement, as commonly used constraints like as-rigid-as-possible (ARAP) [48] are difficult to enforce without a connectivity structure. To address this, we introduce new physical deformation losses on both the implicit and velocity fields to better control the movement. These losses do not require connectivity information, so they can be enforced on unordered inputs and allow arbitrary input resolution. + +Distortion Loss. In the Eulerian perspective of continuum mechanics, the rate of deformation tensor provides a measure of how a fluid or solid material deforms over time from a fixed reference point in space, excluding rigid body rotations. It measures the rate of stretching, compression, and shear that a material element undergoes as it moves through the flow field [49]. The rate of deformation tensor is defined by + +$$ +\mathbf {D} = \frac {1}{2} \left(\nabla \mathcal {V} + (\nabla \mathcal {V}) ^ {\top}\right). \tag {9} +$$ + +The distortion of a particle moved under the velocity $\mathcal{V}$ can be described by the deviatoric form + +$$ +\mathcal {L} _ {d} = \int_ {\Omega} \left\| \frac {1}{6} \operatorname {T r} (\mathbf {D}) ^ {2} - \frac {1}{2} \operatorname {T r} (\mathbf {D} \cdot \mathbf {D}) \right\| _ {F} \mathrm {d} \mathbf {x}. \tag {10} +$$ + +Eq. (10) is the complement of the volumetric change: it removes the volumetric part, leaving behind the deviatoric (distortional) component.
It imposes a physical constraint on both networks during training. + +Stretching Loss. By tracking the point cloud movement with a velocity field, our approach can establish constraints that control surface stretching along the deformation. We follow the idea of the strain tensor from continuum mechanics [49]. Consider a deformation over an infinitesimal time $\Delta t$ , in which the point $\mathbf{x}$ is displaced to $\mathbf{x}'$ such that + +$$ +\mathbf {x} ^ {\prime} = \mathbf {x} + \mathcal {V} (\mathbf {x}, t) \Delta t. \tag {11} +$$ + +Consider an infinitesimal neighbourhood $\mathrm{d}\mathbf{x}'$ of $\mathbf{x}'$ ; its length is given by $(\mathrm{d}s')^2 = \mathrm{d}\mathbf{x}'^\top \mathrm{d}\mathbf{x}'$ . Similarly, the length of the infinitesimal neighbourhood of $\mathbf{x}$ is given by $(\mathrm{d}s)^2 = \mathrm{d}\mathbf{x}^\top \mathrm{d}\mathbf{x}$ . Together with Eq. (11), the stretched length after deformation is given by + +$$ +\left(\mathrm {d} s ^ {\prime}\right) ^ {2} - \left(\mathrm {d} s\right) ^ {2} = \mathrm {d} \mathbf {x} ^ {\top} \left(\mathbf {F} ^ {\top} \mathbf {F} - \mathbf {I}\right) \mathrm {d} \mathbf {x}, \tag {12} +$$ + +where $\mathbf{F} = \partial \mathbf{x}' / \partial \mathbf{x} = \mathbf{I} + \nabla \mathcal{V}$ . However, it is not the stretch of the full neighborhood patch of a surface point but the stretching on the tangent plane of a point that must be prevented for a deformation to be physically realistic [55]. We project $\mathrm{d}\mathbf{x}$ in Eq. (12) onto the tangent space using the projection operator $\mathbf{P}(\mathbf{x}) = \mathbf{I} - \mathbf{n}(\mathbf{x})\mathbf{n}(\mathbf{x})^\top$ to compute the stretching on the tangent plane, where $\mathbf{n}(\mathbf{x})$ is the normal vector at point $\mathbf{x}$ .
Thus, the stretching on the tangent plane is + +$$ +\left(\mathrm {d} l\right) ^ {2} = \mathrm {d} \mathbf {x} ^ {\top} \mathbf {P} ^ {\top} (\mathbf {x}) \left(\mathbf {F} ^ {\top} \mathbf {F} - \mathbf {I}\right) \mathbf {P} (\mathbf {x}) \mathrm {d} \mathbf {x}. \tag {13} +$$ + +Finally, thanks to the nice properties of the implicit field, we have $\mathbf{n}(\mathbf{x}) = \frac{\nabla\phi(\mathbf{x},t)}{||\nabla\phi(\mathbf{x},t)||}$ . Therefore, for any $\mathrm{d}\mathbf{x}$ , we constrain the matrix Frobenius norm as + +$$ +\mathcal {L} _ {s t} = \int_ {\Omega} \left\| \mathbf {P} ^ {\top} \left(\nabla \mathcal {V} ^ {\top} \nabla \mathcal {V} + \nabla \mathcal {V} + \nabla \mathcal {V} ^ {\top}\right) \mathbf {P} \right\| _ {F} \mathrm {d} \mathbf {x}, \tag {14} +$$ + +where $\mathbf{P} = \mathbf{I} - \nabla \phi \nabla \phi^{\top}$ . + +# 4.4. Training and Inference + +Training. Given a temporal sequence of inputs $\{\mathcal{P}_k\}_k$ , which may be point clouds or meshes, we start by using our correspondence block to obtain the correspondences of each training pair. Importantly, our method does not require full correspondence for every training point; it only requires a subsample of points. During training, we initialize a trainable latent vector for each shape. We concatenate the + +latent vectors of each training pair and optimize them jointly with our Implicit Net $\phi$ and Velocity Net $\mathcal{V}$ . We sample $T + 1$ discrete time steps uniformly, $t \in \{0,1 / T,\dots ,1\}$ , to compute the loss at each time step. The total loss is + +$$ +\mathcal {L} = \lambda_ {i} \mathcal {L} _ {i} + \lambda_ {s} \mathcal {L} _ {s} + \lambda_ {v} \mathcal {L} _ {v} + \lambda_ {d} \mathcal {L} _ {d} + \lambda_ {s t} \mathcal {L} _ {s t} + \lambda_ {m} \mathcal {L} _ {m}, \tag {15} +$$ + +where $\lambda_{i},\lambda_{s},\lambda_{v},\lambda_{d},\lambda_{st}$ and $\lambda_{m}$ are weights for each loss term. For further details about implementation and training, we refer to the supplementary material. + +Inference.
During inference, we feed the optimized latent vector of each trained pair into the Implicit Net $\phi$ to generate intermediate shapes at different discrete time steps $t$ . Given an initial point cloud, we pass it, together with the optimized latent vector, to the Velocity Net $\mathcal{V}$ , producing a sequence of deformed points at each time step. + +# 5. Experimental Results + +Datasets. We validate our method on data from a number of different shape categories. We consider registered human shapes from FAUST [5], non-isometric four-legged animals from SMAL [56], and partial shapes from SHREC16 [12]. The input shapes are roughly aligned, and we train our correspondence block on each dataset individually [8]. These datasets do not include temporal sequences, so we train on all possible pair combinations. We also evaluate our method on real-world scans, using motion sequence scans of clothed humans from 4D-DRESS [54] and noisy Kinect acquisitions of human-object interactions from BeHave [4]. For both cases, correspondences are obtained by template shape registration. Notably, the obtained correspondences are rough estimates, often imprecise, and thus do not guarantee continuous bijective maps between shapes, e.g., due to garments, occlusions, or noise in the acquisitions. + +**Baseline methods.** We compare our method against recent NIR-based methods that solve similar problems. LipMLP [32] encourages smoothness in the pair-wise interpolation by relying on Lipschitz constraints; NISE [38], similar to us, relies on the level-set equation and uses predefined paths to interpolate between neural implicit surfaces. NFGP [55] relies on a user-defined set of points as handles, together with rotation and translation parameters. We also consider [45] as the most relevant baseline method. Nevertheless, all these methods are tailored for shape pairs.
Therefore, to evaluate the performance of our method in the context of shape sequences, we compare our method to LIMP [13], a mesh-based approach that constructs a latent space and preserves geometric metrics during interpolations. + +Training time. We train all methods on a commodity GeForce GTX TITAN X GPU. Our method needs approximately 8 to 10 minutes per pair, i.e., for a sequence with 10 pairs, our method takes roughly 1.5 hours. The work [45] and LipMLP [32] require similar runtimes to our method + +when training one pair. NISE [38] takes around 2 hours for each pair. LIMP [13] first needs to downsample meshes to 2,500 vertices, and its training takes around 30 minutes per pair. NFGP [55] requires training separately for each step, and each step takes 8 to 10 hours, which brings the training time up to 40 hours for recovering 5 intermediate shapes. + +Evaluation Protocol. We compare three main settings. First, we train on a single registered shape pair (S) and test the interpolation quality (Pairs S/S). Second, we consider the case of training on registered shape sequences and test the interpolation quality (Seq. S/S), for which we rely on similar metrics. Finally, we consider training on registered shape pairs but testing on real point clouds (R) (Pairs S/R). As metrics, we consider the standard deviation of surface area $\mathrm{SA}\sigma = \sqrt{\sum_{t=0}^{N}(A_t - \bar{A})^2 / N}$ , where $A_t$ is the surface area of the mesh at time $t$ and $\bar{A}$ is the average surface area over the interpolated meshes, which is expected to be close to 0 in the isometric cases. When we have access to ground truth for the intermediate frames, we also report the Chamfer Distance (CD) and the Hausdorff Distance (HD) errors of the predictions. For the Pairs S/R setting, we also report the pointwise root-mean-square error (P-RMSE), which indicates the Euclidean distance of deformed mesh vertices to the ground truth mesh vertices.
When a method is not applicable to a certain setting, we mark it with a cross ($\times$). + +# 5.1. Isometric Shape Interpolation + +Quantitative comparison. We show a quantitative comparison of isometric human shapes from 4D-Dress for the three settings in Table 2. We chose this dataset since the frequency of the scans gives us ground truth for the intermediate frames, as well as access to real scans. Despite competitors being tailored for the Pairs S/S setting, our method performs on par, with better area preservation. Our method also supports sequences, contrary to the majority of previous methods. LIMP fails to generate intermediate shapes that are faithful to the ground truth. We believe that LIMP's poor performance is caused by its strong data demand, which is not fulfilled here. To demonstrate generalization, we also show results on the interpolation of isometric SMAL animals in Table 3. The ground-truth evaluation frames are obtained by interpolating the SMAL pose parameters. Our method outperforms the competitors on all the metrics. We report all qualitative results in the supplementary material. + +Large Deformations. A major advantage of our method is that it supports large deformations. We show an example of this between two FAUST shapes in Figure 3. Although isometric, their drastic change of limbs constitutes a challenge for purely extrinsic methods, causing evident artifacts. Our method preserves areas an order of magnitude better than mesh-based methods (LIMP). + +![](images/dea8d1189b41094d90c5a1d2483f37e0e56b24d77d7c1f663302c304522185ca.jpg) +Figure 3. Large Deformations. 4Deform handles large deformations better than previous works, providing one order of magnitude less area distortion, even compared to mesh-based methods (LIMP [13]). In the top row, we visualize the error in the estimated input correspondence. + +# 5.2. Non-isometry and Partiality + +Non-isometry.
A significant challenge is modeling interpolation when the shape metric changes drastically between frames. This significantly hampers the chances of obtaining reliable correspondences and control over the full-shape geometry. In Fig. 4, we show an interpolation between a cougar and a cow from SMAL. As can be seen at the top of the figure, the estimated correspondence is quite noisy. Nevertheless, our method is the only one that shows consistency also in the thin geometry (e.g., legs). We argue this is a direct contribution of our losses Eq. (10) and Eq. (14). Due to the lack of ground truth intermediate shapes, we only report SA $\sigma$ for each method's results. + +Partiality. Finally, an extremely challenging case is partially observed shapes. Here, an ideal interpolation would provide a smooth interpolation of the overlapping part, while keeping the non-overlapping part as consistent and rigid as possible. We provide an example of a cat from SHREC16 in Fig. 5. Despite the highly imprecise correspondence, we see that our method is the one with the best preservation of
| Method | Pairs S/S CD (×10⁴) ↓ | Pairs S/S HD (×10²) ↓ | Pairs S/S SAσ (×10) ↓ | Seq. S/S CD (×10⁴) ↓ | Seq. S/S HD (×10²) ↓ | Seq. S/S SAσ (×10) ↓ | Pairs S/R CD (×10⁴) ↓ | Pairs S/R HD (×10²) ↓ | Pairs S/R P-RMSE (×10) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LIMP [13] | 21.980 | 3.175 | 0.507 | 136.787 | 15.974 | 0.155 | × | × | × |
| NFGP [55] | 0.272 | 0.025 | 0.075 | × | × | × | × | × | × |
| LipMLP [32] | 14.99 | 2.125 | 1.252 | × | × | × | × | × | × |
| NISE [38] | 6.588 | 2.167 | 0.321 | × | × | × | × | × | × |
| [45] | 0.279 | 0.047 | 0.023 | × | × | × | 0.548 | 0.083 | 0.024 |
| Ours | 0.269 | 0.046 | 0.018 | 0.327 | 0.038 | 0.063 | 0.390 | 0.101 | 0.014 |
+ +

Table 2. Human dataset isometric deformation metrics. We evaluate our method on both pair inputs (Pairs S/S) and sequence inputs (Seq. S/S), and compare against previous methods that work only for temporal sequences or only for pairs. Additionally, as our method can directly operate on real-world data it was not trained on, we also report the error w.r.t. real-world results (see Sec. 5.4) in the Pairs S/R columns. For methods that cannot directly deform real-world data, the Pairs S/R columns are marked with $\times$ . + +
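For reference, the metrics reported in these tables can be sketched as follows. This is an illustrative NumPy re-implementation of the surface-area standard deviation $\mathrm{SA}\sigma$ and a symmetric Chamfer distance under assumed conventions (brute-force nearest neighbours, squared distances); it is not the authors' evaluation code, and the exact normalisation may differ.

```python
import numpy as np

# Illustrative sketches of two evaluation metrics, not the authors' code.


def sa_sigma(areas):
    """SA_sigma = sqrt(sum_{t=0}^{N} (A_t - A_bar)^2 / N) over N+1 frames."""
    a = np.asarray(areas, dtype=float)
    n = len(a) - 1  # N in the paper's formula
    return float(np.sqrt(np.sum((a - a.mean()) ** 2) / n))


def chamfer(p, q):
    """Symmetric Chamfer distance between two (n, 3) point sets (brute force)."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # pairwise sq. dist.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())


areas = [1.00, 1.02, 0.98, 1.00]  # toy per-frame surface areas
print(sa_sigma(areas))            # 0 would mean perfect area preservation
pts = np.random.default_rng(0).random((64, 3))
print(chamfer(pts, pts))          # 0.0 for identical point sets
```

A perfectly isometric interpolation keeps every $A_t$ equal, driving $\mathrm{SA}\sigma$ to zero, which is why the tables report it alongside the reconstruction errors.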
| Pairs S/S | CD (×10³) ↓ | HD (×10²) ↓ | SAσ (×10) ↓ | P-RMSE (×10) ↓ |
| --- | --- | --- | --- | --- |
| NFGP [55] | 0.770 | 0.906 | 0.217 | |
| LipMLP [32] | 68.452 | 43.327 | 1.192 | |
| NISE [38] | 7.223 | 1.237 | 0.771 | |
| [45] | 0.173 | 0.626 | 0.064 | 0.081 |
| Ours | 0.137 | 0.221 | 0.062 | 0.061 |
+ +

Table 3. Animal dataset isometric deformation metrics. We show that our method achieves the best results on the SMAL dataset. + +the absent area and, overall, the smallest area distortion. For both non-isometric and partial shapes, our method provides more realistic deformations than [45]. + +![](images/91a1c561feb1a4985e152d9e9f154700a883774da36a13a777cba1768d91ef42.jpg) +Figure 4. Non-isometric deformation. We deform two different animals from SMAL, relying on a noisy correspondence (top row). Compared to the previous methods, our method results in plausible deformations, while preserving thin geometric details (e.g., legs). + +# 5.3. Ablation Study + +To demonstrate the effectiveness of our distortion loss $\mathcal{L}_d$ and stretching loss $\mathcal{L}_{st}$ , we perform a quantitative comparison on the 4D-Dress dataset. We report the quantitative evaluation in Tab. 4. We highlight that although the losses have a minor influence in the Pairs S/S case, they are useful in providing + +![](images/d0df95e477a542a50b9c2a67f3d8305aa928d6e0e52918363044fea6fb073a2c.jpg) +Figure 5. Partial shape deformation. We consider the case in which one of the input shapes is only partially available while having noisy correspondences (correspondence error visualized in the top row). Other methods often collapse the unseen part or create unreasonable stretches. Similar effects are observed when we remove some of our novel losses. Our method provides plausible interpolations, both for the visible and missing parts. + +consistency when the network has to capture relations on a wider set of shapes (Seq. S/S); further, they show robustness in the presence of real noise (Pairs S/R). This follows our intuition that such losses serve as regularization, especially in the more challenging cases. This is further highlighted by the qualitative results on partial shapes in Fig. 5. + +# 5.4.
Applications + +Our method enables a series of new applications, such as the upsampling of real captures and the handling of noisy point clouds. We also show that the learned networks are capable + +
| Method | Pairs S/S CD (×10⁴) ↓ | Pairs S/S HD (×10²) ↓ | Pairs S/S SAσ (×10) ↓ | Seq. S/S CD (×10⁴) ↓ | Seq. S/S HD (×10²) ↓ | Seq. S/S SAσ (×10) ↓ | Pairs S/R CD (×10⁴) ↓ | Pairs S/R HD (×10²) ↓ | Pairs S/R P-RMSE (×10) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| w/o $\mathcal{L}_d$ | 0.265 | 0.041 | 0.115 | 0.348 | 0.101 | 0.065 | 0.490 | 0.233 | 0.023 |
| w/o $\mathcal{L}_{st}$ | 0.284 | 0.045 | 0.018 | 0.363 | 0.091 | 0.062 | 0.620 | 0.126 | 0.023 |
| w/o both | 0.279 | 0.047 | 0.023 | 0.390 | 0.084 | 0.065 | 0.548 | 0.083 | 0.024 |
| Ours | 0.269 | 0.046 | 0.018 | 0.327 | 0.038 | 0.063 | 0.390 | 0.101 | 0.014 |
+ +

Table 4. Loss ablation. We ablate our method in the pair setting (Pairs S/S) and the temporal sequence setting (Seq. S/S) and report the quantitative measurements. We also report the error on the real-world mesh CD and HD in the Pairs S/R columns. + +![](images/b9b8e04f8f2c8ec79999a6b95c73fe5f96a5c7a801d19e4794322593c4c77f31.jpg) +Figure 6. Upsampling and extrapolation. The top shows an example of the BeHave sequence. Starting from a sparse set of keyframes (1 FPS, colored point clouds), our method lets us interpolate the registration (first row), as well as the real Kinect point clouds (second row) between keyframes at an arbitrary continuous resolution. On the bottom, we show extrapolation on a 4D-Dress sequence. With just a few keyframes, we can deform the real point cloud even beyond the final frame, obtaining an estimate of a plausible continuation of the action. + +of generalizing and extrapolating in the time domain. + +Temporal Upsampling. In many real-world datasets, human movements are recorded by sensors such as RGBD cameras, which provide real-world point clouds of humans and possible object interactions. However, due to technical constraints or device setup differences, human motion datasets have different frame rates for recording the movement [4, 52, 54]. This is the case for BeHave [4], where the input Kinect sequences are carefully annotated with significant manual intervention to align SMPL and an object template to the input, resulting in a frame rate of only 1 FPS. We show that our interpolation method can efficiently upsample not only the annotated SMPL data, but also the noisy real-world Kinect point clouds without additional effort. We highlight that our Velocity Net easily generalizes to untrained real-world point clouds to obtain upsampled sequences. We show the upsampling results in Fig. 6. + +Real-World Data Deformation. Here, we show another application scenario for high-quality meshes.
For high-quality data with tens of thousands of vertices, defining a dense, precise point-to-point correspondence is demanding and often infeasible. Therefore, it is common practice to fit a template to such high-quality input data and use it as rough guidance. In Fig. 6, we show how, from just a few frames equipped with a rough SMPL alignment, we can manipulate + +a 40k-vertex real scan, maintaining its structure along the whole sequence. We refer to Tab. 2 for an evaluation of our method's performance on both the real-world and the SMPL interpolation. + +Extrapolation for Movement Generation. Our method lets us obtain dense intermediate frames at arbitrary temporal resolution and allows us to deform data that lies somewhat far from the sparse input correspondences. Moreover, the learned physical deformation allows us to generate movements even beyond the considered sequence. Our velocity field can extrapolate outside of the training time domain (0 to 1) while remaining physically plausible, as shown in Fig. 6. + +# 6. Discussion and Future Work + +While our method integrates physical plausibility, certain types of deformations, such as mechanical joints and fluid dynamics, may not yet be fully captured by our model. Future work could incorporate additional physical constraints to address these complexities. Additionally, some applications require separate deformation estimation, as in the case of a human with loose clothing, where the deformations of the body and the clothing do not align. We plan to extend our work to handle these cases in future developments. + +# Acknowledgements + +This project is supported by ERC Advanced Grant "SIMULACRON" (agreement #884679), GNI Project "AI4Twinning", and DFG project CR 250/26-1 "4D YouTube". We also would like to thank Weirong Chen for helpful insights and proofreading. + +# References + +[1] Seung-Yeob Baek, Jeonghun Lim, and Kunwoo Lee. Isometric shape interpolation. Computers & Graphics, 46:257-263, 2015.
1 +[2] Lennart Bastian, Yizheng Xie, Nassir Navab, and Zorah Lahner. Hybrid functional maps for crease-aware nonisometric shape matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3313-3323, 2024. 2 +[3] Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. Loopreg: Self-supervised learning of implicit surface correspondences, pose and shape for 3d human mesh registration. Advances in Neural Information Processing Systems, 33:12909-12922, 2020. 4 +[4] Bharat Lal Bhatnagar, Xianghui Xie, Ilya Petrov, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. Behave: Dataset and method for tracking human object interactions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. 5, 8, 2, 4 +[5] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In CVPR, 2014. 5 +[6] Dieter Bothe, Mathis Fricke, and Kohei Soga. Mathematical analysis of modified level-set equations. Mathematische Annalen, pages 1-41, 2024. 3 +[7] Mario Botsch, Mark Pauly, Markus H Gross, and Leif Kobbelt. Primo: coupled prisms for intuitive surface modeling. In Symposium on Geometry Processing, 2006. 2 +[8] Dongliang Cao, Paul Roetzer, and Florian Bernard. Unsupervised learning of robust spectral shape matching. ACM Transactions on Graphics, 42(4):1-15, 2023. 4, 5 +[9] Dongliang Cao, Marvin Eisenberger, Nafie El Amrani, Daniel Cremers, and Florian Bernard. Spectral meets spatial: Harmonising 3d shape matching and interpolation. In CVPR, 2024. 1, 2 +[10] Dongliang Cao, Paul Roetzer, and Florian Bernard. Revisiting map relations for unsupervised non-rigid shape matching. In International Conference on 3D Vision (3DV), 2024. 2 +[11] Wei Cao, Chang Luo, Biao Zhang, Matthias Nießner, and Jiapeng Tang. Motion2vecsets: 4d latent vector set diffusion for non-rigid shape reconstruction and tracking. In CVPR, 2024. 
1, 2 +[12] Luca Cosmo, Emanuele Rodola, Michael M Bronstein, Andrea Torsello, Daniel Cremers, Y Sahillioğlu, et al. Shrec'16: Partial matching of deformable shapes. In Eurographics Workshop on 3D Object Retrieval, EG 3DOR, 2016. 5 +[13] Luca Cosmo, Antonio Norelli, Oshri Halimi, Ron Kimmel, and Emanuele Rodola. LIMP: Learning Latent Shape Representations with Metric Preservation Priors, pages 19-35. Springer International Publishing, 2020. 2, 3, 5, 6, 7, 1 +[14] A. Dervieux and F. Thomasset. A finite element method for the simulation of Raleigh-Taylor instability. Springer Lect. Notes in Math., 771:145-158, 1979. 3 +[15] A. Dervieux and F. Thomasset. Multifluid incompressible flows by a finite element method. Lecture Notes in Physics, 11:158-163, 1981. 3 +[16] George Ellwood Dieter and David Bacon. Mechanical metallurgy. McGraw-hill New York, 1976. 1 +[17] Nicolas Donati, Abhishek Sharma, and Maks Ovsjanikov. Deep geometric functional maps: Robust feature learning for shape correspondence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8592-8601, 2020. 4 +[18] Paul Dupuis, Ulf Grenander, and Michael I Miller. Variational problems on flows of diffeomorphisms for image matching. Quarterly of applied mathematics, pages 587-600, 1998. 3 +[19] M. Eisenberger and D. Cremers. Hamiltonian dynamics for real-world shape interpolation. In ECCV, 2020. 2 +[20] Marvin Eisenberger, Zorah Lähner, and Daniel Cremers. Divergence-free shape interpolation and correspondence. arXiv preprint arXiv:1806.10417, 2018. 3 +[21] Marvin Eisenberger, David Novotny, Gael Kerchenbaum, Patrick Labatut, Natalia Neverova, Daniel Cremers, and Andrea Vedaldi. Neuromorph: Unsupervised shape interpolation and correspondence in one go. In CVPR, 2021. 1, 2 +[22] Gianfranco Forlani, Carla Nardinocchi, Marco Scaioni, and Primo Zingaretti. Complete classification of raw lidar data and 3d reconstruction of buildings.
Pattern analysis and applications, 8:357-374, 2006. 1 +[23] Mathis Fricke, Tomislav Marić, Aleksandar Vucković, Ilia Roisman, and Dieter Bothe. A locally signed-distance preserving level set method (sdpls) for moving interfaces. arXiv preprint arXiv:2208.01269, 2022. 3 +[24] Michael Garland and Paul S Heckbert. Surface simplification using quadric error metrics. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 209-216, 1997. 1 +[25] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2014. 3 +[26] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. 3 +[27] Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, and Yaron Lipman. Implicit geometric regularization for learning shapes. In ICML, 2020. 1 +[28] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, pages 6840-6851. Curran Associates, Inc., 2020. 3 +[29] L Harenstam-Nielsen, L Sang, A Saroha, N Araslanov, and D Cremers. Diffcd: A symmetric differentiable chamfer distance for neural implicit surface fitting. In European Conference on Computer Vision (ECCV), 2024. 1 + +[30] Fridtjov Irgens. Continuum Mechanics in Curvilinear Coordinates, pages 599-624. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. 1 +[31] A Kaye, RFT Stepto, WJ Work, JV Aleman, and A Ya Malkin. Definition of terms relating to the non-ultimate mechanical properties of polymers (recommendations 1998). Pure and applied chemistry, 70(3):701-754, 1998. 1 +[32] Hsueh-Ti Derek Liu, Francis Williams, Alec Jacobson, Sanja Fidler, and Or Litany. Learning smooth neural functions via lipschitz regularization. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-13, 2022. 
2, 3, 5, 7, 1 +[33] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1-248:16, 2015. 6 +[34] R. Malladi, J.A. Sethian, and B.C. Vemuri. Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2): 158-175, 1995. 2 +[35] Riccardo Marin, Simone Melzi, Emanuele Rodola, and Umberto Castellani. Farm: Functional automatic registration method for 3d human bodies. In Computer Graphics Forum, pages 160-173. Wiley Online Library, 2020. 4 +[36] Riccardo Marin, Enric Corona, and Gerard Pons-Moll. Nicp: Neural icp for 3d human registration at scale. In European Conference on Computer Vision, 2024. 4 +[37] Ishit Mehta, Manmohan Chandraker, and Ravi Ramamoorthi. A level set theory for neural implicit evolution under explicit flows. In ECCV, 2022. 3 +[38] Tiago Novello, Vinícius da Silva, Guilherme Schardong, Luiz Schirmer, Hélio Lopes, and Luiz Velho. Neural implicit surface evolution. In ICCV, 2023. 2, 3, 5, 6, 7 +[39] Maks Ovsjanikov, Mirela Ben-Chen, Justin Solomon, Adrian Butscher, and Leonidas Guibas. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (ToG), 31(4):1-11, 2012. 4 +[40] Maks Ovsjanikov, Etienne Corman, Michael Bronstein, Emanuele Rodola, Mirela Ben-Chen, Leonidas Guibas, Frederic Chazal, and Alex Bronstein. Computing and processing correspondences with functional maps. In SIGGRAPH ASIA 2016 Courses, pages 1-60. 2016. 4 +[41] Paul Roetzer, Ahmed Abbas, Dongliang Cao, Florian Bernard, and Paul Swoboda. Discomatch: Fast discrete optimisation for geometrically consistent 3d shape matching. In In Proceedings of the European conference on computer vision (ECCV), 2024. 2 +[42] Jean-Michel Roufosse, Abhishek Sharma, and Maks Ovsjanikov. Unsupervised deep learning for structured shape matching. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1617-1627, 2019. 4 +[43] L Sang, B Haefner, X Zuo, and D Cremers. High-quality rgb-d reconstruction via multi-view uncalibrated photometric stereo and gradient-sdf. In IEEE Winter Conference on Applications of Computer Vision (WACV), Hawaii, USA, 2023. 1 +[44] L Sang, A Saroha, M Gao, and D Cremers. Weight-aware implicit geometry reconstruction with curvature-guided sampling. arXiv preprint arXiv:2306.02099, 2023. 1 + +[45] L Sang, Z Canfes, D Cao, F Bernard, and D Cremers. Implicit neural surface deformation with explicit velocity fields. In International Conference on Learning Representations (ICLR), 2025. 2, 3, 4, 5, 7, 1 +[46] Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In NeurIPS, 2020. 1 +[47] C Sommer, L Sang, D Schubert, and D Cremers. Gradient-SDF: A semi-implicit surface representation for 3d reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1 +[48] Olga Sorkine and Marc Alexa. As-rigid-as-possible surface modeling. In Symposium on Geometry processing, pages 109–116. CiteSeer, 2007. 1, 2, 4 +[49] Anthony James Merrill Spencer. Continuum mechanics. Courier Corporation, 2004. 4, 5 +[50] Jiaming Sun, Yiming Xie, Linghao Chen, Xiaowei Zhou, and Hujun Bao. NeuralRecon: Real-time coherent 3D reconstruction from monocular video. CVPR, 2021. 1 +[51] Julian Tachella, Yoann Altmann, Ximing Ren, Aongus McCarthy, Gerald S Buller, Stephen Mclaughlin, and Jean-Yves Tourneret. Bayesian 3d reconstruction of complex scenes from single-photon lidar data. SIAM Journal on Imaging Sciences, 12(1):521-550, 2019. 1 +[52] Omid Taheri, Nima Ghorbani, Michael J. Black, and Dimitrios Tzionas. GRAB: A dataset of whole-body human grasping of objects. In European Conference on Computer Vision (ECCV), 2020. 8 +[53] Stephen P Timoshenko. 
Strength of Materials: Part II Advanced Theory and Problems. D. Van Nostrand, 1956. 1 +[54] Wenbo Wang, Hsuan-I Ho, Chen Guo, Boxiang Rong, Artur Grigorev, Jie Song, Juan Jose Zarate, and Otmar Hilliges. 4d-dress: A 4d dataset of real-world human clothing with semantic annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 5, 8, 2, 6 +[55] Guandao Yang, Serge Belongie, Bharath Hariharan, and Vladlen Koltun. Geometry processing with neural fields. In NeurIPS, 2021. 2, 3, 5, 6, 7, 1 +[56] Silvia Zuffi, Angjoo Kanazawa, David Jacobs, and Michael J. Black. 3D menagerie: Modeling the 3D shape and pose of animals. In CVPR, 2017. 5, 2, 3 \ No newline at end of file diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/images.zip b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..74c1349c76bb3686430999eab70c70f02c088fa8 --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30b473b2ededee52d8a30833a85d023e9c094868d659413747d3665b1de1f8b0 +size 559266 diff --git a/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/layout.json b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ba137ceae137324c54c00bdfffc6ea69cfd4b7b8 --- /dev/null +++ b/CVPR/2025/4Deform_ Neural Surface Deformation for Robust Shape Interpolation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:426812e21b9a0f5c5ee5ccd83f707dd3b0d7adb269bdd4a569340b86a03e3026 +size 371204 diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_content_list.json 
b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6c19aca9f881b63d1f796cc13c12f44d6f44bc45 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eeff7501dcfc438dde4ca9387fe86de8603d05c6fd17239fd0c03283e00828d7 +size 75942 diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_model.json b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..33816bc83142a85683148fa472d84e892fe249b7 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9581f870bb9a01342c3f108eda5685040d26c0e5a28aa05656d3393259487d15 +size 94550 diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_origin.pdf b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bfe6313cc9eda9093715d865587b0bc504749c51 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/0cf19282-fa7f-4680-bc87-175d439267a8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:518e481067c2ee0ab6229975cddafc247f8b12a3fdb2bc21452aa891eb3f3703 +size 7373920 diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D 
Video Diffusion/full.md b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c4495cac65a074c7288631f8557e942bb7b0b6f8 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/full.md @@ -0,0 +1,321 @@ +# 4Real-Video: Learning Generalizable Photo-Realistic 4D Video Diffusion + +Chaoyang Wang $^{1,\ast\dagger}$ Peiye Zhuang $^{1,*}$ Tuan Duc Ngo $^{1,2,*}$ Willi Menapace $^{1}$ Aliaksandr Siarohin $^{1}$ Michael Vasilkovsky $^{1}$ Ivan Skorokhodov $^{1}$ Sergey Tulyakov $^{1}$ Peter Wonka $^{1,3}$ Hsin-Ying Lee $^{1}$ $^{1}$ Snap Inc $^{2}$ Umass Amherst $^{3}$ KAUST + +https://snap-research.github.io/4Real-Video/ + +![](images/ab82a26de6b3b26556addd94a40df86fef92f462b8aaa25f74593ff715802c0c.jpg) + +![](images/8728c3ab28313cfa4c0506d2fccaddaca8f9d4f5b45483be8cc3071bcaeddb4d.jpg) + +![](images/d35ea1fd2d8a3c287adf4a911fdab0870a8567e820ed4e5975176e1696ceaaf5.jpg) + +![](images/602c99eb12eea2d142e0edacdbaa1c688b51c3cee668e22c08527eb9edffa450.jpg) + +![](images/512870464f9d04bd16c6e5aa2eeb557d864c23d443f449e284cd4f67a1b712d0.jpg) + +![](images/d5c0dc617d249372fb0425ce35acb3670b6b01dc9253fb3b5a3c6e40c0666a93.jpg) +"move a truck towards right" + +![](images/4a1f1194eb47e0e8ffc61873ee70993686bd3b1b89d2c0112caf8c11dee42321.jpg) +Figure 1. 4Real-Video is a 4D generation framework that (top-left) takes a fixed-view video and a freeze-time video as input and generates a grid of consistent video frames. One axis of the grid varies in time, and the other axis varies the viewpoint. The input videos can be real videos or videos generated by a video model. Note that our method can generate grids larger than $8 \times 8$ videos. Here, we present subsets of frames as an example. (top-right) 4D videos generated from generated videos. (bottom) We can also capture a real-world scene, and generate a 4D video given different prompts. 
+ +![](images/255197e00d2ba2b37e5eee576b86522cd47e98310bb34bd3b23136cff4a124a5.jpg) +"a yellow lego brick turns into a cat" + +# Abstract + +We propose 4Real-Video, a novel framework for generating 4D videos, organized as a grid of video frames with both time and viewpoint axes. In this grid, each row contains frames sharing the same timestep, while each column contains frames from the same viewpoint. We propose a novel two-stream architecture. One stream performs viewpoint updates on columns, and the other stream performs tempo + +ral updates on rows. After each diffusion transformer layer, a synchronization layer exchanges information between the two token streams. We propose two implementations of the synchronization layer, using either hard or soft synchronization. This feedforward architecture improves upon previous work in three ways: higher inference speed, enhanced visual quality (measured by FVD, CLIP, and VideoScore), and improved temporal and viewpoint consistency (measured by VideoScore and Dust3R-Confidence). + +# 1. Introduction + +With the recent rise of video diffusion models [22, 25, 44], $4D$ video generation has emerged as an important extension. $4D$ video generation has numerous potential applications, including creating dynamic scenes and objects through post-processing and enabling immersive experiences via image-based rendering techniques. To position our work, we define $4D$ video as follows: 4D video is a grid of video frames with a time and a view-point axis. In our arrangement, all frames in a row share a timestamp, and all in a column share a viewpoint (see Fig. 1, left for an example). Our definition contrasts with recent work that also uses the term "4D video", to describe video generation with camera and motion control [49]. To clarify this distinction, we will refer to such approaches as camera-aware. 
While both paradigms share similar applications, we believe that $4D$ video has two important advantages compared to camera-aware video: (a) a complete space-time grid can provide full 4D experiences and enable easier dynamic reconstruction, yet it is non-trivial for camera-aware methods to complete such a grid, and (b) videos generated by camera-aware methods tend to have inferior multi-view consistency [15]. + +As 4D video generation is a very recent topic, there are only a few competing approaches. Some works [16, 41, 47] propose training the 4D models directly using the limited available 4D data, such as synthetic animated 3D assets from Objaverse [7] or a human-specific dataset [30]. These models can generate a space-time grid, yet they cannot generalize beyond the limited training data distribution. Furthermore, the architecture designs, which sequentially interleave temporal and view attention, often fail to account for their interdependence, leading to artifacts or reduced generalization. + +To address the challenges of generating 4D videos, we introduce a novel multi-view video generation model leveraging a two-stream architecture to enhance multi-view and temporal consistency. Our approach extends existing transformer-based video diffusion models by splitting video tokens into two streams: one dedicated to capturing temporal updates across fixed viewpoints and the other focused on view updates across freeze-time frames. These streams are processed independently using pre-trained transformer layers to reuse existing models efficiently. To ensure coherence between the streams, we introduce a synchronization layer that dynamically exchanges information between the temporal and view tokens. Inspired by the optimization literature, we propose two types of synchronization layers that perform either hard or soft synchronization updates, with the latter providing greater flexibility by learning adaptive weights to modulate token interactions across layers. 
This design avoids the distributional shifts observed in sequential model designs, preserves the consistency of the original video model, and enables high-quality 4D generation. + +The proposed architecture design can generate diverse, dynamic multi-view videos in approximately 1 minute ( $8 \times 8$ frames at a resolution of $288 \times 512$ ), as opposed to hours required by previous SDS-based approaches [2, 18, 27, 46]. Beyond its speed advantage, the model generalizes well with limited 4D training data. This is achieved by initially training on 2D transformed videos to simulate synchronized camera motion, followed by fine-tuning on a small amount of animated Objaverse data [7]. Moreover, our model does not rely on explicit camera conditioning modules. Instead, it takes a real or generated freeze-time video and a fixed-view video as conditional inputs, automatically inferring the viewpoints and motion to be generated. This effectively decomposes the problem, allowing us to leverage recent advancements in camera-controlled video generation as conditional input. It also simplifies the process of animating existing freeze-time videos by removing the requirement for users to provide camera poses explicitly. + +In summary, we make the following contributions: + +- We propose a two-stream architecture for 4D video generation that independently handles temporal and view updates, synchronizing streams to ensure consistency. +- We propose flexible synchronization mechanisms that enable efficient and adaptive token interactions, preserving the generation quality of pre-trained video layers. +- Our model is data-efficient and can produce high-resolution 4D videos in a fraction of the time required by prior methods. We obtain state-of-the-art results in terms of video quality and multi-view consistency. + +# 2. Related Work + +Optimization-Based 4D Generation. 
Score Distillation Sampling (SDS) [6, 17, 26, 36, 38, 51] has been used to generate 3D content by obtaining gradients from pre-trained models like text-to-image [28, 29] and text-to-multiview models [19, 32]. Extending this approach, a branch of 4D generation methods [2, 14, 18, 27, 33, 45, 48] leverages additional text-to-video supervision [5, 8, 12] to generate dynamic content. However, these methods require time-consuming optimization, often taking hours to produce a 4D output. Furthermore, most methods derive 3D priors from multi-view diffusion models [19, 32] trained on an object-centric and synthetic dataset [7], resulting in a bias toward object-centric, non-photorealistic outputs. + +Camera-aware video generation. Text-to-video models [21, 22, 25, 44] have shown promising results in generating coherent and photorealistic video content. To enable a more controllable and interactive content creation process, camera control in video generation has gained attention. These approaches [3, 9, 39, 40, 43] propose adding camera control by injecting camera pose information into the temporal layers. Additionally, some approaches [42, 49] collect and annotate videos with camera poses to fine-tune + +![](images/78fbc5f481fbb3a30e47cbf5c5032652e318630c67ddd527b05b03ee5895f088.jpg) +Figure 2. Overview of 4Real-Video. Left: we initialize the grid of frames with a (generated or real) fixed-viewpoint video in the first row and a freeze-time video in the first column. Middle: our architecture consists of two parallel token streams. The top part processes $\mathbf{x}_l^{\mathrm{v}}$ with viewpoint updates and the bottom part processes $\mathbf{x}_l^{\mathrm{t}}$ with temporal updates. Subsequently, a synchronization layer computes the new tokens $\mathbf{x}_{l + 1}^{\mathrm{v}}, \mathbf{x}_{l + 1}^{\mathrm{t}}$ for the next layer in the diffusion transformer architecture.
Right: we propose two implementations of the synchronization layer: hard and soft synchronization. + +video models. However, despite being visually consistent, the content generated by camera-aware methods tends to have multi-view inconsistencies. Furthermore, it is not trivial for camera-aware video models to generate a complete space-time grid of 4D videos, limiting their applicability in fully 4D scenarios. + +4D video generation. In this work, we define 4D video as a video grid organized along both temporal and viewpoint axes. Some work [16, 41, 47] trains 4D models using existing 4D data, typically subsets of Objaverse [7]. Although these models are conceptually capable of moving beyond object-centric content, their outputs remain constrained by the limited diversity of available 4D datasets in practice. To reduce reliance on synthetic data, alternatives like 4Real [46] rely solely on a video model to generate consistent dynamic and freeze-time videos, followed by an optimization-based 4D reconstruction to obtain the underlying 4D content. However, the dynamic and freeze-time videos alone cannot guarantee consistency within the 4D grid, and the optimization is computationally expensive. CVD [15] tackles this limitation by fine-tuning video models to simultaneously generate structurally consistent video pairs, using pseudo-paired datasets curated from monocular video datasets [4, 34]. Although CVD proposes strategies to extend generation to multiple views, its consistency and efficiency remain suboptimal for multi-view 4D generation. + +# 3. Method + +Problem setup. We aim to generate a structured grid of video frames $\{I_{ij}\}$, where all frames in a row share a viewpoint, and all frames in a column share a timestep. In other words, each row is a fixed-view video, and each column is a freeze-time video. The inputs to our method are the first row $I_{1*}$ and the first column $I_{*1}$ of the frames.
These inputs are from either real-world videos or synthetic outputs from existing video generation models. The task is to synthesize the remaining frames while ensuring both temporal and multi-view consistency (see Fig. 2 left). + +# 3.1. Base video model training + +Freeze-time and dynamic video generation. Inspired by 4Real [46], we train a base video model to support two distinct generation modes: freeze-time video that depicts static scenes with changes in viewpoint, and dynamic video that captures object motion. We group datasets into two categories: (1) videos with arbitrary camera and scene motions, and (2) videos of static scenes. Each group is associated with a unique context embedding that controls the generation process to align with the respective distributions. + +Masked training. To handle flexible input configurations, the model is trained using a random masking strategy. This enables the model to predict unseen frames based on any subset of input frames. The design (1) allows for autoregressive generation of long videos by progressively synthesizing frames, and (2) provides essential flexibility for the 4D video model to condition on various input video frames. + +# 3.2. Multi-view video model + +# 3.2.1. Two-stream architecture + +Current state-of-the-art video diffusion models [22, 25, 44] mostly leverage a transformer-based architecture such as DiT [24], which forwards video tokens through a series of spatial-temporal transformer blocks with skip connections. + +Specifically, each DiT transformer block $\varphi_{l}$ produces an update $\Delta \mathbf{x}_l$ to the current video tokens $\mathbf{x}_l$ at the $l$ -th layer with condition $\mathbf{c}$ : + +$$ +\Delta \mathbf {x} _ {l} = \varphi_ {l} (\mathbf {x} _ {l}; \mathbf {c}), \quad \mathbf {x} _ {l + 1} = \mathbf {x} _ {l} + \Delta \mathbf {x} _ {l}. \tag {1} +$$ + +In our setting, we need to extend to a set of tokens describing all frames in the grid. 
We use $\mathbf{x}_l$ to denote the set of all tokens at layer $l$ and $\mathbf{x}_{l,i,j}$ to describe the set of tokens for a frame at layer $l$ with viewpoint $i$ and timestep $j$. As our goal is to reuse pre-trained high-quality video models as much as possible, we can utilize pre-trained DiT transformer layers to either update a row for viewpoint $i$ (Eq. 2) or a column for timestep $j$ (Eq. 3) of our frame grid, + +$$ +\varphi_l^{\mathrm{v}}\left(\left\{\mathbf{x}_{l,i,1}, \dots, \mathbf{x}_{l,i,T}\right\}; \mathbf{c}\right) \quad \text{for } 1 \leq i \leq V, \tag{2} +$$ + +$$ +\varphi_l^{\mathrm{t}}\left(\left\{\mathbf{x}_{l,1,j}, \dots, \mathbf{x}_{l,V,j}\right\}; \mathbf{c}\right) \quad \text{for } 1 \leq j \leq T. \tag{3} +$$ + +Since we have a total of $T$ timesteps and $V$ viewpoints, we can process the complete grid by either performing $V$ row updates or $T$ column updates in parallel when reusing existing DiT transformer blocks. To avoid overly complex notation, we write $\varphi_l^{\mathrm{v}}(\mathbf{x}_l,\mathbf{c})$ to denote the update of a single row or a parallel update of all $V$ rows jointly (and use analogous notation for column updates with $\varphi_l^{\mathrm{t}}(\mathbf{x}_l,\mathbf{c})$). + +Our first important design idea is variable (token) splitting to create two separate sets of tokens to encode the complete frame grid, $\mathbf{x}_l^{\mathrm{t}}$ for temporal and $\mathbf{x}_l^{\mathrm{v}}$ for view updates. The set $\mathbf{x}_l^{\mathrm{v}}$ will be processed using $V$ parallel row updates and the set $\mathbf{x}_l^{\mathrm{t}}$ will be processed using $T$ parallel column updates.
Updates are computed independently and in parallel: + +$$ +\mathbf{y}_l^{\mathrm{v}} = \mathbf{x}_l^{\mathrm{v}} + \varphi_l^{\mathrm{v}}\left(\mathbf{x}_l^{\mathrm{v}}; \mathbf{c}^{\mathrm{v}}\right); \quad \mathbf{y}_l^{\mathrm{t}} = \mathbf{x}_l^{\mathrm{t}} + \varphi_l^{\mathrm{t}}\left(\mathbf{x}_l^{\mathrm{t}}; \mathbf{c}^{\mathrm{t}}\right). \tag{4} +$$ + +We propose a synchronization layer after each DiT transformer layer $l$, which exchanges information between the two token streams. The synchronization layer $f$ computes a function $(\mathbf{x}_{l + 1}^{\mathrm{v}},\mathbf{x}_{l + 1}^{\mathrm{t}}) = f(\mathbf{y}_l^{\mathrm{v}},\mathbf{y}_l^{\mathrm{t}})$ in order to obtain the input tokens for the next layer. This architecture is shown in Fig. 2. Several model designs have been proposed to extend pre-trained video models for 4D video generation. Next, we review and analyze designs in previous works (Sec. 3.2.2). Then we introduce our design of the synchronization layer (Sec. 3.2.3-3.2.4). + +# 3.2.2. Sequential interleaving + +A competing design choice would be to compute alternating updates for temporal and multi-view consistency: + +$$ +\mathbf{y}_l = \mathbf{x}_l + \varphi_l^{\mathrm{v}}(\mathbf{x}_l; \mathbf{c}^{\mathrm{v}}), \quad \mathbf{x}_{l + 1} = \mathbf{y}_l + \varphi_l^{\mathrm{t}}(\mathbf{y}_l; \mathbf{c}^{\mathrm{t}}), \tag{5} +$$ + +where $\varphi_l^{\mathrm{v}}$ and $\varphi_l^{\mathrm{t}}$ denote applying the transformer layer $\varphi_l$ across the view and the time axes, respectively. Most prior works [15, 16, 47] that sequentially interleave cross-view attention and cross-time attention could be interpreted as performing the above steps.
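As a toy illustration of the difference between the parallel two-stream update (Eq. 4) and the sequential interleaving (Eq. 5), the following sketch uses scalar stand-in "tokens" per grid cell; the layers `phi_v`, `phi_t`, the averaging `hard_sync`, and all numbers are illustrative assumptions, not the paper's implementation:

```python
# Toy sketch: scalar "tokens" per grid cell; rows index viewpoints, columns timesteps.

def phi_v(row):
    # stand-in DiT block applied along one row of the grid (cf. Eq. 2)
    m = sum(row) / len(row)
    return [0.5 * (m - x) for x in row]  # residual update toward the row mean

def phi_t(col):
    # stand-in DiT block applied along one column of the grid (cf. Eq. 3)
    m = sum(col) / len(col)
    return [0.5 * (m - x) for x in col]

def hard_sync(yv, yt, w=0.5):
    # merge the two streams into one shared copy (cf. Eq. 8 with W_v = W_t = 0.5 I)
    merged = [[w * a + (1.0 - w) * b for a, b in zip(r1, r2)]
              for r1, r2 in zip(yv, yt)]
    return merged, [row[:] for row in merged]

def two_stream_step(xv, xt, sync):
    # Eq. (4): independent, parallel row/column updates on two token copies,
    # followed by a synchronization layer that exchanges information between them.
    yv = [[x + d for x, d in zip(row, phi_v(row))] for row in xv]
    cols = [list(c) for c in zip(*xt)]
    yt_cols = [[x + d for x, d in zip(col, phi_t(col))] for col in cols]
    yt = [list(r) for r in zip(*yt_cols)]
    return sync(yv, yt)

def sequential_step(x):
    # Eq. (5): a view (row) update, then a temporal (column) update on its output.
    y = [[a + d for a, d in zip(row, phi_v(row))] for row in x]
    cols = [list(c) for c in zip(*y)]
    z_cols = [[a + d for a, d in zip(col, phi_t(col))] for col in cols]
    return [list(r) for r in zip(*z_cols)]

grid = [[1.0, 2.0], [3.0, 5.0]]  # V = 2 viewpoints x T = 2 timesteps
xv_next, xt_next = two_stream_step(grid, grid, hard_sync)
x_next = sequential_step(grid)
```

Note that in `sequential_step` the temporal update consumes the output of the view update directly, whereas the two-stream step applies both updates to independent token copies and only then exchanges information through the synchronization layer.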
While conceptually simple, this approach has limitations: (i) It does not fully account for the interdependence between temporal and view consistency, treating them as independent objectives during each update. (ii) Outputs from the view update $\mathbf{y}_l$ may be out-of-distribution for the temporal update, leading to artifacts or reduced generalization. Prior works either train an additional network to adapt $\mathbf{y}_l$ to become in-distribution [16], or fine-tune the cross-view attention or the cross-time attention to compensate for the discrepancy [15, 47]. However, due to the limited available 4D data, fine-tuning the attention layers could degrade the quality of the video model, limiting its generalization capability to domains outside the 4D training set. + +# 3.2.3. Synchronization in Optimization + +One can interpret the DiT blocks of the video model as functions performing a fixed number of iterative variable updates to optimize an implicit cost function $\mathcal{C}(\mathbf{x};\mathbf{c})$ [1, 13]. The implicit cost function $\mathcal{C}$ can be thought of as an abstract measure of "closeness" to realistic videos given context $\mathbf{c}$. Under certain restricted assumptions, Ahn et al. [1] prove that a transformer with $L$ layers learns to perform $L$ iterations of preconditioned gradient descent to reach certain critical points of the training loss. + +The intuition that the transformer architecture can be seen as an iterative optimization solver motivates us to create a link to the optimization literature to explain our synchronization layer design choices. Using the optimization analogy, our video model solves a combined optimization problem for 4D generation: + +$$ +\min_{\mathbf{x}} \mathcal{C}_{\mathrm{v}}(\mathbf{x}) + \mathcal{C}_{\mathrm{t}}(\mathbf{x}),
\tag{6} +$$ + +where $\mathcal{C}_{\mathrm{v}}$ ensures that each row of the grid is a fixed-view video and $\mathcal{C}_{\mathrm{t}}$ ensures that each column is a freeze-time video. Using the idea of variable splitting, this problem can be transformed into the equivalent problem: + +$$ +\min_{(\mathbf{x}^{\mathrm{v}}, \mathbf{x}^{\mathrm{t}})} \mathcal{C}_{\mathrm{v}}(\mathbf{x}^{\mathrm{v}}) + \mathcal{C}_{\mathrm{t}}(\mathbf{x}^{\mathrm{t}}) \quad \text{s.t.} \quad \mathbf{x}^{\mathrm{v}} = \mathbf{x}^{\mathrm{t}}. \tag{7} +$$ + +An optimization problem with this structure can be tackled by algorithms like projected gradient descent, which performs a projection onto the constraint manifold at every iteration. This leads to the design of a hard synchronization between the two token streams. Alternatively, one can employ a quadratic regularization or an algorithm like ADMM [23] that does not strictly enforce the constraint at every iteration but makes the token streams more similar. This leads to the design of a soft synchronization between the two token streams. + +# 3.2.4. Synchronization layer design + +The synchronization layer maintains consistency between the two token streams, as defined in Eq. 4. Following this, we explore two synchronization strategies: + +Hard synchronization. Hard synchronization strictly enforces the constraint $\mathbf{x}_l^{\mathrm{t}} = \mathbf{x}_l^{\mathrm{v}}$ at every iteration. A
However, in contrast to traditional optimization, we can generalize this step to compute a weighted combination with learned weights: + +$$ +\mathbf {x} _ {l + 1} = \mathbf {W} _ {l} ^ {\mathrm {v}} \mathbf {y} _ {l} ^ {\mathrm {v}} + \mathbf {W} _ {l} ^ {\mathrm {t}} \mathbf {y} _ {l} ^ {\mathrm {t}}, \tag {8} +$$ + +where $\mathbf{W}_l^{\mathrm{v}}$ , $\mathbf{W}_l^{\mathrm{t}}$ are linear weights for merging each token with initial values, i.e., $\frac{1}{2}\mathbf{I}$ . The weights are modulated by the diffusion time $\sigma$ to make them adaptive to different stages of the diffusion process. + +Empirically, the 4D model with hard sync can indeed generate temporally consistent 4D videos. However, it tends to produce less desirable frames when the viewpoint differs significantly from the input fixed-view video. Common artifacts include objects appearing stretched in the direction of camera movement or unintended object motion when the time stamp is intended to be frozen (Refer to visual examples in Fig. 7). We hypothesize that the limitation of hard sync is that the merged video tokens are aggregated from both the freeze-time and fixed-view videos, causing a discrepancy in the learned distribution of the base video model. + +Soft synchronization. The above observation motivates an alternative soft synchronization strategy – the video tokenens $\mathbf{x}_l^{\mathrm{v}},\mathbf{x}_l^{\mathrm{t}}$ are kept in two separate streams instead of merging them into a single copy as in Eq. (8). A soft update is used to make the streams more similar. This design gives additional flexibility for the model to adaptively adjust the strength of synchronization at different layers. 
Again, we can design a more general solution than would be available in traditional optimization and use a modulated linear layer to predict asymmetrical token updates: + +$$ +\left(\Delta \mathbf{y}_l^{\mathrm{v}}, \Delta \mathbf{y}_l^{\mathrm{t}}\right) = \operatorname{ModLinear}\left(\mathbf{y}_l^{\mathrm{v}}, \mathbf{y}_l^{\mathrm{t}}; \sigma\right). \tag{9} +$$ + +Then, the tokens are updated separately: + +$$ +\mathbf{x}_{l + 1}^{\mathrm{v}} = \mathbf{y}_l^{\mathrm{v}} + \Delta \mathbf{y}_l^{\mathrm{v}}, \quad \mathbf{x}_{l + 1}^{\mathrm{t}} = \mathbf{y}_l^{\mathrm{t}} + \Delta \mathbf{y}_l^{\mathrm{t}}. \tag{10} +$$ + +Soft synchronization offers more flexibility, adapting the strength of synchronization across layers. Empirically, this results in better consistency and fewer artifacts in challenging scenarios, such as large viewpoint changes. We visualize the update strength and token similarity in Fig. 3. We observe that the update strength increases for layers deeper in the network. The two token streams initially drift apart before being pulled closer together by the increased update strength in later layers. + +# 3.3. Implementation + +Training. The model is trained with the velocity-matching loss of rectified flow [20], leveraging two data sources: (1) 2D transformed videos: we apply a sequence of continuous 2D affine transformations on the fly to video frames to mimic camera motion. This provides large-scale pseudo
+ +![](images/c37f73d9f5ed98a2b28417adf5d3e0849fe13664cfc597225e9f75686f52b0ab.jpg) +(b) Similarity between $\mathbf{x}_l^{\mathrm{v}}$ and $\mathbf{x}_l^{\mathrm{t}}$ at each layer. + +4D data to train the model to generate synchronized motions. However, models only trained with this source tend to generate flattened foreground objects that are noticeable when changing viewpoints. (2) Animated Objaverse dataset: We render around 15,000 multi-view videos using animated 3D assets from Objaverse [7], positioning the rendering cameras on a circular trajectory around each asset. Fine-tuning with this small-scale, synthetic, object-centric 4D dataset quickly equips the model with the ability to maintain both temporal and multi-view consistency, even in complex scenes containing multiple objects and intricate environments. + +Alternatively, the model can be trained on a mix of pseudo-4D and Objaverse data, instead of sequentially pretraining on pseudo-4D and finetuning on Objaverse. However, we found no combination of mixed training that outperformed the sequential approach. We find that the pseudo data distracts the model, causing it to focus on 2D rather than true perspective transformations, thereby reducing quality. Thus, we recommend using pseudo-4D data only during pretraining. + +Extending to a wider view and longer time. The model is trained to generate an $8 \times 8$ frame grid in each step. For input fixed-view videos or freeze-time videos with extended durations, we generate frames autoregressively, advancing along the time and view axes in a sliding window fashion. + +# 4. Experiments + +# 4.1. Implementation details + +Base video model. The base video model consists of 600M parameters, with 24 DiT blocks of 1024 channel size. We found that pixel-based diffusion models train faster and produce more coherent motion compared to latent-based models of similar model size. 
Thus, we opt to train the base model to directly output pixel values, given limited accessible GPU resources. The model is progressively trained from a resolution of $36 \times 64$ to $72 \times 128$ using 24 A100 GPUs for 12 days. We then train a diffusion-based upsampler to upsample the video to the target $288 \times 512$ resolution. More + +
| Method | FID ↓ | CLIP ↑ | FVD ↓ (Time) | FVD ↓ (View) | FVD-Test ↓ (Time) | FVD-Test ↓ (View) | Visual Quality ↑ (Time) | Visual Quality ↑ (View) | Temporal Consist. ↑ (Time) | Temporal Consist. ↑ (View) | Factual Consist. ↑ (Time) | Factual Consist. ↑ (View) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SV4D [41] | 204.81 | 19.46 | 1053.10 | 1245.42 | 814.50 | 323.99 | 2.26 | 2.02 | 2.03 | 1.68 | 2.12 | 1.99 |
| MotionCtrl [39] | 87.10 | 20.20 | 1556.36 | 1509.76 | 1170.04 | 302.18 | 2.36 | 2.30 | 2.38 | 2.25 | 2.38 | 2.33 |
| Sequential | 96.64 | 28.16 | 1662.54 | 1797.15 | 897.08 | 597.19 | 2.30 | 2.28 | 2.21 | 2.15 | 2.23 | 2.20 |
| Soft w/o Obj | 80.17 | 28.11 | 1392.48 | 1720.47 | 318.18 | 302.18 | 2.41 | 2.39 | 2.37 | 2.31 | 2.35 | 2.33 |
| Hard Sync | 79.92 | 28.16 | 972.87 | 1045.35 | 316.14 | 251.44 | 2.42 | 2.40 | 2.40 | 2.33 | 2.37 | 2.34 |
| Soft Sync | 78.36 | 28.22 | 906.16 | 1036.00 | 308.15 | 261.02 | 2.43 | 2.42 | 2.41 | 2.36 | 2.38 | 2.36 |
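The Time/View column pairs above come from evaluating the same generated frame grid sliced two ways: played along the time axis at a fixed viewpoint, or along the view axis at a fixed time step. A minimal slicing sketch (the grid resolution and axis layout here are our assumptions):

```python
import numpy as np

# Illustrative 8x8 frame grid with axes (time, view, height, width, channels).
grid = np.zeros((8, 8, 72, 128, 3))

fixed_view_video  = grid[:, 0]  # advance time at viewpoint 0
freeze_time_video = grid[0, :]  # advance viewpoint at time step 0

# Both slices are ordinary 8-frame videos that metrics such as FVD or
# VideoScore can consume directly.
assert fixed_view_video.shape == freeze_time_video.shape == (8, 72, 128, 3)
```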
+ +![](images/c6d5429ea523b91294f56eb4fb80edb6a0ff46d3a5d2a96cacfee7d4294c44e9.jpg) +Figure 4. Visual Comparisons. We show two viewpoints for a fixed time for each method. Our method produces high-quality images, even under significant camera motion. In contrast, frames generated by 4Real and SV4D tend to appear more blurred, with objects notably distorted in SV4D. MotionCtrl struggles to generate frames under substantial camera motion. We use red bounding boxes to highlight regions with distortions and flickering, which become particularly noticeable when viewed as a video. + +Table 1. Quantitative ablation. We evaluate the visual quality, temporal consistency, and text-video alignment using various metrics. + +
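The GIM- and Dust3R-Confidence entries in Table 2 are proportions of pixels whose per-pixel confidence exceeds a threshold τ; a minimal sketch of that computation (the array and function names are ours):

```python
import numpy as np

def confidence_proportion(conf_map, tau):
    """Percentage of pixels with confidence above tau, as reported for the
    GIM-Confidence and Dust3R-Confidence columns (illustrative sketch)."""
    conf_map = np.asarray(conf_map, dtype=float)
    return float((conf_map > tau).mean() * 100.0)
```

Sweeping τ (e.g., 0.1/0.5/0.7 for GIM and 2.0/2.5/3.0 for Dust3R) yields one column per threshold.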
| Method | GIM-Confidence ↑ (τ = 0.1) | GIM-Confidence ↑ (τ = 0.5) | GIM-Confidence ↑ (τ = 0.7) | Dust3R-Confidence ↑ (τ = 2.0) | Dust3R-Confidence ↑ (τ = 2.5) | Dust3R-Confidence ↑ (τ = 3.0) |
| --- | --- | --- | --- | --- | --- | --- |
| Sequential | 65.0 | 28.6 | 18.5 | 33.5 | 24.6 | 16.6 |
| Soft w/o Obj | 80.8 | 49.6 | 36.1 | 39.1 | 31.4 | 24.0 |
| Hard Sync | 78.7 | 44.8 | 31.4 | 39.3 | 31.5 | 23.8 |
| Soft Sync | 79.6 | 47.1 | 33.8 | 41.0 | 33.4 | 25.7 |
+ +Table 2. Multi-view consistency is measured using an image-matching method and a 3D reconstruction method. + +training details are included in the supplementary. + +4D model. The 4D video model is trained progressively from low to high resolution. It is first trained using pseudo-4D videos for $20\mathrm{k}$ iterations, followed by fine-tuning for $3\mathrm{k}$ iterations on the Animated Objaverse dataset [7]. Notably, longer training on the Objaverse data led to a slight decrease in quality when applied to real-world scenes. Note that fine-tuning only affects the weights of the synchronization layers to avoid shifting the video distribution away from real videos. + +# 4.2. Evaluation + +Test sets. We use the Snap Video Model [21] to generate pairs of freeze-time and fixed-view videos, given a diverse set of text prompts. Each video is 2 seconds long, consisting of 16 frames. In total, we collected 200 pairs to serve as testing inputs. Some samples of our results on the test set are shown in Fig. 5. + +Evaluation metrics. Evaluating 4D video generation is challenging without ground-truth data. We employ the following metrics to assess generation quality: + +- VideoScore [10] is a video quality evaluation network that outputs scores assessing visual quality, temporal consistency, text-video alignment, motion degree, and factual consistency. In our case, we drop the text-video alignment and motion degree scores, since these scores relate more to the input conditional videos than to the generated frames. + +- FVD [35] evaluates the Fréchet Distance between the generated video distribution and the data distribution. We report two versions of the FVD score: (1) against a large dataset of real videos, where the score is relatively high due to the distribution mismatch caused by the out-of-distribution content of our test cases, and (2) against statistics computed from the input test set videos to provide a more relevant comparison.
+ +- CLIP Score [11] evaluates the similarity between generated images and the text prompt. It also reflects the visual quality of the generated frames. + +- GIM-Confidence. We utilize GIM [31], a state-of-the-art 2D image matching method, to measure the consistency of appearance across views. Specifically, we report the proportion of matching pixels across views under different confidence thresholds. Note that GIM focuses on 2D image matching and cannot reflect 3D consistency well. + +- Dust3R-Confidence. To further evaluate 3D multi-view consistency, we use Dust3R [37], a state-of-the-art 3D reconstruction network, to analyze generated freeze-time videos. Dust3R provides pixel-wise confidence scores reflecting 3D multi-view consistency, and we report the proportion of pixels above different confidence thresholds. + +We evaluate VideoScore and FVD on videos played either along the time axis as fixed-view videos, or along the view axis as freeze-time videos, in order to evaluate both the temporal and multi-view consistency of the generated frame grid. GIM- and Dust3R-Confidence are used only in ablations with fixed camera trajectories, where the confidence scores are comparable. + +![](images/4843eabcae56623b14b580f811fd20b4460b852112917cdb040f4f4d968560b9.jpg) +Figure 5. Results from 4Real-Video. We can generate diverse and high-quality dynamic content. + +![](images/3318903c88057d23e8f4a23f0f75cd9988e57b2478a8f49b091e644768472155.jpg) +Figure 6. Deformable 3D Gaussian Splatting reconstruction from the generated 4D videos demonstrates the spatial and temporal consistency of the proposed method. + +Comparison against 4D video generation baselines. There is currently no prior method that functions exactly like ours, so we establish two baselines: (1) we use MotionCtrl [39], a state-of-the-art camera control video generation method, to generate freeze-time videos for each frame of the input fixed-view videos.
The videos are generated with the "Round-RI_90" camera trajectory and the speed parameter set to 4.0 to encourage larger camera motions. (2) We run SV4D [41], a state-of-the-art 4D video model trained specifically for animated 3D objects. + +MotionCtrl fails to generate temporally coherent videos because freeze-time videos are generated independently, ignoring temporal dependencies. It also tends to generate very small camera motion despite being given a large input speed. On the other hand, SV4D fails to create meaningful results when applied to realistic-style videos, which are out of its training domain. In comparison, our method generates realistic and coherent frame grids and achieves higher scores across different metrics, as shown in Table 1. + +Comparison against optimization-based 4D generation baselines. We also compare against recent state-of-the-art 4D generation methods [2, 18, 46, 50] that rely on computationally expensive score distillation sampling [26]. Due to the limited number of samples we can acquire from these methods, and the fact that these samples were generated using different settings (object-centric vs. scene-level), we conducted a user study instead. + +The study involves 10 evaluators per video pair. In each session, evaluators were presented with two anonymized videos. Each video depicted a dynamic object or scene, with the camera moving along a circular trajectory and stopping at 2-4 poses to highlight object motions. We obtained 16 videos for 4Dfy [2], 14 videos for Dream-in-4D [50], 14 videos for AYG [18] and 36 videos for 4Real [46] from their respective project web pages. Evaluators were tasked with selecting their preferences based on seven criteria: motion realism, foreground/background quality, shape realism, general quality, motion quality, and video-text alignment. As shown in Figure 8, our method outperformed the competition in every category by a large margin. More details of the user study are provided in the supplementary. + +![](images/b6bc8133f8c9fd1ae758715fa36537f2aa0e65bf7c41bda89159b2933d0463e2.jpg) +Figure 7. Ablation comparisons. We visually compare the video quality and consistency among different design choices. + +![](images/5ae35ac4ab74a2d22bd10ff7d5a1ab59d55a0ce9fd5d3c992eabfbd3819c4a5f.jpg) +Figure 8. User study against optimization-based 4D generation methods across different rating criteria. + +# 4.3. Ablations + +We analyze our method by comparing it against the following variations: (1) a sequential interleaved architecture (see Eq. (5)); (2) training only with a pseudo-4D video dataset without Objaverse; (3) using hard synchronization; and (4) our full method with soft synchronization. The results are shown in Table 1 and Table 2, and visualized in Fig. 7. Further details of each ablated design are provided in the supplementary material. We make the following observations: First, the proposed parallel architecture achieves better performance compared to the sequential architecture. Second, training our model without any 4D data can still produce competitive results compared to baselines, showing the robustness of our approach. It obtains higher GIM-Confidence because that metric favors image matching only, rather than real 3D consistency. Finally, soft synchronization improves quality over hard synchronization, leading to more coherent and visually appealing outputs. + +# 4.4. Reconstruction from generated 4D videos + +To further validate the effectiveness of our method in generating explicit 3D representations, we fit deformable 3D Gaussian Splatting (3DGS) to the generated 4D videos. Fig. 6 qualitatively shows reconstructed 3DGS at different times and viewpoints. More details of the reconstruction pipeline are included in the supplementary. + +# 5. Conclusion + +We propose 4Real-Video, a novel framework for 4D video generation.
The core idea of our framework is to process a grid of frames using two token streams that run in parallel, with a synchronization layer coordinating between the two streams. Remarkably, our model can generate diverse photorealistic 4D videos without requiring access to a real 4D video dataset. Despite its strengths, our current implementation has several limitations that we aim to address in future work. First, the base video model's small size constrains its capability, limiting the visual quality and resolution of the generated videos. This can be improved by incorporating more advanced and larger-scale video models. Second, our framework currently lacks support for $360^{\circ}$ video generation. Enhancing this capability will involve improving the training of the base video model and incorporating camera pose conditioning. Third, generating freeze-time videos remains a significant challenge, particularly for dynamic elements such as running horses or fires, where robustness is limited. Finally, our approach requires post-processing steps to construct explicit 3D representations of the generated dynamic scenes. In the future, it would be exciting to explore the possibility of a single feedforward model for 4D generation and to further advance the field. + +# References + +[1] Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS, 2023. 4 +[2] Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B Lindell. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In CVPR, 2024. 2, 7 +[3] Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control.
arXiv preprint arXiv:2407.12781, 2024. 2 +[4] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In ICCV, 2021. 3 +[5] Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter1: Open diffusion models for high-quality video generation. arXiv preprint arXiv:2310.19512, 2023. 2 +[6] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. In ICCV, 2023. 2 +[7] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13142-13153, 2023. 2, 3, 5, 6 +[8] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2 +[9] Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024. 2 +[10] Xuan He, Dongfu Jiang, Ge Zhang, Max Ku, Achint Soni, Sherman Siu, Haonan Chen, Abhranil Chandra, Ziyan Jiang, Aaran Arulraj, et al. Mantisscore: Building automatic metrics to simulate fine-grained human feedback for video generation. arXiv preprint arXiv:2406.15252, 2024. 6 +[11] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. CoRR, abs/2104.08718, 2021.
6 +[12] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022. 2 +[13] Stanisław Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, and Yoshua Bengio. Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773, 2017. 4 +[14] Yanqin Jiang, Li Zhang, Jin Gao, Weimin Hu, and Yao Yao. Consistent4d: Consistent $360^{\circ}$ dynamic object generation from monocular video. arXiv preprint arXiv:2311.02848, 2023. 2 +[15] Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon Wetzstein. Collaborative video diffusion: Consistent multi-video generation with camera control. In arXiv, 2024. 2, 3, 4 +[16] Bing Li, Cheng Zheng, Wenxuan Zhu, Jinjie Mai, Biao Zhang, Peter Wonka, and Bernard Ghanem. Vivid-zoo: Multi-view video generation with diffusion model, 2024. 2, 3, 4 +[17] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In CVPR, 2023. 2 +[18] Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, and Karsten Kreis. Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models. In CVPR, 2024. 2, 7 +[19] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In ICCV, 2023. 2 +[20] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 5 +[21] Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, et al.
Snap video: Scaled spatiotemporal transformers for text-to-video synthesis. In CVPR, 2024. 2, 6 +[22] OpenAI. Video generation models as world simulators, 2024. Accessed: 2024-11-08. 2, 3 +[23] Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and trends® in Optimization, 2014. 4 +[24] William Peebles and Saining Xie. Scalable diffusion models with transformers. arXiv preprint arXiv:2212.09748, 2022. 3 +[25] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, ChihYao Ma, Ching-Yao Chuang, David Yan, Dhruv Choudhary, Dingkang Wang, Geet Sethi, Guan Pang, Haoyu Ma, Ishan Misra, Ji Hou, Jialiang Wang, Kiran Jagadeesh, Kunpeng Li, Luxin Zhang, Mannat Singh, Mary Williamson, Matt Le, Matthew Yu, Mitesh Kumar Singh, Peizhao Zhang, Peter Vajda, Quentin Duval, Rohit Girdhar, Roshan Sumbaly, Sai Saketh Rambhatla, Sam Tsai, Samaneh Azadi, Samyak Datta, Sanyuan Chen, Sean Bell, Sharadh Ramaswamy, Shelly Sheynin, Siddharth Bhattacharya, Simran Motwani, Tao Xu, Tianhe Li, Tingbo Hou, Wei-Ning Hsu, Xi Yin, Xiaoliang Dai, Yaniv Taigman, Yaqiao Luo, Yen-Cheng Liu, Yi-Chiao Wu, Yue Zhao, Yuval Kirstain, Zecheng He, Zijian He, Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu, Arun Mallya, Baishan Guo, Boris Araya, Breena Kerr, Carleigh Wood, Ce Liu, Cen Peng, Dimitry Vengertsev, Edgar Schonfeld, Elliot Blanchard, Felix Juefei-Xu, Fraylie Nord, Jeff Liang, John Hoffman, Jonas Kohler, Kaolin Fire, Karthik Sivakumar, Lawrence Chen, Licheng Yu, Luya Gao, Markos Georgopoulos, Rashel Moritz, Sara K. Sampson, Shikai Li, Simone Parmeggiani, Steve Fine, Tara Fowler, Vladan Petrovic, and Yuming Du. Movie gen: A cast of media foundation models, 2024. 2, 3 +[26] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In ICLR, 2023. 2, 7 +[27] Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. Dreamgaussian4d: Generative 4d gaussian splatting.
arXiv preprint arXiv:2312.17142, 2023. 2 +[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 2 +[29] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 2022. 2 +[30] Ruizhi Shao, Youxin Pang, Zerong Zheng, Jingxiang Sun, and Yebin Liu. Human4dit: Free-view human video generation with 4d diffusion transformer. arXiv preprint arXiv:2405.17405, 2024. 2 +[31] Xuelun Shen, Zhipeng Cai, Wei Yin, Matthias Müller, Zijun Li, Kaixuan Wang, Xiaozhi Chen, and Cheng Wang. Gim: Learning generalizable image matcher from internet videos. arXiv preprint arXiv:2402.11095, 2024. 6 +[32] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. In ICLR, 2024. 2 +[33] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. arXiv preprint arXiv:2301.11280, 2023. 2 +[34] Richard Tucker and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. In ACM TOG, 2018. 3 +[35] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, and Sylvain Gelly. Fvd: A new metric for video generation. arXiv preprint arXiv:1812.01717, 2018. 6 +[36] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. In CVPR, 2023. 2 +[37] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In CVPR, 2024.
6 +[38] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In NeurIPS, 2023. 2 +[39] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. Motionctrl: A unified and flexible motion controller for video generation. In ACM SIGGRAPH, 2024. 2, 6, 7 +[40] Daniel Watson, Saurabh Saxena, Lala Li, Andrea Tagliasacchi, and David J Fleet. Controlling space and time with diffusion models. arXiv preprint arXiv:2407.07860, 2024. 2 +[41] Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. arXiv preprint arXiv:2407.17470, 2024. 2, 3, 6, 7 +[42] Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. Camco: Camera-controllable 3d-consistent image-to-video generation. arXiv preprint arXiv:2406.02509, 2024. 2 +[43] Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, and Jing Liao. Direct-a-video: Customized video generation with user-directed camera movement and object motion. In ACM SIGGRAPH, 2024. 2 +[44] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 2, 3 +[45] Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225, 2023. 2 +[46] Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, and Hsin-Ying Lee. 4real: Towards photorealistic 4d scene generation via video diffusion models. In NeurIPS, 2024.
2, 3, 7 +[47] Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, and Yu Qiao. 4diffusion: Multi-view video diffusion model for 4d generation. arXiv preprint arXiv:2405.20674, 2024. 2, 3, 4 +[48] Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee. Animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603, 2023. 2 +[49] Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, and Lijuan Wang. Genxd: Generating any 3d and 4d scenes. arXiv preprint arXiv:2411.02319, 2024. 2 +[50] Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, and Shalini De Mello. A unified approach for text- and image-guided 4d scene generation. In CVPR, 2024. 7 +[51] Joseph Zhu and Peiye Zhuang. Hifa: High-fidelity text-to-3d with advanced diffusion guidance. In ICLR, 2023. 2 \ No newline at end of file diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/images.zip b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..cfc3513525d6131ecfb288e3a333f7eaf32de861 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d860465d16b894e6b934255ecd6d1b9ff0b9ab4b17c00adca1af765daaa4fcd +size 1033856 diff --git a/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/layout.json b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ef554f1d2e4b913b4da9f9f0bc4d0bb6176ccc09 --- /dev/null +++ b/CVPR/2025/4Real-Video_ Learning Generalizable Photo-Realistic 4D Video Diffusion/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:61ed1a7a27667ce46a0616df011897cf709b5985a7753f0fd6ef38d238083b02 +size 381681 diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_content_list.json b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4702c31034c8779b7889fbc09b25974bea3e2ba3 --- /dev/null +++ b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec7b19733a35d3cd12c506fc92ee744c3c2c709b86981e1360a068adb887f2bb +size 81270 diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_model.json b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d3d55db6e2873b0c67a2bd17ad4dce234d597558 --- /dev/null +++ b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee70e2d8378d5cc66e4219cdd7a640fe1b5940db8c1ee678b3579da259d6cad4 +size 105696 diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_origin.pdf b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fe2c5f3db49ecfa61ad4ec956b45d131ba4fa28f --- /dev/null +++ 
b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/1a75cdda-a594-42aa-b287-0cf484866607_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faff08d28fe23e12d5b8232767c54a41fe61fba34eb3be45301a71c6b0abe0f6 +size 1150321 diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/full.md b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0c0d5784a37d754c08c8e46f78e14fff7c29d4ba --- /dev/null +++ b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/full.md @@ -0,0 +1,322 @@ +![](images/d14683986cd9fa854458e3d89884fc78c27b212a7c6048c5b81a0cae32e540d8.jpg) + +# $5\% > 100\%$ : Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks + +Dongshuo Yin $^{1,2*}$ , Leiyi Hu $^{3*}$ , Bin Li $^{4}$ , Youqun Zhang $^{4}$ , Xue Yang $^{1\dagger}$ + +$^{1}$ Department of Automation, Shanghai Jiao Tong University, + +$^{2}$ BNRist, Department of Computer Science and Technology, Tsinghua University, + +3University of Chinese Academy of Sciences, 4Alibaba Group + +yinds@mail.tsinghua.edu.cn, huleiyi21@mails.ucas.ac.cn + +{zhuyi.lb, youqun.zhangyq}@alibaba-inc.com, yangxue-2019-sjtu@sjtu.edu.cn + +# Abstract + +Pre-training & fine-tuning can enhance the transferring efficiency and performance in visual tasks. Recent delta-tuning methods provide more options for visual classification tasks. Despite their success, existing visual delta-tuning art fails to exceed the upper limit of full fine-tuning on challenging tasks. To find a competitive alternative to full fine-tuning, we propose the Multi-cognitive Visual Adapter (Mona) tuning, a novel adapter-based tuning method. 
First, we introduce multiple vision-friendly filters into the adapter to enhance its ability to process visual signals, while previous methods mainly rely on language-friendly linear filters. Second, we add the scaled layer-norm in the adapter to regulate the distribution of input features for visual filters. To fully demonstrate the practicality and generality of Mona, we conduct experiments on representative visual tasks, including instance segmentation on COCO, semantic segmentation on ADE20K, object detection on Pascal VOC, oriented object detection on DOTA/STAR, and image classification on three common datasets. Exciting results illustrate that Mona surpasses full fine-tuning on all these tasks by tuning less than $5\%$ of the backbone's params, and is the only delta-tuning method outperforming full fine-tuning on all tasks. For example, Mona achieves a $1\%$ performance gain on COCO compared to full fine-tuning. Comprehensive results suggest that Mona-tuning is more suitable for retaining and utilizing the capabilities of pre-trained models than full fine-tuning. The code is publicly available at https://github.com/Leiyi-Hu/mona. + +![](images/d90f88200736698c251df7279c6f035f03f04990bd98e76f3ac6978c63d34cf7.jpg) +Performance on Representative Visual Tasks +Figure 1. Comparisons of our method with full fine-tuning and recent delta-tuning art on representative visual tasks. The blue dashed line is the performance of full fine-tuning on ADE20K and COCO. The proposed Mona outperforms full fine-tuning on representative visual tasks, which promotes the upper limit of previous delta-tuning art. The results demonstrate that the adapter-tuning paradigm can replace full fine-tuning and achieve better performance in common visual tasks. Full fine-tuning is not the only preferred solution for transfer learning in the future. + +# 1. Introduction + +The pre-training & fine-tuning paradigm [1] can perform impressive transfer learning between homo-modal tasks, as has been demonstrated in computer vision (CV) [2, 3] and natural language processing (NLP) [4-6]. Pre-trained models are often trained by well-resourced and experienced teams with large amounts of clean data [7]. Exceptional pre-trained models can help hardware- and data-limited teams save plenty of training costs and train well-performing deep models on new tasks [8-11]. In the era of large models, the efficiency of tuning pre-trained models is an important issue. Full fine-tuning, which tunes all parameters in the pre-trained backbone as well as additional task-specific heads/necks during training, has been widely used with great success in CV tasks. Much impressive CV art pushes the limits of visual tasks through pre-training & full fine-tuning. However, is full fine-tuning still the best way to fine-tune visual tasks now? + +Apart from full fine-tuning, delta tuning [12, 13] has recently attracted attention in NLP and CV tasks. Delta tuning comes from NLP; it tunes only part of the backbone network or extra lightweight structures for efficient transfer learning [12]. Delta-tuning methods generally fix most backbone parameters and achieve comparable or even better performance than full fine-tuning on simple tasks (including classification tasks in NLP [14, 15] and CV [16-19]). VPT [16] first explores the potential of prompt-tuning on visual classification tasks. LoRand [20] pioneers adapter-tuning on dense predictions and reduces the gap between delta tuning and full fine-tuning on visual tasks. However, existing methods cannot outperform full fine-tuning on visual recognition tasks, including semantic and instance segmentation. + +To challenge the dominance of full fine-tuning in CV, we propose Mona-tuning, a novel tuning paradigm based on Multi-cognitive visual adapters (Mona).
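To make the delta-tuning parameter budget concrete: the backbone is frozen and only small adapter matrices are trained. The layer sizes and bottleneck rank below are our own illustrative choices, not Mona's actual configuration:

```python
import numpy as np

# Frozen backbone weights vs. trainable adapter weights (toy sizes).
backbone = {"w1": np.zeros((1024, 1024)), "w2": np.zeros((1024, 1024))}
adapter  = {"down": np.zeros((1024, 48)), "up": np.zeros((48, 1024))}

total = sum(p.size for p in backbone.values()) + sum(p.size for p in adapter.values())
trainable = sum(p.size for p in adapter.values())
ratio = trainable / total  # a small bottleneck keeps this well under 5%
```

With a bottleneck rank of 48 against two 1024x1024 backbone layers, the trainable fraction is roughly 4.5%, illustrating how adapter-tuning stays below the "5% of backbone params" regime discussed above.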
We analyse recent art and summarise two issues in existing visual adapters. First, the designs of existing CV adapters [17-19] follow the linear adapters in NLP [21]. In fact, visual tasks process visual signals, which are significantly different from linguistic signals and have unique 2D convolutional operations [22-24]. Our experiments show that convolution-based filters can better transfer visual knowledge from pre-trained models to other tasks, so we propose a practical convolution-based adapter for visual tasks. Second, most existing adapters compress the upstream features with a single linear layer [17, 18]. Previous works claim that models have different cognition of features at different filter scales [25-27]. Thus, we employ multiple convolutional filters behind the adapter's reduction layer to enhance the cognitive abilities of the adapters. We demonstrate the generality and superiority of Mona-tuning on a wide range of representative visual tasks, including image classification, object detection, semantic segmentation, instance segmentation, and oriented object detection [28, 29]. We employ the SwinTransformer [2] series trained on ImageNet-22k [30] as pre-trained models. Extensive experiments indicate that the proposed method outperforms the traditional full fine-tuning paradigm on both simple image classification tasks and complex visual tasks. For example, Mona-tuning outperforms full fine-tuning on the COCO dataset [31] by $1\%$ mAP. The results suggest that full fine-tuning is not the optimal choice for many visual tasks. As far as we know, Mona is the only adapter-based tuning method that surpasses full fine-tuning on semantic segmentation, instance
Our contributions are three-fold:

- We demonstrate that adapter-based tuning can surpass full fine-tuning on visual tasks, performing better than full fine-tuning with fewer new parameters.
- We propose Mona-tuning, a novel and practical training paradigm based on multi-cognitive visual adapters (Mona). Mona employs vision-friendly filters to optimise traditional linear adapters and improves the transfer efficiency of visual pre-trained knowledge through multiple cognitive perspectives.
- Extensive experiments demonstrate that Mona-tuning outperforms full fine-tuning and other recent art on representative visual tasks, including image classification, object detection, semantic segmentation, instance segmentation, and oriented object detection.

# 2. Related Work

# 2.1. Delta-tuning

The development of large models has produced dramatic shocks throughout artificial intelligence [32-34]. The efficiency of transfer learning attracts researchers' interest [17-19, 35]. Delta tuning [12, 13, 21, 36, 37] (or parameter-efficient fine-tuning, PEFT) is dedicated to improving the efficiency of fine-tuning. Delta-tuning methods can be divided into three groups [12, 38]. The first group fixes most of the parameters in the pre-trained backbone and fine-tunes a small number of them, e.g., BitFit [39] tunes biases, NormTuning [40] tunes norm layers, and Partial-1 [41] tunes only the last block. The second group reparameterises some parameters of the pre-trained model, e.g., LoRA [36] optimises low-rank subspaces. The third group fixes the pre-trained backbone's original parameters and adds additional trainable structures, including the prompt series [16, 42, 43] and the adapter series [19, 44, 45]. Our experiments compare Mona with all three groups.

# 2.2. Computer Vision Meets Delta-tuning

Although derived from NLP, delta tuning has also been explored in CV. VPT [16] is the first to introduce delta-tuning (prompt-tuning) to visual classification tasks.
[46] adds adapters to a trainable backbone to improve performance rather than parameter efficiency. [47, 48] differentiate the parameters and fine-tune some of the parameters in the extra structures. SSF [49] scales and shifts features for fine-tuning. [50] designs adapters that can be adjusted to a specific task. AdaptFormer [19] designs a parallel adapter structure to improve delta-tuning performance on visual classification. KAdaptation [45] optimises the adapter through the Kronecker product. The above art pioneers delta-tuning on visual tasks, revealing its potential for visual classification. LoRand [20] brings impressive performance on dense prediction tasks via multi-branch low-rank adapters, but still cannot surpass full fine-tuning on all visual recognition tasks. Recent art indicates that delta-tuning cannot completely replace full fine-tuning on vision tasks. Therefore, we propose Mona-tuning, an alternative to full fine-tuning for more visual tasks, which outperforms full fine-tuning in both new parameter size and performance.

![](images/45cf9e939b51d652ba14469b3acb6a5f16e96dce04a95a35a87f65dab664a4a1.jpg)
Figure 2. Left: The proposed Mona-tuning. We add Mona after MSA and MLP in each SwinBlock. The proposed method fixes the parameters of pre-trained layers and updates the parameters of Mona. Right: Details of Mona. Mona has a scaled LayerNorm before the down projection. A multi-cognitive convolutional filter group and an aggregation filter follow the down projection. We add skip-connections at four places inside Mona to strengthen its adaptation capabilities. Mona enables the adapter-based fine-tuning paradigm to comprehensively outperform full fine-tuning on typical visual tasks.

# 3. Methods

In this section, we present the proposed method in three parts: adapter-tuning, Mona, and parameter analysis.

# 3.1. Adapter-tuning

Previous work [20] discusses adapter fine-tuning, and we briefly introduce the related concepts here. Full fine-tuning updates all parameters in the pre-trained backbone, while adapter-tuning fixes the pre-trained parameters and updates the parameters in adapters. For a dataset $D = \{(x_{i},y_{i})\}_{i = 1}^{N}$, the optimisation processes of full fine-tuning and adapter-tuning can be expressed as Eq. 1 and Eq. 2:

$$
\theta \leftarrow \underset{\theta}{\arg\min}\ loss(D, \theta), \tag{1}
$$

$$
\omega \leftarrow \underset{\omega}{\arg\min}\ loss(D, \theta_{F}, \omega), \tag{2}
$$

where $loss$ is the training loss, $\theta$ represents the parameters of the whole framework, and $\theta_{F}$ is the fixed parameters in adapter-tuning. $\omega$ represents the updated parameters in adapter-tuning, including parameters in adapters and outside the backbone.

# 3.2. Mona

Typical linear adapters suffer from two problems when applied to visual tasks. First, fixed layer parameters cannot be fine-tuned to match the data distribution of new tasks, so a biased feature distribution is passed to the adapter. It is therefore important for the adapter to optimise the input distribution it receives from the fixed layers. Second, the vanilla adapter [21] is designed for natural language signals and is not optimised for visual signals. Previous CV adapter art [16-19] is based on linear filters (mainly down projection, nonlinear activation, up projection, and skip connections), which is inefficient for transferring visual knowledge. To address these two issues, we perform input optimisation and design multi-cognitive visual filters.

Input Optimization. We enable Mona to adjust the input distributions and the proportion of inputs from the fixed layers. Specifically, we add a norm layer and two learnable weights, $s_1$ and $s_2$, to the top of Mona to adjust the input distribution.
Previous work indicates that normalization [51] helps to stabilize the forward input distribution and the backpropagated gradient. In practice, we find that LayerNorm (LN) [51] works better than BatchNorm [52], so we employ LN in Mona. Figure 2 illustrates our design, which can be formulated as:

$$
x_{norm} = s_{1} \cdot |x_{0}|_{LN} + s_{2} \cdot x_{0}, \tag{3}
$$

where $|\cdot|_{LN}$ denotes LayerNorm and $x_0$ denotes the original input of Mona.

Multi-Cognitive Visual Filters. For visual cognition, human eyes process visual signals at different scales and integrate them for better understanding [53-55]. Adapters should likewise process upstream features from multiple cognitive perspectives for better performance on downstream tasks. We introduce multiple convolutional filters to Mona to increase the cognitive dimension. Instead of standard convolutions, depth-wise convolutions [55] (DWConv) are employed in Mona to minimize the additional parameter size. Specifically, after the down projection, the upstream features pass through three DWConv filters with kernel sizes $3 \times 3$, $5 \times 5$ and $7 \times 7$. We average the results of the three filters and aggregate the features with a $1 \times 1$ convolution. Skip-connections are added around both types of convolutions. We use three depth-wise convolutions with weights $\omega_{dw}^{i} \in \mathbb{R}^{C_{in}^{D} \times K_{i} \times K_{i} \times C_{out}^{D}}$ ($i \in \{1,2,3\}$) for the first multi-filter convolution and a point-wise convolution with weight $\omega_{pw} \in \mathbb{R}^{C_{in}^{P} \times 1 \times 1 \times C_{out}^{P}}$ for the second convolution. The two convolution steps can be formulated as follows:

$$
f_{dw}(x) = x + avg\left(\sum_{i=1}^{3} \omega_{dw}^{i} \hat{\otimes} x\right), \tag{4}
$$

$$
f_{pw}(x) = x + \omega_{pw} \bar{\otimes} x,
$$

where $\hat{\otimes}$ and $\bar{\otimes}$ denote depth-wise and point-wise convolution.
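The input optimization and multi-scale filtering above (Eqs. 3 and 4), together with the GeLU and up-projection steps described next, can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: the weights are random, the dimensions ($m = 96$, $n = 64$, a $14 \times 14$ feature map) are hypothetical, biases are omitted from the forward pass, and the GeLU uses the common tanh approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-6):
    # normalize over the channel (last) axis
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def dwconv(x, w):
    # depth-wise convolution with 'same' padding; x: (H, W, C), w: (k, k, C)
    k = w.shape[0]
    p = k // 2
    H, W, _ = x.shape
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + H, j:j + W, :] * w[i, j]
    return out

def mona_forward(x0, n=64):
    m = x0.shape[-1]
    s1, s2 = 1.0, 1.0                        # learnable scalars in the paper
    x = s1 * layer_norm(x0) + s2 * x0        # scaled LayerNorm + skip (Eq. 3)
    Wd = 0.01 * rng.standard_normal((m, n))  # down projection m -> n
    Wu = 0.01 * rng.standard_normal((n, m))  # up projection n -> m
    z = x @ Wd
    # three depth-wise filters (3x3, 5x5, 7x7), averaged, with a skip (Eq. 4)
    f = sum(dwconv(z, 0.01 * rng.standard_normal((k, k, n))) for k in (3, 5, 7))
    z = z + f / 3.0
    Wp = 0.01 * rng.standard_normal((n, n))  # 1x1 point-wise aggregation
    z = z + z @ Wp                           # point-wise conv + skip
    g = 0.5 * z * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z ** 3)))
    return x0 + g @ Wu                       # GeLU, up projection, outer skip

def mona_param_count(m, n=64):
    # LN (2m) + two scalars + down/up projections with biases + DWConv + PWConv weights
    return (2 * m + 2) + (2 * m * n + m + n) + (9 + 25 + 49) * n + n * n

x0 = rng.standard_normal((14, 14, 96))  # hypothetical 14x14 feature map, m = 96
y = mona_forward(x0)
print(y.shape)  # (14, 14, 96)
print(mona_param_count(96))
```

The four skip connections in `mona_forward` correspond to the four skip-connections noted in Figure 2; the weight scales are arbitrary illustrative choices.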
Then, the features are nonlinearized by GeLU and recovered by the up projection. The overall calculation process of Mona can be formulated as:

$$
x = x_{0} + U^{l}\left(\sigma\left(f_{pw}\left(f_{dw}\left(D^{l}\left(x_{norm}\right)\right)\right)\right)\right), \tag{5}
$$

where $D^l$ and $U^l$ denote the down and up projections of the $l^{th}$ adapter, and $\sigma$ denotes the GeLU activation.

# 3.3. Parameter Analysis

The parameters of Mona come from the LN, the scaling factors, the linear layers, the DWConv, and the $1 \times 1$ conv. Assuming that the input dimension of the adapter is $m$ and the dimension after the down projection is $n$, the parameters of the LN and scaling factors are $2m + 2$, the parameters of the two linear layers are $2mn + m + n$, the parameters of the DWConv layers are $(3^2 + 5^2 + 7^2)n = 83n$, and the PWConv has $n^2$. The total parameter count of each Mona module is:

$$
(2n + 3)m + n^{2} + 84n + 2. \tag{6}
$$

For each block, all Mona parameters amount to $2 \times ((2n + 3)m + n^2 + 84n + 2)$. We set $n$ to a constant (64) to reduce the parameters in Mona.

# 4. Experiments

We conduct extensive experiments on multiple representative visual tasks to demonstrate the superiority of Mona-tuning. This section includes experimental settings, results, convergence analysis, and ablation experiments. Hyperparameters and detailed training settings are presented in the Supplementary Material.

# 4.1. Datasets

Object Detection. Pascal VOC 0712 [56] has $16\mathrm{k} / 5\mathrm{k}$ training/validation images and is used for object detection tasks. We employ Swin-Large + RetinaNet for training. The evaluation metric for object detection is the commonly used $\mathrm{AP}_{box}$.

Semantic Segmentation. ADE20K [57] is the most widely used semantic segmentation dataset, containing 20K training and 2K validation images. We employ Swin-Large + UperNet for experiments on semantic segmentation.
The evaluation metric is the commonly used mIoU.

Instance Segmentation. MS COCO [31] is a representative instance segmentation dataset with 118k training images and 5k validation images. We employ Swin-Base + Cascade Mask R-CNN for training. The evaluation metrics for the instance segmentation task are $\mathrm{AP}_{box}$ and $\mathrm{AP}_{mask}$.

Oriented Object Detection. Oriented object detection includes angle information in the annotation and inference process, which can effectively improve the performance and efficiency of object detection in fields such as remote sensing. This task requires more accurate annotations and more complex detection models, making it more challenging than horizontal object detection. Two representative remote sensing datasets, DOTA [58] and STAR [59], are selected for our experiments. We also experiment with multiple detection frameworks on the more challenging STAR dataset. The metric here is $\mathrm{AP}_{box}$.

Image Classification. Classification tasks have been well studied in previous art. We also conduct experiments on Oxford 102 Flower [60], Oxford-IIIT Pet [61], and the VOC 2007 classification dataset [62] to increase the breadth of our experiments. The top-1, top-5, and average accuracy of each method are reported.

# 4.2. Pre-trained Models and Toolkits

The Swin Transformer series [2] is employed as the backbone for all experiments. The pre-trained models are trained on ImageNet-22k [30], and toolkits such as MMDetection [63], MMSegmentation [64], MMRotate [65] and MMClassification [66] are used for verification. The image resolution of the pre-training task is $224 \times 224$. Most tasks employ Swin-Large as the backbone. The backbones for COCO, DOTA, and STAR are Swin-Base, considering the memory consumption of these tasks.

# 4.3. Baselines

We compare Mona with multiple recent methods.
The baselines are grouped into methods without and with extra structure:

- Without extra structure:
- FULL: Update all parameters in the framework.
- FIXED: Fix the backbone and update the other parameters.
- BITFIT [39]: Update the biases in the backbone and the parameters outside the backbone.
- NORMTUNING [40]: Update the norm layers in the backbone and the parameters outside the backbone.
- PARTIAL-1 [41]: Update the last block in the backbone and the parameters outside the backbone.
- With extra structure:
| Swin-B (89M) | Trained\* Params | % | ΔFull | Extra Structure | AP_Box | ΔFull | AP_Mask | ΔFull |
|---|---|---|---|---|---|---|---|---|
| *Baselines* | | | | | | | | |
| FULL | 89.14 M | 100.00 % | - | × | 52.40 % | - | 45.10 % | - |
| FIXED | 0.00 M | 0.00 % | - 100.00 % | × | 48.00 % | - 4.40 % | 41.60 % | - 3.50 % |
| BITFIT | 0.21 M | 0.23 % | - 99.77 % | × | 50.10 % | - 2.30 % | 43.60 % | - 1.50 % |
| NORMTUNING | 0.06 M | 0.07 % | - 99.93 % | × | 50.10 % | - 2.30 % | 43.50 % | - 1.60 % |
| PARTIAL-1 | 12.95 M | 14.53 % | - 85.47 % | × | 50.60 % | - 1.80 % | 43.70 % | - 1.40 % |
| ADAPTER | 3.19 M | 3.58 % | - 96.42 % | ✓ | 52.10 % | - 0.30 % | 45.00 % | - 0.10 % |
| LORA | 3.06 M | 3.43 % | - 96.57 % | ✓ | 50.40 % | - 2.00 % | 43.90 % | - 1.20 % |
| ADAPTFORMER | 1.60 M | 1.79 % | - 98.21 % | ✓ | 51.70 % | - 0.70 % | 44.60 % | - 0.50 % |
| LORAND | 4.68 M | 5.23 % | - 94.77 % | ✓ | 51.90 % | - 0.50 % | 44.70 % | - 0.40 % |
| *Our Method* | | | | | | | | |
| MONA | 4.16 M | 4.67 % | - 95.33 % | ✓ | **53.40 %** | + 1.00 % | **46.00 %** | + 0.90 % |

Table 1. Results of baselines and our method on the COCO benchmark (Cascade Mask R-CNN). Swin-B is employed as the pre-trained model. We present the numbers and percentages of trainable backbone parameters on the left and the performances on the right. \* denotes the trainable parameters in backbones. The best AP in each column is bolded.
| Swin-L (198M) | Trained\* Params | % | ΔFull | Extra Structure | AP_Box (Pascal VOC, RetinaNet) | ΔFull | mIoU (ADE20K, UperNet) | ΔFull |
|---|---|---|---|---|---|---|---|---|
| *Baselines* | | | | | | | | |
| FULL | 198.58 M | 100.00 % | - | × | 83.70 % | - | 51.18 % | - |
| FIXED | 0.00 M | 0.00 % | - 100.00 % | × | 83.80 % | + 0.10 % | 46.84 % | - 4.34 % |
| BITFIT | 0.30 M | 0.15 % | - 99.85 % | × | 85.40 % | + 1.70 % | 48.37 % | - 2.81 % |
| NORMTUNING | 0.10 M | 0.05 % | - 99.95 % | × | 85.50 % | + 1.80 % | 47.89 % | - 3.29 % |
| PARTIAL-1 | 28.77 M | 14.53 % | - 85.47 % | × | 85.50 % | + 1.80 % | 47.44 % | - 3.74 % |
| ADAPTER | 4.61 M | 2.33 % | - 97.67 % | ✓ | 86.70 % | + 3.00 % | 50.78 % | - 0.40 % |
| LORA | 4.57 M | 2.31 % | - 97.69 % | ✓ | 85.40 % | + 1.70 % | 50.34 % | - 0.84 % |
| ADAPTFORMER | 2.34 M | 1.18 % | - 98.82 % | ✓ | 86.60 % | + 2.90 % | 50.83 % | - 0.35 % |
| LORAND | 5.20 M | 2.62 % | - 97.38 % | ✓ | 86.90 % | + 3.20 % | 50.93 % | - 0.25 % |
| *Our Method* | | | | | | | | |
| MONA | 5.08 M | 2.56 % | - 97.44 % | ✓ | **87.30 %** | + 3.60 % | **51.36 %** | + 0.18 % |
Table 2. Results of baselines and our method on the Pascal VOC and ADE20K benchmarks. Swin-L is employed as the pre-trained model. We present the numbers and percentages of trainable backbone parameters on the left and the performances on the right. \* denotes the trainable parameters in backbones. The best AP/mIoU in each column is bolded.

(The pre-trained layers in these baselines are fixed, and the adapter intermediate dimensions are all 64, following AdaptFormer [19]):

- ADAPTER [21]: Add standard adapter layers after the MSA/MLP layers of each SwinBlock.
- LORA [36]: Add parallel learnable matrices to the multi-head attention weights.
- ADAPTFORMER [19]: Add parallel adapter layers with scale weights to each MLP layer.
- LORAND [20]: Add LoRand++ ($\alpha = 4$, $\beta = 16$) layers after the MSA/MLP of each SwinBlock. LoRand++ has the best performance among its variants, so this most challenging setting is chosen for comparison.

# 4.4. Main Results

Instance segmentation on COCO is challenging. From Table 1, we find that Mona outperforms all PEFT baselines and is the only method that surpasses full fine-tuning, doing so by a full $1\%$ AP. The COCO experiments effectively demonstrate the capability of the proposed method and present a better option than full fine-tuning in terms of both storage and performance. Among the delta-tuning methods, most baselines without extra structure save more new parameters (except Partial-1), but their average performance is lower
| Method | Flowers102 top-1 | Flowers102 top-5 | OxfordPets top-1 | OxfordPets top-5 | VOC2007 top-1 | VOC2007 top-5 | Average top-1 | Average top-5 |
|---|---|---|---|---|---|---|---|---|
| *Baselines* | | | | | | | | |
| FULL | 99.5772 | 99.8536 | 94.6579 | 99.6257 | 84.1276 | 96.9507 | 92.7876 | 98.8100 |
| FIXED | 99.3007 | 99.8374 | 94.2219 | **99.9182** | 85.0162 | 98.9499 | 92.8463 | 99.5685 |
| BITFIT | 99.5772 | 99.8211 | 95.3393 | **99.9182** | 85.6018 | 99.3336 | 93.5061 | 99.6910 |
| NORMTUNING | 99.5284 | 99.8374 | 95.2303 | 99.8910 | 85.5210 | 99.2528 | 93.4266 | 99.6604 |
| PARTIAL-1 | 99.6585 | 99.8374 | 95.3938 | 99.8637 | 84.9354 | 98.6066 | 93.3292 | 99.4359 |
| ADAPTER | 99.5934 | 99.8536 | 95.3393 | 99.8092 | **87.0355** | 99.1317 | 93.9894 | 99.6144 |
| LORA | 99.5446 | 99.8536 | 95.1485 | 99.8910 | 85.7028 | 99.3134 | 93.4653 | 99.6860 |
| ADAPTFORMER | 99.5609 | 99.8536 | 95.2576 | 99.8365 | 86.2884 | 99.2730 | 93.7023 | 99.6544 |
| LORAND | 99.5725 | 99.8536 | 95.3515 | 99.8910 | 86.6534 | 99.3741 | 93.8591 | 99.7062 |
| *Our Method* | | | | | | | | |
| MONA | **99.6764** | **99.9024** | **95.4765** | **99.9182** | 86.9709 | **99.5057** | **94.0413** | **99.7592** |

Table 3. Results of baselines and our method on three classification datasets. Swin-L is employed as the pre-trained model. We report the top-1 accuracy $(\%)$ and top-5 accuracy $(\%)$ on each dataset. The best result in each column is bolded.
| Swin-B (89M) | DOTA-v1.0, Oriented R-CNN (Faster R-CNN) | STAR, Oriented R-CNN (Faster R-CNN) | STAR, KLD (RetinaNet) | STAR, H2RBox-v2 (FCOS) |
|---|---|---|---|---|
| *Baselines* | | | | |
| FULL | 78.31 % | 38.63 % | 30.33 % | 30.29 % |
| FIXED | 74.10 % | 30.83 % | 23.81 % | 26.01 % |
| BITFIT | 76.05 % | 34.51 % | 28.17 % | 29.41 % |
| NORMTUNING | 75.82 % | 33.13 % | 27.12 % | 27.79 % |
| PARTIAL-1 | 75.72 % | 33.96 % | 28.53 % | 28.89 % |
| ADAPTER | 78.27 % | 37.97 % | 30.35 % | 30.24 % |
| LORA | 75.91 % | 33.80 % | 27.48 % | 28.95 % |
| ADAPTFORMER | 77.43 % | 35.95 % | 29.36 % | 30.11 % |
| LORAND | 77.65 % | 36.44 % | 29.83 % | 28.85 % |
| *Our Method* | | | | |
| MONA | **78.44 %** | **39.45 %** | **30.90 %** | **31.34 %** |
Table 4. Results on the DOTA and STAR benchmarks. Swin-B is employed as the pre-trained model. The best AP in each column is bolded.

than that of the baselines with extra structure. Among the baselines with additional structure, adapter-based approaches are superior to the reparameterisation-based LoRA: although LoRA performs well on NLP tasks, Table 1 shows that it performs poorly on computer vision tasks. Table 1 also indicates that the performance of delta-tuning is not directly related to parameter size. Partial-1 has the most trainable parameters, but its performance is significantly lower than that of the adapter-based baselines. This result suggests that superior module design can effectively enhance the transfer efficiency of pre-trained models while avoiding massive new parameters.

Table 2 shows the results on Pascal VOC (object detection) and ADE20K (semantic segmentation). Again, Mona outperforms all other methods in Table 2, surpassing full fine-tuning by $3.6\%$ and $0.18\%$ on the two tasks, respectively. Table 2 again indicates that full fine-tuning is not the best choice for visual transfer learning. Interestingly, all baselines surpass full fine-tuning on VOC, which differs from COCO and ADE20K. The relatively small amount of data in VOC may lead to over-fitting when fully fine-tuning a 198M Swin-Large pre-trained model. Compared to full fine-tuning, the other methods fix most pre-trained parameters, so the model performance is less likely to collapse severely during tuning. NLP scholars treat similar cases as low-resource cases [67, 68]; VOC here can be considered a low-resource case in CV. For ADE20K, the performance gaps between the baselines without additional structure and the adapter-based baselines are more significant than on VOC and COCO.

![](images/0705d21537604a38f588bc69096fe95f35330407e2c5f389e93921b4f49c058f.jpg)
Figure 3. Loss curves. Among all the methods, the proposed method converges faster and significantly exceeds full fine-tuning.
For parameter sizes, most methods in Tables 1 and 2 (except Partial-1) introduce less than $5\%$ new backbone parameters, which is characteristic of delta-tuning. Despite the slight increase in parameters, Mona still outperforms previous art and breaks the full fine-tuning performance ceiling by a wide margin.

Table 4 shows the performance on the more challenging oriented object detection tasks. First, Columns 2-3 of Table 4 show that Mona outperforms full fine-tuning and the other efficient fine-tuning methods on both datasets with Oriented R-CNN [69]. Second, STAR has more instances and classes than DOTA and is therefore more challenging, so we experiment on STAR with more frameworks. Columns 4-5 show the results of all methods with KLD [70] and H2RBox-v2 [71]; Mona outperforms all baseline methods on these frameworks. Table 4 further illustrates that the proposed adapter can lead to performance breakthroughs on a wider range of visual tasks.

For classification tasks, we show the individual and average results on three classification datasets in Table 3. Mona outperforms all baselines on Flowers102 and OxfordPets, and achieves the best average results. Table 3 indicates that Mona transfers efficiently on relatively simple tasks as well. In addition, we find that the average results of all delta-tuning methods surpass full fine-tuning, which is similar to the conclusions in previous art [18]. Compared to classification, complex dense prediction tasks reveal the advantages and disadvantages of different tuning approaches more intuitively.

In summary, the results in Tables 1 to 4 can be summarised in two aspects. 1) In terms of performance, the widely used full fine-tuning paradigm in works like Swin is no longer the optimal choice for visual tasks.
The proposed Mona-tuning surpasses the performance ceiling of full fine-tuning in representative tasks such as instance/semantic segmentation, object detection, image classification, and oriented object detection. Specifically, Mona achieves a $1\%$ AP gain over full fine-tuning on the challenging COCO instance segmentation task. 2) Mona, based on multi-cognitive visual filtering, surpasses recent remarkable baselines in most tasks. Mona comprehensively enhances the practicality and generality of delta-tuning in visual tasks. Mona-tuning not only significantly reduces storage costs, but also further elevates the performance ceiling of visual tasks.

# 4.5. Loss Analysis

We present the loss convergence process of Mona and five representative baselines on the object detection task (Pascal VOC) in Figure 3. The proposed method shows a significant advantage in convergence compared to full fine-tuning, which explains its better performance on VOC. Mona also converges faster than the other delta-tuning methods, suggesting that multi-cognitive visual filters can better process visual features and accelerate the convergence of transfer learning. The convergence analysis again demonstrates that the proposed method is a highly competitive visual transfer learning method and that full fine-tuning is no longer the optimal choice for visual tasks.

# 4.6. Ablations

In this section, we ablate the potential factors that affect model performance, including intermediate dimensions, model sizes, detailed designs, and frameworks. All ablation experiments are conducted on Pascal VOC.

Intermediate Dimension. The workflow of an adapter is to compress the input from the pre-trained layers into a low-dimensional feature space and transfer the pre-trained knowledge by tuning the adapter. Thus, the intermediate dimension is important for adapters. We ablate the intermediate dimension of Mona in Table 5 with all other settings fixed. The dimension candidates are 32, 64, and 128. Table 5 shows that the 64-dimension setting surpasses the 32- and 128-dimension settings. Chen et al. [19] also study the intermediate dimension of AdaptFormer and find that the 64-dimension AdaptFormer surpasses its 32- and 256-dimension versions on classification tasks, which is consistent with our results. The results of Table 5 and Chen et al. indicate that adapter performance is not proportional to the intermediate dimension: a larger number of adapter parameters does not necessarily lead to better results.

| Intermediate Dimension | Trained Params\* | AP_Box |
|---|---|---|
| 32 | 1.35 % | 86.8 % |
| 64 | 2.56 % | **87.3 %** |
| 128 | 5.22 % | 87.1 % |

Table 5. Ablations of intermediate dimensions. The 64-dimension setting achieves the best performance. \* denotes the trainable parameters in backbones.

Model Sizes. In Table 6, we change the size of the backbone under the same settings; the model candidates are the 29M Swin-T, 88M Swin-B, and 197M Swin-L. We can draw three conclusions from Table 6. First, the more parameters the backbone has, the smaller the proportion of Mona parameters under the same Mona setting. This indicates that Mona-tuning saves more parameters as the backbone grows. Existing visual models are getting larger and larger: InternImage-H [72] reaches 1.08B parameters, and SwinV2-G [73] reaches 3B. Parameter-efficient Mona-tuning can save billions of parameters and massive storage costs in the era of large models. Second, Mona surpasses full fine-tuning under all settings, and its performance improves as the model size grows. Table 6 shows that Mona-tuning can improve training efficiency and performance in smaller models as well. We have just discussed Mona's advantages for large models; however, many resource-limited research teams and project groups use small models, and Mona-tuning can also help resource-limited researchers leverage high-performance models in their own applications. Third, the proposed method is more capable of stimulating the potential of large models than full fine-tuning. From Swin-T to Swin-L, full fine-tuning brings a $3.6\%$ performance gain, while Mona brings $3.8\%$. In other words, Mona performs better as the model gets larger and can help further raise the upper bound for performance-sensitive tasks.

| Model | FULL (VOC) | MONA (VOC) | Param % (Mona) |
|---|---|---|---|
| Swin-T | 80.1 % | 83.5 % | 4.87 % |
| Swin-B | 81.6 % | 86.5 % | 4.06 % |
| Swin-L | 83.7 % | 87.3 % | 2.56 % |

Table 6. Performance of Mona on models of different sizes. The results indicate that model size does not constrain Mona's superiority.

Detailed Designs of Mona. Table 7 presents ablation experiments on two detailed designs of Mona: the scaled normalization and the convolution kernel settings. The first two rows demonstrate that the scaled normalization effectively enhances Mona's performance. The subsequent seven rows compare Mona's performance under different convolution kernel settings for the multi-cognitive visual filters (each setting averages its convolution results). The results indicate that simply using more or larger convolution kernels does not necessarily bring performance gains. Among the seven settings, [3,5,7] performs best and is adopted in Mona. These ablations demonstrate the rationality and necessity of the Mona structure.

| Scaled Norm | Kernel Settings | AP_Box |
|---|---|---|
| × | [3,5,7] | 86.9 % |
| ✓ | [3,5,7] | **87.3 %** |
| ✓ | [3] | 86.9 % |
| ✓ | [5] | 87.0 % |
| ✓ | [7] | 86.7 % |
| ✓ | [3,3,3] | 86.8 % |
| ✓ | [3,5] | 86.9 % |
| ✓ | [5,7] | 87.1 % |

Table 7. Ablations of detailed structures in Mona. Scaled normalization effectively improves performance, and [3,5,7] performs best among all kernel settings.

Results on PVT. We also test Mona on another influential visual framework, PVT [74], to demonstrate the generalizability of the proposed approach. Table 8 shows that Mona outperforms not only full fine-tuning but also the other baselines on PVT (PVT-Large), which indicates that Mona is effective on a wider range of visual frameworks.
| Method | AP_Box |
|---|---|
| Full (PVT-L) | 76.1 % |
| Adapter (PVT-L) | 78.9 % |
| LoRA (PVT-L) | 77.6 % |
| AdaptFormer (PVT-L) | 79.2 % |
| LoRand (PVT-L) | 79.3 % |
| Mona (PVT-L) | **80.3 %** |
Table 8. Results on PVT. Mona outperforms other competitive baseline methods on PVT, demonstrating that Mona can achieve similar results across a broader range of visual frameworks.

# 5. Discussion

Inference cost is a concern for industry. Methods based on adapter modules (such as Adapter, AdaptFormer, LoRand, and Mona) introduce additional structures and thus a small increase in inference cost. Reparameterisation-based approaches (like LoRA) do not affect inference cost but tend to perform poorly on visual tasks. The utility of adapters could be greatly enhanced by combining the strengths of both approaches, and we will continue to work towards this goal.

We are also interested in the value of delta-tuning for mobile applications. In the era of large models, storing a single Swin-H [73] requires 2.5 GB, and Swin-G even reaches 4 GB, whereas the top-10 U.S. apps collectively required only 2.2 GB in May 2021 [47]. Recently, many manufacturers have incorporated AI chips into mobile devices to enable large-model inference. With delta-tuning, mobile devices only need to store one large model and several small structures (such as Mona) on their hard drives or in memory, while achieving impressive performance across multiple complex tasks such as style transfer and portrait beautification. We hope that visual efficient fine-tuning will not only break the performance ceiling of full fine-tuning but also accelerate the deployment and popularization of large visual models on mobile devices.

# 6. Conclusion

This paper proposes a novel visual fine-tuning method, multi-cognitive visual adapter (Mona) tuning, which effectively enhances the efficiency and performance of visual fine-tuning.
Comprehensive experiments demonstrate that the proposed Mona outperforms the traditional full fine-tuning paradigm and other delta-tuning methods across representative tasks, including instance segmentation, semantic segmentation, object detection, image classification, and oriented object detection. In the era of large models, full fine-tuning is no longer the optimal choice for visual tasks. We hope that Mona-tuning can improve the knowledge transfer efficiency of large models and bring performance breakthroughs on more visual tasks.

# References

[1] Jindong Wang and Yiqiang Chen. Pre-training and fine-tuning. In Introduction to Transfer Learning: Algorithms and Practice. 2022.
[2] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
[3] Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In CVPR, 2023.
[4] Rosalia Tufano, Simone Masiero, Antonio Mastropaolo, Luca Pascarella, Denys Poshyvanyk, and Gabriele Bavota. Using pre-trained models to boost code review automation. In ICSE, 2022.
[5] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys, 2023.
[6] Robert Tinn, Hao Cheng, Yu Gu, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Fine-tuning large neural language models for biomedical natural language processing. Patterns, 2023.
[7] Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, and Jing Bai. Parameter-efficient is not sufficient: Exploring parameter, memory, and time efficient adapter tuning for dense predictions. ACM MM, 2023.
[8] Chompunuch Sarasaen, Soumick Chatterjee, Mario Breitkopf, Georg Rose, Andreas Nürnberger, and Oliver Speck. Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge. Artificial Intelligence in Medicine, 2021.
[9] Caisse Amisse, Mario Ernesto Jijón-Palma, and Jorge Antonio Silva Centeno. Fine-tuning deep learning models for pedestrian detection. Boletim de Ciências Geodésicas, 2021.
[10] Edna Chebet Too, Li Yujian, Sam Njuki, and Liu Yingchun. A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture, 2019.
[11] Christoph Käding, Erik Rodner, Alexander Freytag, and Joachim Denzler. Fine-tuning deep neural networks in continuous learning scenarios. In ACCV, 2016.
[12] Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023.
[13] Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, and Maosong Sun. Sparse structure search for delta tuning. NeurIPS, 2022.
[14] Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuan-Jing Huang, Rui Xie, and Wei Wu. Making parameter-efficient tuning more efficient: A unified framework for classification tasks. In International Conference on Computational Linguistics, 2022.
[15] Himashi Rathnayake, Janani Sumanapala, Raveesha Rukshani, and Surangika Ranathunga. Adapter-based fine-tuning of pre-trained multilingual language models for code-mixed and code-switched text classification. Knowledge and Information Systems, 2022.
[16] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In ECCV, 2022.
[17] Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, and Zsolt Kira.
Polyhistor: Parameter-efficient multi-task adaptation for dense vision tasks. NeurIPS, 2022. 2 +[18] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. Parameter-efficient fine-tuning for vision transformers. arXiv preprint arXiv:2203.16329, 2022. 2, 7 +[19] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. NeurIPS, 2022. 2, 3, 5, 7 +[20] Dongshuo Yin, Yiran Yang, Zhechao Wang, Hongfeng Yu, Kaiwen Wei, and Xian Sun. $1\%$ vs $100\%$ : Parameter-efficient low rank adapter for dense predictions. In CVPR, 2023. 2, 3, 5 +[21] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In ICML, 2019. 2, 3, 5 +[22] Jieuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, Jianfei Cai, et al. Recent advances in convolutional neural networks. Pattern recognition, 2018. 2 +[23] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE TNNLS, 2021. +[24] Saad Albawi, Tareq Abed Mohammed, and Saad Al-Zawi. Understanding of a convolutional neural network. In ICET, 2017. 2 +[25] Saban Öztürk, Umut Özkaya, Bayram Akdemir, and Levent Seyfi. Convolution kernel size effect on convolutional neural network in histopathological image processing applications. In ISFEE, 2018. 2 +[26] Danupon Chansong and Siriporn Supratid. Impacts of kernel size on different resized images in object recognition based on convolutional neural network. In iEECON, 2021. +[27] Abhinav Agrawal and Namita Mittal. Using cnn for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy. The Visual Computer, 2020. 
2 +[28] Xue Yang, Jirui Yang, Junchi Yan, Yue Zhang, Tengfei Zhang, Zhi Guo, Xian Sun, and Kun Fu. Scdet: Towards more robust detection for small, cluttered and rotated objects. In ICCV, 2019. 2 +[29] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3det: Refined single-stage detector with feature refinement for rotating object. In AAAI, 2021. 2 +[30] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 2, 4 + +[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 2, 4 +[32] Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. *Minds and Machines*, 2020. 2 +[33] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. +[34] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. 2 +[35] Tianrun Chen, Lanyun Zhu, Chaotao Deng, Runlong Cao, Yan Wang, Shangzhan Zhang, Zejian Li, Lingyun Sun, Ying Zang, and Papa Mao. Sam-adapter: Adapting segment anything in underperformed scenes. In ICCV, 2023. 2 +[36] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2, 5 +[37] Chongjie Si, Xuehui Wang, Xue Yang, Zhengqin Xu, Qingyun Li, Jifeng Dai, Yu Qiao, Xiaokang Yang, and Wei Shen. Flora: Low-rank core space for n-dimension. arXiv preprint arXiv:2405.14739, 2024. 
2 +[38] Yi Xin, Siqi Luo, Haodi Zhou, Junlong Du, Xiaohong Liu, Yue Fan, Qing Li, and Yuntao Du. Parameter-efficient fine-tuning for pre-trained vision models: A survey. arXiv preprint arXiv:2402.02242, 2024. 2 +[39] Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199, 2021. 2, 4 +[40] Angeliki Giannou, Shashank Rajput, and Dimitris Papailiopoulos. The expressive power of tuning only the norm layers. arXiv preprint arXiv:2302.07937, 2023. 2, 4 +[41] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? NeurIPS, 2014. 2, 4 +[42] Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In ACL, 2022. 2 +[43] Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, and Hanwang Zhang. Prompt-aligned gradient for prompt tuning. In ICCV, 2023. 2 +[44] Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. VI-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In CVPR, 2022. 2 +[45] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. Parameter-efficient model adaptation for vision transformers. In AAAI, 2023. 2 +[46] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In ICLR, 2022. 2 + +[47] Haoyu He, Jianfei Cai, Jing Zhang, Dacheng Tao, and Bohan Zhuang. Sensitivity-aware visual parameter-efficient finetuning. In ICCV, 2023. 2, 8 +[48] Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, and Shanghang Zhang. Gradient-based parameter selection for efficient fine-tuning. In CVPR, 2024. 2 +[49] Dongze Lian, Daquan Zhou, Jiashi Feng, and Xinchao Wang. Scaling & shifting your features: A new baseline for efficient model tuning. NeurIPS, 2022. 
2 +[50] Fengze Jiang, Shuling Wang, and Xiaojin Gong. Task-conditional adapter for multi-task dense prediction. In ACM MM, 2024. 2 +[51] Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. Understanding and improving layer normalization. NeurIPS, 2019. 3 +[52] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 3 +[53] Jane F Koretz and George H Handelman. How the human eye focuses. Scientific American, 1988. 3 +[54] Susana Martinez-Conde, Stephen L Macknik, and David H Hubel. The role of fixational eye movements in visual perception. Nature reviews neuroscience, 2004. +[55] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015. 3 +[56] Mark Everingham, SM Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The Pascal visual object classes challenge: A retrospective. IJCV, 2015. 4 +[57] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In CVPR, 2017. 4 +[58] Jian Ding, Nan Xue, Gui-Song Xia, Xiang Bai, Wen Yang, Michael Yang, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Object detection in aerial images: A large-scale benchmark and challenges. IEEE TPAMI, 2021. 4 +[59] Yansheng Li, Linlin Wang, Tingzhu Wang, Xue Yang, Junwei Luo, Qi Wang, Youming Deng, Wenbin Wang, Xian Sun, Haifeng Li, Bo Dang, Yongjun Zhang, Yu Yi, and Junchi Yan. Star: A first-ever dataset and a large-scale benchmark for scene graph generation in large-size satellite imagery. arXiv preprint arXiv: 2406.09410, 2024. 4 +[60] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics & Image Processing, 2008. 
4 +[61] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In CVPR, 2012. 4 +[62] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html, 2007. 4 + +[63] Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, et al. Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. 4 +[64] MMSegmentation Contributors. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020.4 +[65] Yue Zhou, Xue Yang, Gefan Zhang, Jiabao Wang, Yanyi Liu, Liping Hou, Xue Jiang, Xingzhao Liu, Junchi Yan, Chengqi Lyu, Wenwei Zhang, and Kai Chen. Mmrotate: A rotated object detection benchmark using pytorch. In ACM MM, 2022. 4 +[66] MMClassification Contributors. Openmmlab's image classification toolbox and benchmark. https://github.com/open-mmlab/mmclassification, 2020.4 +[67] Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020. 6 +[68] Matthew E Peters, Sebastian Ruder, and Noah A Smith. To tune or not to tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987, 2019. 6 +[69] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In ICCV, 2021. 7 +[70] Xue Yang, Xiaojiang Yang, Jirui Yang, Qi Ming, Wentao Wang, Qi Tian, and Junchi Yan. Learning high-precision bounding box for rotated object detection via kullback-Leibler divergence. NeurIPS, 2021. 7 +[71] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. 
H2rbox-v2: Incorporating symmetry for boosting horizontal box supervised oriented object detection. NeurIPS, 2024. 7 +[72] Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In CVPR, 2023. 8 +[73] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In CVPR, 2022. 8 +[74] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021. 8 \ No newline at end of file diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/images.zip b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9e0a5a9bfc0e5e5192e3dcb3daeae55a31692bc3 --- /dev/null +++ b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaab3da7cb1a1de0686c00bd4428ef6dfe1a77687315ca360d179fd9c715b52f +size 551360 diff --git a/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/layout.json b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ec20b5271400397109c54c56aea360aeea2ad972 --- /dev/null +++ b/CVPR/2025/5%_100%_ Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a6e73c5b12d9be72b6b91c5460b613901ad02f7af4cde29a71f4584cfeb2d3b8 +size 387674 diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_content_list.json b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..202655df965c90d69839dc24a8642a9507a8090f --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9bec8ab5a60ea962714259f8b1aa2e5b84f3630e9788b073ceefa2fcf8c67d8 +size 78784 diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_model.json b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..acbe7eff9922a3a97a23274632cadc75f97e3cca --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd2bf4fc3f5431342c83a4b43efda1b92390f097da878b99e8bfd5e3d3459b38 +size 96869 diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_origin.pdf b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f024a1fa2265616609f87a063f9405b5204e58e1 --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image 
Detection/323dbdcc-b693-4070-92b9-d9dcaf560a4d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47384405e368745b134ca799b5a8c1c60673ea6f7a6406cda36579b7499d0147 +size 1572887 diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/full.md b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b9952d3be2c9cb3ae8cf40a15bda92080aef0f1d --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/full.md @@ -0,0 +1,250 @@
# A Bias-Free Training Paradigm for More General AI-generated Image Detection

Fabrizio Guillaro$^{1}$, Giada Zingarini$^{1}$, Ben Usman$^{2}$, Avneesh Sud$^{2}$, Davide Cozzolino$^{1}$, Luisa Verdoliva$^{1}$

$^{1}$University Federico II of Naples, $^{2}$Google DeepMind

![](images/d6b21b7622edeeff9acc0531a160b8a8e79a40d5544736d53a98ab1a5fd84f87.jpg)
Figure 1. We introduce a new training paradigm for AI-generated image detection. To avoid possible biases, we generate synthetic images from self-conditioned reconstructions of real images and include augmentation in the form of inpainted versions. This allows us to avoid semantic biases. As a consequence, we obtain better generalization to unseen models and better calibration than SoTA methods.

# Abstract

Successful forensic detectors can produce excellent results in supervised learning benchmarks but struggle to transfer to real-world applications. We believe this limitation is largely due to inadequate training data quality. While most research focuses on developing new algorithms, less attention is given to training data selection, despite evidence that performance can be strongly impacted by spurious correlations such as content, format, or resolution. A well-designed forensic detector should detect generator-specific artifacts rather than reflect data biases.
To this end, we propose B-Free, a bias-free training paradigm, where fake images are generated from real ones using the conditioning procedure of stable diffusion models. This ensures semantic alignment between real and fake images, allowing any differences to stem solely from the subtle artifacts introduced by AI generation. Through content-based augmentation, we show significant improvements in both generalization and robustness over state-of-the-art detectors and more calibrated results across 27 different generative models, including recent releases, like FLUX and Stable Diffusion 3.5. Our findings emphasize the importance of a careful dataset design, highlighting the need for further research on this topic. Code and data are publicly available at https://grip-unina.github.io/B-Free/. + +# 1. Introduction + +The rise of generative AI has revolutionized the creation of synthetic content, enabling easy production of high-quality sophisticated media, even for individuals without deep technical expertise. Thanks to user-friendly interfaces and pretrained models, users can create synthetic content such as text, images, music, and videos through simple inputs or prompts [50]. This accessibility has democratized content creation, enabling professionals in fields like design, marketing, and entertainment to leverage AI for creative purposes. However, this raises concerns about potential misuse, such as the creation of deepfakes, misinformation, and challenges related to intellectual property and content authenticity [4, 17, 24]. + +Key challenges for current GenAI image detectors include generalization — detecting synthetic generators not present in the training set — and ensuring robustness against image impairments caused by online sharing, such as compression, resizing, and cropping [41]. In this context, large pre-trained vision-language models like CLIP [34] have demonstrated impressive resilience to these distribution shifts [30]. 
The success of these models in forensic applications suggests that pre-training on large and diverse datasets may be a promising path forward. An important aspect often overlooked in the current literature is the selection of good datasets to train or fine-tune such models that primarily rely on hidden, unknown signatures of generative models [29, 47]. Indeed, it is important to guarantee that the detector decisions are truly based on generation-specific artifacts and not on possible dataset biases [7, 27, 42]. In fact, datasets used during the training and testing phases of forensic classifiers could be affected by different types of polarization.

![](images/684f91e43a007099cbac9ef2fc1c9dd9fb7105019810d96738208bda296422d6.jpg)
Figure 2. Forensic detectors can exhibit opposite behaviors depending on their training dataset. The four plots show the prediction distributions for three ViT-based detectors, UnivFD [30], FatFormer [26] and RINE [22], and the proposed one. The fake images (SD-XL or DALL-E 3) are generated from images of a single dataset (RAISE on top, COCO on the bottom) and tested only against real images of the same dataset (Synthbuster [2] and the test dataset from [11]). We observe that for the same detector (e.g., RINE) and the same fake-image generator (e.g., DALL-E 3) the score distributions can vary significantly depending on the dataset used, going from real (left of the dotted line) to fake (right of the dotted line) or vice versa. This is likely due to the presence of biases in the training set that heavily impact the detector prediction. Our detector, on the other hand, shows consistent and correct results.

Format issues have been the Achilles' heel of forensic detectors since at least 2013, when [5] recognized that a dataset for image tampering detection [14] included forged and pristine images compressed with different JPEG quality factors.
Therefore, a classifier trained to discriminate tampered and pristine images may instead learn their different processing histories. This issue has been highlighted in [19] with reference to datasets of synthetic and real images. In fact, the former are often created in a lossless format (PNG), while the latter are typically compressed in lossy formats like JPEG. Again, a classifier could learn coding inconsistencies instead of forensic clues. Likewise, it could learn resampling artifacts, as recently shown in [35]: in that case, a bias was introduced by resizing all the real images from the LAION dataset to the same resolution while keeping the fake ones unaltered.

Forensic clues are subtle and often imperceptible to the human eye, making it easy to introduce biases when constructing the training and test sets, as well as the evaluation protocol. Semantic content itself can also be a source of bias. For this reason, several recent proposals [2, 3, 11] take great care to include pairs of real and fake images characterized by the same prompts when building a training or test dataset. To gain better insights about the above issues, in Fig. 2 we show the performance of three SoTA ViT-based approaches [22, 26, 30] in distinguishing real images from fake images generated by SD-XL and DALL-E 3. For each method we consider two settings: in the first case, real images come from the RAISE dataset [12] and fakes are generated starting from images of the same dataset. The second case uses COCO as the source of reals instead of RAISE. We observe inconsistent behavior of SoTA forensic detectors on the same synthetic generator, which can be caused by the presence of biases during training. FakeInversion [7] proposes an effective approach towards semantic alignment of training data, using reverse image search to find matching reals, but fails to capture the real-image distribution after 2021.
To mitigate potential dataset biases, in this work we propose a new training paradigm, B-Free, where synthetic images are generated as self-conditioned reconstructions of real images and augmented with inpainted variations. This approach helps to prevent semantic bias and potential misalignment in coding formats. Furthermore, the model avoids resizing operations, which can be particularly harmful as they wash out the subtle low-level forensic clues [18]. Overall, we make the following contributions:

- We propose a large curated training dataset of 51k real and 309k fake images. Real images are sourced from COCO, while synthetic images are self-conditioned generations obtained with Stable Diffusion 2.1. This helps the detector focus on artifacts related to the synthetic generation process, avoiding content- and coding-related biases.
- We show that incorporating proper content-based augmentation leads to better-calibrated results. This ensures that in-lab performance more closely aligns with the expected performance on real-world images shared across social networks.
- We study the effect of different distribution shifts and show that, by leveraging a large pre-trained model fine-tuned end-to-end on our dataset, we achieve state-of-the-art accuracy above $90\%$ even on unseen generators.

![](images/44e5998684532b1ccc3967dc4bf9579288b186db38ddf84a60f44439c108aa10.jpg)
Figure 3. Overview of existing $(a, b, c)$ and proposed $(d)$ strategies for building an aligned training dataset. Some methods try to match synthetic images to the corresponding real images by using class-based generation $(a)$ or text-to-image generation with real images' descriptions $(b)$. In $(c)$, real images are fed to an autoencoder to generate a reconstructed fake with the same content. Unlike $(c)$, in our approach a self-conditioned fake is generated using diffusion steps $(d)$, and we also add a content augmentation step.

# 2. Related Work

A well-curated training set is of vital importance for any data-driven method. In recent years, awareness of this has grown considerably in the forensic field as well, and there have been many efforts in this direction, following two main lines of work: $i$) forming a reliable dataset by carefully selecting "natural" fakes, or $ii$) creating a fully synthetic dataset by injecting forensic artifacts into real images.

Selecting good natural training data. Wang et al.'s paper [43] was among the first to demonstrate the importance of selecting a suitable training set for gaining generalization to unseen synthetic generators. The selected dataset included images from a single generation architecture (ProGAN) and 20 different real/fake categories (Fig. 3.a), with augmentation in the form of common image post-processing operations, such as blurring and compression. Results clearly show that generalization and robustness strongly benefit from the many different categories included during training as well as from the augmentation procedure. In fact, this dataset has been widely utilized in the literature, where researchers follow a standard protocol assuming knowledge of one single generative model during training. This scenario describes a typical real-world situation where new generative architectures are unknown at test time.

The dataset proposed in [43] was used in [30] to fine-tune a CLIP model with a single learnable linear layer, achieving excellent generalization not only on GAN models but also on diffusion-based synthetic generators never seen during training. Likewise, it was used in [22] to train a CNN classifier that leverages features extracted from CLIP's intermediate layers to better exploit low-level forensic features. In [37, 40], image captions (either paired to the dataset images or generated from them) were used as additional input for a joint analysis during training.
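The linear-probing recipe described above (a single learnable linear layer on top of frozen backbone features, as in [30]) can be sketched as follows. The Gaussian features below are a purely hypothetical stand-in for CLIP embeddings; only the training loop is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen CLIP image embeddings: in the real
# setting the backbone is a frozen CLIP ViT and only the linear head
# below is trained. The Gaussian features here are purely illustrative.
def frozen_features(n, dim=64, fake=False):
    mean = 0.3 if fake else -0.3  # toy, linearly separable classes
    return rng.normal(mean, 1.0, size=(n, dim))

X = np.vstack([frozen_features(200), frozen_features(200, fake=True)])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = fake

# Single learnable linear layer trained with logistic loss.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on weights
    b -= 0.5 * (p - y).mean()               # gradient step on bias

acc = (((X @ w + b) > 0) == y).mean()
print(f"training accuracy of the linear probe: {acc:.2f}")
```

In the actual pipeline, `frozen_features` would be replaced by the frozen CLIP image encoder; the head and its training stay the same.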
The approach proposed in [26], like other recent methods [38-40], is trained using only 4 of the 20 categories proposed in [43].

Alternatively, some methods rely on datasets comprising images from a single diffusion-based generator, such as Latent Diffusion [7, 10, 11], Guided Diffusion [44] or Stable Diffusion [23, 37]. Prior work [7, 11] highlights the importance of aligning both training and test data in terms of semantic content. This choice made it possible to better exploit the potential of frozen pre-trained CLIP features while strongly reducing the number of images needed for fine-tuning [11]. In addition, it has the key merit of reducing dataset content bias, thus allowing for better-quality training, and it is also adopted in other approaches, both during training [1, 3] and at test time to carry out a fairer evaluation [2].

Creating training data by artifact injection. A different line of research creates simulated fake images by injecting traces of the generative process into real images. A seminal work along this line was done by Zhang et al. [52] for GAN image detection. The idea is to simulate artifacts shared by several generators. These peculiar traces are caused by the up-sampling processes included in the generation pipeline and show up as peaks in the frequency domain. Besides these frequency peaks, synthetic images, both GAN-based and diffusion-based, have been shown to exhibit spectral features that are very different from those of natural images [15, 16]. In fact, real images exhibit much richer spectral content at intermediate frequencies than synthetic ones [9, 46].

For GAN-generated images, producing realistic simulated fakes requires training the generation architecture specifically for this task [20, 52]. In contrast, diffusion-based image generation can leverage a pre-trained autoencoder embedded within the generation pipeline, which projects images into a latent space without the need for additional training [11, 28].
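The autoencoder-reconstruction idea can be illustrated with a toy linear "autoencoder": an orthonormal projection stands in for the pre-trained VAE of a latent diffusion pipeline (an assumption made purely for illustration, with no claim about the actual architecture). Reconstructed images lie exactly in the decoder's image subspace, which is precisely the kind of statistical signature a detector can pick up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "autoencoder": an orthonormal basis stands in for the
# pre-trained VAE of a latent diffusion pipeline (illustrative only).
dim, latent_dim = 256, 64
basis = np.linalg.qr(rng.normal(size=(dim, dim)))[0][:, :latent_dim]

def encode(x):
    return x @ basis        # image -> latent code

def decode(z):
    return z @ basis.T      # latent code -> reconstructed image

real = rng.normal(size=(10, dim))   # flattened "real" images
fake = decode(encode(real))         # same content, generator statistics

# Reconstructions are fixed points of the autoencoder (they lie exactly
# in the decoder's subspace), while real images generally are not: this
# systematic residual is the kind of cue a detector can exploit.
residual = real - fake
print(f"mean |residual|: {np.abs(residual).mean():.3f}")
```

Real images generally lie slightly outside the decoder's subspace, so the residual is small but systematically nonzero, while reconstructed fakes round-trip exactly.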
This procedure has been very recently used in a concurrent work [35] to reduce semantic biases during training (Fig. 3.c). Different from [35], we generate synthetic data by also performing the diffusion steps. Later in this work we will show that this choice allows us to exploit even subtler inconsistencies at lower frequencies, enhancing the detector performance (Fig. 3.d).

| Reference | # Real / # Fake | Real Source | # Models |
| --- | --- | --- | --- |
| Synthbuster [2] | 1k / 9k | RAISE | 9 |
| GenImage [53] | 1.3M / 1.3M | ImageNet | 8 |
| FakeInversion [7] | 44.7k / 44.7k | Internet | 13 |
| SynthWildX [11] | 500 / 1.5k | X | 3 |
| WildRF [6] | 1.25k / 1.25k | Reddit, FB, X | unknown |

Table 1. This table provides an overview of the datasets used in our evaluation, including the number of real and fake images, the sources of the real data, and the number of generative models used to create the synthetic images.

# 3. Evaluation Protocol

# 3.1. Datasets

In our experimental analysis, we want to avoid, or at least minimize, the influence of any of the aforementioned biases. To this end, we carefully select the evaluation datasets as outlined below. Experiments on further datasets are provided in the supplementary material.

To avoid format bias, we use Synthbuster [2], where both real and generated images are saved in raw format. Therefore, good performance on this dataset cannot come from the exploitation of JPEG artifacts. A complementary strategy to avoid format biases is to reduce the mismatch between real (compressed) and synthetic (uncompressed) images by compressing the latter. To this end, we modified the fake class in GenImage [53] by compressing images at a JPEG quality close to those used for the real class, as suggested in [19]. This modified dataset, referred to as GenImage unbiased, comprises 5k real and 5k fake images, a small fraction of the original dataset.

To avoid content bias, we also evaluate performance on datasets where fakes are generated using automated descriptions of real images. In studies like [2, 3] these descriptions are refined into manually created prompts for text-based generation. As a result, the generated images closely align with the content of the real images, minimizing possible biases due to semantic differences. A more refined dataset in this regard is FakeInversion [7], where real images are retrieved from the web using reverse image search, thus ensuring stylistic and thematic alignment with the fakes.
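The format-debiasing step behind GenImage unbiased (re-encoding losslessly stored fakes with a lossy codec) can be sketched with a minimal JPEG-like stand-in: an 8x8 blockwise DCT followed by coarse coefficient quantization. The quantization step `q` is an illustrative assumption, not the actual quality setting used in the paper:

```python
import numpy as np

# Minimal JPEG-like lossy step (an illustrative stand-in, not a real
# codec): 8x8 blockwise DCT, coarse coefficient quantization, inverse DCT.
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / N)
C[0] /= np.sqrt(2.0)  # orthonormal DCT-II matrix

def recompress_block(block, q=20.0):
    coeffs = C @ block @ C.T
    coeffs = np.round(coeffs / q) * q  # quantization: the lossy part
    return C.T @ coeffs @ C

rng = np.random.default_rng(0)
fake_png = rng.uniform(0.0, 255.0, size=(N, N))  # losslessly stored fake patch
fake_jpeg_like = recompress_block(fake_png)

# After re-encoding, the fake patch carries the same family of coding
# artifacts as the JPEG-compressed real class, so format alone no longer
# separates the two classes.
err = np.abs(fake_png - fake_jpeg_like).mean()
print(f"mean absolute recompression error: {err:.2f}")
```

Note that recompressing an already-quantized block is a no-op: the artifacts, once injected, are stable, which is exactly why they make fakes format-matched to JPEG reals.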
To allow in-the-wild analysis, we also experiment on datasets of real/fake images collected from the web, such as WildRF [6] and SynthWildX [11]. Both datasets comprise images from several popular social networks. Tags were used to find fake images on Reddit, Facebook and X. A short summary of all the datasets used in our evaluation is given in Table 1.

![](images/1db8ffece07b1fe04c30e541adbe7063c08266db175d3504b5efff058b59f713.jpg)
Figure 4. Content augmentation process. Starting with a real image, we use its generated variants (first row) and their locally manipulated versions (last row), created by replacing the original background. When inpainting with a different category, we use a bounding box instead of an object mask to allow space for new objects of varying shapes and sizes.

# 3.2. Metrics

Most work on GenAI image detection measures performance by means of threshold-independent metrics, such as Area Under the Curve (AUC) or Average Precision (AP). These metrics indicate ideal classification performance; however, the optimal separating threshold is not known and, quite often, the balanced accuracy at a fixed threshold (e.g., 0.5) remains low, especially when there are significant differences between training and testing distributions [41]. Some papers address this problem by adjusting the threshold through a calibration procedure, assuming access to a few images from the synthetic generator under evaluation [10, 30, 43]. In a realistic situation, however, the availability of such calibration images cannot be guaranteed.

In this work, to provide a comprehensive assessment of performance, we use both AUC and accuracy at a fixed 0.5 threshold; in addition, we compute the Expected Calibration Error (ECE) and the Negative Log-Likelihood (NLL). ECE measures the ability of a model to provide prediction probabilities that are well aligned with the true probabilities.
More precisely, we use the binary ECE, which is the weighted average of the absolute differences between the actual and predicted probabilities across confidence bins [32]. We then use the balanced Negative Log-Likelihood [33], which evaluates the similarity between the distribution of the model's predictions and the actual data distribution, penalizing both low confidence in the correct class and overconfidence in incorrect ones. More details on these metrics can be found in the supplementary material.
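A minimal numpy sketch of the two calibration metrics (the equal-width 10-bin ECE and the per-class-balanced NLL below are our reading of the metrics cited in [32, 33]; the exact definitions may differ):

```python
import numpy as np

def binary_ece(probs, labels, n_bins=10):
    # Weighted average over confidence bins of |empirical fake rate -
    # mean predicted probability|; the equal-width 10-bin scheme is an
    # assumption for illustration.
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

def balanced_nll(probs, labels, eps=1e-12):
    # Negative log-likelihood averaged per class, then across classes, so
    # real and fake images weigh equally regardless of class imbalance.
    nll_fake = -np.log(probs[labels == 1] + eps).mean()
    nll_real = -np.log(1.0 - probs[labels == 0] + eps).mean()
    return 0.5 * (nll_fake + nll_real)

labels = np.array([0, 0, 1, 1])            # 0 = real, 1 = fake
perfect = np.array([0.0, 0.0, 1.0, 1.0])   # confident and correct
wrong = np.array([1.0, 1.0, 0.0, 0.0])     # confident and wrong
print(binary_ece(perfect, labels), balanced_nll(wrong, labels))
```

A perfectly calibrated, correct detector scores zero on both metrics, while a confidently wrong one is heavily penalized by the NLL, which is the behavior the table below probes.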
**AUC↑ / bAcc↑** (columns Midjourney to Firefly are from Synthbuster; FLUX and SD 3.5 are the new generators; Facebook, Reddit and Twitter come from WildRF):

| Method | Training setting | Midjourney | SDXL | DALL-E 2 | DALL-E 3 | Firefly | FLUX | SD 3.5 | Facebook | Reddit | Twitter | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| paired by text | - | 96.9 / 56.9 | 99.5 / 78.1 | 78.7 / 50.1 | 98.8 / 56.6 | 91.3 / 51.1 | 94.6 / 51.5 | 96.6 / 66.9 | 97.8 / 72.5 | 84.4 / 67.2 | 96.1 / 68.1 | 93.5 / 61.9 |
| reconstructed | - | 100. / 99.8 | 100. / 100. | 81.1 / 52.2 | 99.1 / 75.8 | 96.7 / 62.1 | 98.2 / 64.0 | 99.7 / 88.8 | 98.9 / 97.5 | 77.8 / 75.5 | 94.8 / 91.1 | 94.6 / 80.7 |
| reconstructed | inpainted | 100. / 99.9 | 100. / 99.9 | 90.1 / 62.0 | 99.2 / 79.5 | 98.1 / 77.2 | 95.8 / 69.1 | 99.4 / 89.1 | 98.8 / 95.9 | 78.7 / 76.1 | 94.9 / 90.3 | 95.5 / 83.9 |
| self-conditioned | - | 99.9 / 97.2 | 100. / 99.2 | 90.4 / 58.1 | 98.9 / 76.1 | 99.4 / 89.7 | 95.4 / 59.4 | 100. / 98.4 | 95.4 / 86.6 | 75.7 / 66.3 | 91.4 / 82.7 | 94.7 / 81.4 |
| self-conditioned | cutmix/mixup | 99.9 / 94.3 | 99.9 / 97.8 | 93.5 / 53.2 | 99.1 / 72.7 | 99.8 / 76.4 | 90.4 / 52.3 | 99.8 / 91.0 | 96.4 / 90.3 | 80.3 / 74.8 | 93.8 / 82.9 | 95.3 / 78.6 |
| self-conditioned | inpainted | 100. / 99.4 | 100. / 99.6 | 96.7 / 77.8 | 99.4 / 92.8 | 99.9 / 99.2 | 98.7 / 87.5 | 99.9 / 98.4 | 98.9 / 94.4 | 89.4 / 81.2 | 97.5 / 92.0 | 98.0 / 92.2 |
| self-conditioned | inpainted+ | 99.9 / 98.8 | 100. / 99.7 | 99.7 / 95.9 | 99.6 / 96.8 | 100. / 99.6 | 97.9 / 85.3 | 99.3 / 95.1 | 98.0 / 95.0 | 96.0 / 89.8 | 99.4 / 96.5 | 99.0 / 95.2 |
| self-conditioned | inpainted++ | 100. / 99.6 | 100. / 99.8 | 99.7 / 95.7 | 99.9 / 98.2 | 100. / 99.7 | 99.3 / 92.3 | 99.9 / 98.9 | 99.0 / 95.6 | 95.8 / 86.3 | 99.7 / 97.4 | 99.3 / 96.4 |

**NLL↓ / ECE↓**:

| Method | Training setting | Midjourney | SDXL | DALL-E 2 | DALL-E 3 | Firefly | FLUX | SD 3.5 | Facebook | Reddit | Twitter | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| paired by text | - | 1.96 / .418 | 0.72 / .218 | 3.60 / .496 | 1.71 / .416 | 2.95 / .484 | 2.62 / .473 | 1.42 / .327 | 1.00 / .273 | 1.77 / .324 | 1.31 / .310 | 1.91 / .374 |
| reconstructed | - | 0.00 / .003 | 0.00 / .001 | 3.33 / .469 | 0.83 / .240 | 1.56 / .368 | 1.46 / .354 | 0.38 / .115 | 0.13 / .025 | 1.20 / .192 | 0.43 / .082 | 0.93 / .185 |
| reconstructed | inpainted | 0.01 / .008 | 0.01 / .008 | 1.16 / .353 | 0.40 / .197 | 0.51 / .222 | 0.79 / .290 | 0.23 / .107 | 0.12 / .032 | 0.73 / .153 | 0.29 / .066 | 0.42 / .144 |
| self-conditioned | - | 0.08 / .031 | 0.02 / .008 | 1.48 / .399 | 0.59 / .234 | 0.27 / .114 | 1.22 / .379 | 0.04 / .021 | 0.31 / .084 | 0.92 / .237 | 0.40 / .078 | 0.53 / .158 |
| self-conditioned | cutmix/mixup | 0.14 / .063 | 0.06 / .028 | 1.66 / .440 | 0.67 / .268 | 0.48 / .238 | 1.82 / .452 | 0.22 / .103 | 0.29 / .076 | 0.77 / .162 | 0.48 / .141 | 0.66 / .197 |
| self-conditioned | inpainted | 0.02 / .016 | 0.02 / .017 | 0.49 / .199 | 0.18 / .085 | 0.04 / .025 | 0.27 / .117 | 0.05 / .021 | 0.16 / .070 | 0.39 / .059 | 0.19 / .033 | 0.18 / .064 |
| self-conditioned | inpainted+ | 0.04 / .008 | 0.01 / .006 | 0.12 / .044 | 0.10 / .037 | 0.02 / .011 | 0.40 / .149 | 0.14 / .044 | 0.17 / .032 | 0.25 / .043 | 0.11 / .028 | 0.14 / .040 |
| self-conditioned | inpainted++ | 0.02 / .013 | 0.01 / .011 | 0.10 / .045 | 0.06 / .031 | 0.02 / .019 | 0.19 / .084 | 0.04 / .014 | 0.14 / .039 | 0.28 / .075 | 0.09 / .047 | 0.10 / .038 |
Table 2. Ablation study. We compare several forms of content alignment and content augmentation. Performance is reported in terms of AUC/Accuracy (top) and NLL/ECE (bottom). Note that all variants share a standard augmentation (blurring + JPEG compression) as proposed in [43]. For content alignment we consider the image pairing strategies described in Fig. 3: text-driven generation, reconstruction through the autoencoder, and our proposal using self-conditioned images. For the last solution we test several forms of augmentation: a standard cutmix/mixup and three proposed strategies based on inpainting, described in Sec. 4.2. Bold highlights the best performance for each column with a margin of $1\%$.

# 4. Proposed Method

To realize and test our bias-free training paradigm we:

- build a dataset consisting of real and generated fake images, where the latter are well aligned with their real counterparts but include the forensic artifacts of the diffusion-based generation process. The dataset is created starting from the images collected from the training set of the MS-COCO dataset [25], for a total of 51,517 real images. It is then enriched through several forms of augmentation, including locally inpainted images, and eventually comprises 309,102 generated images.

- use this aligned dataset to fine-tune a Vision Transformer (ViT)-based model end-to-end. Specifically, we adopt a variant of the ViT network proposed in [13] with four registers and use the pretraining based on the self-supervised learning method DINOv2 [31]. During training, we avoid resizing the image and rely on large crops of $504 \times 504$ pixels. At inference time, we also extract crops of the same dimension from the image (if the image is larger, we average the results of multiple crops).
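The crop-based inference step can be sketched as follows (a simplified illustration, not the authors' code; `model` is a placeholder for any classifier that scores a single $504 \times 504$ crop):

```python
import numpy as np

CROP = 504

def crop_scores(image, model, crop=CROP):
    """Score an image by averaging model outputs over non-overlapping crops.

    `image` is an (H, W, C) array; if it is larger than the crop size,
    predictions from multiple crops are averaged, as done at test time.
    """
    h, w = image.shape[:2]
    ys = range(0, max(h - crop, 0) + 1, crop)
    xs = range(0, max(w - crop, 0) + 1, crop)
    scores = [model(image[y:y + crop, x:x + crop]) for y in ys for x in xs]
    return float(np.mean(scores))
```

Averaging crop scores keeps the input at its native resolution, consistent with the choice of avoiding any resizing during training.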
To ensure that fake images semantically match the content of real images, we exploit the conditioning mechanism of Stable Diffusion models, which allows us to control the synthesis process through a side input that can be a class label, a text, or another image. The side input is first projected to an intermediate representation by a domain-specific encoder, and then feeds the intermediate layers of the autoencoder for denoising in the embedding space. After several denoising steps, a decoder is used to obtain the conditioned synthetic image from the embedded vector (see Fig. 1). In our self-conditioned generation, we use the inpainting diffusion model of Stable Diffusion 2.1 [36], which has three side inputs: the reference image, a binary mask of the area to inpaint, and a textual description. Using an empty mask, we induce the diffusion steps to regenerate the input, that is, to generate a new image with exactly the same content as the input image. For the content augmentation process, we use the Stable Diffusion 2.1 inpainting method to replace an object with a new one, chosen from the same category or from a different one. Moreover, as shown in Fig. 4, besides the default inpainting, which regenerates the whole image, we also consider a version where the original background is restored. Note that during training we balance the real and fake classes, taking an equal number of images from each.

![](images/7f853b52596bc4c3743934c0952d7c031dad11a83c131a46f675d6d555d62491.jpg)
(a)

![](images/e5b48723c4198beca3ee57ddb61411cde3781647e58f8bb4fc4d0de46980b65e.jpg)
(b)

![](images/d0ae908ff4ab21d8c9a3a24f6e273a506cce1e4d706f21991ed14bb659367b43.jpg)
(c)
Figure 5. Power spectra computed by averaging (over 2000 images) the differences between: (a) real and reconstructed images, (b) real and self-conditioned images, and (c) reconstructed and self-conditioned images. We can observe that the self-conditioned generation embeds forensic artifacts even at lower frequencies compared to reconstructed images. This means that it is possible to better exploit such inconsistencies to distinguish real images from fakes.

In the following, we present our ablation study. To avoid dataset bias, we use WildRF and Synthbuster. In addition, we test on 1000 FLUX and 1000 Stable Diffusion 3.5 images, two of the latest synthetic generators.

![](images/191d7ed26fd3f69f70bd4240ae84a718e462ba3ead69066ec10549682f47dadc.jpg)
Figure 6. Robustness analysis in terms of balanced Accuracy, carried out on nine generators of the Synthbuster dataset.

# 4.1. Influence of content alignment

In Tab. 2 we show the performance achieved with different dataset alignment strategies, as described in Fig. 3. Note that all variants are trained with standard augmentations, including blurring and JPEG compression, as proposed in [43]. From the Table, we can observe a large gain in terms of balanced accuracy $(\simeq 20\%)$ when moving from text-driven generation (first row) to solutions where real and fake images share the semantic content, both reconstructed and self-conditioned. The proposed solution that uses a diffusion pass demonstrates further improvement on average across all the evaluation metrics. This is highlighted in Fig. 5, where we show the power spectra evaluated by averaging the differences between real and reconstructed images and between real and self-conditioned images. We observe that self-conditioned generation introduces forensic artifacts even at the lowest frequencies, indicating that a detector trained on such images can exploit inconsistencies over a broader range of frequencies.

# 4.2. Effect of content augmentation

We also analyze the effect of different content augmentation strategies (Fig. 4).
We consider standard operations like cut-mix [48] and mix-up [51] and compare them with our proposed solutions, which include three variants:

- inpainted: we replace an object with another from the same category, plus the version where the background is substituted with pristine pixels (effectively a local image edit);
- inpainted+: we replace an object with another from both the same and a different category, plus the corresponding versions where the background is substituted with pristine pixels;
- inpainted++: we further add standard augmentation operations, such as scaling, cut-out, noise addition, and jittering.

Overall, it is evident from Tab. 2 that augmentation plays a critical role in enhancing model generalization, which can be appreciated especially by looking at balanced accuracy and calibration measures. In fact, adding inpainting to reconstruction increases the accuracy from 80.7 to 83.9, while the joint use of self-conditioning and inpainting grants a significant extra gain, reaching 92.2 or even 96.4 with inpainted++. The most significant gains are observed on DALL-E 2, DALL-E 3 and FLUX, which probably differ the most from Stable Diffusion 2.1 in terms of architecture and hence require a stronger augmentation strategy to generalize.

In Fig. 6, we analyze the impact of our content augmentation, assessing robustness under various operations: JPEG compression, resizing, and blurring. All three proposed variants of augmentation offer a clear advantage, especially when resizing is applied. The joint use of self-conditioning and inpainting results in the most robust approach.

# 4.3. Influence of training data and architecture

We conduct additional experiments to gain deeper insights into the impact of the chosen architecture and of the proposed training data, on the same datasets shown in Tab. 2.

First, we compare our adopted model, DINOv2+reg trained end-to-end, with an alternative fine-tuning strategy that trains only the final linear layer, known as linear probing (LP), which is widely adopted in the literature [11, 30]. From Tab. 3 we can see that this latter solution does not perform well. One possible explanation is that features from the last layer capture high-level semantics, while our dataset is built to exploit low-level artifacts that derive from early and intermediate layers [10, 22]. In the same Table we compare DINOv2+reg with the basic DINOv2 [31] architecture and SigLIP [49], and we can observe that DINOv2 with registers achieves the best performance, probably because it avoids discarding local patch information [13].

| Architecture | FT | AUC↑ | bAcc↑ | NLL↓ | ECE↓ |
|---|---|---|---|---|---|
| DINOv2+reg | LP | 80.8 | 68.5 | 0.58 | .141 |
| DINOv2+reg | e2e | 99.0 | 95.2 | 0.14 | .040 |
| DINOv2 | e2e | 98.4 | 91.1 | 0.24 | .077 |
| SigLIP | e2e | 95.4 | 89.9 | 0.28 | .066 |

Table 3. We compare our solution, DINOv2+reg trained end-to-end, with linear probing (LP), and also consider alternative architectures: basic DINOv2 and SigLIP.

| Architecture | Training Set | AUC↑ | bAcc↑ | NLL↓ | ECE↓ |
|---|---|---|---|---|---|
| CLIP/ViT | ProGAN [43] | 54.7 | 45.2 | 4.85 | .525 |
| CLIP/ViT | LDM [10] | 63.9 | 48.5 | 3.39 | .487 |
| CLIP/ViT | Ours | 75.2 | 73.9 | 0.66 | .225 |
| RINE | ProGAN [43] | 66.1 | 65.0 | 5.43 | .312 |
| RINE | LDM [10] | 83.0 | 75.7 | 0.53 | .132 |
| RINE | Ours | 89.9 | 83.2 | 0.39 | .089 |

Table 4. Ablation study on the influence of the training data (ProGAN, Latent Diffusion and our dataset) on methods used for AI-generated image detection: CLIP [34] and RINE [22].
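The difference between the two fine-tuning strategies can be illustrated with a toy NumPy sketch (ours, for illustration only): in linear probing the backbone is frozen, so its features are computed once and only a final logistic layer is fit, whereas end-to-end training would also update the backbone weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W):
    """Stand-in for a pretrained feature extractor (here just a fixed projection)."""
    return np.tanh(x @ W)

def train_linear_probe(X, y, W_frozen, lr=0.5, steps=300):
    """Linear probing: the backbone stays frozen, so features are computed once
    and only the weights of the final linear (logistic) layer are updated.
    End-to-end fine-tuning would instead also back-propagate into W_frozen."""
    F = backbone(X, W_frozen)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))   # sigmoid predictions
        w -= lr * F.T @ (p - y) / len(y)     # logistic-loss gradient step
    return w

# toy data: labels defined by a linear rule in the (frozen) feature space
X = rng.normal(size=(200, 8))
W_frozen = rng.normal(size=(8, 16))
w_true = rng.normal(size=16)
y = (backbone(X, W_frozen) @ w_true > 0).astype(float)

w = train_linear_probe(X, y, W_frozen)
acc = np.mean((backbone(X, W_frozen) @ w > 0) == (y > 0.5))
```

In this toy setting the probe fits well because the labels live in the frozen feature space; the paper's point is that the forensic cues of interest do not, which is why end-to-end tuning is needed.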
Performance in terms of bAcc(%)↑ / NLL↓ (columns Midjourney to Firefly are from Synthbuster; FLUX and SD 3.5 are the new generators; Facebook, Reddit and Twitter come from WildRF):

| Method | Midjourney | SDXL | DALL-E 2 | DALL-E 3 | Firefly | FLUX | SD 3.5 | Facebook | Reddit | Twitter | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CNNDetect | 49.5 / 8.45 | 49.8 / 6.90 | 50.2 / 5.75 | 49.5 / 12.9 | 50.3 / 3.66 | 49.5 / 10.1 | 50.0 / 5.30 | 50.0 / 9.84 | 50.7 / 6.66 | 50.1 / 8.64 | 50.0 / 7.83 |
| DMID | 100. / 0.00 | 99.7 / 0.01 | 50.1 / 5.99 | 50.0 / 7.08 | 51.0 / 1.72 | 63.7 / 1.27 | 99.9 / 0.01 | 87.8 / 0.52 | 74.3 / 1.82 | 79.1 / 0.85 | 75.6 / 1.93 |
| LGrad | 57.7 / 6.88 | 58.5 / 6.81 | 55.6 / 7.10 | 47.9 / 7.58 | 47.4 / 7.50 | 54.9 / 7.11 | 51.9 / 7.24 | 66.6 / 3.74 | 57.8 / 4.72 | 45.7 / 5.31 | 54.4 / 6.40 |
| UnivFD | 52.4 / 2.35 | 68.0 / 1.15 | 83.5 / 0.43 | 47.3 / 3.94 | 90.7 / 0.24 | 48.4 / 3.45 | 69.3 / 1.07 | 48.8 / 3.06 | 59.5 / 1.37 | 56.0 / 2.01 | 62.4 / 1.91 |
| DeFake | 69.7 / 0.72 | 76.3 / 0.56 | 64.0 / 0.92 | 84.9 / 0.36 | 72.4 / 0.63 | 79.2 / 0.46 | 81.2 / 0.42 | 66.3 / 0.89 | 65.9 / 0.82 | 63.4 / 0.94 | 72.3 / 0.67 |
| DIRE | 49.7 / 15.3 | 49.9 / 15.3 | 50.0 / 15.3 | 50.0 / 15.3 | 49.9 / 15.3 | 50.0 / 15.3 | 50.0 / 15.3 | 51.9 / 4.98 | 79.5 / 2.15 | 56.7 / 4.39 | 53.7 / 11.9 |
| AntifakePrompt | 70.4 / - | 84.7 / - | 65.5 / - | 86.0 / - | 70.0 / - | 59.6 / - | 60.7 / - | 69.7 / - | 68.9 / - | 78.0 / - | 71.3 / - |
| NPR | 44.9 / 16.6 | 50.3 / 16.2 | 50.2 / 16.2 | 0.6 / 29.9 | 0.4 / 47.3 | 50.3 / 16.2 | 50.3 / 16.2 | 50.0 / 32.2 | 78.3 / 9.39 | 51.8 / 25.2 | 42.7 / 22.5 |
| FatFormer | 44.4 / 5.22 | 66.7 / 2.76 | 54.1 / 3.64 | 35.9 / 6.90 | 60.1 / 3.59 | 39.4 / 6.10 | 49.1 / 5.06 | 54.7 / 4.54 | 69.5 / 2.54 | 54.8 / 4.40 | 52.9 / 4.48 |
| FasterThanLies | 61.3 / 2.98 | 71.1 / 1.79 | 50.8 / 5.15 | 53.5 / 3.79 | 55.2 / 4.40 | 53.8 / 4.10 | 53.7 / 3.76 | 46.2 / 3.32 | 51.0 / 3.99 | 53.9 / 3.31 | 55.1 / 3.66 |
| RINE | 54.6 / 5.03 | 71.8 / 1.99 | 82.2 / 0.77 | 45.3 / 20.5 | 91.2 / 0.36 | 46.7 / 10.1 | 81.3 / 1.22 | 52.8 / 6.51 | 67.7 / 2.46 | 56.0 / 5.23 | 65.0 / 5.43 |
| AIDE | 57.5 / 0.95 | 68.4 / 0.70 | 34.9 / 1.34 | 33.7 / 1.38 | 24.8 / 2.00 | 62.9 / 0.82 | 63.3 / 0.82 | 56.9 / 0.94 | 72.1 / 0.62 | 57.3 / 1.01 | 53.2 / 1.06 |
| LaDeDa | 50.7 / 24.8 | 50.7 / 24.8 | 50.5 / 24.8 | 41.1 / 25.4 | 47.4 / 25.6 | 50.5 / 24.8 | 50.7 / 24.8 | 70.3 / 7.19 | 74.7 / 7.93 | 59.6 / 9.40 | 54.6 / 19.9 |
| C2P-CLIP | 52.8 / 1.10 | 77.7 / 0.48 | 55.6 / 0.99 | 63.2 / 0.73 | 59.5 / 0.89 | 50.1 / 1.30 | 60.9 / 0.93 | 54.4 / 0.97 | 68.4 / 0.67 | 57.4 / 0.91 | 60.0 / 0.90 |
| CoDE | 76.9 / 0.82 | 75.2 / 0.81 | 54.6 / 2.44 | 73.2 / 0.98 | 58.6 / 2.00 | 59.8 / 1.97 | 67.7 / 1.27 | 70.0 / 0.97 | 66.1 / 1.29 | 70.9 / 1.01 | 67.3 / 1.36 |
| B-Free (ours) | 99.6 / 0.02 | 99.8 / 0.01 | 95.6 / 0.10 | 98.2 / 0.06 | 99.7 / 0.02 | 92.3 / 0.19 | 98.9 / 0.04 | 95.6 / 0.14 | 86.2 / 0.28 | 97.3 / 0.09 | 96.3 / 0.10 |
Table 5. Comparison with SoTA methods in terms of balanced Accuracy and balanced NLL across different generators. Note that AntifakePrompt [8] provides only hard binary labels, hence calibration measures cannot be computed. Bold highlights the best performance for each column with a margin of $1\%$.

Then, in Tab. 4, we consider a CLIP-based model and the architecture of RINE, and vary the training dataset by including two well-known datasets largely used in the literature, one based on ProGAN [43] and the other on Latent Diffusion [10]. We note that our training paradigm achieves the best performance over all the metrics with a very large gain (RINE increases the accuracy from $65\%$ to $83.2\%$).

# 5. Comparison with the State-of-the-Art

In this Section, we conduct a comparison with SoTA methods on 27 diverse synthetic generation models. To ensure fairness, we include only SoTA methods with publicly available code and/or pre-trained models. The selected methods are listed in Table 6 and are further described in the supplementary material, together with additional experiments. For all the experiments from now on, ours refers to the detector trained using the inpainted++ augmentation.

A first experiment is summarized in Tab. 5, with results given in terms of balanced accuracy and NLL. Most of the methods struggle to achieve a good accuracy, especially on more recent generators. Instead, B-Free obtains a uniformly good performance on all generators, irrespective of the image origin, whether saved in raw format or downloaded from social networks, outperforming the second best (see last column) by $+20.7\%$ in terms of bAcc. Then, we evaluate again all methods on GenImage (unbiased), FakeInversion [7], and SynthWildX [11]. As these datasets encompass multiple generators, we only report the average performance in Tab. 7. On these additional datasets, most methods provide unsatisfactory results, especially in the most challenging scenario represented by SynthWildX, with images that are shared over the web. The proposed method performs well on all datasets, just a bit worse on FakeInversion. Finally, in Fig. 7 we study how AUC compares with balanced accuracy for all the methods over several datasets. We observe that some methods, like NPR and LGrad, present a clear non-uniform behavior, with very good performance on a single dataset and much worse on the others. This seems to suggest that these methods may not be truly detecting forensic artifacts, but are rather exploiting intrinsic biases within the dataset. Differently, the proposed method presents a uniform performance across all datasets and a small loss between AUC and accuracy.

| Ref. | Acronym | Training Real/Fake | Size (K) | Aug. |
|---|---|---|---|---|
| [43] | CNNDetect | LSUN / ProGAN | 360 / 360 | |
| [10] | DMID | COCO, LSUN / Latent | 180 / 180 | |
| [38] | LGrad | LSUN / ProGAN | 72 / 72 | |
| [30] | UnivFD | LSUN / ProGAN | 360 / 360 | |
| [37] | DeFake | COCO / SD | 20 / 20 | |
| [44] | DIRE | LSUN-Bed / ADM | 40 / 40 | |
| [8] | AntifakePrompt | COCO / SD3, SD2-inp | 90 / 60 | |
| [39] | NPR | LSUN / ProGAN | 72 / 72 | |
| [26] | FatFormer | LSUN / ProGAN | 72 / 72 | |
| [23] | FasterThanLies | COCO / SD | 108 / 542 | |
| [22] | RINE | LSUN / ProGAN | 72 / 72 | |
| [45] | AIDE | ImageNet / SD 1.4 | 160 / 160 | |
| [6] | LaDeDa | LSUN / ProGAN | 360 / 360 | |
| [40] | C2P-CLIP | LSUN / ProGAN | 72 / 72 | |
| [3] | CoDE | LAION / SD1.4, SD2.1, SDXL, DeepF. IF | 2.3M / 9.2M | |
| | B-Free (ours) | COCO / SD2.1 | 51 / 309 | |

Table 6. AI-generated image detection methods used for comparison and whose code is publicly available. We specify the source and size of the training dataset, and whether augmentation is applied.

Analysis on content shared on-line. Distinguishing real from synthetic images on social networks may be especially challenging due to repeated re-posting that impairs image quality over time. A recent study conducted in [21] analyzed the detector behavior on different instances of an image shared online, showing that the performance degrades noticeably over time due to repeated posting. To better understand the impact of our augmentation strategies on such images, we collected a total of 1400 real/fake images that went viral on the web, including several versions of the same real or fake image.

![](images/3828c07e8b76c7b110992ddb746cf94e55e88d600affb5f7a12e89c5a6d2c6b9.jpg)
Figure 7. Average performance in terms of AUC and bAcc on four datasets: Synthbuster, GenImage, FakeInversion, SynthWildX.

![](images/b0f30cb159bc78176b07535f9d308119ce82efa2c14ceee1446f72985849add5.jpg)
Figure 8. Results of SoTA detectors on real and fake images that went viral on the internet, analyzing multiple web-scraped versions of each image. Performance is in terms of balanced accuracy, evaluated as a function of the time elapsed since the initial online post (log scale). Only detectors with an accuracy over $50\%$ are shown.

![](images/1bd36ca40ab7b355b3711b59a4c45c8db068dd140a0917cd2ad68bac76113153.jpg)

Fig. 8 illustrates the accuracy, evaluated over a 100-day period from the time of initial publication, with times on a logarithmic scale. We compare our proposal with the best-performing SoTA methods. We can notice that the performance drops after only one day, after which most competitors remain stuck below $70\%$, with the exception of DeFake, which achieves around $75\%$. Only the proposed method, which comprises more aggressive augmentation, is able to ensure an average accuracy of around $92\%$ even many days after the first online post.

| Method (bAcc(%)↑ / NLL↓) | GenImage | FakeInver. | SynthWildX | AVG |
|---|---|---|---|---|
| CNNDetect | 51.3 / 7.88 | 50.9 / 7.94 | 50.0 / 8.08 | 50.7 / 7.96 |
| DMID | 79.0 / 1.66 | 96.1 / 0.25 | 76.6 / 0.82 | 83.9 / 0.91 |
| LGrad | 39.6 / 7.12 | 77.2 / 2.27 | 46.3 / 5.53 | 54.3 / 4.97 |
| UnivFD | 65.5 / 1.31 | 52.8 / 2.19 | 52.5 / 2.55 | 56.9 / 2.02 |
| DeFake | 73.7 / 0.74 | 63.3 / 0.95 | 62.9 / 0.98 | 66.6 / 0.89 |
| DIRE | 47.3 / 6.54 | 51.8 / 13.4 | 52.5 / 4.60 | 50.5 / 8.19 |
| AntifakePrompt | 78.5 / - | 53.9 / - | 70.8 / - | 67.8 / - |
| NPR | 50.7 / 25.3 | 87.0 / 4.96 | 49.9 / 28.2 | 62.6 / 19.5 |
| FatFormer | 61.5 / 3.99 | 59.7 / 3.45 | 53.3 / 4.75 | 58.2 / 4.06 |
| FasterThanLies | 77.0 / 1.23 | 48.6 / 3.64 | 50.9 / 3.40 | 58.8 / 2.76 |
| RINE | 69.1 / 2.57 | 63.6 / 4.84 | 56.2 / 6.07 | 63.0 / 4.49 |
| AIDE | 60.2 / 1.01 | 76.9 / 0.54 | 55.0 / 1.05 | 64.0 / 0.86 |
| LaDeDa | 50.2 / 29.2 | 84.7 / 3.03 | 55.1 / 10.2 | 63.3 / 14.1 |
| C2P-CLIP | 75.5 / 0.57 | 59.6 / 0.82 | 57.4 / 0.91 | 64.2 / 0.76 |
| CoDE | 71.7 / 1.43 | 78.8 / 0.74 | 72.3 / 0.95 | 74.2 / 1.04 |
| B-Free (ours) | 89.3 / 0.27 | 86.2 / 0.32 | 95.6 / 0.14 | 90.4 / 0.24 |

Table 7. Comparison with SoTA methods in terms of average balanced accuracy and NLL on three additional datasets: GenImage, FakeInversion and SynthWildX.

# 6. Limitations

The method proposed in this work is trained using fake images that are self-conditioned reconstructions from the Stable Diffusion 2.1 model. If new generators with a completely different synthesis process are deployed in the future, it is very likely that this approach will fail (though the principles and ideas shared in this work may still hold). Further, being a data-driven approach, it can be adversarially attacked by a malicious user. This is a very relevant issue that we plan to address in our future work.

# 7. Conclusions

In this paper, we propose a new training paradigm for AI-generated image detection. First of all, we empirically demonstrate the importance of pairing real and fake images by constraining them to have the same semantic content. This helps to better extract common artifacts shared across diverse synthetic generators. Then we find that using aggressive data augmentation, in the form of partial manipulations, further boosts performance both in terms of accuracy and of calibration metrics. This is extremely relevant especially when working in realistic scenarios, such as image sharing over social networks. Our findings emphasize that careful dataset curation and a proper training strategy can be more impactful than developing more complex algorithms. We hope this work will inspire other researchers in the forensic community to pursue a similar direction, fostering advancements in bias-free training strategies.

Acknowledgments. We gratefully acknowledge the support of this research by a Google Gift. In addition, this work has received funding from the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093, and was partially supported by SERICS (PE00000014) under the MUR National Recovery and Resilience Plan, funded by the European Union - NextGenerationEU.

# References

[1] Roberto Amoroso, Davide Morelli, Marcella Cornia, Lorenzo Baraldi, Alberto Del Bimbo, and Rita Cucchiara. Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images. ACM Trans. Multimedia Comput. Commun. Appl., 2024. 3
[2] Quentin Bammey.
Synthbuster: Towards detection of diffusion model generated images. IEEE Open Journal of Signal Processing, 2023. 2, 3, 4 +[3] Lorenzo Baraldi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, Alessandro Nicolosi, and Rita Cucchiara. Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities. In ECCV, 2024. 2, 3, 4, 7 +[4] Clark Barrett, Brad Boyd, Elie Bursztein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, et al. Identifying and Mitigating the Security Risks of Generative AI. Foundations and Trends® in Privacy and Security, 6(1):1-52, 2023. 1 +[5] Giuseppe Cattaneo and Gianluca Roscigno. A possible pitfall in the experimental analysis of tampering detection algorithms. In International Conference on Network-Based Information Systems, 2014. 2 +[6] Bar Cavia, Eliahu Horwitz, Tal Reiss, and Yedid Hoshen. Real-Time Deepfake Detection in the Real-World. arXiv preprint arXiv:2406.09398, 2024. 4, 7 +[7] George Cazenavette, Avneesh Sud, Thomas Leung, and Ben Usman. FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion. In CVPR, pages 10759–10769, 2024. 2, 3, 4, 7 +[8] You-Ming Chang, Chen Yeh, Wei-Chen Chiu, and Ning Yu. AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors. arXiv preprint arXiv:2310.17419, 2023. 7 +[9] Riccardo Corvi, Davide Cozzolino, Giovanni Poggi, Koki Nagano, and Luisa Verdoliva. Intriguing properties of synthetic images: from generative adversarial networks to diffusion models. In CVPR Workshops, pages 973-982, 2023. 3 +[10] Riccardo Corvi, Davide Cozzolino, Giada Zingarini, Giovanni Poggi, Koki Nagano, and Luisa Verdoliva. On the detection of synthetic images generated by diffusion models. In ICASSP, pages 1-5, 2023. 3, 4, 6, 7 +[11] Davide Cozzolino, Giovanni Poggi, Riccardo Corvi, Matthias Nießner, and Luisa Verdoliva. Raising the Bar of AI-generated Image Detection with CLIP. 
In CVPR Workshops, pages 4356-4366, 2024. 2, 3, 4, 6, 7 + +[12] Duc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conotter, and Giulia Boato. RAISE: a raw images dataset for digital image forensics. In ACM MMSys, page 219-224. Association for Computing Machinery, 2015. 2 +[13] Timothee Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. Vision Transformers Need Registers. In ICLR, 2024. 5, 6 +[14] Jing Dong, Wei Wang, and Tieniu Tan. CASIA Image Tampering Detection Evaluation Database. In IEEE ChinaSIP, 2013. 2 +[15] Ricard Durall, Margret Keuper, and Janis Keuper. Watch your up-convolution: CNN based Generative Deep Neural Networks are failing to reproduce spectral distributions. In CVPR, pages 7890–7899, 2020. 3 +[16] Tarik Dzanic, Karan Shah, and Freddie Witherden. Fourier spectrum discrepancies in deep network generated images. In NeurIPS, pages 3022-3032, 2020. 3 +[17] Ziv Epstein, Aaron Hertzmann, et al. Art and the science of generative AI. Science, 380(6650):1110-1111, 2023. 1 +[18] Diego Gragnaniello, Davide Cozzolino, Francesco Marra, Giovanni Poggi, and Luisa Verdoliva. Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In ICME, pages 1-6, 2021. 2 +[19] Patrick Grommelt, Louis Weiss, Franz-Josef Pfreundt, and Janis Keuper. Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets. In ECCV Workshops, 2024. 2, 4 +[20] Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Pyounggeon Kim, and Jongwon Choi. FingerprintNet: Synthesized Fingerprints for Generated Image Detection. In ECCV, pages 76-94, 2022. 3 +[21] Dimitrios Karageorgiou, Quentin Bammey, Valentin Porcellini, Bertrand Goupil, Denis Teyssou, and Symeon Papadopoulos. Evolution of Detection Performance throughout the Online Lifespan of Synthetic Images. In ECCV Workshops, 2024. 7 +[22] Christos Koutlis and Symeon Papadopoulos. Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection. In ECCV, pages 394-411, 2024. 
2, 3, 6, 7 +[23] Romeo Lanzino, Federico Fontana, Anxhelo Diko, Marco Raoul Marini, and Luigi Cinque. Faster Than Lies: Real-time Deepfake Detection using Binary Neural Networks. In CVPR Workshops, pages 3771-3780, 2024. 3, 7 +[24] Li Lin, Neeraj Gupta, Yue Zhang, Hainan Ren, Chun-Hao Liu, Feng Ding, Xin Wang, Xin Li, Luisa Verdoliva, and Shu Hu. Detecting multimedia generated by large AI models: A survey. arXiv preprint arXiv:2204.06125, 2024. 1 +[25] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740-755, 2014. 5 +[26] Huan Liu, Zichang Tan, Chuangchuang Tan, Yunchao Wei, Jingdong Wang, and Yao Zhao. Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection. In CVPR, pages 10770-10780, 2024. 2, 3, 7 + +[27] Zhuang Liu and Kaiming He. A Decade's Battle on Dataset Bias: Are We There Yet? In ICLR, 2025. 2 +[28] Sara Mandelli, Paolo Bestagini, and Stefano Tubaro. When Synthetic Traces Hide Real Content: Analysis of Stable Diffusion Image Laundering. In WIFS, pages 1-6, 2024. 3 +[29] Francesco Marra, Diego Gragnaniello, Luisa Verdoliva, and Giovanni Poggi. Do GANs Leave Artificial Fingerprints? In MIPR, pages 506-511, 2019. 2 +[30] Utkarsh Ojha, Yuheng Li, and Yong Jae Lee. Towards universal fake image detectors that generalize across generative models. In CVPR, pages 24480-24489, 2023. 1, 2, 3, 4, 6, 7 +[31] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning Robust Visual Features without Supervision. Transactions on Machine Learning Research Journal, 2024. 5, 6 +[32] Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining Well Calibrated Probabilities Using Bayesian Binning. AAAI, 29(1), 2015.
4 +[33] Joaquin Quinonero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Scholkopf. Evaluating Predictive Uncertainty Challenge. In Machine Learning Challenges Workshop, pages 1-27, 2006. 4 +[34] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning Transferable Visual Models From Natural Language Supervision. In ICML, pages 8748-8763, 2021. 1, 6 +[35] Anirudh Sundara Rajan, Utkarsh Ojha, Jedidiah Schloesser, and Yong Jae Lee. On the Effectiveness of Dataset Alignment for Fake Image Detection. arXiv preprint arXiv:2410.11835, 2024. 2, 3 +[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. Stable Diffusion. https://github.com/Stability-AI/stablediffusion, 2022. 5 +[37] Zeyang Sha, Zheng Li, Ning Yu, and Yang Zhang. DEFAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. In ACM SIGSAC, pages 3418-3432, 2023. 3, 7 +[38] Chuangchuang Tan, Yao Zhao, Shikui Wei, Guanghua Gu, and Yunchao Wei. Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection. In CVPR, pages 12105-12114, 2023. 3, 7 +[39] Chuangchuang Tan, Huan Liu, Yao Zhao, Shikui Wei, Guanghua Gu, Ping Liu, and Yunchao Wei. Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. In CVPR, 2024. 7 +[40] Chuangchuang Tan, Renshuai Tao, Huan Liu, Guanghua Gu, Baoyuan Wu, Yao Zhao, and Yunchao Wei. C2P-CLIP: Injecting Category Common Prompt in CLIP to Enhance Generalization in Deepfake Detection. arXiv preprint arXiv:2408.09647, 2024. 3, 7 +[41] Diangarti Tariang, Riccardo Corvi, Davide Cozzolino, Giovanni Poggi, Koki Nagano, and Luisa Verdoliva. Synthetic Image Verification in the Era of Generative AI: What Works and What Isn't There Yet. IEEE Security & Privacy, 22: 37-49, 2024.
1, 4 +[42] Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In CVPR, 2011. 2 +[43] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. CNN-generated images are surprisingly easy to spot... for now. In CVPR, pages 8695-8704, 2020. 3, 4, 5, 6, 7 +[44] Zhendong Wang, Jianmin Bao, Wengang Zhou, Weilun Wang, Hezhen Hu, Hong Chen, and Houqiang Li. DIRE for diffusion-generated image detection. In ICCV, pages 22445-22455, 2023. 3, 7 +[45] Shilin Yan, Ouxiang Li, Jiayin Cai, Yanbin Hao, Xiaolong Jiang, Yao Hu, and Weidi Xie. A Sanity Check for AI-generated Image Detection. In ICLR, 2025. 7 +[46] Xingyi Yang, Daquan Zhou, Jiashi Feng, and Xinchao Wang. Diffusion Probabilistic Model Made Slim. In CVPR, pages 22552-22562, 2023. 3 +[47] Ning Yu, Larry S Davis, and Mario Fritz. Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints. In ICCV, pages 7556-7566, 2019. 2 +[48] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In ICCV, pages 6023-6032, 2019. 6 +[49] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In ICCV, pages 11975-11986, 2023. 6 +[50] Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu, Lingjie Liu, Adam Kortylewski, Christian Theobalt, and Eric Xing. Multimodal Image Synthesis and Editing: The Generative AI Era. IEEE TPAMI, 45(12): 15098-15119, 2021. 1 +[51] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond Empirical Risk Minimization. In ICLR, 2018. 6 +[52] Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and Simulating Artifacts in GAN Fake Images. In WIFS, 2019. 3 +[53] Mingjian Zhu, Hanting Chen, Qiangyu Yan, Xudong Huang, Guanyu Lin, Wei Li, Zhijun Tu, Hailin Hu, Jie Hu, and Yunhe Wang. GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image. 
NeurIPS, 36:77771-77782, 2023. 4 \ No newline at end of file diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/images.zip b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c85d1f3de547316ef3f89e04fae3257de1b93ddb --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e0797b098bd3198d95e1fad31fcf8d9a91afb90eff5ebb8f7af49c31c798788 +size 915239 diff --git a/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/layout.json b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a5bb8dce455d8a014021c8361a81f1c1813774ef --- /dev/null +++ b/CVPR/2025/A Bias-Free Training Paradigm for More General AI-generated Image Detection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae6aca937371b8dae6465c0d9ea4e1b0adadb87a4074a02184dc298193a05c81 +size 305214 diff --git a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_content_list.json b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..803ccec28b73099cd80e4a40a0587f04b85a28cf --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a08d0861d0d629b50e3e30a759fe21e416639ffba7125aa7afeb3c4b8aa89f72 +size 86490 diff --git 
a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_model.json b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b88ad213a15a2469c5f1e7532ec84c84c3ace2c4 --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d2f52463060c53d6ef01cefd79677114a0074ddee036927f6cf5c24be17a4a5 +size 112553 diff --git a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_origin.pdf b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9f88698989d2440515ea52fe4d01f66197b00dae --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/cce7ea3d-6315-432c-a78d-5b778092ad35_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d57a1b7294fed0759eeac21d5ad5e87653534dfebe53fa5c4475930d98d62892 +size 6514263 diff --git a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/full.md b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a1e2a8d665db9167e920d2e7dba1e868947698d7 --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/full.md @@ -0,0 +1,376 @@ +# A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model 
Training + +Kai Wang $^{1*}$ , Mingjia Shi $^{1*}$ , Yukun Zhou $^{1,2}$ , Zekai Li $^{1}$ , Zhihang Yuan $^{3}$ , Yuzhang Shang $^{4}$ , Xiaojiang Peng $^{2\dagger}$ , Hanwang Zhang $^{5}$ , Yang You $^{1}$ $^{1}$ National University of Singapore $^{2}$ Shenzhen Technology University $^{3}$ Infinigence-AI + $^{4}$ Illinois Institute of Technology $^{5}$ Nanyang Technological University +Code: NUS-HPC-AI-Lab/SpeedD + +# Abstract + +Training diffusion models is always a computation-intensive task. In this paper, we introduce a novel speed-up method for diffusion model training, called SpeeD, which is based on a closer look at time steps. Our key findings are: i) Time steps can be empirically divided into acceleration, deceleration, and convergence areas based on the process increment. ii) These time steps are imbalanced, with many concentrated in the convergence area. iii) The concentrated steps provide limited benefits for diffusion training. To address this, we design an asymmetric sampling strategy that reduces the frequency of steps from the convergence area while increasing the sampling probability in other areas. Additionally, we propose a weighting strategy to emphasize the importance of time steps with rapid-change process increments. As a plug-and-play and architecture-agnostic approach, SpeeD consistently achieves $3 \times$ acceleration across various diffusion architectures, datasets, and tasks. Notably, due to its simple design, our approach significantly reduces the cost of diffusion model training with minimal overhead. Our research enables more researchers to train diffusion models at a lower cost. + +# 1. Introduction + +Training diffusion models is usually not affordable for many researchers, especially those in academia. For example, DALL-E 2 [43] requires 40K A100 GPU days, and Sora [44] necessitates at least 126K H100 GPU days.
Therefore, accelerating the training of diffusion models has become urgent for broader generative AI and related applications. + +Recently, some acceleration methods for diffusion training have focused on time steps, primarily using re-weighting and re-sampling: 1) Re-weighting the time steps based on heuristic rules. P2 [8] and Min-SNR [17] use monotonic and single-peak weighting strategies according to signal-to-noise ratios (SNR) at different time steps. 2) Re-sampling the time steps. Log-Normal [28] assigns high sampling probabilities to the middle time steps of the diffusion process. CLTS [67] proposes a curriculum-learning-based time-step schedule, gradually tuning the sampling probability from uniform to Gaussian by interpolation for acceleration, as shown in Fig. 2. + +![](images/bd4448f221743bab547feb16c2a638a3ad5e4baca386b3a2a33b1d42938539e3.jpg) +Figure 1. A closer look at time steps: more than half of the time steps are almost pure noise and easy to learn. Motivation: designing efficient training by analyzing the process increment $\delta_t$ at different time steps. $\mathbf{E}(\delta_t)$ and $\mathrm{Var}(\delta_t)$ are the mean and variance of the process increments $\delta_t$. The two histograms show the proportions of the process increments at different noise levels (left) and the proportions of the time steps (right) in the three areas. The loss curve is obtained from DDPM [20] on CIFAR-10 [33]. + +To investigate the essence of the above accelerations, we take a closer look at the time steps. As shown on the left of Fig. 1, we visualize the changes in the mean and variance of the process increment $\delta_t \coloneqq x_{t+1} - x_t$ at time step $t$. The time steps are divided into three areas: acceleration, deceleration, and convergence. We identify the characteristics of the three areas of time steps.
One can easily find that the proportions of the three areas are imbalanced: a much larger number of time steps fall in the narrow convergence area. Besides, the process increments in the convergence area are almost identical noise; e.g., in DDPM, their distribution is almost pure white noise. To further explore the characteristics of these three areas, we visualize the training loss curve on the right of Fig. 1. The loss values from the convergence area are much lower than the others, which indicates that estimating this nearly identical noise is easy. + +![](images/6ea6c7ff1b125a5361e1a235c9dd699a8b2cffd3f9ced92b224cb4e38b30cf7d.jpg) + +![](images/ac0754449cea8275a1bfb82f147e3934c5bdbdacb4ff0ee5e3814557abf8b552.jpg) +Figure 3. Core designs of SpeeD. Red and blue lines denote the sampling and weighting curves. + +![](images/01f918de5483b6ddae368a4f2cf28eeb98f910b1f922208a2e6de2534470f276.jpg) + +![](images/5ce692b85de4049d31f5343c31d4f894aa506b4b0232df558626c74b08ca8d61.jpg) +Figure 2. Re-weighting and re-sampling methods cannot eliminate the redundancy and under-sampling issues. $w(t)$ and $\mathbf{P}(t)$ are the weighting and sampling curves, respectively. The probability of the convergence area being sampled remains, while that of the acceleration area is reduced faster. + +Previous acceleration works have achieved promising results, but the analysis of time steps remains relatively under-explored. P2 [8] and Min-SNR [17] are two re-weighting methods, with their weighting curves across time steps shown in Fig. 2. They employ uniform sampling of time steps, which includes too many easy samples from the convergence area during diffusion model training. Most re-sampling methods heuristically emphasize sampling the middle time steps, but they do not examine the difference between the acceleration and convergence areas. CLTS [67] gradually changes the sampling distribution from uniform to Gaussian by interpolation, as shown in Fig. 2.
The sampling probability of the acceleration area drops faster than that of the convergence area. The acceleration area is thus still under-sampled and therefore not well learned. + +Motivated by these analyses from a closer look at time steps, we propose SpeeD, a novel approach that improves the training efficiency of diffusion models. The core ideas are illustrated in Fig. 3. To mitigate the redundant training cost, in contrast to uniform sampling, we design an asymmetric sampling strategy that suppresses the sampling of time steps from the convergence area in each iteration. Meanwhile, we weight the time steps by the change rate of the process increment, emphasizing the importance of the rapid-change intervals. + +![](images/2d59fab8089acb36126c4bfbbdac881da8e555a026c2a5e6c5689e70e3794419.jpg) + +Our approach has the following characteristics. SpeeD is compatible with various diffusion architectures, e.g., U-Net [52] and DiT [47], with minimal modifications. For performance, SpeeD achieves non-trivial improvements over the baseline and other methods at the same training iterations. For efficiency, SpeeD consistently accelerates diffusion training by $3 \times$ across various tasks and datasets. It helps mitigate the heavy computational cost of diffusion model training, enabling more researchers to train a model at an acceptable expense. The extra time complexity of SpeeD is $\mathcal{O}(1)$: it adds only seconds of overhead while saving days of diffusion model training on datasets such as FFHQ, MetFaces, and ImageNet-1K. We hope this work can bring novel insights for efficient diffusion model training. + +# 2. Speeding Up Training: Time Steps + +In this section, we first introduce the preliminaries of diffusion models and then focus on a closer look at time steps and the key designs of our proposed SpeeD. + +# 2.1.
Preliminaries of Diffusion Models + +In conventional DDPM [20, 57], given data $x_0 \sim p(x_0)$, the forward process is a Markov-Gaussian process that gradually adds noise to obtain a sequence $\{x_1, x_2, \ldots, x_T\}$: + +$$ +q(x_t \mid x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t}\, x_{t-1}, \beta_t \mathbf{I}), +$$ + +$$ +q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), +$$ + +where $\mathbf{I}$ is the unit matrix, $T$ is the total number of time steps, and $q$ and $\mathcal{N}$ denote the forward process and the Gaussian distribution parameterized by the scheduled hyper-parameters $\{\beta_t\}_{t\in [T]}$, respectively. Perturbed samples are drawn via $x_{t} = \sqrt{\bar{\alpha}_{t}}\cdot x_{0} + \sqrt{1 - \bar{\alpha}_{t}}\cdot \epsilon$, $\epsilon \sim \mathcal{N}(0,\mathbf{I})$, where $\alpha_{t} = 1 - \beta_{t}$ and $\bar{\alpha}_t = \prod_{s = 1}^t\alpha_s$. + +For diffusion model training, the forward process is divided into pairs of samples and targeted process increments indexed by time step $t$, defined as $\delta_t \coloneqq x_{t+1} - x_t$. The diffusion model is expected to predict the next step from the given time step. The training loss [20] for diffusion models is to predict the normalized noise; we highlight its weighting and sampling modules below, where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$: + +$$ +L = \mathbb{E}_{\mu_{t}} \left[ w_{t} \| \epsilon - \epsilon_{\theta}(x_{t}, t) \|^{2} \right] := \int_{t} w_{t} \| \epsilon - \epsilon_{\theta}(x_{t}, t) \|^{2} \, \mathrm{d}\mu_{t}. \tag{1} +$$ + +Intuitively, a neural network $\epsilon_{\theta}$ is trained to predict the normalized noise $\epsilon$ added at a given time step $t$. The probability of a sample being drawn in the forward process is determined by the probability measure $\mu_{t}$, while the weight of the loss is determined by $w_{t}$ at the $t^{\mathrm{th}}$ time step. + +# 2.2.
Overview of SpeeD + +Based on the above observations and analyses, we propose SpeeD, a novel approach for achieving lossless training acceleration tailored to diffusion models. As illustrated in Fig. 3, SpeeD suppresses the trivial time steps from the convergence area and up-weights the rapid-change intervals between the acceleration and deceleration areas. Correspondingly, two main modules are proposed: asymmetric sampling and change-aware weighting. Asymmetric sampling uses a two-level step function to suppress and raise, respectively, the sampling probabilities of trivial and beneficial time steps. Change-aware weighting is based on the change rate of the process increment, $\partial_t\Psi (t)$ in Theorem 1. + +# 2.3. Asymmetric Sampling + +SpeeD adopts the time-step sampling probability $\mathbf{P}(t)$ as the step function in Eqn. 2 to construct the loss in Eqn. 1. We first define $\tau$ as the step threshold in $\mathbf{P}(t)$: time steps beyond $\tau$ are suppressed. Instead of the uniform sampling $\mathbf{U}(t) = 1 / T$, the sampling probability of time steps with $t \leq \tau$ is $k$ times that of time steps with $t > \tau$: + +$$ +\mathbf{P}(t) = \left\{ \begin{array}{ll} \frac{k}{T + \tau (k - 1)}, & 0 < t \leq \tau; \\ \frac{1}{T + \tau (k - 1)}, & \tau < t \leq T, \end{array} \right. \tag{2} +$$ + +where the suppression intensity $k \geq 1$ and $\tau \in (0, T]$. + +Threshold Selection $\tau$. According to Theorem 1, given a magnitude $r$, $\tau$ should satisfy $\hat{r}(\tau) > r$ so that $\tau > t_{\mathrm{d-c}}$, i.e., the suppressed time steps all lie in the convergence area. To maximize the number of suppressed time steps, we set $\tau \gets \sqrt{2T\log r / \Delta_{\beta} + T^{2}\beta_{0}^{2} / \Delta_{\beta}^{2}} - T\beta_{0} / \Delta_{\beta}$. + +# 2.4.
Change-Aware Weighting + +According to Theorem 1, a faster change of the process increment means fewer samples at the corresponding noise level. + +This leads to under-sampling in the acceleration and deceleration areas. Change-aware weighting is adopted to mitigate this under-sampling issue. The weights $\{w_{t}\}_{t\in [T]}$ are assigned based on the gradient of the variance over time, for which we use the approximation $\partial_t\hat{\Psi}_t$ from Theorem 1. + +The raw gradient $\partial_t\hat{\Psi}_t$ is practically unsuitable for weighting due to its small scale. Therefore, $\partial_t\hat{\Psi}_t$ is rescaled into the range $[1 - \lambda ,\lambda ]$ such that $\min \{1,\max_t\partial_t\hat{\Psi}_t\} \to \lambda$ and $\max \{0,\min_t\partial_t\hat{\Psi}_t\} \to 1 - \lambda$, where the symmetry ceiling $\lambda \in [0.5,1]$. $\lambda$ regulates the curvature of the weighting function: a higher $\lambda$ results in a sharper distinction between the weights of different time steps. + +# 2.5. Case Study: DDPM + +In DDPM, the diffusion model learns the noise added in the forward process at a given time step $t$. The noise is represented as $\epsilon$, the label in Eqn. 1, which is the normalized process increment at the given time step. This label specifies what the output of the diffusion model is aligned to. To take a closer look, we study the diffusion process $x_{t} \to x_{t + 1}$ through the nature of the process increment $\delta_{t}$ itself, rather than its normalized form $\epsilon$. According to Theorem 1 and Remark 1, based on the variation trends of the process increments $\delta_{t}$, we can distinguish three distinct areas: acceleration, deceleration, and convergence. + +Theorem 1 (Process increment in DDPM).
In DDPM's setting [20], the linear-schedule hyper-parameters $\{\beta_t\}_{t\in [T]}$ form an arithmetic series, the extreme deviation is $\Delta_{\beta} := \max_t\beta_t - \min_t\beta_t$, and $T$ is the total number of time steps. We then have the following bounds on the process increment $\delta_t \sim \mathcal{N}(\phi_t,\Psi_t)$, where $\phi_t := (\sqrt{\alpha_{t + 1}} -1)\sqrt{\bar{\alpha}_t} x_0$, $\Psi_t := [2 - \bar{\alpha}_t(1 + \alpha_{t + 1})]\mathbf{I}$, and $\mathbf{I}$ is the unit matrix: + +$$ +\text{Upper-bound:} \quad \|\phi_{t}\|^{2} \leq \hat{\phi}_{t} \|\mathbb{E} x_{0}\|^{2}, \tag{3} +$$ + +Lower-bound: $\Psi_t\succeq \hat{\Psi}_t\mathbf{I},$ where $\hat{\phi}_t\coloneqq \beta_{\mathrm{max}}\exp \{-(\beta_0 + \Delta_\beta t / 2T)t\}$ and $\hat{\Psi}_t\coloneqq 2 - 2\exp \{-(\beta_0 + \Delta_\beta t / 2T)t\}.$ + +Remark 1. The entire diffusion process can be approximated using the upper and lower bounds from Theorem 1, which we visualize in Figure 4. We can observe that the diffusion process divides into three areas: acceleration, deceleration, and convergence. The two boundary points of these areas are denoted as $t_{a-d}$ and $t_{d-c}$, with their definitions and properties outlined below. + +Definition of $t_{\mathbf{a - d}}$. The boundary between the acceleration and deceleration areas is determined by the inflection point of the parameter-variation curves, as illustrated in Figure 4. This inflection point is the peak where the process increment changes most rapidly. The key time step $t_{\mathrm{a - d}}$ between the acceleration and deceleration areas satisfies $t_{\mathrm{a - d}} = \arg \max_t\partial_t\hat{\Psi}_t$ and $\beta_{t_{a - d}} = \sqrt{\Delta_\beta / T}$ in our setting, where $\partial_t\hat{\Psi}_t = 2(\beta_0 + \Delta_\beta t / T)\exp \{-(\beta_0 + \Delta_\beta t / 2T)t\}$.
+ +![](images/c9453bd430bf734098f7073c2a64079f1ddecb7f85011449237bdddc4c4c1cf2.jpg) +(a) Upper-bound's factor: $\hat{\phi}_t$ + +![](images/97aea07d0d3538ba079278e588bccef94905753ecb7a0ac2494af15f6b5998a9.jpg) +(b) Lower-bound's factor: $\hat{\Psi}_t$ + +![](images/9f87c9b67585282e4e708a689fa2fc6c087f66a823e56116b6ef33e1eb29d349.jpg) +(c) Partial derivative: $\partial_t\hat{\Psi}_t$ +Figure 4. Visualization of Theorem 1: the three areas of acceleration, deceleration, and convergence. + +Definition of $t_{\mathbf{d - c}}$. The process is considered to be in the convergence area once the increments' variance lies within a fixed range of its maximum. The convergence area is identified by a magnitude $r$, where $1 - 1 / r$ is the ratio to the maximum variance. + +According to Theorem 1, the convergence area is defined by a reduction of the scale factor by one magnitude (i.e., a factor of $r$), and we have the lower bound of the magnitude, $\hat{r}_t := \exp \{ (\beta_0 + \Delta_\beta t / 2T) t \}$, employed as the threshold-selection function in Section 2.3. The time step $t$ is guaranteed to be in the convergence area when $\hat{r}_t > r$. + +Analyses. In the convergence area, the variations of $\delta_t$ stabilize, signifying that the process is becoming steady. + +This area covers a very large proportion of the overall time steps. On top of that, the training loss in this area is empirically low, which leads to redundant training cost on time steps with limited benefits for training. + +In the acceleration area, the variations of $\delta_t$ increase, indicating rapid change. Conversely, in the deceleration area, the variations of $\delta_t$ decrease, reflecting a slowing down of the process. Notably, near the peak between the acceleration and deceleration areas, the process exhibits the fastest changes. These time steps occupy only a small proportion. Beyond that, the training losses in this area are empirically high.
The issue is that this hard-to-learn area is nonetheless under-sampled, necessitating more sampling and training effort. + +Takeaways. Based on the analyses and observations, we provide the following takeaways: + +- The samples from the convergence area provide limited benefits for training. The sampling of time steps from this area should be suppressed. +- Pay more attention to the process increment's rapid-change area, which is hard to learn and whose time steps are fewer than those of the other areas. + +# 2.6. General Cases: Beyond DDPM + +This section generalizes the above findings on DDPM to broader settings. The findings concern the process increments $\delta_t \coloneqq x_{t+1} - x_t$ and the related differential $\mathrm{d}x$ of the forward process. The corresponding SDE [28] and its discretization are: + +$$ +\mathrm{d}x = \frac{\dot{s}}{s} x \,\mathrm{d}t + s \sqrt{\dot{\sigma}\sigma}\, \mathrm{d}w, \quad x_{t} = s_{t} x_{0} + s_{t} \sigma_{t} \epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbf{I}), +$$ + +where the scale factor $s = s_{t}$ and the noise standard deviation (std.) $\sigma = \sigma_{t}$ are the main design choices related to the factors $\hat{\phi}_t$ and $\hat{\Psi}_t$ of the process increment $\delta_t$ in Theorem 1 at time step $t$. + +Generalize Theorem 1: $s$ - $\sigma$ Scheduled Process Increments. The generalized process increment is $\delta \sim \mathcal{N}(\Delta x_0, \Sigma \mathbf{I})$, where $\Delta := s_+ - s$ and $\Sigma := s_+^2 \sigma_+^2 + s^2 \sigma^2$ across $t$. $\Delta$ and $\Sigma$ are continuous in $t$ without discretization, where $s_+$ and $\sigma_+$ are the right outer limits, i.e., $s(t + \mathrm{d}t)$. In discretization, $\Delta_t$ and $\Sigma_t$, indexed by $t$, correspond to the sample granularity of time step $t$.
Like Theorem 1, we study the variation of the process increments via $\dot{\Delta} = \dot{s}_+ - \dot{s}$ and $\dot{\Sigma} = \mathbf{m}^\top \dot{\mathbf{n}}$, where $\mathbf{m} = [s_+^2, \sigma_+^2, s^2, \sigma^2]^\top$ and $\mathbf{n} = [\sigma_+^2, s_+^2, \sigma^2, s^2]^\top$. This formulation involves only derivatives of the given schedule functions, which brings computational convenience. Tab. 8 (in the Appendix) provides all ingredients needed to calculate the curves of $\dot{\Delta}$ and $\dot{\Sigma}$ for the VP, VE, and EDM schedules. We also generalize the previous takeaways from the DDPM case study to the $s$ - $\sigma$ scheduled setting with the following analyses. + +$\sigma$ is better suited than $s$ for the design of sampling and weighting. This holds because $\sigma$ directly reflects the SNR (signal-to-noise ratio), and additionally because $s$ is usually adapted to heuristic motivations. In DDPM, corresponding to VP in Tab. 8, the SDE design simply maps data to normal noise. In VE, realistic diffusion processes motivate limiting the diffusion rate to $\sigma = \sqrt{t}$. Further, in EDM, the training objective becomes more elaborate while the schedule stays concise, bringing benefits to training. Its key ideas and designs are: 1) the std. of the inputs and targeted outputs of the neural network $F_{\theta}$ in EDM is constrained to 1 via preconditioning; 2) the weights $w_{t}$ in Eqn. 1 are allocated according to $c_{\mathrm{out}} w_{t} = 1$, where $c_{\mathrm{out}}$ is the scale factor of the F-prediction network $F_{\theta}$'s outputs, and is related to $\sigma$ and the std. of the data (sometimes normalized to 1). + +Sampling deserves more attention. The SDE design goal of most diffusion models nowadays is to add much larger noise to the data so that the samples cover a larger space. However, in terms of the process increment, this always results in a low signal-to-noise ratio for the late data when $t$ is large.
Either the standard deviation is too large, or the $s$ used to suppress it is too small. For instance, EDM does not bias the data distribution away from its expectation, since its scale gives $\dot{\Delta} = 0$; however, $\Sigma = (t + \mathrm{d}t)^2 + t^2$ increases quadratically as $t$ grows. In VP, $s \to 0$ as $t$ grows, which means the model must approximately recover the expectation. For these samples, which are not very informative, a single weight adjustment is not as efficient as reducing the sampling rate. + +# 3. Experiments + +In this section, visualizations are provided in Section 3.1. Comparisons with SOTA methods are conducted mainly on the ImageNet-1K dataset. The compared baselines and scheduler settings include SDE-VP/VE and EDM. Comparisons and ablations on the most relevant hyperparameters and tasks are empirically verified. + +# 3.1. Visualization + +The comparison of visualizations between SpeeD and DiT-XL/2 on the MetFaces and FFHQ datasets clearly demonstrates the superiority of SpeeD. As shown in Fig. 5, SpeeD achieves significantly better visual quality at just $20\mathrm{K}$ or $30\mathrm{K}$ training iterations compared to DiT-XL/2. This highlights that SpeeD reaches high-quality results much faster than the baseline, making it a more efficient and effective approach for training diffusion models. + +# 3.2. Implementation Details + +Datasets. We mainly investigate the effectiveness of our approach on the following datasets: MetFaces [27] and FFHQ [26] are used for unconditional tasks; Celeb-A [38], CIFAR-10 [33], and ImageNet-1K [9] are used for conditional image generation; and MS-COCO [35] is used to evaluate the generalization of our method on the text-to-image task. More details can be found in Appendix A. + +Network architectures. U-Net [52] and DiT [47] are two well-known architectures in the diffusion model area. We implement our approach on these two architectures and their variants.
We follow the same hyperparameters as the baseline by default. More information about the details of the architectures can be found in Appendix A.1. + +Training details. We train all models using AdamW [31, 39] with a constant learning rate of 1e-4. We set the maximum time step in training to 1000 and use the linear variance schedule. All images are augmented with horizontal flips if not stated otherwise. Following common practice in the generative modeling literature, an exponential moving average (EMA) [13] of the network weights is used with a decay of 0.9999, and the results are reported using the EMA model. Details can be found in Tab. 6. + +Evaluation protocols. At inference, we default to generating 10K images. Fréchet Inception Distance (FID) is used to evaluate both the fidelity and coverage of the generated images. + +# 3.3. Comparisons with other strategies + +Performance comparisons. Before our comparison, we first introduce our baseline, DiT-XL/2, a strong image generation backbone introduced in DiT [47]. We follow the hyperparameter settings from DiT and train DiT-XL/2 on MetFaces [27] and FFHQ [26], respectively. We compare our approach with two re-weighting methods, P2 [8] and Min-SNR [17], and two re-sampling methods, Log-Normal [28] and CLTS [67]. In the evaluation, we use 10K generated images to calculate FID [19] for comparison. For a detailed comparison, all approaches are trained for 50K iterations and we report the FID every 10K iterations. + +As shown in Tab. 1, compared to DiT-XL/2 and the re-weighting and re-sampling methods, our approach obtains the best FID results. Specifically, at the 50K iteration, we reduce the FID by at least 2.3 and 2.6 on MetFaces and FFHQ, respectively, compared to the other methods. Another interesting finding is that the re-weighting methods reduce the FID slowly at the beginning of training, i.e., from the 10K to 20K iterations. That aligns well with our analysis: re-weighting methods still involve many samples from the convergence area.
Based on the experimental results, the time steps from the convergence area indeed contribute little to training. + +Efficiency comparisons. In addition to the performance comparison, we also present the acceleration results of SpeeD. This naturally raises a question: how is the acceleration calculated? Here, we follow previous diffusion acceleration methods [12] and other efficient-training papers [48, 67]: visualizing the FID-Iteration curve and reporting the estimated highest acceleration ratio. We mainly compare with DiT-XL/2, one re-weighting method (Min-SNR [8]), and one re-sampling method (CLTS [67]) in Fig. 6. At the same training iterations, our approach achieves significantly better FID scores than the other methods. Notably, SpeeD accelerates Min-SNR and CLTS by 2.7 and 2.6 times, respectively (more comparisons in Appendix B). + +For the comparison with the baseline, i.e., DiT-XL/2, considering that 50K iterations might be too short for convergence, we extend the training iterations from 50K to 200K. In this long-term training, we speed up DiT-XL/2 by 4 times without performance drops, which shows the strong efficiency of our proposed method. Most importantly, we can save $3\sim 5$ times the overall training cost with very minimal overhead. For instance, we save 48 hours of training time for DiT-XL/2 (measured on 8 Nvidia A6000 GPUs) at an overhead of mere seconds. + +# 3.4. Generalization Evaluation + +Cross-architecture robustness evaluation. There are mainly two architectures for diffusion models: U-Net [52] and DiT [47]. SpeeD is not tied to a specific model architecture and is therefore a model-agnostic approach. We implement our method with DiT-XL/2 and U-Net on MetFaces, FFHQ, and ImageNet-1K, respectively. We default to training the models for 50K iterations on MetFaces and FFHQ, and 400K on ImageNet-1K. To ensure a fair comparison, we keep all hyperparameters the same and report the FID scores at 50K iterations. As shown in Tab. 2, SpeeD consistently achieves significantly better performance under all settings, which indicates the strong generality of SpeeD across architectures and datasets. + +Cross-schedule robustness evaluation. In the diffusion process, there are various time-step schedules, including the linear [20], quadratic, and cosine [42] schedules. We verify SpeeD's robustness across these schedules, reporting FID and Inception Score (IS) [55] as metrics. As shown in Tab. 3, SpeeD achieves significant improvements on the linear, quadratic, and cosine schedules in both FID and IS, showing the generality of SpeeD across schedules. + +Cross-task robustness evaluation. We apply SpeeD to the text-to-image task to evaluate the generality of our method. For text-to-image generation, we first use CLIP [49] to extract text embeddings for the MS-COCO [35] dataset. Then, DiT-XL/2 is trained as a text-to-image model to serve as our baseline. Following prior work [54], the FID score and CLIP score on the MS-COCO validation set are the evaluation metrics for quantitative analyses. As illustrated in Tab. 4, we obtain better FID and CLIP scores than our baseline. + +![](images/ae0ef9eb9be84e7f51e580b140b466bdefba6b47ec18252686412bebf0cb.jpg) +Figure 5. Our SpeeD obtains significant visual improvements over the baseline. More visualizations on other datasets and tasks can be found in Appendix D. + +![](images/0f013840a1067f1d382460d4fe229190d86eb9a9b32b31b551ad2ecce4163f9c.jpg) +Table 1. The FID $\downarrow$ comparison to the baseline DiT-XL/2, the re-weighting methods P2 and Min-SNR, and the re-sampling methods Log-Normal and CLTS. All methods are trained with DiT-XL/2 for 50K iterations, and we report the FID every 10K iterations. Our approach achieves the best results on the MetFaces and FFHQ datasets. Bold entries are the best results. Following previous work [14], more results at 100K iterations and for longer training with different schedules are in Appendix B.1. + +| method | MetFaces (DiT) | MetFaces (U-Net) | FFHQ (DiT) | FFHQ (U-Net) | ImageNet (DiT) | ImageNet (U-Net) |
| --- | --- | --- | --- | --- | --- | --- |
| baseline | 36.61 | 46.77 | 12.86 | 17.37 | 26.74 | 45.71 |
| SpeeD | 21.13 | 22.88 | 9.95 | 16.52 | 20.63 | 37.33 |
| improve | 15.48 | 23.89 | 2.91 | 0.85 | 6.11 | 7.38 |

Table 2. Cross-architecture robustness evaluation (FID $\downarrow$). 'baseline' denotes training diffusion models without any acceleration strategy. 'DiT' refers to the DiT-XL/2 network. All FID scores are obtained on 10K generated images. + +| method | FID ↓ (linear) | IS ↑ (linear) | FID ↓ (quadratic) | IS ↑ (quadratic) | FID ↓ (cosine) | IS ↑ (cosine) |
| --- | --- | --- | --- | --- | --- | --- |
| baseline | 12.86 | 4.21 | 11.12 | 4.21 | 18.31 | 4.10 |
| SpeeD | 9.95 | 4.23 | 9.78 | 4.29 | 17.79 | 4.15 |
| improve | 2.91 | 0.02 | 1.34 | 0.08 | 0.52 | 0.05 |

Table 3. Comparisons of FID and IS scores on FFHQ with different time-step schedules. We evaluate the generalization of our approach on the linear, quadratic, and cosine schedules, using the vanilla DiT-XL/2 as the baseline.
methodsFID↓CLIP score↑
baseline27.410.237
SpeeD25.300.244
+ +Table 4. Text to image. + +# 3.5. Compatibility with other acceleration methods + +Until now, we evaluate the effectiveness and generalization of our proposed method: SpeeD is a task-agnostic and architecture-agnostic diffusion acceleration approach. Is SpeeD compatible with other acceleration techniques? To investigate this, we apply our approach with two re + +![](images/4b4c35178fab9c05db4cce86cf0d6ea19824f5da873fb3937b46ed1a78ea0a5b.jpg) +Figure 6. We plot the FID-Iteration curves of our approach and other methods on the MetFaces dataset. SpeeD accelerates other methods obviously. The horizontal and vertical axes represent the training iterations and log(FID), respectively. Detailed ones are in appendix. + +![](images/88325caa93443b72cf6f8934450cc2450964858ab224a593df46ef86dc72dfc5.jpg) + +![](images/189f7900b8288522f7bee220ff5c5f32d9105ed68e89604a281e5a8b6b04a538.jpg) + +![](images/987f4487c3a7f69630d3d439f3d0ddc2a14958ee61d7cda7891caf209cd54c3d.jpg) +(a) FID-Iteration curve comparisons on ImageNet-1K. + +![](images/bd33d96d796f1b739ea551fa4dd3591c4820c4812f4c9021e9f04cf374622db8.jpg) +(b) FID-Iteration curve comparisons on CIFAR-10. +Figure 7. SpeeD works well with recent acceleration algorithms and can consistently further accelerate the diffusion model training. We plot the figures using log(FID) for better visualization. + +cent proposed algorithms: masked diffusion transformer (MDT) [12] and fast diffusion model (FDM) [66]. + +MDT + SpeeD. MDT [12] proposes a masked diffusion transformer method, which applies a masking scheme in latent space to enhance the contextual learning ability of diffusion probabilistic models explicitly. MDT can speed up the diffusion training by 10 times. They evaluate their MDT with DiT-S/2. We just inject our SpeeD on their MDT and report FID-Iteration curves for comparison in Fig. 7a. All the results are obtained on ImageNet-1K dataset. 
Our approach further accelerates MDT by at least $4\times$, which indicates the good compatibility of SpeeD.

FDM + SpeeD. Fast Diffusion Model [66] is a diffusion process acceleration method inspired by the classic momentum approach for solving optimization problems in parameter space. By integrating momentum into the diffusion process, it achieves performance similar to EDM [28] at lower training cost. Based on the official implementation, we compare EDM, FDM, and FDM + SpeeD on CIFAR-10 ($32 \times 32$ images). FDM accelerates EDM by about $1.6\times$, and SpeeD further reduces the training cost by around $1.6\times$.

# 3.6. Ablation Experiments

We perform extensive ablation studies to illustrate the characteristics of SpeeD. Unless stated otherwise, the ablation experiments are conducted on the FFHQ dataset with the U-Net model. We ablate our designed components: asymmetric sampling (abbreviated as asymmetric) and change-aware weighting (abbreviated as CAW), the suppression intensity $k$ in asymmetric sampling defined in Eqn. 2, and the symmetry ceiling $\lambda$ for weighting in Sec. 2.4.

Evaluating the components of SpeeD. Our approach includes two strategies: asymmetric sampling and change-aware weighting. We ablate each component in SpeeD. As illustrated in Tab. 5a, combining our proposed strategies achieves the best results. Using the weighting and sampling strategies alone improves the baseline by 0.6 and 1.5 FID points, respectively, indicating that filtering out most samples from the convergence area is the more beneficial choice for training.

Evaluating the suppression intensity $k$. To prioritize the training of critical time steps, asymmetric sampling focuses on the time steps outside the convergence area: these time steps are assigned a sampling probability $k$ times higher than that of time steps within the convergence area.
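The two components just described can be sketched together in a few lines. This is an illustrative sketch only: the exact form of Eqn. 2 and the change-aware raw weights are defined in the paper, while the step-function probabilities, the convergence boundary `tau`, and the helper names below are our own assumptions.

```python
import numpy as np

def asymmetric_probs(T, tau, k):
    # Time steps outside the convergence area (here: t < tau) are sampled
    # with probability k times that of steps inside it (t >= tau).
    w = np.where(np.arange(T) < tau, float(k), 1.0)
    return w / w.sum()

def change_aware_weights(raw, lam):
    # Min-max rescale raw per-step weights into [1 - lam, lam]; the midpoint
    # of the rescaled interval is fixed at 0.5 (symmetry ceiling lam).
    r = (raw - raw.min()) / (raw.max() - raw.min())
    return (1.0 - lam) + r * (2.0 * lam - 1.0)

T = 1000
probs = asymmetric_probs(T, tau=700, k=5)                       # sampling distribution over t
weights = change_aware_weights(np.linspace(0.0, 1.0, T), lam=0.6)  # per-step loss weights
t = np.random.default_rng(0).choice(T, size=64, p=probs)        # time steps for one batch
```

With the paper's best settings ($k = 5$, $\lambda = 0.6$), the weights fall in $[0.4, 0.6]$ and steps outside the convergence area are drawn five times as often.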
A larger suppression intensity $k$ means a larger training gap between the different areas of time steps. We evaluate suppression intensities $k$ from 1 to 25 and report the FID scores in Tab. 5b. We observe that $k = 5$ achieves the best performance, while a very large suppression intensity worsens the FID score considerably, meaning that it hurts the diversity of the modeled data: the samples in the convergence area, although very close to pure noise, still contain some useful information, and discarding them almost entirely degrades the acceleration performance.

| sampling curve | CAW | FID ↓ |
| --- | --- | --- |
| uniform | | 17.37 |
| uniform | ✓ | 16.75 |
| asymmetric | | 15.82 |
| asymmetric | ✓ | 15.07 |

(a) Components of SpeeD. Suppressing some trivial time steps does help.

| suppression intensity $k$ | FID ↓ |
| --- | --- |
| 1 | 15.01 |
| 5 | 14.86 |
| 10 | 16.97 |
| 25 | 25.59 |

(b) Suppression intensity $k$. Excessive suppression decreases the diversity of the modeled data.

| symmetry ceiling $\lambda$ | FID ↓ |
| --- | --- |
| 0.5 | 15.46 |
| 0.6 | 14.86 |
| 0.8 | 16.83 |
| 1.0 | 23.77 |

(c) Symmetry ceiling $\lambda$. Weighting, serving as a temperature factor, should be applied in moderation.

Table 5. Ablation studies on FFHQ dataset. Default settings and baseline are in purple and gray.

Evaluating the symmetry ceiling $\lambda$. The symmetry ceiling $\lambda$ is a hyper-parameter that regulates the curvature of the weighting function and is set in the interval [0.5, 1]. The midpoint of the re-scaled weight interval is fixed at 0.5; the symmetry ceiling $\lambda$ is the right boundary of the interval, and the left boundary is $1 - \lambda$. A higher $\lambda$ results in a larger weight interval and a more pronounced distinction in weights between different time steps. In Tab. 5c, settings with $\lambda \leq 0.8$ obtain better FID scores than the baseline, which indicates that SpeeD is relatively robust to the symmetry ceiling $\lambda$. Further increasing $\lambda$ leads to performance degradation; the weighting should be applied in moderation.

# 4. Related Work

We discuss related work on diffusion models and their training acceleration. The most related works are as follows; more discussion is in Appendix C.

Diffusion models Diffusion models have emerged as the dominant approach in generative tasks [7, 23, 54, 62], outperforming other generative methods including GANs [4, 24, 82], VAEs [32], and flow-based models [10]. These methods [20, 28, 58] are based on non-equilibrium thermodynamics [25, 57], where the generative process is modeled as a reverse diffusion process that gradually constructs the sample from a noise distribution [57]. Previous works focused on enhancing diffusion models' generation quality and alignment with users in visual generation.
To generate high-resolution images, Latent Diffusion Models (LDMs) [51, 54] perform the diffusion process in latent space instead of pixel space, employing a VAE as the encoder and decoder for latent representations.

Acceleration in diffusion models To reduce computational costs, previous works accelerate diffusion models in both training and inference. For training acceleration, the earliest works [8, 17] assign different weights to each time step in the Mean Square Error (MSE) loss to improve learning efficiency. Other training-acceleration methods have also been proposed, e.g., improved network architectures [53, 64] and diffusion algorithms [28, 66]. Masked modeling [12, 74] has recently been proposed for training diffusion models. Works [14, 30, 45, 46] provide explanatory observations from the perspective of multi-task learning. SpeeD is compatible with these methods, e.g., [69, 78, 79]. In the field of sampling acceleration, a number of works tackle the slow inference speed of diffusion models by using fewer reverse steps while maintaining sample quality, including DDIM [58], Analytic-DPM [1], and DPM-Solver [40].

Conditional generation. To better control the generation process with various conditions, e.g., image style, text prompt, and stroke, Classifier-Free Guidance (CFG) proposes a guidance strategy for diffusion models that balances sample quality and prompt alignment. ControlNet [71] reuses the large-scale pre-trained layers of source models to build a deep, strong encoder that learns specific conditions. Benefiting from the success of diffusion models in image generation, video generation has recently gained momentum, with promising results reported in recent works [34, 41].

# 5. Conclusion

We propose SpeeD, an approach for accelerating diffusion training by closely examining time steps.
The core insights are: 1) suppressing the sampling probabilities of time steps that offer limited benefit to diffusion training (i.e., those with extremely small losses), and 2) emphasizing the importance of time steps with rapidly changing process increments. SpeeD demonstrates strong robustness across various architectures and datasets, achieving significant acceleration on multiple diffusion-based image generation tasks. Extensive theoretical analyses are also provided in this paper.

Following previous works [8, 11, 14, 17, 46, 47, 67], our experiments train for 50K, 100K, and 400K iterations on MetFaces $256 \times 256$, FFHQ $256 \times 256$, and ImageNet-1K, respectively. The original DiT work [47] uses 7M iterations to train on ImageNet, which 1) is not a level of resource usage generally achievable in research; meanwhile, 2) our focus is the acceleration effect of training, so a direct comparison at the ultimate convergence stage is unnecessary for this work.

# Acknowledgments

This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-08-008). This work is partially supported by the National Natural Science Foundation of China (62176165), the Stable Support Projects for Shenzhen Higher Education Institutions (20220718110918001), and the Natural Science Foundation of Top Talent of SZTU (GDRC202131). Yang You's research group is sponsored by an NUS startup grant (Presidential Young Professorship), a Singapore MOE Tier-1 grant, a ByteDance grant, an ARCTIC grant, an SMI grant (WBS number: A-8001104-00-00), an Alibaba grant, and a Google grant for TPU usage. We thank Tianyi Li, Yuchen Zhang, Yuxin Li, Zhaoyang Zeng, and Yanqing Liu for their comments on this work. Xiaojiang Peng, Hanwang Zhang, and Yang You are equal advisors. Xiaojiang Peng is the corresponding author.

# References

[1] Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang.
Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. arXiv preprint arXiv:2201.06503, 2022.8 +[2] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science., 2(3):8, 2023. 4 +[3] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 4 +[4] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018. 8 +[5] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. 4 +[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 4 +[7] Ling-Hao Chen, Jiawei Zhang, Yewen Li, Yiren Pang, Xiaobo Xia, and Tongliang Liu. Humanmac: Masked motion completion for human motion prediction. In ICCV, pages 9544-9555, 2023. 8 +[8] Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon. Perception prioritized training of diffusion models. In CVPR, pages 11472-11481, 2022. 1, 2, 5, 6, 8 +[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, + +and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009. 5 +[10] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. 
8 +[11] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024. 6, 8, 4 +[12] Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan. Masked diffusion transformer is a strong image synthesizer. In ICCV, pages 23164-23173, 2023. 5, 7, 8 +[13] Evetette S Gardner Jr. Exponential smoothing: The state of the art. JoF, 4(1):1-28, 1985. 5 +[14] Hyojun Go, Kim, Yunsung Lee, Seunghyun Lee, Shinhyeok Oh, Hyeongdon Moon, and Seungtaek Choi. Addressing negative transfer in diffusion models. In NeurIPS, 2023. 6, 8 +[15] Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 4 +[16] Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence, 44(11):7436-7456, 2021. 5 +[17] Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, and Baining Guo. Efficient diffusion training via min-snr weighting strategy. In ICCV, pages 7441-7451, 2023. 1, 2, 5, 6, 8 +[18] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity long video generation. 2022. 4 +[19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. 2017. 5 +[20] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. pages 6840-6851, 2020. 1, 2, 3, 6, 8 +[21] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. 
Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022. 4 +[22] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022. 4 +[23] Ilia Igashov, Hannes Stärk, Clément Vignac, Arne Schneuing, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael Bronstein, and Bruno Correia. Equivariant 3d-conditional diffusion model for molecular linker design. NAT MACH INTELL, pages 1–11, 2024. 8 +[24] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, pages 1125-1134, 2017. 8 + +[25] Christopher Jarzynski. Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach. PHYS REV E, 56(5), 1997. 8 +[26] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pages 4401-4410, 2019. 5 +[27] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. pages 12104-12114, 2020. 5 +[28] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. pages 26565-26577, 2022. 1, 4, 5, 7, 8, 3 +[29] Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. arXiv preprint arXiv:2310.02279, 2023. 4 +[30] Jin-Young Kim, Hyojun Go, Soonwoo Kwon, and Hyunj-Gyoon Kim. Denoising task difficulty-based curriculum for training diffusion models. arXiv preprint arXiv:2403.10348, 2024.8 +[31] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.5 +[32] Diederik P Kingma and Max Welling. 
Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 8 +[33] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. 1, 5 +[34] PKU-Yuan Lab and Tuzhan AI etc. Open-sora-plan, 2024. 8 +[35] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740-755. Springer, 2014. 5, 6 +[36] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 4 +[37] Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, et al. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. In ICLR, 2023. 4 +[38] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaou Tang. Deep learning face attributes in the wild. In ICCV, 2015. 5 +[39] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 5, 1 +[40] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. pages 5775-5787, 2022. 8 +[41] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Zwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024. 8 +[42] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In ICML, pages 8162-8171. PMLR, 2021. 6 +[43] OpenAI. Dalle-2, 2023. 1 + +[44] OpenAI. Sora, 2024. 1 +[45] Byeongjun Park, Hyojun Go, Jin-Young Kim, Sangmin Woo, Seokil Ham, and Changick Kim. Switch diffusion transformer: Synergizing denoising tasks with sparse mixture-of-experts. In ECCV, 2024. 8 +[46] Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, and Changick Kim. Denoising task routing for diffusion models. In ICLR, 2024. 
8 +[47] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, pages 4195-4205, 2023. 2, 5, 6, 8 +[48] Ziheng Qin, Kai Wang, Zangwei Zheng, Jianyang Gu, Xi-angyu Peng, Zhaopan Xu, Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, et al. Infobatch: Lossless training speed up by unbiased dynamic data pruning. arXiv preprint arXiv:2303.04947, 2023. 5, 4 +[49] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763. PMLR, 2021. 6, 4 +[50] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 21(140):1-67, 2020. 4 +[51] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 8, 4 +[52] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241. Springer, 2015. 2, 5 +[53] Dohoon Ryu and Jong Chul Ye. Pyramidal denoising diffusion probabilistic models. arXiv preprint arXiv:2208.01864, 2022. 8 +[54] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. pages 36479-36494, 2022. 6, 8, 4 +[55] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. 2016. 6 +[56] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. 
Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022.4 +[57] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pages 2256-2265. PMLR, 2015. 2, 8 +[58] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 8 + +[59] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. 3 +[60] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023. 4 +[61] Endre Süli and David F Mayers. An introduction to numerical analysis. Cambridge university press, 2003. 1 +[62] Kai Wang, Zhaopan Xu, Yukun Zhou, Zelin Zang, Trevor Darrell, Zhuang Liu, and Yang You. Neural network diffusion. arXiv preprint arXiv:2402.13144, 2024. 8 +[63] Xiang* Wang, Hangjie* Yuan, Shiwei* Zhang, Dayou* Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability. arXiv preprint arXiv:2306.02018, 2023. 4 +[64] Zhendong Wang, Yifan Jiang, Huangjie Zheng, Peihao Wang, Pengcheng He, Zhangyang Wang, Weizhu Chen, Mingyuan Zhou, et al. Patch diffusion: Faster and more data-efficient training of diffusion models. 2024. 8 +[65] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. arXiv preprint arXiv:2212.11565, 2022.4 +[66] Zike Wu, Pan Zhou, Kenji Kawaguchi, and Hanwang Zhang. Fast diffusion model. arXiv preprint arXiv:2306.06991, 2023. 7, 8 +[67] Tianshuo Xu, Peng Mi, Ruilin Wang, and Yingcong Chen. 
Towards faster training of diffusion models: An inspiration of a consistency phenomenon. arXiv preprint arXiv:2404.07946, 2024. 1, 2, 5, 6, 8 +[68] Ruihan Yang, Prakhar Srivastava, and Stephan Mandt. Diffusion probabilistic modeling for video generation. arXiv preprint arXiv:2203.09481, 2022. 4 +[69] Hu Yu, Li Shen, Jie Huang, Hongsheng Li, and Feng Zhao. Unmasking bias in diffusion model training. In ECCV, 2024. 8 +[70] David Junhao Zhang, Jay Zhangjie Wu, Jia-Wei Liu, Rui Zhao, Lingmin Ran, Yuchao Gu, Difei Gao, and Mike Zheng Shou. Show-1: Marrying pixel and latent diffusion models for text-to-video generation. arXiv preprint arXiv:2309.15818, 2023.4 +[71] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In ICCV, pages 3836-3847, 2023. 8, 4 +[72] Wangbo Zhao, Yizeng Han, Jiasheng Tang, Zhikai Li, Yibing Song, Kai Wang, Zhangyang Wang, and Yang You. A stitch in time saves nine: Small vlm is a precise guidance for accelerating large vlms. arXiv preprint arXiv:2412.03324, 2024.5 +[73] Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Yibing Song, Gao Huang, Fan Wang, and Yang You. Dynamic diffusion transformer. arXiv preprint arXiv:2410.03456, 2024. 5 + +[74] Hongkai Zheng, Weili Nie, Arash Vahdat, and Anima Anandkumar. Fast training of diffusion models with masked transformers. arXiv preprint arXiv:2306.09305, 2023. 8 +[75] Tianyi Zheng. Enfomax: Domain entropy and mutual information maximization for domain generalized face antispoofing. arXiv preprint arXiv:2302.08674, 2023. +[76] Tianyi Zheng. Mcae: Masked contrastive autoencoder for face anti-spoofing. arXiv preprint arXiv:2302.08674, 2023. +[77] Tianyi Zheng, Cong Geng, Peng-Tao Jiang, Ben Wan, Hao Zhang, Jinwei Chen, Jia Wang, and Bo Li. Non-uniform timestep sampling: Towards faster diffusion model training. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 7036-7045, 2024. 
+[78] Tianyi Zheng, Cong Geng, Peng-Tao Jiang, Ben Wan, Hao Zhang, Jinwei Chen, Jia Wang, and Bo Li. Non-uniform timestep sampling: Towards faster diffusion model training. In ACM MM 2024, 2024. 8, 4 +[79] Tianyi Zheng, Peng-Tao Jiang, Ben Wan, Hao Zhang, Jinwei Chen, Jia Wang, and Bo Li. Beta-tuned timestep diffusion model. In ECCV, 2024. 8, 4 +[80] Tianyi Zheng, Qinji Yu, Zhaoyu Chen, and Jia Wang. Famim: A novel frequency-domain augmentation masked image model framework for domain generalizable face antispoofing. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4470-4474. IEEE, 2024. +[81] Daquan Zhou, Weimin Wang, Hanshu Yan, Weiwei Lv, Yizhe Zhu, and Jiashi Feng. Magicvideo: Efficient video generation with latent diffusion models. arXiv preprint arXiv:2211.11018, 2022.4 +[82] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pages 2223-2232, 2017. 
8 \ No newline at end of file diff --git a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/images.zip b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2c0bbf5173626dc753c815f91831bc6fd28276f7 --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cef8489bee6de30815ed3e73a91811070b95ae5efa5651d0c31a1116cd58d8a4 +size 473168 diff --git a/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/layout.json b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3463ae2637690263fd46539214e4888310cfc280 --- /dev/null +++ b/CVPR/2025/A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3066ab37cc4d8d30fe21117dc734a8819c748d83474a07cfc14302064f58f277 +size 536070 diff --git a/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_content_list.json b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3e4252944ab85e043644e3567e9285012825c94c --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98daf9ab94114eb423ce58eced6ca0edb6b27dbb97bf054646f5a86ff67759c8 +size 77550 diff --git a/CVPR/2025/A 
Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_model.json b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9ec2105fd6bafd4e3346ab4c7af5f07a6bf7cd71 --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99b12a900d7c54377955304005239cee5c70474078298ad959452be05dd8c15c +size 99803 diff --git a/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_origin.pdf b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3ae23dd6dd65737f12f23f955236c441b9a56fab --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/f029460e-1f93-4e71-9a1a-d734965310bb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4545922151462005d606eee4f7c7817f74f534b80d38590af963351fae364aa +size 1551652 diff --git a/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/full.md b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e3beb62ed7b992d4ad01d4a9f36c01478349b91a --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/full.md @@ -0,0 +1,290 @@ +# A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation + +Andrew Z. 
Wang$^{1,3\dagger}$  Songwei Ge$^{2\dagger}$  Tero Karras$^{3}$  Ming-Yu Liu$^{3}$  Yogesh Balaji$^{3}$

$^{1}$University of Washington  $^{2}$University of Maryland  $^{3}$NVIDIA

# Abstract

Both text-to-image generation and large language models (LLMs) have made significant advancements. However, many text-to-image models still employ the somewhat outdated T5 and CLIP as their text encoders. In this work, we investigate the effectiveness of using modern decoder-only LLMs as text encoders for text-to-image diffusion models. We build a standardized training and evaluation pipeline that allows us to isolate and evaluate the effect of different text embeddings. We train a total of 27 text-to-image models with 12 different text encoders to analyze the critical aspects of LLMs that could impact text-to-image generation, including the approaches to extract embeddings, different LLM variants, and model sizes. Our experiments reveal that the de facto way of using last-layer embeddings as conditioning leads to inferior performance. Instead, we explore embeddings from various layers and find that using layer-normalized averaging across all layers significantly improves alignment with complex prompts. Most LLMs with this conditioning outperform the baseline T5 model, showing enhanced performance in advanced visio-linguistic reasoning skills.

# 1. Introduction

The field of text-to-image generation has seen tremendous progress in recent years, driven by improvements in diffusion architectures [6, 11, 34], scalable models [39, 40], and better training procedures [26, 27, 34]. Central to the success of these models is the use of pre-trained text encoders that translate natural language prompts into representations suitable for guiding image generation. However, the impact of different text encoder models on image generation quality remains largely unexplored.
+ +Most contemporary image generators use traditional text encoders such as T5 [42] or CLIP [41] for obtaining text embeddings [1, 2, 40, 43, 46]. However, the development of encoder-decoder models has slowed down while there is an increasing interest in decoder-only large language models (LLMs) due to their great scalability. This raises a pressing + +![](images/278e074f17367d0cc0add2d8e75c055afa90ad529618da1b77ee1a803c949a10.jpg) +Figure 1. VQA scores for text-to-image models using T5-XXL and Mistral-7B as the text encoders. The models use embeddings extracted from the last layer of T5 (orange) and Mistral (green), and layer-normalized average embeddings of Mistral (blue). Our results show that Mistral performs worse than T5 when only using the last layer. However, using all layers significantly improves Mistral's performance, surpassing T5 in all aspects. + +question of whether these decoder-only LLMs are suitable for text-to-image generation tasks. + +In this work, we examine how the rich linguistic representations from decoder-only LLMs can be leveraged to enhance text-to-image generation and investigate the most effective strategies for incorporating them. Our focus is on autoregressive language models that use causal attention and are trained with a next-token prediction objective. We address three primary research questions: + +1. To what extent can decoder-only LLMs be utilized to enhance text-to-image generation? What methodologies yield the largest improvements? +2. LLM fine-tuned embedding models have surpassed encoder-decoder models in contextual semantic comprehension across various tasks [38]. Can these embedding models contribute to improving text-to-image alignment? +3. Does increasing the model size of decoder-only LLMs + +lead to measurable improvements in the text-to-image generation performance? 
+ +To address these questions, we develop a standardized text-to-image training and evaluation pipeline based on the Stable Diffusion v2 (SD2) [44] architecture. In each experiment, only the text encoder is replaced, while the rest of the architecture and training recipe remains unchanged. This allows us to isolate and understand the effect of different text representations. We explore a wide variety of models as text encoders, including two traditional text-to-image text encoders, seven open-source LLMs, and three fine-tuned embedding models. We quantitatively evaluate the models using VQAScore [29] on GenAI-Bench [29], and conduct in-depth analysis to understand the strengths and limitations of using LLMs as the text encoders. Our main findings are as follows: + +- Using text embeddings from the last layer of the LLM is sub-optimal: To our knowledge, all current text-to-image diffusion models utilize the embeddings from the final layer of the text encoders as the conditional embedding [2, 46]. However, our results reveal that this approach does not translate effectively to LLMs, leading to inferior results compared to using T5. +- Aggregating features from multiple layers outperforms using a single layer: We find that using embeddings normalized and averaged across all layers yields far better performance than relying on any single layer alone (Figure 1). This is because each layer within an LLM captures different aspects of linguistic information, so using averaged embeddings can combine the strengths of every layer to create a richer and more comprehensive representation [10, 23, 36]. +- LLM-based embedding models sometimes outperform the base models: Our results suggest that fine-tuned embedding models hold the potential for improving text-to-image generation. However, such a performance improvement is not always observed across all the embedding and base models. 
Despite these mixed outcomes, strong results are achieved with bge-Gemma2 [8] using layer-normalized average embeddings, our best-performing model, highlighting the promise of leveraging embedding models for text-to-image generation.
- Scaling up the LLM is beneficial, but not across all aspects: Increasing the model size of LLMs consistently leads to improved performance. However, we observe that model size does not uniformly enhance all aspects of compositional text-to-image generation. These results suggest that simply scaling model size may not be the most efficient approach for improving performance across all skills, highlighting the potential of alternative strategies, such as hybrid models or skill-specific fine-tuning.

# 2. Related work

Text encoders in text-to-image generation. In recent years, a variety of text encoders have been explored for training large-scale diffusion models. DALLE2 [43] and Stable Diffusion [44] use text embeddings from the CLIP [41] model, which is trained using an image-text alignment objective. Imagen [46] showed that the embeddings from pure language models such as T5 [42] can be used for image generation and that scaling the language models can improve image generation performance. eDiff-I [2] observed that CLIP and T5 give complementary benefits and proposed using a combination of both embeddings. Liu et al. [35] showed that using character-aware tokenizers like ByT5 [57] can improve text rendering while generating images.

Decoder-only LLMs. Several studies have looked into exploiting LLMs for text-to-image generation. Earlier works utilized LLMs as inputs through adapter connectors [15, 19] and explored their sequence-to-sequence capabilities to revise prompts [7] or build scene layouts [31]. The recent concurrent work of Playground-v3 [34] conditions different blocks of the diffusion transformer model using embeddings from intermediate layers of a Llama3 [14] model.
Lumina-T2X [12] uses the output layer of a Llama-7B [52] model as the text encoder, while Lumina-Next [61] and Sana [56] use the embeddings from the Gemma-2-2B [49] model.

LLM interpretability. As we are interested in analyzing the best ways to utilize LLMs, a relevant line of research is on analyzing the linguistic representations captured across different layers within LLMs [10, 23, 36, 50, 60]. Some studies have addressed the contextual limitations imposed by causal attention masks in autoregressive models [48], while others have examined approaches to best extract embedding representations from LLMs [4, 8, 25, 30, 53, 54].

While these recent works investigate the use of various text encoders for image generation, their models are trained on different datasets (often proprietary) and with varying training setups, making it difficult to isolate the role of text encoders. The goal of this work is to perform a systematic study on the effect of different text encoders, aiming to deepen understanding of how text representations influence text-image alignment and generation quality.

# 3. Experimental setup

# 3.1. Training pipeline

To understand the impact of text encoders in text-to-image diffusion models, we first establish a standard model training pipeline, enabling us to isolate the effect of text encoders from other factors, such as the diffusion model architecture, training data, and compute. For all our experiments, we utilize the U-Net based latent diffusion architecture from Stable Diffusion [44]. We freeze the autoencoder and only train the U-Net model [45], in which text embeddings are conditioned using cross-attention blocks. To accommodate the different text embedding dimension sizes from various models, we introduce a linear projection layer with 1024 output features before cross-attention in the U-Net. In each experiment, we replace the text encoder with a different LLM or fine-tuned embedding model. The exact training recipe, including batch size, diffusion process, and other hyperparameters, is also adapted from Stable Diffusion [44]. Please find more details in the supplementary material.

![](images/f04cd9f62f6fbc4e78517df39b4e805f2b6df3ad6216db01826a6df2c8be825c.jpg)
CLIP (last layer)

![](images/15c618c876f6378a1ef5a125ef5aa7311bf71558857a3b8ad805aae4c775e035.jpg)
T5-XXL (last layer)

![](images/cee38cb3850fd8772d51a7aa967c7cd038d9d682c63f7dd12b2b83b0e3aa6bc0.jpg)
Mistral (norm avg)

![](images/6060e31bdc9b06c343822d4e8b99b3bb856ea7f1cd33209052b3a32aaf1db74f.jpg)
bge-Gemma2 (norm avg)

![](images/ef422276dd28b037735910903ae1a324c7e4d1f22e661cf8251e357a845efd6b.jpg)
A single, vibrant red rose blooms defiantly through a narrow crack in the weathered, grey concrete. Its velvety petals unfurl gracefully, reaching for the sunlight that filters weakly through the urban haze.

![](images/91449cec718423102f7e9d58027910c5d8c9ec0162409b9b355109f3563a7c51.jpg)
A small dog not in a tiny sweater, playing joyfully without any clothes. The fluffy white dog, with big brown eyes and floppy ears, bounds through a sun-drenched field of wildflowers. Its tongue lolls out in pure happiness as it chases a bright red butterfly, its tiny paws barely touching the ground.

![](images/4c8bf75c91088fff7bc4705477f4287ba3533852d5f6ec622c8909d1a2833251.jpg)
Figure 2. Visual comparison of images generated with different text encoders. We use last-layer embeddings (last layer) from the text encoders of CLIP-ViT-H/14 (354M) and T5-XXL (4.7B). We also use average layer-normalized embeddings (norm avg) from the pre-trained LLM Mistral-7B (7B) and the fine-tuned embedding model bge-Gemma2 (bge-multilingual-gemma2; 9B). Mistral and bge-Gemma2 can handle complex reasoning tasks such as negation (bottom panel), unlike models such as CLIP and T5.

![](images/c712cd7d05327578236df6d8f17da6fe2dc1d9a5014e355dfcf86efb9a8a17ee.jpg)
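The dimension-matching step in Sec. 3.1 amounts to a single learned linear map from each encoder's native width to the 1024 features the U-Net cross-attention blocks expect. A framework-free toy sketch of that operation (in the real model this is a trainable linear layer; the sizes and weights below are illustrative only):

```python
def project_tokens(tokens, weight, bias):
    """Map token embeddings from the text encoder's native width to
    the width the U-Net cross-attention expects (1024 in the paper;
    3 here, purely for illustration).

    tokens: [seq_len][in_dim], weight: [out_dim][in_dim], bias: [out_dim]
    """
    return [
        [sum(w * x for w, x in zip(row, vec)) + b
         for row, b in zip(weight, bias)]
        for vec in tokens
    ]
```

Because only this projection and the U-Net are trained while the text encoder stays frozen, swapping in an encoder with a different hidden size requires no other architectural change.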
We utilize a 46 million text-image pair subset of the LAION-Aesthetics dataset [47] as the training data. For enhanced caption diversity and richness, we apply VisualFactChecker (VFC) [13], an LLM-based technique for performing caption upsampling. We train all models for 800,000 iterations at $256 \times 256$ resolution with a global batch size of 2,048 on 32 A100 GPUs. Each model has around 870M parameters and takes approximately 7 days to train.

# 3.2. Models of interest

We mainly explore four types of text encoders:

- T5: T5-XXL (encoder size $\approx 4.7\mathrm{B}$ params) [42] is an encoder-decoder model that frames NLP tasks in a unified text-to-text format, utilizing a span-masked language model objective. Its text encoder is widely used in text-to-image models to effectively capture linguistic and semantic information for image generation.
- CLIP: CLIP-ViT-H/14 (encoder size $\approx$ 354M params) [41] aligns visual and textual representations within a shared embedding space through contrastive learning, making it a popular choice in text-to-image generation for strong text-image alignment.
- Pre-trained LLMs: Mistral-7B (and the instruction fine-tuned Mistral-7B-Instruct) [24], Gemma2 (2B and 9B) [49], Llama3-8B [14], Qwen2 (1.5B and 7B) [58]. We include several open-source, high-performing LLMs in our study, each trained for autoregressive language modeling with a next-token prediction objective. Although these models are all decoder-only transformers, they vary in terms of tokenization and other architectural details.
- LLM fine-tuned embedding models: bge-Gemma2 (bge-multilingual-gemma2; 9B) [8], gte-Qwen2 (gte-Qwen2-7B-instruct; 7B) [30], sfr-Mistral (SFR-Embedding-2_R; 7B) [37]. We also evaluate fine-tuned versions of the LLMs mentioned above, specifically selected from top-performing models on the Massive Text Embedding Benchmark (MTEB) Leaderboard [38]. We aim to examine whether improvements in semantic comprehension translate into improved image generation and text-image alignment.

| Model | Size | Avg | Attr. | Scene | Spat. | Action | Part | Count. | Comp. | Differ. | Neg. | Uni. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CLIP ViT-H/14 | 354M | 0.622 | 0.612 | 0.731 | 0.608 | 0.655 | 0.594 | 0.529 | 0.522 | 0.425 | 0.480 | 0.632 |
| T5-XXL | 4.7B | **0.741** | **0.737** | 0.809 | **0.741** | 0.782 | **0.723** | **0.677** | **0.717** | **0.675** | 0.599 | 0.757 |
| Qwen2-7B | 7B | 0.683 | 0.679 | 0.805 | 0.670 | 0.724 | 0.657 | 0.588 | 0.603 | 0.590 | 0.552 | 0.763 |
| Mistral-7B | 7B | 0.675 | 0.667 | 0.763 | 0.665 | 0.711 | 0.641 | 0.576 | 0.556 | 0.526 | 0.524 | 0.726 |
| Llama3-8B | 8B | 0.675 | 0.673 | 0.767 | 0.656 | 0.704 | 0.667 | 0.627 | 0.615 | 0.568 | 0.542 | 0.768 |
| Gemma2-9B | 9B | 0.710 | 0.709 | 0.794 | 0.711 | 0.760 | 0.705 | 0.642 | 0.659 | 0.617 | 0.544 | 0.709 |
| gte-Qwen2 | 7B | 0.482 | 0.486 | 0.537 | 0.479 | 0.497 | 0.466 | 0.446 | 0.393 | 0.405 | 0.424 | 0.437 |
| sfr-Mistral | 7B | 0.710 | 0.706 | 0.804 | 0.707 | 0.740 | 0.691 | 0.661 | 0.670 | 0.615 | 0.608 | 0.766 |
| Mistral-7B-Instruct | 7B | 0.690 | 0.683 | 0.787 | 0.686 | 0.718 | 0.654 | 0.628 | 0.630 | 0.589 | 0.577 | 0.762 |
| bge-Gemma2 | 9B | 0.737 | 0.730 | **0.824** | 0.729 | **0.793** | 0.722 | 0.662 | 0.654 | 0.641 | **0.623** | **0.797** |

Table 1. VQAScore for models using embeddings extracted from the last layer. We use text encoders from CLIP-ViT-H/14 (354M) and T5-XXL (4.7B), along with four popular open-source pre-trained LLMs: Qwen2 (7B), Mistral-7B (7B), Llama3 (8B), and Gemma2 (9B). Additionally, we include three embedding models fine-tuned from these LLMs: gte-Qwen2 (gte-Qwen2-7B-instruct; 7B), sfr-Mistral (SFR-Embedding-2_R; 7B), and bge-Gemma2 (bge-multilingual-gemma2; 9B). We also include an instruction fine-tuned model, Mistral-7B-Instruct (7B). The highest scores are shown in bold. Our results show that using only the last layer does not work well for LLMs, which perform worse than T5.

# 3.3. Benchmarking and metrics

To evaluate and compare the performance of different models, we adopt GenAI-Bench [33] as our primary benchmarking suite.
GenAI-Bench includes 1,600 diverse and challenging prompts, and each prompt is annotated with the specific aspects of compositional text-to-visual generation it tests: Attribute, Scene, Spatial Relation, Action Relation, Part Relation, Counting, Differentiation, Comparison, Negation, and Universality [29, 33]. These skill annotations enable us to conduct in-depth ablation studies, allowing for detailed analysis of how various LLMs and embedding extraction methods affect specific aspects of text-to-image generation. To better align the prompts with our upsampled training distribution, we utilize Gemma2-9B [49] for prompt upsampling, as detailed in the supplementary material.

![](images/3bc34d7e5cf626da9851080e0b3cd9e05bfe92a50d2e42b5e1f26b3548c782a1.jpg)
Figure 3. VQAScore as a function of classifier-free guidance weight. We show the VQAScore of Mistral-7B using layer-normalized average embeddings (blue) and T5-XXL using last-layer embeddings (orange) at varying guidance strengths.

Previous research has shown that traditional text-image evaluation metrics like CLIPScore [16] and FID [17] are not consistent with human evaluations, particularly in assessing complex compositional tasks [3, 29, 33]. Instead, VQA-based automatic evaluation methods have demonstrated higher reliability and correlation with human judgements [9, 20, 21, 29, 33, 55, 59]. Our observations are consistent with these findings, so we use VQAScore [33] as our primary evaluation metric in this work. Notably, we also observe that the custom CLIP-FlanT5 [33] model originally proposed for VQAScore is not sufficient to differentiate between similar models [3]. To address this, we utilize GPT-4o [22] for our implementation of VQAScore, achieving improved differentiation between models and a closer match to human-perceived quality. When computing VQAScore, we generate the 1,600 images using the same seeds for each model. The random variation of VQAScore with GPT-4o is up to $\pm 0.003$, depending on the category. Please find more details in the supplementary material.

| Model | Layer | Avg | Attr. | Scene | Spat. | Action | Part | Count. | Comp. | Differ. | Neg. | Uni. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T5-XXL | 25 (last) | **0.741** | **0.737** | 0.809 | **0.741** | **0.782** | **0.723** | **0.677** | **0.717** | **0.675** | **0.599** | **0.757** |
| Mistral-7B | 33 (last) | 0.675 | 0.667 | 0.763 | 0.665 | 0.711 | 0.641 | 0.576 | 0.556 | 0.526 | 0.524 | 0.726 |
| Mistral-7B | 32 | 0.710 | 0.718 | 0.794 | 0.707 | 0.747 | 0.686 | 0.638 | 0.625 | 0.650 | 0.582 | 0.703 |
| Mistral-7B | 15 | 0.725 | 0.720 | **0.812** | 0.729 | 0.762 | 0.692 | 0.673 | 0.651 | 0.647 | 0.585 | 0.732 |
| Mistral-7B | 0 (first) | 0.375 | 0.372 | 0.499 | 0.339 | 0.379 | 0.290 | 0.297 | 0.328 | 0.200 | 0.278 | 0.440 |

Table 2. VQAScore for models using embeddings extracted from individual layers of Mistral-7B (7B). We include the baseline T5-XXL (4.7B) model using embeddings extracted from the last layer as a reference. The highest scores are shown in bold. Our results show that using middle layers can outperform earlier or later layers.

| Model | Embeddings | Avg | Attr. | Scene | Spat. | Action | Part | Count. | Comp. | Differ. | Neg. | Uni. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T5-XXL | last layer | 0.741 | 0.737 | 0.809 | 0.741 | 0.782 | 0.723 | 0.677 | 0.717 | 0.675 | 0.599 | 0.757 |
| T5-XXL | norm avg | 0.747 | 0.748 | 0.813 | 0.745 | 0.780 | 0.720 | 0.687 | 0.736 | 0.675 | 0.617 | 0.760 |
| bge-Gemma2 | last layer | 0.737 | 0.730 | 0.824 | 0.729 | 0.793 | 0.722 | 0.662 | 0.654 | 0.641 | 0.623 | 0.797 |
| bge-Gemma2 | avg | 0.774 | 0.776 | **0.851** | 0.765 | 0.813 | 0.752 | 0.716 | 0.773 | 0.702 | 0.665 | **0.822** |
| bge-Gemma2 | norm avg | **0.789** | **0.787** | 0.846 | **0.782** | **0.821** | **0.786** | **0.745** | **0.776** | **0.744** | **0.712** | 0.810 |
| Mistral-7B | last layer | 0.675 | 0.667 | 0.763 | 0.665 | 0.711 | 0.641 | 0.576 | 0.556 | 0.526 | 0.524 | 0.726 |
| Mistral-7B | avg | 0.731 | 0.733 | 0.806 | 0.731 | 0.779 | 0.712 | 0.683 | 0.666 | 0.639 | 0.582 | 0.758 |
| Mistral-7B | norm avg | 0.769 | 0.774 | 0.837 | 0.780 | 0.802 | 0.733 | 0.699 | 0.716 | 0.706 | 0.630 | 0.789 |

Table 3. VQAScore for models using different embedding strategies: standard last-layer embeddings (last layer), average embeddings across all layers (avg), and average embeddings across all normalized layers (norm avg). We apply these approaches to the encoder from the baseline T5-XXL (4.7B), the pre-trained LLM Mistral-7B (7B), and our best-performing fine-tuned embedding model bge-Gemma2 (bge-multilingual-gemma2; 9B). The highest scores are shown in bold. Our results show that layer-normalized averaging greatly enhances performance and outperforms T5.

# 3.4. Classifier-free guidance

In our work, we use a classifier-free guidance [18] weight of 7.0 for all models. Figure 3 plots VQAScore as a function of the guidance weight. These results illustrate how guidance strength affects VQAScore considerably, potentially even more than the choice of the text encoder. As such, we standardize guidance at 7.0 in our evaluation pipeline for apples-to-apples comparison. Similar to the original Stable Diffusion model [44], we do not employ any kind of thresholding [46] or rescaling [32].

# 4. Results

# 4.1. Final layer LLM embeddings lack in visio-linguistic reasoning

We begin by training text-to-image models using the text encoders described in Sec. 3.2, extracting embeddings from the final layer, as commonly done in the literature [12, 46, 56, 61]. As shown in Table 1, replacing T5 with other LLMs consistently results in a decrease in performance. Most of the performance gap can be attributed to the advanced visio-linguistic reasoning skills in GenAI-Bench: Counting, Differentiation, Comparison, Negation, and Universality. Specifically, Comparison shows the most significant drop in performance relative to T5. In fact, when prompts tagged with Comparison are excluded, the results for the LLMs are similar to or even better than T5.

On the other hand, CLIP demonstrates one of the lowest scores overall, likely due to a combination of its much smaller model size, shorter token sequence length of 77, and reduced ability to capture linguistic and semantic details compared to the other language models [5, 41, 51]. For these reasons, we use T5 as the primary baseline in subsequent analysis.
| Model | Avg | Attr. | Scene | Spat. | Action | Part | Count. | Comp. | Differ. | Neg. | Uni. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| bge-Gemma2 | 0.737 | 0.730 | 0.824 | 0.729 | 0.793 | 0.722 | 0.662 | 0.654 | 0.641 | 0.623 | 0.797 |
| bge-Gemma2 pooled | 0.737 | 0.733 | 0.838 | 0.725 | 0.767 | 0.712 | 0.692 | 0.691 | 0.663 | 0.641 | 0.823 |
| sfr-Mistral | 0.710 | 0.706 | 0.804 | 0.707 | 0.740 | 0.691 | 0.661 | 0.670 | 0.615 | 0.608 | 0.766 |
| sfr-Mistral pooled | 0.698 | 0.685 | 0.810 | 0.693 | 0.748 | 0.694 | 0.610 | 0.641 | 0.596 | 0.589 | 0.730 |

Table 4. VQAScore for models using embeddings extracted from the last layer, compared to models with additional conditioning on global pooled embeddings [40]. We evaluate the fine-tuned embedding models bge-Gemma2 (bge-multilingual-gemma2; 9B) and sfr-Mistral (SFR-Embedding-2_R; 7B). We observe limited differences with the latter approach for embedding models.
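The dual-conditioning setup evaluated in Table 4 routes per-token embeddings to the cross-attention blocks while a single pooled embedding modulates activations via the timestep embedding, as in SDXL [40]. A framework-free toy sketch of that data flow (mean pooling here stands in for whatever pooling method the embedding model was trained with, and plain addition stands in for the learned modulation; dimensions are assumed to already match):

```python
def condition_unet_inputs(token_embs, timestep_emb):
    """Sketch of dual text conditioning: per-token embeddings feed
    cross-attention unchanged, while their pooled summary is folded
    into the timestep embedding that modulates layer activations.

    token_embs: [seq_len][dim], timestep_emb: [dim]
    """
    dim = len(token_embs[0])
    # Condensed global representation of the whole prompt.
    pooled = [sum(tok[d] for tok in token_embs) / len(token_embs)
              for d in range(dim)]
    # Combine pooled text signal with the timestep embedding.
    modulated_timestep = [t + p for t, p in zip(timestep_emb, pooled)]
    return token_embs, modulated_timestep
```

The design keeps the fine-grained token path intact, so any benefit from the pooled path is purely additive conditioning signal.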
| Model | Size | Avg | Attr. | Scene | Spat. | Action | Part | Count. | Comp. | Differ. | Neg. | Uni. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2 | 1.5B | 0.655 | 0.654 | 0.758 | 0.642 | 0.681 | 0.624 | 0.575 | 0.553 | 0.528 | 0.553 | 0.676 |
| Qwen2 | 7B | 0.683 | 0.679 | 0.805 | 0.670 | 0.724 | 0.657 | 0.588 | 0.603 | 0.590 | 0.552 | 0.763 |
| Gemma2 | 2B | 0.640 | 0.650 | 0.745 | 0.627 | 0.687 | 0.606 | 0.580 | 0.597 | 0.527 | 0.442 | 0.707 |
| Gemma2 | 9B | 0.710 | 0.709 | 0.794 | 0.711 | 0.760 | 0.705 | 0.642 | 0.659 | 0.617 | 0.544 | 0.709 |
Table 5. VQAScore for models with different LLM sizes, using embeddings extracted from the last layer. We evaluate the pre-trained LLMs Qwen2 (1.5B), Qwen2 (7B), Gemma2 (2B), and Gemma2 (9B). Our results show that LLM scaling improves performance but does not uniformly impact all aspects of image composition.

# 4.2. Embedding performance differs by layer

In addition to using the final layer, we experiment with embeddings extracted from other individual layers within the Mistral-7B model to assess how semantic and linguistic representations vary across layers and impact image generation. The Mistral model, with its 33 layers, has been the focus of several works exploring embedding representations [4, 24, 48, 54]. Table 2 shows that using only the 15th layer of Mistral outperforms the use of the 33rd (last), 0th (first), and 32nd (penultimate) layers. This suggests that different layers of LLMs capture distinct levels of semantic and linguistic understanding [10]. Early layers may primarily encode basic linguistic structures, while later layers tend to be specialized in next-token prediction [36]. By contrast, middle layers appear to offer a more balanced abstraction of semantic representations, which can better support image generation tasks. However, the baseline last-layer T5 model still outperforms any single-layer Mistral model. This leads to a key insight: using embeddings from any single layer of an LLM is not sufficient for text-to-image models.

# 4.3. Averaging layers yields stronger embeddings

Since different layers in LLMs encode varying levels of semantic representations, we propose to aggregate the embeddings using two approaches. The first method involves directly averaging the embeddings across all layers. In the second approach, we apply mean normalization to each layer's embeddings before averaging.
This ensures that each layer contributes consistently in scale, preventing any single layer from disproportionately influencing the aggregated representation [60]. We apply these approaches to Mistral and bge-Gemma2, our best-performing embedding model. We show our results in Table 3 and Figure 1. More results with other models, such as Qwen2 and Llama3, can be found in Table B of the supplementary material. For both models, averaged embeddings yield substantial improvements over using only the last layer, even surpassing the baseline T5 model by a large margin. Notably, these approaches overcome the previously observed limitations in advanced visio-linguistic reasoning, demonstrating that embeddings derived from all layers produce richer and more comprehensive representations.

Our experiments also reveal that applying layer normalization before averaging performs even better than averaging alone, as it balances contributions from each layer and prevents any single layer from dominating the representation. This leads to another key finding: the most effective way to utilize LLMs for text-to-image models is to properly use embeddings from all layers.

Interestingly, applying layer-normalized averaging to T5 embeddings shows almost no performance difference. We believe this is because the T5 encoder frames each task in a text-to-text format, so its later layers inherently build upon the semantic and linguistic foundations established in earlier layers. In contrast, the later layers of decoder-only LLMs tend to specialize in next-token prediction, which explains the significant performance gains observed when utilizing embeddings from intermediate layers.
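The aggregation described above can be sketched without any ML framework. Below, `hidden_states` plays the role of the per-layer hidden states an LLM exposes (e.g., via `output_hidden_states=True` in Hugging Face Transformers); implementing the paper's "mean normalization" as standardizing each token vector (LayerNorm without affine parameters) before averaging across layers is our reading, not a confirmed detail of the authors' code:

```python
import math

def layer_normalized_average(hidden_states):
    """Aggregate token embeddings across all layers of an LLM.

    hidden_states: nested lists shaped [n_layers][n_tokens][dim].
    Each token vector is standardized (zero mean, unit variance over
    its dimensions) so no layer dominates the average through scale.
    """
    n_layers = len(hidden_states)
    n_tokens = len(hidden_states[0])
    dim = len(hidden_states[0][0])

    def standardize(vec):
        mu = sum(vec) / len(vec)
        var = sum((x - mu) ** 2 for x in vec) / len(vec)
        scale = math.sqrt(var + 1e-6)  # epsilon for numerical safety
        return [(x - mu) / scale for x in vec]

    avg = [[0.0] * dim for _ in range(n_tokens)]
    for layer in hidden_states:
        for t, vec in enumerate(layer):
            for d, x in enumerate(standardize(vec)):
                avg[t][d] += x / n_layers
    return avg  # [n_tokens][dim] conditioning embeddings
```

Because every token vector is rescaled before averaging, a layer whose activations are orders of magnitude larger no longer dominates the aggregate, which is exactly the failure mode that plain averaging leaves open.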
+ +![](images/a38a557c00e909da8df304def1b922e5b8ebcc1d63ae0e3b4bb0e552aa5527d6.jpg) +Mistral (last layer) + +![](images/51da32ebb5e8897be2365ece14af0155c785d6bcb2b8d72f97280660eefdafc5.jpg) +Mistral (norm avg) + +![](images/9be639583ea4154a72ccbfb2990a2393f0eac53ce895c1654803cce22397bb4d.jpg) +Mistral (last layer) + +![](images/6dd305af2048607b588b3090f7f4f76ed2d532959e6eae91c2fd4b5701d6aaf6.jpg) +Mistral (norm avg) + +![](images/358647fdb5f9c9daa543ae53902310cd306eb3b86eafc6817cb9dfb87952ce91.jpg) +A larger gorilla hands a smaller mechanical monkey a banana + +![](images/bcf58ccba962669476b33bc4bf84f9386669f116e72c3dfcce038e189ec67b6f.jpg) + +![](images/11b0614d3ae95930ba97b3adf4a21191852d5a9bce7349769c2ce30fd4f050a0.jpg) +A tomato vine with several tomatoes on it, all yellow except the largest which is red + +![](images/7be6b277f3738b7efcfd90c5d4462d38ef6927068faae5efaf6e6496e5048975.jpg) +Figure 4. Heatmap visualization. Images generated with Mistral-7B using standard last-layer embeddings (last layer) and layer-normalized average embeddings (norm avg). Corresponding cross-attention heatmaps for the tokens monkey (left set) and largest (right set) are shown in the panel below. These visualizations show how the norm avg model performs better than last layer model on prompts that require advanced visio-linguistic reasoning skills such as differentiation (left set) and comparison (right set). + +# 4.4. Finetuned embedding models show potential + +Our results in Table 1 indicate that fine-tuned embedding models can offer improvement over their pre-trained LLM counterparts, as seen with bge-Gemma2 and sfr-Mistral, but may also lead to significant performance degradation, as with gte-Qwen2. We suspect this inconsistency arises because most fine-tuned embedding models are optimized to produce pooled embeddings primarily through methods like last token pooling, mean pooling, or weighted mean pooling. 
These pooling strategies may impact the embeddings of individual tokens, resulting in less meaningful representations across the entire sequence [28, 48].

We also experiment with an SDXL-inspired [40] approach by conditioning the text-to-image models on two types of embeddings: standard per-token embeddings and a single pooled text embedding as a condensed global representation. Individual tokens go to cross-attention, while the pooled token is combined with the timestep embedding to modulate layer activations. Since these embedding models are primarily designed to produce a single pooled embedding, this approach enables us to fully incorporate their intended use case for better comparison to our baseline models.

We selected bge-Gemma2 and sfr-Mistral for their improvement over their base models. However, as shown in Table 4, we observe limited differences in performance with this method, suggesting that our approach may not fully leverage the potential of these embeddings.

Despite these mixed results, there is considerable promise in using embedding models for text-to-image generation, as demonstrated by the bge-Gemma2 model with layer-normalized averaging, which is the best-performing model in our experiments. Future research to determine the most effective strategies could explore hybrid pooling approaches that combine per-token embeddings from pre-trained LLMs with additional conditioning from pooled embeddings in fine-tuned embedding models. Adding an extra latent attention layer, as proposed by NV-Embed [28], could also improve the quality of pooled embeddings.

# 4.5. Scaling improves performance, but not uniformly across compositional skills

Prior research has shown that scaling the size of the text encoder yields greater improvements in image quality and text-image alignment compared to increasing the size of the diffusion model [46]. In our experiments, we evaluate different LLM model sizes to understand the impact of scaling on performance. Our results on differently sized Qwen2 and Gemma2 models in Table 5 confirm these findings. Although our ablation experiments are limited to a subset of models, the skill annotations from GenAI-Bench reveal an interesting insight: while performance increases with model size, scaling does not uniformly impact all aspects of compositional text-to-visual generation. For instance, we observe minimal improvement in Negation and Counting between the two Qwen2 models, while the difference in Universality is significant. Conversely, the two Gemma2 models show little difference in Universality but exhibit substantial differences across most other skills.

Given the minimal improvements in specific skills, these results raise questions about the efficiency of simply scaling model size for text-to-image generation. Future work could explore hybrid approaches, such as combining different LLMs to leverage complementary strengths, similar to how eDiff-I [2] combined the T5 and CLIP text encoders to enhance embedding representations. Smaller, specialized models fine-tuned on skill-specific tasks might also provide a more efficient way to improve performance.

Additionally, it would be valuable to investigate the effects of scaling text encoders when fully utilizing all layers, as in our layer-normalized averaging approach. This could reveal whether more refined approaches to layer utilization might achieve comparable or even superior results to model scaling.

# 5. Visual analysis

As shown in Sections 4.1 and 4.3, LLMs demonstrate significant improvements in advanced visio-linguistic reasoning when using layer-normalized average embeddings. Figure 2 provides a visual comparison with the T5 and CLIP models for handling negation, where the baselines fail to interpret such logic and incorrectly depict the dog wearing clothes. Additional comparisons are available in the supplementary material.
We observe these enhanced logical abilities in our Mistral-based models through visual comparisons and cross-attention heatmaps on relevant tokens:

Differentiation. The left set of images in Figure 4 illustrates a differentiation prompt involving a gorilla and a mechanical monkey. The model using last-layer embeddings fails to distinguish between the two, rendering the gorilla with robotic features but omitting the monkey. Cross-attention heatmaps for the token monkey show that attention is focused on the gorilla, indicating a lack of differentiation.

In contrast, the model using layer-normalized average embeddings successfully separates the two entities. The cross-attention heatmap for monkey shows distinct attention away from the gorilla, allowing the mechanical monkey to appear separately. This suggests that layer-normalized averaging helps the model better differentiate semantically similar objects. Indeed, we observe that the difference in token embedding norms for gorilla and monkey is greater under layer-normalized averaging than with last-layer embeddings, and this increased distinction likely contributes to the model's improved differentiation ability.

Comparison. The right set of images in Figure 4 shows results for a comparison prompt, where only the largest tomato should be red. With last-layer embeddings, multiple tomatoes appear red, indicating the model's struggle to interpret logical constraints. Cross-attention heatmaps for largest show diffused attention, which contributes to misalignment with the prompt.

In contrast, layer-normalized average embeddings lead to better alignment with the prompt, focusing attention on the largest tomato and accurately rendering it as red while keeping the others yellow.
However, neither model successfully represents the largest tomato as larger than the others, suggesting that further research is needed to explore how size-related cues are embedded and translated to image generation for improved spatial and comparative alignment. + +# 6. Conclusion + +Our research contributes to the evolving field of text-to-image generation by exploring the use of decoder-only LLMs as the text encoder. Through a controlled training pipeline, we isolate and evaluate the impact of different text encoders, conducting extensive experiments on various LLMs and embedding extraction methods. + +Our findings provide valuable insights into using decoder-only LLMs for text-to-image generation. We demonstrate that relying on a single layer from an LLM is insufficient and propose a simple yet effective approach that leverages all layers through averaged layer-normalized embeddings. With this approach, our models exhibit significant improvements in advanced visio-linguistic reasoning skills, outperforming the baseline T5 model across all aspects of compositional text-to-image generation. Furthermore, our experiments with fine-tuned embedding models reveal promising potential for enhanced contextual semantic comprehension. Additionally, we explore the scalability of LLMs, observing that while performance generally increases with model size, specific compositional skills do not uniformly benefit from scaling. + +This work opens several avenues for future research, including more refined approaches to layer aggregation, optimized usage of embedding models, and the development of hybrid models that combine multiple LLMs. By providing a detailed examination of decoder-only LLMs in text-to-image generation, we hope our insights will guide future advancements and extend to other multimodal applications. 
# References

[1] Yuval Atzmon, Maciej Bala, Yogesh Balaji, Tiffany Cai, Yin Cui, Jiaojiao Fan, Yunhao Ge, Siddharth Gururani, Jacob Huffman, Ronald Isaac, et al. Edify image: High-quality image generation with pixel space laplacian diffusion models. arXiv preprint arXiv:2411.07126, 2024. 1
[2] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, et al. ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. 1, 2, 8
[3] Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Lluis Castrejon, Kelvin Chan, Yichang Chen, Sander Dieleman, Yuqing Du, et al. Imagen 3. arXiv preprint arXiv:2408.07009, 2024. 4, 5
[4] Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. Llm2vec: Large language models are secretly powerful text encoders. arXiv preprint arXiv:2404.05961, 2024. 2, 6
[5] Santiago Castro, Oana Ignat, and Rada Mihalcea. Scalable performance analysis for vision-language models. arXiv preprint arXiv:2305.18786, 2023. 5
[6] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. PixArt-alpha: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis. arXiv preprint arXiv:2310.00426, 2023. 1
[7] Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, and Furu Wei. Textdiffuser-2: Unleashing the power of language models for text rendering. In ECCV, 2024. 2
[8] Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. Bge m3-embedding: Multilingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. arXiv preprint arXiv:2402.03216, 2024. 2, 4
[9] Jaemin Cho, Abhay Zala, and Mohit Bansal. Visual programming for step-by-step text-to-image generation and evaluation. In NeurIPS, 2023.
4
[10] Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding space. arXiv preprint arXiv:2209.02535, 2022. 2, 6
[11] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In ICML, 2024. 1
[12] Peng Gao, Le Zhuo, Ziyi Lin, Chris Liu, Junsong Chen, Ruoyi Du, Enze Xie, Xu Luo, Longtian Qiu, Yuhang Zhang, et al. Lumina-t2x: Transforming text into any modality, resolution, and duration via flow-based large diffusion transformers. arXiv preprint arXiv:2405.05945, 2024. 2, 5
[13] Yunhao Ge, Xiaohui Zeng, Jacob Samuel Huffman, Tsung-Yi Lin, Ming-Yu Liu, and Yin Cui. Visual fact checker: enabling high-fidelity detailed caption generation. In CVPR, 2024. 3
[14] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 2, 4
[15] Wanggui He, Siming Fu, Mushui Liu, Xierui Wang, Wenyi Xiao, Fangxun Shu, Yi Wang, Lei Zhang, Zhelun Yu, Haoyuan Li, et al. Mars: Mixture of auto-regressive models for fine-grained text-to-image synthesis. arXiv preprint arXiv:2407.07614, 2024. 2
[16] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021. 4
[17] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, 2017. 4
[18] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 5
[19] Xiwei Hu, Rui Wang, Yixiao Fang, Bin Fu, Pei Cheng, and Gang Yu.
Ella: Equip diffusion models with llm for enhanced semantic alignment. arXiv preprint arXiv:2403.05135, 2024. 2 +[20] Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In ICCV, 2023. 4 +[21] Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. In NeurIPS, 2023. 4 +[22] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Weli-hinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. 5 +[23] Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. What does bert learn about the structure of language? In ACL, 2019. 2 +[24] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023. 4, 6 +[25] Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Scholkopf, and Mona Diab. Analyzing the role of semantic representations in the era of large language models. arXiv preprint arXiv:2405.01502, 2024. 2 +[26] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In NeurIPS, 2022. 1 +[27] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In CVPR, 2024. 1 +[28] Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. Nvembed: Improved techniques for training llms as generalist + +embedding models. 
arXiv preprint arXiv:2405.17428, 2024.7 +[29] Baiqi Li, Zhiqiu Lin, Deepak Pathak, Jiayao Li, Yixin Fei, Kewen Wu, Tiffany Ling, Xide Xia, Pengchuan Zhang, Graham Neubig, et al. Genai-bench: Evaluating and improving compositional text-to-visual generation. arXiv preprint arXiv:2406.13743, 2024. 2, 4 +[30] Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281, 2023. 2, 4 +[31] Long Lian, Boyi Li, Adam Yala, and Trevor Darrell. Lm-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models. In TLMR, 2024. 2 +[32] Shanchuan Lin, Bingchen Liu, Jiashi Li, and Xiao Yang. Common diffusion noise schedules and sample steps are flawed. In WACV, 2024. 5 +[33] Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In ECCV, 2024. 4, 5 +[34] Bingchen Liu, Ehsan Akhgari, Alexander Visheratin, Aleks Kamko, Linmiao Xu, Shivam Shrirao, Chase Lambert, Joao Souza, Suhail Doshi, and Daiqing Li. Playground v3: Improving text-to-image alignment with deep-fusion large language models. arXiv preprint arXiv:2409.10695, 2024. 1, 2 +[35] Rosanne Liu, Dan Garrette, Chitwan Sahara, William Chan, Adam Roberts, Sharan Narang, Irina Blok, RJ Mical, Mohammad Norouzi, and Noah Constant. Character-aware models improve visual text rendering. arXiv preprint arXiv:2212.10562, 2022. 2 +[36] Zhu Liu, Cunliang Kong, Ying Liu, and Maosong Sun. *Fantastic semantics* and where to find them: Investigating which layers of generative llms reflect lexical semantics. *arXiv* preprint arXiv:2403.01509, 2024. 2, 6 +[37] Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. Salesforce AI Research's SFR-embedding, the top performing text-embedding model. Salesforce AI Research Blog, 2024. 
4 +[38] Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. Mteb: Massive text embedding benchmark. arXiv preprint arXiv:2210.07316, 2022. 1, 4 +[39] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 1 +[40] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1, 6, 7 +[41] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 1, 2, 3, 5 +[42] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and + +Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020. 1, 2, 3 +[43] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022. 1, 2 +[44] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 2, 3, 5 +[45] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 3 +[46] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, 2022. 1, 2, 5, 7 +[47] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 
Laion-5b: An open large-scale dataset for training next generation image-text models. In NeurIPS, 2022. 3 +[48] Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, and Aditi Raghunathan. Repetition improves language model embeddings. arXiv preprint arXiv:2402.15449, 2024. 2, 6, 7 +[49] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024. 2, 4 +[50] Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. What do you learn from context? probing for sentence structure in contextualized word representations. arXiv preprint arXiv:1905.06316, 2019. 2 +[51] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In CVPR, 2022. 5 +[52] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 2 +[53] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text embeddings by weakly-supervised contrastive pretraining. arXiv preprint arXiv:2212.03533, 2022. 2 +[54] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. Improving text embeddings with large language models. In ACL, 2024. 2, 6 +[55] Olivia Wiles, Chuhan Zhang, Isabela Albuquerque, Ivana Kajic, Su Wang, Emanuele Bugliarello, Yasumasa Onoe, Chris Knutsen, Cyrus Rashtchian, Jordi Pont-Tuset, et al. 
+ +Revisiting text-to-image evaluation with gecko: On metrics, prompts, and human ratings. arXiv preprint arXiv:2404.16820, 2024. 4 +[56] Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Yujun Lin, Zhekai Zhang, Muyang Li, Yao Lu, and Song Han. Sana: Efficient high-resolution image synthesis with linear diffusion transformers. arXiv preprint arXiv:2410.10629, 2024. 2, 5 +[57] Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. TACL, 2022. 2 +[58] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024. 4 +[59] Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, and Idan Szpektor. What you see is what you read? improving text-image alignment evaluation. In NeurIPS, 2023. 4 +[60] Yang Zhang, Yanfei Dong, and Kenji Kawaguchi. Investigating layer importance in large language models. arXiv preprint arXiv:2409.14381, 2024. 2, 6 +[61] Le Zhuo, Ruoyi Du, Han Xiao, Yangguang Li, Dongyang Liu, Rongjie Huang, Wenze Liu, Lirui Zhao, Fu-Yun Wang, Zhanyu Ma, et al. Lumina-last: Making lumina-t2x stronger and faster with next-dit. arXiv preprint arXiv:2406.18583, 2024. 
2, 5 \ No newline at end of file diff --git a/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/images.zip b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e719d52835436509309e7902ae812579648fb663 --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4fb64912a5572d77e7d24e440d0d56b637b4b13e7ccb51382abd91bce220ba4 +size 717527 diff --git a/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/layout.json b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..17f68dc174513c1d013e8b6ba1ee841c1ffd7593 --- /dev/null +++ b/CVPR/2025/A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:def0bf1bd8a6d522cf4e948c7d300cbf4aae3be62192d850cc7da5fb5499d1bf +size 332056 diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_content_list.json b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6811037451779d486c2f469b657136995ecee25c --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c69e53cd726671a3df669187561ada682b2d1c82661384b19b04480ebbbb111 +size 93529 diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot 
Learning/00996750-7955-4b08-8199-1c2e8100b73e_model.json b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..415327d59e902c9b4fd59f002eccf6a3d7fe339c --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19c289eca8946b6f34a1d235fc43cd829bd94cce06d4918bb267bd689e1c7389 +size 120795 diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_origin.pdf b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ebe8e4bdceb21b40830eee74c34953a8a795c564 --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/00996750-7955-4b08-8199-1c2e8100b73e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c94341be88600b5144a6614aa3f744eb13782edbcfb4f19201ce0a637cbfcba +size 3566285 diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/full.md b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..71a3bbcc1d330282835e8391ddc58a2c1195b8f4 --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/full.md @@ -0,0 +1,370 @@ +# A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning + +Xin Wen $^{1}$ Bingchen Zhao $^{2}$ Yilun Chen $^{3}$ Jiangmiao Pang $^{3}$ Xiaojuan Qi $^{1*}$ $^{1}$ The University of Hong Kong $^{2}$ University of Edinburgh $^{3}$ Shanghai AI Laboratory +{wenxin, 
xjqi}@eee.hku.hk
+
+# Abstract
+
+Pre-trained vision models (PVMs) are fundamental to modern robotics, yet their optimal configuration remains unclear. Through systematic evaluation, we find that while DINO and iBOT outperform MAE across visuomotor control and perception tasks, they struggle when trained on non-(single-)object-centric (NOC) data—a limitation strongly correlated with their diminished ability to learn object-centric representations. This indicates that the ability to form object-centric representations from non-object-centric robotics datasets is key to the success of PVMs. Motivated by this discovery, we design SlotMIM, a method that induces object-centric representations by introducing a semantic bottleneck that reduces the number of prototypes to encourage the emergence of objectness, together with cross-view consistency regularization that encourages multi-view invariance. Our experiments encompass pre-training on object-centric, scene-centric, web-crawled, and ego-centric data. Across all settings, our approach learns transferrable representations and achieves significant improvements over prior work in image recognition, scene understanding, and robot learning evaluations. When scaled up to million-scale datasets, our method also demonstrates superior data efficiency and scalability. Our code and models are publicly available at https://github.com/CVMI-Lab/SlotMIM.
+
+# 1. Introduction
+
+Pre-trained vision models (PVMs) have become fundamental building blocks in modern computer vision and robotics. While these models have demonstrated remarkable success in traditional vision tasks, their optimal application in robot learning remains an open challenge. Recent works [45, 55] have shown promising results using masked autoencoders (MAEs) [29] pre-trained on ego-centric data (e.g., Ego4D [24]).
However, as suggested by [19], learning on (single-)object-centric datasets like ImageNet [20] can yield better representations than learning on ego-centric data.
+
+Additionally, scene-centric data (e.g., COCO [41] and Open Images [39]) are also relevant to robot contexts and more information-rich. Web-crawled data (e.g., CC12M [14]) offer another alternative, being easier to collect and more scalable. Towards scaling up PVMs for real-world robotic applications, it is important to explore diverse training data sources and revisit learning algorithms accordingly. Our study is thus motivated by two questions: 1) Is MAE the optimal pre-training method for robot learning? 2) Is ego-centric data the best choice for visuomotor pre-training?
+
+To systematically answer these questions, we construct a comprehensive benchmark that evaluates PVMs across both visuomotor control and perception tasks. For a fair comparison, we control the pre-training data scale at 241K images across different data sources and evaluate models on four diverse downstream tasks: two robot manipulation benchmarks (Franka Kitchen [26] and Meta-World [76]) and two segmentation tasks (Pascal VOC [8] and ADE20K [79]). As shown in Fig. 2, our first key finding challenges the role of MAE: DINO [13] and iBOT [80] significantly outperform MAE across all tasks, particularly when trained on object-centric data. However, we observe a critical limitation: these models' performance degrades substantially when trained on scene-centric or non-(single-)object-centric (NOC) data. Consistent with the findings of [8], we find that this performance drop strongly correlates with the models' diminishing ability to learn object-centric representations from NOC data. This suggests that the challenge lies not in the choice of pre-training method alone, but in maintaining object-centric learning capabilities when training on diverse data sources.
+
+Motivated by these insights, we introduce SlotMIM (Fig.
5), a method that repurposes and integrates masked image modeling (MIM) and contrastive learning for effective representation learning from NOC datasets. The core idea of SlotMIM is to group patch-level image tokens into object-level feature abstractions, referred to as "slots", thereby decomposing NOC data into object-centric slots so that object-centric techniques can be effectively applied. To make patch-level tokens more semantically aware for subsequent grouping, we enhance MIM with cross-view consistency regularization on prototype assignments. Additionally, we introduce a semantic bottleneck, which reduces the number of prototypes to encourage the emergence of semantics and objectness in patch-level token representations (see Fig. 4).
+
+![](images/4c36b61cbc60ed565c7fe4222d66313abca86a651f61cafc25235fefe235044a.jpg)
+(a) Data-Centric Revisit of PVMs
+
+![](images/05f34c1495ae98f64ac8ec51dc0f4d7d4c0e061fd4a25896233eabd16ea7920c.jpg)
+(b) But Why Do DINO/iBOT Fail on Non-Object-Centric Data?
+
+![](images/163d91fd655f9e78870eae1c7b80101543d71ee882f37c6c8741564946055c12.jpg)
+(c) Object-Centric Learning on Non-Object-Centric Data
+
+![](images/274c5bc30ea275ce90e8e64e8e0b3563efb354e4c1b34e5f27d4f43c2e0796c4.jpg)
+(d) Strong Performance on Mani/Nav & Det/Seg
+
+Figure 1. An overview of this paper. (a) We conduct a comprehensive study evaluating pre-trained vision models (PVMs) on visuomotor control and perception tasks, analyzing how different pre-training (model, data) combinations affect performance. Our analysis reveals that DINO/iBOT excel while MAE underperforms. (b) We investigate the performance drop of DINO/iBOT when trained on non-(single-)object-centric (NOC) data, discovering they struggle to learn objectness from NOC data—a capability that strongly correlates with robot manipulation performance. (c) We introduce SlotMIM, which incorporates explicit objectness guidance during training to effectively learn object-centric representations from NOC data. (d) Through scaled-up pre-training and evaluation across six tasks, we demonstrate that SlotMIM adaptively learns different types of objectness based on the pre-training dataset characteristics, outperforming existing methods.
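The slot-grouping step of SlotMIM described above can be sketched in a few lines: patch tokens are softly assigned to a small set of prototypes (the semantic bottleneck), attentively pooled into slots, and matching slots across two augmented views form positive pairs for contrastive learning. This is an illustrative toy, not the released implementation; the feature dimension, prototype count, temperature, and synthetic two-view inputs are all assumptions.

```python
import numpy as np

def normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def group_into_slots(patches, prototypes, tau=0.1):
    """Group patch tokens into object-level 'slots' by attentive pooling.

    patches:    (N, D) patch-level features from the encoder.
    prototypes: (K, D) prototypes; keeping K small is the 'semantic
                bottleneck' that encourages objectness to emerge.
    Each patch is softly assigned over prototypes; each slot is the
    assignment-weighted mean of the patches it attracts.
    """
    logits = normalize(prototypes) @ normalize(patches).T / tau  # (K, N)
    assign = np.exp(logits - logits.max(axis=0, keepdims=True))
    assign /= assign.sum(axis=0, keepdims=True)   # softmax over prototypes per patch
    slots = assign @ patches                      # (K, D) weighted sum
    return slots / (assign.sum(axis=1, keepdims=True) + 1e-8)  # weighted mean

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(8, 64))               # small K: semantic bottleneck
view1 = rng.normal(size=(196, 64))                  # 14x14 patch tokens, view 1
view2 = view1 + 0.01 * rng.normal(size=(196, 64))   # toy augmented view 2

s1, s2 = group_into_slots(view1, prototypes), group_into_slots(view2, prototypes)
# Slot-level contrastive learning treats the k-th slots of the two views as a
# positive pair; here we only check they end up as mutual nearest neighbors.
sim = normalize(s1) @ normalize(s2).T               # (8, 8) slot similarity
print(float((sim.argmax(axis=1) == np.arange(8)).mean()))
```

In the actual method the prototypes and encoder are learned jointly; this sketch only shows why a small prototype set yields object-level abstractions that cross-view contrastive objectives can operate on.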
Building on these semantically enriched patch tokens, we apply attentive pooling over the learned patch-level features, using prototypes to initialize object representations, thereby grouping patches into object-level slots. Contrastive learning [18] is then applied to these slots to improve the discriminativeness of the learned representations. Together, these designs enable us to perform effective representation learning from NOC data. + +Our scaled experiments (Fig. 6) extend the analysis to million-scale datasets, revealing several surprising findings about the relationship between pre-training data and model performance. First, we observe an unexpected inverse scaling trend: while MAE's performance improves with more pre-training data, DINO and iBOT show degraded performance when scaled up. This indicates over-compression of representations: the learning objective reduces properties that are useful for manipulation tasks. Interestingly, we find that SlotMIM trained on ego-centric data learns to discover concepts adaptive to the pre-training data distribution, avoiding over-compression and consistently improving with more data. The resulting models—evaluated on six diverse tasks including Franka Kitchen, Meta-World, ObjectNav, ImageNav, COCO object detection and instance segmentation, and ADE20K semantic segmentation—demonstrate that SlotMIM consistently outperforms existing methods with better efficiency and scalability. Specifically, when pre-trained with + +just 241K samples, it already outperforms prior methods that used over 1M samples, including established approaches like MVP and VC-1. More significantly, while other methods saturate or degrade with increased NOC data, SlotMIM continues to improve and surpass previous methods that used $3 \times$ more data on both ADE20K and COCO benchmarks. 
This suggests that NOC data, when properly leveraged, can be a more scalable and efficient learning resource than previously thought, enabling new possibilities for scaling up PVMs. + +In conclusion, our work challenges the prevailing trend of relying solely on scaling up ego-centric data with the MAE model to improve PVM transfer performance. Instead, we advocate for 1) finding a method (SlotMIM) that supports robust and scalable pre-training on different types of data, and 2) exploring multiple data types and using the one that best aligns with the target task (e.g., ego-centric data for manipulation tasks, scene-centric data for navigation tasks). This approach not only pushes the boundaries of what is possible with self-supervised learning but also aligns more closely with the practical needs of robot learning applications. + +# 2. Related Work + +Visual pre-training for robotics. Following the success of vision pre-training, robotics researchers have begun leveraging pre-trained vision models (PVMs) instead of training from scratch. As shown in [53], policies using PVMs can match or exceed the performance of those using ground-truth state information. A popular line of work is to train PVMs + +
| Pre-train Data | Source | #Image | #Obj/Img | #Class | Type | Video |
| --- | --- | --- | --- | --- | --- | --- |
| INet-241K | ImageNet | 241K | 1.7 | 1000 | Object | X |
| COCO+ | COCO | 241K | 7.3 | 80 | Scene | X |
| CC-241K | CC12M | 241K | - | - | Web | X |
| Ego-241K | Ego4D | 241K | - | - | Ego | ✓ |
+ +Object: Object-centric; Scene: Scene-centric; Web: Web-crawled; Ego: Ego-centric + +![](images/203daef9b211a5922f180850c1327d00d8027f66e00c78bd3138fd4bf3ba0305.jpg) +Object-centric + +![](images/ce12acf157dc33eba7d6e210938418e0d3f04f755f1a2fc8b99bb58c99d122a6.jpg) +Scene-centric + +![](images/d5ae5fe0c6cda6500cff6deee54e9fb999569634b771cf27f30a0515d80c09b2.jpg) +Web-crawled + +![](images/262c8153e61a254dc3a300f3abd06816dbd9ce9c020eda3c0ff5ee174f2e6233.jpg) +Ego-centric + +Table 1. Overview of pre-training datasets. We uniformly sample subsets of 241K images from ImageNet, CC12M, and Ego4D. COCO+ is formed by combining train and unlabeled subsets of COCO. Ego4D frames are extracted at 0.2 fps and then sampled to subsets. 1.28M subsets are also considered in later experiments. For scene-centric data, we use the Open Images [39] dataset to scale up. + +on ego-centric data (e.g., Ego4D [24]), including R3M [47], MVP [55, 73], VIP [44], and VC-1 [45]. However, a recent study [19] shows that PVMs trained on ImageNet can also achieve competitive performance on downstream visuomotor tasks. Our work contributes to this line by expanding the scope of pre-training data to scene-centric and web-crawled data. More importantly, we also investigate methods beyond MAE [29], which has been the predominant choice in many existing works [19, 45, 55, 62]. Other PVM paradigms include training from scratch with proper data augmentation [27], from policy learning [44], from language [36, 43] or 3D [16], etc. Considering evaluation, while many works focus on manipulation tasks [19, 33, 44, 47, 55], we also evaluate navigation tasks as in [45] and include perception-oriented tasks like segmentation and detection, thus providing a more complete picture. + +Self-supervised visual representation learning. Self-supervised representation learning aims at learning transferable features from unlabeled data [1, 2, 5, 10, 12, 13, 17, 65]. 
The field has converged on two main approaches: contrastive learning, which learns representations by comparing positive and negative examples [17, 28, 65], and masked image modeling (MIM), which learns by reconstructing masked image regions [29, 75]. While these methods have proven effective, their evaluation has largely focused on object-centric datasets like ImageNet-1K [20]. Our work broadens this scope by studying self-supervised learning on large-scale non-(single-)object-centric datasets, and primarily evaluating the performance on robot learning tasks.
+
+Learning on non-(single-)object-centric (NOC) data. Several recent works have tackled self-supervised learning on non-object-centric data [3, 22, 30, 31, 67-69, 74]. Among them, the majority focuses on learning from scene-centric data and benefiting scene understanding tasks. The idea is primarily either to build pretext tasks on dense feature maps [3, 68, 74] or object-centric groups [30, 31, 69], or to use specialized data augmentation techniques [22, 67]. In addition, some works have explored learning from uncurated datasets [3, 11, 52, 66], which mainly correspond to large-scale web-crawled datasets, where a main topic is data deduplication [52]. Our work contributes to this line of research by systematically studying the performance of self-supervised learning on multiple types of NOC data.
+
+# 3. When are PVMs More Effective for Visuomotor Control and Perception Tasks?
+
+# 3.1. Pre-Training Setting
+
+Datasets. A common belief is that ego-centric data are the best for robot learning, primarily due to their contextual similarity to manipulation tasks [36, 44, 45, 47, 55, 73]. As suggested by [19], however, data diversity matters more for learning transferrable PVMs, and traditional datasets like ImageNet [20] can be more effective. Our study considers the (single-)object-centric ImageNet [20] and ego-centric Ego4D [24].
Moreover, our study also encompasses scene-centric data (e.g., COCO [41]), which are also close to robot contexts and more information-rich. We also consider web-crawled data (e.g., CC12M [14]), as they are easier to collect and to scale up. As shown in Tab. 1, experiments in this section control the data scale to 241K uniformly sampled images from each dataset. This ensures fair comparison and efficient pre-training, allowing us to expand pre-training to more (method, dataset) combinations.
+
+Methods. Another widely accepted belief is that MAE [29] is one of the best pre-training methods for PVMs [45, 55, 62, 73], which remains unchallenged by [19]. We are interested in whether expanding the search space to NOC data could reveal other strong candidates. While existing works have repeatedly demonstrated MAE's superiority over ResNets [44, 47] and robot-oriented ViTs [18, 54], we compare MAE with several other ViT pre-training methods: BEiT [4], SplitMask [22], DINO [13], and iBOT [80]. These methods were selected based on their strong performance on perception tasks like detection and segmentation. Given the significant computational cost of reproducing pre-training (including MAE) on multiple datasets, this selection provides a representative set of methods.
+
+Training. We use ViT-B/16 [21] as the backbone. At the 241K data scale, all methods are trained for 800 epochs. At the 1.28M data scale, we train for 400 epochs. The optimization hyperparameters follow the official settings of each method.
+
+![](images/a82668d782aa55b52de422759f44889c6296fb6f5ef18e837a25ed5f9d13f39b.jpg)
+Figure 2. Performance of PVMs trained with different (model, data) combinations on visuomotor control and perception tasks. (241K scale, best viewed together with Fig.
1a) Our analysis of existing works reveals several key findings: 1) MAE with ego-centric data shows only moderate performance on visuomotor control tasks and performs poorly on ADE20K; 2) DINO and iBOT lead performance across all tasks, with their best models typically trained on object-centric data (except for ADE20K); 3) The top-3 models (DINO, iBOT, and MAE) struggle to learn effective representations for manipulation when trained on scene-centric data. Most notably, 4) SlotMIM (Sec. 4) consistently outperforms prior methods regardless of whether it is pre-trained on object-centric data or not. + +![](images/6c5275ef8a64179f6d3b427096262e28e08a6b444303d629578a0e06087ebf2e.jpg) + +![](images/a66a97366f9b1d5269bf0fd85975dc6a5650ffa54a0efd4919a77bf09cccc80b.jpg) + +![](images/a78d954f3daa0d0e2f3ae9568e18ac8437c514692da935662ff996dee76080e1.jpg) + +
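The behavior-cloning evaluations summarized in Fig. 2 use the attentive-probing readout of Fig. 3: a single trainable cross-attention query that gathers information from all frozen patch tokens. A minimal sketch follows; the dimensions, single-head form, and random initialization are illustrative assumptions (ViT-B/16 actually emits 768-dim tokens).

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 196, 64                            # 14x14 patch grid; D=64 for illustration
patch_tokens = rng.normal(size=(N, D))    # frozen backbone output

# Trainable parameters of the probe (the backbone itself stays frozen).
query = rng.normal(size=(D,))             # the extra readout token
W_k = rng.normal(size=(D, D)) / np.sqrt(D)
W_v = rng.normal(size=(D, D)) / np.sqrt(D)

def attentive_pool(tokens, query, W_k, W_v):
    """Cross-attention readout: the query attends over all patch tokens and
    returns one pooled feature vector, which is fed to the policy head."""
    keys, values = tokens @ W_k, tokens @ W_v
    scores = keys @ query / np.sqrt(query.size)  # (N,) attention logits
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                           # softmax over patches
    return attn @ values                         # (D,) learnable combination

feature = attentive_pool(patch_tokens, query, W_k, W_v)
print(feature.shape)
```

Unlike reading the [CLS] token, this pooling is learned downstream, so it works even for backbones (e.g., MAE) whose [CLS] token receives no pre-training signal.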
| Benchmark Suite | RGB | Proprio. | Physics | Action | Goal | Learning |
| --- | --- | --- | --- | --- | --- | --- |
| Franka Kitchen [26] | ✓ | X | ✓ | Continu. | - | IL |
| Meta-World [76] | ✓ | X | ✓ | Continu. | - | IL |
| ObjectNav [6] | ✓ | ✓ | X | Discrete | Class | IL |
| ImageNav [81] | ✓ | X | X | Discrete | Image | RL |
+ +![](images/95e8609336edcccb99d168b7fe4c8c967b074ac68d6c1d1281a81effb1d486bb.jpg) +Table 2. Overview of robot manipulation and navigation tasks. Bottom: example tasks of each benchmark suite. + +![](images/d239b0ba4cdc206ce2a3473be14c0fb43de90fca9c53a3daa30892d3a96540b9.jpg) + +![](images/bee82d64a205b545e7095282c4912b27db14bbf627ed8b78a1266f2b51c4c3eb.jpg) + +![](images/ade4979e006a8c2e4b7f925dd302d8ecf18bca896d8f568b9e54d27ec55693b5.jpg) +Figure 3. Behavior cloning with attentive probing. An additional token is trained with cross-attention (trainable) to gather information from all patch tokens from the backbone (frozen), and fed to the policy to learn from expert demonstrations via behavior cloning. + +![](images/54d922f3a7533f8b55a4c6904d0a88eb727df8b4042ece22c76b3d5ee129a740.jpg) + +# 3.2. Evaluation Setting + +We expect a generalizable PVM to benefit both visuomotor control tasks (e.g., manipulating household objects and navigating indoor environments) and perceptual understanding tasks (e.g., recognizing objects and scenes, and correctly localizing them). While these tasks may share some common features, they can also require contrasting properties (e.g., grasping vs. navigation). Therefore, we evaluate PVMs on as diverse tasks as possible to understand their properties and how they interact with (pre-training) dataset biases. An overview is shown in Tab. 2. Due to computational constraints, this section focuses on manipulation (control) and segmentation (perception) tasks, with the strongest PVMs selected for navigation (control) and detection (perception) tasks in later sections. + +Visuomotor control tasks. To make the manipulation tasks better reflect the ability of PVMs, 1) we follow [36] to use trainable attentive pooling (see Fig. 3), as opposed to prior works that employ [CLS] token. 
This is essential for a fair comparison between PVMs, as the [CLS] token of some methods (e.g., MAE) receives no training signal, making its results rather arbitrary. Attentive pooling instead performs a learnable combination of all output tokens from the encoder, thus better utilizing the potential of each model; 2) we also follow [33] in avoiding any proprioceptive input to highlight the effect of PVMs; and 3) we run 3 seeds for each experiment to account for randomness. We record the best performance of each run following [45], and report average performance across seeds and tasks. We adopt 25 demonstrations per task. The benchmark covers 5 tasks in Franka Kitchen [26] and 8 tasks in Meta-World [76]. Remaining details follow [33], and all navigation-task details follow VC-1 [45].

Perception tasks. Following [8], we report the segmentation Jaccard index of the best attention head on Pascal VOC 2012. ADE20K semantic segmentation experiments follow the setting of MAE [29], which uses UperNet [72] and trains for 160K iterations with batch size 16. For COCO object detection and instance segmentation, we follow the setting in iBOT [80] to train a Cascade Mask R-CNN [9] with a $1 \times$ schedule (12 epochs), and report box and mask AP.

![](images/25f379f25e7431f9220119f6ae5ec10fc11148e677a7e8fb96b45ec93b67b58c.jpg)
(a) Clustering assignment of patch tokens. Each patch is assigned to its nearest-neighbor prototype, with different colors indicating different prototypes.

![](images/8db7c536f644d4f0db3b1c357d384e20bdac7e0212e762805f4eb053ffe1824b.jpg)
Figure 4. Comparison of concepts learned by iBOT and SlotMIM. All models are trained on COCO+ for 800 epochs. While iBOT can discover fine-grained patterns, especially when using fewer prototypes (left), these patterns emerge bottom-up and lack semantic meaning.
In contrast, SlotMIM's concepts are semantically coherent, making them more effective for instance discrimination pretext tasks (right).

![](images/75d4537cffcb6f84a7c7ddefb1686bf0eb6eb878e3aa15fafa1f07887c4c5611.jpg)
(b) Top-5 segments retrieved by the prototypes (by column). A segment consists of patches assigned to the same prototype within an image. Each column shows the top-5 segments with the highest cosine similarity to the prototype corresponding to the column.

# 3.3. Main Findings

We evaluate models pre-trained on 241K-scale datasets, with complete results shown in Fig. 2. For better understanding, we also visualize processed data in Fig. 1a and Fig. 1b. Below we discuss several key findings.

Neither MAE nor ego-centric data is optimal for visuomotor control. As shown in Fig. 2 (complete results) and Fig. 1a (overall results), MAE and ego-centric data achieve only moderate performance on visuomotor control tasks, and perform poorly on ADE20K. This challenges the prevailing belief that MAE combined with ego-centric data is the best pre-training approach for PVMs, suggesting the need to explore alternative methods.

DINO/iBOT excel across tasks, especially with object-centric data. As demonstrated in Fig. 2 and Fig. 1a, DINO and iBOT lead performance on most tasks, particularly when trained on object-centric data. iBOT also achieves strong semantic segmentation results when trained on scene-centric data. However, this performance advantage is sensitive to data distribution and does not generalize well across different pre-training datasets.

NOC data, particularly scene-centric data, significantly degrades top-performing methods. While DINO/iBOT models achieve leading performance, they suffer substantial degradation on scene-centric data, as does MAE. This observation motivated us to investigate the underlying causes and develop a more robust PVM.

Objectness predicts DINO/iBOT performance. In Fig.
1b, we demonstrate that DINO/iBOT's ability to learn objectness through attention maps deteriorates alongside performance drops on NOC data. The strong correlation (0.72) between objectness and task success rates aligns with findings from [8]. This suggests that achieving good objectness from NOC data could lead to strong performance, which motivated our design of SlotMIM (introduced in Sec. 4). As shown in Fig. 2, this approach proves effective.

# 4. Object-centric Learning on NOC Data

# 4.1. Preliminaries

Deep clustering as self-distillation. DINO [13] learns a set of $C$ prototypes online to cluster image embeddings. Given an input image $\pmb{x} \in \mathbb{R}^{H \times W \times 3}$, let $f_{\theta}$ and $f_{\xi}$ be student and teacher encoders that produce embeddings $\pmb{z}_{\theta} = f_{\theta}(\pmb{x})$ and $\pmb{z}_{\xi} = f_{\xi}(\pmb{x})$, respectively. The cluster assignments are computed as $p_{\theta}(\pmb{x}) = \mathrm{softmax}(\pmb{z}_{\theta} \cdot \mathcal{C} / \tau)$, where $\mathcal{C} = \{\pmb{c}_c\}_{c=1}^{C}$ are the prototypes and $\tau$ is a temperature parameter. The loss is the cross-entropy between the predictions of the student and teacher models: $\mathcal{L}_{\mathrm{DINO}}(\pmb{v}^1, \pmb{v}^2) = -\sum_{c=1}^{C} q_{\xi}(\pmb{v}^2)_c \log p_{\theta}(\pmb{v}^1)_c$, where $\pmb{v}^1$ and $\pmb{v}^2$ are two augmented views of the same image. Centering and sharpening operations are omitted from the equation for simplicity. Since it resembles knowledge distillation with soft labels produced by the model itself, DINO is also dubbed self-distillation.

DINO on image patches with MIM. iBOT [80] extends DINO to local image patches using masked image modeling (MIM). Given a binary mask $\mathcal{M} \in \{0,1\}^N$ indicating masked patches, the masked input $\tilde{\pmb{v}}$ replaces masked patches with a mask token $\pmb{m}$.
The iBOT loss predicts the cluster assignments of masked patches from unmasked ones: $\mathcal{L}_{\mathrm{iBOT}}(\pmb{v}) = \sum_{i:\mathcal{M}_i = 1}\mathcal{L}_{\mathrm{DINO}}(\tilde{\pmb{v}}_i,\pmb{v}_i)$, where $\tilde{\pmb{v}}_i$ is the masked patch from the student and $\pmb{v}_i$ is the corresponding unmasked patch from the teacher.

![](images/fe3143c4aff3565f8125d846c2471c3e9917ca1acc8e84243e9de23346bfe82f.jpg)
Figure 5. Overview of SlotMIM. Our framework extends iBOT by: 1) repurposing its within-view patch-level self-distillation for object discovery, 2) introducing a cross-view objective for semantic guidance, and 3) performing object-centric contrastive learning on slots (object features grouped from patches with matching cluster assignments). This approach provides explicit objectness supervision without requiring object-centric data, making it applicable to various types of NOC data (see Fig. 1c for comparison and Fig. 1d for results).

![](images/61a3ce1b2e8a5256db49abf9630187109143bdcbc033201e48e405d71814d65e.jpg)

Slot attention [42] is a variant of cross-attention that normalizes attention scores along the query axis instead of the key axis, encouraging queries to attend to different parts of the input. Our approach performs similar attentive pooling on patch embeddings based on their cluster assignments, with prototypes $\mathcal{C}$ acting as queries and patch embeddings $z_{\theta,i}$ as keys. Following convention, we refer to these pooled object features as slots: prototypes adapted to image patches.

# 4.2. SlotMIM Framework

High-level intuition. We decompose self-supervised learning on NOC data into two subtasks: 1) learning to group image patches into objects, and 2) learning to discriminate objects, as prior work has done on object-centric data. The key challenge is unsupervised object discovery, which we find emerges naturally from iBOT when using fewer prototypes.

Representation bottleneck for objectness.
iBOT uses a set of prototype embeddings $\mathcal{C} = \{\pmb{c}_c\}_{c=1}^C$ to cluster image patches into $C$ groups, assigning each patch token a soft assignment $p_{\theta}(\pmb{x}_i)$ over the clusters. While iBOT typically uses $C = 8192$ to capture fine patterns, we find a much smaller $C$ (e.g., 512 for COCO) better serves object discovery by creating an information bottleneck that encourages learning compositional object concepts. As shown in Fig. 4a, iBOT's clusters are very fine-grained (2nd row), but objectness emerges with fewer prototypes (3rd row). However, these patterns still lack semantic meaning and can fragment single objects. Additionally, matching discovered objects between views remains difficult, as their semantics vary despite being assigned to the same prototype (Fig. 4b, left). Both issues suggest the need for semantic-level prototypes.

Cross-view consistency for semantic learning. The lack of semantic meaning stems from $\mathcal{L}_{\mathrm{iBOT}}$ being computed between patches within the same view, providing no explicit guidance for learning view-invariant representations. We address this with a simple cross-view consistency objective $\mathcal{L}_{\mathrm{patch}}^{\mathrm{cross}}$ that encourages patches under different transformations to share the same token. We match patches between views using RoIAlign to align overlapping regions.
Formally, for two augmented views $\pmb{v}^{1}$ and $\pmb{v}^{2}$ with patch embeddings $\tilde{z}_{\theta,i}^{1} = f_{\theta}(\tilde{v}_{i}^{1})$ and $z_{\xi,j}^{2} = f_{\xi}(v_{j}^{2})$, the loss is:

$$
\mathcal{L}_{\mathrm{patch}}^{\mathrm{cross}}\left(\pmb{v}^{1}, \pmb{v}^{2}\right) = -\frac{1}{|\mathcal{P}|}\sum_{(i,j)\in\mathcal{P}}\sum_{c=1}^{C}\pmb{q}_{\xi,j,c}^{2}\log\tilde{\pmb{p}}_{\theta,i,c}^{1}, \tag{1}
$$

where $\tilde{\pmb{p}}_{\theta}^{1} = \mathrm{softmax}(\tilde{\pmb{z}}_{\theta}^{1}\cdot\mathcal{C}_{\theta}/\tau_{s})$ and $\pmb{q}_{\xi}^{2} = \mathrm{softmax}(\pmb{z}_{\xi}^{2}\cdot\mathcal{C}_{\xi}/\tau_{t})$ are cluster assignments, $\tau_{s}$ and $\tau_{t}$ are temperature parameters, and $\mathcal{P}$ contains the matched patch pairs $(i, j)$.

Object-level contrastive learning. With aligned object features, we apply contrastive learning at the object level. We only use slots that occupy at least one patch, filtering with the indicator $\mathbb{1}_i = \exists j$ such that $\operatorname{argmax}_c(p_\theta(\pmb{v}_j^1)_c) = i$. Slots with matching tokens form positive pairs.
Using a MoCo-style approach with slots $s_{\theta,i}^1 = h_\theta\big(\sum_j p_\theta(\pmb{v}_j^1)_i\,\pmb{z}_{\theta,j}^1\big)$ and $s_{\xi,i}^2 = \sum_j q_\xi(\pmb{v}_j^2)_i\,\pmb{z}_{\xi,j}^2$, the loss is:

$$
\mathcal{L}_{\mathrm{slot}}\left(\tilde{\pmb{s}}_{\theta}^{1}, \pmb{s}_{\xi}^{2}\right) = -\frac{1}{K}\sum_{i=1}^{C}\mathbb{1}_{i}^{1}\mathbb{1}_{i}^{2}\log\frac{\exp\left(\pmb{s}_{\theta,i}^{1}\cdot\pmb{s}_{\xi,i}^{2}/\tau\right)}{\sum_{j=1}^{C}\mathbb{1}_{j}^{2}\exp\left(\pmb{s}_{\theta,i}^{1}\cdot\pmb{s}_{\xi,j}^{2}/\tau\right)}, \tag{2}
$$

where $h_\theta$ is a predictor MLP, $K = \sum_{i}\mathbb{1}_i^1\mathbb{1}_i^2$ counts the positive pairs, and $\tau = 0.2$.

![](images/17ffc5d26eb381b90416a0ab3458a37457f464512b228a8823596f6cb466b191.jpg)
Figure 6. Results of scaling PVM training data. It examines three factors that influence manipulation success rates: data types, pre-training methods, and data scale. Highlighted lines represent the best-performing data scaling for each method, while faded lines indicate sub-optimal performance. It shows that 1) SlotMIM achieves the best performance, scalability, and data efficiency across evaluation tasks by training on NOC data; 2) on manipulation tasks, most methods (except MAE) face performance drops when scaling up pre-training data.

![](images/fff8ecfb20ec9aab3d2d0e2596361125a2e2de5e3629bd52fc37e676f710f777.jpg)

![](images/368fafdd1ccadfd816a4f12e5cb511e4f85af5fe93ba6e6efc9082a19c6dd8b4.jpg)
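To make the pool-then-contrast step concrete, here is a minimal NumPy sketch of slot pooling and a masked slot-level InfoNCE in the spirit of Eq. (2). All function names, shapes, and the assignment temperature are our own illustrative choices; the predictor $h_\theta$, the masked student view, stop-gradients, and the EMA teacher are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pool_slots(z, prototypes, tau=0.1):
    """Attentively pool N patch embeddings (N, D) into C slots (C, D).

    Each patch's soft assignment over prototypes acts like attention
    normalized along the query (prototype) axis; `present[i]` is True
    iff prototype i wins the argmax for at least one patch (the 1_i indicator).
    """
    p = softmax(z @ prototypes.T / tau)          # (N, C) cluster assignments
    slots = p.T @ z                              # (C, D) assignment-weighted pooling
    present = np.zeros(prototypes.shape[0], dtype=bool)
    present[p.argmax(axis=1)] = True
    return slots, present

def slot_contrastive_loss(s1, s2, present1, present2, tau=0.2):
    """Masked InfoNCE: slot i of view 1 should match slot i of view 2;
    slots absent from either view are excluded from positives and negatives."""
    s1 = s1 / np.linalg.norm(s1, axis=1, keepdims=True)
    s2 = s2 / np.linalg.norm(s2, axis=1, keepdims=True)
    logits = s1 @ s2.T / tau                     # (C, C) slot similarities
    logits[:, ~present2] = -np.inf               # drop empty view-2 slots from denominator
    m = logits.max(axis=1, keepdims=True)        # stable log-softmax over each row
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    pos = present1 & present2                    # positive pairs share a prototype index
    return -log_prob[np.diag_indices_from(logits)][pos].mean()
```

With two correlated sets of patch embeddings (two "views"), the loss is a finite non-negative number that shrinks as the matched slots of the two views align.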
The final loss combines these objectives: $\mathcal{L}_{\theta,\xi}(\tilde{\boldsymbol{v}}^1, \boldsymbol{v}^2) = \lambda_1 \mathcal{L}_{\mathrm{patch}}^{\mathrm{within}}(\tilde{\boldsymbol{v}}^1, \boldsymbol{v}^2) + \lambda_1 \mathcal{L}_{\mathrm{patch}}^{\mathrm{cross}}(\tilde{\boldsymbol{v}}^1, \boldsymbol{v}^2) + \lambda_2 \mathcal{L}_{\mathrm{slot}}(\tilde{\boldsymbol{s}}_\theta^1, \boldsymbol{s}_\xi^2)$ , in which $\mathcal{L}_{\mathrm{patch}}^{\mathrm{within}}$ is identical to $\mathcal{L}_{\mathrm{iBOT}}$ and $\lambda_1 = 0.5$ , $\lambda_2 = 1$ . In practice, we optimize the symmetrized objective $\mathcal{L}_{\theta,\xi}(\tilde{\boldsymbol{v}}^1, \boldsymbol{v}^2) + \mathcal{L}_{\theta,\xi}(\tilde{\boldsymbol{v}}^2, \boldsymbol{v}^1)$ . + +# 4.3. Ablation Study + +
| # | mask | $\mathcal{L}_{\mathrm{patch}}^{\mathrm{cross}}$ | $\mathcal{L}_{\mathrm{patch}}^{\mathrm{within}}$ | $\mathcal{L}_{\mathrm{slot}}$ | $k$-NN | ADE | Jacc | $\overline{K}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | ✗ | ✓ | ✗ | ✗ | 45.1 | 47.4 | 42.5 | 8.3 |
| 2 | ✓ | ✓ | ✗ | ✗ | 44.9 | 48.6 | 42.3 | 10.3 |
| 3 | ✓ | ✗ | ✓ | ✗ | 27.7 | 45.7 | 39.3 | 20.7 |
| 4 | ✓ | ✓ | ✗ | ✓ | 45.3 | 47.5 | 42.9 | 8.4 |
| 5 | ✓ | ✓ | ✓ | ✓ | 46.2 | 49.1 | 43.9 | 9.4 |
Table 3. Ablation study on effective modules.

We ablate several key components of SlotMIM. All models are trained on COCO+ for 800 epochs. As shown in Tab. 3, we evaluate each model variant using three metrics: $k$-NN accuracy on ImageNet classification, mean IoU on ADE20K semantic segmentation, and the Jaccard index on VOC2012. We also report $\overline{K}$, the average number of objects/stuff regions discovered per image. The results reveal that the Jaccard index positively correlates with representation quality, indicating that better objectness leads to stronger representations. Adding MIM improves downstream segmentation performance (comparing rows 1 and 2). The cross-view consistency loss and the slot contrastive loss both contribute significantly to objectness (rows 2, 3, 5). Additionally, the within-view loss acts as an effective regularizer that further enhances the learned representations (rows 4, 5). Additional ablation studies are available in the appendix.

# 5. Scaling Up Pre-Training Data

To provide an even more comprehensive picture, we extend Fig. 2 by scaling up the pre-training data from 241K to 1.28M and beyond (4M for SlotMIM, 5M for MAE, and 12M for iBOT). The models at the 1.28M scale follow the same settings as in Tab. 1. For SlotMIM, we combine ImageNet [20], COCO+ [41], OpenImages [39], Objects365 [63], and LVIS [25] to create a 4M-scale scene-centric dataset, which we call DetSoup. We also utilize publicly available checkpoints of MAE trained on 5M-scale ego-centric datasets (MVP [55] and VC-1 [45]), and iBOT trained on the 12M-scale ImageNet-21K [58]. The results are presented in Fig. 6.

# 5.1. Experiments on Manipulation Tasks

The MAE regime includes MVP [55] and VC-1 [45], which leverage MAE [29] to pre-train models on a massive collection of ego-centric videos [24] and Internet data. V-Cond [36] further proposes language-driven representation learning from human videos and their associated captions. Fig.
6 examines the relationship between manipulation success rates and pre-training methods, comparing scaling trends across different data types: ego-centric, object-centric, and scene-centric. Notably, increasing dataset size does not always improve performance across benchmarks, as also observed by [19].

What leads to inverse-scaling behaviors on manipulation tasks? In object manipulation tasks, scaling scene-centric and object-centric data to the million level can lead to performance drops for methods like DINO and iBOT, with MAE being the only exception. We hypothesize that self-supervised representation learning, including MIM, aims to learn invariance by pulling similar visual content together in the embedding space, effectively compressing visual data. However, scaling up data may result in over-compression, losing crucial low-level visual information necessary for visuomotor control tasks (e.g., accurate object grasping). MAE, on the other hand, continues to preserve low-level information due to the nature of its MIM objectives. This also explains why MAE is commonly preferred in existing PVM works: they typically start with million-scale data, and MAE is one of the few methods that avoid over-compression at such scale.

Why does SlotMIM not face over-compression when trained on ego-centric data? SlotMIM builds its training objective on the concepts it discovers, and the type of discovered concepts is determined by the training data distribution. As shown in Fig. 1d, SlotMIM learns coarse-grained objects from object/scene-centric data but fine-grained parts from ego-centric data, achieving a good balance between invariance and objectness. We believe this occurs because, unlike object/scene-centric data from diverse Internet sources, ego-centric images come from consecutive human videos sharing contextual backgrounds or scenarios.
This contextual similarity means that invariance learning on ego-centric data focuses more on differences within the same video or scenario, particularly in foreground objects. This focus is crucial for robot manipulation learning, which requires effective interaction with these foreground objects.

SlotMIM is more data-efficient and scalable in leveraging ego-centric data. Compared to general-purpose pre-trained models and state-of-the-art robot learning methods (e.g., MVP [55] and VC-1 [45]), we demonstrate that SlotMIM, pre-trained with just 241K data samples, outperforms prior methods that used over 1M samples. When scaled to 1M ego-centric samples, it achieves the highest success rates among all methods in the comparison.

# 5.2. Experiments on ADE20K and COCO

As shown in Fig. 6 (right), compared to previous efforts that scale up with ImageNet-21K (12M images) [58], our SlotMIM models continue to improve and surpass them using $3 \times$ less data. This suggests that NOC data can be a more scalable learning resource.

![](images/53e36709e812fdce763866e04a3657a8751069d7f83b7ee17c640b524c9b5642.jpg)
Figure 7. Results on COCO object detection and instance segmentation. SlotMIM shows better data efficiency with both object-centric and NOC data, and its results continue to improve with more data, surpassing prior models by a notable margin.

![](images/eaf8aeab68e10e1ee7b1346707d81399f5e40ae612da9d5fc1f65a53842fa3bb.jpg)

In Fig. 7, we present an evaluation on COCO object detection and instance segmentation. The superiority of SlotMIM is evident and continues to grow with increased data scale. As supported by the visualizations in Fig. 1d, SlotMIM learns to discover common objects from scenes in alignment with human visual perception, supporting its scaling capability for segmentation/detection tasks.

# 5.3. Experiments on Navigation Tasks
| Method | Arch | Pretrain Data | Scale | ObjectNav | ImageNav |
| --- | --- | --- | --- | --- | --- |
| MVP [55] | ViT-B | EgoSoup | 4.6M | 51.2 | 64.7 |
| VC-1 [45] | ViT-B | Ego4D+MNI | 5.6M | 55.4 | 67.9 |
| SlotMIM | ViT-B | Ego4D | 1.28M | 48.4 | 65.4 |
| SlotMIM | ViT-B | DetSoup | 4.0M | 62.0 | 69.8 |
| MVP [55] | ViT-L | EgoSoup | 4.6M | 55.0 | 68.1 |
| VC-1 [45] | ViT-L | Ego4D+MNI | 5.6M | 60.3 | 70.3 |
Table 4. Results on ObjectNav and ImageNav. Compared with prior SoTA methods using ViT-B, SlotMIM trained on DetSoup improves by $6.6\%$ and $1.9\%$ on ObjectNav and ImageNav, respectively. Compared with ViT-L models, it still leads on ObjectNav and achieves comparable performance on ImageNav.

In Tab. 4, we compare SlotMIM's performance on the ObjectNav and ImageNav benchmarks with SoTA methods. Details of the navigation tasks can be found in Tab. 2. Due to the extreme GPU demands of these tasks, we evaluated only two SlotMIM models, each trained on the largest-scale scene-centric and ego-centric data available (DetSoup and Ego4D). SlotMIM trained on Ego4D shows moderate performance on ImageNav but underperforms on ObjectNav, indicating a misalignment with the downstream tasks. In contrast, when trained on DetSoup, it outperforms MVP and VC-1 by $6.6\%$ and $1.9\%$ on ObjectNav and ImageNav, respectively. This demonstrates the effectiveness of SlotMIM and suggests that navigation is closely related to perception tasks.

# 6. Conclusion

This work revisits the potential of PVMs for robot learning tasks. We construct a comprehensive benchmark that evaluates models on robot learning tasks to identify the best pre-training method and the best pre-training data. Our findings suggest that while DINO and iBOT lead the benchmark, their performance degrades rapidly when trained on NOC data. Furthermore, this degradation can be partially alleviated by strengthening the ability to learn object-centric representations. Motivated by these findings, we propose SlotMIM to effectively learn object-centric representations from NOC datasets. Through extensive experiments across diverse datasets and downstream tasks, including robotics, we demonstrate the consistent superiority of SlotMIM over existing methods. We hope our promising results open new avenues for adopting PVMs in more robot learning tasks.
+ +# Acknowledgements + +This work has been supported in part by Hong Kong Research Grant Council — Early Career Scheme (Grant No. 27209621), General Research Fund Scheme (Grant No. 17202422, 17212923), Theme-based Research (Grant No. T45-701/22-R) and Shenzhen Science and Technology Innovation Commission (SGDX20220530111405040). Part of the described research work is conducted in the JC STEM Lab of Robotics for Soft Materials funded by The Hong Kong Jockey Club Charities Trust. + +# References + +[1] Yuki M. Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. In ICLR, 2020. 3 +[2] Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In CVPR, pages 15619–15629, 2023. 3 +[3] Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, and Alexander C. Berg. Point-level region contrast for object detection pre-training. In CVPR, pages 16061-16070, 2022. 3 +[4] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In ICLR, 2022. 3 +[5] Adrien Bardes, Jean Ponce, and Yann LeCun. VICRegL: Self-supervised learning of local visual features. In NeurIPS, pages 8799-8810, 2022. 3 +[6] Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, Alexander Toshev, and Erik Wijmans. ObjectNav revisited: On evaluation of embodied agents navigating to objects. arXiv:2006.13171, 2020. 
4, 13 +[7] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. RT-1: Robotics transformer for real-world control at scale. In RSS, 2023. 14 +[8] Kaylee Burns, Zach Witzel, Jubayer Ibn Hamid, Tianhe Yu, Chelsea Finn, and Karol Hausman. What makes pre-trained visual representations successful for robust manipulation? In CoRL, 2024. 1, 4, 5, 14 +[9] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: High quality object detection and instance segmentation. IEEE TPAMI, 43(5):1483-1498, 2019. 4, 14 + +[10] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, pages 139-156, 2018. 3 +[11] Mathilde Caron, Piotr Bojanowski, Julien Mairal, and Armand Joulin. Unsupervised pre-training of image features on non-curated data. In ICCV, pages 2959-2968, 2019. 3 +[12] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In NeurIPS, pages 9912-9924, 2020. 3, 15 +[13] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, pages 9650-9660, 2021. 
1, 3, 5, 13, 14, 15 +[14] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing web-scale image-text pretraining to recognize long-tail visual concepts. In CVPR, pages 3558-3568, 2021. 1, 3 +[15] Chi-Lam Cheang, Guangzeng Chen, Ya Jing, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Hongtao Wu, Jiafeng Xu, Yichu Yang, Hanbo Zhang, and Minzhao Zhu. GR-2: A generative video-language-action model with web-scale knowledge for robot manipulation. arXiv:2410.06158, 2024. 14 +[16] Shizhe Chen, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. SUGAR: Pre-training 3d visual representations for robotics. In CVPR, 2024. 3 +[17] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, pages 1597-1607, 2020. 3 +[18] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, pages 9640-9649, 2021. 2, 3, 13 +[19] Sudeep Dasari, Mohan Kumar Srirama, Unnat Jain, and Abhinav Gupta. An unbiased look at datasets for visuo-motor pre-training. In CoRL, 2023. 1, 3, 7 +[20] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009. 1, 3, 7, 17 +[21] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 3, 13 +[22] Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Herve Jegou, and Edouard Grave. Are large-scale datasets necessary for self-supervised pre-training? arXiv:2112.10740, 2021.3 +[23] Kuan Fang, Fangchen Liu, Pieter Abbeel, and Sergey Levine. MOKA: Open-world robotic manipulation through mark-based visual prompting. In RSS, 2024. 
14 +[24] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, + +Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abraham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Wesley Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Kartikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, and Jitendra Malik. Ego4D: Around the world in 3,000 hours of egocentric video. In CVPR, pages 18995-19012, 2022. 1, 3, 7, 14, 15 +[25] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In CVPR, pages 5356-5364, 2019. 7, 17 +[26] Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. arXiv:1910.11956, 2019. 1, 4 +[27] Nicklas Hansen, Zhecheng Yuan, Yanjie Ze, Tongzhou Mu, Aravind Rajeswaran, Hao Su, Huazhe Xu, and Xiaolong Wang. On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline. 
In ICML, pages 12511-12526, 2023. 3 +[28] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pages 9729-9738, 2020. 3 +[29] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, pages 16000-16009, 2022. 1, 3, 4, 7, 16 +[30] Olivier J. Henaff, Skanda Koppula, Jean-Baptiste Alayrac, Aaron van den Oord, Oriol Vinyals, and João Carreira. Efficient visual pretraining with contrastive detection. In ICCV, pages 10086-10096, 2021. 3 +[31] Olivier J. Henaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, and Relja Arandjelović. Object discovery and representation networks. In ECCV, pages 123–143, 2022. 3 +[32] Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of GPT-4V in robotic vision-language planning. arXiv:2311.17842, 2023. 14 +[33] Yingdong Hu, Renhao Wang, Li Erran Li, and Yang Gao. For pre-trained vision models in motor control, not all policy learning methods are created equal. In ICML, pages 13628-13651, 2023. 3, 4, 13 +[34] Haoxu Huang, Fanqi Lin, Yingdong Hu, Shengjie Wang, and Yang Gao. CoPa: General robotic manipulation + +through spatial constraints of parts with foundation models. arXiv:2403.08248, 2024. 14 +[35] Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, and Li Fei-Fei. VoxPoser: Composable 3d value maps for robotic manipulation with language models. In CoRL, pages 540–562, 2023. 14 +[36] Siddharth Karamcheti, Suraj Nair, Annie S. Chen, Thomas Kollar, Chelsea Finn, Dorsa Sadigh, and Percy Liang. Language-driven representation learning for robotics. In RSS, 2023. 
3, 4, 7, 13, 15 +[37] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan P Foster, Pannag R Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, and Chelsea Finn. OpenVLA: An open-source vision-language-action model. In CoRL, 2024. 14, 15 +[38] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dólar, and Ross Girshick. Segment anything. In ICCV, pages 4015-4026, 2023. 14, 15 +[39] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The Open Images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. IJCV, 128(7):1956-1981, 2020. 1, 3, 7, 14, 17 +[40] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In ICML, pages 3744-3753, 2019. 13 +[41] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740-755, 2014. 1, 3, 7, 14, 17 +[42] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. In NeurIPS, pages 11525-11538, 2020. 6 +[43] Yecheng Jason Ma, Vikash Kumar, Amy Zhang, Osbert Bastani, and Dinesh Jayaraman. LIV: Language-image representations and rewards for robotic control. In ICML, pages 23301-23320, 2023. 3 +[44] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Obert Bastani, Vikash Kumar, and Amy Zhang. 
VIP: Towards universal visual reward and representation via value-implicit pre-training. In ICLR, 2023. 3 +[45] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, and Franziska Meier. Where are we in the search for an artificial visual cortex for embodied intelligence? In NeurIPS, pages 655-677, 2023. 1, 3, 4, 7, 8, 13, 15 +[46] Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh + +Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. Simple open-vocabulary object detection with vision transformers. In ECCV, pages 728-755, 2022. 14 +[47] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3M: A universal visual representation for robot manipulation. In CoRL, pages 892–909, 2023. 3, 13 +[48] Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In NeurIPS, pages 23296-23308, 2021. 14 +[49] Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, and Brian Ichter. PIVOT: Iterative visual prompting elicits actionable knowledge for VLMs. In ICML, pages 37321-37341, 2024. 14 +[50] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joel Hejna, Charles Xu, Jianlan Luo, Tobias Kreiman, You Liang Tan, Lawrence Yunliang Chen, Pannag Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, and Sergey Levine. Octo: An open-source generalist robot policy. In RSS, 2024. 
14, 15 +[51] OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023. 14 +[52] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mido Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, PoYao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. DINOV2: Learning robust visual features without supervision. TMLR, 2024. 3, 14, 15, 16 +[53] Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, and Abhinav Gupta. The unsurprising effectiveness of pretrained vision models for control. In ICML, pages 17359-17371, 2022. 2 +[54] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 3, 14 +[55] Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, and Trevor Darrell. Real-world robot learning with masked visual pre-training. In CoRL, pages 416-426, 2023. 1, 3, 7, 8 +[56] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 21(140):1-67, 2020. 14 +[57] Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alexander Clegg, John M + +Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, Manolis Savva, Yili Zhao, and Dhruv Batra. Habitat-matterport 3d dataset (HM3d): 1000 large-scale 3d environments for embodied AI. In NeurIPS, 2021. 
13 +[58] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015. 7, 8, 17 +[59] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A platform for embodied AI research. In ICCV, pages 9339-9347, 2019. 13 +[60] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: Open dataset of CLIP-filtered 400 million image-text pairs. arXiv:2111.02114, 2021. 15 +[61] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5B: An open large-scale dataset for training next generation image-text models. In NeurIPS, pages 25278–25294, 2022. 14 +[62] Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, and Pieter Abbeel. Multi-view masked world models for visual robotic manipulation. In ICML, pages 30613-30632, 2023. 3 +[63] Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In ICCV, pages 8430-8439, 2019. 7, 17 +[64] Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, Chelsea Finn, and Karol Hausman. Open-world object manipulation using pre-trained vision-language models. In CoRL, pages 3397-3417, 2023. 14 +[65] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. 
In ECCV, pages 776-794, 2020. 3 +[66] Yonglong Tian, Olivier J. Henaff, and Aaron van den Oord. Divide and contrast: Self-supervised learning from uncurated data. In ICCV, pages 10063-10074, 2021. 3 +[67] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, and Luc V Gool. Revisiting contrastive methods for unsupervised learning of visual representations. In NeurIPS, pages 16238-16250, 2021. 3 +[68] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In CVPR, pages 3024-3033, 2021. 3 +[69] Xin Wen, Bingchen Zhao, Anlin Zheng, Xiangyu Zhang, and Xiaojuan Qi. Self-supervised visual representation learning with semantic grouping. In NeurIPS, pages 16423-16438, 2022. 3, 14 + +[70] Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan Liu, Hang Li, and Tao Kong. Unleashing large-scale video generative pre-training for visual robot manipulation. In ICLR, 2024. 14 +[71] Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In CVPR, pages 9068-9079, 2018. 13 +[72] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, pages 418-434, 2018. 4, 13 +[73] Tete Xiao, Ilija Radosavovic, Trevor Darrell, and Jitendra Malik. Masked visual pre-training for motor control. arXiv:2203.06173, 2022. 3 +[74] Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. In CVPR, pages 16684-16693, 2021. 3 +[75] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. SimMIM: A simple framework for masked image modeling. In CVPR, pages 9653-9663, 2022. 3 +[76] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. 
Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In CoRL, 2019. 1, 4 +[77] Jia Zeng, Qingwen Bu, Bangjun Wang, Wenke Xia, Li Chen, Hao Dong, Haoming Song, Dong Wang, Di Hu, Ping Luo, Heming Cui, Bin Zhao, Xuelong Li, Yu Qiao, and Hongyang Li. Learning manipulation by predicting interaction. In RSS, 2024. 15 +[78] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In CVPR, pages 12104-12113, 2022. 13 +[79] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In CVPR, pages 633-641, 2017. 1 +[80] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image BERT pre-training with online tokenizer. In ICLR, 2022. 1, 3, 4, 5, 13, 14 +[81] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In ICRA, pages 3357–3364, 2017. 4, 13 +[82] Brianna Zitkovich, Tianhe Yu, Sichun Xu, Peng Xu, Ted Xiao, Fei Xia, Jialin Wu, Paul Wohlhart, Stefan Welker, Ayzaan Wahid, Quan Vuong, Vincent Vanhoucke, Huong Tran, Radu Soricut, Anikait Singh, Jaspiar Singh, Pierre Sermanet, Pannag R. Sanketi, Grecia Salazar, Michael S. Ryoo, Krista Reymann, Kanishka Rao, Karl Pertsch, Igor Mordatch, Henryk Michalewski, Yao Lu, Sergey Levine, Lisa Lee, Tsang-Wei Edward Lee, Isabel Leal, Yuheng Kuang, Dmitry Kalashnikov, Ryan Julian, Nikhil J. Joshi, Alex Irpan, Brian Ichter, Jasmine Hsu, Alexander Herzog, Karol Hausman, Keerthana Gopalakrishnan, Chuyuan Fu, Pete Florence, Chelsea Finn, Kumar Avinava Dubey, Danny Driess, Tianli Ding, Krzysztof Marcin Choromanski, Xi Chen, Yev + +gen Chebotar, Justice Carbajal, Noah Brown, Anthony Brohan, Montserrat Gonzalez Arenas, and Kehang Han. RT-2: Vision-language-action models transfer web knowledge to robotic control. In CoRL, pages 2165–2183, 2023. 
14 \ No newline at end of file diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/images.zip b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c6f1073dd30d29e9617d92b50caeb60f7d52b1a4 --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c08104cc3b9b10194b2e7bcbacfd31dc457c408c818d2b4b6f012ff1bda268f1 +size 602889 diff --git a/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/layout.json b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3d872db25f4e767c6dc813b09f0a789d3a34cbd8 --- /dev/null +++ b/CVPR/2025/A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaf55c510efd40bd0b8e28725c45fbc24a143ec2cfa74f15f5f699bbb3f01d95 +size 474799 diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_content_list.json b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3edc80447652fb70c14f94b68a5e669c8ff8c57b --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15f5f949336dc509f9b13ccad9d2829188cc397f5462f9d4014ea5d44b9b284f +size 80599 diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_model.json 
b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..11fad710eed1473a91062ed59bfc417155b45769 --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6edd3300bcec38aee802f3a1d8e418713500d2a3f2109d905e18e2c87ca1bac0 +size 94339 diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_origin.pdf b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3c920d8e0f3839ef2b65d477dfedf3174107598a --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/23aebd3c-4a4f-4148-b060-1061f0f8c69a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6970af01e4d74d005a540bf7bfa50790b3597d222465e3c53103b20ce797311f +size 870738 diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/full.md b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/full.md new file mode 100644 index 0000000000000000000000000000000000000000..debfbc2660469cfcca5fba2dbe89cedad3b6e54e --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/full.md @@ -0,0 +1,302 @@ +# A Dataset for Semantic Segmentation in the Presence of Unknowns + +Zakaria Laskar*,a, Tomáš Vojíř*,a, Matej Grcic*,b, Iaroslav Melekhov●,c, Shankar Gangisettye, Juho Kannalac,d, Jiri Matasa, Giorgos Toliasa, and C.V. 
Jawahar$^{\mathrm{e}}$

$^{a}$ VRG, FEE, Czech Technical University in Prague, Czechia

$^{b}$ University of Zagreb Faculty of Electrical Engineering and Computing, Croatia

$^{c}$ Aalto University, Finland

$^{d}$ University of Oulu, Finland

$^{\mathrm{e}}$ IIIT Hyderabad, India

[Figure 1, left panel "Standard Anomaly Evaluation Setting": not controlled evaluation; no evaluation of the primary objective; mostly toy-like scenes; limited labels. Right panel "Proposed ISSU Benchmark": controlled evaluation (in-domain, cross-sensor, temporal); normal + low light; 19 classes + anomaly label.]

Figure 1. Standard benchmarks cannot separate the effects of domain shift, lighting conditions, and anomaly size during evaluation. The proposed dataset allows controlled evaluation of these effects and supports evaluation of both closed-set and anomaly segmentation.

# Abstract

Before deployment in the real world, deep neural networks require thorough evaluation of how they handle both knowns, i.e., inputs represented in the training data, and unknowns (anomalies). This is especially important for scene understanding tasks with safety-critical applications, such as autonomous driving.
Existing datasets allow evaluation of only knowns or unknowns - but not both, which is required to establish "in the wild" suitability of deep neural network models. To bridge this gap, we propose a novel anomaly segmentation dataset, ISSU, that features a diverse set of anomaly inputs from cluttered real-world environments. The dataset is twice as large as existing anomaly segmentation datasets, and provides a training, validation, and test set for controlled in-domain evaluation. The test set consists of a static and a temporal part, with the latter composed of videos. The dataset provides annotations for both closed-set classes (knowns) and anomalies, enabling closed-set and open-set evaluation. The dataset covers diverse conditions, such as domain shift, cross-sensor shift, and illumination variation, and allows ablation of anomaly detection methods with respect to these variations. Evaluation results of current state-of-the-art methods confirm the need for improvements, especially in domain generalization and in the segmentation of small and large objects. The code and the dataset are available at https://github.com/vojirt/benchmark_issu.

# 1. Introduction

Many successful computer vision applications rely heavily on deep neural networks trained on extensive, fully or partially labeled training data [6, 9, 16, 32]. The validation part of the training process provides performance estimates for situations well covered in the training data.

However, when exposed to data not well represented in training, predictions of a deep neural network model may become arbitrary. Therefore, a crucial capability for these models is the ability to detect such unknown or anomalous inputs.
There are multiple reasons why an input may be "unknown": it might be a rare case belonging to the long tail and missed for statistical reasons, a result of domain shifts such as the introduction of novel classes (e.g., a Segway on a highway), or a result of optical sensor defects (e.g., a broken or dirty lens in a surveillance camera).

Deep neural networks, which lack the ability to recognize unknowns, assign to these anomalous inputs a label that corresponds to one of the known classes of the training set, potentially with high confidence [13]. This may result in suboptimal or even dangerous behavior in deployed systems. Thus, anomaly detection is critical in safety-sensitive applications, such as autonomous driving, where an undetected anomaly could lead to accidents.

Autonomous driving is a very complex task, with one of its core elements being the perception of the environment surrounding the vehicle, often referred to as scene understanding. Scene understanding is typically defined as a closed-set semantic segmentation task, where each pixel in an image is assigned to one of $K$ known classes. Progress in this area has been greatly advanced by large semantic segmentation datasets [6, 20, 26, 30] along with the development of powerful deep learning models [4, 5] specifically designed for semantic segmentation. However, these datasets overlook the anomaly detection problem. Neither anomalous data nor evaluation protocols are provided with their test sets, limiting the ability to evaluate segmentation models in real-world settings where unknowns may occur.

To address these limitations, specialized datasets focusing on anomaly detection in driving scenarios have been developed, including LostAndFound [22], Fishyscapes [1], RoadAnomaly [17], and SMIYC [3]. However, these datasets typically use a binary evaluation approach ("known" vs.
"anomaly"), where all pixels belonging to closed-set classes are assigned a single "known" label, while unknowns are labeled as "anomaly". This approach diverges from open-set $K + 1$ evaluation ("closed-set" vs. "anomaly"), which is essential for real-world applications. Moreover, these datasets lack in-domain training data and are often collected under controlled conditions - usually in clear daylight and in simplified environments such as empty roads or parking areas. This setup leads to limited scene diversity and the absence of clutter from other traffic actors.

In this paper, we introduce ISSU, a fully annotated semantic segmentation dataset that provides both closed-set and anomaly labels for images in the test set. This allows
| Dataset-Year | Size (annotated) | Weather/Env. Cond. | Location | Clutter |
| --- | --- | --- | --- | --- |
| ApolloScape'18 [15] | 145k | Diverse | China | High |
| Mapillary Vistas'17 [20] | 25k | Diverse | World | Diverse |
| BDD100K'20 [30] | 10k | Diverse | US | Low |
| IDD'19 [26] | 10k | Good | India | High |
| Cityscapes'16 [6] | 5k | Good | Europe | Low |
| WildDash 2'22 [31] | 4.3k | Diverse | World | Diverse |
| ACDC'21 [24] | 4k | Adverse | Europe | Low |
| ISSU-Train'24 | 3.4k | Diverse | India | High |
Table 1. Comparison of existing datasets for semantic segmentation for driving scenarios.

for the joint evaluation of closed-set and open-set semantic segmentation, with the anomaly label forming an additional class. Our dataset comprises real-world images collected from roads in India, which, due to the country's unstructured traffic conditions, present a wide range of anomalies of diverse sizes and shapes, under varied lighting conditions, and against complex backgrounds cluttered with on-road traffic agents. ISSU consists of three parts: ISSU-Train, ISSU-Test-Static, and ISSU-Test-Temporal. ISSU-Train includes training and validation sets, while ISSU-Test-Static forms the test split for controlled in-domain evaluation (i.e., with train and test data from the same distribution). ISSU-Test-Temporal contains temporal test data in the form of short video clips collected using a different sensor setup than ISSU-Train. Semantic annotations with anomaly labels, along with the specific design of the dataset, allow controlled evaluations that isolate the effects of various nuisance factors, such as different anomaly sizes, lighting conditions, and camera sensors. Beyond standard evaluations, our dataset enables cross-domain evaluations, facilitating an analysis of how models trained on datasets from structured environments, such as Cityscapes [6], generalize to unstructured settings, and vice versa.

# The contributions are as follows.

1. We introduce the first real-world segmentation dataset with both closed-set and anomaly labels and with defined static and temporal test splits.
2. We present a comprehensive evaluation of the best-performing state-of-the-art anomaly segmentation models, based on the standard and most commonly used SMIYC benchmark leaderboard $^2$ , covering approaches from pixel-based to mask-based methods.
3. We provide an in-depth analysis of how in-domain and cross-domain settings, cross-sensor shift, lighting variations, and anomaly size affect performance.
Our results indicate that current methods struggle under these challenging conditions, highlighting the need for further research.
4. The proposed ISSU-Test-Temporal, which consists of short video clips, opens up new directions for future research, particularly in test-time adaptation of anomaly segmentation models in the real world.
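The evaluations referenced in these contributions score anomaly segmentation at the pixel level: each pixel receives an anomaly score, and threshold-free metrics such as average precision (AP) and the false positive rate at $95\%$ true positive rate (defined in Sec. 3.5) summarize performance. Below is a minimal NumPy sketch of these two metrics, not the benchmark's official implementation; the toy `scores`/`labels` arrays are invented for illustration, and score ties are ignored for simplicity.

```python
import numpy as np

def average_precision(scores, labels):
    """AP over per-pixel anomaly scores (labels: 1 = anomaly pixel).

    Uses step interpolation of the precision-recall curve; ignores score ties.
    """
    order = np.argsort(-scores)                     # sort by descending anomaly score
    labels = labels[order]
    tp = np.cumsum(labels)                          # true positives at each cutoff
    precision = tp / np.arange(1, labels.size + 1)  # precision at each cutoff
    # mean precision measured at every anomalous pixel
    return precision[labels == 1].mean()

def fpr_at_95_tpr(scores, labels):
    """False positive rate at the first threshold reaching 95% true positive rate."""
    order = np.argsort(-scores)
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()          # recall of anomaly pixels
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return fpr[np.searchsorted(tpr, 0.95)]          # tpr is non-decreasing

# toy example: 5 pixels, 3 of them anomalous
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.1])
labels = np.array([1, 1, 0, 1, 0])
print(average_precision(scores, labels))   # ≈ 0.917
print(fpr_at_95_tpr(scores, labels))       # 0.5
```

In practice, benchmark code typically accumulates these statistics over all test images at once (flattening every pixel into one array), which is what the functions above assume.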
| Dataset-Year | Domain | Size | Anom. Size | Modality | % Anom. Pixels | % Non-Anom. Pixels | Classes | Weather/Env. Cond. | Clutter | oIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Street-hazards'22 [14] | Synthetic | 1500 | Diverse | Static | 1.00 | 98.90 | 13 | Day | Low | |
| Fishyscapes-static'21 [1] | Hybrid | 1000 | Diverse | Static | 2.10 | 85.80 | 2 | Diverse | Low | |
| LostFound'16 [22] | Real | 1000 | Small | Static | 0.12 | 39.10 | 2 | Day | None | |
| RoadAnomaly'19 [17] | Real | 60 | Diverse | Static | 9.85 | 33.16 | 2 | Day | High | |
| Fishyscapes-LaF'21 [1] | Real | 275 | Small | Static | 0.23 | 81.13 | 2 | Day | None | |
| SOS'22 [18] | Real | 1129 | Diverse | Temporal | 0.21 | 23.30 | 2 | Day | None | |
| WOS'22 [18] | Real | 938 | Diverse | Temporal | 0.88 | 41.80 | 2 | Day | None | |
| SMIYC-RoadAnomaly'21 [3] | Real | 100 | Diverse | Static | 13.80 | 82.20 | 2 | Day | Low | |
| SMIYC-RoadObstacle'21 [3] | Real | 327 | Small | Temporal† | 0.12 | 39.10 | 2 | Diverse | None | |
| ISSU-Test-Static'24 | Real | 980 | Diverse | Static | 2.18 | 89.60 | 20 | Diverse | High | |
| ISSU-Test-Temporal'24 | Real | 1140 | Diverse | Temporal | 1.20 | 85.60 | 20 | Diverse | High | |
Table 2. Comparison of existing datasets for anomaly detection in driving scenarios. Datasets are compared in terms of dataset properties (Domain, Size, Modality, number of Classes), anomaly statistics (Anomaly size, % Anomaly and Non-Anomaly pixels), diversity of conditions (Weather/Environment, Clutter), and support for open-set evaluation ($oIoU$). † indicates a low frame rate in the sequences. The void class is not considered in the class count reported in the table.

# 2. Related Work

Semantic segmentation driving datasets aggregate images from the driver's front view and label them into the 19 classes most relevant to driving tasks (such as road, curb, pedestrian, etc.), as originally proposed in [6]. Some datasets include additional class labels tailored to the specific locations where the data was collected, such as a "tricycle" class in China [15]. However, all of them adhere to the basic 19 classes for compatibility reasons. More recent datasets focus on increasing task difficulty by capturing scenes on a larger scale [15, 20], incorporating unstructured traffic environments [26], or including adverse driving conditions [7, 24, 30, 31].

Despite these advancements, all datasets ignore pixels outside of the predefined training classes, and their evaluation protocols assess only closed-set performance, i.e., performance on the classes specified during training. This lack of annotation for unknown objects in test sets and the closed-set evaluation methodology limit their ability to validate models in realistic scenarios involving unknown objects. In contrast, ISSU allows the evaluation of semantic perception models in the presence of unknowns by providing labels that include an unknown class, thus supporting an open-set evaluation. The statistics of the commonly used driving semantic segmentation datasets are shown in Tab. 1.

Anomalies in road-driving scenes.
Limited evaluation of standard semantic segmentation road-driving datasets gave rise to specialized datasets that benchmark the detection of unknowns as a standalone task [1, 3]. The Fishyscapes [1] benchmark evaluates obstacle detection in a subset of the LostAndFound [22] dataset and a subset of Cityscapes val injected with synthetic anomalies. The SMIYC [3] benchmark is fully based on real-world images and validates the detection of anomalies on drivable surfaces as well as in whole images. Several other standalone test datasets were proposed in conjunction with novel methods, such as RoadAnomaly [17], which was later merged into the SMIYC benchmark, or the synthetic Street-hazards [14], which is not widely used due to the large domain shift between its non-photorealistic synthetic images and real-world images. Most recently, the WOS and SOS [18] datasets were introduced. These datasets include video sequences but focus only on evaluation over drivable regions of interest. Unlike all these datasets, ISSU contains labels with 19 known classes and an unknown class, which enables evaluation of anomaly detection performance in various regions of interest (such as the whole image, the drivable surface only, or anything in-between), and joint evaluation of the performance in the open-set setting ($K + 1$ class evaluation).

Acquiring detailed annotations (e.g., 19 known classes and an anomaly class) requires extensive manual effort. Thus, several datasets [2, 14] attempt to simplify the labeling efforts by simulating real-world traffic in synthetic environments. However, the quality of synthetic images diverges from the real-world data, leading to domain shifts that complicate the evaluation. Existing road driving anomaly datasets are summarized in Tab. 2.

# 3. The Proposed ISSU Dataset

Unstructured driving environments, such as those seen on Indian roads, are challenging for the task of semantic segmentation.
The density of on-road and near-road traffic agents, such as cars, pedestrians, and road-side shops, creates a cluttered environment as shown in Fig. 2.

# 3.1. Dataset Composition

We compose our dataset using images collected on Indian roads [21, 25, 26] with new and detailed annotations of known class and anomaly labels (cf. Sec. 3.3). The dataset consists of three parts, a training set (ISSU-Train), and two test sets (ISSU-Test-Static and ISSU-Test-Temporal).

![](images/e66b1fddc0503b3302232e0ddf7afb1aca540fdbb4970159ad500d0266e129ed.jpg)

![](images/bab53a2dd356317a45ad193e3198c41b06aa4ad47271a07e1de54655fb86351f.jpg)

![](images/aabc8e5aa4aec1323df29c2bdbcf302818554e5c79c859f0a4ac607e40dfe16e.jpg)

![](images/8015b2f7226e04f412eb432f7fe4fe9b0f153f184bb74b2d3f335dfd9e680e3e.jpg)
Figure 2. Examples of anomalies (shown in white) in the ISSU dataset. The anomalous examples are ordered from small (left) to very large (right). Top: examples of anomalies of different size and shape at approximately the same distance from the ego-vehicle in ISSU-Test-Static. Bottom: temporal view of an anomaly observed at different time-steps in ISSU-Test-Temporal.

![](images/3e2b7636d793544624dc8dfbaaa2bbcaab08705cb263ebbef543c9ea2bc0339b.jpg)

![](images/2037a82192770f6ee08c3ba750fd587352f73ccef7b871242ef3af36e82ba191.jpg)

Training set (ISSU-Train). It consists of images collected from different parts of Indian cities [25, 26]. The training set images contain only objects from the known classes.

Static test set (ISSU-Test-Static). It consists of images collected in the same way as the training set, but the test set images contain both known and anomalous objects.

Temporal test set (ISSU-Test-Temporal). It consists of short video clips that are also collected on Indian roads [21]. Images in each clip contain both known and anomalous objects.
This set is collected using a consumer-grade dashcam [21]; such dashcams are ubiquitous but may produce lower image quality, e.g., due to firmware issues. On the other hand, the training sets ISSU-Train and Cityscapes consist of images captured using higher-quality cameras.

Challenging examples. We focus on studying the effect of challenging viewing conditions, such as extreme lighting conditions and weather variations. To achieve this, all three aforementioned dataset parts include several images and video clips collected in low-light or rainy conditions. Detailed examples are shown in the supplementary. This subset has many challenges, such as light bursts from oncoming cars and low visibility. The images collected in rainy conditions contain rain droplets and wiper movements, making the segmentation task even more challenging.

# 3.2. Training and Evaluation Setups

In-domain Static Evaluation. Training on ISSU-Train and testing on ISSU-Test-Static constitute the in-domain Static evaluation setup.

Cross-domain Static Evaluation. Training models on the Cityscapes dataset and testing on ISSU-Test-Static forms the cross-domain Static evaluation setup. The comparison with the in-domain setup allows us to evaluate the impact of such a domain shift on anomaly segmentation performance.

Cross-sensor Temporal Evaluation. Testing on ISSU-Test-Temporal allows cross-sensor temporal evaluation of methods trained on ISSU-Train or Cityscapes due to the quality discrepancy between such sensors. Modeling domain shifts from such image corruptions is an active field of research [12]. We argue that it is necessary to benchmark anomaly segmentation methods in such real-world settings. This setup can be evaluated in an in-domain or cross-domain fashion, i.e., training models on the Cityscapes / ISSU-Train datasets and testing on ISSU-Test-Temporal forms the cross / in-domain Temporal evaluation setup.

# 3.3.
Annotation + +The annotation process is performed to assign three types of labels: i) semantic class labels representing the known set of classes in the training set, ii) anomaly label denoting unknowns, and iii) void label for pixels that should not be taken into account during evaluation. + +Labels of known classes. We follow the 19 CityScapes labels to define our known classes, but make adjustments for the context of Indian roads. As Indian roads often have blurry road boundaries, we assign both the road and the nearby drivable region as road class label. Unlike semantic segmentation datasets [26], we exclude traffic cones and short on-road traffic-poles from the known "traffic-sign" class following standard anomaly segmentation datasets [3]. + +Anomaly label. We conducted a rigorous multi-step annotation process (details are provided supplementary Sec. 9), identifying anomalies as objects outside the known 19 + +![](images/6ea02718df287874dadf19a83707b60bf49616eb5c5b15c342cb88ccfcf602e3.jpg) +Figure 3. Distributions of anomalies with respect to their size and spatial location within images. The anomalies are quantized to four different size intervals that are used in the ablation. Anomalies less than $7 \times 7$ (black dashed line) are ignored during all evaluations. The spatial distributions are visualized as a probability heatmap for each image location. Green line outlines road pixels that appeared in more than $50\%$ of dataset images. For temporal dataset the spatial distribution is also visualized for different view-points. + +![](images/e31da96a57d1860d0689b1265c6e90f896b6da29e55e5fb943668921ff897ef6.jpg) + +![](images/522f2adcf47c0e49af1d88b94e672ed3f44a07d9e6b418e73b638ed9957c039c.jpg) + +![](images/db5acf02d2c7d5042bbaf4a30d9163c6cf0516936a2ae0dfdfaf4ff6f0a0f18b.jpg) + +CityScapes classes and within a pre-defined region of interest (ROI) - on or within 2 meters of the road (based on visual inspection). 
Although this process may not cover all unknowns, it was a deliberate design choice to avoid unknowns outside the region of interest, as they are less likely to affect the ego-vehicle. In addition, unknowns within the ROI that are frequently observed (e.g., auto-rickshaws, banners) were also not included as anomalies. All such unknown, but non-anomaly, objects were labeled as void in both train and test sets. Examples of anomaly objects in our datasets include tires, bins, water-tanks, construction material, road barricades, road-maintenance dugouts, animals, road-side vendor items such as fruits, traffic cones and traffic-poles, piles of stones, mud and sand, tubs, rope, and deep potholes. The process involved 7 annotators over a span of 2 months. To ensure that the team of annotators was familiar with the task, they received appropriate training until they achieved $95\%$ accuracy with respect to the CityScapes labels. Examples of anomalies are shown in Fig. 2.

Void label. Pixels that are not assigned to the labels of known classes or the anomaly label, according to the aforementioned guidelines, are assigned to the void label.

# 3.4. Evaluation protocols

We establish four distinct evaluation protocols, each focusing on different aspects of the problem. This is enabled by the proposed ISSU dataset, because it includes images annotated with $K$ known classes along with an anomaly class.

Road obstacles evaluation protocol considers the driving surfaces to be the area of interest for evaluation. Therefore, pixels that are not annotated as road or as anomalies are assigned to the void label during this evaluation setup.

Road anomaly evaluation protocol benchmarks performance across all non-void pixels, i.e., pixels labeled as any of the $K + 1$ classes. In contrast to the previous protocol, this one also accounts for errors occurring outside the driving regions, which increases the task difficulty.
Open-set evaluation protocol validates the recognition of known classes in the presence of anomalies. This protocol penalizes both misclassifications among known classes and incorrect detection of anomalies.

Closed-set evaluation protocol assesses classification performance solely on the $K$ known classes and maps the anomaly label to the void label during this evaluation protocol. This standard evaluation protocol estimates the capabilities of trained models in an ideal setting and can serve as an upper bound for open-set evaluation performance.

# 3.5. Metrics

Average precision (AP) quantifies anomaly detection performance by measuring the area under the precision-recall curve. This threshold-free metric is used to evaluate performance in the road obstacle and road anomaly protocols.

$\mathbf{FPR}_T$ measures the false positive rate at the threshold that yields a true positive rate of $95\%$. This is particularly important in safety-critical applications that demand high sensitivity and recall of all anomalous objects.

$\mathbf{TPR}_F$ measures the true positive rate at the threshold that yields a false positive rate of $5\%$. This metric forms a complementary operating point to the previous one and targets applications that require high precision, i.e., low false-positive detections.

F1 score [3] combines the anomaly detection metrics (recall and precision) and extends them from the pixel level to the component level. By grouping connected neighboring anomalous pixels into cohesive components, this approach provides instance-level performance estimates.

Intersection-over-Union (IoU) quantifies recognition performance by measuring the overlap between predicted and ground-truth segments. Since traffic scene segmentation is a multiclass classification task, we report the macro-averaged IoU over the classes of interest. We use IoU to assess the
| Type | Method | OOD Data | Static AP↑ | Static FPR_T↓ | Static TPR_F↑ | Static IoU↑ | Static oIoU_T↑ | Static oIoU_F↑ | Temporal AP↑ | Temporal FPR_T↓ | Temporal TPR_F↑ | Temporal IoU↑ | Temporal oIoU_T↑ | Temporal oIoU_F↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **In-domain** | | | | | | | | | | | | | | |
| pixel-level | JSR-Net† | X | 4.2 | 56.1 | 3.8 | 56.8 | 9.8 | 44.1 | 2.3 | 58.7 | 2.5 | 37.3 | 6.3 | 29.2 |
| pixel-level | DaCUP† | X | 5.4 | 100 | 20.4 | 57.0 | 9.0 | 47.0 | 2.9 | 100 | 11.1 | 37.3 | 6.9 | 30.4 |
| pixel-level | PixOOD | X | 20.3 | 39.4 | 50.9 | 65.8 | 47.8 | 60.9 | 6.2 | 56.5 | 26.2 | 55.1 | 32.9 | 52.2 |
| mask-level | RbA | X | 75.7 | 73.4 | 93.6 | 73.1 | 36.4 | 66.4 | 36.5 | 94.9 | 75.2 | 57.8 | 5.5 | 54.3 |
| mask-level | EAM | X | 77.1 | 5.9 | 94.4 | 73.4 | 66.5 | 67.4 | 45.2 | 92.9 | 81.8 | 59.1 | 6.1 | 55.5 |
| mask-level | Pebal | X | 69.9 | 9.2 | 93.3 | 73.1 | 64.2 | 67.2 | 32.4 | 92.6 | 74.9 | 57.8 | 7.8 | 55.4 |
| mask-level | RbA | | 79.1 | 3.9 | 95.9 | 72.9 | 67.8 | 66.5 | 37.7 | 29.4 | 76.6 | 57.8 | 41.7 | 55.6 |
| mask-level | EAM | | 76.8 | 4.2 | 96.1 | 73.8 | 68.4 | 67.6 | 38.7 | 91.4 | 84.4 | 59.5 | 6.7 | 55.7 |
| mask-level | Pebal | | 64.5 | 4.4 | 95.7 | 72.9 | 67.8 | 67.1 | 23.6 | 24.7 | 77.1 | 57.8 | 46.2 | 55.7 |
| mask-level | UNO | | 71.4 | 3.0 | 96.9 | 73.7 | 68.4 | 65.9 | 30.4 | 89.7 | 84.8 | 59.0 | 9.8 | 55.2 |
| mask-level | M2A | | 32.0 | 66.9 | 71.1 | 53.9 | 31.5 | 49.5 | 10.7 | 78.6 | 47.5 | 40.3 | 16.9 | 34.1 |
| **Cross-domain** | | | | | | | | | | | | | | |
| | PixOOD | X | 11.4 | 73.7 | 33.2 | 56.3 | 20.4 | 52.8 | 4.8 | 80.7 | 25.5 | 48.7 | 14.7 | 46.9 |
| | RbA | X | 43.3 | 97.3 | 70.5 | 57.2 | 4.1 | 55.2 | 15.7 | 98.5 | 46.2 | 41.3 | 1.1 | 40.6 |
| | RbA | | 56.4 | 80.7 | 78.9 | 57.5 | 11.9 | 55.1 | 24.6 | 91.6 | 54.4 | 43.7 | 3.2 | 41.9 |
| | UNO | | 55.5 | 92.9 | 79.1 | 68.1 | 12.0 | 65.6 | 37.2 | 92.4 | 70.3 | 57.4 | 6.6 | 54.6 |
performance in closed-set evaluation.

Open-Intersection-over-Union (oIoU) [10] evaluates the recognition performance of known classes in the presence of anomalous instances. Unlike the standard IoU metric, oIoU incorporates false positives and false negatives committed by the anomaly detector. The difference between IoU and oIoU highlights the performance gap between closed-set and open-set deployments.

# 3.6. Statistics

The train set of ISSU-Train comprises 3436 images, while the validation set comprises 762 images. The test set ISSU-Test-Static contains 980 annotated images. ISSU-Test-Temporal, which consists of video clips, includes a total of 21118 images, of which 1140 are annotated. The unannotated images are released to facilitate future research on using temporal images for online test-time adaptation of anomaly segmentation models to mitigate the challenges of domain shifts. The number of pixels (log-scale) per class in ISSU-Train, ISSU-Test-Static, and ISSU-Test-Temporal is shown in Fig. 6 of the supplementary. As can be seen, the distribution of pixel counts per class is similar between the train and test splits. Additionally, Tab. 6 in the supplementary provides statistics on the number of images captured under normal daylight and adverse lowlight conditions across the different ISSU splits. The frequency histogram of anomaly sizes is shown in Fig. 3, illustrating significant variations in anomaly sizes. Examples of images showing these variations are presented in Fig. 2. The spatial distribution of the anomalies, along with the approximate road regions, is shown in the bottom left of Fig. 3 for both ISSU-Test-Static

Table 3. Results for road anomaly, closed-set and open-set evaluation protocols under in-domain and cross-domain evaluation setups. The $T(F)$ subscript for the oIoU metric refers to the operating point (anomaly score threshold) for which the methods achieve 95% TPR (5% FPR).
| Type | Method | OOD Data | Static AP↑ | Static FPR_T↓ | Temporal AP↑ | Temporal FPR_T↓ |
|---|---|---|---|---|---|---|
| **In-domain** | | | | | | |
| pixel-level | JSR-Net† | X | 85.7 | 8.4 | 52.1 | 26.5 |
| pixel-level | DaCUP† | X | 85.5 | 100 | 56.8 | 100 |
| pixel-level | PixOOD | X | 93.1 | 4.3 | 83.1 | 10.1 |
| mask-level | RbA | X | 92.7 | 77.5 | 53.4 | 98.9 |
| mask-level | EAM | X | 94.5 | 2.2 | 70.0 | 98.1 |
| mask-level | Pebal | X | 92.3 | 3.4 | 54.2 | 95.6 |
| mask-level | RbA | | 95.8 | 1.7 | 57.2 | 33.7 |
| mask-level | EAM | | 95.6 | 1.6 | 62.1 | 96.2 |
| mask-level | Pebal | | 92.5 | 1.9 | 48.9 | 23.8 |
| mask-level | UNO | | 94.0 | 1.2 | 56.1 | 92.3 |
| mask-level | M2A | | 48.9 | 78.5 | 30.0 | 79.5 |
| **Cross-domain** | | | | | | |
| | PixOOD | X | 92.3 | 5.1 | 84.3 | 10.8 |
| | RbA | X | 62.4 | 99.1 | 32.5 | 99.3 |
| | RbA | | 76.1 | 68.9 | 37.9 | 87.9 |
| | UNO | | 66.3 | 90.8 | 49.1 | 90.5 |
+ +Table 4. Results for road obstacle evaluation protocols under indomain and cross-domain setups. + +and ISSU-Test-Temporal. The plot shows that the anomalies are distributed across various regions of the road. + +# 4. Baselines + +The baselines were selected as the top performing methods on the standard and the most frequently used SMIYC [3] benchmark. We broadly categorized them into two groups + +![](images/a052afc5fea72ea6b0c9637ee890ec8b4ccee2e0bee214bacfdef38b9d4a3061.jpg) + +![](images/314a6a2c8ef8fa140909789eae09423729959f872326ba828c7da4cb29ef2a50.jpg) + +![](images/bf96d54ce0b2ea2c237cd100ea2ab2b513aae25b92e97c13e2ddac1e49af085d.jpg) + +![](images/d0e2ba629f9101f702cb475962df14088a1c25219a18bb442eb62cc153313ad9.jpg) + +![](images/67f92cf4878d58c0d4b4597c482076e53a42585ae9a707b5f74ee853e9fe8dd7.jpg) + +![](images/fd7fac3466951b0759eab28f91e235c08c027a4a189c4789f93ccd7a114c0c6b.jpg) + +![](images/85d464e4f3e112468e90cdc2369cf39008d99635abdf76787c85fd607acf4386.jpg) +- Pixel-level - Mask-level - Mask* -level + +![](images/0e971dcbe60d92d83a0fb3c8d2f31b47f7e15f78911c53f2ca794ce4161836df.jpg) + +![](images/6022d8739eb12c1a6c479b4ff536c7a00f13cf9d45da7bfee1741c3eeb46fe4d.jpg) + +![](images/8ca9f2c5b7a77ce128f7fe753d18f67a7324f4f775a4be6a6546571d86eae146.jpg) + +based on the granularity of regions for which an anomaly score is predicted, i.e., pixel-level and mask-level. The methods are briefly described in the following paragraphs. + +Pixel-level baselines. We consider two reconstruction-based methods, JSR-Net [27] and DaCUP [28] that localize anomalies as poorly reconstructed pixels. We also include recent PixOOD [29] that uses a statistical decision strategy in pre-trained representation to detect anomalies. + +Mask-level baselines. We consider baselines that extend the mask-level classifier [5]. A seminal mask-level approach EAM [11] assigns anomaly scores to masks instead of pixels and aggregates decisions to recover dense predictions. 
RbA [19] considers regions rejected by all masks as anomalous, while Mask2Anomaly (M2A) [23] adapts the model architecture to enhance anomaly detection. Finally, UNO [8] revisits the $K + 1$ classifier built on top of the mask classifier and combines negative class recognition with prediction uncertainty to improve anomaly detection.

# 5. Experimental results

Road Anomaly Evaluation results are presented in Tab. 3 and Tab. 7 in the supplementary. For the in-domain Static evaluation setup, most Mask2Former (mask-level) methods trained with auxiliary out-of-domain (OOD) data achieve good performance across the three anomaly detection metrics: $\mathrm{FPR}_T (< 5\%)$ , $\mathrm{TPR}_F (>90\%)$ and AP ( $>70\%$ ). Due to the lack of any "objectness" priors, the pixel-level methods classify many random pixels as anomalous with high confidence, resulting in poor anomaly detection metrics.

![](images/f344a36d065742a9a39c7ab1dc9502667163b4d6fc1414952570a669b51f715b.jpg)
Figure 4. Cross-domain vs. in-domain performance in the road anomaly evaluation protocol. Top row - Static, bottom row - Temporal. $\mathsf{Mask}^*$ are mask-based methods trained with OOD data. The $y = x$ reference line shows the relative gain or drop. The $T(F)$ subscript for the oIoU metric refers to the operating point (anomaly score threshold) for which the methods achieve $95\%$ TPR ( $5\%$ FPR).
Figure 5. Ablation of different anomaly sizes. The plot shows results on ISSU-Test-Static (left) and ISSU-Test-Temporal (right) for the road anomaly evaluation protocol under the in-domain setup.

In contrast, results on the challenging in-domain Temporal setup show that both pixel-level and mask-level methods have high $\mathrm{FPR}_T$ and low AP. This indicates that domain shifts due to differences in sensor quality adversely affect anomaly detection performance.

Results in the cross-domain Static and cross-domain Temporal setups show a significant performance drop for all methods compared to the in-domain setup (cf. Fig.
4), resulting in low AP and high $\mathrm{FPR}_T$ . The complementary $\mathrm{TPR}_F$ metric shows that some mask-based methods, such as UNO, could detect $79\%$ and $70\%$ of anomalies in the cross-domain Static and Temporal setups, respectively. Successfully detecting the remaining anomalies, to achieve $95\%$ TPR, results in high $\mathrm{FPR}_T$ ( $>90\%$ ). This is because, to correctly classify the hard anomalous cases, the methods must also flag many known-class pixels as anomalous (false positives). Qualitative examples of hard anomalies are presented in Sec. 7.3 (Figs. 7 and 8) and Sec. 11 (Figs. 13 to 15) in the supplementary.

Closed and Open-set Evaluation (Tab. 3). The IoU metric
| Type | Method | OOD Data | Day Static AP↑ | Day Static FPR_T↓ | Day Static F1↑ | Day Temporal AP↑ | Day Temporal FPR_T↓ | Day Temporal F1↑ | Lowlight Static AP↑ | Lowlight Static FPR_T↓ | Lowlight Static F1↑ | Lowlight Temporal AP↑ | Lowlight Temporal FPR_T↓ | Lowlight Temporal F1↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pixel-level | JSR-Net | X | 4.26 | 57.45 | 1.84 | 2.28 | 75.69 | 0.83 | 4.20 | 39.94 | 0.61 | 2.28 | 75.69 | 0.83 |
| pixel-level | DaCUP | X | 5.09 | 100.00 | 1.67 | 2.90 | 100.00 | 2.25 | 7.80 | 38.80 | 3.13 | 2.80 | 100.00 | 2.78 |
| pixel-level | PixOOD | X | 34.24 | 32.46 | 2.28 | 16.07 | 53.57 | 1.50 | 5.63 | 60.59 | 0.65 | 2.16 | 65.76 | 0.56 |
| mask-level | RbA | X | 77.06 | 5.59 | 16.34 | 39.82 | 95.77 | 11.38 | 67.99 | 97.44 | 9.28 | 24.16 | 96.20 | 6.81 |
| mask-level | EAM | X | 77.72 | 5.09 | 21.34 | 47.48 | 95.57 | 15.06 | 73.92 | 95.58 | 14.61 | 37.68 | 89.63 | 11.50 |
| mask-level | Pebal | X | 71.27 | 6.37 | 21.60 | 34.95 | 97.45 | 11.84 | 61.61 | 89.31 | 9.78 | 21.70 | 87.91 | 7.18 |
| mask-level | RbA | | 79.64 | 3.44 | 22.14 | 40.75 | 23.03 | 12.00 | 75.40 | 74.38 | 12.30 | 28.56 | 74.54 | 8.64 |
| mask-level | EAM | | 77.26 | 3.63 | 22.21 | 39.62 | 94.27 | 15.28 | 74.47 | 14.65 | 14.26 | 37.81 | 84.35 | 12.06 |
| mask-level | Pebal | | 65.39 | 3.71 | 0.00 | 28.55 | 22.19 | 0.00 | 58.92 | 16.92 | 0.00 | 12.20 | 28.80 | 0.00 |
| mask-level | UNO | | 71.71 | 2.82 | 29.19 | 31.78 | 41.77 | 18.60 | 74.47 | 14.65 | 14.26 | 30.10 | 88.17 | 14.58 |
| mask-level | Mask2Anomaly | | 35.13 | 61.02 | 9.45 | 11.71 | 77.05 | 5.61 | 19.97 | 91.35 | 6.17 | 7.78 | 86.29 | 4.32 |
Table 5. Ablation of different lighting conditions. Results for the in-domain road anomaly evaluation protocol under day and light-adverse conditions (night, rain, fog, dawn).

for ISSU is, for all methods, about $10\%$ lower than the respective IoU achieved on CityScapes in the in-domain Static setup (75.88% vs. 65.83% for PixOOD and 83.5 - 83.7% vs. $\sim 73\%$ for most Mask2Former methods). This difference is significantly higher (20-30%) for the in-domain Temporal and cross-domain setups. This highlights the difficulty of the cluttered traffic environment in India and the challenging domain shifts. Open-set IoU (oIoU) follows similar conclusions as the road anomaly evaluation protocols. Results on the in-domain Temporal and cross-domain setups show a significant difference between closed-set IoU (IoU) and open-set IoU at $95\%$ TPR ( $oIoU_T$ ). This is attributed to the misdetection of anomalies as known classes and of known classes as anomalies. The drop is less significant between IoU and $oIoU_F$ but comes with a significantly lower TPR $_F$ . Note that zero drop ( $IoU = oIoU$ ) can be achieved by not detecting any anomalies ( $TPR/FPR = 0\%$ ); it is therefore important to analyze both $TPR_F$ ( $FPR_T$ ) and $oIoU_F$ ( $oIoU_T$ ). The open-set results signify the importance of evaluating semantic segmentation in a real-world setting by jointly evaluating closed-set segmentation in the presence of anomalous objects.

Road Obstacle Evaluation (Tab. 4). When the evaluation is limited to road regions, the pixel-level methods generally are much better at generalizing from CityScapes to ISSU, resulting in a much lower FPR metric for both the static and temporal datasets. However, in the in-domain Static evaluation setup the mask-level methods are able to outperform the other methods, mainly due to strong priors baked into object-wise mask predictions, which seem to be more robust in detecting entire anomaly instances.
In the temporal part, where the different sensors act as a form of domain shift, the Mask2Former-based methods struggle to localize all anomalies, resulting in high FPR. The effects of domain shift are less pronounced in this setup due to the uniformity of roads, as shown in Tab. 8.

Ablations: anomaly sizes. The effect of anomaly sizes is shown in Fig. 5, where specific anomaly size ranges (defined in Fig. 3) are considered. The size intervals were motivated by the dataset statistics and the spatial resolution of the most commonly used backbone architectures. We use the F1 metric, which is designed to measure instance-level performance. The metric generally improves with larger anomaly sizes, except for the largest anomalies, where the methods struggle to accurately and fully segment the very large instances. This is again more apparent for pixel-level methods. Consistent with the evaluation limited to the road region, the temporal dataset, with the additional challenge of a different sensor, negatively affects Mask2Former-based methods significantly more than the pixel-level methods. The results for all methods are in the supplementary, Fig. 9.

Ablations: lighting variations. This ablation compares the performance of the methods under lighting variations - day (clear weather and good lighting conditions) and lowlight (e.g., fog, rain, dawn). The results are presented in Tab. 5, which shows that both the pixel-based and mask-based methods struggle under lowlight conditions.

# 6. Conclusions

We presented a new dataset and a benchmark for anomaly segmentation in a real-world setting. The results show that cross-domain generalization remains a challenge for current state-of-the-art anomaly segmentation methods. When trained on in-domain data, the performance of these models improves by a significant margin. This forms a strong baseline for future work in cross-domain generalization and adaptation of anomaly segmentation models.
The results also showed that existing methods struggle in the presence of lower sensor quality, lower visibility, and small anomaly sizes. The diverse conditions provided by our benchmark offer a timely test-bed for anomaly segmentation research.

# Acknowledgments

The authors acknowledge support from the respective sources as follows. Zakaria Laskar: Programme Johannes Amos Comenius (no. CZ.02.01.01/00/22 010/0003405). Giorgos Tolias: Junior Star GACR (no. GM 21-28830M). Tomás Vojíř and Jiri Matas: Toyota Motor Europe and the Czech Science Foundation grant 25-15993S. Shankar Gangisetty and C.V. Jawahar: iHub-Data and Mobility at IIIT Hyderabad. Matej Grcic: Croatian Recovery and Resilience Fund - NextGenerationEU (grant C1.4 R5-I2.01.0001). Iaroslav Melekhov and Juho Kannala: Research Council of Finland (grants 352788, 353138, 362407), Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

We thank Ram Sharma from CVIT, IIIT Hyderabad, Mahender Reddy and the annotation team in Annotations and Data Capture-Mobility, IIIT Hyderabad for their work in the annotation process.

# References

[1] Hermann Blum, Paul-Edouard Sarlin, Juan I. Nieto, Roland Siegwart, and Cesar Cadena. The fishyscapes benchmark: Measuring blind spots in semantic segmentation. *IJCV*, 2021. 2, 3, 6, 10
[2] Daniel Bogdoll, Iramm Hamdard, Lukas Namgyu Rößler, Felix Geisler, Muhammed Bayram, Felix Wang, Jan Imhof, Miguel de Campos, Anushervon Tabarov, Yitian Yang, Hanno Gottschalk, and J. Marius Zöllner. Anovox: A benchmark for multimodal anomaly detection in autonomous driving. CoRR, abs/2405.07865, 2024. 3
[3] Robin Chan, Krzysztof Lis, Svenja Uhlemeyer, Hermann Blum, Sina Honari, Roland Siegwart, Pascal Fua, Mathieu Salzmann, and Matthias Rottmann. SegmentMeIfYouCan: A benchmark for anomaly segmentation. In NeurIPS Datasets and Benchmarks, 2021.
2, 3, 4, 5, 6, 10 +[4] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018. 2 +[5] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022. 2, 7 +[6] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. 1, 2, 3 +[7] Dengxin Dai, Christos Sakaridis, Simon Hecker, and Luc Van Gool. Curriculum model adaptation with synthetic and real data for semantic foggy scene understanding. IJCV, 2020. 3 +[8] Anja Delić, Matej Grcic, and Siniša Šegvic. Outlier detection by ensembling uncertainty with negative objectness. In BMVC, 2024. 7, 3, 6 + +[9] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman. The pascal visual object classes (VOC) challenge. *IJCV*, 2010. 1 +[10] Matej Grcic and Sinisa Segvic. Hybrid open-set segmentation with synthetic negative data. IEEE TPAMI, 2024. 6 +[11] Matej Grcic, Josip Saric, and Sinisa Segvic. On advantages of mask-level recognition for outlier-aware segmentation. In CVPR, 2023. 7, 3 +[12] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations, 2019. 4 +[13] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. 2 +[14] Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joseph Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling out-of-distribution detection for real-world settings. In Int. Conf. on Mach. Learn. PMLR, 2022. 
3, 10
[15] Xinyu Huang, Peng Wang, Xinjing Cheng, Dingfu Zhou, Qichuan Geng, and Ruigang Yang. The ApolloScape Open Dataset for Autonomous Driving and Its Application. IEEE TPAMI, pages 2702-2719, 2020. 2, 3
[16] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper R. R. Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The open images dataset V4. IJCV, 2020. 1
[17] Krzysztof Lis, Krishna Kanth Nakka, Pascal Fua, and Mathieu Salzmann. Detecting the unexpected via image synthesis. In ICCV, 2019. 2, 3
[18] Kira Maag, Robin Chan, Svenja Uhlemeyer, Kamil Kowol, and Hanno Gottschalk. Two video data sets for tracking and retrieval of out of distribution objects. In ACCV, 2022. 3, 10
[19] Nazir Nayal, Misra Yavuz, João F. Henriques, and Fatma Güney. RbA: Segmenting unknown regions rejected by all. In ICCV, 2023. 7, 3
[20] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In ICCV, 2017. 2, 3
[21] Chirag Parikh, Rohit Saluja, C. V. Jawahar, and Ravi Kiran Sarvadevabhatla. IDD-X: A multi-view dataset for ego-relative important object localization and explanation in dense and unstructured traffic. In ICRA, pages 14815-14821. IEEE, 2024. 3, 4, 5
[22] Peter Pinggera, Sebastian Ramos, Stefan Gehrig, Uwe Franke, Carsten Rother, and Rudolf Mester. Lost and found: detecting small road hazards for self-driving vehicles. In Int. Conf. on Intelligent Robots and Systems, 2016. 2, 3, 10
[23] Shyam Nandan Rai, Fabio Cermelli, Dario Fontanel, Carlo Masone, and Barbara Caputo. Unmasking anomalies in road-scene segmentation. In ICCV, 2023. 7, 4
[24] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. ACDC: the adverse conditions dataset with correspondences for semantic driving scene understanding. In ICCV, 2021.
2, 3
[25] Furqan Ahmed Shaik, Abhishek Reddy Malreddy, Nikhil Reddy Billa, Kunal Chaudhary, Sunny Manchanda, and Girish Varma. IDD-AW: A benchmark for safe and robust segmentation of drive scenes in unstructured traffic and adverse weather. In WACV, 2024. 3, 4, 5
[26] Girish Varma, Anbumani Subramanian, Anoop M. Namboodiri, Manmohan Chandraker, and C. V. Jawahar. IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments. In WACV, 2019. 2, 3, 4
[27] Tomas Vojir, Tomás Šipka, Rahaf Aljundi, Nikolay Chumerin, Daniel Olmeda Reino, and Jiri Matas. Road Anomaly Detection by Partial Image Reconstruction With Segmentation Coupling. In ICCV, pages 15651-15660, 2021. 7, 2
[28] Tomás Vojíř and Jiri Matas. Image-Consistent Detection of Road Anomalies As Unpredictable Patches. In WACV, pages 5491-5500, 2023. 7, 2
[29] Tomás Vojíř, Jan Šochman, and Jiří Matas. PixOOD: Pixel-Level Out-of-Distribution Detection. In ECCV, 2024. 7, 2, 6
[30] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In CVPR, 2020. 2, 3
[31] Oliver Zendel, Matthias Schörghuber, Bernhard Rainer, Markus Murschitz, and Csaba Beleznai. Unifying panoptic segmentation for autonomous driving. In CVPR, pages 21351-21360, 2022. 2, 3
[32] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ADE20K dataset. IJCV, 2019.
1 \ No newline at end of file diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/images.zip b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5a0d8f9fec7b7d6d1cc65187f74a921ff6876321 --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:665795d2c42c6810a3c67ef40f63915a6277793f44c6af425be94728bdcb4933 +size 741094 diff --git a/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/layout.json b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2fb50f458aeb1d3ba091a8bf7088074cb988f5a1 --- /dev/null +++ b/CVPR/2025/A Dataset for Semantic Segmentation in the Presence of Unknowns/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c68620ce7306d521f41166dce8024d12f11e0ac6b2d4e85ca6391594f1b43858 +size 349281 diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_content_list.json b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3d97d6f00104064f0975eaeaaa1104ed4d209bfe --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bca0ef26e405cf6d74db817fa838e78ae4222e96f88ebf9413579506ca8a23b +size 75770 diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_model.json b/CVPR/2025/A Distractor-Aware Memory for Visual 
Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c9c81fa92e63115924ccee3437b742c7963e5ca7 --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:055c52582dd60d30e9c0e188eb5694eae2a7e4f3cb3929f4c99b6633c8e7d2b1 +size 92874 diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_origin.pdf b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a4fa82e993f105a19f55629e336ed314aa7ed526 --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/4e976258-ab89-4b67-a5da-a2728d6871ce_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8790c7a4d0f5ecef34b3ca6b9a068d16544fbc20f824320972521b637d6525a3 +size 3215307 diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/full.md b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0046d768dd2989871497e3ed70203b4c26d20028 --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/full.md @@ -0,0 +1,266 @@ +# A Distractor-Aware Memory for Visual Object Tracking with SAM2 + +Jovana Videnovic*, Alan Lukezic*, Matej Kristan + +Faculty of Computer and Information Science, University of Ljubljana, Slovenia + +jovanavidenovic10@gmail.com, {alan.lukezic, matej.kristan}@fri.uni-lj.si + +# Abstract + +Memory-based trackers are video object segmentation methods that form the target model by concatenating recently tracked frames into a memory buffer and localize the 
target by attending the current image to the buffered frames. While already achieving top performance on many benchmarks, it was the recent release of SAM2 that placed memory-based trackers into the focus of the visual object tracking community. Nevertheless, modern trackers still struggle in the presence of distractors. We argue that a more sophisticated memory model is required, and propose a new distractor-aware memory model for SAM2 and an introspection-based update strategy that jointly address segmentation accuracy as well as tracking robustness. The resulting tracker is denoted as DAM4SAM. We also propose a new distractor-distilled DiDi dataset to study the distractor problem better. DAM4SAM outperforms SAM2.1 and related SAM memory extensions on seven benchmarks and sets a solid new state-of-the-art on six of them. The code and the new dataset are available on https://github.com/jovanavidenovic/DAM4SAM.

# 1. Introduction

General visual object tracking is a classical computer vision problem that considers the localization of an arbitrary target in a video, given a single supervised training example in the first frame. The major source of tracking failures are the so-called distractors, i.e., image regions that are difficult to distinguish from the tracked object, given the available target model (see Figure 1). These can be nearby objects similar to the tracked target (external distractors) or similar regions on the object when tracking only a part of the object (internal distractors). When the target leaves and re-enters the field of view, the external distractors become particularly challenging.

![](images/b281a6d259e8544cb7f9d5c0a927b2d982bcb33c287a7351c8d6c25aa5e65872.jpg)

![](images/3441fb99a8e447a28b7c6e2f0e86a0f1a42042775440a30d0b00faba5b1ac446.jpg)

![](images/59b9552e10f5c3eeaa80bda02911b8690251148fe6473330c45f97cba6386219.jpg)
Figure 1. DAM4SAM distractor-aware memory (DAM) update is triggered by the divergence between the predicted and the alternative masks (top left). This resolves the visual ambiguity and increases tracking robustness (bottom). DAM leads to a significant performance boost, setting a new state-of-the-art result on VOT2022 (top-right).

![](images/9a2ea02a439ef20f3eef1fcff10b5b1b4a534479e1195cb58b008d8d2f0a9566.jpg)

Various approaches have been proposed to reduce the visual ambiguity caused by distractors. These include learning discriminative features [2, 5, 6, 10, 43] or explicitly modeling the foreground-background by dedicated modules [9, 28, 29, 50]. An emerging paradigm, already positioned at the top of the major benchmarks [23-25], are memory-based frameworks, which localize the target by pixel association with the past tracked frames [8, 47, 51].

The memory-based methods construct the target model by concatenating sequences of entire images with the segmented target, thus implicitly encoding the present distractors. Zhou et al. [51] argue that the visual redundancy in the large memory leads to reduced localization capability due to the nature of cross-attention. They show that limiting the memory to the most recent frames and temporally time-stamping them in fact improves tracking. This paradigm was further verified by the recent tracking foundation model SAM2 [36], which sets a solid state-of-the-art across several video segmentation and tracking benchmarks [13, 18, 25, 35, 41].

We argue that while the recent target appearances in the memory are required for accurate segmentation, another type of memory is required to distinguish the target from challenging distractors. To support this claim, we propose a new distractor-aware memory (DAM) and update mechanism for SAM2. The new memory is divided by its tracking functionality into two parts: the recent appearances memory (RAM) and the distractor-resolving memory (DRM).
While RAM contains the recent target appearances sampled at regular intervals, DRM contains anchor frames that help discriminate the target from critical distractors. A novel DRM updating mechanism is proposed that exploits the information in the SAM2 output (see Figure 1), which has so far been ignored by tracking research.

In addition, we observe that standard benchmarks contain many sequences which are no longer considered challenging by modern standards. The high performance on these sequences overwhelms the total score, saturates the benchmarks, and does not properly expose the tracking advances. To address this, we semi-automatically distill several benchmarks into a distractor-distilled tracking dataset (DiDi).

In summary, our main contribution is the distractor-aware memory (DAM) framework for SAM2.1, a refined version of the SAM2 model [36], and the memory updating strategy, resulting in DAM4SAM. To the best of our knowledge, this is the first memory formulation that divides and updates the memory with respect to its function in tracking. Our secondary contribution is the new DiDi dataset that more clearly exposes tracking advances in the presence of distractors. With no additional training, DAM4SAM substantially outperforms SAM2.1 in robustness on several standard bounding box and segmentation tracking benchmarks, including the new DiDi dataset, and sets a new state-of-the-art in visual object tracking.

# 2. Related Work

The recent top-performing tracking frameworks are inspired by video object segmentation methods based on transformers and memory networks [8, 9, 36, 47, 51]. These methods embed predictions from past frames into memory, thereby extending contextual information beyond just the initial or the previous frame. The attention mechanism is typically used to link frame representations stored in the memory with features extracted from the current frame.
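The memory read-out can be sketched as scaled dot-product cross-attention between current-frame features (queries) and buffered memory features (keys and values). The names and shapes below are illustrative only and do not reflect the actual implementation of any of the cited trackers:

```python
import numpy as np

def memory_readout(curr_feats, mem_feats, mem_vals):
    """Cross-attend current-frame features (queries) to buffered memory
    features (keys) and aggregate the memory values. An illustrative
    sketch of memory-based target association, not an actual tracker API.

    curr_feats: (N, d) flattened pixel features of the current frame
    mem_feats:  (M, d) flattened features of the buffered memory frames
    mem_vals:   (M, d) values carrying mask/appearance information
    """
    scores = curr_feats @ mem_feats.T / np.sqrt(curr_feats.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over memory tokens
    return attn @ mem_vals                        # memory-conditioned features

rng = np.random.default_rng(0)
out = memory_readout(rng.normal(size=(16, 8)),   # 16 query pixels, dim 8
                     rng.normal(size=(48, 8)),   # 48 memory tokens
                     rng.normal(size=(48, 8)))
print(out.shape)  # (16, 8)
```

The key property exploited by memory-based trackers is visible here: each current-frame feature is softly associated with all stored frames, so the buffered frames implicitly carry both target and distractor evidence.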
In initial methods like [47], arriving frames were continually added to the memory. This led to a theoretically unbounded increase in computational complexity and GPU memory consumption.

This issue has been addressed in [8, 9] by using multiple memory storages and efficient compression schemes to capture different temporal contexts, enhancing performance on long-term videos. Alternatively, [51] proposed to restrict the memory to the most recent frames with temporal stamping, which led to improved localization. The principle of the restricted memory is followed by the SAM2 [36] foundation model, which stores the last six frames and the initial frame in the memory. Recently, SAM2Long [14] proposed a training-free method to enhance the performance of SAM2 [36] on long-term sequences by determining the optimal trajectory from multiple segmentation pathways using a constrained tree search.

Most of the existing tracking methods do not explicitly address tracking in the presence of distractors, even though distractors are a major source of tracking failures. Discriminative (deep) correlation filters [12, 33] are theoretically suitable for handling distractors but are in practice outperformed by the modern transformer-based trackers. However, there have been some recent attempts to address distractors. KeepTrack [32] casts the problem as a multi-target tracking setup, where it identifies target candidates and potential distractors, which are then associated with previously propagated identities using a learned association network. However, the method relies on accurate detection and cannot address internal distractors in practice. In [29], target localization accuracy and robustness are treated as two distinct tasks, which is demonstrated to be beneficial in situations with distractors. Despite their explicit distractor-handling mechanisms, these methods lead to complicated architectures and cannot fully exploit the learning potential of modern frameworks.
In contrast, memory-based methods [8, 9, 36, 51] have the capacity to implicitly handle distractors in an elegant way, since they store entire images and apply a learnable localization by segmentation. Unlike our work, however, the existing memory management methods are not designed to handle distractors effectively.

# 3. Distractor-aware memory for SAM2

This section describes the new DAM model for SAM2. Section 3.1 briefly outlines the SAM2 architecture, while the new model is described in Section 3.2.

# 3.1. SAM2 preliminaries

SAM2 extends the Segment Anything Model (SAM) [20], originally developed for interactive class-agnostic image segmentation, to video segmentation. It consists of four main components: (i) image encoder, (ii) prompt encoder, (iii) memory bank, and (iv) mask decoder.

The image encoder applies a ViT Hiera backbone [37] to embed the input image. Interactive inputs (e.g., positive/negative clicks) are absorbed by the prompt encoder and used for output mask refinement; note, however, that these are not applicable in the general object tracking setup. The memory bank consists of the encoded initialization frame with a user-provided segmentation mask and six recent frames with segmentation masks generated by the tracker. Temporal encodings are applied to the six recent frames to encode the frame order, while such encoding is not applied to the initialization frame to indicate its unique property of being the single supervised training example; it thus serves as a sort of target prior model.

The memory bank transfers pixel-wise labels onto the current image by cross-attending the features in the current frame to all memory frames, producing memory-conditioned features. The features are then decoded by the mask decoder, which predicts three output masks along with an IoU prediction for each. The mask with the highest predicted IoU is chosen as the tracking output.
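At its core, this decoding step reduces to an argmax over the three predicted IoU scores. A minimal sketch with hypothetical names and toy shapes (the real decoder operates on batched mask logits):

```python
import numpy as np

def select_output_mask(masks, pred_ious):
    """Pick the output mask as the candidate with the highest predicted IoU.

    masks     : (3, H, W) boolean candidate masks
    pred_ious : (3,) predicted IoU score for each candidate
    """
    best = int(np.argmax(pred_ious))
    return masks[best], best

# toy example: three 4x4 candidates; the second has the highest predicted IoU
masks = np.zeros((3, 4, 4), dtype=bool)
masks[1, 1:3, 1:3] = True
out_mask, out_idx = select_output_mask(masks, np.array([0.55, 0.91, 0.40]))
```

The two non-selected candidates are the "alternative masks" that Section 3.2.2 later exploits for distractor detection.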
SAM2 applies a variant of the memory management proposed in [51], where the initialization frame is always kept in the memory, while the six recent frames are updated at every new frame by a first-in-first-out protocol. The memory and the management mechanism are visualized in Figure 2, while the reader is referred to [36] for more details.

# 3.2. Distractor-aware memory - DAM

Related works [8, 36, 47, 51] have clearly demonstrated the importance of the most recent frames, which are required to address target appearance changes and ensure accurate segmentation. However, a different type of frame is required to prevent drifting in the presence of critical distractors and for reliable target re-detection.

We propose to compose the memory, with respect to its function during tracking, of (i) a recent appearance memory (RAM) and (ii) a distractor resolving memory (DRM). RAM and DRM together form the distractor-aware memory (DAM), visualized in Figure 2. The function of RAM is to ensure segmentation accuracy in the considered frame. We thus design it akin to the current SAM2 [36] memory, as a FIFO buffer with $\frac{1}{2} N_{\mathrm{DAM}} = 3$ slots, containing the most recent target appearances with temporal encoding to identify the temporally more relevant frames for the task.

On the other hand, DRM ensures tracking robustness and re-detection. It should contain accurately segmented frames with critical recent distractors, including the initialization frame. It is thus composed of a slot reserved for the initialization frame and a FIFO buffer with $\frac{1}{2} N_{\mathrm{DAM}} = 3$ anchor frames updated during tracking. Since the purpose of DRM is to encode critical information for resolving distractors, it does not apply temporal encoding. Note that the pre-trained SAM2 already contains the building blocks to implement the proposed memory structure.

# 3.2.1. RAM management protocol

A crucial element of memory-based methods is the memory management protocol. To efficiently exploit the available memory slots, the memory should not be updated at every frame, since consecutive frames are highly correlated. In fact, [51] argue that visual redundancy in memory should be avoided in attention-based localization. RAM is thus updated every $\Delta = 5$ frames and always includes the most recent frame, since it is the most relevant for accurate target segmentation in the considered frame.

SAM2 [36] updates the memory at every frame, including when the target is absent. The target is considered absent when the segmentation mask is all zeros. However, even for a very short occlusion, the memory quickly fills up with frames without the target, which reduces the target appearance diversity in the model and leads to reduced segmentation accuracy upon target re-appearance. Furthermore, failing to re-detect the target leads to incorrectly updating the memory with an empty mask, which may cause error accumulation and, ultimately, re-detection failure. We thus propose not to update RAM when the target is not present, i.e., when the predicted target mask is empty.

# 3.2.2. DRM management protocol

DRM inherits the initial update rules from RAM, i.e., it is updated only when the target is present and at least $\Delta = 5$ frames have passed since the last update. It considers an additional rule to identify anchor frames containing critical distractors. In particular, drifting to a distractor may be avoided by including a past, temporally nearby frame with this distractor accurately segmented as the background. Recall that SAM2 predicts three output masks and selects the one with the highest predicted IoU (Section 3.1), which means we can consider it a multi-hypothesis prediction model.
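Taken together with the anchor-frame test described next, the RAM/DRM update rules can be sketched as follows. This is a simplified single-target sketch with hypothetical names: the distractor and stability tests are passed in as flags, and the constants follow the text ($\Delta = 5$, three FIFO slots per buffer).

```python
from collections import deque

DELTA = 5      # minimum number of frames between memory updates
N_SLOTS = 3    # FIFO slots in each of RAM and DRM (half of N_DAM)

class DAMemory:
    """Sketch of the distractor-aware memory (DAM) update rules."""

    def __init__(self, init_frame):
        self.init_frame = init_frame          # permanent DRM slot
        self.ram = deque(maxlen=N_SLOTS)      # recent appearance memory
        self.drm = deque(maxlen=N_SLOTS)      # distractor resolving memory
        self.last_ram = -DELTA                # so that frame 0 updates RAM
        self.last_drm = -DELTA

    def update(self, t, frame, target_present, distractor_detected, tracking_stable):
        if not target_present:                # never store empty-mask frames
            return
        if t - self.last_ram >= DELTA:        # RAM: at most every DELTA-th frame
            self.ram.append(frame)
            self.last_ram = t
        if t - self.last_drm >= DELTA and distractor_detected and tracking_stable:
            self.drm.append(frame)            # DRM: anchor frames with distractors,
            self.last_drm = t                 # stored only while tracking is stable

    def read(self, current_frame):
        # memory used to condition the current frame: initialization frame,
        # anchor frames, recent appearances, and always the most recent frame
        return [self.init_frame, *self.drm, *self.ram, current_frame]

mem = DAMemory("frame0")
for t in range(10):
    mem.update(t, f"frame{t}", target_present=True,
               distractor_detected=(t == 7), tracking_stable=True)
```

In this toy run, RAM stores every fifth frame while DRM stores only the frame on which a distractor was flagged during stable tracking.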
Our preliminary study showed that, in the frames before the failure occurs, SAM2 in fact detects such distractors in the two alternative predicted output masks (see Figure 1). We thus propose a simple anchor frame detection mechanism based on determining the hypothesis divergence between the output and alternative masks.

A bounding box is fitted to the output mask and to the union of the output mask and the largest connected component in the alternative mask. If the ratio between the areas of the two bounding boxes drops below $\theta_{\mathrm{anc}} = 0.7$, the current frame is considered a potential candidate to update DRM. Note that updating with a grossly incorrectly segmented target would lead to memory corruption and eventual tracking failure. We thus trigger the DRM update only during sufficiently stable tracking periods, i.e., if the predicted IoU score from SAM2 exceeds a threshold $\theta_{\mathrm{IoU}} = 0.8$ and if the mask area is within $\theta_{\mathrm{area}} = 0.2$ of the median area in the last $N_M = 10$ frames. Note that DAM4SAM is insensitive to the exact value of these parameters.

![](images/987ef3f0ccf3d2629df1ee8f9753409f0666952a710c60db5580324a4cf28fe2.jpg)
Figure 2. Overview of the SAM2 memory and the proposed Distractor-Aware Memory (DAM), which splits the model into Recent Appearance Memory (RAM) and Distractor Resolving Memory (DRM) and updates them by a new memory management protocol.

![](images/bbc8f44621ddc4033f844b27a3c666cd6bdcd9d8b270934034cf7f252c123622.jpg)

# 4. A distractor-distilled dataset

While benchmarks played a major role in the recent visual object tracking breakthroughs, we note that many of them contain sequences that are no longer considered challenging by modern standards. In fact, most modern trackers obtain high performance on these sequences, which overwhelms the total score and under-represents the improvements on challenging situations.
To facilitate the tracking performance analysis of the designs proposed in this paper, we semi-automatically distill several benchmarks into a distractor-distilled tracking dataset (DiDi).

We considered validation and test sequences of the major tracking benchmarks, which are known for high-quality annotation, i.e., GoT-10k [19], LaSoT [15], UTB180 [1], VOT-ST2020 and VOT-LT2020 [22], and VOT-ST2022 and VOT-LT2022 [23]. This gave us a pool of 808 sequences. A sequence was selected for the DiDi dataset if at least one-third of its frames passed the distractor presence criterion described next.

A frame was classified as containing non-negligible distractors if it contained a large enough region visually similar to the target. This criterion should be independent of the tracker localization method, yet should reflect the power of modern backbones. We thus encoded the image by DINO2 [34] and for each pixel in the feature space computed the distractor score as the average cosine distance to the features within the ground truth target region. We then computed the ratio between the number of pixels outside and inside the target region that exceeded the average of the scores computed within the target region. If the ratio exceeded 0.5, we considered the frame as containing non-negligible distractors.

Using the aforementioned protocol, we finally obtained 180 sequences with an average sequence length of $1.5\mathrm{k}$ frames (274,882 frames in total). Each sequence contains a single target annotated by an axis-aligned bounding box. In addition, we manually segmented the initial frames to allow the initialization of segmentation-based trackers. Figure 3 shows frames from the proposed DiDi dataset. Please see the supplementary material for additional information.

# 5. Experiments

The proposed DAM memory model for SAM2 is rigorously analyzed. Section 5.1 reports a series of experiments to justify the design choices.
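The per-frame distractor criterion of Section 4 can be sketched as follows. The paper phrases the score in terms of cosine distance; this sketch uses the equivalent similarity formulation on a hypothetical per-pixel feature map, and is a rough reconstruction rather than the authors' released protocol:

```python
import numpy as np

def has_distractors(features, target_mask, ratio_thr=0.5):
    """Sketch of the DiDi frame-level distractor criterion.

    features    : (H, W, D) per-pixel features from a strong backbone
    target_mask : (H, W) boolean ground-truth target region
    """
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    # per-pixel score: average cosine similarity to the target-region features
    score = f @ f[target_mask].mean(axis=0)
    thr = score[target_mask].mean()                  # in-target average score
    inside = np.count_nonzero((score > thr) & target_mask)
    outside = np.count_nonzero((score > thr) & ~target_mask)
    return outside / max(inside, 1) > ratio_thr      # ratio of target-like pixels

# toy check: a frame whose background contains target-like pixels
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
feats = np.tile(b, (3, 4, 1))
mask = np.zeros((3, 4), dtype=bool)
mask[0, :3] = True
feats[0, 0], feats[0, 1], feats[0, 2] = a, a, b      # target: two a's and one b
feats[1, :] = a                                       # distractor region: four a's
```

With the distractor row present the outside/inside ratio exceeds 0.5 and the frame is flagged; replacing that row with background features clears the flag.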
Section 5.2 compares the SAM2.1 extension with DAM against the state-of-the-art on the DiDi dataset. A detailed analysis on the challenging VOT tracking-by-segmentation benchmarks is reported in Section 5.3, while a comparison on the standard bounding-box tracking benchmarks is reported in Section 5.4.

# 5.1. Architecture design decisions

The design choices of the proposed distractor-aware memory and the management protocol from Section 3.2 are validated on the DiDi dataset from Section 4. We compute the VOTS [24] measures since they simultaneously account for short-term as well as long-term tracking performance. Performance is summarized by the tracking quality Q score and two auxiliary measures: robustness (i.e., the portion of successfully tracked frames) and accuracy (i.e., the average IoU between the prediction and ground truth during successful tracking). Results are shown in Table 1 and in the AR plot in Figure 4.

We first verify the argument that updating with frames without a target present causes memory degradation. We thus extend SAM2.1 to update only when the predicted mask is not empty (denoted SAM2.1$_{\mathrm{PRES}}$). SAM2.1$_{\mathrm{PRES}}$ increases the tracking quality $Q$ by $2.5\%$, primarily through improved robustness, which justifies our claim.

We next verify the assumption (also claimed in [51]) that frequent updates reduce tracking robustness due to highly correlated information stored in memory. We reduce the update rate in SAM2.1$_{\mathrm{PRES}}$ to every 5th frame (SAM2.1$_{\Delta=5}$). This negligibly improves Q, but does increase the robustness by $1.2\%$, which supports the claim. We did not observe further performance improvement with increasing $\Delta$.

Finally, we focus on our proposed distractor-resolving memory (DRM), which accounts for the tracking robustness in the presence of distractors.
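For reference, the two auxiliary measures can be computed from per-frame overlaps as below. This is a simplification that treats a frame with positive overlap as successfully tracked; the quality score Q and the full protocol are handled by the official VOTS toolkit:

```python
def tracking_measures(ious):
    """Accuracy/robustness in the spirit of the VOTS measures (sketch).

    ious : per-frame IoU between prediction and ground truth; a frame is
           treated as successfully tracked when its IoU is positive.
    """
    success = [x for x in ious if x > 0]
    robustness = len(success) / len(ious)            # portion of tracked frames
    accuracy = sum(success) / len(success) if success else 0.0
    return accuracy, robustness

# toy sequence: five frames, two of them failed (zero overlap)
acc, rob = tracking_measures([0.8, 0.6, 0.0, 0.7, 0.0])
```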
Recall that DRM is updated when a distractor is detected and under the condition that tracking is reliable; we first test the influence of these two conditions independently. We thus extend SAM2.1$_{\Delta=5}$ with the new DAM and update the DRM part only during

![](images/9794af65c4ea3e16d08b5b7b77ab00cc042053229cd89dfb7c1acf46fdd25694.jpg)
Figure 3. Example frames from the DiDi dataset showing challenging distractors. Targets are denoted by green bounding boxes.

Table 1. DAM4SAM architecture justification on DiDi dataset.
| Tracker | Quality | Accuracy | Robustness |
| --- | --- | --- | --- |
| SAM2.1 | 0.649 | 0.720 | 0.887 |
| SAM2.1$_{\mathrm{PRES}}$ | 0.665 | 0.723 ③ | 0.903 |
| SAM2.1$_{\Delta=5}$ | 0.667 | 0.718 | 0.914 |
| SAM2.1$_{\mathrm{DRM1}}$ | 0.672 ③ | 0.710 | 0.932 ② |
| SAM2.1$_{\mathrm{DRM2}}$ | 0.644 | 0.691 | 0.913 |
| DAM4SAM | 0.694 ① | 0.727 ① | 0.944 ① |
| DRM$_{\mathrm{tenc}}$ | 0.669 | 0.711 | 0.925 ③ |
| RAM$_{\overline{\mathrm{last}}}$ | 0.685 ② | 0.724 ② | 0.932 ② |
reliable tracking periods (SAM2.1$_{\mathrm{DRM1}}$). The tracking accuracy improves a little, with a robustness increase of $2\%$. Alternatively, we change the rule to update only when a distractor is detected (SAM2.1$_{\mathrm{DRM2}}$). While the robustness remains unchanged w.r.t. SAM2.1$_{\Delta=5}$, the accuracy in fact drops. This is expected, since distractor detection may also be triggered by an error in the target segmentation, which gets amplified by the update. To verify this, we next apply both of our proposed DRM update rules, arriving at the proposed DAM with SAM2.1 (DAM4SAM for short). Compared to SAM2.1$_{\Delta=5}$, we observe a substantial improvement in tracking quality Q ($4\%$), primarily due to a $3.3\%$ boost in robustness as well as a $1.3\%$ boost in accuracy, taking the top-right position in the AR plot (Figure 4) among all variants. This conclusively verifies that DRM should be updated with a detected distractor only if tracking is sufficiently reliable.

In Section 3.2.1 we claim that the DRM part of the memory should not be time-stamped, since the influence of a frame on distractor resolving in the current frame should not be biased by temporal proximity and should serve as a timeless prior. To test this claim, we modify DAM4SAM by using temporal encodings in DRM (except for the initialization frame); we denote it as $\mathrm{DRM_{tenc}}$. The tracking quality drops by $3.6\%$, which confirms the claim. We further inspect the updating regime in RAM, which always includes the most recent frame, but updates the memory slots every 5th frame. DAM4SAM is modified to update all RAM slots at every 5th frame $(\mathrm{RAM}_{\overline{\mathrm{last}}})$. This results in a slight tracking quality decrease $(1.3\%)$, which indicates that including the most recent frame in RAM is indeed beneficial, but not critical.

![](images/727d9cd159895b65fba7dcfd8097ed16b691a93b68cd42437c5179661d2c19a2.jpg)
Figure 4.
Accuracy-robustness plot on DiDi for the ablated versions of DAM4SAM. The tracking quality is given at each label. + +# 5.2. State-of-the-art comparison on DiDi + +DAM4SAM is compared on the DiDi dataset with the recent state-of-the-art trackers TransT [5], SeqTrack [6], AQATrack [40] and AOT [47] as well as trackers with explicit distractor handling mechanisms: KeepTrack [32], Cutie [9] and ODTrack [50]. + +Results in Table 2 reveal the advantages of the trackers with an explicit distractor-handling mechanism over the other trackers. Consider two similarly complex recent trackers SeqTrack and ODTrack, which are both based on the ViT-L backbone. On classical benchmarks like LaSoT, $\mathrm{LaSoT_{ext}}$ and GoT10k, ODTrack outperforms SeqTrack by $2\%$ , $6\%$ and $4\%$ , respectively (see Table 6). However, the performance gap increases to $15\%$ on DiDi (Table 2), which confirms that distractors are indeed a major challenge for modern trackers and that DiDi has a unique ability to emphasize the tracking capability under these conditions and + +to expose tracking design weaknesses. + +Focusing on the evaluation of the proposed tracker, DAM4SAM outperforms all trackers - the standard state-of-the-art trackers as well as trackers with explicit distractor handling mechanism. In particular, DAM4SAM outperforms state-of-the-art distractor-aware trackers ODTrack and Cutie, by $14\%$ and $21\%$ , respectively. + +We compare the proposed DAM4SAM with the concurrent, unpublished work SAMURAI [45]. SAMURAI is also built on top of SAM2.1 [36] and improves memory management by incorporating motion cues into memory selection and mask refinement process. In this respect, the work is closely related to ours. Results show that DAM4SAM outperforms SAMURAI in tracking quality by $2\%$ on DiDi, mostly due to the higher robustness (i.e., DAM4SAM tracks longer than SAMURAI). 
This result demonstrates the superiority of our new DAM memory and the management protocol for distractor handling, which is also less complex than the concurrent counterpart proposed in SAMURAI. + +Compared to another unpublished tracker with the alternative memory design SAM2.1Long [14], DAM4SAM outperforms it by a healthy $7\%$ tracking quality boost, indicating superiority of our proposed memory. The results reveal that the major source of the performance boost is the DAM4SAM tracking robustness, which means it fails less often and thus much better handles distractors. In fact, a close inspection of the results shows that SAM2.1Long performs on par with the baseline SAM2.1, which indicates that the long-term memory update mechanism presented in [14] does not improve performance in the presence of distractors. Finally, comparing DAM4SAM to the baseline SAM2.1 reveals $7\%$ boost in tracking quality, again, attributed mostly due to the improved robustness $(6\%)$ . + +These results validate the benefits of the proposed DAM and its management protocol in handling challenging distractors. Qualitative results of tracking and segmentation with DAM4SAM on DiDi (Figure 5) further demonstrate the remarkable tracking capability in the presence of challenging distractors. + +# 5.3. State-of-the-art comparison on VOT benchmarks + +The VOT initiative [21] is the major tracking initiative, providing challenging datasets for their yearly challenges. In contrast to most of the tracking benchmarks, the targets are annotated by segmentation masks, which allows more accurate evaluation of segmentation trackers, compared to the classic bounding-box benchmarks. In this paper, we include two recent single-target challenges: VOT2020 [22] and VOT2022 [23], as well as the most recent multi-target challenge VOTS2024 [25]. + +VOT2020 benchmark [22] consists of 60 challenging sequences, while trackers are run using anchor-based pro + +Table 2. State-of-the-art comparison on DiDi dataset. 
+ +
| Tracker | Quality | Accuracy | Robustness |
| --- | --- | --- | --- |
| SAMURAI [45] | 0.680 ② | 0.722 ③ | 0.930 ② |
| SAM2.1Long [14] | 0.646 | 0.719 | 0.883 |
| ODTrack [50] | 0.608 | 0.740 ① | 0.809 |
| Cutie [9] | 0.575 | 0.704 | 0.776 |
| AOT [47] | 0.541 | 0.622 | 0.852 |
| AQATrack [40] | 0.535 | 0.693 | 0.753 |
| SeqTrack [6] | 0.529 | 0.714 | 0.718 |
| KeepTrack [32] | 0.502 | 0.646 | 0.748 |
| TransT [5] | 0.465 | 0.669 | 0.678 |
| SAM2.1 [36] | 0.649 ③ | 0.720 | 0.887 ③ |
| DAM4SAM | 0.694 ① | 0.727 ② | 0.944 ① |
+ +![](images/ee815bcf090b4d2e94e4beda73d741489e97a61daf7f2edaacf69845c229b34b.jpg) +Figure 5. DAM4SAM qualitative results on the DiDi dataset with predicted masks shown in green, and tracked objects denoted by arrows. Per-frame calculated overlaps (intersection-over-union for each frame) are shown above the figures to indicate failure-free tracking over the entire sequence. + +tocol [22] to maximally utilize each sequence. Tracking performance is measured by the accuracy and robustness, summarized by the primary measure called the expected average overlap (EAO). + +Results on VOT2020 are shown in Table 3. The proposed DAM4SAM outperforms all compared trackers. In particular, it outperforms the recently published MixViT [11] by $25\%$ EAO, improving both, accuracy and robustness. DAM4SAM outperforms also the VOT2020 challenge winner RPT [31] by a large margin (37.5% in EAO). Comparing DAM4SAM to the original SAM2.1, the EAO is boosted by + +Table 3. State-of-the-art comparison on the VOT2020 benchmark. The challenge winner is marked by $\mathfrak{Q}$ + +
| Tracker | EAO | Accuracy | Robustness |
| --- | --- | --- | --- |
| ODTrack [50] | 0.605 ③ | 0.761 | 0.902 ③ |
| MixViT-L+AR [11] | 0.584 | 0.755 | 0.890 |
| SeqTrack-L [6] | 0.561 | - | - |
| MixFormer-L [10] | 0.555 | 0.762 ③ | 0.855 |
| RPT (winner) [31] | 0.530 | 0.700 | 0.869 |
| OceanPlus [49] | 0.491 | 0.685 | 0.842 |
| AlphaRef [44] | 0.482 | 0.754 | 0.777 |
| AFOD [7] | 0.472 | 0.713 | 0.795 |
| SAM2.1 [36] | 0.681 ② | 0.778 ② | 0.941 ② |
| DAM4SAM | 0.729 ① | 0.799 ① | 0.961 ① |
+ +$7\%$ , while accuracy and robustness are improved by $2.7\%$ and $2.1\%$ , respectively. + +VOT2022 benchmark [23] uses a refreshed dataset with 62 sequences (simplest sequences removed, and more challenging added). Table 4 includes the challenge top-performers, including the winner MS_AOT [47], as well as the recent published state-of-the-art trackers: DiffusionTrack [30], MixFormer [10], OSTrack [48] and D3Sv2 [28]. The proposed DAM4SAM outperforms the VOT2022 winner MS_AOT by a significant margin of $12\%$ in EAO. Note that the performance improvement is a consequence of both improved accuracy $(2\%)$ and robustness $(3\%)$ compared to the MS_AOT. In addition to achieving state-of-the-art performance, DAM4SAM outperforms also the baseline SAM2.1 by a healthy margin of $9\%$ EAO. + +The results on both, VOT2020 and VOT2022 clearly show that DAM4SAM outperforms all trackers, including the challenges top-performers and the recently published trackers, setting new state-of-the-art on these benchmarks. Despite its simplicity, the proposed training-free memory management is the key element for achieving excellent tracking performance. + +VOTS2024 benchmark [25] is one of the most recent benchmarks, consisting of 144 sequences. In contrast to VOT2020 and VOT2022, the VOTS2024 benchmark introduces a new, larger dataset, tracking multiple objects in the same scene (with ground truth sequestered at an evaluation server) and a new performance measure, designed to address short-term, long-term $^3$ , single and multi-target tracking scenarios. The VOTS2024 is currently considered as the most challenging tracking benchmark. + +Results are reported in Table 5. It is worth pointing out that the top performers are mostly unpublished (not peer-reviewed) trackers, tuned for the competition, and often complex ad-hoc combinations of multiple methods. For + +Table 4. State-of-the-art comparison on the VOT2022 benchmark. The challenge winner is marked by $\boldsymbol{\Omega}$ . 
+ +
| Tracker | EAO | Accuracy | Robustness |
| --- | --- | --- | --- |
| MS_AOT (winner) [47] | 0.673 ③ | 0.781 | 0.944 ③ |
| DiffusionTrack [30] | 0.634 | - | - |
| DAMTMask [23] | 0.624 | 0.796 ③ | 0.891 |
| MixFormerM [10] | 0.589 | 0.799 ② | 0.878 |
| OSTrackSTS [48] | 0.581 | 0.775 | 0.867 |
| Linker [42] | 0.559 | 0.772 | 0.861 |
| SRATransTS [23] | 0.547 | 0.743 | 0.866 |
| TransT_M [5] | 0.542 | 0.743 | 0.865 |
| GDFourner [23] | 0.538 | 0.744 | 0.861 |
| TransLL [23] | 0.530 | 0.735 | 0.861 |
| LWL_B2S [3] | 0.516 | 0.736 | 0.831 |
| D3Sv2 [28] | 0.497 | 0.713 | 0.827 |
| SAM2.1 [36] | 0.692 ② | 0.779 | 0.946 ② |
| DAM4SAM | 0.753 ① | 0.800 ① | 0.969 ① |
example, the challenge winner S3-Track combines visual and (mono)depth features, uses several huge backbones, and is much more complex than DAM4SAM. Despite this, using the same parameters as in the other experiments, DAM4SAM achieves a solid second place in the challenging VOTS2024 competition. In particular, it outperforms the challenge-tuned versions of the recently published trackers LORAT [27], Cutie [9], and the VOT2022 winner AOT [47]. Furthermore, the proposed memory management mechanism in DAM4SAM contributes an $8\%$ performance boost in tracking quality compared to the baseline SAM2.1, mostly due to $9\%$ higher robustness.

Table 5. State-of-the-art comparison on the VOTS2024 benchmark. The challenge winner is indicated in the table.
| Tracker | Quality | Accuracy | Robustness |
| --- | --- | --- | --- |
| S3-Track (winner) [26] | 0.722 ① | 0.784 | 0.889 ① |
| DMAOT_SAM [46] | 0.653 | 0.794 ① | 0.780 |
| HQ-DMAOT [46] | 0.639 | 0.754 | 0.790 |
| DMAOT [46] | 0.636 | 0.751 | 0.795 ③ |
| LY-SAM [25] | 0.631 | 0.765 | 0.776 |
| Cutie-SAM [9] | 0.607 | 0.756 | 0.730 |
| AOT [47] | 0.550 | 0.698 | 0.767 |
| LORAT [27] | 0.536 | 0.725 | 0.784 |
| SAM2.1 [36] | 0.661 ③ | 0.791 ③ | 0.790 |
| DAM4SAM | 0.711 ② | 0.793 ② | 0.864 ② |
+ +# 5.4. State-of-the-art comparison on bounding box benchmarks + +For a complete evaluation, we compare DAM4SAM on the following three standard bounding box tracking datasets: LaSoT [15], $\mathrm{LaSoT_{ext}}$ [16] and GoT10k [19]. Since frames are annotated by bounding boxes and SAM2 requires a segmentation mask provided in the first frame, we use the same SAM2 model to estimate the initialization mask. The min-max operation is applied on the predicted masks to obtain the axis-aligned bounding boxes required for the evaluation. Tracking performance is computed using area under the success rate curve [39] (AUC) in LaSoT [15] and $\mathrm{LaSoT_{ext}}$ [16] and the average overlap [19] (AO) in Got10k. + +LaSoT [15] is a large-scale tracking dataset with 1,400 video sequences, with 280 evaluation sequences and the rest are used for training. The sequences are equally split into 70 categories, where each category is represented by 20 sequences (16 for training and 4 for evaluation). The dataset consists of various scenarios covering short-term and long-term tracking. Results are shown in Table 6. The proposed DAM4SAM outperforms the baseline SAM2.1 [36] by $7.3\%$ , which indicates that the proposed memory management is important also in a bounding box tracking setup. Furthermore, DAM4SAM performs on par with the top-performing tracker LORAT [27], which was tuned on La-SoT training set, i.e., on the categories included in the evaluation set. It is worth noting that LORAT [27] has approximately $50\%$ more training parameters than DAM4SAM, making the model significantly more complex. + +$\mathbf{LaSO}\mathbf{T}_{\mathrm{ext}}$ [16] is an extension of the LaSoT [15] dataset, by 150 test sequences, divided into 15 new categories, which are not present in the training dataset. The results in Table 6 show that DAM4SAM outperforms the baseline version by a comfortable margin of $7\%$ in AUC. In addition, it outperforms the second-best tracker LORAT [27] by $7.6\%$ . 
This indicates that DAM4SAM generalizes well across various object categories while existing trackers suffer a much larger performance drop. + +GoT10k [19] is another widely used large-scale tracking dataset, composed of $\sim 10\mathrm{k}$ video sequences, from which 180 sequences are used for testing. We observed that top-performing trackers on GoT10k test set achieve excellent tracking performance, e.g., more than $78\%$ of average overlap, which leaves only small room for potential improvements. However, a solid $3.7\%$ boost in tracking performance is observed when comparing DAM4SAM to the top-performers LORAT [27] and ODTrack [50]. A close inspection of the DAM4SAM results reveals that more than $99\%$ of frames are successfully tracked (i.e., with a nonzero overlap), which indicates that GoT10k [19] difficulty level is diminishing for modern trackers. + +Table 6. State-of-the-art comparison on three standard bounding-box benchmarks. + +
| Tracker | LaSoT (AUC) | LaSoT$_{\mathrm{ext}}$ (AUC) | GoT10k (AO) |
| --- | --- | --- | --- |
| MixViT [11] | 72.4 | - | 75.7 |
| LORAT [27] | 75.1 ① | 56.6 ③ | 78.2 ③ |
| ODTrack [50] | 74.0 ② | 53.9 | 78.2 ③ |
| DiffusionTrack [30] | 72.3 | - | 74.7 |
| DropTrack [38] | 71.8 | 52.7 | 75.9 |
| SeqTrack [6] | 72.5 ③ | 50.7 | 74.8 |
| MixFormer [10] | 70.1 | - | 71.2 |
| GRM-256 [17] | 69.9 | - | 73.4 |
| ROMTrack [4] | 71.4 | 51.3 | 74.2 |
| OSTrack [48] | 71.1 | 50.5 | 73.7 |
| KeepTrack [32] | 67.1 | 48.2 | - |
| TOMP [33] | 68.5 | - | - |
| SAM2.1 [36] | 70.0 | 56.9 ② | 80.7 ② |
| DAM4SAM | 75.1 ① | 60.9 ① | 81.1 ① |
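The min-max operation used in Section 5.4 to convert predicted masks into the axis-aligned boxes required by these benchmarks can be sketched as follows (a hypothetical helper; the evaluation toolkits consume the resulting boxes):

```python
import numpy as np

def mask_to_bbox(mask):
    """Axis-aligned box from a binary mask via the min-max operation.

    Returns (x_min, y_min, x_max, y_max), or None for an empty mask.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# toy mask with foreground in rows 2-4, columns 1-3
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True
```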
+ +# 6. Conclusion + +We proposed a new distractor-aware memory model DAM and a management regime for memory-based trackers. The new model divides the memory by its tracking functionality into the recent appearances memory (RAM) and a distractor-resolving memory (DRM), responsible for the tracking accuracy and robustness, respectively. Efficient update rules are proposed, which also utilize the tracker output to detect frames with critical distractors useful for updating DRM. In addition, a distractor-distilled dataset DiDi is proposed to evaluate challenging tracking scenarios. + +The proposed DAM is implemented with SAM2.1 [36], forming DAM4SAM. Extensive analysis confirms the design decisions. Without any retraining and using fixed parameters, DAM4SAM sets a solid state-of-the-art on six benchmarks with a moderate $(20\%)$ speed reduction compared to SAM2.1 (i.e., 11 vs. $13.3~\mathrm{fps}^4$ ). This makes a compelling case for arguably simpler localization architectures of memory-based frameworks compared to the current tracking state-of-the-art. Furthermore, the results suggest more research should focus on efficient memory designs, with possibly learnable management policies. We believe these directions hold a strong potential for further performance boosts in future work. + +Acknowledgements. This work was supported by Slovenian research agency program P2-0214 and projects J2-2506, Z2-4459 and J2-60054, and by supercomputing network SLING (ARNES, EuroHPC Vega - IZUM). + +# References + +[1] Basit Alawode, Yuhang Guo, Mehnaz Ummar, Naoufel Werghi, Jorge Dias, Ajmal Mian, and Sajid Javed. Utb180: A high-quality benchmark for underwater tracking. In Proc. Asian Conf. Computer Vision, pages 3326-3342, 2022. 4 +[2] Luca Bertinetto, Jack Valmadre, João F Henriques, Andrea Vedaldi, and Philip H S Torr. Fully-convolutional siamese networks for object tracking. In Proc. European Conf. Computer Vision Workshops, pages 850–865, 2016. 
1 +[3] Goutam Bhat, Felix Järemo Lawin, Martin Danelljan, Andreas Robinson, Michael Felsberg, Luc Van Gool, and Radu Timofte. Learning what to learn for video object segmentation. In Proc. European Conf. Computer Vision, pages 777-794, 2020. 7 +[4] Yidong Cai, Jie Liu, Jie Tang, and Gangshan Wu. Robust object modeling for visual tracking. In Int. Conf. Computer Vision, pages 9589-9600, 2023. 8 +[5] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In Comp. Vis. Patt. Recognition, pages 8126-8135, 2021. 1, 5, 6, 7 +[6] Xin Chen, Houwen Peng, Dong Wang, Huchuan Lu, and Han Hu. Seqtrack: Sequence to sequence learning for visual object tracking. In Comp. Vis. Patt. Recognition, pages 14572-14581, 2023. 1, 5, 6, 7, 8 +[7] Yiwei Chen, Jingtao Xu, Jiaqian Yu, Qiang Wang, ByungIn Yoo, and Jae-Joon Han. Afod: Adaptive focused discriminative segmentation tracker. In Proc. European Conf. Computer Vision, pages 666-682, 2020. 7 +[8] Ho Kei Cheng and Alexander G Schwing. Xmem: Long-term video object segmentation with an atkinson-shiffrin memory model. In Proc. European Conf. Computer Vision, pages 640-658, 2022. 1, 2, 3 +[9] Ho Kei Cheng, Seoung Wug Oh, Brian Price, Joon-Young Lee, and Alexander Schwing. Putting the object back into video object segmentation. In Comp. Vis. Patt. Recognition, pages 3151-3161, 2024. 1, 2, 5, 6, 7 +[10] Yutao Cui, Cheng Jiang, Limin Wang, and Gangshan Wu. Mixformer: End-to-end tracking with iterative mixed attention. In Comp. Vis. Patt. Recognition, pages 13608-13618, 2022. 1, 7, 8 +[11] Yutao Cui, Cheng Jiang, Gangshan Wu, and Limin Wang. Mixformer: End-to-end tracking with iterative mixed attention. IEEE Trans. Pattern Anal. Mach. Intell., 46(6): 4129-4146, 2024. 6, 7, 8 +[12] Martin Danelljan, Goutam Bhat, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Int. Conf. Computer Vision, pages 6181-6190, 2019. 
2 +[13] Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Philip HS Torr, and Song Bai. Mose: A new dataset for video object segmentation in complex scenes. In Comp. Vis. Patt. Recognition, pages 20224-20234, 2023. 2 +[14] Shuangrui Ding, Rui Qian, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Yuwei Guo, Dahua Lin, and Jiaqi Wang. Sam2long: Enhancing sam 2 for long video segmentation with a training-free memory tree. arXiv preprint arXiv:2410.16268, 2024. 2, 6 + +[15] Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. Lasot: A high-quality benchmark for large-scale single object tracking. In Comp. Vis. Patt. Recognition, 2019. 4, 8 +[16] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Harshit, Mingzhen Huang, Juehuan Liu, et al. Lasot: A high-quality large-scale single object tracking benchmark. In Int. J. Comput. Vision, pages 439-461, 2021. 8 +[17] Shenyuan Gao, Chunluan Zhou, and Jun Zhang. Generalized relation modeling for transformer tracking. In Comp. Vis. Patt. Recognition, pages 18686-18695, 2023. 8 +[18] Lingyi Hong, Zhongying Liu, Wenchao Chen, Chenzhi Tan, Yuang Feng, Xinyu Zhou, Pinxue Guo, Jinglun Li, Zhaoyu Chen, Shuyong Gao, et al. Lvos: A benchmark for large-scale long-term video object segmentation. arXiv preprint arXiv:2404.19326, 2024. 2 +[19] Lianghua Huang, Xin Zhao, and Kaiqi Huang. Got-10k: A large high-diversity benchmark for generic object tracking in the wild. In IEEE Trans. Pattern Anal. Mach. Intell., pages 1562–1577, 2018. 4, 8 +[20] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In Int. Conf. Computer Vision, pages 4015-4026, 2023. 2 +[21] Matej Kristan, Jiri Matas, Ales Leonardis, Tomas Vojir, Roman Pflugfelder, Gustavo Javier Fernandez Dominguez, Georg Nebehay, Fatih Porikli, and Luka Cehovin. 
A novel performance evaluation methodology for single-target trackers. In IEEE Trans. Pattern Anal. Mach. Intell., pages 213-215, 2016. 6
[22] Matej Kristan, Aleš Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kämäräinen, Martin Danelljan, Luka Čehovin Zajc, Alan Lukežić, Ondrej Drbohlav, et al. The eighth visual object tracking VOT2020 challenge results. In Proc. European Conf. Computer Vision Workshops, pages 547-601, 2020. 4, 6
[23] Matej Kristan, Aleš Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kämäräinen, Hyung Jin Chang, Martin Danelljan, Luka Čehovin Zajc, Alan Lukežić, et al. The tenth visual object tracking VOT2022 challenge results. In Proc. European Conf. Computer Vision Workshops, pages 431-460, 2022. 1, 4, 6, 7
[24] Matej Kristan, Jiří Matas, Martin Danelljan, Michael Felsberg, Hyung Jin Chang, Luka Čehovin Zajc, Alan Lukežić, Ondrej Drbohlav, Zhongqun Zhang, Khanh-Tung Tran, et al. The first visual object tracking segmentation VOTS2023 challenge results. In Int. Conf. Computer Vision Workshops, pages 1796-1818, 2023. 4
[25] Matej Kristan, Jiri Matas, Pavel Tokmakov, Michael Felsberg, Luka Čehovin Zajc, Alan Lukežić, Khanh-Tung Tran, Xuan-Son Vu, Johanna Bjorklund, Hyung Jin Chang, and Gustavo Fernandez. The second visual object tracking segmentation VOTS2024 challenge results, 2024. 1, 2, 6, 7
[26] Xin Li, Deshui Miao, Zhenyu He, Yaowei Wang, Huchuan Lu, and Ming-Hsuan Yang. Learning spatial-semantic features for robust video object segmentation. arXiv preprint arXiv:2407.07760, 2024. 7
[27] Liting Lin, Heng Fan, Zhipeng Zhang, Yaowei Wang, Yong Xu, and Haibin Ling. Tracking meets LoRA: Faster training, larger model, stronger performance. In Proc. European Conf. Computer Vision, pages 300-318, 2025. 7, 8
[28] Alan Lukežić, Jiří Matas, and Matej Kristan. A discriminative single-shot segmentation network for visual object tracking. In IEEE Trans. Pattern Anal. Mach.
Intell., pages 9742-9755, 2021. 1, 7 +[29] Alan Lukežić, Žiga Trojer, Jiří Matas, and Matej Kristan. A new dataset and a distractor-aware architecture for transparent object tracking. Int. J. Comput. Vision, pages 1-14, 2024. 1, 2 +[30] Run Luo, Zikai Song, Lintao Ma, Jinlin Wei, Wei Yang, and Min Yang. Diffusiontrack: Diffusion model for multi-object tracking. In Proc. of the AAAI Conf. on Artificial Intelligence, pages 3991-3999, 2024. 7, 8 +[31] Ziang Ma, Linyuan Wang, Haitao Zhang, Wei Lu, and Jun Yin. Rpt: Learning point set representation for siamese visual tracking. In Proc. European Conf. Computer Vision Workshops, pages 653-665, 2020. 6, 7 +[32] Christoph Mayer, Martin Danelljan, Danda Pani Paudel, and Luc Van Gool. Learning target candidate association to keep track of what not to track. In Comp. Vis. Patt. Recognition, pages 13444-13454, 2021. 2, 5, 6, 8 +[33] Christoph Mayer, Martin Danelljan, Goutam Bhat, Matthieu Paul, Danda Pani Paudel, Fisher Yu, and Luc Van Gool. Transforming model prediction for tracking. In Comp. Vis. Patt. Recognition, pages 8731-8740, 2022. 2, 8 +[34] Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision. arXiv:2304.07193, 2023. 4 +[35] Jordi Pont-Tuset, Federico Perazzi, Sergi Caelles, Pablo Arbeláez, Alexander Sorkine-Hornung, and Luc Van Gool. The 2017 davis challenge on video object segmentation. arXiv:1704.00675, 2017. 
2 +[36] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. Sam 2: Segment anything in images and videos. In Proc. Int. Conf. Learning Representations, 2025. 2, 3, 6, 7, 8 +[37] Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, et al. Hiera: A hierarchical vision transformer without the bells-and-whistles. In Proc. Int. Conf. Mach. Learning, pages 29441-29454, 2023. 2 + +[38] Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, and Antoni B Chan. Dropmae: Masked autoencoders with spatial-attention dropout for tracking tasks. In Comp. Vis. Patt. Recognition, pages 14561–14571, 2023. 8 +[39] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Object tracking benchmark. In IEEE Trans. Pattern Anal. Mach. Intell., pages 1834-1848, 2015. 8 +[40] Jinxia Xie, Bineng Zhong, Zhiyi Mo, Shengping Zhang, Liangtao Shi, Shuxiang Song, and Rongrong Ji. Autoregressive queries for adaptive tracking with spatio-temporal transformers. In Comp. Vis. Patt. Recognition, pages 19300-19309, 2024. 5, 6 +[41] Ning Xu, Linjie Yang, Yuchen Fan, Dingcheng Yue, Yuchen Liang, Jianchao Yang, and Thomas Huang. Youtube-vos: A large-scale video object segmentation benchmark. arXiv preprint arXiv:1809.03327, 2018. 2 +[42] Zizheng Xun, Shangzhe Di, Yulu Gao, Zongheng Tang, Gang Wang, Si Liu, and Bo Li. Linker: Learning long short-term associations for robust visual tracking. In IEEE Transactions on Multimedia, pages 6228-6237, 2024. 7 +[43] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Int. Conf. Computer Vision, pages 10448-10457, 2021. 
1
[44] Bin Yan, Xinyu Zhang, Dong Wang, Huchuan Lu, and Xiaoyun Yang. Alpha-refine: Boosting tracking performance by precise bounding box estimation. In Comp. Vis. Patt. Recognition, pages 5289-5298, 2021. 7
[45] Cheng-Yen Yang, Hsiang-Wei Huang, Wenhao Chai, Zhongyu Jiang, and Jenq-Neng Hwang. Samurai: Adapting segment anything model for zero-shot visual tracking with motion-aware memory. arXiv:2411.11922, 2024. 6
[46] Zongxin Yang and Yi Yang. Decoupling features in hierarchical propagation for video object segmentation. In Neural Inf. Proc. Systems, pages 36324-36336, 2022. 7
[47] Zongxin Yang, Yunchao Wei, and Yi Yang. Associating objects with transformers for video object segmentation. Neural Inf. Proc. Systems, 34:2491-2502, 2021. 1, 2, 3, 5, 6, 7
[48] Botao Ye, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Joint feature learning and relation modeling for tracking: A one-stream framework. In Proc. European Conf. Computer Vision, pages 341-357, 2022. 7, 8
[49] Zhipeng Zhang, Houwen Peng, Jianlong Fu, Bing Li, and Weiming Hu. Ocean: Object-aware anchor-free tracking. In Proc. European Conf. Computer Vision, pages 771-787, 2020. 7
[50] Yaozong Zheng, Bineng Zhong, Qihua Liang, Zhiyi Mo, Shengping Zhang, and Xianxian Li. Odtrack: Online dense temporal token learning for visual tracking. In Proc. of the AAAI Conf. on Artificial Intelligence, pages 7588-7596, 2024. 1, 5, 6, 7, 8
[51] Junbao Zhou, Ziqi Pang, and Yu-Xiong Wang. Rmem: Restricted memory banks improve video object segmentation. In Comp. Vis. Patt. Recognition, pages 18602-18611, 2024.
1, 2, 3, 4 \ No newline at end of file diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/images.zip b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fc4bf2de81f528e1ae034059b10202fea124b903 --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83961242c124e8dec73df1fe55bf7876a1c59b92b2e5ffa9c30dcf64ac14b7ad +size 576987 diff --git a/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/layout.json b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9db63d82948c07e29fdb7b9f8db6eab02c44f1dc --- /dev/null +++ b/CVPR/2025/A Distractor-Aware Memory for Visual Object Tracking with SAM2/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41973c54f91a6b148780bff93dd287937bd64a11e2c8b447cc2ee3e572de73a3 +size 340215 diff --git a/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_content_list.json b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..839f23427f90392cc012d4fd4cf4e3ff38b5f75a --- /dev/null +++ b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:285cf0a3695a0948b29b7771aef427a9b837e394334f60f9dd605ced7dab6095 +size 96418 diff --git a/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_model.json b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..6e1386d62ba1470d35ec25f0f7afbc3fe5c89df9 --- /dev/null +++ b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d48691a786af42da7297964075c57a342e831c981434953dd27aebf965b1840e +size 119119 diff --git a/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_origin.pdf b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..307e8c4f22f72e15c8764643ab1fa420fe16dfc4 --- /dev/null +++ b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/df09f1c7-a9b0-4c23-b900-ec225450f19c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e80181d03099fb72cc115914dcff07558c2eca2db42fd8ce204a83cb06e226a +size 3002622 diff --git a/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/full.md b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f0505d84cabedfacad5f596f1ddddb963f1d987e --- /dev/null +++ b/CVPR/2025/A Flag Decomposition for Hierarchical Datasets/full.md @@ -0,0 +1,440 @@ +# A Flag Decomposition for Hierarchical Datasets + +Nathan Mankovich +Universitat de València + +Ignacio Santamaria +Universidad de Cantabria + +Gustau Camps-Valls +Universitat de València + +Tolga Birdal +Imperial College London + +# Abstract + +Flag manifolds encode nested sequences of subspaces and serve as powerful structures for various computer vision and machine learning applications. Despite their utility in tasks such as dimensionality reduction, motion averaging, and subspace clustering, current applications are often restricted to extracting flags using common matrix decomposition methods like the singular value decomposition. 
Here, we address the need for a general algorithm to factorize and work with hierarchical datasets. In particular, we propose a novel, flag-based method that decomposes arbitrary hierarchical real-valued data into a hierarchy-preserving flag representation in Stiefel coordinates. Our work harnesses the potential of flag manifolds in applications including denoising, clustering, and few-shot learning. + +# 1. Introduction + +Hierarchical structures are fundamental across a variety of fields: they shape taxonomies and societies [25], allow us to study 3D objects [26], underpin neural network architectures [59], and form the backbone of language [31]. They reflect parts-to-whole relationships [55] and how our world organizes itself compositionally [49]. However, when handling hierarchies in data, we often resort to the temptation to flatten them for simplicity, losing essential structure and context in the process. This tendency is evident in standard dimensionality reduction techniques, like principal component analysis, which ignore any hierarchy the data contains. + +In this work, we advocate for an approach rooted in flags to preserve the richness of hierarchical linear subspaces. A flag [33, 36] represents a sequence of nested subspaces with increasing dimensions, denoted by its type or signature $(n_{1}, n_{2}, \ldots, n_{k}; n)$ , where $n_{1} < n_{2} < \ldots < n_{k} < n$ . For instance, a flag of type $(1, 2; 3)$ describes a line within a plane in $\mathbb{R}^3$ . By working in flag manifolds—structured spaces of such nested subspaces—we leverage the full complexity of hierarchical data. Flag manifolds have already + +![](images/43d6fe4c9968d9a435cce4cefa2996618d64e7335ede03d76ca06a6b8489fa3e.jpg) +Figure 1. A flag decomposition (center) is used for a hierarchy-preserving flag representation and reconstruction. 
This decomposition separates a data matrix $\mathbf{D}$ with an associated hierarchy of column indices $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \mathcal{A}_3$ into Stiefel coordinates $\mathbf{Q}$ for a flag $[\mathbf{Q}]$, a block-upper triangular matrix $\mathbf{R}$, and a permutation matrix $\mathbf{P}$ (not pictured). Example applications include denoising (green), clustering (yellow), and few-shot learning (orange).

shown promise in extending traditional methods like Principal Component Analysis (PCA) [36, 47, 53], Independent Component Analysis (ICA) [41-45], generalizing subspace learning [54], and Self-Organizing Maps (SOM) [32]. They enable robust representations for diverse tasks: averaging motions [33], modeling variations in face illumination [11, 33], parameterizing 3D shape spaces [9], and clustering subspaces for video and biological data [33-35, 38].

Our main contribution is a Flag Decomposition (FD) specifically designed to preserve hierarchical structures within data (see Fig. 1). First, we formalize the notion of hierarchies in data and the definition of a flag (§2). Next, we provide a practical algorithm for deriving FDs (§3) and outline multiple promising applications (§4). Then we demonstrate its robustness in clustering tasks involving noisy and outlier-contaminated simulated data (§5). Our approach outperforms standard methods, such as Singular Value Decomposition (SVD), in clustering and denoising hyperspectral satellite images. Finally, we show that using flags as prototypes in a few-shot framework improves classification accuracy on benchmark datasets. Final remarks are in §6 and formal proofs are in the suppl. material. Our implementation is at https://github.com/nmank/FD.

# 2. Preliminaries

We begin by formalizing hierarchical datasets. We then build to a definition of flags by providing the necessary background in matrix spaces.
For notation, italicized capital letters (e.g., $\mathcal{A}$) denote sets, and boldface letters denote matrices and column vectors (e.g., $\mathbf{X}$ and $\mathbf{x}_i$). $[\mathbf{X}]$ denotes the subspace spanned by the columns of $\mathbf{X}$.

Consider the data matrix $\mathbf{D} \in \mathbb{R}^{n \times p}$ with a hierarchy defined by the subsets of column indices

$$
\varnothing = \mathcal {A} _ {0} \subset \mathcal {A} _ {1} \subset \mathcal {A} _ {2} \subset \dots \subset \mathcal {A} _ {k} = \{1, 2, \dots , p \}. \tag {1}
$$

Let $\mathbf{D}_{\mathcal{A}_i}$ be the matrix containing only the columns of $\mathbf{D}$ in $\mathcal{A}_i$.

Definition 1 (Column hierarchy for $\mathbf{D}$ ). We call $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \dots \subset \mathcal{A}_k$ a column hierarchy for $\mathbf{D} \in \mathbb{R}^{n \times p}$ when

$$
\dim \left(\left[ \mathbf {D} _ {\mathcal {A} _ {i - 1}} \right]\right) < \dim \left(\left[ \mathbf {D} _ {\mathcal {A} _ {i}} \right]\right) \quad \text{for } i = 1, 2, \dots , k. \tag {2}
$$

Given a column hierarchy for $\mathbf{D}^1$, there is a natural correspondence between column and subspace hierarchies:

columns: $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$

subspaces: $[\mathbf{D}_{\mathcal{A}_1}]\subset [\mathbf{D}_{\mathcal{A}_2}]\subset \dots \subset [\mathbf{D}_{\mathcal{A}_k}]$

These hierarchies can include coarse-to-fine neighborhoods (e.g., Example 2.1), spectral hierarchies (e.g., Example 2.2), and feature representations (e.g., Example 2.3).

Example 2.1 (Neighborhood Hierarchy). Consider $(p_i \times p_i)$ concentric RGB image patches increasing in size with $i = 1, 2, 3$. We store the $i^{\text{th}}$ image patch in $\mathbf{D} \in \mathbb{R}^{3 \times p_i^2}$. $\mathcal{A}_1$ contains the column indices of the smallest image patch in $\mathbf{D}$, $\mathcal{A}_2$ contains those of the next smallest patch, and $\mathcal{A}_3$ the largest patch.
This results in the neighborhood column hierarchy $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \mathcal{A}_3$ for the data matrix $\mathbf{D}$ . + +Example 2.2 (Spectral Hierarchy). Let $\mathbf{D} \in \mathbb{R}^{n \times p}$ be a hyperspectral image with $n$ pixels and $p$ bands. A hierarchy is imposed on the bands by grouping wavelengths into: + +![](images/5a101766022f10f915fa57b003c21b7f5d9ef42dfa817357b887ba7c76b99fd6.jpg) + +Example 2.3 (Feature Hierarchy). Consider a feature extractor (e.g., deep network) admitting the following decomposition: $f_{\Theta} = f_{\Theta}^{(2)} \circ f_{\Theta}^{(1)} : \mathbb{R}^N \to \mathbb{R}^n$ where $f_{\Theta}^{(1)} : \mathbb{R}^N \to \mathbb{R}^n$ . $s$ samples $x_1, \dots, x_s$ are used to obtain + +$$ +\mathbf {D} = \left[ f _ {\Theta} ^ {(1)} (\boldsymbol {x} _ {1}) | \dots | f _ {\Theta} ^ {(1)} (\boldsymbol {x} _ {s}) | f _ {\Theta} (\boldsymbol {x} _ {1}) | \dots | f _ {\Theta} (\boldsymbol {x} _ {s}) \right]. +$$ + +Since information flows from $f_{\Theta}^{(1)}$ to $f_{\Theta}^{(2)}$ , it is natural to assume that features extracted by $f_{\Theta}^{(1)}$ span a subspace of the features extracted by $f_{\Theta}$ . Therefore, we propose the hierarchy $\{1, 2, \ldots, s\} \subset \{1, 2, \ldots, 2s\}$ . + +Next, we build a mathematical formalization of flags. + +Definition 2 (Matrix spaces). The orthogonal group $O(n)$ denotes the group of distance-preserving transformations of a Euclidean space of dimension $n$ : + +$$ +O (n) := \left\{\mathbf {M} \in \mathbb {R} ^ {n \times n}: \mathbf {M} ^ {\top} \mathbf {M} = \mathbf {M M} ^ {\top} = \mathbf {I} \right\}. \tag {3} +$$ + +A permutation matrix is a square matrix $\mathbf{P} \in \mathbb{R}^{n \times n}$ where each column and each row contains only a single 1 value and the rest are 0. DP permutes the columns of $\mathbf{D}$ . An important property of permutation matrices is $\mathbf{P}^{-1} = \mathbf{P}^{\top}$ . The Stiefel manifold $St(k, n)$ , a.k.a. 
the set of all orthonormal $k$ -frames in $\mathbb{R}^n$ , can be represented as the quotient: $St(k, n) = O(n) / O(n - k)$ . A point on the Stiefel manifold is parameterized by a tall-skinny $n \times k$ real matrix with orthonormal columns, i.e. $\mathbf{X} \in \mathbb{R}^{n \times k}$ where $\mathbf{X}^{\top} \mathbf{X} = \mathbf{I}$ . The Grassmannian, $Gr(k, n)$ represents the set of all $k$ -dimensional subspaces of $\mathbb{R}^n$ . Each point in $Gr(k, n)$ can be identified with an equivalence class of matrices in the Stiefel manifold, where two matrices are equivalent if their columns span the same subspace. We represent $[\mathbf{X}] \in Gr(k, n)$ using the Stiefel coordinates $\mathbf{X} \in St(k, n)$ . + +Definition 3 (Flag). Let $\mathcal{V}_i$ be an $n_i$ -dimensional subspace of a vector space $\mathcal{V}$ of dimension $n$ . A flag of type $(n_1, n_2, \ldots, n_k; n)$ , is the nested sequence of subspaces + +$$ +\varnothing \subset \mathcal {V} _ {1} \subset \mathcal {V} _ {2} \subset \dots \subset \mathcal {V} _ {k} \subset \mathcal {V}. \tag {4} +$$ + +The flag manifold $\mathcal{FL}(n_1, n_2, \ldots, n_k; n)$ is a Riemannian manifold and the collection of all flags of type, a.k.a. signature, $(n_1, n_2, \ldots, n_k; n)$ [52, 60]. The first empty subspace, $\emptyset$ with dimension $n_0 = 0$ , is now mentioned for completeness but will be dropped from notation and implicit from here on. In this work, we will work with real flags, namely $\mathcal{V} = \mathbb{R}^n$ . + +Remark 1 (Flag manifold as a quotient of groups). Ye et al. [60, Proposition 4.10] prove that $\mathcal{FL}(n_1,\ldots ,n_k;n)$ is diffeomorphic to the quotient space $St(n_{k},n) / (O(m_{1})\times$ $O(m_{2})\times \dots \times O(m_{k}))$ where $m_i = n_i - n_{i - 1}$ . This fact gives a Stiefel manifold coordinate representation of a flag. + +Definition 4 (Stiefel coordinates for flags [60]). 
A flag is represented by $\mathbf{X} = [\mathbf{X}_1|\mathbf{X}_2|\dots |\mathbf{X}_k]\in St(n_k,n)$ where $\mathbf{X}_i\in \mathbb{R}^{n\times m_i}$. Specifically, $\mathbf{X}$ represents the flag

$$
\llbracket \mathbf {X} \rrbracket = [ \mathbf {X} _ {1} ] \subset [ \mathbf {X} _ {1}, \mathbf {X} _ {2} ] \subset \dots \subset [ \mathbf {X} _ {1}, \mathbf {X} _ {2}, \dots , \mathbf {X} _ {k} ] \subset \mathbb {R} ^ {n} \tag {5}
$$

We say $\llbracket \mathbf{X}\rrbracket$ is a flag of type (or signature) $(n_{1},n_{2},\ldots ,n_{k};n)$ and $[\mathbf{X}_1,\mathbf{X}_2,\dots ,\mathbf{X}_i]$ denotes the span of $[\mathbf{X}_1|\mathbf{X}_2|\dots |\mathbf{X}_i]$ (for $i = 1,2,\ldots ,k$ ).

Table 1. Computing the chordal distance on Stiefel, Grassmann, and flag manifolds using matrix representatives. $\|\cdot\|_{F}$ is the Frobenius norm.
| Representation / Manifold | $\mathbf{X}, \mathbf{Y} \in St(n_k, n)$ | $[\mathbf{X}], [\mathbf{Y}] \in Gr(n_k, n)$ | $\llbracket \mathbf{X} \rrbracket, \llbracket \mathbf{Y} \rrbracket \in \mathcal{FL}(n_1, \ldots, n_k; n)$ |
| --- | --- | --- | --- |
| Chordal distance | $\|\mathbf{X} - \mathbf{Y}\|_F$ | $\frac{1}{\sqrt{2}}\|\mathbf{X}\mathbf{X}^\top - \mathbf{Y}\mathbf{Y}^\top\|_F$ | $\sqrt{\frac{1}{2}\sum_{i=1}^{k}\|\mathbf{X}_i\mathbf{X}_i^\top - \mathbf{Y}_i\mathbf{Y}_i^\top\|_F^2}$ |
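The three formulas in Tab. 1 translate directly into code. Below is a minimal numpy sketch (our illustration, not the authors' implementation); `flag_type` holds the cumulative dimensions $n_1 < n_2 < \dots < n_k$ of the flag signature, and the inputs are assumed to be orthonormal Stiefel representatives:

```python
import numpy as np

def chordal_stiefel(X, Y):
    """Stiefel chordal distance: ||X - Y||_F."""
    return np.linalg.norm(X - Y)

def chordal_grassmann(X, Y):
    """Grassmannian chordal distance: (1/sqrt(2)) ||X X^T - Y Y^T||_F."""
    return np.linalg.norm(X @ X.T - Y @ Y.T) / np.sqrt(2)

def chordal_flag(X, Y, flag_type):
    """Flag chordal distance: sqrt(1/2 * sum_i ||X_i X_i^T - Y_i Y_i^T||_F^2),
    cutting blocks at the cumulative dimensions n_1 < ... < n_k."""
    d2, start = 0.0, 0
    for n_i in flag_type:
        Xi, Yi = X[:, start:n_i], Y[:, start:n_i]
        d2 += np.linalg.norm(Xi @ Xi.T - Yi @ Yi.T) ** 2
        start = n_i
    return np.sqrt(d2 / 2.0)
```

For a one-block signature (`flag_type = [n_k]`), the flag distance reduces to the Grassmannian one, mirroring the fact that the flag manifold distance sums Grassmannian terms over the blocks.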
+ +Given the tall-skinny orthonormal matrix representatives $\mathbf{X},\mathbf{Y}\in \mathbb{R}^{n\times n_k}$ , we also utilize their chordal distances as given in Tab. 1. The chordal distance on the Stiefel manifold measures the 2-norm of the vectorized matrices. In contrast, the Grassmannian chordal distance measures the 2-norm of the vector of sines of the principal angles [6] between the subspaces through the Frobenius norm of the projection matrices [12]. The chordal distance on the flag manifold [48] arises from the fact that it is a closed submanifold of $\mathrm{Gr}(m_1,n)\times \dots \times \mathrm{Gr}(m_k,n)$ as shown by Ye et al. [60, Proposition 3.2]. This distance is similar to the Grassmannian chordal distance between subsequent pieces of the flags (e.g., $[\mathbf{X}_i]$ and $[\mathbf{Y}_i]$ for $i = 1,\ldots ,k$ ). + +# 3. Flag Decomposition (FD) + +We will now introduce our novel Flag Decomposition (FD) that, given $\mathbf{D}$ , outputs a hierarchy-preserving flag $[\mathbf{Q}]$ . From this point on, $[\cdot, \cdot, \cdot]$ denotes column space and $[\cdot| \cdot|]$ block matrices. Let $\mathcal{B}_i = \mathcal{A}_i \setminus \mathcal{A}_{i-1}$ be the difference of sets for $i = 1, 2, \ldots, k$ and $\mathbf{B}_i = \mathbf{D}_{\mathcal{B}_i}$ so that $[\mathbf{D}_{\mathcal{A}_i}] = [\mathbf{B}_1, \mathbf{B}_2, \ldots, \mathbf{B}_i]$ . We define the permutation matrix $\mathbf{P}$ so $\mathbf{B} = [\mathbf{B}_1|\mathbf{B}_2| \cdots|\mathbf{B}_k] = \mathbf{DP}$ . Also, denote the projection matrix onto the null space of $[\mathbf{Q}_i]$ with $\mathbf{Q}_i \in St(m_i, n)$ as $\Pi_{\mathbf{Q}_i^\perp} = \mathbf{I} - \mathbf{Q}_i\mathbf{Q}_i^\top$ . We use $n_0 = 0$ , $\mathcal{A}_0 = \emptyset$ , and $\Pi_{\mathbf{Q}_0^\perp} = \mathbf{I}$ . + +Definition 5 (Hierarchy-preserving flags). 
A flag $\llbracket \mathbf{X}\rrbracket \in \mathcal{FL}(n_1,n_2,\ldots ,n_k;n)$ is said to preserve the hierarchy of $\mathbf{D}$ if $[\mathbf{D}_{\mathcal{A}_i}] = [\mathbf{X}_1,\mathbf{X}_2,\dots ,\mathbf{X}_i]$ for each $i = 1,2,\ldots ,k$.

If $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$ is a column hierarchy for $\mathbf{D}$, then a hierarchy-preserving flag results in the three equivalent nested sequences of subspaces

$$
\left[ \mathbf {D} _ {\mathcal {A} _ {1}} \right] \subset \quad \left[ \mathbf {D} _ {\mathcal {A} _ {2}} \right] \subset \dots \subset \quad \left[ \mathbf {D} _ {\mathcal {A} _ {k}} \right]
$$

$$
\left[ \mathbf {B} _ {1} \right] \subset \left[ \mathbf {B} _ {1}, \mathbf {B} _ {2} \right] \subset \dots \subset \left[ \mathbf {B} _ {1}, \mathbf {B} _ {2}, \dots , \mathbf {B} _ {k} \right]
$$

$$
[ \mathbf {X} _ {1} ] \quad \subset \quad [ \mathbf {X} _ {1}, \mathbf {X} _ {2} ] \quad \subset \quad \dots \quad \subset \quad [ \mathbf {X} _ {1}, \mathbf {X} _ {2}, \ldots , \mathbf {X} _ {k} ].
$$

SVD and QR decomposition can recover flags from data with certain, limited column hierarchies (see suppl. material for details). However, when faced with a more complex column hierarchy, neither QR nor SVD can recover the entire hierarchy-preserving flag (see Fig. 2).

These examples motivate a generalized decomposition of $\mathbf{D}$ that outputs a hierarchy-preserving flag. In particular, unlike in QR decomposition, $\mathbf{D}$ can be rank-deficient (e.g., $\mathrm{rank}(\mathbf{D}) < p$); and unlike the SVD, we can decompose into flags of type $(n_1, n_2, \ldots, n_k; n)$ with $n_k \leq p$.

![](images/38c3f4c369768b5438db6dfdc2363f2ef2b72bcf0bc925cec09c158f7ff70191.jpg)
Figure 2. We recover a flag from $\mathbf{D}$ with hierarchy $\mathcal{A}_1 \subset \mathcal{A}_2$. Columns of $\mathbf{D}$ are plotted as points with $\mathcal{A}_1$ in blue and $\mathcal{A}_2 \setminus \mathcal{A}_1$ in orange.
FD is the only method that recovers the flag (line inside plane). SVD correctly recovers the plane but not the line, whereas QR recovers the line but its plane misses the orange points.

![](images/5fa49838817303c6c543254366060ae10849129915673ff77f1b446f0a39084a.jpg)

![](images/e05f771a4be21232864b703d20b9b29a65db9579521903685b94c304f4d22040.jpg)

Definition 6 (Flag Decomposition (FD)). Let $\mathbf{D} \in \mathbb{R}^{n \times p}$ be data with the hierarchically nested sequence of column indices $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \dots \subset \mathcal{A}_k$. A flag decomposition of type $(n_1, n_2, \dots, n_k; n)$ is the matrix factorization

$$
\mathbf {D} = \mathbf {Q R P} ^ {\top} \tag {6}
$$

where the block structures are

$$
\mathbf {Q} = \left[ \underbrace {\mathbf {Q} _ {1}} _ {n \times m _ {1}} \mid \underbrace {\mathbf {Q} _ {2}} _ {n \times m _ {2}} \mid \dots \mid \underbrace {\mathbf {Q} _ {k}} _ {n \times m _ {k}} \right] \in \mathbb {R} ^ {n \times n _ {k}}, \tag {7}
$$

$$
\mathbf {R} = \left[ \begin{array}{c c c c} \mathbf {R} _ {1 1} & \mathbf {R} _ {1 2} & \dots & \mathbf {R} _ {1 k} \\ \mathbf {0} & \mathbf {R} _ {2 2} & \dots & \mathbf {R} _ {2 k} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf {0} & \mathbf {0} & \dots & \mathbf {R} _ {k k} \end{array} \right] \in \mathbb {R} ^ {n _ {k} \times p}, \tag {8}
$$

$$
\mathbf {P} = \left[ \mathbf {P} _ {1} \mid \mathbf {P} _ {2} \mid \dots \mid \mathbf {P} _ {k} \right] \in \mathbb {R} ^ {p \times p}. \tag {9}
$$

Here, $\mathbf{Q}$ corresponds to the Stiefel coordinates for the hierarchy-preserving flag $[\mathbf{Q}]\in \mathcal{FL}(n_1,n_2,\ldots ,n_k;n)$ with $m_{i} = n_{i} - n_{i - 1}$ and $n_k\leq p$, $\mathbf{R}$ is a block upper-triangular matrix, and $\mathbf{P}$ is a permutation matrix so that $\mathbf{QR} = \mathbf{DP}$.

We now use Prop. 1 to determine when we can recover a hierarchy-preserving flag from data and then we use Prop.
2 to show how to construct $\mathbf{R}$ and $\mathbf{P}$ from this flag. Finally, we combine Prop. 1 and 2 to define when we can find an FD (see Prop. 3) and investigate its uniqueness (see Prop. 4). + +Proposition 1. Suppose $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$ is a column hierarchy for $\mathbf{D}$ . Then there exists $\mathbf{Q} = [\mathbf{Q}_1|\mathbf{Q}_2|\dots |\mathbf{Q}_k]$ that are coordinates for the flag $[\mathbf{Q}]]\in \mathcal{FL}(n_1,n_2,\ldots ,n_k;n)$ where $n_i = \mathrm{rank}(\mathbf{D}_{\mathcal{A}_i})$ that satisfies $[\mathbf{Q}_i] = [\Pi_{\mathbf{Q}_{i - 1}^{\perp}}\dots \Pi_{\mathbf{Q}_1^{\perp}}\mathbf{B}_i]$ and the projection property (for $i = 1,2\dots ,k$ ): + +$$ +\Pi_ {\mathbf {Q} _ {i} ^ {\perp}} \Pi_ {\mathbf {Q} _ {i - 1} ^ {\perp}} \dots \Pi_ {\mathbf {Q} _ {1} ^ {\perp}} \mathbf {B} _ {i} = 0. \tag {10} +$$ + +Proof. Define (for $i = 2,3,\ldots ,k$ ) the projector onto the null space of $[\mathbf{Q}_1,\mathbf{Q}_2,\dots ,\mathbf{Q}_i]$ , as $\Pi_{\mathbf{Q}_{i}^{\perp}} = \mathbf{I} - \mathbf{Q}_{:i}\mathbf{Q}_{:i}^{\top}$ . We use this to define $\mathbf{C}_i = \Pi_{\mathbf{Q}_{i - 1}^{\perp}}\mathbf{B}_i$ and $\mathbf{Q}_i\in St(m_i,n)$ so that $[\mathbf{Q}_i] = [\mathbf{C}_i]$ . Then we use mathematical induction to show results ending in Eq. (10) and $\mathbf{Q}_i\in St(m_i,n)$ with $m_{i} = n_{i} - n_{i - 1}$ where $n_i = \mathrm{rank}(\mathbf{D}_{\mathcal{A}_i})$ . + +The simplest methods for recovering $\mathbf{Q}$ so that $[\mathbf{Q}_i] = [\Pi_{\mathbf{Q}_{i - 1}^{\perp}}\dots \Pi_{\mathbf{Q}_1^{\perp}}\mathbf{B}_i]$ include the left singular vectors from the truncated SVD and the $\mathbf{Q}$ matrix from the QR decomposition with pivoting. We will now use $\mathbf{Q}$ and the projection property to construct $\mathbf{R}$ and $\mathbf{P}$ for the FD. + +Proposition 2. 
Suppose $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$ is a column hierarchy for $\mathbf{D}$ . Then there exists some hierarchy-preserving $[\mathbf{Q}]\in \mathcal{FL}(n_1,n_2,\ldots ,n_k;n)$ (with $n_i = \mathrm{rank}(\mathbf{D}_{\mathcal{A}_i})$ ) that satisfies the projection property of $\mathbf{D}$ and can be used for a flag decomposition of $\mathbf{D}$ with + +$$ +\mathbf {R} _ {i, j} = \left\{ \begin{array}{l l} \mathbf {Q} _ {i} ^ {\top} \boldsymbol {\Pi} _ {\mathbf {Q} _ {i - 1} ^ {\perp}} \dots \boldsymbol {\Pi} _ {\mathbf {Q} _ {1} ^ {\perp}} \mathbf {B} _ {i}, & i = j \\ \mathbf {Q} _ {i} ^ {\top} \boldsymbol {\Pi} _ {\mathbf {Q} _ {i - 1} ^ {\perp}} \dots \boldsymbol {\Pi} _ {\mathbf {Q} _ {1} ^ {\perp}} \mathbf {B} _ {j}, & i < j \end{array} , \right. \tag {11} +$$ + +$$ +\mathbf {P} _ {i} = \left[ \mathbf {e} _ {b _ {i, 1}} \mid \mathbf {e} _ {b _ {i, 2}} \mid \dots \mid \mathbf {e} _ {b _ {i, | B _ {i} |}} \right] \tag {12} +$$ + +where $\{b_{i,j}\}_{j = 1}^{|B_i|} = \mathcal{B}_i$ and $\mathbf{e}_b$ is the $b_{i,j}^{\mathrm{th}}$ standard basis vector. + +Proof sketch. This is proved using the previous proposition. + +![](images/6cabe326f6fc8d668ef59d7ba38882a7f37733115b0168e0f6fdc5c6b7105518.jpg) + +Proposition 3. A data matrix $\mathbf{D}$ admits a flag decomposition of type $(n_{1}, n_{2}, \dots, n_{k}; n)$ if and only if $\mathcal{A}_{1} \subset \mathcal{A}_{2} \subset \dots \subset \mathcal{A}_{k}$ is a column hierarchy for $\mathbf{D}$ . + +Proof sketch. We use Prop. 1 and 2 and the definition of a column hierarchy for D. Details in suppl. material. + +Therefore, any $\mathbf{D}$ with an associated column hierarchy admits a hierarchy-preserving FD. Now we state a uniqueness result for the FD. + +Proposition 4 (Block rotational ambiguity). Given the FD $\mathbf{D} = \mathbf{Q}\mathbf{R}\mathbf{P}^{\top}$ , any other Stiefel coordinates for the flag $[\mathbf{Q}]$ produce an FD of $\mathbf{D}$ (via Prop. 2). 
Furthermore, different Stiefel coordinates for $[[\mathbf{Q}]]$ produce the same objective function values in Eq. (13) and Eq. (14) (for $i = 1,\dots ,k$).

Proof sketch. Notice $\mathbf{Q}_i\mathbf{Q}_i^{\top} = (\mathbf{Q}_i\mathbf{M}_i)(\mathbf{Q}_i\mathbf{M}_i)^{\top}$ for any $\mathbf{Q}_i \in St(m_i, n)$ and $\mathbf{M}_i \in O(m_i)$. See our suppl. material for details.

# 3.1. Flag recovery

In this section, we introduce an approach for recovering the FD $\mathbf{D} = \mathbf{Q}\mathbf{R}\mathbf{P}^{\top}$ from a given, corrupted version of the dataset, $\tilde{\mathbf{D}}$, and the column hierarchy $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$ for $\mathbf{D}$. We call recovering $[[\mathbf{Q}]]$ from $\tilde{\mathbf{D}}$ and $\mathcal{A}_1 \subset \mathcal{A}_2 \subset \dots \subset \mathcal{A}_k$ flag recovery.

Recall that any $[[\mathbf{Q}]]$ satisfying the projection property of $\mathbf{D}$ can be used for an FD (see Prop. 2). However, since we only have access to $\tilde{\mathbf{D}}$, we may not be able to satisfy this property. As a remedy, we try to get as close as possible to satisfying the projection property by optimizing for $[[\mathbf{Q}]]$ such that $\Pi_{\mathbf{Q}_{i}^{\perp}}\dots \Pi_{\mathbf{Q}_{1}^{\perp}}\tilde{\mathbf{B}}_{i}\approx \mathbf{0}$ for each $i = 1,2,\ldots ,k$. We minimize this cost column-wise to solve the problem in maximum generality. Specifically, we propose the following minimization:

$$
[[\mathbf{Q}]] = \underset{[[\mathbf{X}]] \in \mathcal{FL}(n_1, n_2, \dots, n_k; n)}{\arg \min} \sum_{i = 1}^{k} \sum_{j \in \mathcal{B}_i} \| \Pi_{\mathbf{X}_i^{\perp}} \dots \Pi_{\mathbf{X}_1^{\perp}} \tilde{\mathbf{d}}_j \|_r^q \tag{13}
$$

for $r \geq 0$, $q > 0$.
Choosing small $r$ and $q$ (e.g., $r = 0$ and $q = 1$) would result in a robust flag recovery, optimal for recovering $\mathbf{D}$ in the presence of outlier columns in $\tilde{\mathbf{D}}$. This problem is difficult, even after restricting $q$ and $r$, so we adopt an iterative optimization, solving for each $\mathbf{Q}_i$ for $i = 1$, then $i = 2$, and so on until $i = k$:

$$
\mathbf{Q}_i = \underset{\mathbf{X} \in St(m_i, n)}{\arg \min} \sum_{j \in \mathcal{B}_i} \| \boldsymbol{\Pi}_{\mathbf{X}^{\perp}} \boldsymbol{\Pi}_{\mathbf{Q}_{i - 1}^{\perp}} \dots \boldsymbol{\Pi}_{\mathbf{Q}_1^{\perp}} \tilde{\mathbf{d}}_j \|_r^q. \tag{14}
$$

The solution to the case where $r = q = 2$ is obtained by the first $m_{i}$ left singular vectors of $\Pi_{\mathbf{Q}_{i - 1}^{\perp}}\dots \Pi_{\mathbf{Q}_1^{\perp}}\tilde{\mathbf{D}}_{\mathcal{B}_i}$. In general, solving Eq. (14) for some $i$ recovers $\mathbf{Q}_i$ whose columns form a basis for an $m_{i}$-dimensional subspace of $\mathbb{R}^n$. Although outputting a truncated basis via QR with pivoting or rank-revealing QR decompositions would offer faster alternatives to SVD for solving Eq. (14), SVD offers more reliable subspace recovery [10]. Thus, we use SVD-based algorithms and leave QR methods for future work.

For cases where $\tilde{\mathbf{D}}$ has outlier columns, we use an $L_{1}$ penalty, i.e., $q = 1$, and introduce an IRLS-SVD solver, a simple method that resembles IRLS algorithms for subspace recovery [15, 27-29, 34, 57, 61]. In practice, we implement a vanilla IRLS-SVD algorithm which could further be made faster and provably convergent using tools from [1, 4, 23, 24, 58]. We leave more advanced solvers, as well as working with other values of $r$ and $q$ (e.g., $r = 0$ [30]), for future work.

# 3.2. Flag-BMGS

We now propose Flag-BMGS, an algorithm for finding the FD and its robust version, Robust FD (RFD).
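As a concrete illustration, the $r = q = 2$ projection-then-SVD step of Eq. (14) can be sketched in numpy. This is a minimal sketch under stated assumptions: `flag_bmgs_step` is a hypothetical helper name, and the chain of projectors is collapsed into a single $\mathbf{I} - \mathbf{Q}\mathbf{Q}^{\top}$ with the previously recovered blocks stacked.

```python
import numpy as np

def flag_bmgs_step(Q_prev, B_i, m_i):
    """One r = q = 2 step of Eq. (14): project the block B_i away from the
    span of the previously recovered blocks, then keep the first m_i left
    singular vectors as Stiefel coordinates Q_i (a hypothetical helper)."""
    # The successive projectors Pi_{Q_{i-1}^perp} ... Pi_{Q_1^perp}
    # collapse to I - Q Q^T when Q stacks all earlier blocks.
    C = B_i if Q_prev is None else B_i - Q_prev @ (Q_prev.T @ B_i)
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    return U[:, :m_i]

# Recover Q = [Q_1 | Q_2] block by block for a toy hierarchy in R^10.
rng = np.random.default_rng(0)
B_1, B_2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 3))
Q_1 = flag_bmgs_step(None, B_1, 2)
Q_2 = flag_bmgs_step(Q_1, B_2, 2)
Q = np.hstack([Q_1, Q_2])  # orthonormal columns; Q_2 is orthogonal to Q_1
```

In this toy run the projection property of Eq. (10) holds exactly for the first block, since $\mathbf{Q}_1$ spans the columns of $\mathbf{B}_1$.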
Our algorithm is inspired by the Block Modified Gram-Schmidt (BMGS) procedure [3, 19]. Modified Gram-Schmidt (MGS) is a more numerically stable implementation of classical Gram-Schmidt orthogonalization. BMGS runs an MGS algorithm on block matrices, iteratively projecting and orthonormalizing matrices rather than vectors, to output a QR decomposition. In contrast, we use Flag-BMGS on a data matrix with a column hierarchy to produce a hierarchy-preserving FD. We summarize the properties of Gram-Schmidt variants in Tab. 2.

Table 2. A summary of GS algorithms and their properties.

| Algorithm | GS | MGS | BMGS | Flag-BMGS |
| --- | --- | --- | --- | --- |
| Stable | ✗ | ✓ | ✓ | ✓ |
| Block-wise | ✗ | ✗ | ✓ | ✓ |
| Hier.-pres. | ✗ | ✗ | ✗ | ✓ |

Flag-BMGS operates by first generating a permutation matrix $\mathbf{P}$ (see Eq. (12)) from the column hierarchy to extract the matrix $\mathbf{B} = \mathbf{D}\mathbf{P}^{\top}$. Then each iteration $i = 1,2,\ldots,k$ constructs $\Pi_{\mathbf{Q}_{i-1}^{\perp}} \cdots \Pi_{\mathbf{Q}_1^{\perp}} \mathbf{B}_i$, solves an optimization of the form Eq. (14), and then constructs each $\mathbf{R}_{j,i}$ for $j \leq i$ (see Eq. (11)). In experiments, we call FD the output of Flag-BMGS using SVD with $r = q = 2$, and Robust FD (RFD) the iterative variant using $r = 2$ and $q = 1$ to solve Eq. (14). Algorithm details are in the suppl. material.

Stability results and the search for more optimal algorithms, such as those using block Householder transformations [17], are left to future work. Many other block matrix decompositions exist; a brief discussion of one such low-rank block matrix decomposition [46] can be found in the suppl. material.

On the flag type. Flag type is an input to Flag-BMGS. Detecting or selecting an adapted flag type from data, rather than relying on a heuristic choice, was recently addressed by Szwagier et al. in principal subspace analysis [53]. The FD model does not benefit from this advance because it preserves hierarchies rather than directions of maximum variance. We now discuss methods for estimating the flag type.

Assuming full access to $\mathbf{D}$, the flag type is $(n_{1}, n_{2}, \ldots, n_{k}; n)$ where $n_{i} = \mathrm{rank}(\mathbf{D}_{\mathcal{A}_{i}})$ (see Prop. 1). Yet, the data can be corrupted, i.e., we observe only $\tilde{\mathbf{D}} = \mathbf{D} + \epsilon$ ($\epsilon$ denotes random noise) instead of the true $\mathbf{D}$.
This leads to an estimation problem for the flag type of the FD, assuming access to $\tilde{\mathbf{D}}$ and the true (known) column hierarchy for $\mathbf{D}$.

A naive approach to flag type estimation for our FD is to run the FD along with a singular value truncation in each SVD. Methods for truncating the singular values include the elbow method and Gavish-Donoho [13, 16]. In this work, given a column hierarchy and $\tilde{\mathbf{D}}$ (but not $\mathbf{D}$), we choose a flag type where $n_k < \mathrm{rank}(\tilde{\mathbf{D}})$ and input it to FD. In doing so, the output of FD forms a reduced-rank approximation of $\mathbf{D}$, denoted $\hat{\mathbf{D}} = \mathbf{Q}\mathbf{R}\mathbf{P}^{\top}$.

A promising future research direction involves exploring smarter truncation methods for extracting the flag type of $\mathbf{D}$ under specific contamination criteria.

![](images/018b99d7401289c78a8d296577ef83c870629105a88849a24a2a52112d415358.jpg)
Figure 3. Images from the YFB [5] are flattened and horizontally stacked into $\mathbf{D}$. We use the hierarchy with the images of the first subject as $\mathcal{A}_1$ and all images as $\mathcal{A}_2$. We run FD (flag type $(1,2)$) and baselines (rank 2). FD is the only method to correctly reconstruct the subjects. We plot the basis vectors (eigenfaces) on the right and find FD extracts basis elements that most closely resemble the subjects.

![](images/8e78c4a0de8b560d68ad1d2d4a0ea12e0f90ab8bb00cc8bc6f2723cc1acc699e.jpg)

# 4. Applications

Before moving on to experimental results, we specify applications of the Flag Decomposition (FD), which enables reconstruction in the presence of data corruption, visualization, classification, and a novel prototype and distance for few-shot learning.

# 4.1. Reconstruction

Consider the matrix $\mathbf{D}$ with an associated column hierarchy $\mathcal{A}_1\subset \mathcal{A}_2\subset \dots \subset \mathcal{A}_k$.
Suppose we have a corrupted version $\tilde{\mathbf{D}}$ and the feature hierarchy is known a priori. We use FD to recover $\mathbf{D}$ from $\tilde{\mathbf{D}}$. For example, $\tilde{\mathbf{D}}$ could be a (pixels $\times$ bands) flattened hyperspectral image with a hierarchy on the bands (see Example 2.2 and Fig. 1) with outlier bands or additive noise. Another example is a hierarchy of subjects: images of subject 1 inside those of subjects 1 & 2 (see Fig. 3). A reconstruction using FD respects this hierarchy by preserving the identities of the two subjects.

Suppose FD is computed by running Flag-BMGS on $\tilde{\mathbf{D}}$ to output $\mathbf{Q},\mathbf{R},\mathbf{P}$. Then we reconstruct $\hat{\mathbf{D}} = \mathbf{Q}\mathbf{R}\mathbf{P}^{\top} \approx \mathbf{D}$. This is a low-rank reconstruction of $\tilde{\mathbf{D}}$ when $n_k < \mathrm{rank}(\tilde{\mathbf{D}})$. Unlike other reconstruction algorithms, this application preserves the column hierarchy.

# 4.2. Leveraging the geometry of flags

Consider a collection $\mathcal{D} = \{\mathbf{D}^{(j)}\in \mathbb{R}^{n\times p_j}\}_{j = 1}^N$ with column hierarchies $\mathcal{A}_1^{(j)}\subset \mathcal{A}_2^{(j)}\subset \dots \subset \mathcal{A}_k^{(j)}$. For example, $\mathcal{D}$ could be a collection of (band $\times$ pixel) hyperspectral image patches. After choosing one flag type $(n_{1},n_{2},\ldots ,n_{k};n)$ with $n_k\leq \min(p_1,\ldots ,p_N)$, we can use Flag-BMGS with this flag type on each $\mathbf{D}^{(j)}$ to extract the collection of flags $\mathcal{Q} = \{[[\mathbf{Q}^{(j)}]]\}_{j = 1}^{N}\subset \mathcal{FL}(n_1,n_2,\dots ,n_k;n)$.
Now, we can use the chordal distance on the flag manifold $\mathcal{FL}(n_1,n_2,\dots ,n_k;n)$ or on the product of Grassmannians $Gr(m_{1},n)\times Gr(m_{2},n)\times \dots \times Gr(m_{k},n)$ to build an $N\times N$ distance matrix, then run multidimensional scaling (MDS) [22] to visualize $\mathcal{D}$ or $k$-nearest neighbors [39] to cluster $\mathcal{D}$ (see Fig. 1). Other clustering algorithms like $k$-means can also be implemented with means on products of Grassmannians (e.g., [11, 14, 34, 37]) or chordal flag averages (e.g., [33]). Additionally, we can generate intermediate flags by sampling along geodesics between flags in $\mathcal{Q}$ using tools like manopt [8, 56] to explore the flag manifold between data samples.

![](images/317e8e645c6ecb09a70be2d3ece613194aa1ed48fc7551fce518694267a3fe02.jpg)
Figure 4. To perform few-shot learning, we embed all $s = 3$ shots $(\pmb{x}_1,\pmb{x}_2,\pmb{x}_3)$ of one class into one flag using FD. The light shapes are pre-trained and frozen feature extractors.

# 4.3. Few-shot learning

In few-shot learning, a model is trained on very few labeled examples, a.k.a. 'shots', from each class to make accurate predictions. Suppose we have a pre-trained feature extractor $f_{\Theta} : \mathcal{X} \to \mathbb{R}^n$, parameterized by $\Theta$. In few-shot learning, the number of classes in the training set is referred to as 'ways.' We denote the feature representation of $s$ shots in class $c$ as $f_{\Theta}(\boldsymbol{x}_{c,1}), f_{\Theta}(\boldsymbol{x}_{c,2}), \ldots, f_{\Theta}(\boldsymbol{x}_{c,s})$ where $\boldsymbol{x}_{c,i} \in \mathcal{X}$. The 'support' set is the set of all shots from all classes. A few-shot learning architecture contains a method for determining a class representative (a.k.a. 'prototype') in the feature space $(\mathbb{R}^n)$ for each class using its shots. A test sample ('query') is then passed through the encoder, and the class of the nearest prototype determines its class.
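The nearest-prototype rule just described can be sketched with a projection-based distance of the form used in Eq. (15) below. This is a hedged sketch: `flag_prototype_distance` and `classify` are hypothetical helper names, and each class prototype is assumed to be a list of Stiefel blocks with orthonormal columns.

```python
import numpy as np

def flag_prototype_distance(Q_blocks, feats):
    """Sum of squared residuals of the query features projected away from
    each prototype block: sum_i ||(I - Q_i Q_i^T) f_i||^2 (hypothetical helper)."""
    return sum(
        float(np.linalg.norm(f - Q @ (Q.T @ f)) ** 2)
        for Q, f in zip(Q_blocks, feats)
    )

def classify(prototypes, feats):
    """Nearest-prototype rule: the predicted class minimizes the projection
    distance between its flag prototype and the query (hypothetical helper)."""
    return min(prototypes, key=lambda c: flag_prototype_distance(prototypes[c], feats))
```

For example, a query whose features lie inside every subspace of a class's flag has distance zero to that prototype and is assigned to that class.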
Overall, a classical few-shot learning architecture comprises three (differentiable) pieces: (1) a mapping of shots in the feature space to prototypes, (2) a measure of distance between prototypes and queries, and (3) a loss function for fine-tuning the pre-trained encoder.

Flag classifiers. Take a feature extractor that decomposes into $k \geq 2$ functions. Specifically, $f_{\Theta} = f_{\Theta}^{(k)} \circ \dots \circ f_{\Theta}^{(1)}: \mathcal{X} \to \mathbb{R}^n$, where each $f_{\Theta}^{(i)}$ maps to $\mathbb{R}^n$. We can generalize Example 2.3 to construct a $k$-part hierarchical data matrix. For simplicity, we consider the case where $k = 2$. After constructing a data matrix with a corresponding column hierarchy, we use Flag-BMGS to represent the support of one class, $c$, in the feature space as $[[\mathbf{Q}^{(c)}]] \in \mathcal{FL}(n_1, n_2; n)$ (see Fig. 4). Now, each subspace $[\mathbf{Q}_{1}^{(c)}]$ and $[\mathbf{Q}_{2}^{(c)}]$ represents the features extracted by $f_{\Theta}^{(1)}$ and $f_{\Theta}$, respectively.

Given a flag-prototype $[[\mathbf{Q}^{(c)}]]\in \mathcal{FL}(n_1,n_2;n)$ and query $\left\{f_{\Theta}^{(1)}(\boldsymbol{x}),f_{\Theta}(\boldsymbol{x})\right\} \subset \mathbb{R}^n$, we measure the distance between them as

$$
\left\| \boldsymbol{\Pi}_{\mathbf{Q}_1^{(c)\perp}} f_{\Theta}^{(1)}(\boldsymbol{x}) \right\|_2^2 + \left\| \boldsymbol{\Pi}_{\mathbf{Q}_2^{(c)\perp}} f_{\Theta}(\boldsymbol{x}) \right\|_2^2 \tag{15}
$$

Table 3. Metrics for evaluating simulations. LRSE stands for Log Relative Squared Error, and $\|\cdot\|_F$ is the Frobenius norm. $[[\mathbf{X}]]$ represents the true flag, $\mathbf{D}$ the true data, $[[\hat{\mathbf{X}}]]$ the estimated flag, and $\hat{\mathbf{D}}$ the reconstructed data.
| Metric (↓) | Formula |
| --- | --- |
| Chordal Distance | $\sqrt{\sum_{i=1}^{k}\left(m_i - \mathrm{tr}\left(\mathbf{X}_i^{\top}\hat{\mathbf{X}}_i\hat{\mathbf{X}}_i^{\top}\mathbf{X}_i\right)\right)}$ |
| LRSE | $10\log_{10}\left(\|\mathbf{D}-\hat{\mathbf{D}}\|_F^2 / \|\mathbf{D}\|_F^2\right)$ |
where $\Pi_{\mathbf{Q}_i^{(c)\perp}} = \mathbf{I} - \mathbf{Q}_i^{(c)}{\mathbf{Q}_i^{(c)}}^{\top}$ for $i = 1,2$. This is proportional to the squared chordal distance on $\mathcal{FL}(1,2;n)$ when the matrix $\left[f_{\Theta}^{(1)}(\boldsymbol{x}) \,|\, f_{\Theta}(\boldsymbol{x})\right]$ is in Stiefel coordinates.

Flag classifiers are fully differentiable, enabling fine-tuning of the feature extractor with a flag classifier loss. We leave this as an avenue for future work.

# 5. Results

We run three simulations in Sec. 5.1 to test the capacity of FD and RFD for flag recovery and reconstruction on noisy and outlier-contaminated data. In Sec. 5.2 we visualize clusters of flag representations for hierarchically structured $\mathbf{D}$ matrices using FD. We highlight the utility of FD for denoising hyperspectral images in Sec. 5.3. We cluster FD-extracted flag representations of hyperspectral image patches via a pixel hierarchy in Sec. 5.4. Finally, in Sec. 5.5, we apply flag classifiers to three datasets for classification.

Baselines. While more modern, task-specific algorithms may exist as baselines for each experiment, our primary objective is to demonstrate the effectiveness of FD and RFD (computed using Flag-BMGS) compared to the de facto standards, SVD and QR, in the context of hierarchical data. SVD is a standard denoising method. Two common flag extraction algorithms are SVD [11, 33, 34, 53] and QR [33]. In Sec. 5.5 we compare our results to two standard prototypes (means and subspaces) for classification within the few-shot learning paradigm.

Metrics. In the additive noise model $\tilde{\mathbf{D}} = \mathbf{D} + \epsilon$, we compute the signal-to-noise ratio (SNR) in decibels (dB) as

$$
\operatorname{SNR}(\mathbf{D}, \epsilon) = 10\log_{10}\left(\|\mathbf{D}\|_F^2 / \|\epsilon\|_F^2\right). \tag{16}
$$

A negative SNR indicates more noise than signal, and a positive SNR indicates more signal than noise.
The rest of our metrics are in Tab. 3.

# 5.1. Reconstruction Under Corruption

For both experiments, we generate $\mathbf{X} \in St(10,4)$ that represents $[[\mathbf{X}]] \in \mathcal{FL}(2,4;10)$. Then we use $\mathbf{X}$ to build a data matrix $\mathbf{D} \in \mathbb{R}^{10 \times 40}$ with the feature hierarchy $\mathcal{A}_1 \subset \mathcal{A}_2 = \{1,2,\dots,20\} \subset \{1,2,\dots,40\}$. We generate $\tilde{\mathbf{D}}$ as either $\mathbf{D}$ with additive noise or $\mathbf{D}$ with columns replaced by outliers. Our goal is to recover $[[\mathbf{X}]]$ and $\mathbf{D} = \mathbf{X}\mathbf{R}\mathbf{P}^{\top}$ from $\tilde{\mathbf{D}}$ using FD and RFD with a flag type of $(2, 4; 10)$, and the first 4 left singular vectors from SVD. We evaluate the estimated $[[\mathbf{X}]]$ and $\hat{\mathbf{D}}$ using the metrics in Tab. 3.

![](images/30d6c275e7959b48de5f690834efcbdaae8dbcac89a27d991100bb0f45929f3d.jpg)
Figure 5. FD & RFD improve flag recovery while maintaining accurate reconstructions. SNR is Eq. (16). LRSE & Dist are in Tab. 3 with Dist as the chordal distance. Best fit lines are quadratic.

Additive noise. We contaminate with noise by $\tilde{\mathbf{D}} = \mathbf{D} + \epsilon$ where $\epsilon$ is sampled from either a mean-zero normal, exponential, or uniform distribution of increasing variance. FD and RFD improve flag recovery over SVD and produce similar reconstruction errors (see Fig. 5).

Robustness to outliers. We construct $\tilde{\mathbf{D}}$ to contain outlier columns. The inlier columns of $\tilde{\mathbf{D}}$ form the flag-decomposable $\mathbf{D} = \mathbf{X}\mathbf{R}\mathbf{P}^{\top}$ with the flag $[[\mathbf{X}]]$. FD and RFD outperform SVD and IRLS-SVD, with RFD providing the most accurate flag recovery and inlier reconstructions (see Fig. 6).

# 5.2. MDS Clustering

We generate 60 $\mathbf{D}$ matrices in 3 clusters, each with 20 points. Then we add normally-distributed noise to generate 60 $\tilde{\mathbf{D}}$ matrices (see suppl. material).
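The two quantities used to evaluate these experiments, the SNR of Eq. (16) and the flag chordal distance of Tab. 3, can be sketched in numpy. `snr_db` and `chordal_dist` are hypothetical helper names, and flags are assumed to be passed as lists of Stiefel blocks with orthonormal columns.

```python
import numpy as np

def snr_db(D, eps):
    """Signal-to-noise ratio in decibels, as in Eq. (16):
    10 * log10(||D||_F^2 / ||eps||_F^2)."""
    return 10.0 * np.log10(
        np.linalg.norm(D, "fro") ** 2 / np.linalg.norm(eps, "fro") ** 2
    )

def chordal_dist(X_blocks, Y_blocks):
    """Chordal distance between two flags in Stiefel coordinates (Tab. 3):
    sqrt(sum_i (m_i - tr(X_i^T Y_i Y_i^T X_i)))."""
    sq = sum(
        X.shape[1] - np.trace(X.T @ Y @ Y.T @ X)
        for X, Y in zip(X_blocks, Y_blocks)
    )
    # Clamp tiny negative values caused by floating-point round-off.
    return float(np.sqrt(max(sq, 0.0)))
```

Pairwise `chordal_dist` values between recovered flags give the distance matrix that MDS or $k$-nearest neighbors then consumes.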
We compute the SNR for each $\tilde{\mathbf{D}}$ via Eq. (16) and find the mean SNR over the 60 matrices is $-4.79\,\mathrm{dB}$, indicating that significant noise has been added to each $\mathbf{D}$. This experiment aims to find the method that best clusters the $\tilde{\mathbf{D}}$ matrices.

We use SVD (with 4 left singular vectors) and FD (with flag type $(2,4;10)$) on each of the 60 $\tilde{\mathbf{D}}$ matrices to recover the flag representations. Then the chordal distance is used to generate distance matrices. Finally, MDS visualizes these data in 2 dimensions. Our additional baseline is run on the Euclidean distance matrix between the flattened $\tilde{\mathbf{D}}$ matrices. We find that FD produces the distance matrix and MDS embedding with the most defined clusters in Fig. 7.

# 5.3. Hyperspectral Image Denoising

We consider denoising images captured by the AVIRIS hyperspectral sensor. Two hyperspectral images are used for model evaluation: the KSC and Indian Pines datasets [2]. KSC is a $(512 \times 614)$ image with 176 bands and Indian Pines is a $(145 \times 145)$ image with 200 bands. We run two experiments, one on each image, by randomly selecting a $50 \times 50$ square and flattening it to generate $\mathbf{D} \in \mathbb{R}^{2500 \times p}$ (pixels as rows and bands as columns). Then, we add mean-zero Gaussian noise of increasing variance to obtain $\tilde{\mathbf{D}}$, on which we run our FD and SVD to denoise. LRSE is measured between $\mathbf{D}$ and the denoised reconstruction $\hat{\mathbf{D}}$ (see Tab. 3) to determine the quality of the reconstruction.

![](images/956eaa5ff66b67203b368f8f97002702bcb74fe7442310c65c25615c99f5bcf7.jpg)
Figure 6. FD and RFD improve flag recovery and reconstruction error over SVD and IRLS-SVD. LRSE & Dist are defined in Tab. 3 with Dist as the chordal distance.

For our FD, we specify a flag of type $(8, 9, 10; 2500)$, and SVD is run using the first 10 left singular vectors.
The hierarchy used as input to our algorithm mirrors the spectrum hierarchy (see Example 2.2) by assigning $\mathcal{A}_1$ to the first 40 bands, $\mathcal{A}_2$ to the first 100 bands, and $\mathcal{A}_3$ to all the bands. We find in Fig. 8 that FD consistently improves HSI denoising over SVD. When testing exponential and uniform noise, FD and SVD produce similar quality denoising.

# 5.4. Hyperspectral Image Clustering

We now showcase image patch clustering using the KSC hyperspectral image. The data was pre-processed to remove low SNR and water absorption bands, then split into $3 \times 3$ patches of pixels from the same class. Each patch is translated into a $\mathbf{D} \in \mathbb{R}^{176 \times 9}$ (bands as rows and pixels as columns) with the hierarchy described in Example 2.1, with $\mathcal{A}_1$ as the center pixel. A flag recovery method is run on each $\mathbf{D}$ to extract a flag of type $(1,8;176)$. Then, we compute a chordal distance matrix between the collection of flags. Finally, we classify these flags using $k$-nearest neighbors. We compare FD to QR and SVD in Fig. 9 and find that FD produces the highest classification accuracy for the number of nearest neighbors between 6 and 24.

![](images/2ab17948e5c3dc7629bd68093b6d0ec1451b3bb8668dad745e0876625839f813.jpg)
Figure 7. (Top row) distance matrices using Euclidean distance (Euclidean) and chordal distance between flags (SVD and FD). (Bottom row) 2D representation of the data colored by cluster via MDS applied to the distance matrix.

![](images/dea06eb6b1f96800359d55d166e89d3bdd450a863c0ff1163c861a35c5c154a4.jpg)
Figure 8. FD improves hyperspectral image denoising over SVD (see SNR Eq. (16), LRSE Tab. 3).

Instead of using the chordal distance between flags, we also use a sum of Grassmannian chordal distances. We hypothesize that this is a better-suited distance for this example because it is more robust to outlier pixels.
Given $[[\mathbf{X}]], [[\mathbf{Y}]]\in \mathcal{FL}(1,8;176)$, we use a chordal distance on the product of Grassmannians that takes advantage of the embedding of $\mathcal{FL}(1,8;176)$ in $Gr(1,176)\times Gr(7,176)$. See our suppl. material for details.

# 5.5. Few-shot Learning

We deploy FD in few-shot learning using an AlexNet [21], pre-trained on ImageNet, as the feature extractor $f_{\Theta}: \mathcal{X} \to \mathbb{R}^{4096}$, admitting the representation $f_{\Theta} = f_{\Theta}^{(2)} \circ f_{\Theta}^{(1)}$ where the range of both $f_{\Theta}^{(1)}$ and $f_{\Theta}^{(2)}$ is $\mathbb{R}^{4096}$. We use the feature hierarchy outlined in Example 2.3 and the procedure in Sec. 4.3 to map the support of one class to a flag prototype using FD (see Fig. 4). The distance between a query point and a flag prototype is Eq. (15). We call this pipeline a flag classifier. Our baselines include Euclidean [51] and subspace [50] classifiers. No fine-tuning is used to optimize the feature extractor in any experiment.

Our two baseline methods, Euclidean and subspace, use means and subspaces as prototypes. Specifically, prototypical networks [51] are a classical few-shot architecture that uses averages for prototypes and Euclidean distance between prototypes and queries. On the other hand, subspace classifiers from adaptive subspace networks [50] use subspaces as prototypes and measure distances between prototypes and queries via projections of the queries onto the prototype. Building upon these baseline methods, we use a flag-based approach (see Figs. 1 and 4). For a fair comparison, the baselines stack features extracted by $f_{\Theta}^{(1)}$ and $f_{\Theta}$.

We evaluate flag classifiers on the EuroSat [18], CIFAR-10 [20], and Flowers102 [40] datasets, and report the average classification accuracy in Tab. 4 over 20 random trials, each containing 100 evaluation tasks with 10 query images and 5 ways per task.
We find that flag classifiers perform similarly to subspace classifiers and improve classification accuracy in two cases. Further results are in suppl. material.

![](images/835a1ea3609d2e379d248457351e04a75384d51bc8a95d3215729347874f13aa.jpg)
Figure 9. $k$-nearest neighbors classification accuracies (↑) using chordal distance matrices derived from flag representations of $3 \times 3$ image patches of the KSC dataset over 20 random trials with a $70\%{-}30\%$ training-validation split.

Table 4. Classification accuracy (↑) with $s$ shots, 5 ways, and 100 evaluation tasks each containing 10 query images, averaged over 20 random trials. Flag types for 'Flag' are $(s - 1, 2(s - 1))$ and the subspace dimension is $s - 1$.
| $s$ | Dataset | Flag | Euc. | Subsp. |
| --- | --- | --- | --- | --- |
| 3 | EuroSat | 77.7 | 76.7 | 77.6 |
| 3 | CIFAR-10 | 59.6 | 58.6 | 59.6 |
| 3 | Flowers102 | 90.2 | 88.2 | 90.2 |
| 5 | EuroSat | 81.8 | 80.7 | 81.8 |
| 5 | CIFAR-10 | 65.2 | 65.2 | 65.2 |
| 5 | Flowers102 | 93.2 | 91.4 | 93.2 |
| 7 | EuroSat | 83.9 | 82.6 | 83.8 |
| 7 | CIFAR-10 | 68.0 | 68.6 | 68.1 |
| 7 | Flowers102 | 94.5 | 92.7 | 94.5 |
+ +# 6. Conclusion + +We introduced Flag Decomposition (FD), a novel matrix decomposition that uses flags to preserve hierarchical structures within data. We further proposed Flag-BMGS to robustly find this decomposition even under noise and outlier contamination and studied its properties. With this algorithm, FD augments the machine learning arsenal by providing a robust tool for working with hierarchical data, applicable in tasks like denoising, clustering, and few-shot learning, as demonstrated by our evaluations. + +Limitations & future work. Our FD framework is a first step to hierarchy-aware decompositions and leaves ample room for future study. For example, Flag-BMGS is prone to the instabilities of Gram-Schmidt and requires a flag type and column hierarchy as inputs. In the future, we will improve algorithms for faster and more stable computation. Next, we plan to automate the flag type detection and explore fine-tuning a feature extractor for few-shot learning with flags. We will also investigate directly learning (latent) hierarchical structures from data. + +Acknowledgements. N. Mankovich thanks Homer Durand, Gherardo Varando, Claudio Verdun, and Bernardo Freitas Paulo da Costa for enlightening conversations on flag manifolds and their applications. N. Mankovich and G. Camps-Valls acknowledge support from the project "Artificial Intelligence for complex systems: Brain, Earth, Climate, Society" funded by the Department of Innovation, Universities, Science, and Digital Society, code: CIPROM/2021/56. This work was also supported by the ELIAS project (HORIZON-CL4-2022-HUMAN-02-02, Grant No. 101120237), the THINKINGEARTH project (HORIZON-EUSPA-2022-SPACE-02-55, Grant No. 101130544), and the USMILE project (ERC-SyG2019, Grant No. 855187). T. Birdal acknowledges support from the Engineering and Physical Sciences Research Council [grant EP/X011364/1]. T. 
Birdal was supported by a UKRI Future Leaders Fellowship [grant number MR/Y018818/1] as well as a Royal Society Research Grant RG/R1/241402. The work of I. Santamaria was partly supported under grant PID2022-137099NB-C43 (MAD-DIE) funded by MICIU/AEI /10.13039/501100011033 and FEDER, UE. + +# References + +[1] Khurrum Aftab, Richard Hartley, and Jochen Trumpf. Generalized Weiszfeld algorithms for lq optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(4):728-745, 2014. 4 +[2] Tatyana V. Bandos, Lorenzo Bruzone, and Gustavo Camps-Valls. Classification of hyperspectral images with regularized linear discriminant analysis. IEEE Transactions on Geoscience and Remote Sensing, 47(3):862-873, 2009. 7 +[3] Jesse L Barlow. Block modified Gram-Schmidt algorithms and their analysis. SIAM Journal on Matrix Analysis and Applications, 40(4):1257-1290, 2019. 4 +[4] Amir Beck and Shoham Sabach. Weiszfeld's method: Old and new results. Journal of Optimization Theory and Applications, 164:1-40, 2015. 4 +[5] Peter N. Belhumeur, Joao P Hespanha, and David J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997. 5 +[6] Ake Bjorck and Gene H Golub. Numerical methods for computing angles between linear subspaces. Mathematics of computation, 27(123):579-594, 1973. 3 +[7] Steffen Börm, Lars Grasedyck, and Wolfgang Hackbusch. Introduction to hierarchical matrices with applications. Engineering analysis with boundary elements, 27(5):405-422, 2003. 2 +[8] Nicolas Boumal, Bamdev Mishra, P-A Absil, and Rodolphe Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. JMLR, 15(1):1455-1459, 2014. 6 +[9] Ioana Ciuclea, Alice Barbora Tumpach, and Cornelia Vizman. Shape spaces of nonlinear flags. In International Con + +ference on Geometric Science of Information, pages 41-50. Springer, 2023. 1 +[10] James W Demmel. 
# A Focused Human Body Model for Accurate Anthropometric Measurements Extraction

Shuhang Chen $^{1}$ * Xianliang Huang $^{1}$ * Zhizhou Zhong $^{1}$ Juhong Guan $^{2}$ † Shuigeng Zhou $^{1}$ †

$^{1}$ Shanghai Key Lab of Intelligent Information Processing, and School of Computer Science, Fudan University, Shanghai, China
$^{2}$ School of Computer
Science and Technology, Tongji University, Shanghai, China
$^{1}\{chensh22,huangxl21,zzzhong22\}@m.fudan.edu.cn$ $^{2}$ jhguan@tongji.edu.cn $^{1}$ sgzhou@fudan.edu.cn

# Abstract

3D anthropometric measurements have a variety of applications in industrial design and architecture (e.g. vehicle seating and cockpits), clothing (e.g. military uniforms), ergonomics (e.g. seating), and medicine (e.g. nutrition and diabetes). Therefore, there is a need for systems that can accurately extract human body measurements. Current methods estimate human body measurements from 3D scans, incurring a heavy data collection burden. Moreover, slight variations in camera angle, distance, and body posture may significantly affect measurement results. In response to these challenges, this paper introduces a focused human body model for accurately extracting anthropometric measurements. Concretely, we design a Bypass Network based on CNN and ResNet architectures, which augments the frozen backbone SMPLer-X with additional feature extraction capabilities. Furthermore, to boost model effectiveness, we propose a dynamic loss function that automatically recalibrates the weights to make the network focus on targeted anthropometric parts. In addition, we construct a multimodal body measurement benchmark dataset consisting of depth, point clouds, meshes, and corresponding body measurements to support model evaluation and future anthropometric measurement research. Extensive experiments on both open-source and proposed human body datasets demonstrate the superiority of our approach over existing counterparts, including current mainstream commercial body measurement software.

# 1. Introduction

Nowadays, more and more consumers shop online for perfectly fitted apparel, driving the demand for mass personalization and custom-fitted clothing. Accurate body measurements are essential to meet this demand.
The growing popularity of reconstructing 3D human body shapes from images has expanded the applications of anthropometric measurements across various fields, including medical science [25], industrial design [13, 26, 44], the fashion industry (clothing design) [8], ergonomics [28], fitness [35], and entertainment [5, 10, 12].

Most existing anthropometric measurement methods can be roughly divided into two types: one-stage frameworks [2, 22, 33] and two-stage frameworks [14, 36, 41]. The former calculate body measurements directly from input images or landmarks in an end-to-end manner. However, image-based methods often struggle with scale ambiguity, making them sensitive to noise and hampering accurate measurement. In contrast, two-stage methods overcome these issues by taking body measurements from a deformed template model (e.g. SMPL [21] and SMPL-X [27]) that corresponds to a complete reconstruction of the input data. They therefore rely heavily on the accuracy of the reconstructed parametric human body model.

Some other methods [22, 29, 40] require capturing point clouds as input and extracting measurement factors from 3D scans. Since point cloud representations rely on accurate 3D data acquired via expensive 3D sensors, several studies turn to statistical models [6], linear regression [2], and convolutional neural networks [14, 36] for body shape measurements. However, these methods are hard to generalize to complex and variable scenarios: factors such as gender, camera angle, distance, and challenging postures prevent them from obtaining accurate measurements. Recently, Vision Transformer (ViT) models have achieved great success in several 2D and 3D tasks. Although researchers have tried to employ deep learning techniques for anthropometric measurements, there is no ViT-based model for extracting human body measurements
This is partly due to the challenges in gathering paired 3D data with corresponding labeled measurement values and the difficulty of training large-scale models with 3D data. + +To overcome the above problems, this paper introduces a large-scale body reconstruction model designed specifically to generate accurate measurement results in complex scenarios. We focus on optimizing a standard anthropometric measurement pipeline, which involves (1) reconstructing the SMPL model parameters to obtain human body meshes and (2) estimating the body measurements from the recovered meshes. They are referred to as two problems: Human Mesh Recovery (HMR) and Human Body Measurements Estimation (HBME), respectively. + +Concretely, inspired by the success of large-scale human body models [3] and the wide adoption of template-based approaches, we adopt the encoder-decoder ViT architecture as the backbone model to obtain reconstructed SMPL body parameters. Furthermore, we propose a bypass network to guide the frozen SMPL-X Model to focus on measurement parts. This bypass network, incorporating residual connections, enhances the feature extraction capability of the SMPLer-X model. Additionally, we combine the bypass network with a dynamic loss function to recalibrate the anthropometric and non-anthropometric regions, thereby generating accurate body measurements. + +As image-based methods raise privacy concerns related to facial features, resulting in a scarcity of publicly available circumference datasets. We construct a benchmark dataset, named Fashion-body, which includes human depth data, point clouds, meshes, and corresponding anthropometric measurements. Our dataset consists of examples collected from diverse angles, lighting conditions, and backgrounds. In addition, we also validate our method on point cloud data acquired from Mojave Flash Lidar Sensor to effectively prevent the leakage of users' private facial information [23, 45]. 
The major contributions of this paper are as follows:

- We propose a novel method for extracting anthropometric measurements using a ViT-based SMPLer-X human model from images or point clouds captured with a high-end Mojave Flash Lidar Sensor.
- We introduce a bypass network with a dynamic loss function into the foundation model SMPLer-X. By making the foundation model focus on location-specific reconstruction, both reconstruction performance and the accuracy of anthropometric measurements are significantly improved.
- To support the training and evaluation of the proposed model, we construct a new dataset, Fashion-body, which can also serve as a benchmark for future research in anthropometric measurements.

# 2. Related Work

Human Body and Measurements Datasets The Shape Completion and Animation of People (SCAPE) model [1] significantly advanced the field of human shape estimation by providing a method to build realistic 3D human body meshes in various shapes and poses. While datasets like CAESAR [30] offer extensive 3D scans and body measurements, they lack real images, necessitating simulation from scans using virtual cameras. Recently, Yan et al. [43] published the BodyFit dataset, featuring over 4K body scans with computed body measurements and simulated silhouettes, along with a collection of photographs and tape measurements from 194 subjects. Additionally, BodyM [31] is notable as the first large-scale dataset that pairs body measurements with silhouettes obtained through semantic segmentation of real photographs. However, a comprehensive dataset containing RGB images, point clouds, meshes, and corresponding ground truth measurement values is currently unavailable. In this work, we assemble point clouds, distance fields, and meshes with corresponding measurement values as our Fashion-body dataset.
Anthropometric Measurements Extraction Anthropometric measurements can be extracted from images, point clouds, template meshes, or learned features. Traditional tools and 3D scans typically provide high accuracy when obtaining body measurements directly, but they are cumbersome and relatively expensive. To address these issues, researchers have developed a series of fully automatic methods to reduce both the time and cost required for the measurement extraction process. The existing approaches can be broadly divided into two categories: landmark-based [4, 22] and template-based [42]. Landmark-based approaches involve scanning the human body, segmenting it, detecting landmarks, and then extracting measurements from these landmarks; for example, chest landmarks are used to measure the bust point distance. While landmark-based methods can efficiently calculate measurements from identified landmarks, they are sensitive to noise and missing data in the input scan. In contrast, template-based methods calculate measurements on a deformed human template that corresponds to a complete reconstruction of the human body; they overcome noise and missing data by fitting a human template body to the input scan. However, existing template-based methods have limited ability to generalize to different scene categories. Additionally, their reconstruction quality is unreliable under extreme viewing angles, changing backgrounds, and different shooting distances, making accurate measurement across various scenarios challenging.
To address this, we combine a Vision Transformer (ViT) with the template-based approach to harness the effectiveness and power of large-scale models.

Human Mesh Reconstruction The literature on recovering 3D human representations from RGB images is extensive [37, 39], with techniques falling into two broad categories. Parametric methods [1, 21] characterize the human body in terms of a parametric representation. Nonparametric methods directly regress a 3D body representation from images using convolutional neural networks [17], transformers [20], intermediate representations [24], or implicit functions [7, 32]. Recently, Lin et al. [19] proposed the first ViT-based backbone [9] to resolve the issues of previous approaches. This provides a promising and concise way to leverage scaled-up models in two-stage body measurement. However, there remains a scarcity of benchmark datasets for comparing body measurements, and few researchers are exploring the integration of additional data for generalizable and accurate body measurement results.

# 3. Method

# 3.1. Overview

Our objective is to acquire a mapping function $f(I) = (\theta, \beta, \pi)$ for reconstructing the individual depicted in an image, predicting his/her 3D pose and shape parameters ($\theta$ and $\beta$, respectively) from the provided image, where $\pi$ denotes the camera translation. The function $M(\theta, \beta)$ then maps shape and pose parameters to vertices. As illustrated in Fig. 1, we employ a Vision Transformer (ViT) architecture to learn this mapping. We then utilize a Bypass Network to provide additional supervision that yields accurate results on the measurement parts.
Observing that the original loss tends to optimize global key points rather than the local measurement points we focus on, we introduce a dynamic loss function that constantly re-weights the parameters between the measurement parts and the non-measurement parts. In addition, the measurement sets constitute our measurement modules, which are pre-defined based on the ISO 8559 standard [18]. Finally, we obtain the body measurements by feeding the reconstructed vertices into an anthropometric module.

# 3.2. SMPL-X Model

In this paper, SMPL-X [27] is used to jointly model the human body, face, and hands on top of SMPL. It comprises 75 rotational parameters for the global rotation and the body, eye, and jaw poses; 24 low-dimensional PCA coefficients or 90 rotational parameters for the hand poses; 10 parameters for the body shape; and 10 for the facial expressions. The joint regressor $\mathcal{F}$ is utilized to obtain 3D key points from the above parameters through a transformation function $\mathcal{R}_{\theta}$ along the kinematic tree.

# 3.3. Focused Human Body Model

Our goal is to develop a scalable and efficient model, pretrained on large datasets, to create body measurement vertices applicable across diverse scenarios, laying the foundation for future studies involving larger models in this field.

Human body model architecture. As illustrated in Fig. 1, we employ a modified Vision Transformer (ViT) [9] for extracting image tokens. The standard transformer decoder [38] consists of six layers, each incorporating multi-head self-attention, multi-head cross-attention, and feed-forward blocks with layer normalization. The decoder has a hidden dimension of 2048, eight 64-dim cross-attention heads, and a hidden dimension of 1024 in the feed-forward MLP block. It operates on a single learnable 2048-dimensional SMPL query token as input and cross-attends to the output image tokens.
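The query-token readout described above can be illustrated with a toy numpy sketch: one SMPL query token cross-attends to the image tokens, and linear heads read out pose, shape, and camera. Dimensions are reduced, all weights are random stand-ins for learned parameters, and the head sizes (72 pose, 10 shape, 3 camera) are assumptions following the SMPL parameterization; this is not the actual SMPLer-X implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                 # toy hidden size (2048 in the paper)
n_tokens = 32 * 24     # one token per image patch

smpl_query = rng.normal(size=(1, d))        # single learnable SMPL query token
image_tokens = rng.normal(size=(n_tokens, d))

# One cross-attention head with random projection weights.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = smpl_query @ Wq, image_tokens @ Wk, image_tokens @ Wv
scores = (q @ k.T) / np.sqrt(d)             # (1, n_tokens) attention logits
attn = np.exp(scores - scores.max())
attn /= attn.sum()                          # softmax over image tokens
context = attn @ v                          # (1, d) attended feature

# Linear regressor heads read out pose theta, shape beta, and camera pi.
heads = {name: rng.normal(size=(d, m)) / np.sqrt(d)
         for name, m in [("theta", 72), ("beta", 10), ("pi", 3)]}
theta, beta, pi = (context @ heads[n] for n in ("theta", "beta", "pi"))
```

Because there is a single query, the attention weights form one distribution over all image tokens; the attended feature is then read out by three independent linear heads.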
Before entering the backbone, the image is cropped by a whole-body bounding box and resized to $512 \times 384$. It is then tokenized into $32 \times 24$ patches with a patch size of 16, followed by patch embedding and positional encoding to obtain image tokens $I_{token}$. Additional learnable tokens are concatenated with the SMPL query token and processed by the ViT. A positional module is employed to predict 3D key points. Finally, a linear regressor module reads out 3D key points from the transformer decoder, providing the pose $\theta$, shape $\beta$, and camera $\pi$.

Focusing network. Our focusing network employs a bypass network to extract various guidance features, preserving the pre-trained features for consistent performance and promoting efficient feature extraction. As depicted in Fig. 1, we employ a lightweight module based on CNN and ResNet architectures, serving as a plug-and-play bypass network. This network enhances the feature extraction capabilities of the backbone SMPLer-X model, preventing raw image information from being discarded during data transfer. In addition, we utilize SirenReLU [34] as the activation function to efficiently extract features from the image, facilitating precise human reconstruction. This activation function is designed to represent and learn high-frequency functions, especially complex patterns that are difficult to represent in conventional neural networks.

![](images/36d9a9b999dbe43336893a616980e9c2a3d67dc31d8155700664270820567845.jpg)
Figure 1. The architecture of our method. Our human body model is based on a frozen SMPLer-X, while a trainable Bypass Network constrains the backbone to focus on the measurement parts. Reconstruction results are constantly refined by the dynamic loss $\mathcal{L}_{\mathrm{DL}}$. The anthropometric module outputs the final body measurements, which are calculated based on the ISO 8559 standard.

# 3.4.
Human Body Measurement

Our anthropometric module is based on the SMPL model, a parameterized body model derived from numerous high-quality 3D scans of diverse individuals in varying shapes and poses. This model decomposes body deformations into 72 pose and 10 shape parameters. Initially, we standardize the pose and shape parameters for each gender to create an average-shaped mesh in an A-pose. We define an accurate mesh range of measurements on the template of the SMPL model. We then obtain a high-quality mesh and bounding boxes from the Focused Human Body Model and approximate the subject height as the height of the bounding box.

Subsequently, we introduce a few notations to describe our anthropometric module mathematically. Given an SMPL surface template $T$, we define measurement planes $P_{i} (i \in 1, 2, \dots, k)$ for a user-defined set of $k$ body measurements, including chest circumference, waist circumference, hip circumference, wrist circumference, and shoulder width, according to the ISO 8559 standard, which describes the location of measurements used for the creation of physical and digital anthropometric databases. Fig. 3 visualizes the anthropometric measurements selected from this standard. The measurement set $\mathcal{M}_i$ represents the intersection of the surface template $T$ with the measurement plane $P_{i}$. Note that a measurement set $\mathcal{M}_i$ is the combination of adjacent measurement vertices $\mathbf{v}_m^{(i)}$ and $\mathbf{v}_n^{(i)}$ (examples $\mathbf{v}_1^{(i)}$ and $\mathbf{v}_2^{(i)}$ are illustrated in Fig. 2). Finally, we define $|\mathcal{M}_i|$ as the anthropometric measurement value:

$$
\left| \mathcal{M}_i \right| = \left| \mathbf{v}_1^{(i)} - \mathbf{v}_2^{(i)} \right| \oplus \left| \mathbf{v}_2^{(i)} - \mathbf{v}_3^{(i)} \right| \oplus \dots \oplus \left| \mathbf{v}_m^{(i)} - \mathbf{v}_n^{(i)} \right| \tag{1}
$$

where $\oplus$ is the row-wise concatenation operator, and the Manhattan distance between adjacent measurement vertices $\mathbf{v}$ is utilized, which effectively expedites computation. According to the definition above, our measurement module outputs $|\mathcal{M}_i|$ as the anthropometric result.

![](images/3ec0b062fad6b60511e527ed598e5aa8d5445670c7eb49e2f0730d29ce6773b6.jpg)
(a)

![](images/6c17fe7977e4abb583a063dc810cd66d24bf31f760a9f4e76a0d4ed9f8f840bf.jpg)
(b)

![](images/d708835e901d7b3995f8122900ebfcd6b893bf33772c1bf0cefc0dd8d82e2310.jpg)
(c)

Figure 2. Illustration of the measurement plane $P_{i}$ with the surface template $T$ and their intersection $\mathcal{M}_i$. Both $\mathbf{v}_1^{(i)}$ and $\mathbf{v}_2^{(i)}$ are adjacent measurement vertices belonging to the measurement part $\mathcal{M}_i$.

# 3.5. Loss Function

We propose a dynamic loss function specially developed to train our architecture and boost the performance of body measurements. Our total loss function is a weighted sum of $\mathcal{L}_{\mathrm{coord}}$ and $\mathcal{L}_{\mathrm{para}}$, which are defined in Eq. (2) and Eq. (3), respectively.

We introduce a dynamic weight parameter, denoted as
The coordination loss $\mathcal{L}_{coord}$ is defined as follow: + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {c o o r d}} = (1 - \alpha) \left(\sum_ {\mathbf {v} _ {i} \in \mathcal {M}} w _ {i} \| \hat {\mathbf {v}} _ {i} - \mathbf {v} _ {i} \| _ {2} ^ {2}\right) \\ + \alpha \left(\sum_ {\mathbf {v} _ {i} ^ {o} \in T / \mathcal {M}} w _ {i} \| \hat {\mathbf {v}} _ {i} ^ {o} - \mathbf {v} _ {i} ^ {o}) \| _ {2} ^ {2}\right), \tag {2} \\ \end{array} +$$ + +where $\hat{\mathbf{v}}_i^o$ and $\mathbf{v}_i^o$ are the $i$ -th predicted and ground truth 3D vertex outside the measurement parts and $\hat{\mathbf{v}}_i$ and $\mathbf{v}_i$ are the $i$ -th predicted and ground truth 3D vertex locations on the measurement parts. $T / \mathcal{M}$ is the vertex on template $T$ outside the measurement set $\mathcal{M}$ . The weight $w_i$ for vertex $v_i$ is proportional to the average area of the mesh triangles connected to vertex $\hat{v}_i$ . + +Whenever we have access to the GT SMPL pose parameters $\theta^{*}$ and shape parameters $\beta^{*}$ , a $\mathcal{L}_{\mathrm{para}}$ loss is bootstrapped to the model predictions. + +$$ +\mathcal {L} _ {\text {p a r a}} = \left| \left| \theta - \theta^ {*} \right| \right| _ {2} ^ {2} + \left| \left| \beta - \beta^ {*} \right| \right| _ {2} ^ {2} \tag {3} +$$ + +Where $\theta$ and $\beta$ are the predicted pose and shape parameters. + +Finally, the total dynamic loss of our scaling-up human body model is as follows: + +$$ +\mathcal {L} _ {\mathrm {D L}} = \lambda_ {\text {c o o r d}} \mathcal {L} _ {\text {c o o r d}} + \lambda_ {\text {p a r a}} \mathcal {L} _ {\text {p a r a}}. \tag {4} +$$ + +# 4. Experiments + +In this section, we conduct comprehensive experiments on the open-source body measurement dataset, proposed Fashion-body dataset, and Lidar data to validate the robustness and effectiveness of our method. We showcase qualitative results of reconstructed body shape and quantitative results of measurement values. 
Ablation studies are discussed to evaluate the impact of the tuning loss and the performance gained by integrating the proposed bypass network.

# 4.1. Datasets

Open-source body measurement dataset. To evaluate and compare the proposed method with the state-of-the-art body measurement baselines, we carried out experiments on the Body Measurements Dataset [16], which includes the front and side images of 391 males and 324 females with corresponding GT measurements.

Fashion-body dataset. Previous body measurement datasets lack the details and diversity of real body shapes, such as distance, point clouds, and meshes. Our Fashion-body dataset fills this gap, being the first public dataset to include not only 30 paired front and side images but also distance, point clouds, and meshes. An example is illustrated in Fig. 5. Each subject was measured by a skilled anthropologist, providing chest, waist, hip, wrist, and shoulder width as the GT measurement values for our experiments. RGB images were captured in an indoor setup, with subjects standing in an A-pose and the capture distance varying from 2.5 to 3.5 meters. Note that we do not require subjects to wear tight-fitting clothing. We also utilize a high-performance commercial Mojave Sensor for scanning subjects aged between 18 and 24 years old. The sensor outputs amplitude and distance files, which can be converted into point clouds, allowing us to conduct tests on LiDAR data.

# 4.2. Baselines and Metrics

We compare our method with several state-of-the-art body measurement baselines, divided into three categories. The first consists of end-to-end methods, e.g., Linear Regression [2] and NeuralAnthro [36]. Second, self-defined baselines, denoted HMR-BMViT and 4D-BMViT, are proposed to validate the effectiveness of our pipeline. HMR-BMViT utilizes the backbone of HMR [15] to reconstruct a full 3D mesh in an end-to-end manner.
4D-BMViT is a fully updated version of HMR, which adopts an end-to-end transformer architecture [11] as the backbone to reconstruct the human body mesh. Finally, we select three existing commercial 3D body measurement mobile applications and set consistent input forms for a fair comparison with our method. Verifyt is a 3D body scanning app that allows users to share measurements with Verifyt-powered brands and receive accurate size recommendations or custom-fit products. YourFit is an AI-powered 3D body scanning solution from the 3DLook company, targeting precise sizing, virtual fitting, and health and fitness data tracking.

To quantitatively evaluate the measurement results, we utilize the following standard metrics: mean absolute error (MAE), relative percentage error (RPE), and mean absolute difference (AMAD). We report the mean of each metric, and measurement values are given in centimeters (cm).

# 4.3. Implementation Details

The training of our backbone human body model is conducted on 8 GeForce RTX 4090 GPUs, with a total batch size of 512 (256 for ViT-Huge) for 10 epochs. The lightweight focusing network takes only 20 minutes to train for 20 epochs. We use the Adam optimizer with cosine annealing for both training and fine-tuning. The learning rate for training is $1\times 10^{-5}$ with a minimum of $1\times 10^{-6}$, while the learning rate for fine-tuning is $1\times 10^{-5}$ with a minimum of $5\times 10^{-7}$. The input image is first patchified into input tokens and passed through the transformer to get output tokens, which are then passed to the transformer decoder. We use
Each object consists of point cloud data, distance from camera, SMPL mesh, and corresponding ground truth body measurements. The samples leverage the Open3D library for visualizing the point cloud and OpenCV for displaying the distance and amplitude images. + +![](images/74f20af1bebde9bb54e477c275431f5159252d1dbf1775de5a20580fe25129ff.jpg) +Mesh + +![](images/aca69269d98fb193ab24f2b59f89c68af394f974a7bee14d94b08f387a9a7a19.jpg) +GT Measurements + +a ViT-H/16, the "Huge" variant with $16 \times 16$ input patch size. It has 50 transformer layers, takes a $256 \times 192$ sized image as input, and outputs $16 \times 12$ image tokens, each of dimension 1280. During training, we set $\lambda_{\mathrm{coord}}$ and $\lambda_{\mathrm{para}}$ to 0.2 and 0.1. The parameter $\alpha$ is initialized to zero and gradually increased throughout the iterations. + +![](images/f6506c0abb5dfc23bd76a5f4bf31fe7bee83072dbce3bd5c12c8f76e21eb4f0d.jpg) +Person1_front + +![](images/7cd3d4dbc0d278060d4db1cda4a3ec5035ffe50f4a352b63eef3e74ce952871c.jpg) +Person2_front + +![](images/1d3da70a1e2f81a1d5c7c3d945d176ef57749ad446228437047c77d56d92cc92.jpg) +Person3_front + +![](images/8cce056b8ec6ba3bc8a0a7bd9e7c6ba93cb452249829e202efd9f967c1f80bf0.jpg) +Person4_side +Figure 4. Reconstruction results of front and side views on the body measurement dataset and our Fashion-body dataset. + +![](images/3a51df60bb7509d6bb5b6b1341b2b4ae67e30ed3e086dbb9c253b6b7fb519dce.jpg) +Person5_side + +![](images/10e0d7572e8021a0ba9359e7a3c4e3981c480b874567bbc1349248df1d38a344.jpg) +Person6_side + +# 4.4. Qualitative and Quantitative Results + +Reconstruction results from images. We present three examples of front-view reconstruction from the body measurement dataset in the 1st row of Fig. 4, and three examples of side-view reconstruction from our Fashion-body dataset in the 2nd row.
The reconstructed subjects from the body measurement dataset show good results even under extreme camera views and varying camera distances, facilitating subsequent anthropometric measurement extraction. Furthermore, our method achieves accurate reconstructions for side-view subjects in our Fashion-body dataset, even in + +scenes with heavy clothing. + +Measurement results from images. Here, we evaluate four measurement parts and five body parts on the body measurement dataset and our Fashion-body dataset, each with a single image as input. The corresponding MAE results are shown in Tab. 1. Linear Regression [2] is a simple linear model that estimates body measurements based on height and weight. NeuralAnthro [36] employs a convolutional neural network and four additional body dimensions as supervision signals for body measurement estimation. Linear Regression achieves MAEs of approximately 9.74 centimeters on the body measurement dataset and 6.23 centimeters on our Fashion-body dataset. NeuralAnthro performs worse, especially in measuring hip circumference. On the other hand, HMR-BMViT achieves lower MAE errors on both datasets, with similar performance observed for 4D-BMViT. This improvement can be attributed to the adoption of the transformer architecture, which outperforms traditional DNN and CNN networks. More importantly, our method surpasses both HMR-BMViT and 4D-BMViT across all body measurements, as it leverages the scalability of the focusing network to ensure reliable measurement values. Notably, the deviations in the overall measurements of the baseline methods arise from their inadequate optimization in reconstructing the measurement parts. + +Measurement results from point clouds. We obtain the point clouds from the output amplitude and distance of the Mojave Sensor (for the detailed process, please refer to the appendix). The obtained point clouds are converted into meshes using a self-supervised method [46].
Then, we utilize our anthropometric module, the Point2Mesh baseline, to calculate measurements from the reconstructed 3D body. Fig. 5 demonstrates the qualitative reconstructions for different poses with $360^{\circ}$ rotation. For a fair comparison, we re-train all models on the same validation dataset. We present MAE results of HMR-BMViT, 4D-BMViT, and our proposed Point2Mesh in Tab. 2 to demonstrate the quantitative performance of Our-BMViT on lidar data. Our-BMViT obtains superior performance across all measurements, with small MAE errors in chest, waist, hip, wrist, and shoulder + +![](images/7910d7f318176cffedddb6193d8e3647a6920c9c6408887cb9519255b9fed0f7.jpg) + +![](images/421704f26ddaeec874c3695398c4f80eecc09bbf41d643b993fd2aad40b73eae.jpg) + +![](images/ff2b6927a19cef6c4241f42bbbafe56a485d1181668f05e20206a75b3f00701b.jpg) + +![](images/0884305f56c2f1f497b68b70e3c6f0b626d8cff4d6fddf974ce463c51cb177eb.jpg) + +![](images/2a50ac911ecf1a40873df3e9eda0e53eeb806e6a84422e6663d763d95df16560.jpg) + +Figure 5. Qualitative body reconstruction results with different poses from the amplitude and distance of our Fashion-body dataset.
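The MAE and RPE metrics from Sec. 4.2 reduce to element-wise formulas over the predicted and GT values. A minimal sketch in Python; the circumference values below are illustrative placeholders, not results from either dataset:

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error, in the unit of the inputs (here: cm)."""
    return np.mean(np.abs(np.asarray(pred, float) - np.asarray(gt, float)))

def rpe(pred, gt):
    """Relative Percentage Error, averaged over all measurements."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return np.mean(np.abs(pred - gt) / gt) * 100.0

# Hypothetical chest circumferences (cm) for four subjects.
pred = [92.1, 88.4, 101.0, 95.5]
gt = [90.0, 90.0, 100.0, 96.0]
print(mae(pred, gt))  # mean of 2.1, 1.6, 1.0, 0.5 -> 1.3 cm
print(rpe(pred, gt))
```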
| Method | Chest | Waist | Hip | Shoulder Width | Chest | Waist | Hip | Wrist |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Linear Regression | 5.93 | 7.43 | 15.98 | 9.62 | 4.24 | 5.52 | 8.67 | 2.46 |
| NeuralAnthro | 11.84 | 12.28 | 17.70 | 4.46 | 10.98 | 11.89 | 20.04 | 1.71 |
| HMR-BMViT | 6.80 | 10.21 | 9.29 | 3.22 | 6.23 | 4.88 | 7.76 | 0.72 |
| 4D-BMViT | 5.19 | 14.53 | 13.54 | 2.37 | 6.92 | 11.16 | 13.70 | 0.98 |
| Our-BMViT | 3.29 | 6.73 | 8.27 | 2.29 | 3.07 | 3.91 | 6.95 | 0.41 |

Table 1. Mean Absolute Error (MAE) results on the Body Measurements Dataset (columns 2-5) and our Fashion-body dataset (columns 6-9). We report the MAE between the predicted circumference and the GT circumference.

width. This is attributed to the reconstructions being well-aligned with the raw amplitude and distance.
| Method | Chest | Waist | Hip | Wrist | Shoulder Width |
| --- | --- | --- | --- | --- | --- |
| HMR-BMViT | 4.12 | 8.70 | 7.74 | 1.47 | 4.67 |
| 4D-BMViT | 6.25 | 5.80 | 5.80 | 0.74 | 3.91 |
| Point2Mesh | 5.03 | 3.44 | 6.40 | 0.59 | 1.73 |
| Our-BMViT | 3.07 | 3.41 | 5.71 | 0.41 | 1.53 |

Table 2. Measurement results (MAE in cm) on lidar data.

Comparison with mobile applications. To compare our proposed method with mainstream commercial body measurement applications, we conducted experiments on six subjects (three males and three females), utilizing an iPhone 12 to capture the input data. We report the mean absolute error (MAE) for chest circumference, waist circumference, hip circumference, and wrist circumference in Tab. 3. Our method presents slightly better results compared to the commercially available anthropometric software Verifyt. While YourFit predicts measurements directly from a 3D scan in the standing pose, it achieves an average MAE of 5.1 centimeters for chest measurements, compared to 3.32 centimeters with our method. Although Verifyt outperforms in hip measurements, it exhibits a large deviation in waist measurements. Overall, our method demonstrates superior performance across all dimensions of body measurements, with significant improvements in chest, waist, and wrist measurements, outperforming both 3DLook and Verifyt on our dataset.
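The distance files output by the sensor are converted into point clouds before mesh reconstruction (Sec. 4.1); the core of such a conversion is standard pinhole back-projection. A minimal sketch, where the intrinsics `fx, fy, cx, cy` and the toy depth image are illustrative assumptions, not the Mojave Sensor's actual calibration:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a distance (depth) image into a 3D point cloud.

    depth: (H, W) array of metric depths; zero-depth pixels are invalid.
    Returns an (N, 3) array of XYZ points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical 2x2 depth image at roughly 3 m capture distance.
depth = np.array([[3.0, 3.0], [0.0, 3.0]])
cloud = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid pixels, one dropped
```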
| Method | Chest | Waist | Hip | Wrist |
| --- | --- | --- | --- | --- |
| 3DLook | 5.21 | 4.29 | 5.62 | 0.56 |
| Verifyt | 3.63 | 8.30 | 1.19 | 0.33 |
| Our-BMViT | 3.32 | 3.53 | 2.64 | 0.11 |

Table 3. Average MAE (cm) comparison with commercial body measurement applications on our Fashion-body dataset.
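Circumferences such as those reported above are extracted from the reconstructed 3D body by slicing the mesh near a measurement height. The sketch below approximates a circumference as the perimeter of the angularly ordered cross-section; this is one common approach, not necessarily the exact procedure of our anthropometric module:

```python
import numpy as np

def circumference_at_height(vertices, height, band=0.01):
    """Approximate a body circumference from mesh vertices.

    Collect vertices within +/-band of the target height (y-axis up),
    project them onto the horizontal x-z plane, order them by angle
    around the centroid, and sum the closed polygon's edge lengths.
    """
    band_pts = vertices[np.abs(vertices[:, 1] - height) < band][:, [0, 2]]
    center = band_pts.mean(axis=0)
    angles = np.arctan2(band_pts[:, 1] - center[1], band_pts[:, 0] - center[0])
    ring = band_pts[np.argsort(angles)]
    closed = np.vstack([ring, ring[:1]])  # close the loop
    return np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))

# Sanity check on a synthetic cylinder slice of radius 0.15 m.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
verts = np.stack([0.15 * np.cos(theta),
                  np.full_like(theta, 1.3),  # all vertices at height 1.3 m
                  0.15 * np.sin(theta)], axis=1)
c = circumference_at_height(verts, height=1.3)
print(c)  # close to 2*pi*0.15, i.e. about 0.942 m
```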
| Method | Chest | Waist | Hip | Wrist | Shoulder Width |
| --- | --- | --- | --- | --- | --- |
| SMPLer-X | 6.34 | 9.84 | 11.05 | 1.13 | 4.57 |
| Ours w/o $\mathcal{L}_{\mathrm{DL}}$ | 5.61 | 9.22 | 10.53 | 1.02 | 4.42 |
| Ours w/o Bypass Network | 4.73 | 7.04 | 10.36 | 1.00 | 4.48 |
| Our full model | 3.07 | 3.91 | 6.95 | 0.41 | 2.03 |

Table 4. Ablation study on our Fashion-body dataset. We report the variations of chest size, waist size, and hip size between the predicted circumference and the ground-truth circumference.

# 4.5. Ablation Study

We investigate the impact of $\mathcal{L}_{\mathrm{DL}}$ and the bypass network on final body measurement accuracy. As shown in Tab. 4, the design of the bypass network incorporating the $\mathcal{L}_{\mathrm{DL}}$ loss is highly effective. This is attributed to concatenating additional features obtained from the bypass network to the backbone network. Fig. 1 illustrates that our focused network structure enhances the feature extraction capability of the SMPLer-X module through residual connections. Consequently, the final extracted features are significantly enriched. The original network structure focuses on learning the global body + +![](images/8d86f047f580cd6fdb3789fd5160ae44b47a5683b64a2d87ab1c70e050f9ce9f.jpg) + +![](images/7733f29929e60ceed01f036e4769a89d909059fc319968767cc646eb9f11ddd9.jpg) + +![](images/bd7f4a6c5f9b9cf9d5f8498b515534be3c41931272aebe0f2525fcc3addf0e77.jpg) + +![](images/683ed37b469146cdf7fcc278ea53e07eff760a3d714059a9435e416fb96813e4.jpg) + +![](images/72f0c5d3d46b561bcd0a854783ce7494f0d50acff2d71f20b9818d0e085c9064.jpg) + +![](images/59c64c7cce7a016350527c5494142651566d725b055088af34ed86dd630fb60e.jpg) +Image + +![](images/c94c9e88960f88a1704efe17beb32f182c5d772ff45739c66d9c8cd282b9f37d.jpg) +SMPLer-X + +![](images/89e6a264580c6b6051849498ba6465a0c6a243f5cff80907541f07fa11682404.jpg) +w/o $\mathcal{L}_{\mathrm{DL}}$ loss + +![](images/151ed169c8ea58b447db05f7832ae3ae94c0757d48347f566d53cb8a3fa15586.jpg) +w/o Bypass Network + +![](images/97956b2b71ca49e259db5a7a2a7a3f1057abc0fe77dca37dee84edf32adb5ea1.jpg) +Our full model +Figure 6. Ablation results of our proposed loss function and bypass network on synthetic and realistic scenes. Our full model performs best in terms of 3D key points and depth maps.
From left to right, we compare the experimental results of the original SMPLer-X, w/o bypass network, and ours. Both the bypass network and $\mathcal{L}_{DL}$ improve the final results. + +information, which is unnecessary in our measurement task. The $\mathcal{L}_{\mathrm{DL}}$ loss term directs the bypass network's focus to specific measurement parts, enabling the production of accurate measurements in the subsequent stage. As shown in Tab. 4, without the addition of the $\mathcal{L}_{\mathrm{DL}}$ loss, the predicted measurement values are less accurate. The best results are achieved when both the $\mathcal{L}_{\mathrm{DL}}$ loss and the bypass network are integrated into our framework. As shown in Fig. 6, we perform the ablation study for the bypass network and the $\mathcal{L}_{\mathrm{DL}}$ loss on two examples from the open-source dataset and our Fashion-body dataset. We also investigate how the accuracy of different reconstruction modules affects the final body measurement error. Our full model performs best in terms of 3D key points and overall measurement parts. From left to right, we compare the experimental results of the original SMPLer-X, w/o $\mathcal{L}_{\mathrm{DL}}$ , w/o bypass network, and our measurement approach. Both the bypass network and $\mathcal{L}_{\mathrm{DL}}$ improve the final results. + +# 5. Conclusion + +In this paper, we address the growing demand for accurate 3D anthropometric measurements in online garment shopping, aiming to streamline the production of well-fitted clothing. Existing methods relying on 3D scans encounter challenges in data collection and generalization across diverse scenarios. To overcome these challenges, we introduce a novel approach featuring a scaled-up human body model (SMPLer-X) capable of extracting measurements from both images and lidar data. A dynamic loss function and a bypass network based on CNN and ResNet architectures are integrated to enhance the effectiveness of feature extraction.
In addition, a dataset called Fashion-body is constructed to support model training and evaluation, as well as future research on anthropometric measurements. Experimental results on both existing and proposed datasets demonstrate that our method is superior to current baselines and commercial software, offering a promising direction for cost-effective and tailored clothing production. + +# References + +[1] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. Scape: shape completion and animation of people. In ACM SIGGRAPH 2005 Papers, pages 408-416. 2005. 2, 3 +[2] Kristijan Bartol, David Bojanic, Tomislav Petkovic, Stanislav Peharec, and Tomislav Pribanić. Linear regression vs. deep learning: A simple yet effective baseline for human body measurement. Sensors, 22(5):1885, 2022. 1, 5, 6 +[3] Zhongang Cai, Wanqi Yin, Ailing Zeng, Chen Wei, Qingping Sun, Yanjun Wang, Hui En Pang, Haiyi Mei, Mingyuan Zhang, Lei Zhang, et al. Smpler-x: Scaling up expressive human pose and shape estimation. arXiv preprint arXiv:2309.17448, 2023. 2 +[4] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7291-7299, 2017. 2 +[5] Hejia Chen, Haoxian Zhang, Shoulong Zhang, Xiaoqiang Liu, Sisi Zhuang, Yuan Zhang, Pengfei Wan, Di Zhang, and Shuai Li. Cafe-Talk: Generating 3d talking face animation with multimodal coarse- and fine-grained control. In The Thirteenth International Conference on Learning Representations. 1 +[6] Zhi-Quan Cheng, Yin Chen, Ralph R Martin, Tong Wu, and Zhan Song. Parametric modeling of 3d human body shape—a survey. Computers & Graphics, 71:88–100, 2018. 1 +[7] Enric Corona, Albert Pumarola, Guillem Alenya, Gerard Pons-Moll, and Francesc Moreno-Noguer. SMPLicit: Topology-aware generative model for clothed people.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11875-11885, 2021. 3 +[8] Hein Daanen and Sung-Ae Hong. Made-to-measure pattern development based on 3d whole body scans. International Journal of Clothing Science and Technology, 20(1):15-25, 2008. 1 +[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. CoRR, abs/2010.11929, 2020. 3 +[10] Eric J Drinkwater, David B Pyne, and Michael J McKenna. Design and interpretation of anthropometric and fitness testing of basketball players. Sports medicine, 38:565-578, 2008. 1 +[11] Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4d: Reconstructing and tracking humans with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14783-14794, 2023. 5 +[12] Xianliang Huang, Shuhang Chen, Zhizhou Zhong, Jiajie Gou, Jihong Guan, and Shuigeng Zhou. Hi-nerf: Hybridizing 2d inpainting with neural radiance fields for 3d scene inpainting. In Proceedings of the Asian Conference on Computer Vision, pages 2855–2871, 2024. 1 + +[13] Xiaosong Jia, Zhenjie Yang, Qifeng Li, Zhiyuan Zhang, and Junchi Yan. Bench2drive: Towards multi-ability benchmarking of closed-loop end-to-end autonomous driving. arXiv preprint arXiv:2406.03877, 2024. 1 +[14] Nastaran Nourbakhsh Kaashki, Pengpeng Hu, and Adrian Munteanu. Anet: A deep neural network for automatic 3d anthropometric measurement extraction. IEEE Transactions on Multimedia, 2021. 1 +[15] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7122-7131, 2018. 5 +[16] Muhammad Ubale Kiru. 
Body measurements datasets. 2021. 5 +[17] Nikos Kolotouros, Georgios Pavlakos, and Kostas Daniilidis. Convolutional mesh regression for single-image human shape reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4501-4510, 2019. 3 +[18] Youngsuk Lee. The definition and generation of body measurements (iso 8559 series of standards). In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018) Volume IX: Aging, Gender and Work, Anthropometry, Ergonomics for Children and Educational Environments 20, pages 405-422. Springer, 2019. 3 +[19] Jing Lin, Ailing Zeng, Haoqian Wang, Lei Zhang, and Yu Li. One-stage 3d whole-body mesh recovery with component aware transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21159-21168, 2023. 3 +[20] Kevin Lin, Lijuan Wang, and Zicheng Liu. End-to-end human pose and mesh reconstruction with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1954-1963, 2021. 3 +[21] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multiperson linear model. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, pages 851-866. 2023. 1, 3 +[22] Lukasz Markiewicz, Marcin Witkowski, Robert Sitnik, and Elżbieta Mielicka. 3d anthropometric algorithms for the estimation of measurements required for specialized garment design. Expert Systems with Applications, 85:366-385, 2017. 1, 2 +[23] Yuxi Mi, Zhizhou Zhong, Yuge Huang, Jiazhen Ji, Jianqing Xu, Jun Wang, Shaoming Wang, Shouhong Ding, and Shuigeng Zhou. Privacy-preserving face recognition using trainable feature subtraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 297-307, 2024. 2 +[24] Gyeongsik Moon and Kyoung Mu Lee.
I2l-meshnet: Image-to-lixel prediction network for accurate 3d human pose and mesh estimation from a single rgb image. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VII 16, pages 752-768. Springer, 2020. 3 +[25] Cynthia L Ogden. Mean body weight, height, and body mass index: United States 1960-2002. Number 347. Department + +of Health and Human Services, Centers for Disease Control and ..., 2004. 1 +[26] Byoung-Keon D Park, Sheila Ebert, and Matthew P Reed. A parametric model of child body shape in seated postures. Traffic injury prevention, 18(5):533-536, 2017. 1 +[27] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975-10985, 2019. 1, 3 +[28] Stephen Pheasant and Christine M Haslegrave. *Bodyspace: Anthropometry, ergonomics and the design of work*. CRC press, 2018. 1 +[29] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 1 +[30] Kathleen M Robinette, Sherri Blackwell, Hein Daanen, Mark Boehmer, Scott Fleming, Tina Brill, David Hoeferlin, and Dennis Burnsides. Civilian american and european surface anthropometry resource (caesar), final report, volume i: Summary. Sytronics Inc Dayton Oh, 2002. 2 +[31] Nataniel Ruiz, Miriam Bellver, Timo Bolkart, Ambuj Arora, Ming C Lin, Javier Romero, and Raja Bala. Human body measurement estimation with adversarial augmentation. In 2022 International Conference on 3D Vision (3DV), pages 219-230. IEEE, 2022. 2 +[32] Shunsuke Saito, Tomas Simon, Jason Saragih, and Hanbyul Joo. 
Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 84-93, 2020. 3 +[33] Jaineel Shah, Chinmay Shah, Harshit Sandhu, Maruf Shaikh, and Prachi Natu. A methodology for extracting anthropometric measurements from 2d images. In 2019 International Conference on Advances in Computing, Communication and Control (ICAC3), pages 1-6. IEEE, 2019. 1 +[34] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in neural information processing systems, 33:7462-7473, 2020. 3 +[35] A Strudwick and T Reilly D Doran. Anthropometric and fitness profiles of elite players in two football codes. Journal of sports medicine and physical fitness, 42(2):239, 2002. 1 +[36] Yansel González Tejeda and Helmut A Mayer. A neural anthropometer learning from body dimensions computed on human 3d meshes. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1-8. IEEE, 2021. 1, 5, 6 +[37] Yating Tian, Hongwen Zhang, Yebin Liu, and Limin Wang. Recovering 3d human mesh from monocular images: A survey. IEEE transactions on pattern analysis and machine intelligence, 2023. 3 +[38] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia + +Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 3 +[39] Jinbao Wang, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He, and Ling Shao. Deep 3d human pose estimation: A review. Computer Vision and Image Understanding, 210:103225, 2021. 3 +[40] Tan Xiaohui, Peng Xiaoyu, Liu Liwen, and Xia Qing. Automatic human body feature extraction and personal size measurement. Journal of Visual Languages & Computing, 47: 9-18, 2018. 1 +[41] S Yan, J Wirta, and J Kämäräinen. 
Anthropometric clothing measurements from 3d body scans. CoRR, abs/1911.00694, 2019. 1 +[42] Song Yan, Johan Wirta, and Joni-Kristian Kämäräinen. Anthropometric clothing measurements from 3d body scans. Machine Vision and Applications, 31(1-2):7, 2020. 2 +[43] Song Yan, Johan Wirta, and Joni-Kristian Kämäräinen. Silhouette body measurement benchmarks. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 7804-7809. IEEE, 2021. 2 +[44] Zhenjie Yang, Xiaosong Jia, Hongyang Li, and Junchi Yan. Llm4drive: A survey of large language models for autonomous driving. arXiv e-prints, pages arXiv-2311, 2023. 1 +[45] Zhizhou Zhong, Yuxi Mi, Yuge Huang, Jianqing Xu, Guodong Mu, Shouhong Ding, Jingyun Zhang, Rizen Guo, Yunsheng Wu, and Shuigeng Zhou. Slerface: face template protection via spherical linear interpolation. arXiv preprint arXiv:2407.03043, 2024. 2 +[46] Xinxin Zuo, Sen Wang, Qiang Sun, Minglun Gong, and Li Cheng. Self-supervised 3d human mesh recovery from noisy point clouds. arXiv preprint arXiv:2107.07539, 2021.
6 \ No newline at end of file diff --git a/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/images.zip b/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..52367876a0d950a51cb9b3ce3690151f3cd7d006 --- /dev/null +++ b/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e4fe93d8e9e42faa4fc25489797bb9bcc9113e31e353edf645b38199571d8d4 +size 516903 diff --git a/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/layout.json b/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ca380ca8fa01a120f4c7346f34e9f87ba174ab05 --- /dev/null +++ b/CVPR/2025/A Focused Human Body Model for Accurate Anthropometric Measurements Extraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78e886882c8901acf52fdc997b2dc3cf872076d06c55f90b7625b3e036c9687c +size 397918 diff --git a/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_content_list.json b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..03f0a71a1fd56d3ebda62f145cc94447384fe2c8 --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faad3f0f9afdb631bb892de27efb5c0f7f9b084be52406b1f8d20bc297aa13f1 +size 84426 diff --git a/CVPR/2025/A General 
Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_model.json b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_model.json new file mode 100644 index 0000000000000000000000000000000000000000..49967281e07e4e571eb7e2308c53d7154b494d44 --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20782b5363dab39c697ac48b40024c8643b643f5906ecf35b6a1b66db7ed0d80 +size 105052 diff --git a/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_origin.pdf b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c584574691cf6c98db6abc12a8940eb7fba9dd10 --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/b84d8fe3-e105-4d16-bcf4-2ecaed6a1524_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1ac921b06aacafd76ec84191ccc5b406ce9f17e5e0b1f9e1fcee29c8b9d40e0 +size 1950693 diff --git a/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/full.md b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a8f952bba0e0f969d111bf1612398d7fd42d3f31 --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/full.md @@ -0,0 +1,426 @@ +# A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening + +Jie Huang $^{1*}$ + +Haorui Chen $^{1*}$ 
+ +Jiaxuan Ren$^{1}$ + +Siran Peng$^{2,3}$ + +Liangjian Deng$^{1\dagger}$ + +$^{1}$ University of Electronic Science and Technology of China + +$^{2}$ Institute of Automation, Chinese Academy of Sciences + +$^{3}$ School of Artificial Intelligence, University of Chinese Academy of Sciences + +{jayhuang,hrchen,jiaxuan.ren}@std.uestc.edu.cn, liangjian.deng@uestc.edu.cn + +# Abstract + +Currently, deep learning-based methods for remote sensing pansharpening have advanced rapidly. However, many existing methods struggle to fully leverage feature heterogeneity and redundancy, thereby limiting their effectiveness. We use the covariance matrix to model the feature heterogeneity and redundancy and propose Correlation-Aware Covariance Weighting (CACW) to adjust them. CACW captures these correlations through the covariance matrix, which is then processed by a nonlinear function to generate weights for adjustment. Building upon CACW, we introduce a general adaptive dual-level weighting mechanism (ADWM) to address these challenges from two key perspectives, enhancing a wide range of existing deep-learning methods. First, Intra-Feature Weighting (IFW) evaluates correlations among channels within each feature to reduce redundancy and enhance unique information. Second, Cross-Feature Weighting (CFW) adjusts contributions across layers based on inter-layer correlations, refining the final output. Extensive experiments demonstrate the superior performance of ADWM compared to recent state-of-the-art (SOTA) methods. Furthermore, we validate the effectiveness of our approach through generality experiments, redundancy visualization, comparison experiments, key variables and complexity analysis, and ablation studies. Our code is available at https://github.com/Jie-1203/ADWM. + +# 1. Introduction + +The application of high-resolution multispectral (HRMS) images is expanding rapidly, with uses in fields such as object detection [8, 28], change detection [3, 23], unmixing [4], and classification [6, 7].
However, due to technological limitations, satellites are typically only able to capture + +![](images/e42602aa2a2bc392ef807e86a7d30b2e3763d702b6da128d82435acc90bb9c81.jpg) + +![](images/5a4ab9156a689550a37b4a897a3c9e583b83a0c0a9c6467c20ce5c4411d5062c.jpg) + +![](images/6f045a07570d01938b3b38051744b909328479b891a437a0915153298e9e552e.jpg) + +![](images/dd73170c8fc6d72e15b4c9c63524e680525bfd1744df72331235e8d12abee665.jpg) +Figure 1. Application of our dual-level weighting mechanism within the existing methods. (a) General methods. (b) Intra-Feature Weighting (IFW): weighting different channels within a single feature. (c) Cross-Feature Weighting (CFW): weighting features at different depths to obtain the final result. (d) Our dual-level weighting combines both IFW and CFW to fully unlock the potential of the original networks. + +low-resolution multispectral (LRMS) images alongside high-resolution panchromatic (PAN) images. In this setup, PAN images provide high spatial resolution, whereas LRMS images offer rich spectral information. To generate HRMS images with both high spatial and spectral resolutions, the pansharpening technique was introduced and has since seen continuous development. + +![](images/d3b8d7284f9f3c56c1104d7c0594cd048b82a7f2585105262bcfb9869b77df24.jpg) + +![](images/222e827a13f6d72c6f38b7fad0f8fd5f4a6305f73dadcedf22adff2d2d299d31.jpg) + +![](images/577c2f1b130b703dd896fbc73d1f7d1d88784cc4aa3cab8c716bdfce48057eb1.jpg) +Figure 2. Feature heterogeneity and redundancy correspond to the covariance matrix: darker colors indicate stronger correlations and redundancy, while lighter colors suggest weaker correlations and more heterogeneity. (a) Intra-feature: different channels within a feature. (b) Cross-feature: features at different depths. (c) CACW leverages intra- and cross-feature correlations to generate weights and adjust features accordingly. 
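The idea sketched in Fig. 2 (c), reading redundancy off a normalized covariance (correlation) matrix and turning it into per-channel weights, can be illustrated as follows. The redundancy score and sigmoid nonlinearity here are illustrative stand-ins, not the exact CACW formulation:

```python
import numpy as np

def covariance_weights(feat):
    """Toy covariance-based channel weighting (the Fig. 2 (c) idea).

    feat: (C, H, W) feature map. Channels are flattened and their
    normalized covariance (correlation) matrix is computed; a channel's
    mean absolute correlation with the other channels serves as its
    redundancy score, and a sigmoid maps the score to a weight, so
    strongly correlated (redundant) channels are down-weighted.
    """
    c = feat.shape[0]
    corr = np.corrcoef(feat.reshape(c, -1))                 # (C, C)
    redundancy = (np.abs(corr).sum(axis=1) - 1.0) / (c - 1)
    weights = 1.0 / (1.0 + np.exp(redundancy - redundancy.mean()))
    return weights[:, None, None] * feat, weights

# Deterministic toy feature: channels 0 and 1 are duplicates (redundant),
# channels 2 and 3 are orthogonal to everything else (heterogeneous).
t = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
feat = np.stack([np.sin(t), np.sin(t), np.cos(t), np.sin(2.0 * t)]).reshape(4, 4, 4)
_, w = covariance_weights(feat)
print(w)  # the duplicated channels receive smaller weights than the orthogonal ones
```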
+ +Over recent decades, pansharpening methods have evolved considerably, transitioning from traditional approaches to modern deep learning-based techniques. Traditional methods encompass component substitution (CS) [9, 33], multi-resolution analysis (MRA) [34, 35], and variational optimization (VO) [13, 32]. With advancements in hardware and software, deep learning-based methods [12, 14, 41, 45] have shown significant promise in addressing pansharpening challenges. Recent studies focusing on fusion tasks [22, 26, 39] highlight the benefits of sequential feature extraction to achieve effective data fusion. This continues the tradition of sequential feature extraction, as shown in Fig. 1 (a), for achieving improved fusion in pansharpening. + +However, the above methods often overlook a crucial characteristic across networks: feature heterogeneity and redundancy, which occur in two dimensions, as shown in Fig. 2. In the intra-feature dimension, PAN and LRMS images contain redundant and heterogeneous information, with multiple bands in LRMS images exhibiting similar redundancy [16]. In the cross-feature dimension, shallow features capture low-level details like edges and textures, while deeper features become abstract, encoding semantic information [42]. These + +features vary by depth but also exhibit redundancy. Overall, previous methods fail to comprehensively address these issues, limiting refined fusion. + +To overcome the limitations of previous methods, we propose an Adaptive Dual-level Weighting Mechanism (ADWM). Our method can be seamlessly plugged into networks involving sequential feature processing to fully unleash the potential of the original network; see Fig. 1. Specifically, our first level of weighting, termed Intra-Feature Weighting (IFW), focuses on optimizing the internal structure of intermediate features through adaptive weighting.
This step emphasizes the importance of each channel individually, allowing for a refined adjustment that accounts for the heterogeneity and redundancy of the channels themselves. Following this, the second level of weighting, Cross-Feature Weighting (CFW), dynamically adjusts the contribution of features from different layers based on inter-layer correlations. This approach allows features at various depths to contribute adaptively to the final output, ensuring a balanced integration of deep and shallow features across the network. Unlike DenseNet [21], which directly concatenates feature maps from different layers, CFW adjusts each layer's influence based on its relevance to the final output. CFW balances shallow and deep layer contributions, enhancing representation precision and fidelity. + +To achieve the above process, we need a weighting method that can finely capture relationships between features. SENet [20] uses global pooling for channel-level weights, applying high-level adaptive weighting, but it oversimplifies feature compression, which leads to a loss of inter-channel relationships. SRANet [27] employs a self-residual attention mechanism for dynamic channel weights but lacks effective redundancy compression, leading to suboptimal resource use. Different from previous benchmark attention-based methods, such as channel attention [27, 38] and spatial attention [20], our method leverages the correlations within the covariance matrix, effectively capturing feature heterogeneity and redundancy, forming the proposed Correlation-Aware Covariance Weighting (CACW). The use of the covariance matrix is inspired by Principal Component Analysis (PCA) [1], which utilizes the covariance matrix to capture information correlation for dimensionality reduction. Specifically, as shown in Fig. 2 (c), we first compute a covariance matrix to capture inter- or cross-feature correlations, reflecting feature heterogeneity and redundancy.
Subsequently, we further process this covariance matrix through a nonlinear transformation, enabling the generation of weights for feature adjustment. By leveraging these weights, the model identifies the importance of different channels or layers, focusing on essential components while effectively reducing redundancy.

![](images/2bf8db1f5506ebbd703bf9d51ac75ae6d2bd021fce262e600954d75e24289016.jpg)
Figure 3. The CACW structure is illustrated. First, we compute the covariance matrix $\tilde{C}$ based on the correlations among the $n$ columns of $X$. Then, this covariance matrix is passed through a nonlinear function $g$, which generates the resulting weights.

To sum up, the contributions of this work are as follows:

1. We use covariance to measure feature heterogeneity and redundancy and propose CACW, which utilizes the correlations within the covariance matrix to adaptively generate weights. This weighting strategy can also be extended to applications beyond pansharpening.
2. We propose an adaptive dual-level weighting mechanism (ADWM). This mechanism uses IFW to adjust the importance of different channels in the intra-feature dimension and applies CFW to adaptively modulate the contribution of shallow and deep features to the final result in the cross-feature dimension.
3. Extensive experiments verify that the ADWM module can be seamlessly integrated into various existing networks, enhancing their performance and achieving state-of-the-art results in a plug-and-play manner.

# 2. Method

In this section, we first introduce CACW, followed by an overview of ADWM, a detailed explanation of the proposed IFW and CFW, and a supplement on the plug-and-play integration of ADWM.

# 2.1. Correlation-Aware Covariance Weighting

In this section, we introduce the design of CACW; Fig. 3 illustrates its overall process.

Background of PCA [1]. PCA is a common technique for dimensionality reduction.
Let $X \in \mathbb{R}^{m \times n}$ represent the input observation matrix, where $m$ denotes the number of samples and $n$ denotes the number of features to be explored. The covariance matrix $C \in \mathbb{R}^{n \times n}$ can be calculated as follows: + +$$ +C = \frac {1}{m - 1} (X - \bar {X}) ^ {T} (X - \bar {X}), \tag {1} +$$ + +where $\bar{X}$ is the mean of each feature in $X$ . Next, we perform eigenvalue decomposition and select the top $k$ eigenvectors $\mathbf{v}_i$ with the largest eigenvalues $\lambda_i$ to form matrix $P = [\mathbf{v}_1,\mathbf{v}_2,\dots ,\mathbf{v}_k]$ . The process of generating $\mathbf{v}_i$ can be formulated as follows: + +$$ +C \mathbf {v} _ {i} = \lambda_ {i} \mathbf {v} _ {i}. \tag {2} +$$ + +By projecting $X$ onto this subspace using $P$ , we obtain a reduced-dimensional representation $Y$ . The process of projecting is as follows: + +$$ +Y = P ^ {T} X. \tag {3} +$$ + +CACW. The purpose of CACW is to generate weights for feature selection through adaptive adjustment, rather than dimensionality reduction via projection. In PCA, eigenvectors $\mathbf{v}_i$ represent the main directions of variation, while in CACW, the weight generation process establishes a basis to highlight important features and suppress redundancy. We first compute the covariance matrix $C$ using Eq. (1), capturing correlations that reflect feature heterogeneity and redundancy. To mitigate the influence of absolute magnitudes, we normalize $C$ to obtain $\tilde{C}$ . The process of normalization is as follows: + +$$ +\tilde {C} _ {i j} = \frac {C _ {i j}}{\| \bar {X} _ {i} \| \| \bar {X} _ {j} \|}, \tag {4} +$$ + +where $\|\bar{X}_i\|$ and $\|\bar{X}_j\|$ are the norms of the respective feature vectors and each element $\tilde{C}_{ij}$ represents the normalized similarity between features $i$ and $j$ . As shown in Eq. 
(2), PCA's eigenvalue decomposition relies on a fixed linear feature decomposition, which is not data-driven and cannot adaptively highlight important features while suppressing redundancy. To address this, we perform the feature decomposition with a neural network in a data-driven manner. Specifically, we use a multi-layer perceptron (MLP) $g$ to nonlinearly map the covariance matrix $\tilde{C}$ to the weight vector $\gamma \in \mathbb{R}^{n \times 1}$. The process of generating weights is as follows:

$$
\gamma = g(\tilde{C}), \tag{5}
$$

where $\gamma$ represents either the channel-weight in IFW or the layer-weight in CFW.

Difference with attention. CACW stems from the covariance-based observation in Fig. 2 and from the motivation to reduce feature redundancy and enhance heterogeneity, rather than from attention. Beyond this difference in motivation, its operation also differs fundamentally from attention. In CACW, scaling normalizes the covariance matrix into a correlation matrix, guiding the MLP to capture correlations, whereas the scaling in attention mainly serves gradient stability. Moreover, CACW places its MLP after the covariance matrix, performing a PCA-like decomposition in a data-driven way, whereas attention applies its MLPs before the QKV products to extract nonlinear features, ignoring data correlations.

![](images/2afde3f71513650d9274af73af0ced2e626884a90533d954ecf37f2cbe51feb2.jpg)
Figure 4. The overall workflow of ADWM comprises two sub-modules: Intra-Feature Weighting (IFW) and Cross-Feature Weighting (CFW). In IFW, each original feature $F_{i}$ is adjusted to $\tilde{F}_{i}$ based on its internal correlations. In CFW, weights are generated based on the correlations among the $F_{i}$ features, dynamically adjusting each $\tilde{F}_{i}$'s contribution to the final output.

# 2.2. Overview of ADWM

We denote the PAN image as $P\in \mathbb{R}^{H\times W}$, the LRMS image as $L\in \mathbb{R}^{\frac{H}{4}\times \frac{W}{4}\times c}$, and the HRMS image as $H\in \mathbb{R}^{H\times W\times c}$. In many methods, $P$ and $L$ are processed through an encoder and then fed into a sequential feature extraction block, such as a Resblock [17], generating multiple intermediate features $F_{i}\in \mathbb{R}^{H\times W\times C}$, where $i$ denotes the $i$-th layer. Our ADWM is applied to adjust these features, maintaining generality and flexibility by integrating directly into existing network architectures without requiring modifications to the original structure. First, each feature $F_{i}$ is adaptively weighted based on its own correlations, resulting in $\tilde{F}_i\in \mathbb{R}^{H\times W\times C}$. Then, we collect $F_{i}$ and $\tilde{F}_i$, $i = 1,2,\dots ,n$, to obtain $F$ and $\tilde{F}\in \mathbb{R}^{N\times H\times W\times C}$. We generate weights based on $F$ and apply them to $\tilde{F}$, resulting in the adjusted $\hat{F}$. The dual-level weighting process can be formulated as follows:

$$
\tilde{F}_{i} = \operatorname{IFW}\left(F_{i}\right), \tag{6}
$$

$$
F = \left[F_{1}, \dots, F_{n}\right], \quad \tilde{F} = \left[\tilde{F}_{1}, \dots, \tilde{F}_{n}\right], \tag{7}
$$

$$
\hat{F} = \operatorname{CFW}(F, \tilde{F}), \tag{8}
$$

where IFW and CFW denote the intra-feature and cross-feature weighting processes, detailed in Sec. 2.3 and Sec. 2.4, respectively. $\hat{F}$ is then processed by a decoder to obtain the final output $H$.

# 2.3. Intra-Feature Weighting

The details of IFW: First, we reshape $F_{i}$ to obtain $F_{i}^{R} \in \mathbb{R}^{HW \times C}$, focusing on the correlations between channels by treating the $HW$ spatial pixels as our sample space.
Second, to dynamically adjust each feature $F_{i}$, we calculate the channel-weight $\alpha_{i} \in \mathbb{R}^{C \times 1}$, which addresses the heterogeneity and redundancy within the feature by assigning an individual importance to each channel based on its unique contribution. The process of generating the channel-weight $\alpha_{i}$ is as follows:

$$
F_{i}^{R} = \operatorname{Reshape}\left(F_{i}\right), \tag{9}
$$

$$
\alpha_{i} = f\left(F_{i}^{R}\right), \tag{10}
$$

where $f(\cdot)$ denotes CACW. The channel-weight $\alpha_{i}$ is then applied to $F_{i}$, adjusting the relative importance of each channel to produce the adjusted feature $\tilde{F}_{i}$. The weighting process is as follows:

$$
\tilde{F}_{i} = F_{i} \odot \alpha_{i}, \tag{11}
$$

where $\odot$ denotes the element-wise product. Unlike PCA's global orthogonal projection in Eq. (3), IFW independently scales each feature dimension through the generated weights. This can be seen as a simplified form of projection that preserves the original feature basis while optimizing the distribution of information.

![](images/812dedb0876581a22faf792f0158cd26325c54ad3e4d3e270ac3f9a9c567c33f.jpg)
(a) Covariance Matrices in IFW with Corresponding Weights

![](images/9fdd3a602fc91fbef516f0cdaca0dccedf4bc4ae704a615b85d64d89ea2d637a.jpg)
(b) Entropy Curve of Features Across Layers

![](images/db7f6c2d73415f94831b0b2aeb10d87d766c08b27e6acea0b02e6a36b5135bce.jpg)
(c) Covariance Matrices in CFW with Corresponding Weights

Figure 5. Visualization of covariance matrices and weights in IFW and CFW. (a) Channels that are multiples of six are selected for clarity. (b) Lower entropy indicates higher feature redundancy.

The impact of IFW: As shown in Fig. 5 (a), shallower layers have lighter matrices, reflecting greater information diversity, while deeper layers darken, indicating high redundancy, as confirmed by the decreasing entropy trend in Fig. 5 (b).
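Putting Eqs. (9)–(11) together with the CACW construction of Sec. 2.1, the IFW data flow can be sketched in a few lines of numpy. This is an illustrative toy only: the trained MLP $g$ of Eq. (5) is replaced by a randomly initialized linear layer with a sigmoid, and the normalized covariance of Eq. (4) is approximated by the standard correlation matrix; the shapes are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes (assumed): one intermediate feature F_i of size H x W x C.
H, W, C = 8, 8, 16
F_i = rng.standard_normal((H, W, C))

# Eq. (9): reshape so the HW pixels are samples and the C channels variables.
F_R = F_i.reshape(H * W, C)

# Eqs. (1) and (4): normalized channel covariance, approximated here by the
# correlation matrix (C x C entries, one per channel pair).
C_tilde = np.corrcoef(F_R, rowvar=False)

# Eqs. (5) and (10): stand-in for the trained MLP g -- a single randomly
# initialized linear layer plus sigmoid, mapping the C x C matrix to C weights.
W_g = 0.01 * rng.standard_normal((C * C, C))
alpha = 1.0 / (1.0 + np.exp(-(C_tilde.reshape(-1) @ W_g)))  # channel-weights

# Eq. (11): element-wise rescaling of each channel.
F_tilde = F_i * alpha  # broadcasts alpha over the H and W dimensions
print(F_tilde.shape)
```

In the real module, $g$ is learned jointly with the backbone; the sketch only illustrates the shapes and the covariance-then-MLP ordering that distinguishes CACW from attention.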
IFW adjusts the weights accordingly, producing varying distributions across layers. In shallow layers, the weights are nearly uniform (0.141–0.155), while in deeper layers they vary more (0.004–0.055), emphasizing key channels and suppressing others, helping the model focus on critical structures.

# 2.4. Cross-Feature Weighting

The details of CFW: The goal of CFW is to use adaptive weighting to fully leverage all intermediate features, effectively addressing heterogeneity and redundancy across layers. First, we apply spatial average pooling to adjust the shape of $F$, resulting in a representation $F^{P} \in \mathbb{R}^{C \times N}$, which enables us to explore the correlations between the $N$ layers by treating the $C$ channels as our sample space. Second, to dynamically adjust the contribution of different layer features to the final result, we calculate the layer-weight $\beta \in \mathbb{R}^{N\times 1}$. The process of generating the layer-weight $\beta$ can be formulated as follows:

$$
F^{P} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F_{ij}, \tag{12}
$$

$$
\beta = f\left(F^{P}\right), \tag{13}
$$

where the pooling operation in Eq. (12) averages out the spatial dimensions of the original input and $f(\cdot)$ denotes CACW. Unlike IFW, CFW produces a single output feature $\hat{F}$ synthesized from the $N$ intermediate features. First, we apply a softmax to $\beta$ to normalize the layer weights, balancing the contributions of different depths. Then, these weights modulate each intermediate feature $\tilde{F}_{k}$, reducing redundancy by assigning less weight to repetitive layers. A final summation produces the integrated feature $\hat{F} \in \mathbb{R}^{H \times W \times C}$.

![](images/3e10d03307d753a47b6bdeaf5a07c8af6760ca2abc7dbdb466823539c9aa9092.jpg)
Figure 6. (a) ADWM can be seen as a feature-to-feature method. (b) The way it is embedded into more complex networks.
The weighting process can be formulated as follows:

$$
\hat{F} = \sum_{k=1}^{N} \tilde{F}_{k} \odot \operatorname{softmax}(\beta)_{k}, \tag{14}
$$

where $\odot$ denotes the element-wise product and the summation runs over the $N$ layer dimension. The pointwise weighting and summation in Eq. (14) can also be rewritten in matrix multiplication form:

$$
\hat{F} = \left(\operatorname{softmax}(\beta)\right)^{T} \tilde{F}. \tag{15}
$$

This corresponds to Eq. (3) in form. However, unlike PCA's global dimensionality reduction with a fixed orthogonal basis, CFW dynamically learns and applies task-specific weights $\beta$.

The impact of CFW: As shown in Fig. 5 (c), the CFW covariance matrix evolves during training, transitioning from deep red and blue regions to an overall lighter color, indicating reduced redundancy and increased feature diversity. As training progresses, the layer weights generated by CFW adjust gradually, reflecting the model's adaptive tuning of each layer's impact. These shifts align layer contributions with task demands, enhancing the model's ability to leverage diverse features and optimize performance throughout learning.

# 2.5. Flexible Plug-and-Play Integration

In addition to being integrated with the entire network in the way shown in Fig.
4, ADWM can also be flexibly combined with more complex networks. As shown in Fig. 6 (a), it can be seen as a feature-to-feature method, deriving the next feature from a series of sequential features. In particular, as shown in Fig. 6 (b), complex networks such as U-Net often consist of parts with similar feature sizes and semantics, and an independent ADWM is applied to each part. As shown in Tab. 2, ADWM still improves performance even when applied to such complex networks.

Figure 7. The residuals (top) and visual results (bottom) of all compared approaches on the GF2 reduced-resolution dataset (MTF-GLP-FS, BDSD-PC, TV, PNN, PanNet, DiCNN, FusionNet, LAGNet, LGPNet, PanMamba, Proposed, and GT).

Table 1. Comparisons on the WV3, QB, and GF2 reduced-resolution datasets, each with 20 samples. Best: **bold**; second-best: <u>underline</u>.

| Method | WV3 PSNR↑ | WV3 SAM↓ | WV3 ERGAS↓ | WV3 Q8↑ | QB PSNR↑ | QB SAM↓ | QB ERGAS↓ | QB Q4↑ | GF2 PSNR↑ | GF2 SAM↓ | GF2 ERGAS↓ | GF2 Q4↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTF-GLP-FS [35] | 32.963 | 5.316 | 4.700 | 0.833 | 32.709 | 7.792 | 7.373 | 0.835 | 35.540 | 1.655 | 1.589 | 0.897 |
| BDSD-PC [33] | 32.970 | 5.428 | 4.697 | 0.829 | 32.550 | 8.085 | 7.513 | 0.831 | 35.180 | 1.681 | 1.667 | 0.892 |
| TV [30] | 32.381 | 5.692 | 4.855 | 0.795 | 32.136 | 7.510 | 7.690 | 0.821 | 35.237 | 1.911 | 1.737 | 0.907 |
| PNN [29] | 37.313 | 3.677 | 2.681 | 0.893 | 36.942 | 5.181 | 4.468 | 0.918 | 39.071 | 1.048 | 1.057 | 0.960 |
| PanNet [40] | 37.346 | 3.613 | 2.664 | 0.891 | 34.678 | 5.767 | 5.859 | 0.885 | 40.243 | 0.997 | 0.919 | 0.967 |
| DiCNN [18] | 37.390 | 3.592 | 2.672 | 0.900 | 35.781 | 5.367 | 5.133 | 0.904 | 38.906 | 1.053 | 1.081 | 0.959 |
| FusionNet [10] | 38.047 | 3.324 | 2.465 | 0.904 | 37.540 | 4.904 | 4.156 | 0.925 | 39.639 | 0.974 | 0.988 | 0.964 |
| LAGNet [24] | 38.592 | 3.103 | 2.291 | 0.910 | <u>38.209</u> | <u>4.534</u> | <u>3.812</u> | <u>0.934</u> | 42.735 | 0.786 | 0.687 | 0.980 |
| LGPNet [43] | 38.147 | 3.270 | 2.422 | 0.902 | 36.443 | 4.954 | 4.777 | 0.915 | 41.843 | 0.845 | 0.765 | 0.976 |
| PanMamba [19] | <u>39.012</u> | **2.913** | <u>2.184</u> | <u>0.920</u> | 37.356 | 4.625 | 4.277 | 0.929 | <u>42.907</u> | <u>0.743</u> | <u>0.684</u> | <u>0.982</u> |
| Proposed | **39.170** | **2.913** | **2.145** | **0.921** | **38.466** | **4.450** | **3.705** | **0.937** | **43.884** | **0.672** | **0.597** | **0.985** |

# 3. Experiments

# 3.1. Datasets, Metrics, and Training Details

In our experiments, we employ three datasets derived from satellite imagery captured by WorldView-3 (WV3), QuickBird (QB), and GaoFen-2 (GF2), constructed in accordance with Wald's protocol [37]. The datasets and the associated data processing techniques are obtained from the PanCollection repository [11]. We use well-established evaluation metrics to assess our method. For the reduced-resolution datasets, we employ SAM [5], ERGAS [36], Q4/Q8 [15], and PSNR. For the full-resolution datasets, $\mathrm{D}_s$, $\mathrm{D}_{\lambda}$, and HQNR [2] are used as evaluation metrics; HQNR is derived from $\mathrm{D}_s$ and $\mathrm{D}_{\lambda}$, providing a comprehensive assessment of image quality. In addition, we train our model in Tab. 1 using the $\ell_1$ loss function and the Adam optimizer [25] with a batch size of 64. The training details of our method in Tab. 2 remain consistent with those in the original papers. All experiments are conducted on an NVIDIA GeForce RTX 3090 GPU. More details can be found in the supplementary materials.
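As a concrete instance of one of the reduced-resolution metrics listed above, SAM [5] measures the mean angle between corresponding spectral vectors of the fused and reference images (0° means spectrally identical). The numpy sketch below is the generic textbook formulation, not the PanCollection implementation; the function name and `eps` guard are illustrative choices.

```python
import numpy as np

def sam_degrees(ref: np.ndarray, fused: np.ndarray, eps: float = 1e-12) -> float:
    """Mean spectral angle (in degrees) between two H x W x C images."""
    r = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    # Cosine of the angle between each pair of spectral vectors.
    cos = np.sum(r * f, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + eps
    )
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return float(angles.mean())

# SAM is scale-invariant: a fused image proportional to the reference
# has (numerically) zero spectral angle.
rng = np.random.default_rng(1)
img = rng.random((16, 16, 4))
print(sam_degrees(img, 2.0 * img))
```

Lower SAM indicates better spectral fidelity, which is why it is reported with a "↓" in Tab. 1.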
# 3.2. Comparison with SOTA methods

We evaluate our method against ten competitive approaches, including 1) three classical methods: MTF-GLP-FS [35], BDSD-PC [33], and TV [30]; and 2) seven deep learning-based methods: PNN [29], PanNet [40], DiCNN [18], FusionNet [10], LAGNet [24], LGPNet [43], and PanMamba [19]. We incorporate ADWM into the classic LAGNet as our proposed method. Tab. 1 presents a comprehensive comparison of our method with state-of-the-art approaches across three datasets. Notably, our method achieves this high level of performance across all metrics. Specifically, our method achieves a PSNR improvement of 0.158 dB, 0.255 dB, and 0.598 dB on the WV3, QB, and GF2 datasets, respectively, compared to the second-best results. These improvements highlight the clear advantages of our method, confirming its competitiveness in the field. Fig. 7 provides qualitative results for the GF2 dataset alongside the ground truth (GT). By comparing the mean squared error (MSE) residuals between the pansharpened results and the ground truth, it is clear that our residual maps are the darkest, suggesting that our method achieves the highest fidelity among the evaluated methods.

# 3.3. Generality Experiment

Our ADWM serves as a plug-and-play mechanism, allowing it to be easily integrated into various existing frameworks without requiring extensive modifications. To further demonstrate the generality of our method, we integrated ADWM into three different approaches: FusionNet [10], LGPNet [43], and U2Net [31]. As shown in Tab. 2, our method improves the performance of each approach across the three datasets, with only a negligible increase in parameter count. In terms of the visual quality shown in Fig. 8, in the second row the redder areas indicate better performance, while the bluer areas indicate poorer performance. After integrating ADWM, all methods show larger and deeper red areas, indicating better performance. These results validate our method's significant potential for practical applications.

Figure 8. The visual results (top) and HQNR maps (bottom) of all evaluated general methods on the WV3 full-resolution dataset (FusionNet, FusionNet*, LGPNet, LGPNet*, U2Net, U2Net*).

Table 2. Comparisons on the WV3, QB, and GF2 datasets with 20 full-resolution samples each. Methods marked with * represent the corresponding method enhanced with our ADWM module without any further changes. The best results in each column are in **bold**.

| Method | Params | WV3 Dλ↓ | WV3 Ds↓ | WV3 HQNR↑ | QB Dλ↓ | QB Ds↓ | QB HQNR↑ | GF2 Dλ↓ | GF2 Ds↓ | GF2 HQNR↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FusionNet [10] | 78.6K | 0.024 | 0.037 | 0.940 | 0.057 | 0.052 | 0.894 | 0.035 | 0.101 | 0.867 |
| FusionNet* | 85.5K | 0.022 | 0.033 | 0.946 | 0.066 | 0.038 | 0.899 | 0.034 | 0.094 | 0.875 |
| LGPNet [43] | 27.0K | 0.022 | 0.039 | 0.940 | 0.093 | 0.061 | 0.870 | 0.030 | 0.081 | 0.892 |
| LGPNet* | 33.4K | 0.020 | 0.032 | 0.950 | 0.068 | 0.041 | 0.894 | 0.027 | 0.068 | 0.908 |
| U2Net [31] | 757.5K | 0.020 | 0.028 | 0.952 | **0.052** | 0.037 | 0.913 | 0.024 | 0.051 | 0.927 |
| U2Net* | 781.4K | **0.019** | **0.026** | **0.955** | 0.054 | **0.029** | **0.919** | **0.021** | **0.049** | **0.931** |

# 3.4. Redundancy Visualization

To demonstrate that our method improves performance through its stated motivation of reducing feature redundancy and enhancing heterogeneity, we visualized the results using LGPNet [43] on the WV3 reduced-resolution dataset. We applied SVD to the covariance matrix of each intermediate feature, sorted the eigenvalues to obtain the corresponding scree plot [44], and averaged over all intermediate features to generate Fig. 9. Compared to the original network and self-attention-based weighting, our method yields the smoothest curve, indicating a more balanced distribution of variance across multiple dimensions. This suggests that information is spread more evenly rather than being concentrated in a few dominant components, leading to reduced redundancy. This reduction in turn leads to higher performance metrics, such as PSNR.

![](images/ded23aec31f0ccf5e4955b999144414e15e63c5d3d3aa6aae0fb556f6ccabd4d.jpg)
Figure 9. Scree plot illustrating the differences in redundancy. The smoother the curve, the lower the redundancy.

Table 3. Comparison of different weight generation approaches.
| Method | Params | WV3 PSNR↑ | WV3 SAM↓ | WV3 ERGAS↓ | WV3 Q8↑ |
| --- | --- | --- | --- | --- | --- |
| PCA | 11.2K | 38.612 | 3.076 | 2.273 | 0.913 |
| Pool | 39.2K | 39.053 | 2.915 | 2.183 | 0.920 |
| Attention | 42.5K | 38.875 | 3.012 | 2.237 | 0.915 |
| CACW | 26.2K | 39.170 | 2.913 | 2.145 | 0.921 |
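The difference between the Pool and CACW rows of Table 3 can be made concrete with a toy sketch: both routes end in an MLP, but pooling first compresses each channel to a scalar, discarding all pairwise channel statistics, whereas the covariance route hands the full correlation structure to the MLP. Randomly initialized linear layers stand in for the trained MLPs below; all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
HW, C = 64, 16                     # toy sample/channel sizes (assumed)
X = rng.standard_normal((HW, C))   # a reshaped intermediate feature, as in IFW

def toy_mlp(v, out_dim, rng):
    """Stand-in for a trained MLP: one random linear layer plus sigmoid."""
    W = 0.01 * rng.standard_normal((v.size, out_dim))
    return 1.0 / (1.0 + np.exp(-(v @ W)))

# 'Pool' route: compress each channel to a scalar, then generate weights.
pooled = X.mean(axis=0)            # C scalars; inter-channel structure is gone
w_pool = toy_mlp(pooled, C, rng)

# 'CACW' route: feed the full C x C correlation matrix to the MLP, so the
# pairwise channel relationships survive the compression step.
corr = np.corrcoef(X, rowvar=False)
w_cacw = toy_mlp(corr.reshape(-1), C, rng)

print(w_pool.shape, w_cacw.shape)
```

Both routes emit one weight per channel; they differ only in how much relational information reaches the weight generator, which is the gap Sec. 3.5 attributes to pooling.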
# 3.5. Evaluation and Analysis of CACW

Comparison of Weighting Approaches: This section compares CACW with other commonly used approaches for generating weights. In the first row of Tab. 3, PCA selects the top half of the eigenvectors corresponding to the largest eigenvalues, which are then processed by an MLP to generate the weights. CACW outperforms PCA, showing that a neural network with nonlinear capabilities creates a more effective representation of feature importance. In the second row, Pool denotes global average pooling, which compresses each channel to a scalar and generates weights through an MLP. CACW surpasses Pool with fewer parameters by preserving inter-channel correlations via the covariance matrix, addressing the information loss and dependency limitations of pooling. In the third row, Attention uses self-attention to compute a relevance matrix before generating weights through an MLP. With fewer parameters, CACW outperforms Attention, avoiding the complexity and noise of attention mechanisms.

Impact of Intermediate Layer Size: This section explores the impact of the intermediate layer size $d$, a key variable in our method. Experiments were conducted using LAGNet [24] as the backbone on the WV3 reduced-resolution dataset. In this analysis, $d$ is varied in IFW while CFW is kept fixed, with its FLOPs constant at 90, contributing only a small portion of the total computation. As shown in Fig. 10, the overall FLOPs increase as $d$ becomes larger. In terms of performance, when $d$ is too small, the network struggles to capture complex feature correlations, resulting in lower PSNR values. As $d$ increases, PSNR improves, indicating enhanced image quality. However, when $d$ exceeds $0.8n$, the network becomes overly complex, leading to overfitting and a subsequent drop in PSNR.

![](images/5d11b143f67e58b90ad4f94e1c16c371aca00974d99eb64b85f0a93eb2b35e00.jpg)
Figure 10. FLOPs (K) and PSNR (dB) with $d$ represented as a fraction of $n$; both are variables in Fig. 3.

Table 4. Ablation experiment on the WV3 reduced-resolution dataset.

| IFW | CFW | PSNR↑ | SAM↓ | ERGAS↓ | Q8↑ |
| --- | --- | --- | --- | --- | --- |
| ✗ | ✗ | 38.592 | 3.103 | 2.291 | 0.910 |
| ✓ | ✗ | 39.028 | 2.972 | 2.183 | 0.917 |
| ✗ | ✓ | 38.923 | 2.990 | 2.210 | 0.918 |
| ✓ | ✓ | 39.170 | 2.913 | 2.145 | 0.921 |

Complexity Analysis: The additional computational complexity of ADWM, compared to the original network, mainly stems from generating the covariance matrices. IFW has a theoretical complexity of $O(C^2 \cdot (H \cdot W))$, where $H$ and $W$ are the spatial dimensions and $C$ is the number of channels. CFW has a theoretical complexity of $O(N^2 \cdot C)$, where $N$ is the number of feature maps being processed. In total, the complexity of our module is $O(N \cdot (H \cdot W) \cdot C^2 + N^2 \cdot C)$, indicating that the main computational cost is driven by the number of channels $C$ and the spatial dimensions $H$ and $W$.

# 3.6. Ablation Study

We conducted ablation studies with LAGNet [24] on the WV3 dataset. In the second row of Tab. 4, replacing CFW's dynamic weighting with equal weights causes a performance drop, highlighting the importance of dynamic adjustment. In the third row, omitting IFW causes a significant decline, underscoring its role in handling feature heterogeneity and redundancy. However, performance still exceeds that of the first row, demonstrating CFW's effectiveness in integrating shallow and deep features.

# 4. Conclusion

In this paper, we use the covariance matrix to model feature heterogeneity and redundancy, introducing CACW to capture these correlations and generate weights for effective adjustment. Building on CACW, we propose the adaptive dual-level weighting mechanism (ADWM), which comprises Intra-Feature Weighting (IFW) and Cross-Feature Weighting (CFW). ADWM significantly enhances a wide range of existing deep learning methods, and its effectiveness is thoroughly demonstrated through extensive experiments.
+ +Acknowledgement: This work was supported in part by the NSFC (12271083, 62273077), Sichuan Province's Science and Technology Empowerment for Disaster Prevention, Mitigation, and Relief Project (2025YFNH0001), and the Natural Science Foundation of Sichuan Province (2024NSFJQ0013). + +# References + +[1] Hervé Abdi and Lynne J Williams. Principal component analysis. Wiley interdisciplinary reviews: computational statistics, 2(4):433-459, 2010. 2, 3 +[2] Alberto Arienzo, Gemine Vivone, Andrea Garzelli, Luciano Alparone, and Jocelyn Chanussot. Full-resolution quality assessment of pansharpening: Theoretical and hands-on approaches. IEEE Geoscience and Remote Sensing Magazine, 10(3):168-201, 2022. 6 +[3] Anju Asokan and JJESI Anitha. Change detection techniques for remote sensing applications: a survey. Earth Science Informatics, 12(2):143-160, 2019. 1 +[4] José M Bioucas-Dias, Antonio Plaza, Nicolas Dobigeon, Mario Parente, Qian Du, Paul Gader, and Jocelyn Chanussot. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE journal of selected topics in applied earth observations and remote sensing, 5(2):354-379, 2012. 1 +[5] Joseph W Boardman. Automating spectral unmixing of aviris data using convex geometry concepts. In JPL, Summaries of the 4th Annual JPL Airborne Geoscience Workshop. Volume 1: AVIRIS Workshop, 1993. 6 +[6] Xiangyong Cao, Feng Zhou, Lin Xu, Deyu Meng, Zongben Xu, and John Paisley. Hyperspectral image classification with markov random fields and a convolutional neural network. IEEE Transactions on Image Processing, 27(5):2354-2367, 2018. 1 +[7] Xiangyong Cao, Jing Yao, Zongben Xu, and Deyu Meng. Hyperspectral image classification with convolutional neural network and active learning. IEEE Transactions on Geoscience and Remote Sensing, 58(7):4604-4616, 2020. 1 +[8] Gong Cheng and Junwei Han. A survey on object detection in optical remote sensing images. 
ISPRS Journal of Photogrammetry and Remote Sensing, 117:11-28, 2016. 1 +[9] Jaewan Choi, Kiyun Yu, and Yongil Kim. A New Adaptive Component-Substitution-Based Satellite Image Fusion by Using Partial Replacement. IEEE Transactions on Geoscience and Remote Sensing, 49(1):295-309, 2011. 2 +[10] Liang-Jian Deng, Gemine Vivone, Cheng Jin, and Jocelyn Chanussot. Detail injection-based deep convolutional neural networks for pansharpening. IEEE Transactions on Geoscience and Remote Sensing, page 6995-7010, 2021. 6, 7 +[11] Liang-jian Deng, Gemine Vivone, Mercedes E. Paoletti, Giuseppe Scarpa, Jiang He, Yongjun Zhang, Jocelyn Chanussot, and Antonio Plaza. Machine Learning in Pansharpening: A benchmark, from shallow to deep networks. IEEE Geoscience and Remote Sensing Magazine, 10(3):279-315, 2022. 6 + +[12] Renwei Dian, Shutao Li, Anjing Guo, and Leyuan Fang. Deep hyperspectral image sharpening. IEEE transactions on neural networks and learning systems, 29(11):5345-5355, 2018. 2 +[13] Xueyang Fu, Zihuang Lin, Yue Huang, and Xinghao Ding. A Variational Pan-Sharpening With Local Gradient Constraints. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10257-10266. IEEE, 2019. 2 +[14] Xueyang Fu, Wu Wang, Yue Huang, Xinghao Ding, and John Paisley. Deep multiscale detail networks for multiband spectral image sharpening. IEEE Transactions on Neural Networks and Learning Systems, 32(5):2090-2104, 2021. 2 +[15] Andrea Garzelli and Filippo Nencini. Hypercomplex Quality Assessment of Multi/Hyperspectral Images. IEEE Geoscience and Remote Sensing Letters, 6(4):662-665, 2009. 6 +[16] Hassan Ghassemian. A review of remote sensing image fusion methods. Information Fusion, 32:75-89, 2016. 2 +[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 
4 +[18] Lin He, Yizhou Rao, Jun Li, Jocelyn Chanussot, Antonio Plaza, Jiawei Zhu, and Bo Li. Pansharpening via Detail Injection Based Convolutional Neural Networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(4):1188-1204, 2019. 6, 7 +[19] Xuanhua He, Ke Cao, Ke Ren Yan, Rui Li, Chengjun Xie, Jie Zhang, and Man Zhou. Pan-mamba: Effective pan-sharpening with state space model. ArXiv, abs/2402.12192, 2024. 6, 7 +[20] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7132-7141, 2018. 2 +[21] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 2 +[22] Jie Huang, Rui Huang, Jinghao Xu, Siran Pen, Yule Duan, and Liangjian Deng. Wavelet-assisted multi-frequency attention network for pansharpening. arXiv preprint arXiv:2502.04903, 2025. 2 +[23] Gong Jianya, Sui Haigang, Ma Guorui, and Zhou Qiming. A review of multi-temporal remote sensing data change detection algorithms. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 37 (B7):757-762, 2008. 1 +[24] Zi-Rong Jin, Tian-Jing Zhang, Tai-Xiang Jiang, Gemine Vivone, and Liang-Jian Deng. Lagconv: Local-context adaptive convolution kernels with global harmonic bias for pan-sharpening. In Proceedings of the AAAI conference on artificial intelligence, pages 1113-1121, 2022. 6, 7, 8 +[25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. 6 +[26] Shutao Li, Renwei Dian, Leyuan Fang, and José M. Bioucas-Dias. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. 
IEEE Transactions on Image Processing, 27(8):4118-4130, 2018. 2 + +[27] Hefei Ling, Jiyang Wu, Lei Wu, Junrui Huang, Jiazhong Chen, and Ping Li. Self residual attention network for deep face recognition. IEEE Access, 7:55159-55168, 2019. 2 +[28] Junmin Liu, Shijie Li, Changsheng Zhou, Xiangyong Cao, Yong Gao, and Bo Wang. Sraf-net: A scene-relevant anchor-free object detection network in remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, DOI: 10.1109/TGRS.2021.3124959, 2021. 1 +[29] Giuseppe Masi, Davide Cozzolino, Luisa Verdoliva, and Giuseppe Scarpa. Pansharpening by convolutional neural networks. Remote Sensing, 8(7):594, 2016. 6, 7 +[30] Frosti Palsson, Johannes R. Sveinsson, and Magnus O. Ulfarsson. A new pansharpening algorithm based on total variation. IEEE Geoscience and Remote Sensing Letters, page 318-322, 2013. 6, 7 +[31] Siran Peng, Chenhao Guo, Xiao Wu, and Liang-Jian Deng. U2net: A general framework with spatial-spectral-integrated double u-net for image fusion. In Proceedings of the 31st ACM International Conference on Multimedia (ACM MM), pages 3219-3227, 2023. 7 +[32] Xin Tian, Yuerong Chen, Changcai Yang, and Jiayi Ma. Variational Pansharpening by Exploiting Cartoon-Texture Similarities. IEEE Transactions on Geoscience and Remote Sensing, 60:1–16, 2022. 2 +[33] Gemine Vivone. Robust band-dependent spatial-detail approaches for panchromatic sharpening. IEEE Transactions on Geoscience and Remote Sensing, page 6421-6433, 2019. 2, 6, 7 +[34] Gemine Vivone, Rocco Restaino, Mauro Dalla Mura, Giorgio Licciardi, and Jocelyn Chanussot. Contrast and Error-Based Fusion Schemes for Multispectral Image Pansharpening. IEEE Geoscience and Remote Sensing Letters, 11(5): 930-934, 2014. 2 +[35] Gemine Vivone, Rocco Restaino, and Jocelyn Chanussot. Full scale regression-based injection coefficients for panchromatic sharpening. IEEE Transactions on Image Processing, 27(7): 3418-3431, 2018. 2, 6 +[36] Lucien Wald. 
Data fusion: definitions and architectures: fusion of images of different spatial resolutions. Presses des MINES, 2002. 6 +[37] Lucien Wald, Thierry Ranchin, and Marc Mangolini. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogrammetric engineering and remote sensing, 63(6):691-699, 1997. 6 +[38] Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, and Qinghua Hu. Eca-net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11534-11542, 2020. 2 +[39] Xinyu Xie, Yawen Cui, Tao Tan, Xubin Zheng, and Zitong Yu. Fusionmamba: Dynamic feature enhancement for multimodal image fusion with mamba. Visual Intelligence, 2(1):37, 2024. 2 +[40] Junfeng Yang, Xueyang Fu, Yuwen Hu, Yue Huang, Xinghao Ding, and John Paisley. Pannet: A deep network architecture for pan-sharpening. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 1753-1761, 2017. 6, 7 + +[41] Yong Yang, Lei Wu, Shuying Huang, Weiguo Wan, Wei Tu, and Hangyuan Lu. Multiband remote sensing image pan-sharpening based on dual-injection model. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13:1888-1904, 2020. 2 +[42] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014, pages 818–833, Cham, 2014. Springer International Publishing. 2 +[43] Chen-Yu Zhao, Tian-Jing Zhang, Ran Ran, Zhi-Xuan Chen, and Liang-Jian Deng. Lgpcnv: Learnable gaussian perturbation convolution for lightweight pansharpening. In IJCAI, pages 4647-4655, 2023. 6, 7 +[44] Mu Zhu and Ali Ghodsi. Automatic dimensionality selection from the scree plot via the use of profile likelihood. Computational Statistics & Data Analysis, 51(2):918-930, 2006. 7 +[45] Peixian Zhuang, Qingshan Liu, and Xinghao Ding. 
Panggf: A probabilistic method for pan-sharpening with gradient domain guided image filtering. Signal Processing, 156:177-190, 2019. 2 \ No newline at end of file diff --git a/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/images.zip b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bac47b772f9bccc20db003332c62b87bf7eaab0b --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1368ddace3e07d699dd96ea9817f4988edb13e8ab0639b546734f04e7c33392 +size 970007 diff --git a/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/layout.json b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1105260d338a2701364fbbd552e92a2f0e4e310b --- /dev/null +++ b/CVPR/2025/A General Adaptive Dual-level Weighting Mechanism for Remote Sensing Pansharpening/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17d764026b331badb49f377331591b6be55db5e12d0e59dd4ceb1cfe04e2382e +size 499429 diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_content_list.json b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..81d1ccb9eb69884d1fb79ec8b4bab90215f35660 --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_content_list.json @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:3a6c1219a5583b0e758face76207f31a412bdfe2cc12fa1b2078be58d19fe36c +size 85359 diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_model.json b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..90fa367bc1c9e15b62e25fde17e82627843caf1b --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03e96216c05419014b9ccfb5ce2f27ae9c3b237e91e01767494a34bfaec84327 +size 106665 diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_origin.pdf b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ca98fa45ff28a4b5e0f1a558c4cab1a74aae718b --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/895e46af-6cc5-4cc4-b2d9-ddcd4e7c57f9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fce0eeb6cc7b6d5305c10460da38a0135c6341bc71e8ff79830e6507d6dc0e8 +size 7192696 diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/full.md b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b6c71ad368fb44c34007e96b066982a2b27b4a98 --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation 
Learning for Graph-Based Multi-View Clustering/full.md @@ -0,0 +1,338 @@ +# A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering + +Zheming Xu $^{1,2}$ , He Liu $^{1,2}$ , Congyan Lang $^{1,2*}$ , Tao Wang $^{1,2}$ , Yidong Li $^{1,2}$ , Michael C. Kampffmeyer $^{3}$ $^{1}$ School of Computer Science & Technology, Beijing Jiaotong University + +$^{2}$ Key Laboratory of Big Data & Artificial Intelligence in Transportation, Ministry of Education, China + +$^{3}$ Department of Physics and Technology, UiT The Arctic University of Norway + +{zhemingxu, liuhe1996, cylang, twang, ydli}@bjtu.edu.cn, michael.c.kampffmeyer@uit.no + +# Abstract + +Recent graph-based multi-view clustering (GMVC) methods typically encode view features into high-dimensional spaces and construct graphs based on distance similarity. However, the high dimensionality of the embeddings often leads to the hubness problem, where a few points repeatedly appear in the nearest neighbor lists of other points. We show that this negatively impacts the extracted graph structures and message passing, thus degrading clustering performance. To the best of our knowledge, we are the first to highlight the detrimental effect of hubness in GMVC methods and introduce the hubREP (hub-aware Representation Embedding and Pairing) framework. Specifically, we propose a simple yet effective encoder that reduces hubness while preserving neighborhood topology within each view. Additionally, we propose a hub-aware pairing module to maintain structure consistency across views, efficiently enhancing the view-specific representations. The proposed hubREP is lightweight compared to the conventional autoencoders used in state-of-the-art GMVC methods and can be integrated into existing GMVC methods that mostly focus on novel fusion mechanisms, further boosting their performance. Comprehensive experiments performed on eight benchmarks confirm the superiority of our method. 
The code is available at https://github.com/zmxu196/hubREP. + +# 1. Introduction + +Multi-view data, providing diverse information from various domains or sensors of the same target, has gained significant attention in complex tasks, including autonomous driving [39] and disease analysis [26]. In the unsupervised context, multi-view clustering (MVC) [17, 40] aims to leverage the consistent and complementary information inherent in multi-view data to group samples into distinct clusters. + +In light of the exceptional performance of deep neural networks [16], the past few years have seen a surge in deep MVC methods. One group of approaches focuses on subspace learning [14, 27, 29, 47], which seeks to find a common latent space for aligning and clustering multi-view data, frequently employing contrastive learning techniques. Alternatively, another group of methods [18, 32, 50] addresses MVC via adversarial learning, utilizing generators and discriminators to align latent representations by minimizing discrepancies across views. Despite achieving remarkable performance, the above two groups neglect the intrinsic topological structure of multi-view data, which is inherently suited to clustering tasks. + +Figure 1. Integrating our proposed hubREP into a standard autoencoder-based approach (SAE) and DMCAG [8] empirically reduces the hubness metric while improving clustering performance over three multi-view datasets, i.e., BBCSport, BDGP and Reuters. Arrows illustrate how methods improve with hubREP. +![](images/5c52cbd40cc24f30abd19f8894216f734b0a8d28b6ca840c645c9d56ca69012a.jpg) + +To fully leverage the underlying topology of multi-view data, deep graph-based MVC (GMVC) has become a dominant branch, with a majority profiting from the ability of graph neural networks (GNNs) to aggregate neighbor information. In particular, these methods exploit view-specific graphs that have been built either by leveraging raw data [37, 43] or more recently via latent embeddings [8, 25] and focus on designing various mechanisms to effectively fuse these view-specific graphs [34, 43]. + +In this work, we argue that instead of focusing on the design of novel fusion approaches for the view-specific graphs, we need to focus on the root cause of the problem by understanding and improving the quality of the underlying representations that these graphs are built upon. In particular, current state-of-the-art GMVC approaches leverage deep autoencoders (AEs) to learn embeddings and then construct similarity-based k-Nearest Neighbor (kNN) graphs from them. This leads to two particular shortcomings. (1) These methods are particularly affected by the hubness problem [23, 24], where certain points frequently appear in the nearest neighbor lists of other points, resulting in sub-optimal graph construction. (2) By leveraging deep AEs to obtain the latent representations, they typically do not explicitly encourage the preservation of neighborhood information, leading to sub-optimal embeddings for the clustering-oriented graph generation. + +To address the above two challenges jointly, we first conduct empirical studies demonstrating that GMVC is indeed vulnerable to hubness and that hubness reduction techniques can mitigate performance degradation. Based on these insights, we propose a simple yet effective GMVC framework, hub-aware Representation Embedding and Pairing for Graph-Based Multi-View Clustering (hubREP). Our approach departs from the autoencoder paradigm and leverages a novel hubness-aware view-specific encoder that encourages embeddings to be uniformly spread on a hypersphere, which provably reduces hubness [28], while at the same time ensuring that we learn structure-aware embeddings that facilitate graph construction and GNN message passing. Figure 1 shows the effectiveness and flexibility of hubREP in mitigating hubness while enhancing clustering performance over existing methods. Besides the hubness reduction for each view, we additionally leverage a hub-aware pairing across views by employing a consistency loss that aligns structures across views. + +In summary, our major contributions are as follows: + +- We provide the first demonstration that the hubness problem is an important underlying issue in multi-view clustering and is harmful to GMVC methods. This analysis motivates our study. +- We propose a simple but effective view-specific encoder to eliminate hubs while preserving local structures of samples for better graph construction. Moreover, the proposed encoder can be flexibly plugged into prior GMVC methods that tend to focus on the fusion aspect, leading to improved performance (see Figure 1). +- We propose a cross-view pairing approach with a hub-aware pairing mechanism to maintain structural consistency among different views. +- We provide an extensive empirical evaluation, with results conveying the effectiveness and superiority of the proposed method. + +# 2. Related Work + +Deep Graph-based Multi-view Clustering. Over the years, deep MVC has leveraged the powerful capability of deep learning, allowing for deeper analysis through deep subspace methods [13, 29, 47, 48], deep adversarial methods [18, 32, 50], and deep graph-based methods [25, 33], enabling richer representations of multi-view data. In recent years, Graph Neural Networks (GNNs) [41] have garnered substantial attention due to their ability to explore both structural and content information within data samples. While early approaches [32, 37, 43] prepare view-specific graphs for GNNs from the original data representation, more recent approaches [8, 21, 22, 25] have constructed graphs from latent embeddings. Among these, DFMVC [25] employs deep autoencoders to extract embedded features and latent graphs per view, creating a fusion graph and global feature for graph convolution. DMCAG [8] focuses on building anchor graphs from subspace representations and refines embeddings with pseudo-labels. While these methods explore latent topology for each view, they ignore the hubness problem in high-dimensional spaces. Further, the autoencoders they employ are not specifically designed for GNN-based clustering and lack local structure preservation. On the contrary, our method is designed to explicitly address the hubness problem and simultaneously maintain the view-specific topological structure of the data, boosting the clustering performance. + +The Hubness Problem. Hubness is an intrinsic property of high-dimensional representation spaces, where certain data points, known as hubs, frequently emerge among the nearest neighbors of numerous other points [24]. Therefore, hubs significantly influence graph-based tasks like distance-based graph construction and message propagation. Several communities such as cross-modal retrieval [3, 38] and zero-shot or few-shot learning [6, 9, 28, 42] have witnessed the negative impact of the hubness problem. As a remedy, various normalization methods have been utilized to project the representations onto a hypersphere [11, 36], which help reduce hubness but do not guarantee a zero data mean, a prerequisite to eliminate hubness. Trosten et al.
[28] aimed to address this problem by proposing an embedding projection method, noHub, that theoretically reduces hubness for transductive few-shot learning. However, this optimization-based approach is limited to settings with fixed representations (the transductive setting), as it does not learn a parametric mapping and lacks an out-of-sample extension, requiring retraining for new unseen samples. To address the hubness problem in GMVC, we instead introduce a parametric encoder that is able to learn a direct mapping from the input space to a latent space where hubness is reduced, while structure is preserved. + +![](images/7cab690c399438e7a4150cb78004e77085021004f98b65620ed4e73158608445.jpg) + +![](images/14f9ad4cec6dcae49d4e867a36df63dead861f2a1db10293a344c8bc4a32a124.jpg) + +![](images/36100f3db87a9ca276b2691608e6afb5bf45fa7728f9e8f82b2994a83ba934a7.jpg) + +![](images/f99d09b6e4d326ce9b0885b0a3ae2e967a682fad0fb71ea3fdab40360130e4ec.jpg) + +![](images/862dc437706b2dc9c8c3eb5eb3dc7bc8cbc3581f2f405d7f056959a68a4536c1.jpg) + +![](images/d6775f41145bb75890ed7d44f95c1b312ea5c761f608facefe52f148b40bb47f.jpg) +(a) Distribution of $N_{k}(x)$ + +![](images/ff59a9baf091e4430f1ba317fabb5c24500eb3bf115d7b30177f6742ca657fa4.jpg) +(b) Dims vs. SK + +![](images/085173a766a9eb28f5d543037d102eb9798c9f22c503fa9878a3fbf2abc5a127.jpg) +(c) Effects of different techniques on ACC and SK + +![](images/908ade3385393eda311d2704ae984e63a0e4109c42273bdc08648f0034772cc4.jpg) +(d) Dims vs. ACC + +![](images/0d6a10da5379b319838c8fdd72439b0ebd2ed17886c9ac92b812bcf65584f33f.jpg) +(e) SK vs. ACC + +![](images/6780a195fc7800a515b40340ae1deb366630cb08290d6ba9e76f3bfe625eb640.jpg) +(f) SK vs. NMI + +![](images/d93f818800961e133c21973995b377bbbdc3e21f52353d7c3999d4c8a0bdc411.jpg) +Figure 2. Empirical Studies. (a) Normalized frequency histograms of k-occurrence with different dimensions in BBCSport. The x-axis denotes $N_{k}(x)$ .
(b) The relationship between dimension and skewness (SK), a measure of hubness, on five datasets. (c) Clustering accuracy (bar chart) and SK value (dotted line) across different hubness reduction techniques. (d) The relationship between dimension and clustering performance on five datasets. (e) Correlation between SK and clustering accuracy. (f) Correlation between SK and clustering performance, measured in terms of normalized mutual information (NMI). +Figure 3. Visualization of the absolute difference of $N_{k}(x^{v})$ across two different views for the BBCSport dataset. + +# 3. Motivation + +Over the past decades, the hubness phenomenon has been neglected in the context of multi-view clustering. Instead, within graph neural network-based multi-view clustering (GMVC), existing approaches have focused on novel strategies for fusing the multi-view embeddings. To systematically investigate the role of hubness in GMVC, we conduct a series of empirical studies aimed at examining both the impact of hubness on clustering performance and the limitations of current autoencoder (AE)-based approaches in handling the hubness and structure-preservation problem. To evaluate the fundamental issues of view-specific embeddings, we focus on the core aspects of current state-of-the-art models, excluding different fusion approaches. Specifically, we use autoencoders as the view-specific embedding backbone and graph autoencoders to explore sample topology. K-means is then applied to the average-fused representation for clustering. Let $N_{k}(x)$ denote the k-occurrence, i.e., the number of times a sample $x$ appears as a k-nearest neighbor of other samples. We use the skewness (SK) metric [23] to describe the asymmetry of the $N_{k}(x)$ distribution. A positive SK value signifies a right-skewed distribution, where the tail extends to the right. The greater the SK value, the more severe the hubness problem, as more points become hubs.
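For concreteness, the k-occurrence and SK statistics can be computed in a few lines of NumPy (a minimal sketch: the choice of $k$, the Euclidean metric, and the Gaussian toy data are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def k_occurrence(X, k=10):
    # N_k(x): how often each sample appears among the k nearest
    # neighbors of the other samples (Euclidean distance, self excluded).
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]
    return np.bincount(knn.ravel(), minlength=len(X))

def skewness(nk):
    # SK: third standardized moment of the N_k distribution;
    # larger positive values indicate more severe hubness.
    z = nk - nk.mean()
    return (z ** 3).mean() / ((z ** 2).mean() ** 1.5)

rng = np.random.default_rng(0)
X_low = rng.normal(size=(300, 5))     # low-dimensional embeddings
X_high = rng.normal(size=(300, 512))  # high-dimensional embeddings
print(skewness(k_occurrence(X_low)), skewness(k_occurrence(X_high)))
```

Re-running this sketch while increasing the dimensionality mirrors the right-skew trend reported for Figures 2a-2b.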
Additional implementation details of the model and metrics can be found in the supplementary materials. + +# (I) Sensitivity to the Hubness Problem + +Observation 1: Prevalence of the hubness problem in MVC. Let $d$ denote the dimensionality of the representation in the latent space of the AE. In Figure 2a, we present the normalized frequency of hub occurrences across various $d$ on the BBCSport dataset. As $d$ increases, the histogram becomes more right-skewed, indicating that a larger number of samples frequently appear in the neighborhoods of other samples. To confirm that this hubness problem is not dataset-specific, we extended our analysis to more datasets, as shown in Figure 2b, consistently observing hubness across different data. Besides, we also visualize the absolute difference of $N_{k}(x^{v})$ between two views in Figure 3, observing that the same sample holds different hubness scores in different views, highlighting the importance of aligning cross-view hubness to maintain multi-view consistency. + +Observation 2: Reducing hubness by reducing dimensionality hurts performance. Even though autoencoders are designed to embed high-dimensional data into a lower-dimensional latent space, our empirical studies on five common benchmarks reveal that excessive dimensionality reduction can lead to significant information loss. Specifically, we varied $d$ over the set $\{10, 64, 128, 256, 512, 1024\}$ . As shown in Figure 2d, simply reducing dimensionality to avoid the hubness problem can result in suboptimal clustering performance due to the potential loss of critical information during data compression. + +Observation 3: Reducing hubness enhances GNN-based MVC. Building on the previous observations, we turn to existing simple embedding approaches, including L2 normalization [36], Z-score normalization (ZN) [11], and centered L2-normalization (CL2N) [36], which have been shown to reduce hubness effectively in the context of FSL.
Specifically, we examine the impact of these techniques on clustering performance and hubness. Here, $d = 1024$. In Figure 2c, all five datasets exhibit a consistent trend: each hubness reduction technique successfully decreases hubness while simultaneously improving clustering performance. To better quantify the relationship between hubness and clustering performance, we further performed correlation analyses over eight datasets with Pearson correlation coefficients [7], which can assess the linear relationship between two variables. Here we include three additional datasets beyond the previous five to further emphasize the importance of the problem (see Supplementary for the dataset details). As depicted in Figure 2e and Figure 2f, there is a strong, statistically significant negative correlation ( $r < 0$ , p-value $< 0.05$ ) between hubness (SK) and clustering performance (ACC, NMI). This demonstrates that the embedding-based hubness reduction techniques can further enhance the performance of GMVC, indicating the potential that hubness-aware GMVC approaches could have. + +# (II) Lack of Local Structure Preservation + +Consider multi-view data $\{\mathbf{X}^v\}_{v = 1}^V$ , where $\mathbf{X}^v = \{x_i^v\}_{i = 1}^N$ represents the data samples in view $v$ and $x_{i}^{v}\in \mathbb{R}^{d_{v}}$ , with $N$ and $d_v$ denoting the number of samples and the dimension of the $v$ -th view data, respectively. In general, conventional deep autoencoders in multi-view clustering can be expressed as minimizing the reconstruction objective: + +$$ +\mathcal{L}_{\text{autoencoder}} = \sum_{v = 1}^{V} \sum_{i = 1}^{N} \left\| x_{i}^{v} - g_{AE}^{v}\left(f_{AE}^{v}\left(x_{i}^{v}\right)\right) \right\|_{F}^{2}, \tag{1} +$$ + +where $f_{AE}^{v}(\cdot)$ and $g_{AE}^{v}(\cdot)$ are the view-specific encoders and decoders, and $\| \cdot \|_{F}$ denotes the Frobenius norm.
This objective maximizes a lower bound of the Mutual Information between $x_{i}^{v}$ and the latent representation $f_{AE}^{v}(x_{i}^{v})$ [31]. However, it does not explicitly consider the preservation of local structure. In other words, if two samples $x_{i}$ and $x_{j}$ are similar in the original space, there is no guarantee of the closeness of their latent representations $f_{AE}^{v}(x_{i})$ and $f_{AE}^{v}(x_{j})$ . As a result, the deep autoencoder may obtain sub-optimal latent embeddings. An ideal encoder $f_{G}^{v}$ for GMVC, on the other hand, should also preserve the pairwise relations among samples. + +To this end, we propose a hub-aware embedding framework designed to: (1) eliminate hubs, thereby preventing severe error propagation in subsequent graph neural network layers; (2) align cross-view samples for inter-view consistency; (3) preserve the local structure of data to enable view-specific graph construction of higher quality. + +![](images/c0368dad2639183fd583a1e095bea6fa7982996240eafd13b1f8309d59687816.jpg) +Figure 4. Overview of our proposed hubREP. The main goal of hubREP is to provide better view-specific representations for the clustering-oriented graph construction. In hubREP, we first perform hub reduction and neighbor structure preservation for each view, while pairing different views through hub-aware alignment. The resulting embeddings and graphs are then fed into graph autoencoders for the consensus representation, followed by the K-means clustering algorithm. + +# 4. Method + +Motivated by the analysis and empirical observations in Sec. 3, we propose a simple yet effective framework, dubbed hubREP, as illustrated in Figure 4, to embed data into hub-aware representations in the context of GMVC. Specifically, for intra-view data, we replace the typical encoder-decoder structure with hub-aware encoders to reduce hubness while preserving structure relations.
For the inter-view aspect, we propose a hub-aware cross-view pairing objective, ensuring that the core structure of the data is preserved across views. + +# 4.1. View-specific Hub-aware Feature Embedding + +Previous graph-based methods primarily focus on novel fusion strategies and neglect the quality of the underlying embeddings. In this work, we demonstrate that the way these embeddings are currently obtained is inherently flawed, resulting in sub-optimal downstream clustering performance (see Sec. 3). Building on these insights, and in order to address this fundamental problem of current approaches, we propose a new approach to construct effective view-specific feature embeddings. Our approach is designed to address the hubness problem and preserve neighborhood information, resulting in feature embeddings that are well-suited for graph construction in multi-view clustering. + +We demonstrated in the previous section that simple normalization techniques like L2, ZN, and CL2N can reduce hubness and improve performance by projecting points onto a hypersphere, a geometry that is more suited to reducing the hubness problem [11, 28]. However, these methods do not encourage representations to be uniformly spread across the hypersphere or to have zero mean, conditions required to eliminate hubness [28]. Further, they neglect local neighborhood preservation, resulting in suboptimal performance. + +In transductive few-shot learning (TFSL), this has resulted in the design of novel embedding approaches [28] that directly optimize the embeddings to adhere to the desired properties, following a t-SNE [30] inspired approach. However, by directly optimizing the embeddings, no mapping function is learned from the original (pre-normalized) embedding to the normalized embeddings.
While this is not a problem in TFSL, where the original embeddings are assumed fixed and the mapping is not required, it is not applicable to multi-view clustering approaches, where we need to learn the embedding function. + +Inspired by the theoretical study of [28], we therefore propose a new parametric hub-aware embedding approach for GMVC. Specifically, given the input space $\mathcal{X}^v$ and the hidden latent space $\mathcal{H}^v$ , we are interested in learning the non-linear mapping in each view $f^v: \mathcal{X}^v \to \mathcal{H}^v$ . + +To ensure neighborhood preservation, while at the same time avoiding the need for an encoder-decoder structure, we define similarities in the input space and the embedding space using a softmax over pairwise cosine similarities. Specifically, in the input space $\mathcal{X}^v$ , we compute the pairwise similarity $p_{ij}^{v} = \frac{1}{2} (p_{i|j}^{v} + p_{j|i}^{v})$ , where the conditional probability $p_{i|j}^{v}$ is formulated as: + +$$ +p_{i|j}^{v} = \frac{\exp\left(\kappa_{i}\left(x_{i}^{v\top} x_{j}^{v}\right) / \left(\left\|x_{i}^{v}\right\| \left\|x_{j}^{v}\right\|\right)\right)}{\sum_{b,m} \exp\left(\kappa_{i}\left(x_{b}^{v\top} x_{m}^{v}\right) / \left(\left\|x_{b}^{v}\right\| \left\|x_{m}^{v}\right\|\right)\right)}, \tag{2} +$$ + +where $\kappa_{i}$ is set such that the effective neighborhood of $x_{i}^{v}$ consists of a pre-defined number of neighbors, thus controlling the scale of the neighborhood. Here, $b$ and $m$ are sample indices iterating over all sample pairs.
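A minimal sketch of the input-space similarities in Eq. (2), with two simplifying assumptions flagged in the comments: a single global $\kappa$ stands in for the per-sample $\kappa_i$, and self-pairs are excluded from the normalization:

```python
import numpy as np

def input_similarities(X, kappa=10.0):
    # Softmax over kappa-scaled pairwise cosine similarities, normalized
    # over all pairs (b, m); the symmetrized result plays the role of p_ij.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    logits = kappa * (Xn @ Xn.T)                 # kappa * cosine similarity
    np.fill_diagonal(logits, -np.inf)            # drop self-pairs (assumption)
    e = np.exp(logits - logits[np.isfinite(logits)].max())  # stable softmax
    P = e / e.sum()                              # normalize over all pairs
    return 0.5 * (P + P.T)                       # p_ij = (p_{i|j} + p_{j|i}) / 2
```

Because all entries are jointly normalized, the output is a probability distribution over sample pairs that concentrates on the most cosine-similar pairs as $\kappa$ grows.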
+ +$$ +q _ {i j} ^ {v} = \frac {\exp \left(\kappa f ^ {v} \left(x _ {i} ^ {v} \mid \Theta^ {v}\right) ^ {\top} f ^ {v} \left(x _ {j} ^ {v} \mid \Theta^ {v}\right)\right)}{\sum_ {b , m} \exp \left(\kappa f ^ {v} \left(x _ {b} ^ {v} \mid \Theta^ {v}\right) ^ {\top} f ^ {v} \left(x _ {m} ^ {v} \mid \Theta^ {v}\right)\right)}, \tag {3} +$$ + +where points $x_{i / j}^{v}$ are embedded via the parametric mapping function (the view-specific encoder) $f^{\upsilon}(x_{i / j}^{v}|\Theta^{v})$ , which is parameterized by $\Theta^v$ , and $\kappa$ is a hyperparameter. Note, this + +setup follows [28] and [30], with key differences: unlike [30], our embedding space is a hypersphere, and unlike both [28] and [30] our embeddings are parameterized mappings from the input. To learn the non-linear mapping $f^v$ , parameterized by $\Theta^v$ , to preserve the local neighborhood structure, we optimize the KL divergence, resulting in: + +$$ +\underset {\Theta^ {v}} {\arg \min } K L (P \| Q) = \underset {\Theta^ {v}} {\arg \min } \sum_ {i, j} p _ {i j} ^ {v} \log \frac {p _ {i j} ^ {v}}{q _ {i j} ^ {v}}. 
\tag{4} +$$ + +This optimization can be decomposed into a combination of two losses, one encouraging the preservation of neighborhood information and one encouraging a uniform embedding on the hypersphere, resulting in: + +$$ +\mathcal{L}_{s} = \underbrace{-(1 - \lambda) \sum_{v} \sum_{i,j} \kappa\, p_{ij}^{v} \left(f^{v}\left(x_{i}^{v} \mid \Theta^{v}\right)^{\top} f^{v}\left(x_{j}^{v} \mid \Theta^{v}\right)\right)}_{\text{Neighborhood Preservation } \mathcal{L}_{np}} + \lambda \underbrace{\log \sum_{v} \sum_{b,m} \exp\left(\kappa f^{v}\left(x_{b}^{v} \mid \Theta^{v}\right)^{\top} f^{v}\left(x_{m}^{v} \mid \Theta^{v}\right)\right)}_{\text{Hubness Reduction } \mathcal{L}_{hr}}, \tag{5} +$$ + +where a trade-off parameter $\lambda$ is introduced to control the impact of each of the terms. The neighborhood preservation term $\mathcal{L}_{np}$ is minimized if highly similar points in the input space (large $p_{ij}^{v}$ ) have a high similarity in the embedding space, while the hubness reduction term $\mathcal{L}_{hr}$ corresponds to the negative entropy on the sphere, encouraging embeddings to be spread out uniformly on the sphere and thereby reducing hubness. Optimizing this loss allows us to train a view-specific hub-aware encoder that ensures representations exhibit reduced hubness while local structural information is preserved. + +# 4.2. Hub-aware Cross-view Pairing + +Representation alignment is a crucial component of deep multi-view clustering. Recent methods commonly employ contrastive learning to consider pairwise similarity by pulling similar samples closer and pushing different samples apart. However, these methods often struggle with negative sample selection.
Although some works directly align view-specific structures [44, 48], they treat each sample equally, leading to sub-optimal performance. In addition, the graphs they build via kNN are generally sparse, overlooking relations between samples. Unlike these previous works, we propose a hub-aware cross-view pairing loss that effectively aligns pairwise structures across different views.

Specifically, we introduce the hubness score to dynamically adjust the weight of each sample in each view. Given the embedding $\mathbf{H}^v = \{h_i^v\}_{i=1}^N$ from the view-specific encoder, the hubness score $s(h_j^v)$ of sample $h_j^v$ is formulated as follows:

$$
s \left(h _ {j} ^ {v}\right) = \sum_ {i = 1} ^ {N} \mathbb {1} \left(h _ {j} ^ {v} \in \operatorname {k N N} \left(h _ {i} ^ {v}\right)\right) + \epsilon , \tag {6}
$$

where $\mathbb{1}(\cdot)$ is the indicator function, returning 1 when the condition is true and 0 otherwise, and $\mathrm{kNN}(h_i^v)$ denotes the set of $k$-nearest neighbors of sample $h_i^v$. A small value $\epsilon$ ensures that $s(h_j^v)$ stays positive, taking outliers into account. As demonstrated in Sec. 3, the hubness score of the same sample varies across views, resulting in different hubness distributions. Generally speaking, hubs exhibit a centrical property and are statistically significant among all samples; they should therefore receive more attention during cross-view alignment.
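The counting in Eq. 6 is straightforward to compute from pairwise similarities. A minimal NumPy sketch, assuming row-wise L2-normalized embeddings `H` (all names here are illustrative, not the paper's implementation):

```python
# Minimal sketch of the per-view hubness score in Eq. 6, assuming
# row-wise L2-normalized embeddings H of shape (N, d).
import numpy as np

def hubness_scores(H, k=5, eps=1e-6):
    """s(h_j) = number of samples whose k-nearest-neighbor set contains
    h_j (self excluded), plus a small eps so outliers stay positive."""
    sim = H @ H.T                          # cosine similarity on the sphere
    np.fill_diagonal(sim, -np.inf)         # exclude self from kNN(h_i)
    knn = np.argsort(sim, axis=1)[:, -k:]  # indices of the k most similar
    counts = np.bincount(knn.ravel(), minlength=H.shape[0])
    return counts + eps
```

Stacking these scores per view yields the vector $\mathbf{S}^v$ used to weight the pairing loss.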
To this end, the proposed hub-aware cross-view pairing loss is defined as:

$$
\mathcal {L} _ {c} = \frac {1}{V} \sum_ {v \neq u} \left\| \left(\mathbf {S} ^ {u} \mathbf {S} ^ {v \top}\right) \odot \left(\mathbf {H} ^ {u} \mathbf {H} ^ {u \top} - \mathbf {H} ^ {v} \mathbf {H} ^ {v \top}\right) \right\| _ {F} ^ {2}, \tag {7}
$$

where $\odot$ denotes element-wise multiplication and $\mathbf{S}^u, \mathbf{S}^v \in \mathbb{R}^N$ denote the hubness scores for the $u$-th and $v$-th views, respectively.

# 4.3. Graph Autoencoder

Given the latent embedding $\mathbf{H}^v$ in each view, we first build a view-specific graph $\mathbf{A}^v$ via the kNN algorithm. To fully leverage graph structure information, we further utilize graph autoencoders [10] to enhance view-specific representations by aggregating structural information from neighbors.

Graph Encoder. For each view $v$, we employ a graph encoder $g^{v}(\cdot)$ to aggregate information from neighboring samples. The representation is updated as follows:

$$
\mathbf {Z} _ {(l)} ^ {v} = \sigma \left(\bar {\mathbf {A}} ^ {v} \mathbf {Z} _ {(l - 1)} ^ {v} \mathbf {W} _ {(l)} ^ {v} + \mathbf {b} _ {(l)} ^ {v}\right), \tag {8}
$$

where $\bar{\mathbf{A}}^v = (\widetilde{\mathbf{D}}^{v})^{-\frac{1}{2}}\widetilde{\mathbf{A}}^v(\widetilde{\mathbf{D}}^{v})^{-\frac{1}{2}}$ is the symmetrically normalized adjacency matrix, with $\widetilde{\mathbf{D}}^v$ the degree matrix of $\widetilde{\mathbf{A}}^v = \mathbf{A}^v +\mathbf{I}$, where $\mathbf{A}^v$ is the original adjacency matrix and $\mathbf{I}$ is the identity matrix. Here, $\mathbf{Z}_{(l)}^v\in \mathbb{R}^{N\times d_l}$ denotes the hidden feature matrix at the $l$-th layer for view $v$, with $\mathbf{Z}_{(0)}^v = \mathbf{H}^v$ the initial feature matrix, and $\sigma (\cdot)$ denotes the activation function.

Graph Decoder.
The decoder $d^{v}(\cdot)$ reconstructs both the feature and adjacency matrices for each view $v$, ensuring that the latent embeddings retain both attribute information and graph structure. Concretely, we introduce a feature-level decoder $e^{v}(\cdot)$ and a structure-level decoder $t^{v}(\cdot)$, yielding the reconstructed embedding $\hat{\mathbf{H}}^{v} = e^{v}(\mathbf{Z}^{v})$ and the reconstructed graph $\hat{\mathbf{A}}^{v} = t^{v}(\mathbf{Z}^{v}) = \text{sigmoid}(\mathbf{Z}^{v}\mathbf{Z}^{v\top})$. The reconstruction loss of the graph autoencoder is then formulated as:

$$
\mathcal {L} _ {g r e c} = \sum_ {v = 1} ^ {V} \left\| \mathbf {H} ^ {v} - \hat {\mathbf {H}} ^ {v} \right\| _ {F} ^ {2} + \sum_ {v = 1} ^ {V} \left\| \mathbf {A} ^ {v} - \hat {\mathbf {A}} ^ {v} \right\| _ {F} ^ {2}. \tag {9}
$$
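For concreteness, one encoder layer of Eq. 8 and the structure-level decoder can be sketched in NumPy as follows; the weights `W`, `b` and the choice of ReLU for $\sigma$ are illustrative assumptions, not the paper's exact configuration:

```python
# NumPy sketch of the graph-encoder update (Eq. 8) and the
# inner-product structure decoder used in Eq. 9.
import numpy as np

def gcn_layer(A, Z, W, b):
    """Z_(l) = sigma(A_bar Z_(l-1) W + b), where A_bar is the
    symmetrically normalized adjacency of A + I (self-loops added)."""
    A_tilde = A + np.eye(A.shape[0])                  # A~ = A + I
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))   # D~^{-1/2}
    A_bar = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_bar @ Z @ W + b, 0.0)         # ReLU activation

def structure_decoder(Z):
    """A_hat = sigmoid(Z Z^T), reconstructing the adjacency matrix."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
```

Note that the inner-product decoder is symmetric by construction, so the reconstructed graph $\hat{\mathbf{A}}^v$ is always undirected.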
| Datasets | Views | Dimensions | Samples | Clusters |
| --- | --- | --- | --- | --- |
| 3Sources | 3 | 3560/3631/3068 | 169 | 6 |
| BBCSport | 2 | 3183/3203 | 544 | 5 |
| BDGP | 2 | 1750/79 | 2500 | 5 |
| MNIST-USPS | 2 | 784/784 | 5000 | 10 |
| Hdigit | 2 | 784/256 | 10000 | 10 |
| Reuters | 5 | 2000/2000/2000/2000/2000 | 1200 | 6 |
| Animal | 4 | 2689/2000/2001/2000 | 11673 | 20 |
| Noisymnist | 2 | 784/784 | 15000 | 10 |
Table 1. Details of the multi-view datasets considered in this work.

# 4.4. The Overall Loss Function

In summary, we avoid the need for view-specific decoders and introduce a simple yet effective encoder that produces hub-aware embeddings and cross-view pairing for GMVC, jointly enhancing the preservation of neighborhood relationships, mitigating the hubness problem, and aligning structures across views.

The overall loss function can therefore be defined as:

$$
\mathcal {L} = \mathcal {L} _ {g r e c} + \alpha \mathcal {L} _ {s} + \beta \mathcal {L} _ {c}, \tag {10}
$$

where $\alpha$ and $\beta$ are trade-off parameters. Finally, we obtain the desired view-specific embeddings, which are averaged across views and passed through the K-means clustering algorithm to yield the final clustering results.

# 5. Experiment

# 5.1. Setup

Datasets, Evaluation Metrics, and Implementation Details. The experiments are carried out on eight public multi-view datasets: 3Sources $^{4}$ , BBCSport $^{5}$ , BDGP [4], MNIST-USPS [20], Hdigit [5], Reuters [2], Animal [15], and Noisymnist [35] (see Table 1). Clustering performance is assessed with seven common metrics [1]: accuracy (ACC), normalized mutual information (NMI), adjusted rand index (ARI), purity (PUR), precision (PRE), recall (REC), and F1-score (F1). Hubness is measured by the skewness (SK) [24] and hub occurrence (HO) [12]. In our experiments, we perform end-to-end training and do not need to pretrain the view-specific decoders. More experimental details are provided in the supplementary Sec. 5.

Baselines. In this paper, we compare our model with 10 state-of-the-art methods, including adversarial-based approaches [50], subspace-based methods [29, 44-46], and graph-based approaches [8, 19, 43, 45, 49].
For a fair comparison, all the baseline methods are executed with recommended parameters or by performing a parameter search to achieve optimal performance. + +
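Of the hubness metrics used in our evaluation, SK is the skewness of the k-occurrence distribution $N_k$ (how often each sample appears in other samples' k-nearest-neighbor lists). A hedged NumPy sketch is given below; the exact HO computation follows [12] and is not reproduced here, and all names are illustrative:

```python
# Sketch of the SK hubness metric: skewness of the k-occurrence
# distribution N_k over row-wise L2-normalized embeddings H.
import numpy as np

def k_occurrence(H, k=10):
    """N_k(j): number of samples whose k-NN list contains sample j."""
    sim = H @ H.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-neighbors
    knn = np.argsort(sim, axis=1)[:, -k:]   # k most similar per row
    return np.bincount(knn.ravel(), minlength=H.shape[0])

def skewness(n_k):
    """Third standardized moment of N_k; larger SK means stronger hubness."""
    mu, sigma = n_k.mean(), n_k.std()
    return ((n_k - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
```

A uniform $N_k$ (every sample equally popular as a neighbor) gives SK near zero, while a few dominant hubs push SK strongly positive.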
**3Sources**

| Method | ACC | NMI | ARI | PUR | PRE | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| MCGC (TIP '19 [49]) | 0.4201 | 0.1491 | 0.0060 | 0.4852 | 0.2350 | 0.3525 |
| EAMC (CVPR '20 [50]) | 0.3018 | 0.0899 | 0.0450 | 0.4615 | 0.2948 | 0.2515 |
| MFLVC (CVPR '22 [44]) | 0.4899 | 0.3279 | 0.2103 | 0.5763 | 0.3687 | 0.4115 |
| AECoDDC (CVPR '23 [29]) | 0.3609 | 0.1712 | 0.0947 | 0.5325 | 0.3494 | 0.3000 |
| GCFAgg (CVPR '23 [46]) | 0.5030 | 0.4202 | 0.2704 | 0.6154 | 0.4220 | 0.4638 |
| DFP-GNN (TMM '23 [43]) | 0.6450 | 0.5812 | 0.5451 | 0.7396 | 0.6352 | 0.6384 |
| DMCAG (IJCAI '23 [8]) | 0.5562 | 0.4079 | 0.3289 | 0.4793 | 0.5555 | 0.4822 |
| S2MVTC (CVPR '24 [19]) | 0.3018 | 0.0868 | 0.0124 | 0.4083 | 0.2413 | 0.2542 |
| MVCAN (CVPR '24 [45]) | 0.4260 | 0.2972 | 0.1265 | 0.5030 | 0.4299 | 0.4532 |
| SURER (AAAI '24 [34]) | 0.7160 | 0.5713 | 0.5551 | 0.7515 | 0.6045 | 0.6837 |
| Ours | **0.9290** | **0.8246** | **0.8519** | **0.9408** | **0.8692** | **0.8712** |

**Animal**

| Method | ACC | NMI | ARI | PUR | PRE | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| MCGC (TIP '19 [49]) | 0.1387 | 0.0636 | 0.1391 | 0.1293 | 0.0703 | 0.0241 |
| EAMC (CVPR '20 [50]) | 0.2557 | 0.4031 | 0.1009 | 0.3062 | 0.1975 | 0.2255 |
| MFLVC (CVPR '22 [44]) | 0.1964 | 0.1813 | 0.0712 | 0.2386 | 0.1182 | 0.1259 |
| AECoDDC (CVPR '23 [29]) | 0.2796 | 0.3841 | 0.1661 | 0.3229 | 0.1864 | 0.1862 |
| GCFAgg (CVPR '23 [46]) | 0.1441 | 0.1350 | 0.0615 | 0.2044 | 0.1097 | 0.1176 |
| DFP-GNN (TMM '23 [43]) | 0.1856 | 0.1399 | 0.0667 | 0.2226 | 0.1117 | 0.1291 |
| DMCAG (IJCAI '23 [8]) | 0.2864 | 0.4049 | 0.1689 | 0.3359 | 0.1912 | 0.1878 |
| S2MVTC (CVPR '24 [19]) | 0.2127 | 0.1637 | 0.0689 | 0.2249 | 0.1108 | 0.1357 |
| MVCAN (CVPR '24 [45]) | 0.1303 | 0.1048 | 0.0196 | 0.1854 | 0.0891 | 0.1526 |
| SURER (AAAI '24 [34]) | OOM | OOM | OOM | OOM | OOM | OOM |
| Ours | **0.2966** | **0.4904** | **0.2071** | **0.3590** | **0.2334** | **0.2334** |
Table 2. Clustering results. Best results and second best are highlighted in bold and underlined, respectively. "OOM" denotes out-of-memory.

# 5.2. Experimental Results and Analysis

Superiority of hubREP. Table 2 compares the clustering performance of our method against the ten baselines on eight datasets. It can be observed that: (1) our method achieves superior performance in all but one case, achieving, for instance, 21.30%, 24.34%, and 29.68% higher ACC, NMI, and ARI scores, respectively, than the closest competitor on the 3Sources dataset. Note that hubREP remains one of the best-performing approaches on the MNIST-USPS dataset as well.

These improvements are obtained by focusing solely on the representation embedding in each view, without leveraging more advanced fusion mechanisms, which demonstrates the effectiveness of our simple approach. In the following paragraph and Table 4, we further investigate the potential of integrating our approach with a more advanced fusion mechanism. (2) Compared to previous GMVC methods, including the most relevant DMCAG, which focuses on latent graph construction, and DFP-GNN and SURER, which build graphs from raw data, hubREP obtains superior
performance in most cases, demonstrating the effectiveness of our hub-aware embedding and pairing approach.

Simplicity and Transferability of hubREP. Our proposed hubREP primarily focuses on improving view-specific representations, making it highly adaptable for integration with alternative graph-based clustering methods. In this subsection, we perform a transferability analysis by integrating our method into DMCAG, which combines latent-space anchor graph learning with pseudo-label and contrastive learning-based fusion mechanisms. As shown in Table 4, integrating our method effectively reduces the hubness problem while simultaneously improving clustering performance. Additionally, by dropping the decoders, our approach significantly reduces the number of trainable parameters. Notably, we achieve the best results on BDGP, Reuters, and MNIST-USPS, demonstrating the superior performance and seamless integration of our method when applied to other graph-based frameworks.

Ablation Study. To verify the effectiveness of each component in hubREP, we compare our approach with several degraded variants.
**3Sources**

| Method | ACC | NMI | ARI | PUR |
| --- | --- | --- | --- | --- |
| hubREP $-\mathcal{L}_s-\mathcal{L}_c$ | 0.7065 | 0.6519 | 0.5579 | 0.8201 |
| hubREP $-\mathcal{L}_s$ | 0.3629 | 0.1686 | 0.0755 | 0.4517 |
| hubREP $-\mathcal{L}_c$ | 0.8817 | 0.7619 | 0.7604 | 0.8915 |
| hubREP | 0.9290 | 0.8246 | 0.8519 | 0.9408 |

**BBCSport**

| Method | ACC | NMI | ARI | PUR |
| --- | --- | --- | --- | --- |
| hubREP $-\mathcal{L}_s-\mathcal{L}_c$ | 0.9246 | 0.8045 | 0.8104 | 0.9511 |
| hubREP $-\mathcal{L}_s$ | 0.4136 | 0.0654 | 0.0668 | 0.4632 |
| hubREP $-\mathcal{L}_c$ | 0.9400 | 0.8362 | 0.8436 | 0.9559 |
| hubREP | 0.9632 | 0.8948 | 0.9070 | 0.9853 |

**BDGP**

| Method | ACC | NMI | ARI | PUR |
| --- | --- | --- | --- | --- |
| hubREP $-\mathcal{L}_s-\mathcal{L}_c$ | 0.7211 | 0.5991 | 0.3695 | 0.7214 |
| hubREP $-\mathcal{L}_s$ | 0.4778 | 0.3037 | 0.1283 | 0.5403 |
| hubREP $-\mathcal{L}_c$ | 0.9348 | 0.8394 | 0.8469 | 0.9419 |
| hubREP | 0.9868 | 0.9538 | 0.9674 | 0.9916 |

**Reuters**

| Method | ACC | NMI | ARI | PUR |
| --- | --- | --- | --- | --- |
| hubREP $-\mathcal{L}_s-\mathcal{L}_c$ | 0.5102 | 0.3252 | 0.2417 | 0.5418 |
| hubREP $-\mathcal{L}_s$ | 0.2417 | 0.1022 | 0.0336 | 0.3397 |
| hubREP $-\mathcal{L}_c$ | 0.5322 | 0.3987 | 0.3160 | 0.5994 |
| hubREP | 0.6533 | 0.4330 | 0.3628 | 0.6567 |
+ +Table 3. Ablation study results on four datasets with four evaluation metrics. + +
| Datasets | Method | SK ↓ | HO ↓ | ACC ↑ | NMI ↑ | ARI ↑ | #Params ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BBCSport | DMCAG [8] | 0.9671 | 0.3058 | 0.6654 | 0.5629 | 0.5027 | 11.57M |
|  | + hubREP | 0.5846 | 0.1947 | 0.8143 | 0.6654 | 0.6011 | 7.94M |
| BDGP | DMCAG [8] | 1.1501 | 1.380 | 0.9776 | 0.9396 | 0.9453 | 7.30M |
|  | + hubREP | 0.1783 | 0.0957 | 0.9928 | 0.9821 | 0.9842 | 6.00M |
| Reuters | DMCAG [8] | 0.8146 | 0.2715 | 0.5908 | 0.3551 | 0.3029 | 23.11M |
|  | + hubREP | 0.7400 | 0.2682 | 0.6805 | 0.6357 | 0.5011 | 17.02M |
| 3Sources | DMCAG [8] | 1.4033 | 0.4300 | 0.5562 | 0.4079 | 0.3289 | 17.94M |
|  | + hubREP | 0.4241 | 0.0781 | 0.7456 | 0.7029 | 0.5966 | 12.01M |
| MNIST-USPS | DMCAG [8] | 0.5403 | 0.1421 | 0.9794 | 0.9577 | 0.9556 | 7.41M |
|  | + hubREP | 0.1001 | 0.0736 | 0.9962 | 0.9900 | 0.9916 | 6.25M |
Table 4. Simplicity and transferability analysis of hubREP across five datasets. Note that the **bold** and **underlined** values achieve the best and second-best results when compared to the results in Table 2, respectively.

![](images/e1bd543f269fa86aa392cdd72177cf7dc32e70ffc8ca511278acae4c649e1c83.jpg)
Figure 5. The inner product of representations embedded by different methods on the BBCSport dataset for each view.

Figure 7. T-SNE visualization of the consensus representation on the BBCSport dataset.

The degraded variants are: (i) hubREP $-\mathcal{L}_s-\mathcal{L}_c$, which removes both $\mathcal{L}_s$ and $\mathcal{L}_c$ and replaces the hub-aware encoders with conventional autoencoders; and (ii) hubREP $-\mathcal{L}_s$ and (iii) hubREP $-\mathcal{L}_c$, which remove $\mathcal{L}_s$ and $\mathcal{L}_c$ from the full model, respectively. According to Table 3, the comparison between (i) and (ii) highlights the importance of preserving neighborhood topology for GMVC. Furthermore, our hubREP consistently outperforms both (ii) and (iii), verifying the effectiveness of jointly optimizing hub-aware representation embedding and pairing for GMVC.

Parameter Sensitivity Analysis. In this section, we empirically evaluate the trade-off hyper-parameters $\alpha$, $\beta$, and $\lambda$ in Eq. 5 and Eq. 10. Experimental results in Figure 6 indicate that performance stays stable when $\alpha$ and $\beta$ take relatively large values and drops sharply when $\alpha$ is too small, owing to the importance of intra-view embedding in MVC. Since $\lambda$ is an internal parameter of $\mathcal{L}_s$ controlled by $\alpha$, we fix $\beta$ as 1 to investigate the effect of $\lambda$. Note that when $\lambda = 0$ or $\lambda = 1$, Eq. 5 degrades to $\mathcal{L}_{np}$ or $\mathcal{L}_{hr}$, respectively. It can be observed in Figure 6 that $\lambda$ in the range $[0.3, 0.7]$ achieves higher performance. Empirically, we recommend setting $\lambda$ to 0.5.
Visualization. To demonstrate the effectiveness of our method, we plot the inner product matrices for each view in Figure 5. Compared to raw features, conventional normalization methods (see supplementary material Sec. 5 for AE+CL2N), and SURER [34], our approach provides a clearer structure by preserving local neighbor relationships and ensuring consistent alignment across views. This is achieved through our hub-pairing strategy, which reduces noisy hubs. More intuitively, the t-SNE visualization in Figure 7 shows that our method learns more discriminative representations, resulting in better clustering performance.

![](images/ed8b8cfc17fdcf6153e2bed0bb13924331c36fe24189ff58791f97a30be9bf00.jpg)
(a) ACC with $\beta$ vs $\alpha$

![](images/784dfb5c9e422b90b7469d915f3f4e957992ebce54cea9b9d96cb047cf90dfd4.jpg)
(b) ACC with $\alpha$ vs $\lambda$

![](images/ab636593ce97d8a584dfcaea2b996145131eebbdc21f2bad38d615f3e6dd2857.jpg)
Figure 6. Parameter sensitivity analysis on BBCSport.

(a) AE+GNN

![](images/99bc282ac72ffd4eeb958b8daca96180f88b5e72551df2fdd719b988f2523237.jpg)
(b) SURER

![](images/a5cd56b35b3644ba5094b2ead4180d786991b653a3f103ab99f81bdb5f26dc82.jpg)
(c) Ours

# 6. Conclusion

In this paper, we demonstrate and address the hubness problem in graph-based multi-view clustering (GMVC) by proposing hubREP, a simple yet efficient encoder that simultaneously reduces hubness, preserves neighborhood topology, and aligns hub-weighted structures across views, embedding representations well suited for cluster-friendly graph generation. By focusing on view-specific embeddings, hubREP offers a flexible solution that can be integrated into other GMVC methods, including those with more advanced fusion modules. Extensive experimental results across eight datasets demonstrate the effectiveness and adaptability of the proposed method.
+ +# Acknowledgment + +This work was supported by the National Natural Science Foundation of China (No.62402031), the Beijing Nova Program (No.20240484620), the Beijing Natural Science Foundation (No. L221011), and the Research Council of Norway (grant 309439 and 315029). + +# References + +[1] Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. A comparison of extrinsic clustering evaluation metrics based on formal constraints. Inf. Retr., 12(4):461-486, 2009. 6 +[2] Massih R Amini, Nicolas Usunier, and Cyril Goutte. Learning from multiple partially observed views—an application to multilingual text categorization. NIPS, pages 28-36, 2009. 6 +[3] Simion-Vlad Bogolin, Ioana Croitoru, Hailin Jin, Yang Liu, and Samuel Albanie. Cross modal retrieval with querybank normalisation. In CVPR, pages 5184-5195, 2022. 2 +[4] Xiao Cai, Hua Wang, Heng Huang, and Chris Ding. Joint stage recognition and anatomical annotation of drosophila gene expression patterns. Bioinform., 28(12):16-24, 2012. 6 +[5] Man-Sheng Chen, Jia-Qi Lin, Xiang-Long Li, Bao-Yu Liu, Chang-Dong Wang, Dong Huang, and Jian-Huang Lai. Representation learning in multi-view clustering: a literature review. Data Sci. Eng., 7(3):225-241, 2022. 6 +[6] Ali Cheraghian, Shafin Rahman, Dylan Campbell, and Lars Petersson. Mitigating the hubness problem for zero-shot learning of 3d objects. In BMVC, page 41, 2019. 2 +[7] Israel Cohen, Yiteng Huang, Jingdong Chen, Jacob Benesty, Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. Pearson correlation coefficient. Noise Reduction in Speech Processing, pages 1-4, 2009. 4 +[8] Chenhang Cui, Yazhou Ren, Jingyu Pu, Xiaorong Pu, and Lifang He. Deep multi-view subspace clustering with anchor graph. In IJCAI, pages 3577-3585, 2023. 1, 2, 6, 7, 8 +[9] Georgiana Dinu and Marco Baroni. Improving zero-shot learning by mitigating the hubness problem. In ICLR Workshop, 2015. 2 +[10] Shaohua Fan, Xiao Wang, Chuan Shi, Emiao Lu, Ken Lin, and Bai Wang. 
One2multi graph autoencoder for multi-view graph clustering. In WWW, pages 3070-3076, 2020. 6 +[11] Nanyi Fei, Yizhao Gao, Zhiwu Lu, and Tao Xiang. Z-score normalization, hubness, and few-shot learning. In ICCV, pages 142-151, 2021. 2, 4, 5 +[12] Arthur Flexer and Dominik Schnitzer. Choosing p norms in high-dimensional spaces based on hub analysis. Neurocomputing, 169:281-287, 2015. 6 +[13] Jiaqi Jin, Siwei Wang, Zhibin Dong, Xinwang Liu, and En Zhu. Deep incomplete multi-view clustering with cross-view partial sample and prototype alignment. In CVPR, pages 11600-11609, 2023. 2 +[14] Guanzhou Ke, Bo Wang, Xiaoli Wang, and Shengfeng He. Rethinking multi-view representation learning via distilled disentangling. In CVPR, pages 26764-26773, 2024. 1 + +[15] Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE Trans. Pattern Anal. Mach. Intell., 36 (3):453-465, 2014. 6 +[16] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nat., 521(7553):436-444, 2015. 1 +[17] Yingming Li, Ming Yang, and Zhongfei Zhang. A survey of multi-view representation learning. IEEE Trans. Knowl. Data Eng., 31(10):1863-1883, 2019. 1 +[18] Zhaoyang Li, Qianqian Wang, Zhiqiang Tao, Quanxue Gao, Zhaohua Yang, et al. Deep adversarial multi-view clustering network. In IJCAI, pages 2952-2958, 2019. 1, 2 +[19] Zhen Long, Qiyuan Wang, Yazhou Ren, Yipeng Liu, and Ce Zhu. S2mvtc: a simple yet efficient scalable multi-view tensor clustering. In CVPR, pages 24088-24097, 2024. 6, 7 +[20] Xi Peng, Zhenyu Huang, Jiancheng Lv, Hongyuan Zhu, and Joey Tianyi Zhou. Comic: Multi-view clustering without parameter selection. In ICML, pages 5092-5101, 2019. 6 +[21] Jingyu Pu, Chenhang Cui, Yazhou Ren, Zichen Wen, Tianyi Wu, Xiaorong Pu, and Lifang He. Deep multi-view clustering via view-specific representation and global graph. In ACAIT, pages 493-502, 2023. 
2 +[22] Jingyu Pu, Chenhang Cui, Xinyue Chen, Yazhou Ren, Xiaorong Pu, Zhifeng Hao, S Yu Philip, and Lifang He. Adaptive feature imputation with latent graph for deep incomplete multi-view clustering. In AAAI, pages 14633-14641, 2024. 2 +[23] Miloš Radovanović, Alexandros Nanopoulos, and Mirjana Ivanović. Nearest neighbors in high-dimensional data: The emergence and influence of hubs. In ICML, pages 865-872, 2009. 2, 3 +[24] Milos Radovanovic, Alexandros Nanopoulos, and Mirjana Ivanovic. Hubs in space: Popular nearest neighbors in high-dimensional data. J. Mach. Learn. Res., 11:2487-2531, 2010. 2, 6 +[25] Yazhou Ren, Jingyu Pu, Chenhang Cui, Yan Zheng, Xinyue Chen, Xiaorong Pu, and Lifang He. Dynamic weighted graph fusion for deep multi-view clustering. In *IJCAI*, pages 4842-4850, 2024. 1, 2 +[26] Tong Tong, Katherine Gray, Qinquan Gao, Liang Chen, Daniel Rueckert, Alzheimer's Disease Neuroimaging Initiative, et al. Multi-modal classification of alzheimer's disease using nonlinear graph fusion. Pattern Recognit., 63:171-181, 2017. 1 +[27] Daniel J Trosten, Sigurd Lokse, Robert Jenssen, and Michael Kampffmeyer. Reconsidering representation alignment for multi-view clustering. In CVPR, pages 1255–1265, 2021. 1 +[28] Daniel J Trosten, Rwiddhi Chakraborty, Sigurd Løkse, Kristoffer Knutsen Wickström, Robert Jenssen, and Michael C Kampffmeyer. Hubs and hyperspheres: Reducing hubness and improving transductive few-shot learning with hyperspherical embeddings. In CVPR, pages 7527-7536, 2023. 2, 5 +[29] Daniel J Trosten, Sigurd Løkse, Robert Jenssen, and Michael C Kampffmeyer. On the effects of self-supervision and contrastive alignment in deep multi-view clustering. In CVPR, pages 23976-23985, 2023. 1, 2, 6, 7 +[30] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. J. Mach. Learn. Res., 9(11), 2008. 5 + +[31] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and León Bottou. 
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11(12), 2010. 4 +[32] Haiyue Wang, Wensheng Zhang, and Xiaoke Ma. Contrastive and adversarial regularized multi-level representation learning for incomplete multi-view clustering. *Neural Networks*, 172: 106102, 2024. 1, 2 +[33] Jing Wang and Songhe Feng. Contrastive and view-interaction structure learning for multi-view clustering. In IJCAI, pages 5055-5063, 2024. 2 +[34] Jing Wang, Songhe Feng, Gengyu Lyu, and Jiazheng Yuan. Surer: Structure-adaptive unified graph neural network for multi-view clustering. In AAAI, pages 15520-15527, 2024. 1, 7, 8 +[35] Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In ICML, pages 1083-1092, 2015. 6 +[36] Yan Wang, Wei-Lun Chao, Kilian Q Weinberger, and Laurens Van Der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. CoRR, abs/1911.04623, 2019. 2, 4 +[37] Yiming Wang, Dongxia Chang, Zhiqiang Fu, and Yao Zhao. Consistent multiple graph embedding for multi-view clustering. IEEE Trans. Multim., 25:1008-1018, 2023. 1, 2 +[38] Yimu Wang, Xiangru Jian, and Bo Xue. Balance act: Mitigating hubness in cross-modal retrieval with query and gallery banks. In EMNLP, pages 10542-10567, 2023. 2 +[39] Yuqi Wang, Jiawei He, Lue Fan, Hongxin Li, Yuntao Chen, and Zhaoxiang Zhang. Driving into the future: Multiview visual forecasting and planning with world model for autonomous driving. In CVPR, pages 14749-14759, 2024. 1 +[40] Jie Wen, Zheng Zhang, Lunke Fei, Bob Zhang, Yong Xu, Zhao Zhang, and Jinxing Li. A survey on incomplete multiview clustering. IEEE Trans. Syst. Man Cybern. Syst., 53(2): 1136-1149, 2022. 1 +[41] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Trans. Neural Networks Learn. Syst., 32(1):4-24, 2021. 
2 +[42] Chenxi Xiao, Naveen Madapana, and Juan Wachs. One-shot image recognition using prototypical encoders with reduced hubness. In WACV, pages 2251-2260, 2021. 2 +[43] Shunxin Xiao, Shide Du, Zhaoliang Chen, Yunhe Zhang, and Shiping Wang. Dual fusion-propagation graph neural network for multi-view clustering. IEEE Trans. Multim., 25: 9203-9215, 2023. 1, 2, 6, 7 +[44] Jie Xu, Huayi Tang, Yazhou Ren, Liang Peng, Xiaofeng Zhu, and Lifang He. Multi-level feature learning for contrastive multi-view clustering. In CVPR, pages 16030-16039, 2022. 5, 6, 7 +[45] Jie Xu, Yazhou Ren, Xiaolong Wang, Lei Feng, Zheng Zhang, Gang Niu, and Xiaofeng Zhu. Investigating and mitigating the side effects of noisy views for self-supervised clustering algorithms in practical multi-view scenarios. In CVPR, pages 22957-22966, 2024. 6, 7 + +[46] Weiqing Yan, Yuanyang Zhang, Chenlei Lv, Chang Tang, Guanghui Yue, Liang Liao, and Weisi Lin. Gcfagg: global and cross-view feature aggregation for multi-view clustering. In CVPR, pages 19863-19872, 2023. 6, 7 +[47] Xiaoqiang Yan, Zhixiang Jin, Fengshou Han, and Yangdong Ye. Differentiable information bottleneck for deterministic multi-view clustering. In CVPR, pages 27425-27434, 2024. 1, 2 +[48] Xihong Yang, Jin Jiaqi, Siwei Wang, Ke Liang, Yue Liu, Yi Wen, Suyuan Liu, Sihang Zhou, Xinwang Liu, and En Zhu. Dealmvc: Dual contrastive calibration for multi-view clustering. In ACM MM, pages 337-346, 2023. 2, 5 +[49] Kun Zhan, Feiping Nie, Jing Wang, and Yi Yang. Multiview consensus graph clustering. IEEE Trans. Image Process., 28 (3):1261-1270, 2019. 6, 7 +[50] Runwu Zhou and Yi-Dong Shen. End-to-end adversarial-attention network for multi-modal clustering. In CVPR, pages 14607-14616, 2020. 
1, 2, 6, 7 \ No newline at end of file diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/images.zip b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dbed9017a9802bf3533acd0970b316d16d38caa9 --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf73821005eae73ec307e661f9cd8d1610d18bf587d5e4e97272f253ae849418 +size 774230 diff --git a/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/layout.json b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a6017d3bdb0d33269488ea4692aff077b70b029f --- /dev/null +++ b/CVPR/2025/A Hubness Perspective on Representation Learning for Graph-Based Multi-View Clustering/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aeccf870e87f4dd6e9dfc040b38abcced7e02abe2d9878cc288e9d53cfeea3ad +size 452839 diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_content_list.json b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..569d366164b92470fc1cbfdf90106ca4e383ffda --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:aaa1789659842792b1d1aa38569bc822a1fac249aef3be7cf1a4ce0417ad3446 +size 77481 diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_model.json b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_model.json new file mode 100644 index 0000000000000000000000000000000000000000..13ebc8ff6464d588211a24642cae33ca8ae6dffe --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5beca7d1323475adeee3162c7e23a0629a8965f49c6a09bfa17b1abe88f17649 +size 96344 diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_origin.pdf b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7d58b6e1dd3e6bbcd4e3d2922daf9cd054cce0b9 --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/3ec10370-a835-4f8b-9dd6-310bcae1dc70_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffd6882a75555fdeacc579ee9e885cb4451780e89265bb81b0159d2026a0b038 +size 6426577 diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/full.md b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9d5bb874b79b30821758b0d580c34682ea03d549 --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based 
on Local Shape Functions/full.md @@ -0,0 +1,284 @@

# A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions

Jiangbei Hu $^{1,2*}$ Yanggeng Li $^{2*}$ Fei Hou $^{3,4}$ Junhui Hou $^{5}$ Zhebin Zhang $^{6}$ Shengfa Wang $^{1}$ Na Lei $^{1}$ Ying He $^{2\dagger}$

$^{1}$ School of Software, Dalian University of Technology $^{2}$ CCDS, Nanyang Technological University $^{3}$ Key Laboratory of System Software (CAS) and SKLCS, Institute of Software, CAS $^{4}$ University of Chinese Academy of Sciences

$^{5}$ Department of Computer Science, City University of Hong Kong $^{6}$ OPPO

# Abstract

Unsigned distance fields (UDFs) provide a versatile framework for representing a diverse array of 3D shapes, encompassing both watertight and non-watertight geometries. Traditional UDF learning methods typically require extensive training on large 3D shape datasets, which is costly and necessitates re-training for new datasets. This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs. We observe that 3D shapes manifest simple patterns in localized regions, prompting us to develop a training dataset of point cloud patches characterized by mathematical functions that represent a continuum from smooth surfaces to sharp edges and corners. Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation. Despite being highly lightweight, with only 653 KB of trainable parameters and a modest-sized training dataset with 0.5 GB storage, our method enables efficient and robust surface reconstruction from point clouds without requiring shape-specific training. Furthermore, our method exhibits enhanced resilience to noise and outliers in point clouds compared to existing methods.
We conduct comprehensive experiments and comparisons across various datasets, including synthetic and real-scanned point clouds, to validate our method's efficacy. Notably, our lightweight framework offers rapid and reliable initialization for other unsupervised iterative approaches, improving both the efficiency and accuracy of their reconstructions. Our project and code are available at https://jbhu67.github.io/LoSF-UDF.github.io/. + +# 1. Introduction + +3D surface reconstruction from raw point clouds is a significant and long-standing problem in computer graphics and machine vision. Traditional techniques like Poisson Surface Reconstruction [19] create an implicit indicator function from oriented points and reconstruct the surface by extracting an appropriate isosurface. The advancement of artificial intelligence has led to the emergence of numerous neural network-based methods for 3D reconstruction. Among these, neural implicit representations have gained significant influence, which utilize signed distance fields (SDFs) [1, 2, 7, 33, 38, 42, 43] and occupancy fields [6, 10, 29, 35] to implicitly depict 3D geometries. SDFs and occupancy fields extract isosurfaces by solving regression and classification problems, respectively. However, both techniques require internal and external definitions of the surfaces, limiting their capability to reconstructing only watertight geometries. Therefore, unsigned distance fields [11, 15, 22, 27, 37, 46, 48, 49] have recently gained increasing attention due to their ability to reconstruct nonwatertight surfaces and complex geometries with arbitrary topologies. + +Reconstructing 3D geometries from raw point clouds using UDFs presents significant challenges due to the non-differentiability near the surface. This characteristic complicates the development of loss functions and undermines the stability of neural network training. 
Various unsupervised approaches [15, 48, 49] have been developed to tailor loss functions that leverage the intrinsic characteristics of UDFs, ensuring that the reconstructed geometry aligns closely with the original point clouds. However, these methods suffer from slow convergence, necessitating extensive network training time to reconstruct a single geometry. As a supervised method, GeoUDF [37] learns local geometric priors through training on datasets such as ShapeNet [8], thus achieving efficient UDF estimation. Nonetheless, the generalizability of this approach is dependent on the training dataset, which also leads to relatively high computational costs.

In this paper, we propose a lightweight and effective supervised learning framework, LoSF-UDF, to address these challenges. Since learning UDFs does not require determining whether a query point is inside or outside the geometry, the UDF is a local quantity independent of the global context. Inspired by the observation that 3D shapes manifest simple patterns within localized areas, we synthesize a training dataset comprising a set of point cloud patches by utilizing local shape functions. Subsequently, we can estimate the unsigned distance values by learning local geometric features through an attention-based network. Our approach distinguishes itself from existing methods by its novel training strategy. Specifically, it is uniquely trained on synthetic surfaces, yet it demonstrates remarkable capability in predicting UDFs for a wide range of common surface types. For smooth surfaces, we generate training patches (quadratic surfaces) by analyzing principal curvatures, while we design simple shape functions to imitate sharp features. This strategy has three unique advantages. First, it systematically captures the local geometries of most common surfaces encountered during testing, effectively mitigating the dataset dependence risk that plagues current UDF learning methods.
Second, for each training patch, the ground-truth UDF is readily available, streamlining the training process. Third, this approach substantially reduces the costs associated with preparing the training datasets. We evaluate our framework on various datasets and demonstrate its ability to robustly reconstruct high-quality surfaces, even for point clouds with noise and outliers. Notably, our method can serve as a lightweight initialization that can be integrated with existing unsupervised methods to enhance their performance. We summarize our main contributions as follows.

- We present a simple yet effective data-driven approach that learns UDFs directly from a synthetic dataset consisting of point cloud patches, which is independent of the global shape.
- Our method is computationally efficient and requires training only once on our synthetic dataset. Then it can be applied to reconstruct a wide range of surface types.
- Our lightweight framework offers rapid and reliable initialization for other unsupervised iterative approaches, improving both efficiency and accuracy.

# 2. Related work

Neural surface representations. Reconstructing 3D surfaces from point clouds is a classic and important topic in computer graphics [4, 5, 44]. Recently, the domain of deep learning has spurred significant advances in the implicit neural representation of 3D shapes. Some of these works trained a classifier neural network to construct occupancy fields [6, 10, 29, 35] for representing 3D geometries. Poco [6] achieves superior reconstruction performance by introducing convolution into occupancy fields. Ouasfi et al. [32] recently proposed a margin-based uncertainty measure to learn occupancy fields from sparse point clouds. Compared to occupancy fields, SDFs [1, 2, 33, 38, 42, 43] offer a more precise geometric representation by differentiating between interior and exterior spaces through the assignment of signs to distances.
Unsigned distance fields learning. Although occupancy fields and SDFs have undergone significant development recently, they struggle to reconstruct surfaces with boundaries or non-manifold features. G-Shell [25] developed a differentiable shell-based representation for both watertight and non-watertight surfaces. However, UDFs provide a simpler and more natural way to represent general shapes [11, 15, 22, 27, 37, 46, 48, 49]. Various methods have been proposed to reconstruct surfaces from point clouds by learning UDFs. CAP-UDF [48] suggested directing 3D query points toward the surface with a consistency constraint to develop consistency-aware UDFs. LevelSetUDF [49] learned a smooth zero-level function within UDFs through level set projections. As a supervised approach, GeoUDF [37] estimates UDFs by learning local geometric priors from training on many 3D shapes. DUDF [15] formulated UDF learning as an Eikonal problem with distinct boundary conditions. UODF [27] proposed unsigned orthogonal distance fields, in which every point can access the closest surface points along three orthogonal directions. Instead of reconstructing from point clouds, many recent works [13, 23, 26, 28] learn high-quality UDFs from multi-view images to reconstruct non-watertight surfaces. Furthermore, UDiFF [50] presents a 3D diffusion model for UDFs to generate textured 3D shapes with boundaries.

Local-based reconstruction. Most methods achieve 3D reconstruction by constructing a global function from point clouds. For example, Poisson methods [19, 20] fit surfaces by solving partial differential equations, while neural network-based methods like DeepSDF [9, 30, 34] represent geometry through network optimization. The limitation of most global methods lies in their need for extensive datasets for training, coupled with inadequate generalization to unseen shape categories.
Conversely, 3D surfaces exhibit local similarities and repetitions, which have spurred the development of techniques for reconstructing surfaces locally. Ohtake et al. [31] introduced a shape representation utilizing a multi-scale partition of unity framework, wherein the local shapes of surfaces are characterized by piecewise quadratic functions. DeepLS [7] and LDIF [16] reconstructed local SDFs by training learnable implicit functions or neural networks. PatchNets [40] proposed a mid-level patch-based surface representation, facilitating the development of models with enhanced generalizability. Ying et al. [47] introduced a local-to-local shape completion framework that utilized adaptive local basis functions. While these methods all focus on SDFs, GeoUDF [37] represents a recent advancement in reconstructing UDFs from a local perspective.

![](images/91cc00ddcb88e465d9cdbd620b8b667ee8e26ff6520271c24437b0d6293582dd.jpg)
Figure 1. Pipeline. First, we train a UDF prediction network $\mathcal{U}_{\Theta}$ on a synthetic dataset, which contains a series of local point cloud patches that are independent of specific shapes. Given a global point cloud $\mathbf{P}$ , we then extract a local patch $\mathcal{P}$ assigned to each query point $\mathbf{q}$ within a specified radius, and obtain the corresponding UDF values $\mathcal{U}_{\hat{\Theta}}(\mathcal{P},\mathbf{q})$ . Finally, we extract the mesh corresponding to the input point cloud by incorporating the DCUDF [18] framework.

![](images/e8b1366ac7b21ec536e8a54eec58c4db75360d9c6d444c9ed8cb5e7972cac5f7.jpg)

# 3. Method

Motivation. Distinct from SDFs, UDFs do not need to determine a sign to distinguish between the inside and outside of a shape. Consequently, the UDF values are solely related to the local geometric characteristics of 3D shapes. Furthermore, within a certain radius around a query point, local geometry can be approximated by general mathematical functions.
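As a toy numerical illustration of this locality (our sketch, not the paper's code): the unsigned distance at a query point can be computed from the nearby patch alone, with no inside/outside sign and no global shape context.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small planar patch z = 0, sampled uniformly inside a disk of radius 0.1.
n = 256
theta = rng.uniform(0.0, 2.0 * np.pi, n)
rad = 0.1 * np.sqrt(rng.uniform(0.0, 1.0, n))
patch = np.stack([rad * np.cos(theta), rad * np.sin(theta), np.zeros(n)], axis=1)

# A query point hovering 0.05 above the patch center.
q = np.array([0.0, 0.0, 0.05])

# Unsigned distance = distance to the closest patch point; only the local
# geometry around q matters.
udf = np.min(np.linalg.norm(patch - q, axis=1))
print(udf)  # slightly above 0.05 for a dense enough sample
```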
Stemming from these insights, we propose a novel UDF learning framework that focuses on local geometries. We employ local shape functions to construct a series of point cloud patches as our training dataset, which includes common smooth and sharp geometric features. Given a point cloud to reconstruct, we employ the optimized model to output the corresponding distance values based on the local patch within a radius of each query point. Figure 1 illustrates the pipeline of our proposed UDF learning framework.

# 3.1. Local shape functions

Smooth patches. From the viewpoint of differential geometry [14], the local geometry at a specific point on a regular surface can be approximated by a quadratic surface. Specifically, consider a regular surface $S: \mathbf{r} = \mathbf{r}(u, v)$ with a point $\mathbf{p}$ on it. At point $\mathbf{p}$ , it is possible to identify two principal direction unit vectors, $\mathbf{e}_1$ and $\mathbf{e}_2$ , with the corresponding normal $\mathbf{n} = \mathbf{e}_1 \times \mathbf{e}_2$ .

![](images/f2efec7b04472020427c791b433f4ff79b91852ae78af0d2f681de964de0c0c7.jpg)
(a) Smooth patches

![](images/15bc32f3b5fb5252604bc9c0591fda650ce1fe94e3e50de03b9de6727ba76ba3.jpg)
(b) Sharp patches
Figure 2. Local geometries. (a) For points on a geometry that are differentiable, the local shape at these points can be approximated by quadratic surfaces. (b) For points that are non-differentiable, we can also construct locally approximated surfaces using functions.
A suitable parameter system $(u, v)$ can be determined such that $\mathbf{r}_u = \mathbf{e}_1$ and $\mathbf{r}_v = \mathbf{e}_2$ , thus obtaining the corresponding first and second fundamental forms as

$$
[\mathrm{I}]_{\mathbf{p}} = \left[\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right], \quad [\mathrm{II}]_{\mathbf{p}} = \left[\begin{array}{cc} \kappa_{1} & 0 \\ 0 & \kappa_{2} \end{array}\right], \tag{1}
$$

where $\kappa_{1},\kappa_{2}$ are the principal curvatures. Without loss of generality, we assume $\mathbf{p}$ corresponds to $u = v = 0$ and expand the Taylor series at this point as

$$
\begin{array}{l} \mathbf{r}(u, v) = \mathbf{r}(0, 0) + \mathbf{r}_{u}(0, 0) u + \mathbf{r}_{v}(0, 0) v + \frac{1}{2} [\mathbf{r}_{uu}(0, 0) u^{2} \\ + 2\mathbf{r}_{uv}(0, 0) uv + \mathbf{r}_{vv}(0, 0) v^{2}] + o\left(u^{2} + v^{2}\right). \tag{2} \end{array}
$$

Decomposing $\mathbf{r}_{uu}(0,0),\mathbf{r}_{uv}(0,0)$ , and $\mathbf{r}_{vv}(0,0)$ along the tangential and normal directions, we can formulate Eq. (2) according to Eq. (1) as

$$
\begin{array}{l} \mathbf{r}(u, v) = \mathbf{r}(0, 0) + (u + o(\sqrt{u^{2} + v^{2}})) \mathbf{e}_{1} + (v \\ + o(\sqrt{u^{2} + v^{2}})) \mathbf{e}_{2} + \frac{1}{2} \left(\kappa_{1} u^{2} + \kappa_{2} v^{2} + o\left(u^{2} + v^{2}\right)\right) \mathbf{n}, \tag{3} \end{array}
$$

where $o(u^2 + v^2) \approx 0$ is negligible in a small local region. Consequently, by adopting $\{\mathbf{p}, \mathbf{e}_1, \mathbf{e}_2, \mathbf{n}\}$ as the orthogonal coordinate system, we can define the form of the local approximating surface as

$$
x = u, \quad y = v, \quad z = \frac{1}{2} \left(\kappa_{1} u^{2} + \kappa_{2} v^{2}\right), \tag{4}
$$

which is exactly the quadratic surface $z = \frac{1}{2} (\kappa_1 x^2 + \kappa_2 y^2)$ .
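As a quick illustration (our sketch, not the authors' code), the local quadratic patch of Eq. (4) can be evaluated directly, and such patches can be categorized by the sign of the Gaussian curvature $\kappa_1\kappa_2$; `quad_patch_z` and `patch_type` are hypothetical helper names:

```python
def quad_patch_z(x, y, k1, k2):
    """Height of the local quadratic approximation z = (k1*x^2 + k2*y^2) / 2, Eq. (4)."""
    return 0.5 * (k1 * x**2 + k2 * y**2)

def patch_type(k1, k2, eps=1e-12):
    """Categorize a quadratic patch by the Gaussian curvature K = k1 * k2."""
    K = k1 * k2
    if K > eps:
        return "ellipsoidal"  # both principal curvatures bend the same way
    if K < -eps:
        return "hyperbolic"   # saddle-shaped
    if abs(k1) > eps or abs(k2) > eps:
        return "parabolic"    # cylinder-like: exactly one nonzero curvature
    return "planar"

print(patch_type(1.0, 0.5), patch_type(1.0, -0.5), patch_type(1.0, 0.0), patch_type(0.0, 0.0))
```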
Furthermore, based on the sign of the Gaussian curvature $\kappa_{1}\kappa_{2}$ , quadratic surfaces can be categorized into four types: ellipsoidal, hyperbolic, parabolic, and planar. As shown in Fig. 2, for differentiable points on a general geometry, the local shape features can always be described by one of these four types of quadratic surfaces.

Sharp patches. Surfaces with sharp features are not differentiable at some points and therefore cannot be approximated there by a quadratic surface. We categorize commonly seen sharp geometric features into four types, including creases, cusps, corners, and v-saddles, as illustrated in Fig. 2(b). We construct these four types of sharp features in a consistent form $z = f(x,y)$ like smooth patches. We define a family of functions as

$$
z = 1 - h \cdot g(x, y), \tag{5}
$$

where $h$ can adjust the sharpness of the shape. Specifically, $g = \frac{|kx - y|}{\sqrt{1 + k^2}}$ for creases ( $k$ can control the direction), $g = \sqrt{x^2 + y^2}$ for cusps, $g = \max(|x|, |y|)$ for corners, and $g = (|x| + |y|) \cdot \left(\frac{|x|}{x} \cdot \frac{|y|}{y}\right)$ for v-saddles. Fig. 3 illustrates several examples of smooth and sharp patches with distinct parameters.

![](images/010edef8e12b68045c85dc83ce962839f65f07a15efe578827e9482e8ffba766.jpg)
Figure 3. Synthetic surfaces for training. By manipulating functional parameters, we can readily create various smooth and sharp surfaces, subsequently acquiring pairs of point cloud patches and query points via sampling.

Synthetic training dataset. We utilize the mathematical functions introduced above to synthesize a series of point cloud patches for training. As shown in Fig. 3, we first uniformly sample $m$ points $\{(x_i, y_i)\}_{i=1}^m$ within a circle of radius $r_0$ centered at $(0,0)$ in the $xy$ -plane. Then, we substitute the coordinates into Eqs.
(4) and (5) to obtain the corresponding $z$ -coordinate values, resulting in a patch $\mathcal{P} = \{\mathbf{p}_i\}_{i=1}^m$ , where $\mathbf{p}_i = (x_i, y_i, z(x_i, y_i))$ . Subsequently, we randomly collect query points $\{\mathbf{q}_i\}_{i=1}^n$ distributed along the vertical ray intersecting the $xy$ -plane at the origin, extending up to a distance of $r_0$ . For each query point $\mathbf{q}_i$ , we determine its UDF value $\mathcal{U}(\mathbf{q}_i)$ , which is either $|\mathbf{q}_i^{(z)}|$ for smooth patches or $1 - |\mathbf{q}_i^{(z)}|$ for sharp patches. Note that for patches with excessively high curvature or sharpness, the minimum distance from a query point may not be the distance to $(0,0, z(0,0))$ ; we exclude such patches from our training dataset. Overall, each sample in our synthetic dataset is specifically in the form of $\{\mathbf{q}, \mathcal{P}, \mathcal{U}(\mathbf{q})\}$ .

# 3.2. UDF learning

We perform supervised training on the synthesized dataset, which is independent of specific shapes. The network learns the features of local geometries and utilizes an attention-based module to output the corresponding UDF values from the learned features. After training, given any 3D point cloud and a query point in space, we extract the local point cloud patch near the query, which has the same form as the data in the training dataset. Consequently, our network can predict the UDF value at that query point based on this local point cloud patch.

# 3.2.1. Network architecture

For a sample $\{\mathbf{q},\mathcal{P} = \{\mathbf{p}_i\}_{i = 1}^m,\mathcal{U}(\mathbf{q})\}$ , we first obtain a latent code $\mathbf{f}_p\in \mathbb{R}^{l_p}$ related to the local point cloud patch $\mathcal{P}$ through a Point-Net [36] $\mathcal{F}_p$ .
To derive features related to distance, we use relative vectors from the patch points to the query point, $\mathcal{V} = \{\mathbf{p}_i - \mathbf{q}\}_{i = 1}^m$ , as input to a Vectors-Net $\mathcal{F}_v$ , which is similar to the Point-Net $\mathcal{F}_p$ . This process results in an additional latent code $\mathbf{f}_v\in \mathbb{R}^{l_v}$ . Subsequently, we apply a cross-attention module [41] to obtain the feature codes for the local geometry,

$$
\mathbf{f}_{G} = \operatorname{CrossAttn}\left(\mathbf{f}_{p}, \mathbf{f}_{v}\right) \in \mathbb{R}^{l_{G}}, \tag{6}
$$

where we take $\mathbf{f}_p$ as the Key-Value (KV) pair and $\mathbf{f}_v$ as the Query (Q). In our experiments, we set $l_p = l_v = 64$ and $l_G = 128$ . Based on the learned geometric features, we aim to fit the UDF values from the distances within the local point cloud. Therefore, we concatenate the distances $\mathbf{d} \in \mathbb{R}^m$ induced from $\mathcal{V}$ with the latent code $\mathbf{f}_G$ , followed by a series of fully connected layers to output the predicted UDF values $\mathcal{U}_{\Theta}(\mathbf{q})$ . Figure 4 illustrates the overall network architecture and data flow. The two Point-Nets used in our network to extract features from point cloud patches $\mathcal{P}$ and vectors $\mathcal{V}$ consist of four ResNet blocks. In addition, the two fully connected layer modules in our framework consist of three layers each. To ensure non-negativity of the UDF values output by the network, we employ the softplus activation function.

Denoising module. In our network, even if point cloud patches are subjected to a certain degree of noise or outliers, their representations in the feature space should remain similar. However, distances induced directly from noisy vectors $\mathcal{V}$ will inevitably contain errors, which can affect the accurate prediction of UDF values.
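The cross-attention step in Eq. (6) can be sketched with plain NumPy; below is a minimal single-head version in which the projection matrices are random stand-ins for the learned weights (our illustration, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attn(f_v, f_p, Wq, Wk, Wv, Wo):
    """Single-head cross-attention: f_v supplies the query (Q), f_p the
    key-value (KV) pair, mirroring Eq. (6)."""
    q, k, v = f_v @ Wq, f_p @ Wk, f_p @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # scaled dot-product weights
    return (attn @ v) @ Wo                          # project to the l_G-dim code

rng = np.random.default_rng(0)
l_p = l_v = 64                       # latent sizes used in the paper
l_G, d = 128, 64                     # output size l_G = 128; d is our head size
f_p = rng.standard_normal((1, l_p))  # patch latent code (one token here)
f_v = rng.standard_normal((1, l_v))  # vector latent code
Wq, Wk, Wv = (0.1 * rng.standard_normal((64, d)) for _ in range(3))
Wo = 0.1 * rng.standard_normal((d, l_G))
f_G = cross_attn(f_v, f_p, Wq, Wk, Wv, Wo)
print(f_G.shape)  # (1, 128)
```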
To mitigate this impact, we introduce a denoising module that predicts displacements $\Delta \mathbf{d}$ from local point cloud patches, as shown in Fig. 4. We then add the displacements $\Delta \mathbf{d}$ to the distances $\mathbf{d}$ to improve the accuracy of the UDF estimation.

![](images/8ca15c9d82a5d98b5362a26d059b13e37bceaab9c5f78c647856dc568c662254.jpg)
Figure 4. Network architecture of LoSF-UDF.

# 3.2.2. Training and evaluation

Data augmentation. During the training process, we scale all pairs of local patches $\mathcal{P}$ and query points $\mathbf{q}$ to conform to the bounding box constraints of $[-0.5, 0.5]$ , and the corresponding GT UDF values $\mathcal{U}(\mathbf{q})$ are scaled by equivalent magnitudes. Given the uncertain orientation of local patches extracted from a specified global point cloud, we have applied data augmentation via random rotations to the training dataset. Furthermore, to enhance generalization to open surfaces with boundaries, we randomly truncate $20\%$ of the smooth patches to simulate boundary cases. To improve robustness to noise, we introduce Gaussian noise $\mathcal{N}(0,0.1)$ to $30\%$ of the data in each batch during every training epoch.

Loss functions. We employ $L_{1}$ loss $\mathcal{L}_{\mathrm{u}}$ to measure the discrepancy between the predicted UDF values and the GT UDF values. Moreover, for the displacements $\Delta \mathbf{d}$ output by the denoising module, we employ $L_{1}$ regularization to encourage sparsity. Consequently, we train the network driven by the loss function $\mathcal{L} = \mathcal{L}_u + \lambda_d\mathcal{L}_r$ , where $\mathcal{L}_u = |\mathcal{U}(\mathbf{q}) - \mathcal{U}_{\Theta}(\mathbf{q})|$ and $\mathcal{L}_r = |\Delta \mathbf{d}|$ ; we set $\lambda_d = 0.01$ in our experiments.

Evaluation. Given a 3D point cloud $\mathbf{P}$ for reconstruction, we first normalize it to fit within a bounding box with dimensions ranging from $[-0.5, 0.5]$ .
Subsequently, within the bounding box space, we uniformly sample grid points at a specified resolution to serve as query points. Then, we extract the local geometry $\mathcal{P}_{\mathbf{q}}$ for each query point by collecting points from the point cloud that lie within a sphere of a specified radius centered on the query point. We can obtain the predicted UDF values by the trained network $\mathcal{U}_{\Theta^{*}}(\mathbf{q}, \mathcal{P}_{\mathbf{q}})$ , where $\Theta^{*}$ represents the optimized network parameters. Note that for patches $\mathcal{P}_{\mathbf{q}}$ with fewer than 5 points, we set the UDF values to a large constant. Finally, we extract meshes from the UDFs using the DCUDF model [18].

# 3.3. Integration with unsupervised methods

Unsupervised methods, such as CAP-UDF [48] and LevelSetUDF [49], require time-consuming iterative reconstruction of a single point cloud. In contrast, our LoSF-UDF method is a highly lightweight framework. Once trained on a synthetic, shape-independent local patch dataset, it efficiently reconstructs plausible 3D shapes from diverse point clouds, even in the presence of noise and outliers. Although unsupervised methods are time-consuming, they can reconstruct shapes with richer details due to the combined effects of various loss functions. Therefore, we integrate our method with unsupervised approaches to provide better initialization, thereby accelerating convergence (Tab. 3) and achieving improved reconstruction results.
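The evaluation loop of Sec. 3.2.2 (normalize to $[-0.5, 0.5]$, grid queries, radius-based patch gathering, large-constant fallback for patches with fewer than 5 points) might be sketched as below; the nearest-point distance stands in for the trained network $\mathcal{U}_{\Theta^{*}}$, and brute-force search replaces any accelerated neighbor query:

```python
import numpy as np

def evaluate_udf_grid(points, predict_udf, radius=0.018, res=8, far=10.0, min_pts=5):
    """Sketch of UDF evaluation on a query grid (our illustration).

    `predict_udf(patch, q)` stands in for the trained network; queries whose
    patch has fewer than `min_pts` points receive the large constant `far`.
    """
    # Normalize the cloud into the [-0.5, 0.5] bounding box.
    lo, hi = points.min(axis=0), points.max(axis=0)
    pts = (points - (lo + hi) / 2.0) / (hi - lo).max()

    axis = np.linspace(-0.5, 0.5, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)

    udf = np.full(len(grid), far)
    for i, q in enumerate(grid):  # brute-force patch gathering per query
        d = np.linalg.norm(pts - q, axis=1)
        patch = pts[d < radius]
        if len(patch) >= min_pts:
            udf[i] = predict_udf(patch, q)
    return udf.reshape(res, res, res)

# Stand-in predictor: unsigned distance to the nearest point of the patch.
nearest = lambda patch, q: np.linalg.norm(patch - q, axis=1).min()

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, (2000, 3))
cloud[:, 2] = 0.0  # a flat, planar test cloud
vol = evaluate_udf_grid(cloud, nearest, radius=0.1, res=8)
print(vol.shape)  # (8, 8, 8)
```

In the actual pipeline the predicted field would then be handed to DCUDF [18] for mesh extraction.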
Generally, assuming the network of the unsupervised method is $\mathcal{B}_{\Xi}$ , we define the loss function of our integrated framework as

$$
\min_{\Xi} \mathcal{L} = \alpha_{t} \frac{1}{N} \sum_{i=1}^{N} \left| \mathcal{B}_{\Xi}(\mathbf{q}_{i}) - \mathcal{U}_{\Theta^{*}}(\mathbf{q}_{i}) \right| + (1 - \alpha_{t}) \mathcal{L}_{\text{unsupv}}, \tag{7}
$$

where $\mathcal{B}_{\Xi}$ can be selected as an MLP network like CAP-UDF [48] and LevelSetUDF [49], or a SIREN network [39] like DEUDF [45]. $\mathcal{L}_{\mathrm{unsupv}}$ denotes the loss functions employed in these unsupervised methods. $\mathcal{U}_{\Theta^{*}}$ is our trained LoSF network with optimized parameters $\Theta^{*}$ . $\alpha_{t} \in [0,1]$ is a time-dependent weight. In our experiments (refer to Sec. 4.5), the whole training process requires around 20000 iterations. The value of $\alpha_{t}$ decreases from 1 to 0 gradually during the first 10000 iterations.

Table 1. Qualitative comparison of different UDF learning methods. "Normal" indicates whether the method requires point cloud normals during learning. "Feature Type" refers to whether the information required during training is global or local. "Noise" and "Outlier" indicate whether the method can handle the presence of noise and outliers in point clouds.

| Methods | Input | Normal | Learning Type | Feature Type | Noise | Outlier |
| --- | --- | --- | --- | --- | --- | --- |
| CAP-UDF [48] | Dense | Not required | Unsupervised | Global | ✗ | ✗ |
| LevelSetUDF [49] | Dense | Not required | Unsupervised | Global | ✓ | ✗ |
| DUDF [15] | Dense | Required | Unsupervised | Global | ✗ | ✗ |
| GeoUDF [37] | Sparse | Not required | Supervised | Local | ✗ | ✗ |
| Ours | Dense | Not required | Supervised | Local | ✓ | ✓ |

# 4. Experimental results

# 4.1. Setup

Datasets. To compare our method with other state-of-the-art UDF learning approaches, we tested it on various datasets that include general artificial objects from the field of computer graphics. Following previous works [23, 48, 49], we select the "Car" category from ShapeNet [8], which has a rich collection of multi-layered and non-closed shapes. Furthermore, we select the real-world dataset DeepFashion3D [24] for open surfaces, and ScanNet [12] for large scenes. To assess our model's performance on actual noisy inputs, we conducted tests on the real range scan dataset [3], following previous works [48, 49].

Baselines & metrics. For our validation datasets, we compared our method against the state-of-the-art UDF learning models, which include unsupervised methods like CAP-UDF [48], LevelSetUDF [49], and DUDF [15], as well as the supervised learning method, GeoUDF [37]. We trained GeoUDF independently on different datasets to achieve optimal performance. Table 1 shows the qualitative comparison between our method and the baselines. To evaluate performance, we compare our approach with other baseline models in terms of $L_{1}$ -Chamfer Distance (CD), F1-Score (with thresholds of 0.005 and 0.01), and normal consistency (NC) metrics between the ground truth meshes and the meshes extracted from learned UDFs. For a fair comparison, we adopt the same DCUDF [18] method for mesh extraction. All experiments are conducted on an NVIDIA RTX 4090 GPU.

# 4.2. Experimental results

Synthetic data.
For general 3D graphic models, ShapeNetCars, and DeepFashion3D, we obtain dense point clouds by randomly sampling on meshes. Considering that GeoUDF [37] is a supervised method, we retrain it on ShapeNetCars and DeepFashion3D, which are randomly partitioned into training $(70\%)$ , testing $(20\%)$ , and validation $(10\%)$ subsets. All models are evaluated on the validation sets, which remain unseen by any of the UDF learning models prior to evaluation. Figure 5 illustrates the visual comparison of reconstruction results, and Table 2 presents the quantitative comparison in terms of evaluation metrics.

![](images/f876d087b0363b77e18900be2f35bea02349a3fda518e12416d6ced3005b3058.jpg)
Figure 5. Visual comparisons of reconstruction results on the synthetic dataset. We provide more results in the supplementary materials.

We also test each method using its own mesh extraction technique; as shown in Fig. 6, these results display obvious visual artifacts such as small holes and non-smoothness. We thus apply DCUDF [18], the state-of-the-art method, to each baseline model, extracting the surfaces as significantly higher-quality meshes.

![](images/e105182087439614a5cc3116746546aa130f329996318f35a36fc92fa6469154.jpg)
Figure 6. We compare other methods using their own mesh extraction techniques against the DCUDF approach.

Since our method utilizes DCUDF for surface extraction, we adopt it as the default technique to ensure consistency and fairness in comparisons with the baselines. Our method achieves stable results in reconstructing various types of surfaces, including both open and closed surfaces, and exhibits performance comparable to that of the SOTA methods. Note that DUDF [15] requires normals during training, and GeoUDF utilizes the KNN approach to determine the nearest neighbors of the query points. As a result, DUDF and GeoUDF are less stable when dealing with point clouds with noise and outliers, as shown in Fig. 5.

Noise & outliers.
To evaluate our model with noisy inputs, we added Gaussian noise $\mathcal{N}(0, 0.25\%)$ to the clean data across all datasets for testing. The middle three columns in Fig. 5 display the reconstructed surface results from noisy point clouds, and Table 2 also presents the quantitative comparisons. It can be observed that our method can robustly reconstruct smooth surfaces from noisy point clouds. Additionally, we tested our method's performance with outliers by converting $10\%$ of the clean point cloud into outliers, as shown in the last three columns of Fig. 5. Experimental results demonstrate that our method can handle up to $50\%$ outliers while still achieving reasonable results. Even in the presence of both noise and outliers, our method maintains a high level of robustness. The corresponding results are provided in the supplementary materials.

Table 2. We compare our method with other UDF learning methods in terms of $L_{1}$ -Chamfer distance ( $\times 100$ ), F-score with thresholds of 0.005 and 0.01, and normal consistency. Column groups, left to right: Clean, Noise, Outliers.

| Dataset | Method | CD ↓ | F1$^{0.005}$ ↑ | F1$^{0.01}$ ↑ | NC ↑ | CD ↓ | F1$^{0.005}$ ↑ | F1$^{0.01}$ ↑ | NC ↑ | CD ↓ | F1$^{0.005}$ ↑ | F1$^{0.01}$ ↑ | NC ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ShapeNetCars [8] | CAP-UDF [48] | 1.324 | 0.451 | 0.884 | 0.945 | 1.208 | 0.286 | 0.632 | 0.833 | 2.918 | 0.194 | 0.304 | 0.725 |
| ShapeNetCars [8] | LevelSetUDF [49] | 1.223 | 0.464 | 0.892 | 0.950 | 1.176 | 0.299 | 0.654 | 0.842 | 2.763 | 0.200 | 0.379 | 0.753 |
| ShapeNetCars [8] | GeoUDF [37] | 0.832 | 0.562 | 0.901 | 0.947 | 1.869 | 0.205 | 0.608 | 0.847 | 2.598 | 0.144 | 0.327 | 0.757 |
| ShapeNetCars [8] | DUDF [15] | 0.773 | 0.697 | 0.871 | 0.945 | 1.939 | 0.214 | 0.508 | 0.826 | 2.949 | 0.135 | 0.329 | 0.722 |
| ShapeNetCars [8] | Ours | 0.611 | 0.645 | 0.927 | 0.967 | 0.614 | 0.546 | 0.913 | 0.966 | 0.705 | 0.437 | 0.751 | 0.944 |
| OverlapDist [1] | CAP-UDF [48] | 0.885 | 0.353 | 0.809 | 0.973 | 0.803 | 0.375 | 0.719 | 0.871 | 5.150 | 0.173 | 0.425 | 0.863 |
| OverlapDist [1] | LevelSetUDF [49] | 0.881 | 0.335 | 0.848 | 0.983 | 0.723 | 0.384 | 0.750 | 0.874 | 5.083 | 0.278 | 0.430 | 0.882 |
| OverlapDist [1] | GeoUDF [37] | 0.625 | 0.525 | 0.957 | 0.973 | 0.676 | 0.506 | 0.951 | 0.882 | 3.883 | 0.146 | 0.310 | 0.764 |
| OverlapDist [1] | DUDF [15] | 0.506 | 0.586 | 0.972 | 0.965 | 0.934 | 0.333 | 0.530 | 0.849 | 2.558 | 0.118 | 0.265 | 0.821 |
| OverlapDist [1] | Ours | 0.548 | 0.565 | 0.966 | 0.988 | 0.614 | 0.541 | 0.967 | 0.906 | 0.559 | 0.463 | 0.966 | 0.909 |

Real scanned data. The dataset of [3] provides several real scanned point clouds. As illustrated in Fig. 7, we evaluate our model on this dataset to demonstrate its effectiveness. Our approach can reconstruct smooth surfaces from scanned data containing noise and outliers. However, our model cannot address the issue of missing parts. This limitation is due to the local geometric training strategy, which is independent of the global shape.

![](images/610baa90dc9327373a3fcc1aa237ba4ad9b772a3f0e50a33689fa544a6bb3ca2.jpg)
Figure 7. Evaluations on real scanned data. The evaluation metrics presented in the table represent the average results of these models.

# 4.3. Analysis

Efficiency. We compare the time complexity of our method with other methods, as shown in Tab. 3. All tests were conducted on an Intel i9-13900K CPU and an NVIDIA RTX 4090 GPU. Computational results show that supervised, local feature-based methods like our approach and GeoUDF [37] significantly outperform unsupervised methods in terms of computational efficiency. Additionally, our method offers a significant improvement in training efficiency over GeoUDF. Utilizing ShapeNet as the training dataset, GeoUDF requires 120GB of storage space. In contrast, our method employs a shape-category-independent dataset, occupying merely 0.50GB of storage. Our network is very lightweight, with only 653KB of trainable parameters and a total parameter size of just 2MB. Compared to GeoUDF, which requires 36 hours for training, our method only requires 14.5 hours.

Table 3. Comparison of time efficiency. We measured the average runtime in minutes. "#Params" denotes the number of network parameters, while "Size" refers to the storage space occupied by these parameters.

| Method | SRB | DeepFashion3D | ShapeNetCars | #Params | Size (KB) |
| --- | --- | --- | --- | --- | --- |
| CAP-UDF | 15.87 | 10.5 | 10.6 | 463100 | 1809 |
| LoSF + CAP-UDF | 6.84 | 4.32 | 4.40 | - | - |
| LevelSetUDF | 15.08 | 13.65 | 14.67 | 463100 | 1809 |
| LoSF + LevelSetUDF | 6.13 | 4.85 | 4.97 | - | - |
| DUDF | 14.28 | 11.12 | 14.58 | 461825 | 1804 |
| GeoUDF | 0.08 | 0.07 | 0.07 | 253378 | 990 |
| Ours | 0.87 | 0.51 | 0.42 | 167127 | 653 |

Patch radius and point density. During the evaluation phase, the radius $r$ used to find the nearest points for each query point determines the size of the extracted patch and the range of effective query points in the space. The choice of radius directly influences the complexity of the geometric features captured. When normalizing point clouds to a unit bounding box, we set the radius to $r = 0.018$ . This setting achieves satisfactory reconstruction for our testing datasets. In the supplementary materials, we present a bias analysis experiment comparing the synthesized local patches and the local geometries extracted from the test point cloud data. The experimental results confirm that setting $r$ to 0.018 maintains a relatively low bias, suggesting its effectiveness. Users can conduct a preliminary bias analysis based on our well-trained model to adjust the size of the radius according to the complexity of the input point cloud. This process is not time-consuming. Through experimental testing (refer to the supplementary materials), our algorithm ensures reasonable reconstruction, provided that there are at least 30 points within a unit area. A possible way to mitigate issues arising from low sampling rates is to apply an upsampling module [37] during the pre-processing step.

# 4.4. Ablation studies

Cross-Attn module.
Our main goal is to derive the UDF value for a query point by learning the local geometry within a radius $r$ . To achieve this, we utilize the Point-Net to capture the point cloud features $\mathbf{f}_p$ of local patches. This process enables the local geometry extracted from test data to align with the synthetic data through feature matching, even in the presence of noise or outliers. The Vectors-Net is tasked with learning the features $\mathbf{f}_v$ of the set of vectors pointing towards the query point, which include not only the position of the query point but also its distance information. The Cross-Attn module then processes the local patch features $\mathbf{f}_p$ as keys and values for the vector-feature queries $\mathbf{f}_v$ , which contain distance information, returning the most relevant feature $\mathbf{f}_G$ that determines the UDF value. See Fig. 8 for two ablation studies on noisy point clouds.

![](images/0d6d7f099be27ccf35320b175b1cdbee90347a0f4260a363df15dfb17cf8d12d.jpg)
Figure 8. Ablation studies on Cross-Attn module. (a) Without the Point-Net and Cross-Attn modules. (b) Without the Cross-Attn module. (CD score is multiplied by 100)

Figure 10. Reconstruction results of the integrated framework.

Denoising module. Our framework incorporates a denoising module to handle noisy point clouds. We conducted ablation experiments to verify the significance of this module. Specifically, we set $\lambda_{d} = 0$ in the loss function to disable the denoising module, and then retrained the network. As illustrated in Fig. 9, we present the reconstructed surfaces for the same set of noisy point clouds with and without the denoising module, respectively.

![](images/3136457b08ffef0f886f85a43eb966f0de2c4066730bd2cc85c58a9a58f03c7d.jpg)
Figure 9. Ablation on denoising module: Reconstructed surfaces from the same point clouds with noise/outliers corresponding to the framework with and without the denoising module, respectively.

# 4.5.
Results of unsupervised integration + +Our LoSF-UDF approach offers better initialization for unsupervised methods, including CAP-UDF [48], LevelSetUDF [49], and DEUDF [13]. We evaluated 12 models on the Threedscans dataset [21], each containing rich details. Using the integration framework based on our lightweight LoSF-UDF, we achieve comparable or even superior reconstruction results, as illustrated in Fig. 10 and Tab. 4. More importantly, we improve the efficiency of the original unsupervised methods, as shown in Tab. 3. Since DEUDF is not open-source, we employ its proposed loss functions to train a SIREN network [39] ourselves. For the loss function terms that require normal information, we use PCA [17] to estimate the normals during the optimization process. Thanks to the robustness of LoSF, the accuracy of this normal estimation is enhanced. + +Table 4. Quantitative comparison results of the integrated framework with unsupervised methods.

| Method | LoSF | \*+CAP-UDF [48] | \*+LevelSetUDF [49] | \*SIREN [45] |
| --- | --- | --- | --- | --- |
| CD (×100) ↓ | 0.409 | 0.447 | 0.429 | 0.217 |
| F1$_{0.005}$ ↑ | 0.638 | 0.609 | 0.610 | 0.906 |
| F1$_{0.01}$ ↑ | 0.958 | 0.936 | 0.952 | 0.984 |
| NC ↑ | 0.964 | 0.946 | 0.962 | 0.969 |

(Fig. 10 columns, left to right: Input, LoSF, \*+CAP-UDF, \*+LevelSetUDF, \*SIREN.)

# 5. Conclusion

In this paper, we introduce a novel and lightweight neural framework for surface reconstruction from 3D point clouds by learning UDFs from local shape functions. Our key insight is that 3D shapes exhibit simple patterns within localized regions, which can be exploited to create a training dataset of point cloud patches represented by mathematical functions. As a result, our method enables efficient and robust surface reconstruction without the need for shape-specific training, even in the presence of noise and outliers. Extensive experiments on various datasets have demonstrated the efficacy of our method. Moreover, our lightweight framework can be integrated with unsupervised methods to provide rapid and reliable initialization, enhancing both efficiency and accuracy.

# Acknowledgement

The NTU authors were supported in part by the Ministry of Education, Singapore, under its Academic Research Fund Grants (MOE-T2EP20220-0005 & RT19/22) and the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). The DLUT authors were supported by the National Natural Science Foundation of China under Grants 62402083 and T2225012, and the National Key R&D Program of China under Grant 2021YFA1003003. F. Hou was supported by the Basic Research Project of ISCAS (ISCAS-JCMS-202303) and the Major Research Project of ISCAS (ISCAS-ZD-202401). J.
Hou was supported by the Hong Kong RGC under Grants 11219422 and 11219324. + +# References + +[1] Ma Baorui, Han Zhizhong, Liu Yu-Shen, and Zwicker Matthias. Neural-pull: Learning signed distance functions from point clouds by learning to pull space onto surfaces. In International Conference on Machine Learning (ICML), 2021. 1, 2 +[2] Ma Baorui, Liu Yu-Shen, and Han Zhizhong. Reconstructing surfaces for sparse point clouds with on-surface priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1, 2 +[3] Matthew Berger, Joshua A. Levine, Luis Gustavo Nonato, Gabriel Taubin, and Claudio T. Silva. A benchmark for surface reconstruction. ACM Trans. Graph., 32(2), 2013. 5, 7 +[4] Matthew Berger, Andrea Tagliasacchi, Lee M Seversky, Pierre Alliez, Joshua A Levine, Andrei Sharf, and Claudio T Silva. State of the art in surface reconstruction from point clouds. In 35th Annual Conference of the European Association for Computer Graphics, Eurographics 2014-State of the Art Reports. The Eurographics Association, 2014. 2 +[5] Matthew Berger, Andrea Tagliasacchi, Lee M Seversky, Pierre Alliez, Gael Guennebaud, Joshua A Levine, Andrei Sharf, and Claudio T Silva. A survey of surface reconstruction from point clouds. In Computer graphics forum, pages 301-329. Wiley Online Library, 2017. 2 +[6] Alexandre Boulch and Renaud Marlet. Poco: Point convolution for surface reconstruction, 2022. 1, 2 +[7] Rohan Chabra, Jan Eric Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, and Richard Newcombe. Deep local shapes: Learning local sdf priors for detailed 3d reconstruction, 2020. 1, 2 +[8] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository.
Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015. 1, 5, 7 +[9] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE/CVF + +conference on computer vision and pattern recognition, pages 5939-5948, 2019. 2 +[10] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. 1, 2 +[11] Julian Chibane, Aymen Mir, and Gerard Pons-Moll. Neural unsigned distance fields for implicit function learning. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 1, 2 +[12] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017. 5 +[13] Junkai Deng, Fei Hou, Xuhui Chen, Wencheng Wang, and Ying He. 2s-udf: A novel two-stage udf learning method for robust non-watertight model reconstruction from multi-view images, 2024. 2, 8 +[14] Manfredo P Do Carmo. Differential geometry of curves and surfaces: revised and updated second edition. Courier Dover Publications, 2016. 3 +[15] Miguel Fainstein, Viviana Siless, and Emmanuel Iarussi. Dudf: Differentiable unsigned distance fields with hyperbolic scaling, 2024. 1, 2, 5, 6, 7 +[16] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3d shape. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4857-4866, 2020. 2 +[17] Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Surface reconstruction from unorganized points. 
In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, pages 71-78, New York, NY, USA, 1992. Association for Computing Machinery. 8 +[18] Fei Hou, Xuhui Chen, Wencheng Wang, Hong Qin, and Ying He. Robust zero level-set extraction from unsigned distance fields based on double covering. ACM Trans. Graph., 42(6), 2023. 3, 5, 6 +[19] M. Kazhdan. Poisson surface reconstruction. In Eurographics Symposium on Geometry Processing, 2006. 1, 2 +[20] Michael Kazhdan and Hugues Hoppe. Screened poisson surface reconstruction. ACM Transactions on Graphics, 32(3): 1-13, 2013. 2 +[21] Oliver Laric. Three D Scans. https://threedscans.com/. 8 +[22] Qing Li, Huifang Feng, Kanle Shi, Yi Fang, Yu-Shen Liu, and Zhizhong Han. Neural gradient learning and optimization for oriented point normal estimation. In SIGGRAPH Asia 2023 Conference Papers, 2023. 1, 2 +[23] Yu-Tao Liu, Li Wang, Jie Yang, Weikai Chen, Xiaoxu Meng, Bo Yang, and Lin Gao. Neudf: Leaning neural unsigned distance fields with volume rendering. In Computer Vision and Pattern Recognition (CVPR), 2023. 2, 5 +[24] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and
Unsigned orthogonal distance fields: An accurate neural implicit representation for diverse 3d shapes. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 2 +[28] Xiaoxu Meng, Weikai Chen, and Bo Yang. Neat: Learning neural implicit surfaces with arbitrary topologies from multiview images. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 2 +[29] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2 +[30] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4460-4470, 2019. 2 +[31] Yutaka Ohtake, Alexander Belyaev, Marc Alexa, Greg Turk, and Hans-Peter Seidel. Multi-level partition of unity implicits. In Acm Siggraph 2005 Courses, pages 173-es. 2005. 2 +[32] Amine Ouasfi and Adnane Boukhayma. Unsupervised occupancy learning from sparse point cloud, 2024. 2 +[33] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2 +[34] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165-174, 2019. 2 +[35] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In European Conference on Computer Vision (ECCV), 2020. 
1, 2 +[36] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017. 4 +[37] Siyu Ren, Junhui Hou, Xiaodong Chen, Ying He, and Wenping Wang. Geoudf: Surface reconstruction from 3d point clouds via geometry-guided distance representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14214-14224, 2023. 1, 2, 3, 5, 7 +[38] Liu Shi-Lin, Guo Hao-Xiang, Pan Hao, Peng-Shuai Wang, Tong Xin, and Liu Yang. Deep implicit moving least-squares functions for 3d reconstruction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. 1, 2 +[39] Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In Proc. NeurIPS, 2020. 5, 8 +[40] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Carsten Stoll, and Christian Theobalt. Patchnets: Patch-based generalizable deep implicit 3d shape representations. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI 16, pages 293–309. Springer, 2020. 2 +[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023. 4 +[42] Peng-Shuai Wang, Yang Liu, and Xin Tong. Dual octree graph networks for learning adaptive volumetric shape representations. ACM Transactions on Graphics, 41(4):1-15, 2022. 1, 2 +[43] Zixiong Wang, Pengfei Wang, Pengshuai Wang, Qiujie Dong, Junjie Gao, Shuangmin Chen, Shiqing Xin, Changhe Tu, and Wenping Wang. Neural-IMLS: Self-supervised implicit moving least-squares network for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, pages 1–16, 2023.
1, 2 +[44] Baixin Xu, Jiangbei Hu, Fei Hou, Kwan-Yee Lin, Wayne Wu, Chen Qian, and Ying He. Parameterization-driven neural surface reconstruction for object-oriented editing in neural rendering. In European Conference on Computer Vision, pages 461-479. Springer, 2024. 2 +[45] Cheng Xu, Fei Hou, Wencheng Wang, Hong Qin, Zhebin Zhang, and Ying He. Details enhancement in unsigned distance field learning for high-fidelity 3d surface reconstruction, 2024. 5, 8 +[46] Jianglong Ye, Yuntao Chen, Naiyan Wang, and Xiaolong Wang. Gifs: Neural implicit function for general shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 1, 2 +[47] Hui Ying, Tianjia Shao, He Wang, Yin Yang, and Kun Zhou. Adaptive local basis functions for shape completion. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1-11, 2023. 2 +[48] Junsheng Zhou, Baorui Ma, Yu-Shen Liu, Yi Fang, and Zhizhong Han. Learning consistency-aware unsigned distance functions progressively from raw point clouds. In Advances in Neural Information Processing Systems (NeurIPS), 2022. 1, 2, 5, 7, 8 +[49] Junsheng Zhou, Baorui Ma, Shujuan Li, Yu-Shen Liu, and Zhizhong Han. Learning a more continuous zero level set in unsigned distance fields through level set projection. In Proceedings of the IEEE/CVF international conference on computer vision, 2023. 1, 2, 5, 7, 8 +[50] Junsheng Zhou, Weiqi Zhang, Baorui Ma, Kanle Shi, Yu-Shen Liu, and Zhizhong Han. Udiff: Generating conditional + +unsigned distance fields with optimal wavelet diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 
2 \ No newline at end of file diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/images.zip b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dcbc7f9a31cec6aced4a27f9ae53656fbd305085 --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06ad8cd829c7d48ae63d83ce4948b10854894e7191f18d59e037e37b6f131aee +size 650105 diff --git a/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/layout.json b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..538ba8f92f38ed6117eb1b8eccf7e164326dfd75 --- /dev/null +++ b/CVPR/2025/A Lightweight UDF Learning Framework for 3D Reconstruction Based on Local Shape Functions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f04c8eb411605e017533792d89305c69c2fb0c7918f0c6a84997de05af42dcb +size 406124 diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_content_list.json b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..06be39cd74df927760cf854858cfa84dfd10e763 --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_content_list.json 
@@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c300c4f2d28526ec36cbd13da19281b885c2e8df2d420eeaf923d384a3193d4 +size 92159 diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_model.json b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a57116b51786741cadb25b3256bf67fd28611b80 --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26754cd2ae2392f5eb73c30a9b14347786b056891bc27f2a0036f315e48555c7 +size 116051 diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_origin.pdf b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a877f9b48f3a19cd67a248c5b5970d8785236c7e --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/329b88c6-b793-419e-a0b2-e0ed7257493a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0c5623faa3b0338afb03cc494fe3182d2fa8799b37a22ab8e711a08af502712 +size 1997896 diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/full.md b/CVPR/2025/A New 
Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8948bc6fcb004b383c6accda9d190d1b228b42f6 --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/full.md @@ -0,0 +1,387 @@ +# A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations + +Théo Bodrito $^{*1}$ , Olivier Flasseur $^{2}$ , Julien Mairal $^{3}$ , Jean Ponce $^{1,4}$ , Maud Langlois $^{2}$ , Anne-Marie Lagrange $^{5,6}$ + +1Département d'Informatique de l'École normale supérieure (ENS-PSL, CNRS, Inria) + +2Universite Claude Bernard Lyon 1, Centre de Recherche Astrophysique de Lyon UMR 5574, + +ENS de Lyon, CNRS, Villeurbanne, F-69622, France + +3Université Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK + +$^{4}$ Courant Institute and Center for Data Science, New York University + +$^{5}$ Laboratoire d'Études Spatiales et d'Instrumentation en Astrophysique, Observatoire de Paris, + +Université PSL, Sorbonne Université, Université Paris Diderot + +6Université Grenoble Alpes, Institut de Planétologie et d'Astrophysique de Grenoble + +# Abstract + +The search for exoplanets is an active field in astronomy, with direct imaging as one of the most challenging methods due to faint exoplanet signals buried within stronger residual starlight. Successful detection requires advanced image processing to separate the exoplanet signal from this nuisance component. This paper presents a novel statistical model that captures nuisance fluctuations using a multiscale approach, leveraging problem symmetries and a joint spectral channel representation grounded in physical principles. Our model integrates into an interpretable, end-to-end learnable framework for simultaneous exoplanet detection and flux estimation. 
The proposed algorithm is evaluated against the state of the art using datasets from the SPHERE instrument operating at the Very Large Telescope (VLT). It significantly improves the precision-recall tradeoff, notably on challenging datasets that are otherwise unusable by astronomers. The proposed approach is computationally efficient, robust to varying data quality, and well suited for large-scale observational surveys. + +# 1. Introduction + +Direct imaging [30, 65] is an astronomical technique to probe the vicinity of young, nearby stars, where exoplanets and circumstellar disks—structures of dust and gas from which exoplanets can form—are found [36, 37]. Unlike indirect methods [57], which detect exoplanets via secondary effects like gravitational wobbles or transit dimming, direct imaging captures visual evidence of exoplanets and circumstellar disks by recording a direct image of their emitted flux. Analyzing this light across spectral bands provides insights into exoplanet properties (e.g., temperature, gravity), atmospheric compositions, molecular abundances, and formation processes [2, 9, 66]. While existing technology has enabled imaging of young, giant, gaseous exoplanets, next-generation ground- and space-based telescopes may soon image rocky exoplanets in habitable zones, advancing the search for life beyond our Solar System [10, 14]. + +![](images/bc889a384b12bf0cff680f88c09538a34aad7de36904f92ce7bf5c19b889fcb5.jpg) +Figure A. Left: typical observations $\mathbf{y}$ and PSF $\mathbf{h}$ from the SPHERE instrument in ASDI mode. The synthetic exoplanet is very bright for illustration purposes. Right: temporal slice along the vertical line. + +Imaging exoplanets is challenging due to the high contrast and angular resolution required.
The primary difficulty lies in the large brightness ratio (or contrast) between the host star and exoplanets; in the infrared, where gas giant exoplanets' thermal emission is most detectable, they are typically $10^{5} - 10^{6}$ times fainter. Additionally, because exoplanets appear close to their host stars from Earth's perspective, high spatial resolution is essential to separate their faint signals. To address these challenges, observatories like the VLT are equipped with specialized instruments for direct imaging (e.g., SPHERE [4]) and cutting-edge optical technologies. Adaptive optics employs a deformable mirror to obtain sharp images by correcting for atmospheric turbulence in real time [20]. Coronagraphs (optical masks) block some starlight, further improving contrast [58]. + +Despite advanced optical devices, direct imaging remains challenging as exoplanet signals are still dominated by a strong nuisance component with approximately $10^{3}$ times higher contrast. This nuisance is mainly composed of structured speckles [24], caused by imperfections in optical corrections that allow residual starlight, unblocked by the coronagraph, to leak into images as a spatially correlated diffraction pattern. The speckle pattern is both spatially and spectrally correlated, exhibiting quasi-static behavior across exposures with minor fluctuations over time. In addition to speckles, other stochastic noise sources—such as thermal background, detector readout, and photon noise—add further contamination. Together, these factors create a non-stationary nuisance that varies in intensity and structure across the field of view, with higher intensity and correlation near the star. This nuisance, which closely resembles the point-like signals expected from exoplanets (instrumental point-spread function off the optical axis), is the primary limitation to direct imaging. Figure A illustrates these characteristics with a dataset recorded by SPHERE. 
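The nuisance characteristics described above (a spatially correlated, quasi-static speckle pattern that dwarfs the planet signal, plus weaker stochastic noise) can be illustrated with a toy NumPy simulation. This is a sketch under stated assumptions, not SPHERE data or the pipeline described in this paper: the Fourier low-pass speckle model, the Gaussian off-axis PSF, and all parameter values (frame count, image size, separation, the $10^{-3}$ planet-to-speckle amplitude ratio) are illustrative choices.

```python
import numpy as np

def correlated_field(shape, rng, corr_scale=3.0):
    # White noise low-pass filtered in Fourier space yields a spatially
    # correlated random field, a crude stand-in for structured speckles.
    noise = rng.standard_normal(shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    lowpass = np.exp(-0.5 * (2 * np.pi * corr_scale) ** 2 * (kx**2 + ky**2))
    field = np.fft.ifft2(np.fft.fft2(noise) * lowpass).real
    return field / field.std()

def gaussian_psf(shape, center, fwhm=3.0):
    # Off-axis PSF approximated by an isotropic Gaussian blob with unit peak.
    yy, xx = np.indices(shape)
    sigma = fwhm / 2.355
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma**2))

rng = np.random.default_rng(0)
T, H, W = 10, 64, 64                   # exposures and frame size (toy values)
star = np.array([H / 2, W / 2])        # star stays on the optical axis
separation, alpha = 18.0, 1e-3         # planet separation (px), planet/speckle amplitude
angles = np.deg2rad(np.linspace(0.0, 40.0, T))  # parallactic rotation over the sequence

speckles0 = correlated_field((H, W), rng)       # quasi-static speckle pattern
frames = np.empty((T, H, W))
positions = np.empty((T, 2))
for t in range(T):
    # The planet follows a predictable circular arc around the star, while
    # the speckles only fluctuate slightly from frame to frame.
    positions[t] = star + separation * np.array([np.cos(angles[t]), np.sin(angles[t])])
    nuisance = speckles0 + 0.05 * correlated_field((H, W), rng)
    frames[t] = (nuisance
                 + alpha * gaussian_psf((H, W), positions[t])
                 + 1e-4 * rng.standard_normal((H, W)))
```

In this toy sequence, the frame-to-frame correlation of the nuisance stays close to 1 (quasi-static speckles) while the faint source moves along a predictable arc; this is precisely the diversity that differential-imaging strategies exploit.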
+ +In this context, dedicated processing is crucial to separate exoplanet signals from the nuisance component [50]. Observational techniques like angular differential imaging (ADI [41]), spectral differential imaging (SDI [61]), and combined angular and spectral differential imaging (ASDI [66]) introduce diversity that aids in distinguishing exoplanet signals from the nuisance; see Sec. 3 for the resulting image formation models. ADI takes advantage of Earth's rotation by keeping the telescope pupil fixed, causing off-axis exoplanets to follow a predictable circular trajectory, while the star remains centered. This apparent motion helps separate exoplanet signals from quasi-static speckles. SDI further improves separation by capturing images across multiple spectral channels, where speckles scale quasi-linearly with wavelength while exoplanet signals stay fixed. ASDI combines ADI and SDI to produce a four-dimensional dataset (spatial, temporal, and spectral dimensions). This diversity in A(S)DI calls for advanced algorithms that can leverage these priors to optimize exoplanet detection and spectral characterization. + +We propose a novel hybrid approach that combines the interpretability of a statistical framework with end-to-end learnable components, improving detection sensitivity and characterization accuracy for directly imaged exoplanets. Our main contributions include: + +- A learnable architecture based on a statistical model integrating pixel correlations across spatial scales and leveraging the spatial symmetries of the nuisance component, improving both detection and characterization. +- Statistically reliable detection scores and estimates with uncertainty quantification, essential for astrophysics. +- Joint processing of 4-D datasets incorporating the ASDI forward model, shown to significantly boost detection. + +# 2.
Related work + +Various methods have been developed to isolate faint planetary signals from stellar nuisance, broadly categorized as subtraction-, statistical-, and learning-based models [50]. + +Subtraction-based methods. Subtraction-based methods are among the earliest and most common techniques for exoplanet detection in direct imaging, aiming to remove quasi-static speckles that obscure faint signals. The cA(S)DI algorithms [38, 41] subtract a reference model by averaging frames and stacking residuals, enhancing the rotating exoplanet signal. TLOCI [44] and its variants (e.g., [32, 68]) optimize linear combinations of images to model speckles, while KLIP/PCA [3, 60] employs principal component analysis for low-rank subspace projection. Other approaches, such as non-negative matrix factorization [33, 49, 51, 52] and LLSG [34], decompose data into components to isolate sources, while the RSM algorithm [17-19] combines outputs from multiple methods to reduce individual biases. However, these methods often lack statistically rigorous outputs, such as interpretable detection scores and unbiased flux estimates, as they rely on heuristic image combinations rather than fully end-to-end models. + +Statistical approaches. To address the previous limitations, statistical methods model the nuisance using probabilistic frameworks. The ANDROMEDA [6] and FMMF [54] approaches rely on matched filtering, simplifying the nuisance as uncorrelated Gaussian noise. Similarly, SNAP [64] estimates both the nuisance and exoplanet components jointly through maximum likelihood under the same assumption. The PACO algorithm [25, 26] improves on this by extending beyond the white noise assumption. It uses a patch-based statistical framework to model the spatial and spectral covariances of the nuisance, effectively capturing the local structure of speckles. 
This approach draws on techniques from computer vision, such as denoising, restoration, super-resolution, collaborative filtering [15], sparse coding [1, 40], or mixture models [71, 72]. + +Learning-based approaches. Machine learning methods are increasingly used in high-contrast imaging, inspired by advances in fields like photography and biomedical imaging. Early applications in exoplanet detection, such as the SVM model by [23], leverage structured high-contrast data, while the (NA)-SODINN algorithms [8, 35] frame detection as a binary classification problem, using KLIP/PCA-processed patches with random forests or CNNs. Despite their effectiveness, SODINN struggles with high false alarm rates and complex hyperparameter tuning [7]. Generative adversarial networks (GANs) have also been applied to simulate nuisance patterns for training deep learning models [70]. Approaches like TRAP [55] and HSR [31] handle unmixing via regularized regression, modeling nuisance evolution with signal-free reference data. Deep PACO [27, 28] combines statistical modeling with deep learning, leveraging PACO's nuisance statistical model and a CNN to detect exoplanets and refine residual mismatches. + +All these algorithms are observation-dependent, building nuisance models directly from the dataset of interest. This dependency hinders detection near the host star due to (i) significant temporal fluctuations in the nuisance and (ii) residual self-subtraction, where part of the exoplanet signal is mistakenly removed. Recent observation-independent approaches address these limitations by using archival survey data to model the nuisance. Super-RDI [56] extends KLIP/PCA for large observational databases, while ConStruct [69] uses an auto-encoder to learn typical speckle patterns, and [11] employs a discriminative nuisance model. MODEL&CO [5] improves deep PACO with a deep statistically-modeled nuisance framework.
Our proposed approach, ExoMILD (Exoplanet imaging by MIxture of Learnable Distributions), belongs to this new category. + +# 3. Image formation models + +In direct imaging, a stellar system is observed through an optical system that includes the atmosphere, telescope, and scientific instrument. The response of this system to a point source (e.g., an exoplanet) is defined by its point-spread function off the optical axis (off-axis PSF), describing how light from the source is distributed across the sensor. However, as noted in Sec. 1, residual aberrations uncorrected by adaptive optics produce a quasi-static, structured speckle pattern. To mitigate speckles through numerical processing, several observational strategies are used, as detailed next. + +# 3.1. Angular Differential Imaging (ADI) + +In ADI, the field derotator of the telescope is turned off, causing the field of view (including any exoplanets) to rotate around the target star due to Earth's rotation. Meanwhile, the optical system remains stationary, so the speckles remain fixed. This distinction helps separate exoplanet signals from speckle noise. Formally, let $\pmb{y}$ in $\mathbb{R}^{T\times H\times W}$ denote the sequence of measurements from the observation, where $T$ is the number of exposures and $H$ , $W$ represent the pixel dimensions of each exposure.
Each frame $\pmb{y}_t$ in $\mathbb{R}^{H\times W}$ can then be represented as: + +$$ +\boldsymbol{y}_{t} = \boldsymbol{s}_{t} + \sum_{k=0}^{K-1} \alpha^{(k)} \boldsymbol{h}\left(\boldsymbol{x}_{t}^{(k)}\right) + \boldsymbol{\epsilon}_{t}, \tag{1} +$$ + +where $s_t$ in $\mathbb{R}^{H \times W}$ is the speckle component, $K$ is the (unknown) number of exoplanets, $\pmb{x}_t^{(k)}$ is the position of exoplanet $k$ at time $t$ , $\pmb{h}(\pmb{x})$ in $\mathbb{R}^{H \times W}$ is the PSF model centered on position $\pmb{x}$ in $\mathbb{R}^2$ , $\alpha^{(k)}$ in $\mathbb{R}_+$ is the flux of the exoplanet, and $\epsilon_t$ in $\mathbb{R}^{H \times W}$ is stochastic noise. The position of exoplanet $k$ at time $t$ can be written as $\pmb{x}_t^{(k)} = r\big(\pmb{x}_0^{(k)},\phi_t\big)$ , where $\pmb{x}_0^{(k)}$ in $\mathbb{R}^2$ is its initial position on frame $y_0$ , $\phi_t$ in $\mathbb{R}$ is the predictable parallactic angle at time $t$ , defined as the cumulative apparent rotation (induced by Earth's rotation) since the start of the sequence, and $r: \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2$ defines a rotation trajectory centered on the star. + +# 3.2. Angular Spectral Differential Imaging (ASDI) + +The speckle pattern induced by the star can be understood as the PSF on the optical axis (on-axis PSF) of the optical system. According to diffraction laws, this pattern scales homothetically with wavelength, creating a chromatic effect similar to chromatic aberrations in photography. In ASDI, this spectral scaling helps further disentangle exoplanet signals from speckles. In this case, each observation is denoted as $\mathbf{y}$ in $\mathbb{R}^{C\times T\times H\times W}$ , where $C$ is the number of channels.
The forward model (1) becomes

$$
\boldsymbol{y}_{c,t} = \beta_{c} \mathbf{D}_{\lambda_{c}/\lambda_{0}} \boldsymbol{s}_{0,t} + \sum_{k=0}^{K-1} \alpha_{c}^{(k)} \boldsymbol{h}_{c}\left(\boldsymbol{x}_{t}^{(k)}\right) + \boldsymbol{\epsilon}_{c,t}, \tag{2}
$$

where $\beta_{c}$ in $\mathbb{R}$ is the amplitude of the speckles in channel $c$, $\mathbf{D}_{\lambda_c / \lambda_0}$ is the homothety operator aligning channel 0 on channel $c$ with a dilation coefficient defined as the ratio of their wavelengths, $\boldsymbol{s}_{0,t}$ in $\mathbb{R}^{H\times W}$ is the speckle pattern at wavelength $\lambda_0$, and $\boldsymbol{\epsilon}_{c,t}$ in $\mathbb{R}^{H\times W}$ is additive thermal noise. In this setting, the exoplanet flux $\alpha_{c}$ and the off-axis PSF $\boldsymbol{h}_c$ now depend on channel $c$.

# 4. Proposed method

# 4.1. Convolutional statistical model

Local Gaussian model of speckles. Building on the PACO algorithm [25, 26], we propose to capture local spatial correlations between pixels of the nuisance term. We denote by $\mathcal{G}_p$ the grid of spatial pixels, such that $|\mathcal{G}_p| = HW$ and $\forall i$ in $[0, HW - 1]$, $\pmb{x}_i$ in $\mathbb{R}^2$. Then, we model the statistical distribution of collections of patches positioned on a spatial grid $\mathcal{G}_d$, such that $\mathcal{G}_d \subset \mathcal{G}_p$, with $M := |\mathcal{G}_d|$. We denote by $\boldsymbol{y}_t^{(j)}$ in $\mathbb{R}^p$, $\forall (j, t) \in [0, M - 1] \times [0, T - 1]$, the observed patch at spatial location $j$ and time $t$. In the absence of an exoplanet, i.e., when only the nuisance component is present, each collection of patches $\{\pmb{y}_t^{(j)}\}_{t = 0:T - 1}$ is modeled by a multivariate Gaussian:

$$
\forall (j, t), \quad \boldsymbol{y}_t^{(j)} \sim \mathcal{N}(\boldsymbol{m}_j, \mathbf{C}_j), \tag{3}
$$

with $\boldsymbol{m}_j$ in $\mathbb{R}^p$ and $\mathbf{C}_j$ in $\mathbb{R}^{p \times p}$ the mean and covariance of the Gaussian distribution. These parameters are estimated by maximum likelihood with shrinkage of the covariance matrix, and are denoted $\widehat{\boldsymbol{m}}_j$ and $\widehat{\mathbf{C}}_j$ in the following. Additional details are presented in Appendix A.1.

![](images/55faee181266e824be20d12aae83a041859f30576a9a6a800e1cb20f876e9a11.jpg)
Figure B. Workflow of the proposed method: it exploits both the spectral behavior of speckles and the apparent motion of exoplanets to disentangle the exoplanet signal from the nuisance component in the observations $\pmb{y}$. To achieve this, local patches of the nuisance are modeled as Gaussian distributions, leveraging problem symmetries and incorporating multiple scales. These patches are fed to our convolutional statistical model and combined to form a detection map. Additionally, a learned object prior, represented by a UNet $f_{\nu}$, is introduced to denoise the detection map produced by the statistical model. This approach results in an end-to-end learnable architecture.

![](images/d35797d8eb1ea10a22b8c710baf81f5ab1d4888405e5ca786af3794443fdfa9e.jpg)
Figure C. Proposed convolutional statistical model: spectrally aligned speckle patches, indexed by $j$, with dimension $Np$ and $CT$ samples, are first linearly projected into a lower-dimensional space of size $m$. In this space, the parameters of the Gaussian distribution $\widehat{\boldsymbol{m}}_j$ and $\widehat{\mathbf{C}}_j$ are estimated and subsequently combined with the PSF $\boldsymbol{h}$ to compute the terms $\boldsymbol{a}^{(j)}$ and $\boldsymbol{b}^{(j)}$. As detailed in Appendix A.3, the efficient computation of $\boldsymbol{a}^{(j)}$ relies on the Cholesky decomposition of the precision matrix $\widehat{\mathbf{C}}_j^{-1}$.

Detection criterion at a given position.
This local statistical model estimates the time-invariant flux $\widehat{\alpha}$ of an exoplanet at position $\pmb{x}_0$ in $\mathbb{R}^2$ in the first frame by maximizing the global likelihood:

$$
\widehat{\alpha} = \underset{\alpha}{\arg\max}\, \ell(\alpha, \boldsymbol{x}_0). \tag{4}
$$

Assuming that collections of patches on the trajectory of an exoplanet are independent, the likelihood becomes:

$$
\ell(\alpha, \boldsymbol{x}_0) = \prod_{t} \mathbb{P}\left(\boldsymbol{y}_t^{(i_t)} - \alpha \boldsymbol{h}^{(i_t)}(\boldsymbol{x}_t) \mid \widehat{\boldsymbol{m}}_{i_t}, \widehat{\mathbf{C}}_{i_t}\right), \tag{5}
$$

where $\pmb{x}_t = r(\pmb{x}_0, \phi_t)$ is the predicted position of the exoplanet at time $t$, $i_t$ the index of the patch centered on position $\lfloor \pmb{x}_t \rfloor$ in $\mathcal{G}_p$, and $\pmb{h}^{(i_t)}(\pmb{x}_t)$ in $\mathbb{R}^p$ the patch $i_t$ of the off-axis PSF model centered on $\pmb{x}_t$. We propose to consider the modified likelihood:

$$
\ell(\alpha, \boldsymbol{x}_0) = \prod_{t} \prod_{j \in S(\boldsymbol{x}_t)} \mathbb{P}\left(\boldsymbol{y}_t^{(j)} - \alpha \boldsymbol{h}^{(j)}\left(\boldsymbol{x}_t\right) \mid \widehat{\boldsymbol{m}}_j, \widehat{\mathbf{C}}_j\right)^{w_j}, \tag{6}
$$

where $S(\pmb{x}_t)$ represents the subset of modeled distributions for patches around location $\pmb{x}_t$, with each distribution $j$ weighted by $w_j$ in $\mathbb{R}^+$, such that $\sum_j w_j = 1$. Compared to (5), this approach is more robust, as it aggregates overlapping patch contributions at each time step, modeling the nuisance component as a convolutional process influenced by multiple, overlapping noise sources. It is also more computationally efficient, as it does not require a dense grid $\mathcal{G}_d$ for patch collections, thereby reducing the parameter estimation load.
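As a concrete illustration of the quantities entering the likelihoods (5)-(6), the rotation trajectory $\pmb{x}_t = r(\pmb{x}_0, \phi_t)$ and the patch centers $\lfloor \pmb{x}_t \rfloor$ can be sketched in NumPy; the star position, angles, and array sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def r(x0, phi_deg, center):
    """Rotation trajectory r(x0, phi): rotate the initial position x0
    around the star (assumed at `center`) by the parallactic angle phi."""
    phi = np.deg2rad(phi_deg)
    rot = np.array([[np.cos(phi), -np.sin(phi)],
                    [np.sin(phi),  np.cos(phi)]])
    return center + rot @ (np.asarray(x0, dtype=float) - center)

center = np.array([128.0, 128.0])       # star at the frame center (toy value)
phi = np.array([0.0, 5.0, 10.0, 15.0])  # parallactic angles phi_t, in degrees
x0 = np.array([128.0, 160.0])           # initial exoplanet position x_0

# Predicted positions x_t and patch centers floor(x_t), one per exposure t.
traj = np.stack([r(x0, a, center) for a in phi])
patch_centers = np.floor(traj).astype(int)
```

Since $r$ is a rotation about the star, the angular separation of the candidate is preserved along the trajectory; only the patch index $i_t$ changes from frame to frame.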
This efficiency is essential for incorporating the model into an end-to-end learning framework (see Sec. 4.3). Finally, this flexible formulation can accommodate additional correlation types (see Sec. 4.2). Maximizing the likelihood (6) leads to the flux estimator $\widehat{\alpha}$ and its standard deviation $\widehat{\sigma}_{\alpha}$:

$$
\widehat{\alpha} = \frac{\sum_t b_t\left(\boldsymbol{x}_t\right)}{\sum_t a\left(\boldsymbol{x}_t\right)}, \quad \widehat{\sigma}_{\alpha} = \frac{1}{\sqrt{\sum_t a\left(\boldsymbol{x}_t\right)}}, \tag{7}
$$

where

$$
b_t\left(\boldsymbol{x}_t\right) = \sum_{j \in S\left(\boldsymbol{x}_t\right)} w_j \boldsymbol{h}^{(j)}\left(\boldsymbol{x}_t\right)^{\top} \widehat{\mathbf{C}}_j^{-1}\left(\boldsymbol{y}_t^{(j)} - \widehat{\boldsymbol{m}}_j\right), \tag{8}
$$

$$
a\left(\boldsymbol{x}_t\right) = \sum_{j \in S\left(\boldsymbol{x}_t\right)} w_j \boldsymbol{h}^{(j)}\left(\boldsymbol{x}_t\right)^{\top} \widehat{\mathbf{C}}_j^{-1} \boldsymbol{h}^{(j)}\left(\boldsymbol{x}_t\right). \tag{9}
$$

To assess the probability of an exoplanet at $\pmb{x}_0$, we use the generalized likelihood ratio test (GLRT) to statistically evaluate the parameter $\alpha$. Under the null hypothesis $\mathcal{H}_0$ where $\alpha = 0$ (indicating no exoplanet), the statistic

$$
\widehat{\gamma} = \widehat{\alpha} / \widehat{\sigma}_{\alpha} = \left(\sum_t b_t\left(\boldsymbol{x}_t\right)\right) \Big/ \sqrt{\sum_t a\left(\boldsymbol{x}_t\right)}, \tag{10}
$$

is statistically controlled and follows a Gaussian distribution $\mathcal{N}(0,1)$. When evaluating the probability of presence of an exoplanet, we test against the alternative hypothesis $\mathcal{H}_1$: $\alpha > 0$. The statistic $\widehat{\gamma}$ can be directly mapped to a probability of detection.
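The closed forms (7)-(10) can be sketched in NumPy under stated assumptions: toy dimensions, identity precision matrices $\widehat{\mathbf{C}}_j^{-1}$, and randomly drawn PSF patches, none of which are the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, nS = 9, 6, 4                 # patch size, frames, |S(x_t)| (toy values)
w = np.full(nS, 1.0 / nS)          # weights w_j, summing to 1

# Hypothetical precomputed quantities for each (t, j):
m_hat = rng.normal(size=(T, nS, p))           # patch means m_hat_j
C_inv = np.tile(np.eye(p), (T, nS, 1, 1))     # precision matrices C_hat_j^{-1}
h = rng.normal(size=(T, nS, p))               # PSF patches h^(j)(x_t)
alpha_true = 2.0
y = m_hat + alpha_true * h + 0.1 * rng.normal(size=(T, nS, p))

# Eqs. (8)-(9): weighted correlation and energy terms, one value per frame t.
b_t = np.einsum('tjk,j,tjkl,tjl->t', h, w, C_inv, y - m_hat)
a_t = np.einsum('tjk,j,tjkl,tjl->t', h, w, C_inv, h)

# Eq. (7): flux estimate and its standard deviation.
alpha_hat = b_t.sum() / a_t.sum()
sigma_hat = 1.0 / np.sqrt(a_t.sum())

# Eq. (10): GLRT statistic, N(0, 1) under the null hypothesis H0.
gamma_hat = b_t.sum() / np.sqrt(a_t.sum())
```

With these toy inputs the estimator recovers the injected flux up to noise, and $\widehat{\gamma} = \widehat{\alpha}/\widehat{\sigma}_{\alpha}$ holds by construction.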
It can also be interpreted as the output signal-to-noise ratio of the statistical model and, in practice, this is the detection score used by astronomers [63]. To obtain the dense counterparts $\widehat{\gamma}, \widehat{\alpha}, \widehat{\sigma}$ in $\mathbb{R}^{H \times W}$, we adopt a fast approximation similar to [25], detailed in Appendix A.2.

Iterative procedure for characterization. Astrometry is the task of precisely estimating the flux $\alpha$ and the sub-pixel position $\pmb{x}_0$ of an exoplanet. This can be achieved by jointly optimizing the likelihood defined in Eq. (6). Denoting by $\pmb{z} = [\alpha, \pmb{x}_0]$ in $\mathbb{R}^3$, the gradient $\pmb{g}$ and Hessian matrix $\mathbf{H}$ of the negative log-likelihood decompose as follows:

$$
\pmb{g}(\pmb{z}) = \sum_{t} \sum_{j \in S(\pmb{x}_t)} w_j \pmb{g}_j(\pmb{z}), \quad \mathbf{H}(\pmb{z}) = \sum_{t} \sum_{j \in S(\pmb{x}_t)} w_j \mathbf{H}_j(\pmb{z}),
$$

where $\pmb{g}_j$ and $\mathbf{H}_j$ represent the gradient and Hessian operators for each distribution $j$, respectively. These operators are responsible for updating the statistical parameters $\pmb{m}_j$ and $\mathbf{C}_j$, which are initially biased due to the presence of the exoplanet. Further details are provided in Sec. A.4. Each iteration, indexed by $l$, reads:

$$
\boldsymbol{z}^{(l+1)} = \boldsymbol{z}^{(l)} - \mathbf{H}(\boldsymbol{z}^{(l)})^{-1} \boldsymbol{g}(\boldsymbol{z}^{(l)}). \tag{11}
$$

# 4.2. Extensions of the statistical model: mixture of distributions

We now build on the convolutional statistical model presented in Sec. 4.1 to introduce a multi-scale statistical model of the nuisance component.

Linear projection.
Instead of modeling correlations between the pixels of a patch, we propose to model correlations between projected linear features:

$$
\forall (j, t), \quad \mathbf{A} \boldsymbol{y}_t^{(j)} \sim \mathcal{N}(\boldsymbol{m}_j, \mathbf{C}_j), \tag{12}
$$

where $\mathbf{A}$ in $\mathbb{R}^{m\times p}$ is the projection matrix and $m\leq p$. This is a generalization of the model presented in Sec. 4.1, for which $\mathbf{A} = \mathbf{I}_p$. The terms $b_{t}$ and $\pmb{a}$ introduced in Eqs. (26)-(27) become:

$$
b_{i,t} = \sum_{j \in S\left(\boldsymbol{x}_i\right)} w_j \boldsymbol{h}_{ji}^{\top} \mathbf{A}_j^{\top} \widehat{\mathbf{C}}_j^{-1}\left(\mathbf{A}_j \boldsymbol{y}_t^{(j)} - \widehat{\boldsymbol{m}}_j\right), \tag{13}
$$

$$
a_i = \sum_{j \in S\left(\boldsymbol{x}_i\right)} w_j \boldsymbol{h}_{ji}^{\top} \mathbf{A}_j^{\top} \widehat{\mathbf{C}}_j^{-1} \mathbf{A}_j \boldsymbol{h}_{ji}. \tag{14}
$$

The learnable projection $\mathbf{A}$ decouples the feature space, where the statistical distribution is defined, from the pixel space, enabling effective handling of long-range correlations.

Multi-scale approach. So far, we have shown how our model can combine multiple neighboring local distributions through weighted averaging of the terms $\boldsymbol{b}_t$ and $\boldsymbol{a}$. Extending this approach, we propose to integrate distributions across multiple spatial scales by varying the patch size $p$. We denote by $\mathcal{P}$ the set of chosen patch sizes. We expand the set $S(\boldsymbol{x}_t)$ of modeled distributions to include different scales, such that:

$$
S\left(\boldsymbol{x}_t\right) = \bigcup_{p \in \mathcal{P}} S_p\left(\boldsymbol{x}_t\right), \tag{15}
$$

where $S_{p}(\boldsymbol{x}_{t})$ is the subset of patches with patch size $p$ containing $\boldsymbol{x}_{t}$.
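The role of the projection can be illustrated with a toy NumPy sketch (the dimensions, matrices, and random initialization of $\mathbf{A}$ are assumptions for illustration only): the Gaussian statistics live in $\mathbb{R}^m$ regardless of the patch size $p$, and $\mathbf{A} = \mathbf{I}_p$ recovers the unprojected model of Sec. 4.1.

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 25, 8                               # patch dim. and feature dim., m < p
A = rng.normal(size=(m, p)) / np.sqrt(p)   # projection matrix A (toy values)

h_ji = rng.normal(size=p)                  # PSF patch h_ji (toy)
y_tj = rng.normal(size=p)                  # observed patch y_t^(j) (toy)
m_hat = np.zeros(m)                        # feature-space mean (toy)
C_inv = np.eye(m)                          # feature-space precision (toy)

# One term (w_j = 1) of Eqs. (13)-(14): statistics live in R^m, not R^p.
Ah = A @ h_ji
b_term = Ah @ C_inv @ (A @ y_tj - m_hat)
a_term = Ah @ C_inv @ Ah

# With A = I_p (and statistics in R^p), the unprojected model of Sec. 4.1
# is recovered: b reduces to h^T C^{-1} (y - m).
b_unprojected = (np.eye(p) @ h_ji) @ np.eye(p) @ (np.eye(p) @ y_tj - np.zeros(p))
```

For large patches, only the $m \times m$ covariance of the projected features has to be estimated and inverted, which is the computational payoff discussed next.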
The linear projection is particularly useful for controlling the dimensionality of the feature space for larger patches by setting $m < p$, thereby maintaining computational efficiency.

Leveraging rotational symmetries. In direct imaging, the speckle pattern exhibits approximate central and rotational symmetries, see Fig. A. These symmetries arise from near-isotropic distortions of the wavefront and from the optical system's near-circular symmetry. Symmetries of the speckles can be theoretically understood through a series expansion of the diffraction pattern, using increasing powers of the Fourier transform of the residual phase error [48, 53, 59]. Leveraging these symmetries is essential for constructing a more robust model of the speckles, particularly to mitigate the effects of self-subtraction. This effect is a well-known challenge in direct imaging that arises when limited parallactic rotation results in the contamination of speckle parameters by the exoplanet signal, ultimately reducing detection sensitivity [46]. To incorporate these symmetries, we propose modeling the joint distribution of patches extracted from the same location after rotating the observation by multiples of $2\pi /N$, also known as $N$-fold rotational symmetry:

$$
\forall (j, t), \quad \mathbf{A}\left[\mathbf{R}_{2\pi n/N}\left(\boldsymbol{y}_t\right)^{(j)}\right]_{n = 0:N-1} \sim \mathcal{N}\left(\boldsymbol{m}_j, \mathbf{C}_j\right), \tag{16}
$$

where $\mathbf{A}$ is in $\mathbb{R}^{m\times Np}$. The parameters of this joint distribution are less subject to self-subtraction, as it is very unlikely to observe exoplanets simultaneously in all jointly modeled patches. We use a mixture model with $N = 1, 2, 4$.

Joint multi-spectral modeling. In ASDI, the spectral channels are related by a homothety, as given in Eq. (2).
We propose to leverage this relationship and model:

$$
\forall (j, c, t), \quad \mathbf{A} \beta_{c,j}^{-1} \mathbf{D}_{\lambda_0/\lambda_c}(\boldsymbol{y}_{c,t})^{(j)} \sim \mathcal{N}(\boldsymbol{m}_j, \mathbf{C}_j). \tag{17}
$$

In this setting, the estimators of the statistical parameters $\widehat{\pmb{m}}_j$ and $\widehat{\mathbf{C}}_j$ are less prone to self-subtraction, as the spectral diversity reduces the ambiguity between speckles and exoplanet signals. The estimator of the local amplitude $\widehat{\beta}_{c,j}$ is computed as the standard deviation of pixel values across $\{\mathbf{D}_{\lambda_0 / \lambda_c}(\pmb{y}_{c,t})^{(j)}\}_t$.

# 4.3. End-to-end trainable approach

Problem statement. We propose to combine our statistical model of the nuisance component with a learnable prior on the exoplanet signals. This approach can be formulated as an optimization problem:

$$
\widehat{\boldsymbol{\alpha}} = \underset{\boldsymbol{\alpha} \in \mathbb{R}^{H \times W}}{\arg\min}\, \varphi_{\theta}(\boldsymbol{\alpha}, \chi(\boldsymbol{y})) + \psi_{\nu}(\boldsymbol{\alpha}), \tag{18}
$$

where $\varphi$ is the data-fitting term, corresponding to the negative log-likelihood of our statistical model of the nuisance component described in Sec. 4.1, and $\psi$ is a prior on exoplanet signals. We denote by $\theta, \nu$ the learnable parameters associated with these terms, and by $\chi(\pmb{y})$ the parameters estimated for each observation. In practice, $\chi(\pmb{y}) = \{\widehat{\pmb{m}}_j, \widehat{\mathbf{C}}_j, \widehat{\beta}_{c,j}\}$, $\theta = \{\mathbf{A}_j, w_j\}_j$, and $\nu$ corresponds to the parameters of the neural network implementing $\psi$.

Detection by denoising. We propose a two-step approach to obtain the detection score corresponding to Eq. (18).
First, we compute the detection score associated with the statistical model $\varphi$, which admits a fast approximation denoted by $\widehat{\gamma} \in \mathbb{R}^{H \times W}$ in the following and given by Eq. (28). We recall that under the null hypothesis, i.e., when no exoplanet is present, the elements of $\widehat{\gamma}$ follow a Gaussian distribution $\mathcal{N}(0, 1)$. Therefore, extracting the signals of exoplanets corresponds to denoising $\widehat{\gamma}$ to remove this background noise. We propose to achieve this step with a neural network, such that:

$$
\widetilde{\gamma} = f_{\nu}(\widehat{\gamma}), \tag{19}
$$

where $f_{\nu}$ is the denoiser implemented by the neural network, and $\widetilde{\gamma}$ the final detection score.

Training objective. We suppose that the denoised detection score can be decomposed similarly to the GLRT form provided in Eq. (10) for the statistical model. This leads to $\widetilde{\gamma} = \widetilde{\alpha} / \widetilde{\sigma}$, where $\widetilde{\alpha}, \widetilde{\sigma}$ in $\mathbb{R}^{H \times W}$ are the estimated per-pixel flux of the exoplanet and the standard deviation quantifying the associated uncertainty. Additionally, we assume that the uncertainty $\widetilde{\sigma}$ remains unaffected by the neural network, as it is primarily driven by the high variability of the speckles, already captured by the statistical model. Consequently, both components can be expressed as:

$$
\widetilde{\boldsymbol{\alpha}} = f_{\nu}(\widehat{\boldsymbol{\alpha}} / \widehat{\boldsymbol{\sigma}}) \times \widehat{\boldsymbol{\sigma}}, \quad \widetilde{\boldsymbol{\sigma}} = \widehat{\boldsymbol{\sigma}}. \tag{20}
$$

This formulation yields a pixel-wise Gaussian distribution: $\mathcal{N}(\widetilde{\alpha},\widetilde{\sigma})$.
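The reparametrization (19)-(20) can be sketched as follows; the soft-threshold below merely stands in for the learned UNet $f_{\nu}$ and is purely an illustrative assumption:

```python
import numpy as np

def f_nu(g):
    """Placeholder for the learned denoiser f_nu (a UNet in the text);
    here a soft-threshold that suppresses low-S/N background."""
    return np.sign(g) * np.maximum(np.abs(g) - 1.0, 0.0)

rng = np.random.default_rng(2)
H = W = 8
alpha_hat = rng.normal(size=(H, W))    # flux map from the statistical model
sigma_hat = np.full((H, W), 0.5)       # its per-pixel standard deviation

gamma_tilde = f_nu(alpha_hat / sigma_hat)   # Eq. (19): denoised score
alpha_tilde = gamma_tilde * sigma_hat       # Eq. (20): denoised flux
sigma_tilde = sigma_hat                     # Eq. (20): uncertainty kept as is
```

Because $\widetilde{\sigma} = \widehat{\sigma}$, the network only reshapes the normalized score, so the pixel-wise Gaussian interpretation $\mathcal{N}(\widetilde{\alpha}, \widetilde{\sigma})$ is preserved.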
The learnable parameters $\theta$ and $\nu$ of our model are optimized by minimizing the negative log-likelihood between the estimates $(\widetilde{\alpha},\widetilde{\sigma})$ and the ground truth $\alpha_{\mathrm{gt}}$:

$$
\mathcal{L}\left(\widetilde{\alpha}, \widetilde{\sigma}, \alpha_{\mathrm{gt}}\right) = \frac{1}{2}\left(\widetilde{\alpha} - \alpha_{\mathrm{gt}}\right)^2 / \widetilde{\sigma}^2 + \log \widetilde{\sigma}. \tag{21}
$$

Model ensembling. To improve robustness, we combine the outputs of multiple trained models. Specifically, given outputs $\widetilde{\alpha}$ and $\widetilde{\sigma}$, we define $\widetilde{a}$ and $\widetilde{b}$ such that $\widetilde{\sigma} = 1 / \sqrt{\widetilde{a}}$ and $\widetilde{\alpha} = \widetilde{b} / \widetilde{a}$. For $Q$ models indexed by $q$, the outputs are aggregated as follows:

$$
\widetilde{\boldsymbol{a}}_Q = \frac{1}{Q}\sum_q \widetilde{\boldsymbol{a}}_q, \quad \widetilde{\boldsymbol{b}}_Q = \frac{1}{Q}\sum_q \widetilde{\boldsymbol{b}}_q. \tag{22}
$$

The combined detection score, $\widetilde{\gamma}_{Q} = \widetilde{b}_{Q} / \sqrt{\widetilde{a}_{Q}}$, represents a weighted average of the individual detection scores.

Calibration. The trained model needs to be calibrated in order to relate the detection score to a probability of false alarm. We follow the procedure outlined in [5] and rely on a separate calibration dataset to estimate the cumulative distribution function of $\widetilde{\gamma}$ under the null hypothesis. Additional details are provided in the Supplementary Material.

# 5. Experiments

# 5.1. Data, algorithms and evaluation protocol

Datasets. We use datasets from SPHERE, a cutting-edge exoplanet finder instrument at the VLT [4]. Raw observations were sourced from the public data archive of the European Southern Observatory and calibrated with public tools [21, 47] from the High-Contrast Data Center. This results in science-ready 4-D datasets $\pmb{y}$ (with $C = 2$ spectral channels, $T\in [[15,300]]$ temporal frames, and $H\times W = 1024^2$ pixels per image), along with the off-axis PSF $\pmb{h}$, wavelengths $\lambda_{c}$, and parallactic angles $\phi_t$ as algorithm inputs. Many algorithms already achieve optimal detection sensitivity far from the star, where performance is limited by photon noise [5, 28]. Thus, our analysis focuses on a smaller, star-centered region of $H = W = 256$ pixels, where detection sensitivity can still be significantly improved [5, 28].

| | Spatial scales | $N=1$ | $N=1,2$ | $N=1,2,4$ |
| --- | --- | --- | --- | --- |
| ADI | 8 × 8 | 0.554 ± 0.005 | 0.571 ± 0.005 | 0.575 ± 0.005 |
| | +16 × 16 | 0.561 ± 0.005 | 0.573 ± 0.004 | 0.579 ± 0.005 |
| | +32 × 32 | 0.567 ± 0.005 | 0.575 ± 0.005 | 0.580 ± 0.005 |
| | +64 × 64 | 0.569 ± 0.005 | 0.577 ± 0.005 | 0.581 ± 0.005 |
| ASDI | 8 × 8 | 0.713 ± 0.005 | 0.719 ± 0.005 | 0.723 ± 0.005 |
| | +16 × 16 | 0.720 ± 0.005 | 0.723 ± 0.005 | 0.725 ± 0.005 |
| | +32 × 32 | 0.720 ± 0.005 | 0.725 ± 0.005 | 0.726 ± 0.005 |
| | +64 × 64 | 0.720 ± 0.005 | 0.725 ± 0.004 | 0.726 ± 0.004 |

Table A. Impact of multi-scale and $N$-fold rotational symmetries on detection performance (AUC) of our statistical model.

Training procedure. For training, we use 220 observations from the SHINE-F150 large survey of SPHERE [39]. For testing, we select 8 datasets representing typical diversity in observing conditions and instrumental settings (e.g., parallactic rotation amplitude $\Delta_{\phi} = |\phi_{T - 1} - \phi_0|$). Five test datasets (on stars HD 159911, HD 216803, HD 206860, HD 188228, HD 102647) are used for benchmarking against state-of-the-art methods and conducting a model ablation analysis. To evaluate performance, we simulate synthetic faint point-like sources mimicking exoplanet signatures and inject them into real data. This simulation procedure is common practice in direct imaging to ground the actual performance of algorithms, as it is highly realistic [7, 8, 16, 17, 28, 35]: any real source indeed takes the form of the off-axis PSF, which is measured immediately before and after the main observation sequence by offsetting the star from the coronagraph. This procedure is essential due to the lack of ground truth and the limited number of exoplanets (only a few dozen) detected by direct imaging to date. In addition, we rely on simulated sources for training our deep model, which is also standard practice for learning-based techniques. Three datasets of the star HR 8799, hosting three known exoplanets in the field of view [42, 43], are also used as fully real data.

Baselines. For detection, we benchmark the proposed algorithms against methods from the different classes described in Sec. 2. Selection criteria are (i) code availability (often limited in direct imaging), (ii) relevance, and (iii) widespread use. In this context, we include the cADI [41] and KLIP/PCA [60] subtraction-based methods, as these are implemented in most data processing pipelines [13, 21, 62],
| Method | flux error (ARE) | position error (RMSE) |
| --- | --- | --- |
| PACO | 0.56 | 0.21 |
| Proposed | 0.51 | 0.11 |
Table B. Comparison of statistical models for flux estimation.

have been responsible for detecting nearly all imaged exoplanets (including the most recent discoveries [45, 67]), and remain heavily utilized by astronomers. Concerning statistical methods, we focus on PACO [25, 26], which uniquely accounts for data correlations. PACO has consistently outperformed KLIP/PCA in large observational surveys [12, 22] and achieved state-of-the-art performance on SPHERE data in a community benchmark, surpassing various subtraction-based, statistical, and learning-based approaches [7]. We also evaluate the proposed approach against MODEL&CO [5], a recent hybrid method shown to outperform cADI, KLIP/PCA, and (deep) PACO [5]. For cADI and KLIP/PCA, we use the VIP Python package [13, 33], fine-tuning parameters (e.g., the number of PCA modes) to optimize detection scores guided by the ground truth. PACO and MODEL&CO were processed by their authors using data-driven settings. For flux estimation, we compare only with PACO due to computational constraints. We conduct all analyses in both ADI and ASDI modes (where supported) to assess the gains from joint spectral processing.

Metrics. The detection metric used is the area under the receiver operating characteristic curve (AUC), representing the true positive rate against the false discovery rate obtained by varying the detection threshold. Higher AUC values indicate better performance. This standard metric in direct imaging [7, 25, 34] captures the precision-recall tradeoff and allows fair algorithm comparisons, as a common threshold does not ensure consistent false alarm rates due to the lack of statistical grounding in some detection maps, see Sec. 1. For flux estimation, the primary metric is the absolute relative error (ARE) between the ground-truth and estimated flux, with lower values indicating better performance. We also report the root mean square error (RMSE) for sub-pixel localization of exoplanets.

# 5.2. Quantitative and qualitative evaluations

Statistical model. We evaluate the impact of leveraging multiple spatial scales and symmetries in our statistical model by testing its detection performance across the configurations listed in Table A. Results show that using both symmetries and scales is crucial in ADI, where statistical parameters suffer from self-subtraction without the robustness brought by spectral diversity. We then evaluate the statistical model's performance in estimating flux and sub-pixel position (regression tasks) using the optimization process from
Sec. 4.1. Table B shows the average errors across 1,351 synthetic exoplanets detectable by both considered methods. Our approach outperforms PACO in almost all cases.

| Modality | Method | HD 159911 (54°) | HD 216803 (23°) | HD 206860 (11°) | HD 188228 (6°) | HD 102647 (2°) | average AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ADI | cADI | 0.288 ± 0.014 | 0.422 ± 0.007 | 0.489 ± 0.010 | 0.303 ± 0.014 | 0.343 ± 0.009 | 0.369 ± 0.005 |
| | PCA | 0.634 ± 0.010 | 0.643 ± 0.011 | 0.505 ± 0.009 | 0.392 ± 0.011 | 0.218 ± 0.011 | 0.478 ± 0.005 |
| | PACO | 0.629 ± 0.006 | 0.669 ± 0.012 | 0.579 ± 0.015 | 0.517 ± 0.015 | 0.207 ± 0.012 | 0.520 ± 0.006 |
| | MODEL&CO | 0.653 ± 0.010 | 0.731 ± 0.011 | 0.646 ± 0.012 | 0.638 ± 0.014 | 0.554 ± 0.009 | 0.645 ± 0.005 |
| | Proposed | 0.673 ± 0.009 | 0.740 ± 0.010 | 0.661 ± 0.012 | 0.631 ± 0.013 | 0.518 ± 0.012 | 0.645 ± 0.005 |
| ASDI | cASDI | 0.448 ± 0.009 | 0.537 ± 0.013 | 0.451 ± 0.007 | 0.352 ± 0.017 | 0.294 ± 0.012 | 0.417 ± 0.006 |
| | PCA | 0.694 ± 0.013 | 0.696 ± 0.007 | 0.552 ± 0.011 | 0.398 ± 0.011 | 0.236 ± 0.009 | 0.515 ± 0.005 |
| | PACO | 0.698 ± 0.015 | 0.768 ± 0.008 | 0.710 ± 0.014 | 0.700 ± 0.009 | 0.589 ± 0.014 | 0.693 ± 0.005 |
| | Proposed | 0.731 ± 0.009 | 0.804 ± 0.013 | 0.747 ± 0.010 | 0.782 ± 0.008 | 0.744 ± 0.006 | 0.761 ± 0.004 |

Table C. Comparative detection scores (AUC) in A(S)DI modes. Dataset names (stars) and the parallactic amplitude $\Delta_{\phi}$ are reported on top. ASDI mode (which outperforms ADI) is not supported by MODEL&CO.

![](images/69d8f7c34fd5f28c9e1acd7fe62f5da47060b89326d21cd900bfb1c4c35ff3ba.jpg)
Figure D. Detection maps on observations of the HD 159911 star with synthetic exoplanets. The (calibrated) detection threshold is equivalent for all methods. The proposed approach detects one additional source compared to the second-best method (PACO ASDI).

End-to-end learnable model. We evaluate detection performance on 5 observations using synthetic exoplanet injections through the direct models (1)-(2). For each observation, 100 cubes are generated, totaling 1,000 injected exoplanets. This process is repeated 5 times with different random seeds, and AUC scores are reported in Table C. Examples of representative detection maps are shown in Fig. K. The proposed approach matches or outperforms state-of-the-art algorithms in ADI mode. In ASDI mode, it shows a significant boost in detection sensitivity, consistently surpassing comparative methods.

Validation on real data. We compare detection methods using 3 observations, spanning several years, of the HR 8799 star. The stacked detection maps, highlighting the exoplanets' orbital motion, are shown in Fig. E.

![](images/156c0e4c44881a1a2d65f1160668b32d9506da1e6fbfcb08d2c940b3a803bc45.jpg)
Figure E. Detection maps on 3 observations (stacked in false RGB colors) of the HR 8799 star. The elliptical arcs depict the estimated (projected) orbits of three known exoplanets, with the detection results shown as red, green, and blue dots for the corresponding 2016, 2018, and 2021 observations. Squares mark false alarms identified in [5], Fig. 17.
All detection maps use a consistent unit (signal-to-noise score) and dynamic range. The proposed method maximizes detection confidence with no false alarms.

# 6. Conclusion

We propose a novel hybrid approach for exoplanet imaging that combines a multi-scale statistical model with deep learning, capturing spatial correlations in the nuisance component for simultaneous detection and flux estimation. This approach provides statistically grounded detection scores, unbiased estimates, and native uncertainty quantification. Tested on VLT/SPHERE data, it outperforms state-of-the-art techniques, demonstrating efficiency and robustness across varied data qualities. Its versatility and reduced computational complexity make it ideal for large-scale surveys. The approach will be extended to handle higher spectral resolution data. Additionally, the nuisance model can be adapted for reconstructing spatially extended objects such as circumstellar disks, the birthplaces of exoplanets.

# Acknowledgments

This work was supported by the French government under the management of the Agence Nationale de la Recherche as part of the "France 2030" program, PR[AI]RIE-PSAI project (reference ANR-23-IACL-0008) and MIAI 3IA Institute (reference ANR-19-P3IA-0003), and by the European Research Council (ERC) under grant agreement 101087696 (APHELEIA project). This work was also supported by the ERC under the European Union's Horizon 2020 research and innovation programme (COBREX; grant agreement 885593), the ANR under the France 2030 program (PEPR Origins, reference ANR-22-EXOR-0016), the French National Programs (PNP and PNPS), and the Action Spécifique Haute Résolution Angulaire (ASHRA) of CNRS/INSU co-funded by CNES. This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013643 made by GENCI.
JP was supported in part by the Louis Vuitton/ENS chair in artificial intelligence and a Global Distinguished Professorship at the Courant Institute of Mathematical Sciences and the Center for Data Science at New York University.

# References

[1] Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322, 2006. 2
[2] France Allard, Nicole F Allard, Derek Homeier, John Kielkopf, Mark J McCaughrean, and Fernand Spiegelman. K-H2 quasi-molecular absorption detected in the T-dwarf Indi Ba. *Astronomy & Astrophysics*, 474(2):L21-L24, 2007. 1
[3] Adam Amara and Sascha P Quanz. PynPoint: an image processing package for finding exoplanets. Monthly Notices of the Royal Astronomical Society, 427(2):948-955, 2012. 2
[4] Jean-Luc Beuzit, Arthur Vigan, David Mouillet, Kjetil Dohlen, Raffaele Gratton, Anthony Boccaletti, J-F Sauvage, Hans Martin Schmid, Maud Langlois, Cyril Petit, et al. SPHERE: the exoplanet imager for the Very Large Telescope. *Astronomy & Astrophysics*, 631:A155, 2019. 2, 6
[5] Théo Bodrito, Olivier Flasseur, Julien Mairal, Jean Ponce, Maud Langlois, and Anne-Marie Lagrange. MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations. Monthly Notices of the Royal Astronomical Society, 534(2):1569-1596, 2024. 3, 6, 7, 8, 2
[6] F Cantalloube, D Mouillet, LM Mugnier, J Milli, Olivier Absil, CA Gomez Gonzalez, G Chauvin, J-L Beuzit, and A Cornia. Direct exoplanet detection and characterization using the ANDROMEDA method: Performance on VLT/NACO data. *Astronomy & Astrophysics*, 582:A89, 2015. 2
[7] Faustine Cantalloube, Carlos Gomez-Gonzalez, Olivier Absil, Carles Cantero, Regis Bacher, MJ Bonse, Michael Bottom, C-H Dahlqvist, Célia Desgrange, Olivier Flasseur, et al. Exoplanet imaging data challenge: benchmarking the various image processing methods for exoplanet detection.
In Adaptive Optics Systems VII, pages 1027-1062. SPIE, 2020. 3, 7
[8] Carles Cantero, Olivier Absil, C-H Dahlqvist, and Marc Van Droogenbroeck. NA-SODINN: A deep learning algorithm for exoplanet image detection based on residual noise regimes. *Astronomy & Astrophysics*, 680:A86, 2023. 3, 7
[9] Gilles Chabrier, Isabelle Baraffe, France Allard, and P Hauschild. Evolutionary models for very low-mass stars and brown dwarfs with dusty atmospheres. The Astrophysical Journal, 542(1):464, 2000. 1
[10] Gaël Chauvin. Direct imaging of exoplanets at the era of the extremely large telescopes. arXiv preprint arXiv:1810.02031, 2018. 1
[11] Pattana Chintarungruangchai, Guey Jiang, Jun Hashimoto, Yu Komatsu, and Mihoko Konishi. A possible converter to denoise the images of exoplanet candidates through machine learning techniques. New Astronomy, 100:101997, 2023. 3
[12] A Chomez, A-M Lagrange, P Delorme, M Langlois, G Chauvin, O Flasseur, J Dallant, F Philipot, S Bergeon, D Albert, et al. Preparation for an unsupervised massive analysis of SPHERE high-contrast data with PACO: optimization and benchmarking on 24 solar-type stars. *Astronomy & Astrophysics*, 675:A205, 2023. 7, 2
[13] Valentin Christiaens, Carlos Gonzalez, Ralf Farkas, Carl-Henrik Dahlqvist, Evert Nasedkin, Julien Milli, Olivier Absil, Henry Ngo, Carles Cantero, Alan Rainot, et al. VIP: A Python package for high-contrast imaging. Journal of Open Source Software, 8, 2023. 7
[14] Thayne Currie, Beth Biller, Anne-Marie Lagrange, Christian Marois, Olivier Guyon, Eric Nielsen, Mickael Bonnefoy, and Robert De Rosa. Direct imaging and spectroscopy of extrasolar planets. arXiv preprint arXiv:2205.05696, 2022. 1
[15] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080-2095, 2007. 2
[16] Hazan Daglayan, Simon Vary, Faustine Cantalloube, P-A Absil, and Olivier Absil.
Likelihood ratio map for direct exoplanet detection. In 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), pages 1-5. IEEE, 2022. 7 +[17] C-H Dahlqvist, Faustine Cantalloube, and Olivier Absil. Regime-switching model detection map for direct exoplanet detection in adi sequences. *Astronomy & Astrophysics*, 633: A95, 2020. 2, 7 +[18] C-H Dahlqvist, Faustine Cantalloube, and Olivier Absil. Auto-ism: An automated parameter-selection algorithm for the rsm map exoplanet detection algorithm. *Astronomy & Astrophysics*, 656:A54, 2021. +[19] C-H Dahlqvist, Gilles Louppe, and Olivier Absil. Improving the rsm map exoplanet detection algorithm-psf forward modelling and optimal selection of psf subtraction techniques. *Astronomy & Astrophysics*, 646:A49, 2021. 2 +[20] Richard Davies and Markus Kasper. Adaptive optics for astronomy. Annual Review of Astronomy and Astrophysics, 50: 305-351, 2012. 2 +[21] Ph Delorme, Nadege Meunier, D Albert, Eric Lagadec, H Le Coroller, R Galicher, D Mouillet, A Boccaletti, DINO Mesa, + +J-C Meunier, et al. The SPHERE Data Center: a reference for high contrast imaging processing. arXiv preprint arXiv:1712.06948, 2017. 6, 7 +[22] P Delorme, A Chomez, V Squicciarini, M Janson, O Flasseur, O Schib, R Gratton, AM Lagrange, M Langlois, L Mayer, et al. Giant planets population around b stars from the first part of the beast survey. arXiv preprint arXiv:2409.18793, 2024. 7 +[23] Rob Fergus, David W Hogg, Rebecca Oppenheimer, Douglas Brenner, and Laurent Pueyo. S4: A spatial-spectral model for speckle suppression. The Astrophysical Journal, 794(2):161, 2014. 3 +[24] Michael P Fitzgerald and James R Graham. Speckle statistics in adaptively corrected images. The Astrophysical Journal, 637(1):541, 2006. 2 +[25] Olivier Flasseur, Loic Denis, Éric Thiebaut, and Maud Langlois. Exoplanet detection in angular differential imaging by statistical learning of the nonstationary patch covariances-the paco algorithm. 
*Astronomy & Astrophysics*, 618:A138, 2018. 2, 3, 5, 7 +[26] Olivier Flasseur, Loic Denis, Éric Thiebaut, and Maud Langlois. Paco asdi: an algorithm for exoplanet detection and characterization in direct imaging with integral field spectrographs. *Astronomy & Astrophysics*, 637:A9, 2020. 2, 3, 7 +[27] Olivier Flasseur, Théo Bodrito, Julien Mairal, Jean Ponce, Maud Langlois, and Anne-Marie Lagrangev. Combining multi-spectral data with statistical and deep-learning models for improved exoplanet detection in direct imaging at high contrast. In 2023 31st European Signal Processing Conference (EUSIPCO), pages 1723-1727. IEEE, 2023. 3 +[28] Olivier Flasseur, Théo Bodrito, Julien Mairal, Jean Ponce, Maud Langlois, and Anne-Marie Lagrange. deep paco: Combining statistical models with deep learning for exoplanet detection and characterization in direct imaging at high contrast. Monthly Notices of the Royal Astronomical Society, 527(1):1534-1562, 2024. 3, 7 +[29] Olivier Flasseur, Eric Thiébaut, Loïc Denis, and Maud Langlois. Shrinkage mmse estimators of covariances beyond the zero-mean and stationary variance assumptions. In 2024 32nd European Signal Processing Conference (EUSIPCO), pages 2727-2731, 2024. 1 +[30] Katherine B Follette. An introduction to high contrast differential imaging of exoplanets and disks. *Publications of the Astronomical Society of the Pacific*, 135(1051):093001, 2023. 1 +[31] Timothy D. Gebhard, Markus J Bonse, Sascha P Quanz, and Bernhard Schölkopf. Half-sibling regression meets exoplanet imaging: Psf modeling and subtraction using a flexible, domain knowledge-driven, causal framework. *Astronomy & Astrophysics*, 666:A9, 2022. 3 +[32] Benjamin L Gerard and Christian Marois. Planet detection down to a few $\lambda / \mathrm{d}$ : an rsdi/tloci approach to psf subtraction. In Adaptive Optics Systems V, pages 1544-1556. SPIE, 2016. 
2 +[33] Carlos Alberto Gomez Gonzalez, Olivier Wertz, Olivier Absil, Valentin Christiaens, Denis Defrère, Dimitri Mawet, Julien Milli, Pierre-Antoine Absil, Marc Van Droogen- + +broeck, Faustine Cantalloube, et al. Vip: Vortex image processing package for high-contrast direct imaging. The Astronomical Journal, 154(1):7, 2017. 2, 7 +[34] CA Gomez Gonzalez, Olivier Absil, P-A Absil, Marc Van Droogenbroeck, Dimitri Mawet, and Jean Surdej. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging adi sequences-the lIsg algorithm. *Astronomy & Astrophysics*, 589:A54, 2016. 2, 7 +[35] CA Gomez Gonzalez, Olivier Absil, and Marc Van Droogenbroeck. Supervised detection of exoplanets in high-contrast imaging sequences. *Astronomy & Astrophysics*, 613:A71, 2018. 3, 7 +[36] SY Haffert, AJ Bohn, J de Boer, IAG Snellen, J Brinchmann, JH Girard, CU Keller, and R Bacon. Two accreting protoplanets around the young star PDS 70. Nature Astronomy, 3 (8):749-754, 2019. 1 +[37] M Keppler, M Benisty, A Müller, Th Henning, R Van Boekel, F Cantalloube, C Ginski, RG Van Holstein, A-L Maire, A Pohl, et al. Discovery of a planetary-mass companion within the gap of the transition disk around pds 70. *Astronomy & Astrophysics*, 617:A44, 2018. 1 +[38] A-M Lagrange, D Gratadour, G Chauvin, T Fusco, D Ehrenreich, D Mouillet, G Rouset, D Rouan, F Allard, É Gendron, et al. A probable giant planet imaged in the $\beta$ Pictoris disk: VLT/NaCo deep L'-band imaging. *Astronomy & Astrophysics*, 493(2):L21-L25, 2009. 2 +[39] Maud Langlois, R Gratton, A-M Lagrange, P Delorme, A Boccaletti, M Bonnefoy, A-L Maire, D Mesa, G Chauvin, S Desidera, et al. The sphere infrared survey for exoplanets (shine)-ii. observations, data reduction and analysis, detection performances, and initial results. *Astronomy & Astrophysics*, 651:A71, 2021. 7 +[40] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse coding. 
In Proceedings of the 26th annual international conference on machine learning, pages 689-696, 2009. 2 +[41] Christian Marois, David Lafreniere, René Doyon, Bruce Macintosh, and Daniel Nadeau. Angular differential imaging: a powerful high-contrast imaging technique. The Astrophysical Journal, 641(1):556, 2006. 2, 7 +[42] Christian Marois, Bruce Macintosh, Travis Barman, B Zuckerman, Inseok Song, Jennifer Patience, David Lafrenière, and René Doyon. Direct imaging of multiple planets orbiting the star HR 8799. Science, 322(5906):1348-1352, 2008. 7 +[43] Christian Marois, B Zuckerman, Quinn M Konopacky, Bruce Macintosh, and Travis Barman. Images of a fourth planet orbiting hr 8799. Nature, 468(7327):1080, 2010. 7 +[44] Christian Marois, Carlos Correia, Raphael Galicher, Patrick Ingraham, Bruce Macintosh, Thayne Currie, and Rob De Rosa. GPI PSF subtraction with TLOCI: the next evolution in exoplanet/disk high-contrast imaging. In SPIE Astronomical Instrumentation + Telescopes, page 91480U. International Society for Optics and Photonics, 2014. 2 +[45] D Mesa, R Gratton, P Kervella, M Bonavita, S Desidera, V D'Orazi, S Marino, A Zurlo, and E Rigliaco. Af lep b: The lowest-mass planet detected by coupling astrometric and direct imaging data. *Astronomy & Astrophysics*, 672:A93, 2023. 7 + +[46] J Milli, D Mouillet, A-M Lagrange, A Boccaletti, D Mawet, G Chauvin, and M Bonnefoy. Impact of angular differential imaging on circumstellar disk images. *Astronomy & Astrophysics*, 545:A111, 2012. 6 +[47] Alexey Pavlov, Ole Möller-Nilsson, Markus Feldt, Thomas Henning, Jean-Luc Beuzit, and David Mouillet. Sphere data reduction and handling system: overview, project status, and development. Advanced Software and Control for Astronomy II, 7019:1093-1104, 2008. 6 +[48] Marshall D Perrin, Anand Sivaramakrishnan, Russell B Makidon, Ben R Oppenheimer, and James R Graham. The structure of high strehl ratio point-spread functions. The Astrophysical Journal, 596(1):702, 2003. 
5 +[49] Sai Krishanth PM, Ewan S Douglas, Justin Hom, Ramya M Anche, John Debes, Isabel Rebollido, and Bin B Ren. Nmf-basedgpu accelerated coronography pipeline. In Techniques and Instrumentation for Detection of Exoplanets XI, pages 668-679. SPIE, 2023. 2 +[50] Laurent Pueyo. Direct imaging as a detection technique for exoplanets. Handbook of Exoplanets, pages 705-765, 2018. 2 +[51] Bin Ren, Laurent Pueyo, Guangtun Ben Zhu, John Debes, and Gaspard Duchêne. Non-negative matrix factorization: robust extraction of extended structures. The Astrophysical Journal, 852(2):104, 2018. 2 +[52] Bin Ren, Laurent Pueyo, Christine Chen, Élodie Choquet, John H Debes, Gaspard Duchéne, François Menard, and Marshall D Perrin. Using data imputation for signal separation in high-contrast imaging. The Astrophysical Journal, 892(2):74, 2020. 2 +[53] Erez N Ribak and Szymon Gladysz. Fainter and closer: finding planets by symmetry breaking. Optics Express, 16(20): 15553-15562, 2008. 5 +[54] Jean-Baptiste Ruffio, Bruce Macintosh, Jason J Wang, Laurent Pueyo, Eric L Nielsen, Robert J De Rosa, Ian Czekala, Mark S Marley, Pauline Arriaga, Vanessa P Bailey, et al. Improving and assessing planet sensitivity of the gpi exoplanet survey with a forward model matched filter. The Astrophysical Journal, 842(1):14, 2017. 2 +[55] Matthias Samland, J Bouwman, DW Hogg, W Brandner, T Henning, and Markus Janson. Trap: A temporal systematics model for improved direct detection of exoplanets at small angular separations. *Astronomy & Astrophysics*, 646:A24, 2021. 3 +[56] Aniket Sanghi, Jerry W Xuan, Jason J Wang, et al. Efficiently searching for close-in companions around young m dwarfs using a multiyear psf library. The Astronomical Journal, 168(5):215, 2024. 3 +[57] Nuno C Santos. Extra-solar planets: Detection methods and results. New Astronomy Reviews, 52(2-5):154-166, 2008. 1 +[58] Rémi Sousummer. Apodized pupil lyot coronagnographs for arbitrary telescope apertures. 
The Astrophysical Journal Letters, 618(2):L161, 2004. 2 +[59] Rémi Sousummer, André Ferrari, Claude Aime, and Laurent Jolissaint. Speckle noise and dynamic range in coronarographic images. The Astrophysical Journal, 669(1):642, 2007. 5 + +[60] Rémi Sousummer, Laurent Pueyo, and James Larkin. Detection and characterization of exoplanets and disks using projections on karhunen-loève eigenimages. The Astrophysical Journal Letters, 755(2):L28, 2012. 2, 7 +[61] William B Sparks and Holland C Ford. Imaging spectroscopy for extrasolar planet detection. The Astrophysical Journal, 578(1):543, 2002. 2 +[62] Tomas Stolker, Markus J Bonse, Sascha P Quanz, Adam Amara, Gabriele Cugno, Alexander J Bohn, and Anna Boehle. Pynpoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data. *Astronomy & Astrophysics*, 621:A59, 2019. 7 +[63] Éric Thiébaut, Loïc Denis, Laurent Mugnier, André Ferrari, David Mary, Maud Langlois, Faustine Cantalloube, and Nicholas Devaney. Fast and robust exo-planet detection in multi-spectral, multi-temporal data. In Adaptive Optics Systems V, pages 1534–1543. SPIE, 2016. 5 +[64] William Thompson and Christian Marois. Improved contrast in images of exoplanets using direct signal-to-noise ratio optimization. The Astronomical Journal, 161(5):236, 2021. 2 +[65] Wesley A Traub and Ben R Oppenheimer. Direct imaging of exoplanets. Exoplanets, pages 111-156, 2010. 1 +[66] Arthur Vigan, Claire Moutou, Maud Langlois, France Allard, Anthony Boccaletti, Marcel Carbillet, David Mouillet, and Isabelle Smith. Photometric characterization of exoplanets using angular and spectral differential imaging. Monthly Notices of the Royal Astronomical Society, 407(1):71-82, 2010. 1, 2 +[67] Kevin Wagner, Jordan Stone, Andrew Skemer, Steve Ertel, Ruobing Dong, Daniel Apai, Eckhart Spalding, Jarron Leisenring, Michael Sitko, Kaitlin Kratter, et al. Direct images and spectroscopy of a giant protoplanet driving spiral arms in mwc 758. 
Nature Astronomy, 7(10):1208-1217, 2023. 7 +[68] Zahed Wahhaj, Lucas A Cieza, Dimitri Mawet, Bin Yang, Hector Canovas, Jozua de Boer, Simon Casassus, François Menard, Matthias R Schreiber, Michael C Liu, et al. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI. *Astronomy & Astrophysics*, 581:A24, 2015. 2 +[69] Trevor N Wolf, Brandon A Jones, and Brendan P Bowler. Direct exoplanet detection using convolutional image reconstruction (construct): A new algorithm for post-processing high-contrast images. The Astronomical Journal, 167(3):92, 2024. 3 +[70] Kai Hou Yip, Nikolaos Nikolaou, Piero Coronica, Angelos Tsiaras, et al. Pushing the limits of exoplanet discovery via direct imaging with deep learning. In Machine Learning and Knowledge Discovery in Databases: European Conf., ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part III, pages 322–338. Springer, 2020. 3 +[71] Kai Yu, Yuanqing Lin, and John Lafferty. Learning image representations from the pixel level via hierarchical sparse coding. In CVPR 2011, pages 1713-1720. IEEE, 2011. 2 +[72] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In 2011 international conference on computer vision, pages 479-486. IEEE, 2011. 
2 \ No newline at end of file diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/images.zip b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7b258d88e592ba9aef32f45b3945c9d343ea2b55 --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92bd8f70ee2bbb6c87e7ae0652770c1190cd72d3fa364c84a9a1bd0c2eeb7296 +size 603535 diff --git a/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/layout.json b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..48ac74aaacad96ff07dcdac5313981fc30aca25c --- /dev/null +++ b/CVPR/2025/A New Statistical Model of Star Speckles for Learning to Detect and Characterize Exoplanets in Direct Imaging Observations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a522def966926cc40ef48397f2efecd5f52f7162e715ff3d3755d6baf2f630f +size 534909 diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_content_list.json b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ccdde3d0da6787e7640c32d64d343311f71bfaa2 --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging 
Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84c57969bc6db0baac02325107f28fdcfbe064cfc13e966302c87684d539b044 +size 83130 diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_model.json b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_model.json new file mode 100644 index 0000000000000000000000000000000000000000..67f1bf4967e2cd6d1a4999218d4b9a2a5db3d18c --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1633a0d51b96a02d52563bee06176695e18e4046f13db09abe7d5805fc59db60 +size 102575 diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_origin.pdf b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..67e40cd9f15a83fd1d1d2573c2092bc9a4189a5e --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/a0d4bfdc-a5ad-4533-a724-3bee0f7d0259_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fbd5140590958b5c9c1bab0ac5b48619165063e9873374a4547b06b999fc02e +size 3552441 diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/full.md b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a60412ba925afaa77493df7f384da6a64ae11285 --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/full.md @@ -0,0 +1,377 @@ +# A Physics-Informed Blur Learning Framework for Imaging 
Systems + +Liqun Chen $^{1}$ Yuxuan Li $^{1}$ Jun Dai $^{1}$ Jinwei Gu $^{2,3}$ Tianfan Xue $^{3,1*}$ + +$^{1}$ Shanghai AI Laboratory $^{2}$ NVIDIA $^{3}$ The Chinese University of Hong Kong + +{chenliqun, liyuxuan, daijun, xuetianfan}@pjlab.org.cn, jinweig@nvidia.com + +https://openimaginglab.github.io/PSF-Estimation/ + +![](images/9fdc27c99f661e5dc70962b1bb2e6709a92a3bb3b2b2a9c3c5178176bb985353.jpg) +PSF (Degradation Transfer) + +![](images/689676ee5c1b1d7cc1c04671266cc0ab07404e40865e532812ba12747344a635.jpg) + +![](images/535dd717f26de22cfd08b807f65a17a38cfb123366815a984c94f098380540ab.jpg) +PSF (Ours) +PSF (Ground-truth) + +![](images/d18a421ff6fa592e3fbc41c6cb170c325a3ef467d76749b75588b4e1d88aa48a.jpg) +Input: blurry image +Figure 1. We introduce a point spread function (PSF) estimation framework and demonstrate its effectiveness in deblurring. From left to right: PSF estimated via Degradation Transfer [6] (state-of-the-art), PSF estimated by our method, and the ground-truth PSF of lens #63762 from Edmund; a blurry input image synthesized using an image from the FiveK dataset [5] and the ground-truth PSF; the patches from the blurry input image; the patches deblurred by pre-trained Restormers [46] using training data generated from PSFs obtained through our method and Degradation Transfer [6], respectively; and the corresponding ground truth patches. Our approach outperforms existing state-of-the-art methods in both PSF estimation accuracy and deblurring quality. 
+ +![](images/9b940c4787a26bbe18b9969a5ae7133aa16c7dbf0d56abddec3085a5a8fe833a.jpg) + +![](images/7153f2c59f4157143e113cd1f38d669e4a6d537631dae529568bc618b4e2005f.jpg) +Blurry + +![](images/86f4f1201a9590e8bbabaddaa570c0b3e0a31851beec508d686edcd0e265a488.jpg) + +![](images/94f73239bcb3090f4f6f6fa2290f9d35a3cdbed7ab68a7b2e320e35d6144e406.jpg) + +![](images/3a2e7e338a8b77e484fab3f096a9891c0f88634563e7277702040d119acf31b7.jpg) +Output (Degradation Transfer) +Output (Ours) + +![](images/5427da7e28f8c4b2df455bd0052a313fd1587738b86949c955862b65316.jpg) + +![](images/fcdf2ad5a122c19a957915f6d23e977889394cc0a04d06fb4fabba1adb0dadc9.jpg) +Ground-truth + +![](images/930c6771129e3cefc7761213b943f87166a2251528a4d0cdc0e5d19a9284bd40.jpg) + +# Abstract + +Accurate blur estimation is essential for high-performance imaging in various applications. Blur is typically represented by the point spread function (PSF). In this paper, we propose a physics-informed PSF learning framework for imaging systems, consisting of a simple calibration followed by a learning process. Our framework achieves both high accuracy and universal applicability. Inspired by the Seidel PSF model for representing spatially varying PSF, we identify its limitations in optimization and introduce a novel wavefront-based PSF model accompanied by an optimization strategy, both reducing optimization complexity and improving estimation accuracy. Moreover, our wavefront-based PSF model is independent of lens parameters, eliminating the need for prior knowledge of the lens. To validate our approach, we compare it with recent PSF estimation methods (Degradation Transfer and Fast Two-step) through a deblurring task, where all the estimated PSFs are used to train state-of-the-art deblurring algorithms. Our approach demonstrates improvements in image quality in simulation and also showcases noticeable visual quality improvements on real captured images. The code and models are public. + +# 1. Introduction + +Imaging systems drive broad applications across numerous fields. However, their practical performance is inherently constrained by spatially nonuniform aberrations. Accurately characterizing these aberrations is crucial for achieving high performance in digital photography [9, 13, 44, 46, 47], industrial inspection [41], autonomous driving [38], astronomical observation [16, 19], and microscopy [31, 48]. + +The point spread function (PSF) serves as a mathematical representation of blur. Although accurately modeling the PSF in imaging systems offers significant benefits, achieving both high accuracy and broad applicability remains a major challenge. Despite the numerous methods proposed for PSF estimation [6, 8, 11, 18, 21, 23, 24, 26, 29, 31, 36], accurately modeling the PSF requires a detailed characterization of the imaging system, which often involves simulating complex compound lenses [7, 8, 36, 49] based on lens design files. Additionally, many of these models are tailored to specific imaging systems [8, 49], limiting their generalization to other setups. This raises an important question: Is it possible to achieve universal and accurate PSF estimation through a simple calibration, similar to how camera noise calibration is handled in the industry? + +In this work, we propose a physics-informed PSF learning framework for imaging systems, which consists of a simple calibration step followed by a learning process. Our approach is designed to provide both broad applicability and high accuracy. + +We propose a novel wavefront-based PSF model that effectively represents the PSF of imaging systems without prior knowledge of lens parameters, making it applicable to a wide range of imaging systems. In addition, we design a learning scheme that targets the measurement of the spatial frequency response (SFR) in the image plane.
To improve estimation accuracy, we structure the basis of our PSF model so that each basis influences only a single SFR direction, allowing for a more accurate fit to various SFR measurements. Using curriculum learning [2], we progressively learn the PSF from center to edge. Our learning scheme accelerates convergence with lower loss, resulting in high accuracy. + +Our PSF estimation framework achieves superior accuracy, outperforming existing methods, as demonstrated in Fig. 1. To validate our approach, we compare it with recent PSF estimation methods (Degradation Transfer [6] and Fast Two-step [11]) through a deblurring task, where all estimated PSFs are used to train state-of-the-art deblurring algorithms. Quantitative comparisons on the Flickr2K dataset [25] show significant improvements in image quality, as shown in Tab. 3. Furthermore, the deblurred results on the real captured images exhibit noticeable improvements in visual quality, as shown in Fig. 6. + +# 2. Related Work + +**PSF of Imaging Systems** The PSF of an imaging system is multi-dimensional and complex, arising and accumulating throughout the imaging process. + +A typical color imaging pipeline comprises three key components: a lens, a sensor with a color filter array, and an image signal processor (ISP) [4, 10, 32]. Each component significantly influences the PSF of the system. + +The lens degrades image quality, and manufacturing imperfections can exacerbate this degradation. Optical aberrations such as spherical, coma, astigmatism, and field curvature [17, 42] cause blurring, with chromatic aberrations leading to color misalignment and fringing [33]. Together, these aberrations result in spatially variant degradation that differs across color channels. + +The color filter array employs pixel-level filters to capture color, inevitably causing information loss during raw image capture.
The ISP processes this raw data into a final image through operations such as gain adjustment, demosaicing, color correction, white balance, gamma correction, and tone mapping. These nonlinear processes further complicate the characterization of image degradation. + +In this work, we treat the PSF of an imaging system as an integrated whole and estimate it directly from the final captured image. + +**Models for PSF** PSF modeling approaches fall into three categories [26]: non-parametric, parametric, and optical simulation-based methods. + +Non-parametric models represent blur as a 2D distribution, disregarding interpixel relationships within a field and connections across fields. These models sparsely sample spatially variant PSF across the field of view. Consequently, their sparse and independent nature limits their ability to capture the high-dimensional characteristics of PSF within imaging systems. + +Parametric models, such as heteroscedastic Gaussian [9, 11] and Efficient Filter Flow [34], use a limited set of parameters, which can oversimplify the PSF. More advanced methods, including Zernike polynomials [30] and Seidel aberrations [50], incorporate wavefront aberrations and diffraction effects. These models establish field-dependent relationships [15, 50], allowing dense PSF estimation with minimal measurements. However, the complexity of Zernike polynomials may hinder practical use, whereas Seidel aberrations offer a simpler parameterization for system aberrations. + +Optical simulation models rely on detailed lens design parameters to generate PSF through ray-tracing or diffraction propagation [1] under various configurations. However, acquiring accurate lens parameters can be challenging due to intellectual property restrictions. + +**PSF Estimation** Many techniques have been developed to estimate the PSF [6, 8, 11, 18, 21, 23, 24, 26, 29, 31, 36].
Accurately estimating the PSF in real-world imaging systems often requires real captures due to factors such as manufacturing errors, variability in assembly, and changes in system performance over time. Among these techniques, there is a significant focus on learning-based methods that utilize real captures. + +Among these learning-based methods, one category employs non-parametric PSF models, such as applying a degradation framework to learn the PSF with optical geometric priors [6]. However, this approach lacks guarantees of smooth transitions both within the PSF and across the field of view. To address these smoothness issues, a recent method uses a multi-layer perceptron to provide a continuous PSF representation [26]. The primary challenge of this approach lies in the complex alignment needed between blurred and sharp patterns, involving procedures such as homography-based perspective projection, lens distortion correction, and radiometric compensation. + +Another category adopts a parametric PSF model, such as using a heteroscedastic Gaussian with parameters estimated from closed-form equations based on image gradients. However, this model can be overly restrictive, particularly for first-entry lenses where the blur may not conform to a Gaussian kernel [11]. + +In summary, employing an accurate parametric PSF model is critical for precise estimation. Furthermore, robust and simplified measurements are preferred for operational efficiency. + +# 3. Proposed Method + +Our work aims to present a practical approach for learning the PSF of an imaging system. We utilize spatial frequency response (SFR) measurement, a technique widely used in the industry. + +The PSF and SFR are interconnected: while the PSF reflects the system's capability to capture fine details, and thus its resolution, the SFR is a key metric for quantifying that resolution. The SFR can be derived from the PSF.
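As a rough, self-contained illustration of this PSF-to-SFR relationship (not the paper's exact mapping, which it defers to supplementary materials), a common simplification takes the SFR along a direction as a radial slice of the modulation transfer function (MTF), i.e. the normalized magnitude of the PSF's 2-D Fourier transform. The helper names below (`mtf_from_psf`, `sfr_slice`) and the toy anisotropic Gaussian PSF are assumptions of this sketch:

```python
import numpy as np

def mtf_from_psf(psf):
    # MTF: magnitude of the PSF's 2-D Fourier transform, normalized to 1 at DC.
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return mtf / mtf.max()

def sfr_slice(mtf, phi_deg):
    # Sample the MTF radially from DC at angle phi (degrees, clockwise from +Y).
    n = mtf.shape[0]
    c = n // 2                                # DC bin after fftshift
    r = np.arange(c)                          # radial frequency samples
    phi = np.deg2rad(phi_deg)
    rows = np.round(c - np.cos(phi) * r).astype(int)
    cols = np.round(c + np.sin(phi) * r).astype(int)
    return mtf[rows, cols]

# Toy PSF: an anisotropic Gaussian that blurs more along x than along y.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 / (2 * 4.0**2) + y**2 / (2 * 1.0**2)))
psf /= psf.sum()

mtf = mtf_from_psf(psf)
sfr_vertical = sfr_slice(mtf, 0.0)     # along +Y: mild blur, slow falloff
sfr_horizontal = sfr_slice(mtf, 90.0)  # along +X: strong blur, fast falloff
```

Stronger blur along an axis suppresses fine detail in that direction, so the horizontal SFR of this toy PSF falls off much faster than the vertical one.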
Because optical systems are rotationally symmetric, the PSF is symmetric as well (as shown in Fig. 2). This study therefore focuses on the PSF along the $+\mathrm{Y}$ axis, from which the SFR can be derived: + +$$ +\operatorname {S F R} (\mathrm {H}, \lambda , \phi) = h (\operatorname {P S F} (\mathrm {H}, \lambda), \phi), \tag {1} +$$ + +Here, $\lambda$ is the wavelength, $\mathrm{H}$ is the normalized field height (shown in Fig. 2), $\phi$ is the rotation angle from the $+\mathrm{Y}$ axis on the image plane, with positive values indicating clockwise rotation (shown in Fig. 2), and $h$ is the mapping function (see supplementary materials). For a given normalized field height $\mathrm{H}$ and wavelength $\lambda$ , the PSF is a 2D distribution, while the SFR relates to directional blur, with the direction specified by the rotation angle $\phi$ . + +# 3.1. Problem Formulation + +# 3.1.1. PSF Estimation by Optimization + +The PSF of an imaging system is multi-dimensional, and directly estimating it for different configurations is challenging. A parametric model, such as the Seidel PSF model, can simplify this process. + +To understand this model, we start with wavefront aberration, which represents the deviation of the real wavefront from the ideal shape at the exit pupil [14]. This deviation leads to defocus on the image plane, resulting in the PSF (see Fig. 2).
In incoherent imaging systems, the PSF is closely related to the wavefront aberration: + +$$ +\operatorname {P S F} (\mathrm {H}, \lambda) = \left| \mathcal {F} \left(A (\mathbf {p}) \exp \left(\frac {i 2 \pi W (\mathrm {H} , \lambda , \mathbf {p})}{\lambda}\right)\right) \right| ^ {2}, \tag {2} +$$ + +where $W(\mathrm{H},\lambda ,\mathbf{p})$ is the wavefront aberration, and $\mathbf{p}$ represents a point in polar coordinates on the pupil plane: + +$$ +\mathbf {p} = \left( \begin{array}{c} \rho \\ \theta \end{array} \right), \tag {3} +$$ + +with $\rho \in [0,1]$ as the radial coordinate and $\theta \in [0,2\pi ]$ as the angular coordinate. $A(\mathbf{p})$ is the aperture function, typically known. The wavefront aberration can be further decomposed into the Seidel basis [15]: + +$$ +W (\mathrm {H}, \lambda , \mathbf {p}) = \sum_ {k = 0} ^ {\infty} \sum_ {l = 0} ^ {\infty} \sum_ {m = 0} ^ {\infty} W _ {k l m} \mathrm {H} ^ {k} \rho^ {l} \cos^ {m} \left(\frac {\pi}{2} - \theta\right) \tag {4} +$$ + +where $k = 2p + m$ and $l = 2n + m$ ( $p, n, m \in \mathbb{N}$ ). This decomposition provides a set of Seidel coefficients $W_{klm}$ (typically only around the first 10 terms are retained), which theoretically represents a single-channel, spatially varying PSF of an imaging system. + +![](images/ba2e5b7c5b8e81d0cfad6706773d9894ab2706a7075e2a33f110fc083f412b3c.jpg) +Figure 2. Diagram of wavefront aberration and PSF. When light passes through an aberrated optical system, the real wavefront deviates from the ideal, causing defocus in the imaging plane. This deviation, varying with incidence angle and wavelength, creates a spatially varying, symmetric PSF. We focus on the PSF along the $+\mathrm{Y}$ axis, where normalized field height $\mathrm{H}$ and wavelength $\lambda$ define $\mathrm{PSF}(\mathrm{H},\lambda)$ . Other PSFs are generated by rotating $\mathrm{PSF}(\mathrm{H},\lambda)$ by angle $\phi$ from the $+\mathrm{Y}$ axis, with positive $\phi$ indicating clockwise rotation (yellow box). + +PSF estimation is then framed as learning a set of Seidel coefficients from observed SFR measurements. This learning is typically achieved through gradient descent optimization. The optimization process aims to adjust the Seidel coefficients to match SFR measurements across the entire image plane. + +# 3.1.2. Mitigating Gradient Conflicts in Optimization + +However, learning the PSF (or Seidel coefficients) is not trivial for several reasons. Certain Seidel bases, such as spherical aberration $(\rho^2)$ , simultaneously impact SFR curves across multiple directions, which causes coupling among these directions (Fig. 3) and hinders accurate fitting to diverse SFR data. Moreover, the inverse problem is inherently ill-posed, particularly due to the exclusion of the phase component in Eq. (2). The nonlinearity of the transformation in Eq. (2) further complicates the inversion process. Together, these factors create conflicting gradients during the optimization. + +Gradient conflicts are frequently discussed in multi-task learning, where the aim is to improve efficiency by sharing model structures across tasks. However, such conflicts can lead to poorer task performance compared to independent learning [27]. To address this, we build on existing methods for mitigating gradient conflicts and propose refined strategies. + +First, we propose a novel wavefront basis where each basis function influences only one direction of the SFR. The modified expression is: + +$$ +W (\mathrm {H}, \lambda , \mathbf {p}) = \sum_ {(p, q, r) \in \mathcal {Q}} W _ {p q r} (\mathrm {H}, \lambda) \rho^ {p} (\sin \theta) ^ {q} (\cos \theta) ^ {r}, \tag {5} +$$ + +where the set $\mathcal{Q}$ is defined as: + +$$ +\begin{array}{l} \mathcal {Q} = \{(2, 2, 0), (2, 0, 2), (3, 1, 0), (3, 3, 0), (4, 2, 0), \\ (4, 0, 2), (5, 1, 0), (6, 2, 0), (6, 0, 2) \}.
\tag{6} \\ \end{array} +$$ + +In our modified basis, each term includes either a $\cos \theta$ or a $\sin \theta$ component, ensuring that it influences the SFR independently along either the vertical or the horizontal axis. This helps mitigate gradient conflicts during optimization, as shown in Fig. 3. For further information about the new basis, please refer to the supplementary materials. + +Second, we optimize parameters to match the SFR within a narrower field of view. Instead of targeting the SFR across the entire field of view, this approach focuses on smaller, less variable SFR targets, facilitating easier convergence. Although this is a discrete representation, adjusting the optimization step enables control over the PSF output density, allowing for either dense or sparse representations as needed. + +![](images/255b95bf92dbd0e6baff598d202c9db94465c8dce288e0411d83c4283bb1655a.jpg) +Figure 3. An example demonstrating how the proposed wavefront basis mitigates gradient conflicts. Top: In the Seidel PSF model, the spherical aberration basis $\rho^2$ creates a circular PSF shape with $360^{\circ}$ of blur (orange arrow). This produces identical SFR in both the $0^{\circ}$ and $90^{\circ}$ directions. When attempting to optimize the coefficient $W_0$ to match real SFR measurements, which differ between $0^{\circ}$ and $90^{\circ}$ , gradient conflicts arise. Bottom: In our proposed wavefront basis, each basis affects the SFR in only one direction. This allows the model to independently adjust the coefficients $W_1$ and $W_2$ to better match the measured SFR without gradient conflict. + +Third, we learn the PSF progressively, optimizing from the center to the edge [35]. According to aberration theory, only spherical aberration impacts the center of the image plane, while coma and field curvature aberrations gradually appear toward the edges, creating a more complex PSF pattern.
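Taken together, Eqs. (2), (5), and (6) define a concrete forward map from basis coefficients to a PSF. A minimal numpy sketch of that map on a discrete pupil grid follows; the grid size, wavelength, and energy normalization are illustrative choices, not the paper's implementation:

```python
import numpy as np

# Discrete pupil grid in polar coordinates (size is illustrative).
N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
aperture = (rho <= 1.0).astype(float)  # A(p): clear circular pupil

# The index set Q of Eq. (6); one coefficient W_pqr per term.
Q = [(2, 2, 0), (2, 0, 2), (3, 1, 0), (3, 3, 0), (4, 2, 0),
     (4, 0, 2), (5, 1, 0), (6, 2, 0), (6, 0, 2)]

def wavefront(coeffs):
    """Eq. (5): W = sum of W_pqr * rho^p * sin^q(theta) * cos^r(theta)."""
    W = np.zeros_like(rho)
    for w, (p, q, r) in zip(coeffs, Q):
        W += w * rho ** p * np.sin(theta) ** q * np.cos(theta) ** r
    return W * aperture

def psf(coeffs, wavelength=0.55e-6):
    """Eq. (2): PSF = |F{A(p) exp(i 2 pi W / lambda)}|^2, energy-normalized."""
    field = aperture * np.exp(1j * 2.0 * np.pi * wavefront(coeffs) / wavelength)
    P = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return P / P.sum()

# With all coefficients zero, this reduces to the diffraction-limited
# PSF of the clear circular aperture.
p0 = psf(np.zeros(len(Q)))
```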
Following this progression, we apply curriculum learning [2] to gradually learn the PSF from center to edge. + +# 3.2. Implementation + +# 3.2.1. Image Capture + +Images are captured in a controlled environment [12], with a checkerboard test chart mounted on a holder and the imaging system positioned on a tripod at a fixed distance. To capture the SFR across the entire field of view in a single setup, the checkerboard is large enough to fill the image plane, and multiple consecutive images are taken. These images are recorded in raw format with the lowest gain, and the exposure time is adjusted to prevent overexposure while maximizing the grayscale range. The images are then averaged to reduce noise. Finally, the averaged raw image is converted to a linear RGB format (Fig. 4) for PSF estimation, minimizing the impact of subsequent nonlinear ISP operations that convert linear RGB to sRGB [3]. + +# 3.2.2. Two-stage PSF Estimation + +To leverage advanced optimizers and the flexibility of neural networks to enhance the optimization process, we integrate two multi-layer perceptrons (MLPs) into the physical transformations, allowing the MLPs to adjust their neurons to learn the target [39]. + +![](images/b63f0d7d53985036688de13fb9758522f60d9105e02aa1e79308db3657ab6d16.jpg) +Figure 4. Diagram of the proposed two-stage PSF estimation framework. The first stage learns the monochromatic aberration per normalized image height H: the network $\mathcal{G}_{\Theta 1}$ processes H and $\mathrm{H}^2$ to output coefficients, generates the wavefront aberration, and transforms it into the $\mathrm{PSF}^*$ , from which the modulation transfer function $\mathrm{MTF}^*$ and the spatial frequency response ( $\mathrm{SFR}^*$ ) curve are computed. Concurrently, a real SFR curve at the same H for one color channel is derived from a real capture; discrepancies between the two curves guide $\mathcal{G}_{\Theta 1}$ to faithfully represent the real aberration. The second stage learns the PSF shifts across channels: using H as input, $\mathcal{G}_{\Theta 2}$ calculates shifts, generates the shifted PSF, and produces chromatic areas $\mathrm{CA}^*$ through a physical process. Real chromatic-area data CA at the same H are obtained from captures, and the disparities between the two guide $\mathcal{G}_{\Theta 2}$ to output $\mathrm{CA}^*$ faithfully representing reality. Together, the two stages learn the spatially varying PSF of the whole imaging system. + +As shown in Fig. 4, our approach follows a two-stage learning strategy. First, we independently estimate the PSF for each color channel. Next, we learn the PSF shifts across channels by analyzing chromatic area differences. Separating the PSF estimation into two subproblems, namely the monochromatic PSF and the inter-channel PSF shift, simplifies the optimization process compared to a single-stage approach. Here, we refer to the MLPs paired with physical transformations as a surrogate model that represents the PSF of the imaging system. + +Monochromatic PSF Estimation The MLP $\mathcal{G}_{\Theta_1}$ takes the normalized field height H as input and outputs coefficients $W_{pqr}^{*}$ . These coefficients are then used to generate the SFR through the transformation: + +$$ +\mathrm{SFR}^{*}(\mathrm{H}, \phi) = h \left(g \left(\mathcal{G}_{\Theta_{1}}(\mathrm{H}), \mathrm{H}\right), \phi\right), \tag{7} +$$ + +where $g$ is the mapping function that generates the PSF as defined in Eqs. (2) and (5), and $h$ is the mapping function that outputs the SFR as in Eq. (1). The superscript * denotes the surrogate output, distinguishing it from the ground-truth value.
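The surrogate chain of Eq. (7) can be illustrated end to end. In this sketch, the tiny random MLP standing in for $\mathcal{G}_{\Theta_1}$, the Gaussian render standing in for $g$, and the MTF-slice reading of $h$ are all illustrative assumptions rather than the paper's components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random MLP standing in for G_Theta1: maps (H, H^2) to 9 coefficients.
W1 = rng.standard_normal((2, 16))
W2 = 0.1 * rng.standard_normal((16, 9))

def g_theta1(H):
    return np.tanh(np.array([H, H ** 2]) @ W1) @ W2

# Stand-in for g: a toy PSF whose blur width grows with the coefficient norm.
def render_psf(coeffs, n=33):
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    sigma = 1.0 + 5.0 * np.linalg.norm(coeffs)
    P = np.exp(-(X ** 2 + Y ** 2) / (2.0 * sigma ** 2))
    return P / P.sum()

# Stand-in for h: read the SFR as a 1-D slice of the MTF (|FFT| of the PSF).
def sfr(P, axis=0):
    M = np.abs(np.fft.fft2(np.fft.ifftshift(P)))
    M /= M[0, 0]  # normalize so the zero-frequency response is 1
    return M[0, :] if axis == 0 else M[:, 0]

curve = sfr(render_psf(g_theta1(0.5)))  # SFR* at field height H = 0.5
```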
The goal of the surrogate model is to optimize the network parameters to closely match the SFR measurements: + +$$ +\Theta_{1}^{*}(\mathrm{H}) = \underset{\Theta_{1}}{\arg \min} \sum_{\mathrm{H}} \sum_{\phi = 0}^{2 \pi} | \mathrm{SFR}^{*}(\mathrm{H}, \phi) - \mathrm{SFR}(\mathrm{H}, \phi) |, \tag{8} +$$ + +here, in each optimization step, $\mathrm{H}$ is restricted to a smaller region defined by a narrow field-of-view interval $\Delta \mathrm{H}$ ( $\Delta \mathrm{H} \in (0.03, 0.1)$ ). The value of $\mathrm{H}$ is gradually increased from 0 to 1 to learn the PSF across the entire image plane. + +Cross-Channel PSF Shift Estimation In addition to monochromatic aberrations, it is crucial to consider PSF shifts across different color channels, as these shifts can result in color misalignment and fringing, known as chromatic aberration. Building upon previous work [28], we define the chromatic aberration area (CA) as the region enclosed by the edge gradient line of a blurred black-and-white edge image and the horizontal axis (in pixels). To quantify chromatic aberration, we define the chromatic area difference as: + +$$ +\Delta \mathrm{CA}(\mathrm{H}, \lambda, \phi) = \mathrm{CA}(\mathrm{H}, \lambda, \phi) - \mathrm{CA}(\mathrm{H}, \lambda_{\mathrm{G}}, \phi), \tag{9} +$$ + +where CA is the chromatic aberration area, $\Delta \mathrm{CA}$ is the chromatic aberration area difference, $\lambda = \{\lambda_{\mathrm{R}},\lambda_{\mathrm{B}}\}$ , with the green channel $\lambda_{\mathrm{G}}$ serving as the reference. + +In practical image capture, chromatic aberration area differences $\Delta \mathrm{CA}$ can be directly inferred from the fringe patterns observed in a captured checkerboard image.
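One way Eq. (9) could behave on synthetic data: treating CA as the area enclosed by the edge profile and the horizontal axis over a fixed window (a plausible reading of the definition above; the paper's exact measurement procedure is in its supplement), a lateral shift of the red edge changes the enclosed area relative to the green reference:

```python
import numpy as np

def edge_profile(shift, blur=2.0, n=64):
    """Synthetic blurred 0->1 edge: an illustrative stand-in for a measured
    edge profile; `shift` models a lateral chromatic displacement."""
    x = np.arange(n) - n / 2.0
    return 0.5 * (1.0 + np.tanh((x - shift) / blur))

def ca(profile):
    """Illustrative CA area: area under the edge profile over a fixed
    window (in pixels)."""
    return float(profile.sum())

# Eq. (9): chromatic area difference against the green reference channel.
green = edge_profile(shift=0.0)
red = edge_profile(shift=0.8)    # red edge displaced toward the right
delta_ca = ca(red) - ca(green)   # approx. -0.8: the area shrinks with the shift
```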
However, in a simulated surrogate model, this aberration is influenced by both the 2D distribution of the PSF and the rotation angle $\phi$ , expressed as: + +$$ +\mathrm {C A} ^ {*} (\mathrm {H}, \lambda , \phi) = \mathcal {L} \left(\mathrm {P S F} _ {\mathrm {S}} ^ {*} (\mathrm {H}, \lambda , \mathbf {x}), \phi\right), \tag {10} +$$ + +where $\mathcal{L}$ is a mapping function (see supplementary materials), and $\mathrm{PSF}_{\mathrm{S}}^{*}$ refers to the shifted $\mathrm{PSF}^*$ , which is derived from the monochromatic PSF estimation. To estimate these shifts, we introduce a second MLP $\mathcal{G}_{\Theta_2}$ , which takes the $\mathrm{PSF}^*$ learned by $\mathcal{G}_{\Theta_1}$ , the wavelength $\lambda$ , and the normalized field height $\mathrm{H}$ as input. It outputs the PSF shifts, which are applied to the PSF as follows: + +$$ +\mathrm {P S F} _ {\mathrm {S}} ^ {*} (\mathrm {H}, \lambda , \mathbf {x}) = \mathrm {T} \left(\mathcal {G} _ {\Theta_ {2}} (\mathrm {H}, \lambda), \mathrm {P S F} ^ {*} (\mathrm {H}, \lambda , \mathbf {x})\right), \tag {11} +$$ + +
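The shift operation $\mathrm{T}$ in Eq. (11) can be illustrated with integer pixel shifts (a sketch only; real chromatic shifts are subpixel and would need interpolation or a Fourier-domain phase ramp):

```python
import numpy as np

def shift_psf(psf, dy, dx):
    """T of Eq. (11), illustrated for integer (dy, dx) pixel shifts."""
    return np.roll(np.roll(psf, dy, axis=0), dx, axis=1)

# Displace a delta-like red-channel PSF by one pixel along +y.
psf_r = np.zeros((5, 5))
psf_r[2, 2] = 1.0
shifted = shift_psf(psf_r, 1, 0)  # peak moves from (2, 2) to (3, 2)
```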
| Configuration | Metric | #63762 Proposed | #63762 Degradation Transfer [6] | #63762 Fast Two-step [11] | #89752 Proposed | #89752 Degradation Transfer [6] | #89752 Fast Two-step [11] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| H=0 w/o noise | PSNR ↑ | 41.981 ± 1.132 | 40.822 ± 1.016 | 42.240 | 42.073 ± 1.058 | 42.128 ± 1.174 | 43.511 |
| | SSIM ↑ | 0.937 ± 0.066 | 0.939 ± 0.071 | 0.943 | 0.945 ± 0.069 | 0.926 ± 0.073 | 0.947 |
| H=0.7 w/o noise | PSNR ↑ | 49.185 ± 1.242 | 46.521 ± 1.347 | 43.741 | 47.811 ± 1.115 | 45.896 ± 1.012 | 44.171 |
| | SSIM ↑ | 0.967 ± 0.061 | 0.959 ± 0.079 | 0.933 | 0.959 ± 0.067 | 0.951 ± 0.078 | 0.942 |
| H=1 w/o noise | PSNR ↑ | 50.156 ± 1.606 | 44.920 ± 1.592 | 44.993 | 50.624 ± 1.537 | 43.872 ± 1.516 | 44.801 |
| | SSIM ↑ | 0.983 ± 0.064 | 0.966 ± 0.088 | 0.933 | 0.979 ± 0.076 | 0.959 ± 0.081 | 0.938 |
| H=0 w/ 1% noise | PSNR ↑ | 42.075 ± 1.102 | 40.629 ± 1.116 | 41.822 | 42.970 ± 0.792 | 41.053 ± 1.044 | 41.790 |
| | SSIM ↑ | 0.949 ± 0.065 | 0.925 ± 0.085 | 0.937 | 0.951 ± 0.077 | 0.947 ± 0.080 | 0.941 |
| H=0.7 w/ 1% noise | PSNR ↑ | 47.467 ± 1.579 | 45.981 ± 1.483 | 44.286 | 46.284 ± 1.181 | 44.907 ± 1.177 | 43.812 |
| | SSIM ↑ | 0.960 ± 0.071 | 0.926 ± 0.083 | 0.950 | 0.958 ± 0.081 | 0.930 ± 0.087 | 0.938 |
| H=1 w/ 1% noise | PSNR ↑ | 49.151 ± 1.622 | 43.981 ± 1.629 | 44.554 | 49.803 ± 1.643 | 43.522 ± 1.752 | 43.604 |
| | SSIM ↑ | 0.987 ± 0.073 | 0.936 ± 0.086 | 0.931 | 0.969 ± 0.075 | 0.942 ± 0.082 | 0.933 |
+ +Table 1. Evaluation of PSF accuracy using synthetic checkerboard patterns under different configurations, including variations in relative image height (H) and the presence or absence of noise. For a fair comparison, all PSFs have been normalized so that the sum of each channel equals one. In most configurations, the proposed method outperforms the Degradation Transfer [6] and Fast Two-step [11] methods in terms of PSNR and SSIM. + +where $\mathrm{T}$ denotes the shift operation, as shown in Fig. 4. $\lambda = \{\lambda_{\mathrm{R}}, \lambda_{\mathrm{B}}\}$ , with only the PSFs of the red and blue channels being shifted. + +The goal is to estimate the PSF shifts between channels to match the chromatic area differences in the measurements. To achieve this, the surrogate model $\mathcal{G}_{\Theta_2}$ is trained to minimize the difference between the predicted and observed chromatic area differences: + +$$ +\Theta_{2}^{*}(\mathrm{H}, \lambda) = \underset{\Theta_{2}}{\arg \min} \sum_{\mathrm{H}} \sum_{\phi = 0}^{2 \pi} | \Delta \mathrm{CA}^{*}(\mathrm{H}, \lambda, \phi) - \Delta \mathrm{CA}(\mathrm{H}, \lambda, \phi) |, \tag{12} +$$ + +where, in each optimization step, $\mathrm{H}$ is restricted to a smaller region and gradually increased from 0 to 1 to learn the PSF shift across the entire image plane, following the same steps and intervals described in Eq. (8). + +Since chromatic area differences arise from both monochromatic aberrations and PSF shifts across channels, we employ a two-stage learning process: the PSF shifts are estimated only after the monochromatic aberrations have been addressed, ensuring a more accurate optimization. + +# 4. Experimental Results + +We evaluate the proposed method from two perspectives: the accuracy of the estimated PSF in simulations and the deblurring performance in both simulated and real-world scenarios. + +# 4.1.
Dataset, Algorithms and Metrics + +To evaluate deblurring performance, we select three state-of-the-art deep learning-based algorithms: MPRNet [45], Restormer [46], and FFTFormer [22]. During the training stage, we use 500 images from the Flickr2K dataset [25] to ensure broad applicability across various natural scenes, with the blurred images synthesized using the estimated PSF. During the testing stage, we reserve 100 images from the same dataset, with the blurred images synthesized using the ground-truth PSF. + +We employ two metric sets to assess performance in simulation and real capture, respectively: + +- Full-reference metrics for evaluation in simulation: we use PSNR and SSIM [40] to measure the difference between the output and the ground truth. +- No-reference metrics for real-capture evaluation: we employ MUSIQ [20] and MANIQA [43] to assess the visual quality of the reconstructed images. + +# 4.2. Experiments on Simulation + +We evaluate both the accuracy of the estimated PSF and the deblurring performance by simulation. In this setup, the imaging system uses an IDS camera equipped with an onsemi AR1820HS sensor. The imaging lenses are sourced from Edmund (#63762 or #89752), and the simulated PSFs, generated by Zemax $^{®}$ , serve as the ground truth. + +To evaluate the accuracy of the estimated PSF, we simulate degraded checkerboard patterns by convolving the ground-truth PSF with ideal patterns, and then estimate the PSF from these degraded patterns. The accuracy of the estimated PSF is compared to the ground-truth PSF using the PSNR and SSIM metrics. To further assess the robustness of the approach, noise is added to the degraded patterns. For comparison, the following two methods are selected: + +- Degradation Transfer [6]: A deep linear model incorporating optical geometric priors. +- Fast Two-step [11]: An empirical affine model that processes image gradients.
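The PSF-accuracy protocol (normalize each PSF channel to unit sum, then compare with PSNR, as in Table 1) can be sketched as follows; the peak convention in the PSNR and the random stand-in PSFs are assumptions for illustration:

```python
import numpy as np

def normalize_channels(psf):
    """Normalize so that each channel of the PSF sums to one."""
    return psf / psf.sum(axis=(0, 1), keepdims=True)

def psnr(est, ref):
    """PSNR with the peak taken as the reference maximum (a common convention)."""
    mse = np.mean((est - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Stand-in "ground-truth" and "estimated" PSFs for demonstration.
rng = np.random.default_rng(0)
gt = normalize_channels(rng.random((32, 32, 3)))
noisy = normalize_channels(gt + 1e-5 * rng.standard_normal(gt.shape))
score = psnr(noisy, gt)
```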
+ +An ablation study is then conducted to evaluate the contribution of each component to the overall method. + +![](images/e60a9dc59a02051454d82b12528e081b8ad334a8fb6e1b0680dbe2db43156196.jpg) +Figure 5. Estimated PSFs and ground-truth: The PSFs are arranged from left to right by increasing normalized field height H. From top to bottom: the PSF estimates using Degradation Transfer [6], Fast Two-step [11], and our method, followed by the ground-truth PSF of lens #63762 from Edmund. + +In evaluating deblurring performance, we account for the ISP pipeline within the camera. + +# 4.2.1. Accuracy of Estimated PSF + +As shown in Fig. 5, the imaging lens is #63762 from Edmund, and the estimated PSFs from the different methods are listed. Each PSF is channel-normalized for visualization. Our method is closest to the ground truth. + +In traditional lens design, designers typically focus on three normalized field heights: 0, 0.7, and 1 [37], as these provide a representative sampling of the image plane. Following this convention, we select these normalized field heights for quantitative comparison. We compare two scenarios: one without noise (ideal) and one with noise (realistic) when performing SFR measurements. A $1\%$ noise level is set for realism, as multiple consecutive checkerboard images can be captured and averaged to reduce noise. As shown in Tab. 1, as optimization-based methods, both our approach and Degradation Transfer [6] produce results that vary across runs, while the Fast Two-step method outputs a consistent result each time. In most configurations, our method outperforms the other approaches in both scenarios. + +# 4.2.2. Ablation Study + +To further evaluate the proposed method, we conduct an ablation study to quantify the impact of various factors on performance. Tab.
2 presents a comparison of the proposed method with three alternative configurations: (1) without optimization within a narrow field of view, i.e., without small interval optimization; (2) without the proposed wavefront basis in Eq. (5), using the Seidel basis instead; and (3) without optimization from center to edge based on curriculum learning, i.e., without curriculum learning. + +Table 2. Quantitative assessment (PSNR/SSIM) of each component in the proposed method using the imaging system without noise (Edmund Lens #63762 and onsemi AR1820HS sensor). + +
| Configuration | H = 0 | H = 0.7 | H = 1 |
| --- | --- | --- | --- |
| w/o small interval optimization | 42.514 / 0.934 | 42.682 / 0.937 | 41.064 / 0.922 |
| w/o proposed wavefront basis | 42.180 / 0.931 | 47.479 / 0.950 | 44.579 / 0.954 |
| w/o curriculum learning | 41.562 / 0.937 | 48.580 / 0.957 | 46.023 / 0.955 |
| Proposed | 42.643 / 0.940 | 49.079 / 0.968 | 49.252 / 0.981 |
+ +From these comparisons, we conclude that optimization within a narrow field of view, the proposed wavefront basis, and the curriculum learning strategy all significantly enhance the estimation accuracy of the spatially variant PSF, particularly for larger fields of view. These design choices are essential for achieving precise results across the entire field of view. + +# 4.2.3. Deblurring Results + +Unlike in Sec. 4.2.1, it is crucial to account for the camera pipeline when evaluating deblurring results. To minimize the impact of non-linear operations in the ISP, we assume PSF-induced blur occurs in the linear RGB image. + +Thus, we estimate the PSF from a linear RGB checkerboard image. Specifically, a clear checkerboard image is convolved with the ground-truth PSF, followed by mosaicing and demosaicing, to obtain a linear RGB checkerboard image from which we estimate the PSF. We evaluate both noise-free and noisy scenarios during SFR measurement; in the noisy scenarios, $1\%$ noise is added to the blurry checkerboard image before the PSF is estimated. + +The estimated PSF is subsequently used to recover images. We evaluate deblurring performance using deblurring networks [22, 45, 46]. During the training stage, we convert clear images from the Flickr2K dataset [25] to linear RGB images through unprocessing [3]. These images are then convolved with the estimated PSF, followed by color correction and gamma correction to produce blurred images. Both blurred and clear images are fed into the networks for training. In the testing stage, input blurry images are generated using the same process but with the ground-truth PSF. + +As shown in Tab. 3, our approach consistently outperforms the others in both noise-free and noisy scenarios. + +Table 3. Quantitative evaluations (PSNR/SSIM) using the imaging system (Edmund Lens #63762 and onsemi AR1820HS sensor).
| PSF estimation method | MPRNet | Restormer | FFTFormer |
| --- | --- | --- | --- |
| Degradation Transfer [6] w/o noise | 30.546/0.873 | 30.691/0.871 | 30.534/0.872 |
| Fast Two-step [11] w/o noise | 30.217/0.870 | 30.340/0.869 | 30.335/0.868 |
| Ours w/o noise | 31.243/0.894 | 31.506/0.894 | 31.358/0.891 |
| Degradation Transfer [6] w/ noise | 30.326/0.860 | 30.417/0.861 | 30.308/0.860 |
| Fast Two-step [11] w/ noise | 30.097/0.865 | 30.144/0.863 | 30.127/0.862 |
| Ours w/ noise | 31.018/0.889 | 31.271/0.887 | 31.145/0.887 |
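The training-pair synthesis described in Sec. 4.2.3 (blur in linear RGB, then re-encode for display) can be sketched per channel with FFT convolution; the circular boundary handling, the omitted color-correction step, and the gamma value are simplifying assumptions:

```python
import numpy as np

def fft_convolve2d(img, kernel):
    """Per-channel circular convolution via FFT (boundary handling simplified)."""
    K = np.fft.fft2(np.fft.ifftshift(kernel), s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def synthesize_blurred(linear_rgb, psfs, gamma=2.2):
    """Blur each channel with its PSF in linear RGB, then gamma-encode."""
    blurred = np.stack([fft_convolve2d(linear_rgb[..., c], psfs[..., c])
                        for c in range(3)], axis=-1)
    return np.clip(blurred, 0.0, 1.0) ** (1.0 / gamma)

# Toy example: a 3x3 box PSF per channel applied to a random linear image.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
psfs = np.zeros((64, 64, 3))
psfs[31:34, 31:34, :] = 1.0 / 9.0   # centered box kernel per channel
blurry = synthesize_blurred(img, psfs)
```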
+ +![](images/9c3f7774a2734ff458a4bb967c081f75b98d11f61822ebe7426ee1e5a3f1f0df.jpg) +Figure 6. Performance comparison with state-of-the-art methods on real captures. From left to right: sharp output image deblurred by the pre-trained Restormer, using training data synthesized from our estimated PSF; real captured image patches from a custom-built imaging system (Edmund Lens: #63762 and onsemi AR1820HS sensor); deblurred image patches from pre-trained Restormers using data synthesized with estimated PSFs from Degradation Transfer [6], Fast Two-step [11], and our approach. MUSIQ $\uparrow$ / MANIQA $\uparrow$ scores are shown in the bottom-right corner. Notably, the scores are lower than expected, likely due to the evaluation being performed on small patches. + +# 4.3. Experiments on Real Captures + +We conduct experiments using real captures from the same device used in the simulations (Edmund Lens #63762 and IDS camera with onsemi AR1820HS sensor). + +We capture checkerboard images in the laboratory to estimate the PSF, followed by training Restormer [46] (as described in Sec. 4.2.3). The pre-trained Restormer is subsequently applied to deblur the captured images. For comparison, we follow the same procedure with two other PSF estimation methods [6, 11]. + +# 4.3.1. Experiment Setup + +We capture checkerboard images in the laboratory using a custom-built device comprising an Edmund Lens #63762 and an IDS camera (onsemi AR1820HS sensor). The camera is mounted on a tripod, aimed at a checkerboard secured on a card holder, with two angled LED surface lights positioned vertically to provide uniform illumination [12]. See Sec. 3.2.1 for further setup details. + +# 4.3.2. Recovery Comparison + +We estimate the PSF according to the process outlined in Fig. 4. These estimated PSFs are then applied to recover images, followed by an evaluation of the deblurring performance.
To reduce cumulative degradation in the ISP pipeline, we assume that convolution takes place in the linear RGB domain. Under this assumption, we estimate the PSF from linear RGB checkerboard images. To prepare images in the training stage, we first convolve the PSF with linear RGB images generated by the unprocessing method [3], then apply color correction, gamma correction, and tone mapping to generate blurry sRGB images. + +As shown in Fig. 6, a comparison of image patches demonstrates that our method effectively sharpens the image, outperforming the others in terms of MUSIQ and MANIQA scores (the higher the better). + +# 5. Conclusion and Discussion + +In this work, we propose a novel physics-informed blur learning framework for imaging systems, which significantly improves the accuracy of PSF estimation and achieves excellent deblurring performance in both simulation and real-world capture scenarios. Importantly, it operates independently of lens parameters, enabling seamless integration into the mass production of various imaging devices without requiring prior knowledge of lens characteristics. While we demonstrate its effectiveness in photography applications, this approach can also provide valuable insights for enhancing image quality in other imaging systems, such as microscopes, industrial inspection systems, and autonomous driving applications. + +Our work is only the first step towards PSF modeling and estimation for general imaging systems. Recovering a PSF is inherently ill-posed due to information loss during image formation. In our current work, chromatic aberrations in wide field-of-view images have not been fully corrected (see supplementary materials for examples of failure cases), which will be addressed in the future. + +# Acknowledgments + +This work was supported by the National Key R&D Program of China (No. 2022ZD0160201), Shanghai Artificial Intelligence Laboratory, RGC Early Career Scheme (ECS) No.
24209224 and CUHK Direct Grants (RCFUS) No. 4055189. + +# References + +[1] Bevan B Baker and Edward Thomas Copson. The mathematical theory of Huygens' principle. American Mathematical Soc., 2003. 2 +[2] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48, 2009. 2, 4 +[3] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11036-11045, 2019. 4, 7, 8 +[4] Michael S. Brown. Understanding color & the in-camera image processing pipeline for computer vision. ICCV 2019 Tutorial, 2019. Accessed: 2024-08-20. 2 +[5] Vladimir Bychkovsky, Sylvain Paris, Eric Chan, and Frédo Durand. Learning photographic global tonal adjustment with a database of input / output image pairs. In The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition, 2011. 1 +[6] Shiqi Chen, Huajun Feng, Keming Gao, Zhihai Xu, and Yueting Chen. Extreme-quality computational imaging via degradation framework. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2632-2641, 2021. 1, 2, 3, 6, 7, 8 +[7] Shiqi Chen, Huajun Feng, Dexin Pan, Zhihai Xu, Qi Li, and Yueting Chen. Optical aberrations correction in postprocessing using imaging simulation. ACM Transactions on Graphics (TOG), 40(5):1-15, 2021. 2 +[8] Shiqi Chen, Ting Lin, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Computational optics for mobile terminals in mass production. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4245-4259, 2022. 2, 3 +[9] Mauricio Delbracio, Ignacio Garcia-Dorado, SungJoon Choi, Damien Kelly, and Peyman Milanfar. Polyblur: Removing mild blur by polynomial reblurring. IEEE Transactions on Computational Imaging, 7:837-848, 2021.
2 +[10] Mauricio Delbracio, Damien Kelly, Michael S Brown, and Peyman Milanfar. Mobile computational photography: A tour. Annual review of vision science, 7(1):571-604, 2021. 2 +[11] Thomas Eboli, Jean-Michel Morel, and Gabriele Facciolo. Fast two-step blind optical aberration correction. In European Conference on Computer Vision, pages 693-708. Springer, 2022. 2, 3, 6, 7, 8 +[12] International Organization for Standardization (ISO). ISO 12233:2014 - photography - digital still cameras - resolution and spatial frequency response, 2014. 4, 8 +[13] Jin Gong, Runzhao Yang, Weihang Zhang, Jinli Suo, and Qionghai Dai. A physics-informed low-rank deep neural network for blind and universal lens aberration correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24861-24870, 2024. 2 +[14] Joseph W Goodman. Introduction to Fourier optics. Roberts and Company publishers, 2005. 3 + +[15] Robert W Gray, Christina Dunn, Kevin P Thompson, and Jannick P Rolland. An analytic expression for the field dependence of zernike polynomials in rotationally symmetric optical systems. Optics Express, 20(15):16436-16449, 2012. 2, 3 +[16] Yuduo Guo, Yuhan Hao, Sen Wan, Hao Zhang, Laiyu Zhu, Yi Zhang, Jiamin Wu, Qionghai Dai, and Lu Fang. Direct observation of atmospheric turbulence with a video-rate widefield wavefront sensor. Nature Photonics, pages 1-9, 2024. 2 +[17] Harold H Hopkins. Image formation by a general optical system. 1: General theory. Applied optics, 24(16):2491-2505, 1985. 2 +[18] Jurij Jemec, Franjo Pernuš, Boštjan Likar, and Miran Bürmen. 2d sub-pixel point spread function measurement using a virtual point-like source. International journal of computer vision, 121:391-402, 2017. 2, 3 +[19] Emin Karabal, P-A Duc, Harald Kuntschner, Pierre Chanial, J-C Cuillandre, and Stephen Gwyn. A deconvolution technique to correct deep images of galaxies from instrumental scattered light. Astronomy & Astrophysics, 601:A86, 2017.
2 +[20] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5148-5157, 2021. 6 +[21] Eric Kee, Sylvain Paris, Simon Chen, and Jue Wang. Modeling and removing spatially-varying optical blur. In 2011 IEEE international conference on computational photography (ICCP), pages 1-8. IEEE, 2011. 2, 3 +[22] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5886-5895, 2023. 6, 7 +[23] Jingyun Liang, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Mutual affine network for spatially variant kernel estimation in blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4096-4105, 2021. 2, 3 +[24] Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger, and Pierre-Antoine Frugier. Rethinking data-driven point spread function modeling with a differentiable optical model. Inverse Problems, 39(3):035008, 2023. 2, 3 +[25] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 136-144, 2017. 2, 6, 7 +[26] Esther YH Lin, Zhecheng Wang, Rebecca Lin, Daniel Miau, Florian Kainz, Jiawen Chen, Xuaner Cecilia Zhang, David B Lindell, and Kiriakos N Kutulakos. Learning lens blur fields. arXiv preprint arXiv:2310.11535, 2023. 2, 3 +[27] Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. Advances in Neural Information Processing Systems, 34:18878-18890, 2021. 4 + +[28] Alexis Lluis-Gomez and Eran A Edirisinghe.
Chromatic aberration correction in raw domain for image quality enhancement in image sensor processors. In 2012 IEEE 8th International Conference on Intelligent Computer Communication and Processing, pages 241-244. IEEE, 2012. 5 +[29] Ali Mosleh, Paul Green, Emmanuel Onzon, Isabelle Begin, and JM Pierre Langlois. Camera intrinsic blur kernel estimation: A reliable framework. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4961-4968, 2015. 2, 3 +[30] Kuo Niu and Chao Tian. Zernike polynomials and their applications. Journal of Optics, 24(12):123001, 2022. 2 +[31] Chang Qiao, Haoyu Chen, Run Wang, Tao Jiang, Yuwang Wang, and Dong Li. Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging. Photonics Research, 12(3):474-484, 2024. 2, 3 +[32] Rajeev Ramanath, Wesley E Snyder, Youngjun Yoo, and Mark S Drew. Color image processing pipeline. IEEE Signal processing magazine, 22(1):34-43, 2005. 2 +[33] José Sasián. Introduction to aberrations in optical imaging systems. Cambridge University Press, 2012. 2 +[34] Christian J Schuler, Michael Hirsch, Stefan Harmeling, and Bernhard Schölkopf. Blind correction of optical aberrations. In Computer Vision-ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part III 12, pages 187-200. Springer, 2012. 2 +[35] Haosen Shi, Shen Ren, Tianwei Zhang, and Sinno Jialin Pan. Deep multitask learning with progressive parameter sharing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19924-19935, 2023. 4 +[36] Yichang Shih, Brian Guenter, and Neel Joshi. Image enhancement using calibrated lens simulations. In Computer Vision-ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part IV 12, pages 42-56. Springer, 2012. 2, 3 +[37] Warren J Smith. Modern optical engineering: the design of optical systems.
2008. 7 +[38] Ethan Tseng, Ali Mosleh, Fahim Mannan, Karl St-Arnaud, Avinash Sharma, Yifan Peng, Alexander Braun, Derek Nowrouzezahrai, Jean-Francois Lalonde, and Felix Heide. Differentiable compound optics and processing pipeline optimization for end-to-end camera design. ACM Transactions on Graphics (TOG), 40(2):1-19, 2021. 2 +[39] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9446-9454, 2018. 5 +[40] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 6 +[41] Jiamin Wu, Yuduo Guo, Chao Deng, Anke Zhang, Hui Qiao, Zhi Lu, Jiachen Xie, Lu Fang, and Qionghai Dai. An integrated imaging sensor for aberration-corrected 3d photography. Nature, 612(7938):62-71, 2022. 2 + +[42] James C Wyant and Katherine Creath. Basic wavefront aberration theory for optical metrology. Applied optics and optical engineering, 11(part 2):28-39, 1992. 2 +[43] Sidi Yang, Tianhe Wu, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang, and Yujiu Yang. Maniqa: Multi-dimension attention network for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1191-1200, 2022. 6 +[44] Tao Yue, Jinli Suo, Jue Wang, Xun Cao, and Qionghai Dai. Blind optical aberration correction by exploring geometric and visual priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1684-1692, 2015. 2 +[45] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14821-14831, 2021.
6, 7 +[46] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5728-5739, 2022. 1, 2, 6, 7, 8 +[47] Kai Zhang, Wangmeng Zuo, and Lei Zhang. Deep plug-and-play super-resolution for arbitrary blur kernels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1671-1681, 2019. 2 +[48] Weisong Zhao, Shiqun Zhao, Liuju Li, Xiaoshuai Huang, Shijia Xing, Yulin Zhang, Guohua Qiu, Zhenqian Han, Yingxu Shang, De-en Sun, et al. Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nature biotechnology, 40(4):606-617, 2022. 2 +[49] Jingwen Zhou, Bingkun Chen, Jiapu Yan, Zheng Ren, Wenguan Zhang, Huajun Feng, Yueting Chen, and Meijuan Bian. Optical degradation correction of manufacturing-perturbed glass-plastic hybrid lens systems via a joint hardware-software optimization framework. Optics Express, 32(15): 25866-25882, 2024. 2 +[50] Jingwen Zhou, Shiqi Chen, Zheng Ren, Wenguan Zhang, Jiapu Yan, Huajun Feng, Qi Li, and Yueting Chen. Revealing the preference for correcting separated aberrations in joint optic-image design. Optics and Lasers in Engineering, 178: 108220, 2024. 
2 \ No newline at end of file diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/images.zip b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..491d9c9bb1e708d64543bdd3a6689bb71da04032 --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d5706afdb215af9d4302c9497db0b10f85dcbf273b91e2576b03749464e59c4 +size 637681 diff --git a/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/layout.json b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3cad1ac63783a12008ba5dab2b4ab390f8334fdd --- /dev/null +++ b/CVPR/2025/A Physics-Informed Blur Learning Framework for Imaging Systems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d63f53a90195663f75a7dec8b92f55cabd6674388017afac5ba3e5de9a7f621 +size 418349 diff --git a/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_content_list.json b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d37e9a86ce42a46c3dcaa6462ab73548809012c2 --- /dev/null +++ b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccb281941f7b30c70f53e3dfbca3b0b14bb8969576c1a1c7f5dd983711e29b49 +size 84652 diff --git a/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector 
Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_model.json b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_model.json new file mode 100644 index 0000000000000000000000000000000000000000..caab490a2bfc07aa9ec727b6baa039ff88218ce0 --- /dev/null +++ b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:748e68588f893dbf61c3818e2e44dec8a967ca2469eb31c917c7319abbfc949a +size 107210 diff --git a/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_origin.pdf b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4d1514d6dfe73119ae1a66f3bcc2fdde03b26060 --- /dev/null +++ b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/d1140f05-15bd-4a94-91b7-a27e1828a326_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccf345ed497d41e5903248b7aebde2083bc463a584df48db69ffdf615f4997f8 +size 3299154 diff --git a/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/full.md b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3efcf8f3d3e9330e816e7e7c7aca1c4ebb173e79 --- /dev/null +++ b/CVPR/2025/A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition/full.md @@ -0,0 +1,354 @@ +# A Polarization-Aided Transformer for Image Deblurring via Motion Vector Decomposition + +Duosheng Chen $^{1,3}$ Shihao Zhou $^{1,3}$ Jinshan Pan $^{4}$ Jinglei 
Shi $^{3}$ Lishen Qu $^{1,3}$ Jufeng Yang $^{1,2,3*}$ + +1 Nankai International Advanced Research Institute (SHENZHEN·FUTIAN) + +$^{2}$ Pengcheng Laboratory, $^{3}$ VCIP & TMCC & DISSec, College of Computer Science, Nankai University + +$^{4}$ School of Computer Science and Engineering, Nanjing University of Science and Technology + +duoshengchen@mail.nankai.edu.cn, zhoushihao96@mail.nankai.edu.cn, sdluran@gmail.com + +jinglei.shi@nankai.edu.cn, qulishen@mail.nankai.edu.cn, yangjufeng@nankai.edu.cn + +# Abstract + +Effectively leveraging motion information is crucial for the image deblurring task. Existing methods typically build deep-learning models to restore a clean image by estimating blur patterns over the entire movement. This suggests that the blur caused by rotational motion components is processed together with the translational one. Exploring the movement without separation leads to limited performance for complex motion deblurring, especially rotational motion. In this paper, we propose Motion Decomposition Transformer (MDT), a transformer-based architecture augmented with polarized modules for deblurring via motion vector decomposition. MDT consists of a Motion Decomposition Module (MDM) for extracting hybrid rotation and translation features and a Radial Stripe Attention Solver (RSAS) for sharp image reconstruction with enhanced rotational information. Specifically, the MDM uses a deformable Cartesian convolutional branch to capture translational motion, complemented by a polar-system branch to capture rotational motion. The RSAS employs radial stripe windows and angular relative positional encoding in the polar system to enhance rotational information. This design preserves translational details while keeping computational costs lower than dual-coordinate design. 
Experimental results on 6 image deblurring datasets show that MDT outperforms state-of-the-art methods, particularly in handling blur caused by complex motions with significant rotational components. The code and pre-trained models are available at https://github.com/Calvin11311/MDT. + +# 1. Introduction + +Image deblurring aims to recover a sharp image from its blurred counterparts. Deep convolutional neural networks (CNNs) have significantly advanced the research progress + +![](images/5a3d2c26ddf32d48f8d9c63c2c535d7c8352bcca6e7fa0190b7b7947535af9c8.jpg) +(a) Motion Decomposition + +![](images/5f02a9fbb26f69fc9c198bd2556bff7b8fb99dc5b70141cfe99622932749b206.jpg) + +![](images/52fe35e7cac0bea602db14c2ca8d3f0bcdae449b3b7208a05acf3c2d37b050ae.jpg) +(b) Blurry Image +(c) Tokens in Polar Coordinate +Figure 1. (a) The demonstration of the motion trajectory causing blur. It can be decomposed into translation and rotation components, as the blue curve consists of the red rotational motion and the orange translational movement. (b) Illustration of the kernel estimation [4], the red curves describe the motion state that causes the blur. (c) Exhibiting the relationship of tokens within the polar coordinates. $\theta$ quantifies the relative angle between A and B. + +![](images/6f786887f269bc53bdfe4310296c8ce770db869ca97d4e6e19e6f7fcaaa806f9.jpg) + +of this problem [16, 34, 50] thanks to their ability to model complex local blur patterns using diverse strategies (e.g., the multi-scale [50, 52]/multi-stage network architectures [34, 44], latent space kernel estimation [16], etc.). Unfortunately, they often struggle to capture non-local contextual information due to the spatially invariant and limited receptive fields of convolutional kernels. Although several methods [36, 54] focus on using larger kernels or deeper networks to expand the receptive field, these approaches can not always generate satisfactory results [23]. 
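To make the receptive-field limitation concrete: for a stack of stride-1 convolutions, the receptive field grows only linearly with depth, so even fairly deep CNNs see a small neighbourhood per output pixel. A quick sketch (helper name ours, not from the paper):

```python
def receptive_field(num_layers, kernel_size=3):
    """Receptive field of a stack of stride-1 convolutions with equal kernels."""
    rf = 1
    for _ in range(num_layers):
        rf += kernel_size - 1   # each layer widens the field by (k - 1)
    return rf

# Ten stacked 3x3 convolutions still cover only a 21x21 window,
# far smaller than a high-resolution blurry frame.
print(receptive_field(10))  # -> 21
```

This is why simply adding layers or enlarging kernels is an expensive way to reach the long-range context that self-attention provides directly.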
As the self-attention in Transformers [47] can capture relationships between long-range pixels, Transformer-based methods [14, 48, 51] achieve impressive performance. The scaled-dot-product operation in self-attention incurs a huge computational cost, so several methods adopt window-based attention mechanisms [29, 48] to relieve this issue. Specifically, rectangular window designs explore motion along orthogonal directions in the Cartesian coordinate system, achieving a performance boost for deblurring. However, motion blur is commonly irregular and does not always follow orthogonal directions in real-world scenarios (Fig. 1(b)).

In fact, motion typically comprises rotational and translational components [12]. As illustrated in Fig. 1(a), the trajectory of motion can be decomposed into a rotational component (red curve) and a translational one (orange line). Yet most previous methods process the translational and rotational components together with complex networks. The absence of a specialized design for rotational motion leads to insufficient exploration of the movement causing blur, particularly in capturing rotational features.

Although several methods [9, 45] attempt to capture the movement along the horizontal and vertical directions, this design can introduce linearization errors [13] when capturing rotational motion, which is inherently non-linear. It is therefore crucial to devise a strategy that estimates blur patterns by handling rotational movements effectively while still preserving translational details. Consequently, we develop a polarization-aided Transformer that leverages motion vector decomposition to remove blur from both translational and rotational perspectives, tackling the common neglect of rotational motion in existing methods.
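The decomposition sketched in Fig. 1(a) can be made concrete: a 2D rigid motion $\mathbf{p}' = R(\theta)\mathbf{p} + \mathbf{t}$ splits a point's displacement into a rotational part and a translational part. A minimal sketch (function names ours, not the paper's code):

```python
import math

def rigid_motion(p, theta, t):
    """Apply a 2D rigid motion: rotate p by theta, then translate by t."""
    x, y = p
    xr = math.cos(theta) * x - math.sin(theta) * y
    yr = math.sin(theta) * x + math.cos(theta) * y
    return (xr + t[0], yr + t[1])

def decompose(p, p_new, theta):
    """Split the total displacement p -> p_new into a rotational part
    (p to R(theta)p) and a translational part (R(theta)p to p_new)."""
    x, y = p
    xr = math.cos(theta) * x - math.sin(theta) * y
    yr = math.sin(theta) * x + math.cos(theta) * y
    rotational = (xr - x, yr - y)
    translational = (p_new[0] - xr, p_new[1] - yr)
    return rotational, translational

# Point (1, 0) rotated by 90 degrees and shifted by (2, 3) lands at (2, 4);
# the displacement splits into (-1, 1) rotational + (2, 3) translational.
p = (1.0, 0.0)
q = rigid_motion(p, math.pi / 2, (2.0, 3.0))
rot, trans = decompose(p, q, math.pi / 2)
```

A Cartesian-only model that fits such a trajectory with horizontal and vertical strips effectively linearizes the curved rotational term, which is the error source the paper targets.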
In this paper, we propose the Motion Decomposition Transformer (MDT), a Transformer augmented with two polarized modules for complex image deblurring: the motion decomposition module (MDM) and the radial stripe attention solver (RSAS). MDM adopts a dual-branch design with deformable convolution operations [59] to extract shallow features. Compared to a single convolution layer that embeds the input image, the proposed module adopts two branches based on different coordinate systems to address each motion component separately. Specifically, one branch uses a standard Cartesian-based deformable convolution layer to capture translational motion, while the other employs polar-aided offsets to explore rotational movement in a complementary manner.

In addition, we develop the RSAS to estimate the blur pattern by considering translational and rotational movements in the polar coordinate system. This polarization-aided attention module can directly capture the features of both motion components through relative distances and relative angles. We therefore design a polar coordinate-based image reconstruction module that leverages angular relationships in particular to better characterize blur caused by rotational motion components. Previous polar coordinate-based methods [2, 17, 21] usually contain a radial partition with various sampling and reshaping operations, which leads to excessive loss of image information. Because the deblurring task requires detailed information [3], we omit the sampling operation in the stripe window design to preserve the essential motion details. Moreover, the existing angular relative posi
Finally, we design an effective RSAS with the radial stripe window and angular relative positional encoding to improve the restoration of blurred images by addressing both translational and rotational components of motion. We formulate the aforementioned modules into an encoder-decoder architecture. The experiment results show that our method obtains a superior performance against state-of-the-art methods. + +Our main contributions are summarized as follows: + +- We propose a motion decomposition module (MDM) with a dual-branch scheme to separate the movements that triggered blur. It adopts two deformable convolution layers with Cartesian and polar offsets to effectively extract hybrid rotational and translational motion features. +- We design a polar coordinates-based attention module, the radial stripe attention solver (RSAS), which employs radial stripe windows and angular relative positional encoding to effectively explore motion features, particularly the rotational components, for deblurring. +- We evaluate MDT on 6 image deblurring datasets, where MDT outperforms state-of-the-art approaches. Additionally, we form a subset from the existing dataset to specially assess the capability of handling blur caused by the significant rotational movement. + +# 2. Related Work + +# 2.1. Image Deblurring + +Deep CNN-based methods. Recently, deep CNN-based methods [7, 10, 18] achieve great performance in the image deblurring task. Nah et al. [34] design a multi-scale network to estimate the blur pattern from a blurry image. To enhance the efficiency of blur pattern exploration in a multi-scale architecture, Gao et al. [18] introduce a method for selective parameter sharing. Sun et al. [42] utilize the CNNs to predict the probabilistic distribution of the blur kernel at the patch level. The multi-patch architecture [50, 52] is another design to improve the performance of deblurring. Zhang et al. 
[52] present a hierarchical deblurring model based on the multi-patch strategy, which estimates blur features stage by stage. Zamir et al. [50] employ a cross-stage strategy, fusing features from different stages to effectively restore sharp images.

Direction-related methods. Several approaches [19, 31] focus on learning the directions of degradation to enhance image restoration. WDNet [31] designs a dual-branch network for demoiréing, which contains a DPM block to capture texture from eight directions. DSC [19] utilizes an attention mechanism in a spatial recurrent neural network to locate shadows from each direction. Other researchers [15, 30] have explored the role of motion information in improving deblurring performance. Fang et al. [15] estimate the non-uniform blur kernel in latent space and use it to reconstruct the sharp image. MISC Filter [30] restores a sharp image by adaptively estimating the spatially-variant motion flow and aligning the motion-induced blurring patterns to the motion. Owing to the spatially invariant nature of the convolution operation and its limited receptive field [51], CNN-based methods struggle to capture global information in images.

# 2.2. Vision Transformers

Transformer-based methods have demonstrated the ability to model global context with self-attention in high-level tasks (e.g., object segmentation [37], object detection [24], and text recognition [27]) and have been further extended to low-level tasks (image deblurring [23, 51], super-resolution [29], image denoising [5, 48], etc.). Nevertheless, the vanilla self-attention mechanism incurs a significant computational cost. To address this issue, SwinIR [29] divides the image into $8 \times 8$ windows and performs attention within each window to reduce computational complexity. Meanwhile, Restormer [51] computes attention across channel dimensions, which further reduces computational demands.
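The saving from the 8×8 window trick is easy to quantify: global attention over $N$ tokens scores $N^2$ query–key pairs, while non-overlapping $w \times w$ windows score only $N \cdot w^2$. A rough count (helper name ours):

```python
def attention_pairs(h, w, window=None):
    """Query-key pairs scored by global vs. non-overlapping window attention."""
    n = h * w
    if window is None:             # vanilla global self-attention
        return n * n
    per_window = window * window   # tokens inside one window
    return (n // per_window) * per_window * per_window

full = attention_pairs(64, 64)         # 4096^2 = 16,777,216 pairs
windowed = attention_pairs(64, 64, 8)  # 64 windows * 64^2 = 262,144 pairs
print(full // windowed)  # -> 64
```

For a 64×64 feature map the windowed variant scores 64× fewer pairs, and the gap widens linearly with resolution, which is why window shape (rectangular vs. radial stripe) becomes the main design lever.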
+ +Despite these advancements, the full potential of spatial information for image restoration has not yet been fully exploited. CAT [9] addresses this by aggregating features across rectangular windows with an axial-shift operation, which not only achieves a large receptive field but also enhances inter-window interactions with linear complexity. Stripformer [45] proposes the intra- and inter-strip attention, which decomposes motion information into horizontal and vertical directions for estimating the blur pattern. + +Moreover, the current Transformer-based approaches [53, 57] attempt to further improve the performance of image restoration by a sparse attention module. AST [57] introduces an adaptive sparse Transformer that employs a two-branch paradigm to mitigate noisy interactions from irrelevant areas and to reduce feature redundancy. BSSTNet [53] adopts a blur-aware sparse Transformer designed for restoration, designing an extended temporal window that leverages distant frames and bidirectional feature propagation to reduce error accumulation. Nevertheless, the Cartesian system-based methods struggle to capture motion information in nonorthogonal directions. Although new methods [20, 26, 43] for exploring movements in image deblurring are emerging, they require additional information (e.g., image prior, historical information, neighboring frames) for restoring sharp images. Unlike previous works, our model introduces an + +efficient motion decomposition Transformer based on polar coordinates, intending to jointly capture spatial information from both translational and rotational motion. + +# 3. Proposed Method + +Our goal is to restore high-quality sharp images by modeling the blur formation process from movements and utilizing the non-local information with Transformers. The proposed method mainly contains two components: the motion decomposition module (MDM) (Sec. 3.2) and the radial stripe attention solver (RSAS) (Sec. 3.3). + +# 3.1. 
Overall Pipeline

The overall pipeline of the proposed architecture is presented in Fig. 2(a). Given a blurry image $\mathbf{X}$ with a resolution of $H\times W\times 3$, where $H$ and $W$ represent the height and width, the MDM decomposes the movement into translation and rotation via a dual-branch design. One branch adopts a Cartesian-based deformable convolution layer to embed the input image $\mathbf{X}$, producing the translational feature $\mathbf{F}_t\in \mathbb{R}^{H\times W\times C}$. The other branch forms offsets based on the polar coordinate system to produce the rotational feature $\mathbf{F}_r\in \mathbb{R}^{H\times W\times C}$. Next, we add the two features $\mathbf{F}_t$ and $\mathbf{F}_r$ to generate the output of MDM, i.e., $\mathbf{F}_0\in \mathbb{R}^{H\times W\times C}$. Following the U-shape structure [48], $\mathbf{F}_0$ passes through 3 encoder levels. Each level contains multiple Transformer blocks, where the number of blocks gradually increases and the $l$-th encoder stage produces the feature $\mathbf{F}_l\in \mathbb{R}^{\frac{H}{2^l}\times \frac{W}{2^l}\times 2^l C}$. We then employ the FFN from [23] to capture useful information for feature refinement. In the decoder, we design RSAS (see Fig. 2(c)) to estimate the blur pattern from the rotational and translational movements in the polar coordinate system. Specifically, the feature $\mathbf{F}_{att}\in \mathbb{R}^{\frac{H}{2^l}\times \frac{W}{2^l}\times 2^l C}$ is produced by the radial stripe self-attention (RS-SA) with the angular relative position encoding (ARPE). After the $l$ decoder stages, we reshape the features back to the size $H\times W$. Finally, a $3\times 3$ convolution layer is applied to generate the residual $\mathbf{R}\in \mathbb{R}^{H\times W\times 3}$, and the restored image is obtained as $\mathbf{X}' = \mathbf{X} + \mathbf{R}$.

# 3.2. 
Motion Decomposition Module + +The embedding layer of existing methods [23, 45, 51] often consists of a $3 \times 3$ convolution layer to extract the shallow features of input images with a fixed receptive field kernel. However, subtle and varying local movement patterns are difficult for the normal convolution layers to focus on, as supported by [38]. To alleviate this problem, we design a dual-branch motion decomposition module (MDM) that incorporates deformable convolution networks (DCNv2 [59]) in polar coordinates. This approach enhances the perception of object structures, enabling adaptive es + +![](images/41d9f92ef3339f9304b8feccd5f9b3d864fe9ac106c34a60b8168ba1d8fe69d9.jpg) +(a) MDT + +![](images/f5325d4d2fa35467f8d0e24930029e72bd098e5c55ef1d36294ea227da644445.jpg) + +![](images/0c65debc586f8fe82b6fb1d88fb7ef3279755cae4741befe3459fa39ac6062e5.jpg) +(b) MDM +Figure 2. Overview of MDT. (a) The proposed method MDT consists of an asymmetric encoder-decoder architecture and the encoder module only has FFN which is directly used from [23]. (b) The proposed motion decomposition module (MDM) consists of two deformable convolution layers. One branch encodes image features using a standard Cartesian-based layer and the other operates in the polar coordinates. In particular, it learns the new grid from the meshgrid function, and then the polar offsets derived from the arctan. (c) Illustration of the radial strip attention solver (RSAS), which contains a polar-based radial stripe self-attention (RS-SA). We design an angular relative position encoding (ARPE) to build a new attention module under the polar coordinate system. The specific design of the three components: radial stripe self-attention, angular relative position encoding, and azimuth patch merging will be presented in Sec. 3.3. + +timation of the motion components, whether rotational or translational, during the blur-forming process. In detail, as shown in Fig. 
2(b), the translation branch generates the ordinary offsets $\Delta_t$ for the translational motion, and the rotation branch aims at the rotational movement by making the $\Delta_r$ from polar coordinates. Given the blurry image $\mathbf{X}$ , MDM first generates a grid $\mathbf{D} \in \mathbb{R}^{H \times W \times 2}$ by a meshgrid function as the input to each branch, which is used to create a grid out of two given one-dimensional arrays $\mathbf{D}_x$ and $\mathbf{D}_y \in \mathbb{R}^{H \times W}$ representing the indexes. + +Translation Branch. In the translation branch, we multiply $\mathbf{D}$ and the input image $\mathbf{X}$ , then adopt a convolution layer to generate the offsets $\Delta_t = \text{Conv}_{3\times 3}(\mathbf{X}\mathbf{D})$ , where the $\text{Conv}_{3\times 3}(\cdot)$ denotes the $3\times 3$ convolution layer. The offsets tensor $\Delta_t\in \mathbb{R}^{H\times W\times G}$ and the original image $\mathbf{X}$ are the input for the deformable convolution. The parameter $G$ means the number of offset groups, equal to $2\times$ kernel_size $\times$ kernel_size. Next, we generate the output feature of the translation branch $\mathbf{F}_t\in \mathbb{R}^{H\times W\times C}$ : $\mathbf{F}_t = D\text{Conv} (\mathbf{X},\Delta_t)$ , where $D\text{Conv} (\cdot)$ means the deformable convolution. + +Rotation Branch. The purpose of the rotation branch is to leverage the polar system to explore the rotational movement. We apply the arctan function to convert the input grid $\mathbf{D}$ to polar coordinates by $\mathbf{D}_r = \arctan (\mathbf{D}_y / \mathbf{D}_x)$ , $\mathbf{D}_x$ and $\mathbf{D}_y$ are matrices formed by the $x$ and $y$ coordinate components of the $\mathbf{D}$ , respectively. $\mathbf{D}_r\in \mathbb{R}^{H\times W}$ denotes the distribution matrix in the polar coordinates system. We use the $3\times 3$ convolution layer to produce the rotational offsets $\Delta_r\in \mathbb{R}^{H\times W\times G}$ . 
The whole process of the rotation branch + +can be represented as: + +$$ +\mathbf {F} _ {r} = D C o n v (\mathbf {X}, \operatorname {C o n v} _ {3 \times 3} (\mathbf {D} _ {r})), \tag {1} +$$ + +$\mathbf{F}_r \in \mathbb{R}^{H \times W \times C}$ is the output feature of rotation branch. Finally, we add the above two features as the output feature $\mathbf{F}_0 \in \mathbb{R}^{H \times W \times C}$ of MDM. + +# 3.3. Radial Stripe Attention Solver + +As mentioned in Sec. 1, previous Transformer methods [9, 45] adopt the window split and the position encoding in the Cartesian coordinate system, which restrict modeling the rotational part to restore sharp images. To achieve high-quality image deblurring with the polar Transformer, we propose the Radial stripe Attention Solver (RSAS), which consists of Radial Stripe Self-Attention (RS-SA), Angular Relative Position Encoding (ARPE), and azimuth patch merging, as shown in Fig. 2(c). + +Radial Stripe Self-Attention. To tailor a polar coordinate-based Transformer for the deblurring task, we propose a Radial Strip Self-Attention (RS-SA) We expand the attention area by a radial stripe window [9, 45] to better estimate the motion information in the polar coordinate. We use $M_{\varphi}$ and $M_r$ to represent the non-overlapping window along the angles and distance in the polar coordinates. To be specific, we design a new window-based self-attention containing the radial stripe window shape with the size of $M_{\varphi} = M \times M$ and $M_r = 1$ to enhance the ability to capture the blur pattern ( $M$ is the window size). In our RS-SA, given the input feature $\mathbf{F}_l$ from the MDM module, we adopt radial stripe attention to generate the output feature $\mathbf{F}_{att} \in \mathbb{R}^{\frac{H}{2^l} \times \frac{W}{2^l} \times 2^l C}$ . 
+ +![](images/e3c9828145f2fc59688bc0b948dae00208c609a578b261daaa1173330d66a7bd.jpg) +(a) Cartesian-based positional encoding + +![](images/7af674c66f28ca8ed8c569d55fdb51b67fbf361c4950cd156c520d9626edcb8c.jpg) +(b) Angular relative positional encoding +Figure 3. The comparison of two coordinates positional encoding. (a) The Cartesian-based positional encoding employs $\Delta x$ and $\Delta y$ to represent relative distances, which restricts its capability to learn the angular relationships between two green tokens. (b) The angular relative positional encoding represents positional relationships using the angle $\Delta \varphi$ and the distance $\Delta r$ , which allows for a more effective representation of rotational motion information with angles meanwhile the translation one also can be learned well. + +Angular Relative Positional Encoding. Due to the polar-based attention in MDT, we design a corresponding Angular Relative Positional Encoding (ARPE) that presents the relationship between tokens in the polar coordinates. The standard relative positional encoding contains the position bias $\mathbf{B} \in \mathbb{R}^{P^2 \times P^2}$ , where $P$ denotes the number of tokens in a window. The attention module is formulated as: + +$$ +\mathbf {A t t} (\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \operatorname {S o f t m a x} \left(\mathbf {Q} \mathbf {K} ^ {T} / \sqrt {d} + \mathbf {B}\right) \mathbf {V}, \tag {2} +$$ + +where $\mathbf{Q},\mathbf{K},\mathbf{V}\in \mathbb{R}^{P^2\times d}$ denote the queries, keys, and values, which are obtained by $\mathbf{Q} = \mathbf{F}_l\mathbf{P}_Q,\mathbf{K} = \mathbf{F}_l\mathbf{P}_K,\mathbf{V} =$ $\mathbf{F}_l\mathbf{P}_V$ . To be specific, $\mathbf{P}_Q,\mathbf{P}_K,\mathbf{P}_V$ denote the projections matrices of queries, keys, and values, respectively; $\mathbf{F}_l$ is the input feature, and $d$ represents the dimension of query/key. 
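Eq. (2) above is ordinary scaled-dot-product attention with an additive position bias $\mathbf{B}$; a minimal numeric sketch with two tokens and $d = 2$ (pure Python, example values ours):

```python
import math

def attention(Q, K, V, B):
    """Compute Softmax(Q K^T / sqrt(d) + B) V for small dense matrices (Eq. 2)."""
    n, d = len(Q), len(Q[0])
    out = []
    for i in range(n):
        scores = [sum(Q[i][c] * K[j][c] for c in range(d)) / math.sqrt(d) + B[i][j]
                  for j in range(n)]
        m = max(scores)                      # stabilised softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(weights[j] * V[j][c] for j in range(n))
                    for c in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 10.0], [0.0, 0.0]]  # a large bias steers token 0 toward token 1
out = attention(Q, K, V, B)
# out[0] is pulled almost entirely onto V[1] = [3.0, 4.0] by the bias
```

The bias is added before the softmax, so a learned $\mathbf{B}$ can systematically favour token pairs at particular relative positions — the mechanism ARPE reuses with angular and radial terms.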
+ +The position bias in the Cartesian system captures the relative position from orthogonal directions while restricting the ability to capture the rotation motion information for image deblurring, as shown in Fig. 3(a). To explore the rotational aspect of motion for restoring sharp images, we propose ARPE that models the spatial relationships between tokens by angles and distance (tokens refer to pixels in the image). On the angular relative positional encoding, the distance $r$ and the angle $\varphi$ are utilized to characterize the relative position between tokens. We separate the position bias $\mathbf{B}$ into $\mathbf{B}_r$ and $\mathbf{B}_{\varphi}$ , representing the distance and angular relative position biases, respectively. The token $i$ -th on the polar coordinates is shown as: + +$$ +r _ {i} = \frac {r _ {m a x} (i - 0 . 5)}{N _ {r}}, \quad \varphi_ {i} = \frac {2 \pi (i - 0 . 5)}{N _ {\varphi}}, \qquad (3) +$$ + +where $r_i$ and $\varphi_{i}$ denote the distance and angle of $i$ -th token, and the $r_{max}$ represents the half field distance of the tensor. $N_{\varphi}$ and $N_{r}$ denote the division number of angles and the distance in the radial direction separately. The tokens $i$ and $j$ have the relative position $(\Delta r,\Delta \varphi)$ , can be formulated as: + +$$ +\Delta r = r _ {i} - r _ {j}, \quad i, j = \{1, 2, \dots , N _ {r} \}, \tag {4} +$$ + +$$ +\Delta \varphi = \varphi_ {i} - \varphi_ {j}, \qquad i, j = \{1, 2,..., N _ {\varphi} \}. 
$$

The two position bias tensors $\mathbf{B}_r$ and $\mathbf{B}_{\varphi}$ are defined as:

$$
\mathbf{B}_r = \mathbf{a}_{\Delta r} \sin(\Delta r) + \mathbf{b}_{\Delta r} \cos(\Delta r), \tag{5}
$$

$$
\mathbf{B}_{\varphi} = \mathbf{a}_{\Delta \varphi} \sin(\Delta \varphi) + \mathbf{b}_{\Delta \varphi} \cos(\Delta \varphi),
$$

where the coefficients $\mathbf{a}_{\Delta r}, \mathbf{b}_{\Delta r}$ and $\mathbf{a}_{\Delta \varphi}, \mathbf{b}_{\Delta \varphi}$ are learnable parameters taken from $\hat{\mathbf{B}}_r \in \mathbb{R}^{(2M_r - 1) \times 2}$ and $\hat{\mathbf{B}}_{\varphi} \in \mathbb{R}^{(2M_{\varphi} - 1) \times 2}$, respectively. Since the relative positions in the radial stripe window range over $[-M_{\varphi} + 1, M_{\varphi} - 1]$ and $[-M_r + 1, M_r - 1]$ along the angular and radial directions, respectively, we parameterize the smaller bias matrices $\hat{\mathbf{B}}_{\varphi}$ and $\hat{\mathbf{B}}_r$ following [32]. $M_{\varphi}$ and $M_r$ denote the number of tokens in a window. For better comprehension, the ARPE is visualized in Fig. 3(b). The positional encoding tensors in ARPE are thus $\mathbf{B}_{\varphi}, \mathbf{B}_r \in \mathbb{R}^{M_\varphi^2 \times M_r^2}$, and the attention module is defined as:

$$
\operatorname{Att}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \operatorname{Softmax}\left(\mathbf{Q}\mathbf{K}^T/\sqrt{d} + \mathbf{B}_{\varphi} + \mathbf{B}_r\right)\mathbf{V}. \tag{6}
$$

Azimuth Patch Merging. We design an azimuth patch merging technique to align with the polar coordinate-based self-attention. Our method merges patches along the angular and radial directions, similar to [32].

# 4. Experiment

In this section, we evaluate the proposed approach against state-of-the-art methods and analyze the effect of each module by adopting different design choices.

# 4.1. Experimental Details

Datasets. We train MDT on the GoPro dataset [34] and directly evaluate it on the GoPro [34], HIDE [41], RealBlur [39] and RWBI [55] datasets. 
Furthermore, we train our method on the REDS [35] dataset with the training settings of FFTformer [23] and evaluate it on the validation set (denoted REDS-val-300), and we additionally train on the newly proposed real-world dataset RSBlur [40]. We train our model using the AdamW optimizer [22] and adopt the same loss function as [10]. Following [23], the initial learning rate is set to $1\times 10^{-3}$ and decreases to $1\times 10^{-7}$ after 300,000 iterations with the cosine decay strategy. The batch and patch sizes are empirically set to 64 and $128 \times 128$, respectively. In our 3-level encoder-decoder architecture, the number of Transformer blocks is set to [6, 6, 12]. The progressive learning strategy is employed following [51].

# 4.2. Comparisons with State-of-the-art Methods

Quantitative Evaluations. We compare our model on the GoPro [34] and HIDE [41] datasets with 14 state-of-the-art methods. As shown in Tab. 1, MDT outperforms all considered image deblurring algorithms. It is worth mentioning that, compared with Stripformer [45], which is designed on the Cartesian system, MDT yields a performance

Table 1. Quantitative comparison on GoPro [34] and HIDE [41] for image deblurring. All methods are only trained on GoPro [34].
| Method | GoPro [34] PSNR ↑ | GoPro [34] SSIM ↑ | HIDE [41] PSNR ↑ | HIDE [41] SSIM ↑ | Params ↓ (M) |
|---|---|---|---|---|---|
| DeblurGAN-v2 [25] | 29.55 | 0.934 | 27.40 | 0.882 | 60.9 |
| DMPHN [52] | 31.20 | 0.940 | 29.09 | 0.924 | 21.7 |
| MIMO [10] | 32.45 | 0.957 | 29.99 | 0.930 | 16.1 |
| MPRNet [50] | 32.66 | 0.959 | 30.96 | 0.939 | 20.1 |
| Restormer [51] | 32.92 | 0.961 | 31.22 | 0.942 | 26.1 |
| Uformer [48] | 33.06 | 0.967 | 30.90 | 0.953 | 50.9 |
| Stripformer [45] | 33.08 | 0.962 | 31.03 | 0.940 | 19.7 |
| SFNet [11] | 33.27 | 0.962 | 31.10 | 0.941 | 13.3 |
| FFTformer [23] | 34.21 | 0.969 | 31.62 | 0.946 | 16.6 |
| UFPNet [16] | 34.06 | 0.968 | 31.74 | 0.947 | 80.3 |
| Stripformer+ID-Blau [49] | 33.66 | 0.966 | 31.50 | 0.944 | 19.7 |
| SEMGUD [8] | 29.06 | 0.927 | 27.64 | 0.892 | - |
| MISC Filter [30] | 34.10 | 0.969 | 31.66 | 0.946 | 16.0 |
| FPro [58] | 33.05 | 0.961 | 30.63 | 0.936 | 22.3 |
| MDT (Ours) | 34.26 | 0.969 | 31.84 | 0.948 | 14.3 |
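The learning-rate schedule described in Sec. 4.1 (initial rate $1e^{-3}$ decayed to $1e^{-7}$ over 300,000 iterations) corresponds to a standard cosine decay; a minimal sketch follows, though the paper's exact implementation (e.g. warm-up or per-epoch stepping) may differ:

```python
import math

def cosine_lr(step, total=300_000, lr_max=1e-3, lr_min=1e-7):
    """Cosine decay from lr_max (step 0) to lr_min (step `total`)."""
    t = min(step, total) / total
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# cosine_lr(0) ≈ 1e-3 and cosine_lr(300_000) ≈ 1e-7, with the rate
# roughly halved at the midpoint of training.
```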
Table 2. Quantitative comparison on RealBlur [39]. All methods are only trained on GoPro [34].
| Method | RealBlur-R [39] PSNR ↑ | RealBlur-R [39] SSIM ↑ | RealBlur-J [39] PSNR ↑ | RealBlur-J [39] SSIM ↑ | Average PSNR ↑ | Average SSIM ↑ |
|---|---|---|---|---|---|---|
| NAFNet [7] | 33.63 | 0.944 | 26.33 | 0.856 | 29.98 | 0.900 |
| IR-SDE [33] | 32.56 | 0.909 | 23.19 | 0.691 | 27.89 | 0.800 |
| FFTformer [23] | 33.66 | 0.948 | 25.71 | 0.851 | 29.69 | 0.900 |
| CODE [56] | 33.81 | 0.939 | 26.25 | 0.801 | 30.03 | 0.870 |
| GRL-B [28] | 33.97 | 0.944 | 26.40 | 0.816 | 30.19 | 0.880 |
| MDT (Ours) | 34.37 | 0.948 | 26.35 | 0.855 | 30.36 | 0.902 |
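As a quick consistency check, the Average column follows directly from the two per-dataset scores (values transcribed from Table 2; only PSNR is recomputed here, since the SSIM averages are rounded from unrounded values):

```python
# Recompute the "Average" PSNR column of Table 2 from the per-dataset scores.
psnr = {  # method: (RealBlur-R, RealBlur-J), transcribed from Table 2
    "NAFNet [7]": (33.63, 26.33),
    "CODE [56]": (33.81, 26.25),
    "MDT (Ours)": (34.37, 26.35),
}
avg = {m: round(sum(v) / 2, 2) for m, v in psnr.items()}
print(avg)  # {'NAFNet [7]': 29.98, 'CODE [56]': 30.03, 'MDT (Ours)': 30.36}
```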
gain of $1.17\mathrm{~dB}$ and $0.81\mathrm{~dB}$ in terms of PSNR on GoPro and HIDE, respectively. We attribute the success of MDT to its ability to explore motion features in the polar coordinate system. Moreover, our proposed method surpasses the previous best method FFTformer [23] while using fewer parameters. The results verify that our polar Transformer has a stronger capability of accurately restoring blurry images. Tab. 2 shows that MDT achieves the best and second-best performance on the RealBlur-R and RealBlur-J datasets [39], respectively. Notably, our method trained only on the GoPro [34] dataset surpasses all other methods on average, demonstrating its superior generalization to other datasets.

Additionally, we compare our model on the real-world dataset RSBlur [40] with 7 state-of-the-art methods in Tab. 3. Compared with Uformer [48], which uses conventional window-based attention for deblurring, MDT achieves a performance improvement of $0.40\mathrm{~dB}$ PSNR. This gain can be attributed to the radial stripe window of our method, which effectively explores blur patterns while requiring a much lower parameter cost. We also conduct evaluation experiments on REDS-val-300 [35]. As shown in Tab. 4, MDT attains the best results with the fewest parameters. Specifically, our approach obtains $0.16\mathrm{~dB}$ and $0.42\mathrm{~dB}$ improvements over the previous best methods NAFNet [7]

Table 3. Quantitative comparison on RSBlur [40] in terms of PSNR and SSIM.
| Method | PSNR ↑ | SSIM ↑ | Params (M) ↓ |
|---|---|---|---|
| SRN [44] | 32.53 | 0.840 | 6.8 |
| MIMO [10] | 33.37 | 0.856 | 16.1 |
| MPRNet [50] | 33.61 | 0.861 | 20.1 |
| Restormer [51] | 33.69 | 0.863 | 26.1 |
| Uformer [48] | 33.98 | 0.866 | 50.9 |
| FFTformer [23] | 33.95 | 0.894 | 16.6 |
| FPro [58] | 33.09 | 0.883 | 22.3 |
| MDT (Ours) | 34.38 | 0.899 | 14.3 |
Table 4. Quantitative comparison on REDS-val-300 [35].
| Method | PSNR ↑ | SSIM ↑ | Params (M) ↓ |
|---|---|---|---|
| MPRNet [50] | 28.79 | 0.811 | 20.1 |
| HINet [6] | 28.83 | 0.862 | 88.7 |
| MAXIM [46] | 28.93 | 0.865 | 22.2 |
| NAFNet [7] | 29.09 | 0.867 | 67.9 |
| FFTformer [23] | 29.14 | 0.867 | 16.6 |
| FPro [58] | 28.95 | 0.875 | 22.3 |
| MDT (Ours) | 29.25 | 0.869 | 14.3 |
Table 5. Quantitative comparison on RWBI [55] with the no-reference metric NIQE.
| Method | Input | Uformer [48] | SFNet [11] | FFTformer [23] | UFPNet [16] | FPro [58] | MDT (Ours) |
|---|---|---|---|---|---|---|---|
| NIQE ↓ | 5.436 | 6.061 | 5.683 | 5.188 | 5.467 | 5.154 | 4.929 |
and HINet [6]. Although the REDS dataset presents a challenge due to the joint restoration of blurry images suffering from JPEG compression, our MDT showcases superior ability, primarily attributed to the RSAS design.

Finally, we evaluate MDT on the RWBI dataset [55], which contains real images without ground truth. As presented in Tab. 5, MDT obtains the lowest NIQE score, demonstrating the superior perceptual quality of its results.

Qualitative Evaluations. We perform qualitative comparisons on the GoPro, HIDE, and RealBlur datasets. As shown in Fig. 4, we compare MDT with other methods on the GoPro dataset. Our approach in Fig. 4(h) achieves the clearest result in restoring the edges of the pink chair. The red boxes indicate that the other evaluated methods in Fig. 4(d)-(f) still contain an unclear structure of the seatback. Fig. 5 shows the visual comparisons on the HIDE dataset. Our method in Fig. 5(h) generates the finer edge of the zipper. In contrast, the other Transformer-based methods in Fig. 5(d)-(g) with Cartesian attention designs have limited capacity to recover a clearer structure. Finally, we present visual comparisons in Fig. 6 on the RealBlur dataset to validate the deblurring capabilities in real-world scenarios. The compared methods in Fig. 6(c)-(e) still exhibit significant blur in the text within the images. In contrast, our MDT in Fig. 6(f) restores significantly clearer text and sharper character edges on the billboard.

![](images/db19f5d4e2795d731171e2264a657b54c19b2ed4b95e3047b8f85638c5e40c22.jpg)
Figure 4. The qualitative results on GoPro [34]. Our method (h) achieves the clearest result in restoring the edges of the pink chair.

![](images/841562fdfdc254a892df20c8ae2d43718c832eb425ae978937788656e674de07.jpg)
Figure 5. The qualitative results on HIDE [41]. The proposed method (h) restores a clearer result on the structure of the women's zipper.
![](images/3c2699fa78a7a9c5c481e4321f0dc48f553f7c9f2996c9517c170c2dc45970d2.jpg)
Figure 6. The qualitative results on RealBlur [39]. The proposed method (f) generates significantly clearer edges on the characters.

Table 6. Different designs of MDM.
| Module | PSNR ↑ | SSIM ↑ | Params. ↓ | FLOPs ↓ |
|---|---|---|---|---|
| (a) Only-once | 31.86 | 0.9516 | 14.3 | 112.61 |
| (b) Multi-scale | 30.22 | 0.9345 | 15.7 | 119.42 |
| (c) Parallel | 27.96 | 0.8977 | 15.7 | 124.67 |
| (d) Dconv(X, Wt + Wr) | 31.32 | 0.9469 | 14.3 | 112.48 |
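Designs (a) and (d) in Table 6 differ in whether the two offset fields act in separate deformable-sampling branches whose outputs are summed, or are summed before a single DConv. A toy 1-D numpy sketch (the `dconv` stand-in below is a hypothetical simplification of deformable convolution, not the paper's module) shows the two are not interchangeable, since offset-based sampling is nonlinear in its offsets:

```python
import numpy as np

def dconv(x, offsets):
    """Toy 1-D 'deformable' sampling: output i taps x at position i + offset[i].
    Stands in for DConv; the offsets play the role of a predicted motion field."""
    idx = np.clip(np.arange(len(x)) + offsets, 0, len(x) - 1)
    return x[idx]

rng = np.random.default_rng(0)
x = rng.normal(size=32)
w_t = rng.integers(-2, 3, size=32)   # translational offset field (toy)
w_r = rng.integers(-2, 3, size=32)   # rotational offset field (toy)

only_once = dconv(x, w_t) + dconv(x, w_r)   # design (a): separate branches, outputs summed
merged = dconv(x, w_t + w_r)                # design (d): offsets merged before DConv

print(np.allclose(only_once, merged))       # False: the two designs are not equivalent
```

This matches the ablation's observation that merging the offsets before DConv prevents the two motion components from being learned separately.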
# 4.3. Ablation Studies

In this section, we conduct ablation studies to analyze the contribution and computational cost of each component of MDT. We use the GoPro [34] dataset for the comparison and train the model for 150,000 iterations. The GMACs are calculated on an image size of $256 \times 256$. Tab. 7 illustrates the contributions of each module, where CNN and FSAS are from the baseline method, while MDM and RSAS are ours. To be specific, we perform ablation experiments by replacing the corresponding modules. CNN and MDM in the first columns are the embedding layers, while FSAS and RSAS are the self-attention solvers.

Effect of MDM. Tab. 7(d) shows that MDM provides a favorable gain of $0.86\mathrm{~dB}$ in the PSNR metric over the method with the CNN. Moreover, MDM obtains superior results in the LPIPS and NIQE metrics, demonstrating that the quality of the generated images aligns more closely with human perception. The best performance across all metrics verifies the effectiveness of the polarization-aided dual-branch design. This success can be attributed to the separate handling of the two motion components in different coordinate systems. It allows for better exploration of movement

Table 7. Quantitative evaluations of each module in the proposed method on the GoPro [34] dataset.
| | CNN | MDM | FSAS | RSAS | PSNR ↑ | SSIM ↑ | LPIPS ↓ | NIQE ↓ |
|---|---|---|---|---|---|---|---|---|
| (a) | ✓ | | ✓ | | 29.15 | 0.9174 | 0.1634 | 5.457 |
| (b) | | ✓ | ✓ | | 30.60 | 0.9393 | 0.1215 | 5.344 |
| (c) | ✓ | | | ✓ | 31.00 | 0.9428 | 0.1169 | 5.337 |
| (d) | | ✓ | | ✓ | 31.86 | 0.9516 | 0.1059 | 5.253 |
features compared to a convolution layer.

As shown in Tab. 6, the other designs of MDM have lower performance. In (b), the blur has already been removed in deeper layers, and applying MDM again treats the clean content as blur. In row (c), the Transformer block processes the original features in parallel and combines its outputs with the MDM results. This makes it difficult to learn the two types of motion information, which can be pre-extracted by MDM. These two designs also have inferior performance and higher computational costs. When offsets are added before $DConv$ as in (d), the features of the different motion components cannot be learned separately, which negatively impacts performance. In contrast, we use MDM only once and add the outputs sequentially.

Effect of RSAS. Combined with RSAS, the method attains remarkable image deblurring performance, demonstrating the effectiveness of the module. In Tab. 7(c), RSAS yields a $1.85\mathrm{~dB}$ gain over the baseline model in terms of PSNR. This can be viewed as RSAS effectively extracting features along the radius and angles, corresponding to modeling translational and rotational motion information. As a result, the blur pattern can be more accurately estimated. We further provide a visual comparison in Fig. 7(c), where RSAS generates a fine structure of the text.

To better visualize the difference between the Cartesian coordinate-based attention module and the polar one, we present the visual results of our approach and Stripformer [45] in Fig. 8. Due to the strip window along the orthogonal directions, Stripformer still retains a rotational blur trajectory on the license plate, as indicated by the red box in Fig. 8(b). In contrast, in Fig. 8(c),

Table 8. The computational complexity comparison with the input image size $256 \times 256$.
| Method | GMACs (G) | Params (M) | Time (s) |
|---|---|---|---|
| DeblurGAN-v2 [25] | 411.34 | 60.9 | 0.06 |
| MIMO [10] | 154.58 | 16.1 | 0.02 |
| MPRNet [50] | 777.68 | 20.1 | 0.12 |
| Restormer [51] | 141.24 | 26.1 | 0.08 |
| Uformer [48] | 89.46 | 50.9 | 0.04 |
| Stripformer [45] | 177.65 | 19.7 | 0.07 |
| FFTformer [23] | 131.75 | 16.6 | 0.13 |
| UFPNet [16] | 243.78 | 80.3 | 1.32 |
| MDT (Ours) | 112.61 | 14.3 | 0.14 |
![](images/73c514425d84920b5ec43b17edc1ef0d1d2e6773e8e579e3e2b1e71dfc93f11e.jpg)
(a) Blurry image

![](images/2afb725ae14af0d609f3fb1eed705aad6e0be18c09b44f5feee62955653b9059.jpg)
(b) w/o RSAS

![](images/b2121574e11ff6bd5fd6621fd9c131ecf24fb5bd99fe67fa4dd3c638390b3681.jpg)
(c) w/ RSAS

Figure 7. The comparison of RSAS. Adopting the RSAS module generates a fine structure of the word.

![](images/311b694c4034a17e86011c30cbe2637e48ea8a65a4738d6b2c250e582090fba4.jpg)
(a) Blurry image

![](images/afde661dce5de97cb02bda3c027a5bbe4b3448a97d733417f2e2975c849f234b.jpg)
(b) Cartesian system

![](images/88ec2e6f71b549fa9ea16c9b84511c442e86022039e61318f457e9faca20c1e1.jpg)
(c) Polar system

Figure 8. The comparison of different coordinate systems. MDT restores a clearer license plate in the polar system than in the Cartesian system.

MDT restores a much clearer result, demonstrating the effectiveness of the polar design in addressing rotational blur.

Model Efficiency. In Tab. 8, we compare the computational complexity of MDT with 8 state-of-the-art methods. MDT has fewer GMACs than the latest methods, FFTformer [23] and UFPNet [16]. Although Uformer has the lowest GMACs, it also has the second-highest parameter count. Compared to previous works, MDT demonstrates lower or comparable complexity while achieving superior performance, which proves its effectiveness and efficiency.

Exploring the rotational component. To further evaluate the ability of MDT in controlled rotational-blur and non-motion-blur situations, we conduct experiments on defocus, rotation and translation cases. In the case of rotational blur, our approach achieves a PSNR gain of 0.51 dB over the baseline method FFTformer [23], which is significantly greater than the 0.07 dB gain in the defocus blur case. Additionally, the proposed method obtains a more significant improvement in the rotational case than in the translation one.
This verifies that our approach excels at restoring blur that is mainly triggered by rotational motion.

Impact of direction number in RSAS. We conduct experiments with settings of 2, 4 and 8 directions to analyze the impact of the different divisions in RSAS. Based on perceptual (LPIPS/NIQE) and full-reference (PSNR/SSIM) metrics, the 4-direction and 8-direction approaches show similar performance, but both outperform the 2-direction setting. This suggests that incorporating more directions leads

Table 9. The quantitative comparison for different blur types. We conduct experiments on the defocus blur dataset DDPD [1] and manually-curated datasets.
| Method | Defocus PSNR ↑ | Defocus SSIM ↑ | Rotation PSNR ↑ | Rotation SSIM ↑ | Translation PSNR ↑ | Translation SSIM ↑ |
|---|---|---|---|---|---|---|
| FFTformer [23] | 25.89 | 0.8549 | 27.59 | 0.8862 | 31.73 | 0.9497 |
| MDT (Ours) | 25.96 | 0.8550 | 28.10 | 0.8941 | 31.68 | 0.9491 |
to better modeling of the two motion components than only focusing on two translational directions.

![](images/bc6ed903d0759cbca2bd183bf422f0f0a004612fb0b07c8302d1b83f75fe8bc5.jpg)
(a) Input

![](images/a140c745b59592fd2c0484ddbeab769160f3b8a0c1afd3287dc181e78ff9c50b.jpg)
(b) MDT

Figure 9. Examples of the limitation: the text on the tablet is still slightly blurred. MDT is designed based on motion vector decomposition, which results in limitations for defocus deblurring, as such images typically lack motion features.

# 5. Conclusion

This paper introduces the Motion Decomposition Transformer (MDT), which aims to separately explore the rotational and translational motion components. MDT employs two key modules, MDM and RSAS, which capture the separated motion-component features and reconstruct sharp images. By utilizing the polar coordinate system, MDT can better estimate blur patterns triggered by rotational motion while preserving translational details. These designs not only reduce computational costs but also obtain significant performance improvements in image deblurring tasks. Experimental results demonstrate that MDT achieves state-of-the-art performance across 6 datasets.

Limitation. Our method still has limitations for future improvement. Although we design a polar Transformer scheme that deblurs by exploring motion, its capacity to handle blur triggered without motion, such as defocus blur, remains limited. This is primarily due to its formation process differing from that of motion blur, as illustrated in Fig. 9.

Acknowledgements. This work was supported by Shenzhen Science and Technology Program (No. JCYJ20240813114229039), Natural Science Foundation of Tianjin (No. 24JCZXJC00040), National Natural Science Foundation of China (Nos. U22B2049, 62302240, 624B2072), SCCI, Dalian Univ. Tech (No. SCCI2023YB01), and Supercomputing Center of Nankai University.

# References

[1] Abdullah Abuolaim and Michael S Brown.
Defocus deblurring using dual-pixel data. In ECCV, 2020. 8
[2] Akshaya Athwale, Arman Afrasiyabi, Justin Lagüe, Ichrak Shili, Ola Ahmad, and Jean-François Lalonde. Darswin: Distortion aware radial swin transformer. In ICCV, 2023. 2
[3] Stephan Brehm, Sebastian Scherer, and Rainer Lienhart. High-resolution dual-stage multi-level feature aggregation for single image and video deblurring. In CVPR, 2020. 2
[4] Guillermo Carbajal, Patricia Vitoria, José Lezama, and Pablo Muse. Blind motion deblurring with pixel-wise kernel estimation via kernel prediction networks. arXiv preprint arXiv:2308.02947, 2023. 1
[5] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In CVPR, 2021. 3
[6] Liangyu Chen, Xin Lu, Jie Zhang, Xiaojie Chu, and Chengpeng Chen. Hinet: Half instance normalization network for image restoration. In CVPR, 2021. 6
[7] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In ECCV, 2022. 2, 6, 7
[8] Lufei Chen, Xiangpeng Tian, Shuhua Xiong, Yinjie Lei, and Chao Ren. Unsupervised blind image deblurring based on self-enhancement. In CVPR, 2024. 6
[9] Zheng Chen, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, and Xin Yuan. Cross aggregation transformer for image restoration. In NeurIPS, 2022. 2, 3, 4
[10] Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In ICCV, 2021. 2, 5, 6, 8
[11] Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, and Alois Knoll. Selective frequency network for image restoration. In ICLR, 2023. 6
[12] Xiaoyu Deng, Yan Shen, Mingli Song, Dacheng Tao, Jiajun Bu, and Chun Chen. Video-based non-uniform object motion blur estimation and deblurring. Neurocomputing, 2012. 2
[13] Joseph Djugash, Sanjiv Singh, and Benjamin Grocholsky.
Modeling mobile robot motion with polar representations. In IROS, 2009. 2
[14] Jiangxin Dong, Jinshan Pan, Zhongbao Yang, and Jinhui Tang. Multi-scale residual low-pass filter network for image deblurring. In ICCV, 2023. 1
[15] Zhenxuan Fang, Weisheng Dong, Xin Li, Jinjian Wu, Leida Li, and Guangming Shi. Uncertainty learning in kernel estimation for multi-stage blind image super-resolution. In ECCV, 2022. 3
[16] Zhenxuan Fang, Fangfang Wu, Weisheng Dong, Xin Li, Jinjian Wu, and Guangming Shi. Self-supervised non-uniform kernel estimation with flow-based motion prior for blind image deblurring. In CVPR, 2023. 1, 6, 8
[17] Yuchao Feng and Yuxiang Sun. Polarpoint-bev: Bird-eye-view perception in polar points for explainable end-to-end autonomous driving. TIV, 2024. 2
[18] Hongyun Gao, Xin Tao, Xiaoyong Shen, and Jiaya Jia. Dynamic scene deblurring with parameter selective sharing and nested skip connections. In CVPR, 2019. 2
[19] Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Jing Qin, and Pheng-Ann Heng. Direction-aware spatial context features for shadow detection and removal. TPAMI, 2019. 2
[20] Xiang Ji, Haiyang Jiang, and Yinqiang Zheng. Motion blur decomposition with cross-shutter guidance. In CVPR, 2024. 3
[21] Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, and Yu-Gang Jiang. Polarformer: Multi-camera 3d object detection with polar transformers. In AAAI, 2023. 2
[22] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 5
[23] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In CVPR, 2023. 1, 3, 4, 5, 6, 7, 8
[24] Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. In CVPR, 2018. 3
[25] Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang.
Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In ICCV, 2019. 6, 7, 8
[26] Chanseok Lee, Jeongsol Kim, Seungmin Lee, Jaehwang Jung, Yunje Cho, Taejoong Kim, Taeyong Jo, Myungjun Lee, and Jang Mooseok. Blind image deblurring with noise-robust kernel estimation. In ECCV, 2024. 3
[27] Hyukzae Lee, Chanho Jung, and Changick Kim. Blind deblurring of text images using a text-specific hybrid dictionary. TIP, 2019. 3
[28] Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. Efficient and explicit modelling of image hierarchies for image restoration. In CVPR, 2023. 6
[29] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In ICCV Workshops, 2021. 1, 3
[30] Chengxu Liu, Xuan Wang, Xiangyu Xu, Ruhao Tian, Shuai Li, Xueming Qian, and Ming-Hsuan Yang. Motion-adaptive separable collaborative filters for blind motion deblurring. In CVPR, 2024. 3, 6
[31] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Aleš Leonardis, Wengang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiréing. In ECCV, 2020. 2
[32] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 5
[33] Ziwei Luo, Fredrik K Gustafsson, Zheng Zhao, Jens Sjölund, and Thomas B Schön. Image restoration with mean-reverting stochastic differential equations. In ICML, 2023. 6
[34] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017. 1, 2, 5, 6, 7
[35] Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, and Kyoung Mu Lee. Ntire 2021 challenge on image deblurring. In CVPR, 2021. 5, 6
[36] Jinshan Pan, Jiangxin Dong, Yang Liu, Jiawei Zhang, Jimmy Ren, Jinhui Tang, Yu-Wing Tai, and Ming-Hsuan Yang.
Physics-based generative adversarial models for image restoration and beyond. TPAMI, 2020. 1 +[37] Liyuan Pan, Yuchao Dai, Miaomiao Liu, Fatih Porikli, and Quan Pan. Joint stereo video deblurring, scene flow estimation and moving object segmentation. TIP, 2019. 3 +[38] Yaolei Qi, Yuting He, Xiaoming Qi, Yuan Zhang, and Guanyu Yang. Dynamic snake convolution based on topological geometric constraints for tubular structure segmentation. In CVPR, 2023. 3 +[39] Jaesung Rim, Haeyun Lee, Jucheol Won, and Sunghyun Cho. Real-world blur dataset for learning and benchmarking deblurring algorithms. In ECCV, 2020. 5, 6, 7 +[40] Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee, and Sunghyun Cho. Realistic blur synthesis for learning image deblurring. In ECCV, 2022. 5, 6 +[41] Ziyi Shen, Wenguan Wang, Xiankai Lu, Jianbing Shen, Haibin Ling, Tingfa Xu, and Ling Shao. Human-aware motion deblurring. In ICCV, 2019. 5, 6, 7 +[42] Jian Sun, Wenfei Cao, Zongben Xu, and Jean Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In CVPR, 2015. 2 +[43] Zhijing Sun, Xueyang Fu, Longzhuo Huang, Aiping Liu, and Zheng-Jun Zha. Motion aware event representation-driven image deblurring. In ECCV, 2024. 3 +[44] Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In CVPR, 2018. 1, 6 +[45] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. Stripformer: Strip transformer for fast image deblurring. In ECCV, 2022. 2, 3, 4, 5, 6, 7, 8 +[46] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In CVPR, 2022. 6 +[47] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 1 +[48] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. 
Uformer: A general u-shaped transformer for image restoration. In CVPR, 2022. 1, 3, 6, 7, 8 +[49] Jia-Hao Wu, Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Chia-Wen Lin, and Yen-Yu Lin. Id-blau: Image deblurring by implicit diffusion-based reblurring augmentation. In CVPR, 2024. 6 +[50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In CVPR, 2021. 1, 2, 6, 7, 8 +[51] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In CVPR, 2022. 1, 3, 5, 6, 7, 8 + +[52] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In CVPR, 2019. 1, 2, 6 +[53] Huicong Zhang, Haozhe Xie, and Hongxun Yao. Blur-aware spatio-temporal sparse transformer for video deblurring. In CVPR, 2024. 3 +[54] Jiawei Zhang, Jinshan Pan, Jimmy Ren, Yibing Song, Linchao Bao, Rynson W.H. Lau, and Ming-Hsuan Yang. Dynamic scene deblurring using spatially variant recurrent neural networks. In CVPR, 2018. 1 +[55] Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In CVPR, 2020. 5, 6 +[56] Haiyu Zhao, Yuanbiao Gou, Boyun Li, Dezhong Peng, Jiancheng Lv, and Xi Peng. Comprehensive and delicate: An efficient transformer for image restoration. In CVPR, 2023. 6, 7 +[57] Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, and Jufeng Yang. Adapt or perish: Adaptive sparse transformer with attentive feature refinement for image restoration. In CVPR, 2024. 3 +[58] Shihao Zhou, Jinshan Pan, Jinglei Shi, Duosheng Chen, Lishen Qu, and Jufeng Yang. Seeing the unseen: A frequency prompt guided transformer for image restoration. In ECCV, 2024. 6, 7 +[59] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. 
Deformable convnets v2: More deformable, better results. In CVPR, 2019. 2, 3

# A Regularization-Guided Equivariant Approach for Image Restoration

Yulu Bai $^{1,*}$, Jiahong Fu $^{1,*}$, Qi Xie $^{1,\dagger}$, Deyu Meng $^{1,2,3}$

$^{1}$ School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China

$^{2}$ Macau University of Science and Technology, Taipa, Macao

$^{3}$ Pengcheng Laboratory, Shenzhen, China

https://github.com/yulu919/EQ-REG

# Abstract

Equivariant and invariant deep learning models have been developed to exploit intrinsic symmetries in data, demonstrating significant effectiveness in certain scenarios. However, these methods often suffer from limited representation accuracy and rely on strict symmetry assumptions that may not hold in practice. These limitations pose a significant drawback for image restoration tasks, which demand high accuracy and precise symmetry representation. To address these challenges, we propose a rotation-equivariant regularization strategy that adaptively enforces the appropriate symmetry constraints on the data while preserving the network's representational accuracy. Specifically, we introduce EQ-Reg, a regularizer designed to enhance rotation equivariance, which innovatively extends the insights of data-augmentation-based and equivariant-based methodologies. This is achieved through self-supervised learning and the spatial rotation and cyclic channel shift of feature maps deduced in the equivariant framework. Our approach enables, for the first time, a non-strictly equivariant network suitable for image restoration, providing a simple and adaptive mechanism for adjusting equivariance to the task. Extensive experiments across three low-level tasks demonstrate the superior accuracy and generalization capability of our method, outperforming state-of-the-art approaches.

# 1. Introduction

Various types of transformation symmetry can be observed in both local features and global semantic representations of images, including translation, rotation, reflection and scaling symmetries [1]. Compared to fully connected networks, convolutional neural networks (CNNs) incorporate translation symmetry into their architecture.
This intrinsic property has contributed to the widespread success of

![](images/8c07736e44855098954c7958ccae717d2b35825dacaa637d0124bc2d5c9a32b7.jpg)
Figure 1. The feature maps of trained neural networks for CNN+Data Aug., EQ-CNN [11], and the proposed EQ-Reg.

CNNs across various computer vision tasks, including image restoration (IR), segmentation, and recognition [2]. Recent advances [3-10] have highlighted the crucial role of incorporating transformation symmetry priors, such as equivariant networks, in the design of network architectures.

Data augmentation is one of the most commonly used techniques for introducing equivariance to transformations beyond simple translation [3, 4, 6]. The idea is to enrich the training set with transformed samples and train networks on the augmented dataset to obtain a model that is robust to transformations. A key advantage of this approach is that it enhances the network's equivariance without necessitating any structural modifications. As a result, it remains both highly feasible and effective, preserving the representational capacity while introducing transformation robustness. This is why it continues to be a widely adopted strategy in IR tasks.

However, there are also notable limitations in data-augmentation-based methods. The primary issue is that the supervision imposed on the network is overly simplistic, relying solely on self-supervision at the final output. As shown in Fig. 2(a), it is a non-trivial task to construct regularization for the feature maps of a commonly used CNN, as the transformations of the feature maps are unpredictable when the input image undergoes a transformation. Meanwhile, in current data-augmentation approaches, the internal feature layers of the
+ +Recently, incorporating transformation equivariance into network architectures has attracted significant research interest. Prominent examples in this line of work include Equivariant Convolutional Networks (EQ-CNNs) [5] and its subsequent variants [8-10]. Taking rotation equivariance as an example, the EQ-CNN framework ingeniously employs rotation and cyclic properties of convolutional kernels to achieve strictly equivariant networks. It ensures that all feature layers in the network are equivariant, meaning that as the input rotates, only spatial rotation and channel cyclic shift occur, with no other unpredictable changes (as shown in Fig.2 (b)). More importantly, it has been theoretically proven that the EQ-CNN framework is the only method to achieve transformation equivariance1 [7]. + +Although significant progress has been achieved in the design of equivariant networks, there are still issues that need consideration in practical applications. Particularly for IR tasks that require high representation accuracy, the most critical challenge lies in the accuracy of filter representation. + +On the one hand, in the EQ-CNN framework, the convolution kernel must be transformed according to the group structure, inevitably introducing the filter parameterization techniques (as shown in Fig.2 (b)). When performing filter parameterization, discrete parameters often struggle to fully capture the continuous function, leading to a degradation in representation accuracy. As a result, current EQ-CNN methods are typically limited to tasks such as classification and are less suitable for IR tasks [8-10]. Very recently, Xie et al.[11] introduced a filter Fourier-series-expansion-based parameterization approach for image super resolution, achieving an improvement in representation accuracy and verifying the importance of equivariance for IR tasks. However, the issue of representation accuracy remains unresolved, occasionally resulting in performance degradation. 
On the other hand, real-world data rarely exhibit perfect transformation symmetry, implying that the strict equivariance of the EQ-CNN framework may not always be appropriate. Specifically, in IR, the degradation process often violates the visibility condition [12], leading to corrupted measurements that do not share the same symmetry priors as the underlying high-quality image. Imposing strict symmetry constraints on such inputs during neural network design can produce erroneous textures in the restored result. Recent studies have proposed multiple methods for relaxing strict equivariance [13-15]. However, these approaches remain confined to the framework of group convolution: the parameter-sharing techniques associated with group convolution kernels cannot circumvent the issue of diminished representation accuracy. Furthermore, there are no theoretical guarantees for these heuristic constructions. As a result, these methods cannot achieve satisfactory results in low-level vision tasks.

Table 1. The difference between the proposed EQ-Reg and common methods. Symbol $\bigcirc$ represents approximate equivariance.

| | Data Aug. | EQ-CNN | EQ-Reg |
|---|---|---|---|
| Feature Map Equiv. | X | ✓ | $\bigcirc$ |
| Output Equiv. | $\bigcirc$ | ✓ | $\bigcirc$ |
| Equiv. Degree Adjustable | X | X | ✓ |
| Cyclic Shift of the Channel | X | ✓ | ✓ |
| Representation Accuracy | Maintain | May drop | Maintain |
| Beyond CNN Architecture | Easy | Hard | Easy |

To overcome the limitations of data-augmentation-based and EQ-CNN-based approaches, we propose a straightforward and effective regularization-based strategy for achieving equivariance in IR tasks, currently focused on rotation-equivariant networks. The core idea is to integrate the spatial-rotation and channel-cyclic-shift principles, derived from the EQ-CNN framework, into the intermediate layers of commonly used non-equivariant networks. Alterations in the intermediate layers induced by random rotation of the input image are then calculated and employed as a regularization loss for self-supervised training. The contributions of this work can be summarized as follows:

1) This work innovatively combines the insights of data-augmentation-based and EQ-CNN-based methods by designing rotation-equivariant regularization strategies for IR tasks. Compared to data-augmentation-based methods, our method achieves self-supervision of equivariance across all layers for the first time. Compared to EQ-CNN-based methods, the proposed method does not require modifying network architectures. Instead, we introduce a regularization term during the training process, offering a practical and effective solution. Furthermore, it opens up new avenues for designing equivariant networks beyond the traditional CNN framework. Table 1 highlights the specific advantages of the proposed method over previous approaches.
2) The proposed method is the first non-strictly equivariant network designed for IR tasks.
Specifically, our approach introduces a simple yet effective mechanism for adaptively adjusting equivariance, making it versatile for various applications, including those involving non-strict symmetries or cases where strict equivariance is disrupted by data degradation. As demonstrated in Figure 1, the proposed EQ-Regularization effectively adapts to capture image symmetry priors directly from the data.
3) We demonstrate the effectiveness of our approach across various IR tasks, including medical image reconstruction, image deraining, and image inpainting, as well as in image classification. Extensive experiments conducted on multiple tasks demonstrate the performance and generalizability of the proposed method, surpassing current state-of-the-art (SOTA) methods. Further analysis and visualizations validate the rationale and efficacy of our approach for broader vision tasks in real-world applications.

# 2. Related Work

# 2.1. Strict Equivariant CNNs

Early attempts at exploiting transformation symmetry priors in images primarily relied on heuristic approaches [16-19], with data augmentation [16] being the most widely used. Several attempts have been made to design end-to-end self-supervised learning frameworks that leverage equivariant priors in images [6, 20, 21]. However, these methods apply supervision only at the network's final output.

Recent efforts to incorporate transformation symmetry priors in images have primarily focused on embedding transformation equivariance directly into network architectures via equivariant convolution designs. Notably, GCNN [5] and HexaConv [22] explicitly integrate $\pi/2$ and $\pi/3$ rotation equivariance into the neural network, respectively. Despite these advancements, achieving rotation equivariance for arbitrary angles remains challenging due to the reliance on discrete filter designs in these approaches.
More comprehensive forms of equivariance have been explored through techniques such as interpolation [23, 24] and Gaussian resampling [25]. However, in these methods the expected equivariance cannot be theoretically guaranteed.

Current equivariant CNN methods exploit filter parametrization techniques to rotate filters arbitrarily in the continuous domain. Early approaches by [8] and [9] introduced harmonics as steerable filters to achieve exact equivariance for larger transformation groups within the continuous domain. The harmonic-based approach ensures complete rotation equivariance, making it a compelling focus for both practical applications and theoretical research. Typically, [7] and [26] provided a theoretical treatment of equivariant convolution and derived generalized convolution formulas. Subsequently, [10] and [27] designed equivariance by relating convolution to partial differential operators and proposed PDO-eConv. However, these filter parameterization approaches suffer from limited expressive accuracy, which negatively impacts performance in image restoration (IR) tasks. Very recently, Xie et al. [11] proposed a Fourier-series-expansion-based filter parametrization with relatively high expression accuracy.

# 2.2. Soft Equivariant CNNs

While symmetry constraints can be highly effective in machine learning, they may become restrictive when the data do not strictly adhere to perfect symmetry. Relaxing the rigid assumptions of equivariant networks to balance inductive bias and expressiveness in deep learning has been the pursuit of several recent works.

Elsayed et al. [28] demonstrated that strict spatial invariance can be overly restrictive and that relaxing spatial weight sharing can outperform both convolutional and locally connected models. Building on this, Wang et al. [13] extended this weight relaxation scheme to broader symmetry groups such as rotation $SO(2)$, scaling $\mathbb{R}_{>0}$, and Euclidean $E(2)$.
They define "approximate equivariance" explicitly and model it through a relaxed group convolutional layer. Further, Romero et al. [14] proposed Partial-GCNN, enforcing strict equivariance on selected elements of the symmetry group. However, their method relies on constructing probability distributions over group elements and sampling from them, which both limits its applicability and adds complexity to implementation.

Finzi et al. [29] introduced a mechanism for modeling soft equivariance by integrating equivariant and non-equivariant MLP layers. However, the high parameter count of fully connected layers makes this approach impractical for large-scale datasets. Similarly, Kim et al. [30] introduced a regularizer-based approach, using a Projection-Based Equivariance Regularizer to achieve approximate equivariance and model mixed approximate symmetry. In summary, most existing methods for relaxing symmetry constraints are limited to the architecture of group-equivariant convolutional networks or rely on heuristic approaches that lack theoretical guarantees. Furthermore, the effectiveness of these methods on low-level vision tasks, such as IR, remains largely unvalidated.

# 3. Method Framework

In this section, we first present the foundational concepts required for constructing the equivariant regularization. Next, we describe the proposed regularizer and provide implementation details for its practical use.

# 3.1. Prior Knowledge

Equivariance. Equivariance of a mapping means that a transformation of the input results in a predictable transformation of the output [10, 26]. In this work, we concentrate on achieving rotation equivariance in 2D convolutions. Rotation equivariance ensures that rotating the input leads solely to a corresponding rotation of the output, without introducing any additional, unpredictable variations.
Mathematically, let $\Psi$ be a convolution mapping from the input feature space to the output feature space, and let $S$ be a subgroup of rotation transformations, i.e.,

$$
\left\{ A_{k} = \left[ \begin{array}{ll} \cos 2\pi k/t & \sin 2\pi k/t \\ -\sin 2\pi k/t & \cos 2\pi k/t \end{array} \right] \Big| \; k = 0, 1, \dots, t-1 \right\}. \tag{1}
$$

Then, $\Psi$ is equivariant with respect to $S$ if, for any rotation matrix $\tilde{A} \in S$,

$$
\Psi\left[ \pi_{\tilde{A}}^{I} \right](I) = \pi_{\tilde{A}}^{F}[\Psi](I), \tag{2}
$$

where $I$ is an input image; $\pi_{\tilde{A}}^{I}$ and $\pi_{\tilde{A}}^{F}$ denote how the transformation $\tilde{A}$ acts on the input image and the output features, respectively; and $[\cdot]$ denotes the composition of functions.

![](images/8b8c82d958871b6fb46e206e7b88bd15a7234ac8430b1c62f7ca5eed9af18338.jpg)
(a) Common CNN

![](images/163193932b5c563d436d7181f7eca5b00fb5961ff6b5962809b0aae11014a928.jpg)
(b) EQ-CNN

![](images/c3589add464067a9b40321f97d4a6c92ce8c7436d1ad2d065930057bd5530a50.jpg)
(c) EQ-Regularization CNN
Figure 2. Illustrations of the feature maps of different CNNs when rotating the input images by $2\pi/3$. (a) A common CNN. (b) The rotation EQ-CNN. (c) The proposed EQ-Regularization CNN.

Structure of feature maps. For an input image $I$ of size $h \times w \times n_0$ ($n_0 = 1$ for a grey image and $n_0 = 3$ for a color image), we denote the intermediate feature map as $F$, a multi-channel tensor of size $h \times w \times n \times t$. Here, the third dimension corresponds to the feature channels, while the fourth dimension represents a selected rotation subgroup $S$. Following previous work [10, 26], we denote the feature map corresponding to the $k$-th group element as $F^{A_k} \in \mathbb{R}^{h \times w \times n}$, where $A_k$ is a rotation matrix in $S$, also used as an index denoting a specific tensor mode in the feature map $F$.

# 3.2.
Proposed Regularizer

# Why Pursue High Accuracy and Adaptability?

Previous work has provided clear guidelines for designing neural networks to handle data with non-trivial symmetries [7]. As stated in Lemma 1, a feedforward network is equivariant to the group action if and only if it adheres to the convolutional operation described in Eq. (3).

Lemma 1 A feedforward neural network $\mathcal{N}$ is equivariant to the action of a compact group $G$ on its inputs if and only if each layer of $\mathcal{N}$ implements a generalized form of convolution derived from the following formula:

$$
(f * g)(u) = \int_{G} f(uv^{-1}) g(v) \, d\mu(v), \tag{3}
$$

where $f$ and $g$ are two functions $G \to \mathbb{C}$, and integration is with respect to the Haar measure $\mu$.

Therefore, the construction of a rotation-equivariant convolutional network can only proceed according to the above formulation, with the compact group $G$ taken as the roto-translation group $SE(2)$.

Based on Lemma 1, it can be proved that current equivariant convolutions are exact in the continuous domain. However, they become approximate after the discretization necessary for real-world applications.
In this case, the latest theoretical analysis has derived the equivariant error for the entire equivariant network under arbitrary rotation degrees, stated as follows [31]:

Lemma 2 For an image $I$ of size $h \times w \times n_0$ and an $N$-layer rotation-equivariant CNN $\mathrm{N}_{eq}(\cdot)$, under proper conditions the following result holds:

$$
\left| \mathrm{N}_{eq}\left[ \pi_{\theta} \right](I) - \pi_{\theta}\left[ \mathrm{N}_{eq} \right](I) \right| \leq C_{1} m^{2} + C_{2} p m t^{-1}, \quad \forall \theta = 2k\pi/t, \; k = 1, 2, \dots, t, \tag{4}
$$

where $\pi_{\theta}$ defines the rotation transformation of the input image (or of a feature map), $m$ is the mesh size, $p$ is the filter size, and $C_1, C_2$ are positive constants.

The left side of Eq. (4) denotes the equivariant error, i.e., the difference between results obtained when rotations are applied before and after convolution. While the equation suggests that reducing $m$ to an infinitely small value and increasing $t$ to an infinitely large value would drive the equivariant error to zero, achieving such extremes is impractical in real-world implementations. Consequently, the equivariant network inevitably introduces errors, leading to reduced representation accuracy. Furthermore, as illustrated in Fig. 1 (b), with constraints such as a fixed angle selection (e.g., $t = 4$) and a finite value of $m$, the equivariant network learns a strict symmetry constrained to four angles. This is why the circular features in Fig. 1 (b) and (e) appear nearly square.

# How to Achieve High Accuracy and Adaptability?

In an equivariant network, the rotation of the input image $I \in \mathbb{R}^{h \times w \times n_0}$ causes the rotation of the feature map $F \in \mathbb{R}^{h \times w \times n}$ in convolution.
In addition to rotating in the spatial dimension, the feature $F$ will naturally be accompanied by a pattern transformation with respect to the rotation group $S$, due to the property of the group-equivariant network. Specifically, the rotation of the feature map is consistent with a spatial rotation of the first two dimensions and a cyclic shift along the final dimension, as illustrated in Fig. 2 (b). Formally, for $\forall \tilde{A} \in S$, we have

$$
\pi_{\tilde{A}}^{F}(F) = \left[ \pi_{\tilde{A}}^{F}\left(F^{\tilde{A}^{-1} A_{1}}\right), \pi_{\tilde{A}}^{F}\left(F^{\tilde{A}^{-1} A_{2}}\right), \dots, \pi_{\tilde{A}}^{F}\left(F^{\tilde{A}^{-1} A_{t}}\right) \right], \tag{5}
$$

where $F^{A_k}, k = 1, 2, \dots, t$, are tensors of size $h \times w \times n$, each viewed as an $n$-channel image when performing the spatial rotation $\pi_{\tilde{A}}^{I}$.

To overcome the limitations imposed by the representation accuracy errors of equivariant networks, and to effectively incorporate rotational symmetry priors in images, we draw inspiration from the inherent rotation and cyclic-shift properties of feature-map channels in group-equivariant convolution. We introduce a regularization term, defined in Eq. (6), which constrains the feature maps at each convolutional layer, thereby enhancing the network's equivariance:

$$
L_{layer} = \left\| \pi_{\tilde{A}}^{F}\left(\phi^{(l)}(I)\right) - \phi^{(l)}\left(\pi_{\tilde{A}}^{I}(I)\right) \right\|_{F}^{2}, \tag{6}
$$

where $\phi^{(l)}$ denotes the first $l$ layers of the convolution network, and $\tilde{A}$ is a rotation matrix with a randomly selected angle in $\{2k\pi/t \,|\, k = 1, 2, \dots, t\}$.

For the first term on the right side of Eq. (6), denoting $\phi^{(l)}(I) := F_l$, $\pi_{\tilde{A}}^{F}(F_l)$ is defined in Eq. (5).
For the second term on the right side of Eq. (6), denoting $\pi_{\tilde{A}}^{I}(I) := r(I)$, we have:

$$
\phi^{(l)}(r(I)) = \left[ r(F)^{A_{1}}, r(F)^{A_{2}}, \dots, r(F)^{A_{t}} \right]. \tag{7}
$$

According to the above formula, we calculate the error between the feature maps obtained by rotating the image before convolution and those obtained by convolving first and then rotating, followed by a cyclic shift of the channels, as shown in Fig. 2 (c). Instead of using an equivariant convolution architecture, which would introduce representation accuracy errors, we retain common convolution layers, which maintain high accuracy. Our goal is to guide the feature maps after each convolution layer so that they approach the strictly equivariant feature maps of group-equivariant convolution networks.

The whole regularization term of the network can then be formulated as:

$$
L_{equi} = \mathbb{E}_{\tilde{A}}\left( \sum_{l} \left\| \pi_{\tilde{A}}^{F}\left(\phi^{(l)}(I)\right) - \phi^{(l)}\left(\pi_{\tilde{A}}^{I}(I)\right) \right\|_{F}^{2} \right). \tag{8}
$$

We utilize this regularization to guide training, allowing the model to learn an adaptive symmetry prior from the data.

CNNs trained with the regularization term in Eq. (8) are not constrained by the strict rotational symmetry prior imposed by rotation-equivariant convolution. Instead, they learn a rotational symmetry prior that adapts to the distribution of real-world degraded images. Furthermore, unlike other methods with relaxed equivariance constraints, our method achieves high-precision image restoration with a solid theoretical foundation.

Implementation Details. Since the proposed method only constrains standard convolution modules, it can be applied to arbitrary networks in a plug-and-play manner.
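In code, the per-layer loss of Eq. (6) and its average over group angles, as in Eq. (8), take roughly the following form (a toy numpy sketch for $t = 4$; the reshaping of ordinary channels into an $n \times t$ group structure, the shift direction, and the stand-in layer $\phi$ are our own assumptions, not the paper's implementation):

```python
import numpy as np

def rotate_feature(F, j):
    """Apply pi^F for a rotation by j quarter-turns: spatial rotation of the
    first two axes plus a cyclic shift along the group axis, as in Eq. (5)."""
    return np.roll(np.rot90(F, j, axes=(0, 1)), j, axis=3)

def equi_regularizer(phi, I, t=4):
    """Average of the Eq. (6) loss over the angles {2*pi*k/t | k = 1..t}.

    phi maps an image to a feature tensor of shape (h', w', n, t).
    """
    loss = 0.0
    for k in range(1, t + 1):
        j = k % t                        # quarter-turns; k = t is the identity
        F_rot = phi(np.rot90(I, j))      # features of the rotated input
        diff = rotate_feature(phi(I), j) - F_rot
        loss += np.sum(diff ** 2)        # squared Frobenius norm, Eq. (6)
    return loss / t

# A deliberately non-equivariant "layer": pointwise channel mixing, with the
# 8 outputs reshaped so the last axis plays the role of the group dimension.
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 8))          # 3 input channels -> 8 = n*t outputs

def phi(I):
    return (I @ W).reshape(I.shape[0], I.shape[1], 2, 4)

I = rng.standard_normal((16, 16, 3))
print(equi_regularizer(phi, I) > 0)      # True: this layer is not equivariant
```

During training, this term would simply be added to the task loss with a weighting coefficient; minimizing it pushes each layer's features toward the rotation-plus-cyclic-shift structure of Eq. (5).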
For an input image $I \in \mathbb{R}^{h \times w \times n_0}$, a random rotation transformation $\tilde{A}$ from the rotation group is applied to $I$, resulting in $\pi_{\tilde{A}}^{I}(I)$. Both the original image $I$ and $\pi_{\tilde{A}}^{I}(I)$ are passed through the first layer $\phi^{(1)}(\cdot)$ to obtain the output features $\phi^{(1)}(I)$ and $\phi^{(1)}\left(\pi_{\tilde{A}}^{I}(I)\right)$. As shown in Fig. 2 (c), we perform a spatial rotation and a group-channel cyclic shift on the feature $\phi^{(1)}(I)$ according to the transformation $\tilde{A}$ to obtain $\pi_{\tilde{A}}^{F}\left(\phi^{(1)}(I)\right)$. Subsequently, the loss $L_{layer}^{(1)}$ for the first layer is computed using Eq. (6). The process is applied iteratively to the subsequent layers, culminating in the computation of the overall equivariant loss $L_{equi}$ for the entire network.

# 4. Experimental Results

# 4.1. Metal Artifact Reduction in CT Image

Network setting. To comprehensively verify the effect of the proposed regularizer, we compare it with existing rotation-equivariant methods, including GCNN [5], E2-CNN [26], PDO-eConv [10], and F-Conv [11], as well as the partially equivariant method Partial-GCNN [14]. Specifically, for ACDNet, we construct strictly equivariant and partially equivariant networks by replacing the regular convolutions in the original network with the above methods, denoted ACDNet-gconn, ACDNet-e2cnn, ACDNet-pdoe, ACDNet-fconv, and ACDNet-partial, respectively. Our method is denoted ACDNet-reg. We use the same notation for DICDNet and OSCNet, ensuring clarity and coherence throughout. Among these methods, we perform the experiments on the $p4$ rotation group, i.e., $t = 4$. Besides, the channel numbers of the strictly equivariant methods are set to $\frac{1}{4}$ of the original network. All training settings and loss functions are kept the same as in the original methods for fair comparison.

Table 2.
Average PSNR/SSIM of different competing methods on synthesized DeepLesion [32]. + +
| Method | Large Metal | → | Medium Metal | → | Small Metal | Average |
|---|---|---|---|---|---|---|
| Input | 24.12 / 0.6761 | 26.13 / 0.7471 | 27.75 / 0.7659 | 28.53 / 0.7964 | 28.78 / 0.8076 | 27.06 / 0.7586 |
| LI [33] | 27.21 / 0.8920 | 28.31 / 0.9185 | 29.86 / 0.9464 | 30.40 / 0.9555 | 30.57 / 0.9608 | 29.27 / 0.9347 |
| NMAR [34] | 27.66 / 0.9114 | 28.81 / 0.9373 | 29.69 / 0.9465 | 30.44 / 0.9591 | 30.79 / 0.9669 | 29.48 / 0.9442 |
| CNNMAR [35] | 28.92 / 0.9433 | 29.89 / 0.9588 | 30.84 / 0.9706 | 31.11 / 0.9743 | 31.14 / 0.9752 | 30.38 / 0.9644 |
| DuDoNet [36] | 29.87 / 0.9723 | 30.60 / 0.9786 | 31.46 / 0.9839 | 31.85 / 0.9858 | 31.91 / 0.9862 | 31.14 / 0.9814 |
| DSCMAR [37] | 34.04 / 0.9343 | 33.10 / 0.9362 | 33.37 / 0.9384 | 32.75 / 0.9393 | 32.77 / 0.9395 | 33.21 / 0.9375 |
| InDuDoNet [38] | 36.74 / 0.9742 | 39.32 / 0.9893 | 41.86 / 0.9944 | 44.47 / 0.9948 | 45.01 / 0.9958 | 41.48 / 0.9897 |
| ACDNet [39] | 37.84 / 0.9894 | 39.74 / 0.9928 | 41.86 / 0.9950 | 43.24 / 0.9960 | 43.96 / 0.9964 | 41.33 / 0.9939 |
| ACDNet-partial | 34.33 / 0.9591 | 37.21 / 0.9754 | 39.02 / 0.9828 | 41.00 / 0.9854 | 41.28 / 0.9866 | 38.57 / 0.9978 |
| ACDNet-pdoe | 35.09 / 0.9773 | 38.22 / 0.9880 | 40.43 / 0.9928 | 43.26 / 0.9950 | 43.75 / 0.9955 | 40.15 / 0.9897 |
| ACDNet-e2cnn | 37.00 / 0.9861 | 39.04 / 0.9908 | 41.29 / 0.9940 | 43.40 / 0.9955 | 43.35 / 0.9959 | 40.81 / 0.9925 |
| ACDNet-gconn | 37.70 / 0.9878 | 40.14 / 0.9926 | 41.97 / 0.9948 | 43.75 / 0.9962 | 43.86 / 0.9964 | 41.48 / 0.9935 |
| ACDNet-fconv | 38.60 / 0.9888 | 40.20 / 0.9928 | 42.24 / 0.9950 | 44.42 / 0.9961 | 44.58 / 0.9965 | 42.01 / 0.9938 |
| ACDNet-reg | 38.99 / 0.9899 | 40.60 / 0.9932 | 42.13 / 0.9954 | 43.88 / 0.9965 | 44.76 / 0.9967 | 42.07 / 0.9943 |
| DICDNet [39] | 38.88 / 0.9895 | 39.86 / 0.9923 | 42.85 / 0.9951 | 44.61 / 0.9957 | 45.74 / 0.9965 | 42.38 / 0.9938 |
| DICDNet-partial | 37.46 / 0.9861 | 39.85 / 0.9915 | 41.94 / 0.9941 | 44.77 / 0.9954 | 44.90 / 0.9960 | 41.79 / 0.9926 |
| DICDNet-pdoe | 35.43 / 0.9774 | 38.48 / 0.9873 | 41.03 / 0.9924 | 43.79 / 0.9943 | 44.32 / 0.9950 | 40.61 / 0.9893 |
| DICDNet-e2cnn | 37.30 / 0.9854 | 39.76 / 0.9910 | 42.08 / 0.9939 | 44.95 / 0.9954 | 45.13 / 0.9957 | 41.84 / 0.9923 |
| DICDNet-gconn | 38.18 / 0.9879 | 39.81 / 0.9919 | 42.33 / 0.9946 | 45.03 / 0.9959 | 45.41 / 0.9963 | 42.15 / 0.9933 |
| DICDNet-fconv | 38.76 / 0.9886 | 39.96 / 0.9922 | 42.91 / 0.9950 | 44.99 / 0.9958 | 45.72 / 0.9964 | 42.47 / 0.9936 |
| DICDNet-reg | 39.23 / 0.9901 | 40.23 / 0.9927 | 43.47 / 0.9955 | 46.08 / 0.9966 | 46.20 / 0.9968 | 43.04 / 0.9943 |
| OSCNet [40] | 39.04 / 0.9895 | 40.09 / 0.9924 | 43.12 / 0.9952 | 44.93 / 0.9957 | 45.97 / 0.9965 | 42.63 / 0.9939 |
| OSCNet-partial | 37.33 / 0.9859 | 40.02 / 0.9918 | 42.28 / 0.9943 | 44.99 / 0.9956 | 45.30 / 0.9960 | 41.98 / 0.9927 |
| OSCNet-pdoe | 35.51 / 0.9777 | 38.29 / 0.9874 | 40.84 / 0.9921 | 43.50 / 0.9941 | 43.90 / 0.9948 | 40.41 / 0.9892 |
| OSCNet-e2cnn | 37.17 / 0.9847 | 39.27 / 0.9907 | 42.08 / 0.9940 | 44.44 / 0.9951 | 44.93 / 0.9957 | 41.58 / 0.9920 |
| OSCNet-gconn | 38.51 / 0.9885 | 40.03 / 0.9923 | 42.68 / 0.9949 | 45.42 / 0.9962 | 45.50 / 0.9964 | 42.43 / 0.9937 |
| OSCNet-fconv | 39.14 / 0.9895 | 40.35 / 0.9927 | 42.94 / 0.9952 | 45.63 / 0.9963 | 45.83 / 0.9965 | 42.78 / 0.9940 |
| OSCNet-reg | 38.92 / 0.9898 | 40.79 / 0.9932 | 43.33 / 0.9954 | 45.91 / 0.9965 | 46.02 / 0.9967 | 43.00 / 0.9943 |
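For reference, the PSNR values reported in Tables 2 and 3 follow the standard definition in terms of the mean squared error between the restored image and the ground truth (the $[0, \text{data\_range}]$ scaling convention in this sketch is our assumption; CT images may use a different range):

```python
import numpy as np

def psnr(ref, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
rec = np.full((8, 8), 0.1)   # constant error of 0.1 -> MSE = 0.01
print(psnr(ref, rec))        # about 20 dB
```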
![](images/915d7df0337b559450fbac6e9c2729e22805573b0a5d1ef8eaf5c7015715c74e.jpg)
Figure 3. Performance comparison on a typical metal-corrupted CT image from the synthesized DeepLesion [32]. The red pixels stand for metallic implants.

Datasets and Training Settings. Following the synthesis procedure of [39, 40], we generate paired $X$ and $Y$ for training and testing using 1,200 clean CT images from DeepLesion [32] and 100 simulated metallic implants from [35]. Consistent with the original work, we randomly select 90 masks and 1,000 clean CT images to synthesize metal-corrupted training samples; the remaining 10 masks and 200 clean CT images serve as test samples. For training, except that the total number of epochs for DICDNet is changed to 200 to ensure convergence, all methods with replaced rotation-equivariant networks remain consistent with the original method.

Quantitative and Qualitative Comparison. As illustrated in Table 2, replacing a network with its strictly rotation-equivariant version slightly reduces performance, indicating that the symmetry in the data may be imperfect while such a network imposes strict symmetry. Secondly, the performance of rotation-equivariant networks implemented with early parameterization methods, such as PDO-eConv and E2CNN, degrades significantly due to low representation accuracy. Partial-GCNN, which relaxes strict equivariance constraints, also performs poorly on the metal artifact reduction task.

Table 3. Average PSNR/SSIM of different competing methods on four benchmark datasets.
| Method | Rain100L [41] (PSNR / SSIM) | Rain100H [41] (PSNR / SSIM) | Rain1400 [42] (PSNR / SSIM) | Rain12 [43] (PSNR / SSIM) |
|---|---|---|---|---|
| Input | 26.90 / 0.8384 | 13.56 / 0.3709 | 25.24 / 0.8097 | 30.14 / 0.8555 |
| DSC [44] | 27.34 / 0.8494 | 13.77 / 0.3199 | 27.88 / 0.8394 | 30.07 / 0.8664 |
| GMM [43] | 29.05 / 0.8717 | 15.23 / 0.4498 | 27.78 / 0.8585 | 32.14 / 0.9145 |
| JCAS [45] | 28.54 / 0.8524 | 14.62 / 0.4510 | 26.20 / 0.8471 | 33.10 / 0.9305 |
| Clear [46] | 30.24 / 0.9344 | 15.33 / 0.7421 | 26.21 / 0.8951 | 31.24 / 0.9353 |
| DDN [42] | 32.38 / 0.9258 | 22.85 / 0.7250 | 28.45 / 0.8888 | 34.04 / 0.9330 |
| RESCAN [47] | 38.52 / 0.9812 | 29.62 / 0.8720 | 32.03 / 0.9314 | 36.43 / 0.9519 |
| PReNet [48] | 37.45 / 0.9790 | 30.11 / 0.9053 | 32.55 / 0.9459 | 36.66 / 0.9610 |
| SPANet [49] | 35.33 / 0.9694 | 25.11 / 0.8332 | 29.85 / 0.9148 | 35.85 / 0.9572 |
| JORDER_E [41] | 38.59 / 0.9834 | 30.50 / 0.8967 | 32.00 / 0.9347 | 36.69 / 0.9621 |
| SIRR [50] | 32.37 / 0.9258 | 22.47 / 0.7164 | 28.44 / 0.8893 | 34.02 / 0.9347 |
| IDT [51] | 35.42 / 0.9674 | 30.45 / 0.9081 | 33.55 / 0.9531 | 35.98 / 0.9584 |
| RCDNet [52] | 40.00 / 0.9860 | 31.51 / 0.9119 | 33.10 / 0.9475 | 37.61 / 0.9644 |
| RCDNet-partial | 38.02 / 0.9796 | 29.75 / 0.8550 | 31.66 / 0.9305 | 36.58 / 0.9585 |
| RCDNet-pdoe | 39.08 / 0.9830 | 30.34 / 0.8979 | 32.92 / 0.9446 | 36.42 / 0.9463 |
| RCDNet-gcnn | 39.94 / 0.9856 | 30.88 / 0.9027 | 32.88 / 0.9448 | 37.53 / 0.9618 |
| RCDNet-e2cnn | 40.15 / 0.9863 | 31.33 / 0.9090 | 32.18 / 0.9369 | 37.50 / 0.9597 |
| RCDNet-fconv | 40.07 / 0.9862 | 30.98 / 0.9038 | 32.66 / 0.9420 | 37.67 / 0.9618 |
| RCDNet-reg | 40.33 / 0.9868 | 31.64 / 0.9138 | 33.49 / 0.9506 | 37.74 / 0.9633 |
![](images/37d1eda89987ffbc3cc21008eb53fb45c029bace93b3dbed4597b3c688affa99.jpg)
Figure 4. The $1^{st}$ column: a typical ground truth sample in the Rain100L [41] dataset (upper) and its ground truth rain layer (lower). The $2^{nd}-14^{th}$ columns: derained results (upper) and extracted rain layers (lower) by all competing methods.

However, our method can capture the degree of equivariance in the data without damaging representation accuracy, and thus further outperforms the competing methods. Visually, as demonstrated in Figure 3, our method performs better than the original method in removing metal artifacts and restoring the contours of human tissue structures. Both quantitative and qualitative comparisons reveal the superiority of the proposed methods. Additional generalization results can be found in the supplementary material.

# 4.2. Single Image Rain Removal

Network setting. Similar to the experiments in Section 4.1, we construct RCDNet-gcnn, RCDNet-e2cnn, RCDNet-pdoe, RCDNet-fconv, and RCDNet-partial to verify the effectiveness of strictly rotation-equivariant networks, setting $t = 2$ and $\frac{1}{2}$ the channel number. This is because the rain streak types in current rain removal datasets usually exhibit a 180-degree rotational symmetry; we therefore expect the network to cover two angles, i.e., an approximate equivariance of 180 degrees. All training settings and the loss function are kept the same as in the original RCDNet for fair comparison.

Datasets and Training Settings. We compare our method with typical single image rain removal SOTA methods, including DSC [44], GMM [43], JCAS [45], Clear [46], DDN [42], RESCAN [47], PReNet [48], SPANet [49], JORDER_E [41], SIRR [50], and RCDNet, on four commonly used benchmark datasets, i.e., Rain100L [41], Rain100H [41], Rain1400 [42], and Rain12 [43]. The training strategy follows the original setting.

Quantitative and Qualitative Comparison. As shown in Table 3, with the proposed equivariant regularization, RCDNet-reg consistently outperforms the strictly and partially equivariant RCDNet variants and achieves performance comparable to the transformer-based SOTA method IDT. In particular, on Rain100L, Rain100H, and Rain12, RCDNet-reg achieves the best performance among these methods. In addition, as shown in Figure 4, our method exhibits superior performance in removing visible rain streaks without compromising the details of the original image. It is worth noting that models based on convolutional sparse coding often mistakenly identify white stripes as rain streaks, while the addition of equivariant regularization effectively alleviates this shortcoming. These results support the effectiveness of adopting equivariant regularization in this task.

# 4.3. Image Inpainting

Table 4. PSNR of pixelwise image inpainting reconstruction for different methods using noisy test measurements.

| $\gamma$ | $A^{\dagger}(y)$ | EI | REI | Partial | PDOe | e2cnn | Fconv | GCNN | EQ-Reg |
|---|---|---|---|---|---|---|---|---|---|
| 0.01 | 5.7 ± 1.5 | 18.9 ± 1.0 | 19.7 ± 2.1 | 13.8 ± 1.2 | 19.5 ± 1.5 | 19.7 ± 1.8 | 20.2 ± 1.5 | 20.7 ± 2.0 | 21.2 ± 1.7 |
| 0.05 | 5.1 ± 1.4 | 11.8 ± 3.0 | 18.0 ± 1.5 | 13.2 ± 1.0 | 16.5 ± 1.4 | 17.3 ± 1.4 | 17.4 ± 1.4 | 18.0 ± 1.2 | 18.5 ± 1.4 |
| 0.1 | 4.4 ± 1.3 | 9.8 ± 0.8 | 16.6 ± 0.2 | 12.3 ± 1.1 | 15.0 ± 1.3 | 15.4 ± 1.3 | 15.7 ± 1.1 | 16.5 ± 1.5 | 16.9 ± 1.3 |

Figure 5. Inpainting reconstructions on test images with Poisson noise ($\gamma = 0.05$) and $30\%$ mask rate. PSNR values are shown in the top right corner of the images (REI 11.95, PDOe 16.75, e2cnn 17.81, Fconv 17.81, GCNN 18.50, EQ-Reg 19.01, GT).

Experiment Setting.
We adopt the self-supervised framework for learning from noisy and partial measurements introduced in EI [20] and REI [6], reproducing the original image inpainting experiment described in these works. Building upon this setup, we compare the strictly equivariant and partially equivariant methods with the method presented in this paper. Specifically, we assess reconstruction performance on the Urban100 dataset [53] by applying a random mask covering $30\%$ of the image and performing restoration under various Poisson noise levels. All training configurations and the loss function are maintained as in [6].

Quantitative and Qualitative Comparison. As shown in Table 4, the proposed method achieves the best results under different noise intensities. The visualization results in Figure 5 also show that the proposed method better removes noise and restores original image details. Additional visualizations and further analysis can be found in the supplementary material.

# 4.4. Image Classification

We compare the proposed method with the existing partially equivariant approach, Partial-GCNN [14], on the CIFAR-100 image classification task. Table 5 indicates that our approach consistently outperforms the competing methods.

Table 5. Classification accuracy on the CIFAR-100 dataset.
| Dataset | Group | No. elems | GCNN | Partial-GCNN | EQ-Reg |
|---|---|---|---|---|---|
| CIFAR-100 | SE(2) | $t = 4$ | 49.79 | 53.28 | 56.10 |
| | | $t = 8$ | 53.98 | 56.53 | 57.56 |
+ +# 5. Conclusion + +In this paper, we introduce a simple yet effective regularizer-based equivariant strategy for image processing tasks, capable of adaptively adjusting equivariance to suit various applications. Extensive experiments across multiple tasks validate the superior performance and generalizability of the proposed method, surpassing current SOTA methods. Moreover, this approach paves the way for developing equivariant networks beyond the conventional CNN framework, expanding the possibilities for future network design, including the equivariant Transformer-based networks and networks equivariant with respect to other transformations. Acknowledgment. This research was supported by the NSFC project under contract U21A6005, the Major Key Project of PCL under Grant PCL2024A06, the Tianyuan Fund for Mathematics of the National Natural Science Foundation of China (Grant No. 12426105), and the Key Research and Development Program (Grant No. 2024YFA1012000). + +# References + +[1] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. 1 +[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015. 1 +[3] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5197-5206, 2015. 1 +[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1 +[5] Taco Cohen and Max Welling. Group equivariant convolutional networks. 
In International conference on machine learning, pages 2990-2999. PMLR, 2016. 2, 3, 5 +[6] Dongdong Chen, Julián Tachella, and Mike E Davies. Robust equivariant imaging: a fully unsupervised framework for learning to image from noisy and partial measurements. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5647-5656, 2022. 1, 3, 8 +[7] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International conference on machine learning, pages 2747-2755. PMLR, 2018. 2, 3, 4 +[8] Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 849-858, 2018. 2, 3 +[9] Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. Advances in neural information processing systems, 32, 2019. 3 +[10] Zhengyang Shen, Lingshen He, Zhouchen Lin, and Jinwen Ma. Pdo-econvs: Partial differential operator based equivariant convolutions. In International Conference on Machine Learning, pages 8697-8706. PMLR, 2020. 1, 2, 3, 4, 5 +[11] Qi Xie, Qian Zhao, Zongben Xu, and Deyu Meng. Fourier series expansion based filter parametrization for equivariant convolutions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4537-4551, 2022. 1, 2, 3, 5 +[12] Matthias Beckmann and Nick Heilenkötter. Equivariant neural networks for indirect measurements. SIAM Journal on Mathematics of Data Science, 6(3):579-601, 2024. 2 +[13] Rui Wang, Robin Walters, and Rose Yu. Approximately equivariant networks for imperfectly symmetric dynamics. In International Conference on Machine Learning, pages 23078-23091. PMLR, 2022. 2, 3 +[14] David W Romero and Suhas Lohit. Learning partial equivariances from data. Advances in Neural Information Processing Systems, 35:36466-36478, 2022. 
3, 5, 8 + +[15] Tycho van der Ouderaa, David W Romero, and Mark van der Wilk. Relaxing equivariance constraints with non-stationary continuous filters. Advances in Neural Information Processing Systems, 35:33818-33830, 2022. 2 +[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. 3 +[17] Dmitry Laptev, Nikolay Savinov, Joachim M Buhmann, and Marc Pollefeys. Ti-pooling: transformation-invariant pooling for feature learning in convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 289-297, 2016. +[18] Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, and Kostas Daniilidis. Polar transformer networks. arXiv preprint arXiv:1709.01889, 2017. +[19] Kihyuk Sohn and Honglak Lee. Learning invariant representations with local transformations. 2012. 3 +[20] Dongdong Chen, Julián Tachella, and Mike E Davies. Equivariant imaging: Learning beyond the range space. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4379-4388, 2021. 3, 8 +[21] Zixiang Zhao, Haowen Bai, Jiangshe Zhang, Yulun Zhang, Kai Zhang, Shuang Xu, Dongdong Chen, Radu Timofte, and Luc Van Gool. Equivariant multi-modality image fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25912-25921, 2024. 3 +[22] Emiel Hoogeboom, Jorn WT Peters, Taco S Cohen, and Max Welling. Hexaconv. arXiv preprint arXiv:1803.02108, 2018. 3 +[23] Yanzhao Zhou, Qixiang Ye, Qiang Qiu, and Jianbin Jiao. Oriented response networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 519-528, 2017. 3 +[24] Diego Marcos, Michele Volpi, Nikos Komodakis, and Devis Tuia. Rotation equivariant vector field networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5048-5057, 2017. 
3 +[25] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5028-5037, 2017. 3 +[26] Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. Advances in neural information processing systems, 32, 2019. 3, 4, 5 +[27] Zhengyang Shen, Tiancheng Shen, Zhouchen Lin, and Jinwen Ma. Pdo-es2cnns: Partial differential operator based equivariant spherical cnns. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9585-9593, 2021. 3 +[28] Gamaleldin Elsayed, Prajit Ramachandran, Jonathon Shlens, and Simon Kornblith. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning, pages 2868-2879. PMLR, 2020. 3 + +[29] Marc Finzi, Gregory Benton, and Andrew G Wilson. Residual pathway priors for soft equivariance constraints. Advances in Neural Information Processing Systems, 34:30037-30049, 2021. 3 +[30] Hyunsu Kim, Hyungi Lee, Hongseok Yang, and Juho Lee. Regularizing towards soft equivariance under mixed symmetries. In International Conference on Machine Learning, pages 16712-16727. PMLR, 2023. 3 +[31] Jiahong Fu, Qi Xie, Deyu Meng, and Zongben Xu. Rotation equivariant proximal operator for deep unfolding methods in image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 4 +[32] Ke Yan, Xiaosong Wang, Le Lu, Ling Zhang, Adam P Harrison, Mohammadhadi Bagheri, and Ronald M Summers. Deep lesion graphs in the wild: relationship learning and organization of significant radiology image findings in a diverse large-scale lesion database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9261-9270, 2018. 6 +[33] Willi A Kalender, Robert Hebel, and Johannes Ebersberger. 
Reduction of ct artifacts caused by metallic implants. Radiology, 164(2):576-577, 1987. 6 +[34] Esther Meyer, Rainer Raupach, Michael Lell, Bernhard Schmidt, and Marc Kachelrieß. Normalized metal artifact reduction (nmar) in computed tomography. Medical physics, 37(10):5482-5493, 2010. 6 +[35] Yanbo Zhang and Hengyong Yu. Convolutional neural network based metal artifact reduction in x-ray computed tomography. IEEE transactions on medical imaging, 37(6):1370-1381, 2018. 6 +[36] Wei-An Lin, Haofu Liao, Cheng Peng, Xiaohang Sun, Jingdan Zhang, Jiebo Luo, Rama Chellappa, and Shaohua Kevin Zhou. Dudonet: Dual domain network for ct metal artifact reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10512-10521, 2019. 6 +[37] Lequan Yu, Zhicheng Zhang, Xiaomeng Li, and Lei Xing. Deep sinogram completion with image prior for metal artifact reduction in ct images. IEEE transactions on medical imaging, 40(1):228-238, 2020. 6 +[38] Hong Wang, Yuexiang Li, Haimiao Zhang, Jiawei Chen, Kai Ma, Deyu Meng, and Yefeng Zheng. Indudonet: An interpretable dual domain network for ct metal artifact reduction. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part VI 24, pages 107-118. Springer, 2021. 6 +[39] Hong Wang, Yuexiang Li, Nanjun He, Kai Ma, Deyu Meng, and Yefeng Zheng. Dicdnet: deep interpretable convolutional dictionary network for metal artifact reduction in ct images. IEEE Transactions on Medical Imaging, 41(4):869-880, 2021. 6 +[40] Hong Wang, Qi Xie, Yuexiang Li, Yawen Huang, Deyu Meng, and Yefeng Zheng. Orientation-shared convolution representation for ct metal artifact learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 665-675. Springer, 2022. 6 + +[41] Wenhan Yang, Robby T Tan, Jiashi Feng, Zongming Guo, Shuicheng Yan, and Jiaying Liu. 
Joint rain detection and removal from a single image with contextualized deep networks. IEEE transactions on pattern analysis and machine intelligence, 42(6):1377-1393, 2019. 7, 8 +[42] Xueyang Fu, Jiabin Huang, Delu Zeng, Yue Huang, Xinghao Ding, and John Paisley. Removing rain from single images via a deep detail network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3855-3863, 2017. 7, 8 +[43] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 7, 8 +[44] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE international conference on computer vision, pages 3397-3405, 2015. 7, 8 +[45] Shuhang Gu, Deyu Meng, Wangmeng Zuo, and Lei Zhang. Joint convolutional analysis and synthesis: sparse representation for single image layer separation. In Proceedings of the IEEE international conference on computer vision, pages 1708-1716, 2017. 7, 8 +[46] Xueyang Fu, Jiabin Huang, Xinghao Ding, Yinghao Liao, and John Paisley. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing, 26(6):2944-2956, 2017. 7, 8 +[47] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European conference on computer vision (ECCV), pages 254-269, 2018. 7, 8 +[48] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3937-3946, 2019. 7, 8 +[49] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. 
Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12270-12279, 2019. 7, 8 +[50] Wei Wei, Deyu Meng, Qian Zhao, Zongben Xu, and Ying Wu. Semi-supervised transfer learning for image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3877-3886, 2019. 7, 8 +[51] Jie Xiao, Xueyang Fu, Aiping Liu, Feng Wu, and Zheng-Jun Zha. Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11):12978-12995, 2022. 7 +[52] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103-3112, 2020. 7 +[53] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In \ No newline at end of file diff --git a/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/images.zip b/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b623b244a5e2d8338047460f2247bb2955048911 --- /dev/null +++ b/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d795bd226b69623abaf92a9e3b8e06a0c84eb0ddaa2fad2bdfc4e31fda21bd0f +size 1141099 diff --git a/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/layout.json b/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0cb340d79024b5e24f5d6e6d5804efd04a20ed88 --- /dev/null +++ b/CVPR/2025/A Regularization-Guided Equivariant Approach for Image Restoration/layout.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:dc37859d999012cd543865f759237643df4528c991364179bc4f21a64661133e +size 381124 diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_content_list.json b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9cd685bec9fa0fc6ccbb22bf2d529f0dc4a78813 --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:565db27f291fbbe4143c029c7111c0256b42bcee7f4c19f4450e383796447dfa +size 74356 diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_model.json b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_model.json new file mode 100644 index 0000000000000000000000000000000000000000..decc839c7cd4fda03498f42c2fc8169adc1a0285 --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a16793dfdd0bac4b54b7c106f813fd6bb087939015a1898cd5de88f4574c3bb0 +size 94359 diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_origin.pdf b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/6a213956-431d-4057-8ec5-43422f931878_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d02576dd66b84a7810fa24617f0936fd4a00e804 --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion 
Imaging/6a213956-431d-4057-8ec5-43422f931878_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb916423d41576d7588d4e5cde1f069ce909b72655f3b0e16f243bc458dda2a2 +size 8071579 diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/full.md b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dd0f0fda166b88003a9a646d9803ee5f7791b231 --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/full.md @@ -0,0 +1,355 @@ +# A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging + +Yuanye Liu1 Jinyang Liu3 Renwei Dian1,2* Shutao Li3 + +1The School of Robotics, Hunan University, China + +$^{2}$ Industrial Design and Machine Intelligence Innovation Research Institute, Quanzhou-Hunan University, China + +3The College of Electrical and Information Engineering, Hunan University, China + +{yuanye-liu, jinyangliu, drw, shutao_li}@hnu.edu.cn https://github.com/YuanyeLiu/SRLF-Net + +![](images/b1cf8b3830384c9bcc4ae833f4a0e435c66ea64d50004addcece93c698939999.jpg) +(a) Spatial-Spectral Structure-Guided Selective Re-learning Mechanism + +![](images/748d4f60138d4563c7479e36ca101c80c893a91395ae4d57cd8425aa51137336.jpg) +(b) PSNR-FLOPs-Params Comparisons +Figure 1. The diagram of the Spatial-Spectral Structure-Guided Selective Re-learning Mechanism (SSG-SRL) and PSNR-FLOPs-Params Comparisons. (a) SSG-SRL adaptively filters out distorted feature points from the preliminary fusion feature for targeted learning. (b) The PSNR-FLOPs-Params Comparisons evaluate different methods from three aspects: the quality of the fused image (vertical axis), model size (horizontal axis), and computational complexity (circle radius). SRLF-Net outperforms other state-of-the-art methods while requiring fewer floating-point operations (FLOPs). 
+ +# Abstract + +Hyperspectral fusion imaging is challenged by high computational cost due to the abundant spectral information. We find that pixels in regions with smooth spatial-spectral structure can be reconstructed well using a shallow network, while only those in regions with complex spatial-spectral structure require a deeper network. However, existing methods process all pixels uniformly, which ignores this property. To leverage this property, we propose a Selective Re-learning Fusion Network (SRLF) that initially extracts features from all pixels uniformly and then selectively refines distorted feature points. Specifically, SRLF first employs a Preliminary Fusion Module with robust global modeling capability to generate a preliminary fusion feature. Afterward, it applies a Selective Re-learning Module to focus on improving distorted feature points in the preliminary fusion feature. To achieve targeted learning, we present a novel Spatial-Spectral Structure-Guided Selective Re-learning Mechanism (SSG-SRL) that integrates the observation model to identify the feature points with spatial or spectral distortions. Only these distorted points are sent to the corresponding re-learning blocks, reducing both computational cost and the risk of overfitting. Finally, we develop an SRLF-Net, composed of multiple cascaded SRLFs, which surpasses multiple state-of-the-art methods on several datasets with minimal computational cost. + +# 1. Introduction + +Hyperspectral imaging technology is widely used in remote sensing [18, 19], environmental monitoring [27, 47], agriculture [21, 34], and various other fields due to its rich spectral information. However, the limitations of imaging principles result in a trade-off between spatial and spectral resolution. Achieving high spectral resolution often requires sacrificing spatial resolution, limiting the practical applications of hyperspectral imaging technology.
Recently, computational imaging methods, such as spectral super-resolution [3, 5, 10, 31], coded aperture snapshot spectral imaging [4, 42, 46], and spatial-spectral fusion [15, 23], have received significant attention for their capability to reconstruct high-resolution hyperspectral images (HRHSI) through advanced algorithms. This paper focuses on spatial-spectral fusion, which attempts to obtain HRHSI by fusing low-resolution hyperspectral images (LRHSI) with multispectral images (MSI). + +Spatial-spectral fusion methods can be broadly categorized into traditional and deep learning (DL)-based approaches. Traditional methods usually rely on manual priors to establish relationships among LRHSI, MSI, and HRHSI. The nonlinear capabilities of such models are limited, making it challenging to obtain high-quality HRHSI. Benefiting from the powerful nonlinear modeling capacity of deep learning techniques, both convolutional neural network (CNN)-based methods and Transformer-based methods have demonstrated performance far superior to traditional methods. Despite these advances, DL-based methods still face significant challenges, with one of the most prominent being the trade-off between computational cost and fusion performance. Many CNN-based methods offer lower computational cost but limited fusion accuracy, whereas Transformer-based methods achieve better fusion accuracy at the expense of higher computational cost. Reducing computational cost while maximizing fusion accuracy is a critical practical challenge, especially important for deploying models on edge computing platforms or compact cameras.
However, applying a classifier to hyperspectral images (HSIs) incurs significant computational cost, as it must account for the spatial and spectral structure simultaneously. Additionally, directly processing the image patches or pixels obtained through classification disrupts their inherent position information. + +To address these issues, we propose a novel network, SRLF-Net, which primarily consists of several cascaded SRLFs. Each SRLF comprises two modules: a Preliminary Fusion Module and a Selective Re-learning Module. The Preliminary Fusion Module extracts preliminary features from all pixels before the spatial position relationships are disrupted. It employs a multi-scale Mamba architecture, providing strong global perception capability. The Preliminary Fusion Module outputs a preliminary fusion feature, with most feature points accurately extracted and only a few feature points distorted. The Selective Re-learning Module consists of two Spectral Transformer blocks and one Mamba block. The Spectral Transformer blocks are responsible for re-learning the feature points with spatial and spectral distortions, respectively, while the Mamba block is responsible for integrating the re-learned results. To achieve selective learning, we propose a Spatial-Spectral Guided Selective Re-learning Mechanism (SSG-SRL), as shown in Fig. 1. This mechanism incorporates the observation model to generate pseudo-MSI and pseudo-LRHSI. It then identifies distorted feature points from both spectral and spatial perspectives, requiring only a few parameters and incurring extremely low computational cost. By focusing on distorted feature points and avoiding the over-processing of well-extracted ones, this selective re-learning approach reduces the computational cost and mitigates overfitting.
+ +Our contribution can be summarized as follows: + +- We propose a novel SRLF integrating Mamba and Transformer, which first performs preliminary fusion and then adaptively selects distorted feature points for targeted learning, thereby avoiding computational waste and mitigating overfitting. +- We propose a Spatial-Spectral Guided Selective Re-learning Mechanism that integrates the observation models of LRHSI and MSI, efficiently enabling selective learning with few parameters (Params) and low computational cost. +- Based on SRLF, we construct an SRLF-Net for spatial-spectral fusion, which balances fusion accuracy and computational cost, surpassing current state-of-the-art methods on several public datasets with minimal computational cost. + +# 2. Related Work + +# 2.1. Traditional Methods + +Traditional spatial-spectral fusion methods can be categorized into two types: PAN-sharpening-based methods and decomposition-based methods. Early approaches, such as component substitution (CS)-based methods [2, 35] and multiresolution analysis (MRA)-based methods [1, 26], were initially designed for panchromatic sharpening tasks, but were later also used for fusing LRHSIs and MSIs. The decomposition-based methods mainly refer to matrix factorization (MF) and tensor decomposition (TD). The MF-based approach unfolds hyperspectral data into a two-dimensional matrix and decomposes it into a spectral feature matrix and a mixing coefficient matrix. For instance, a representative MF-based method was proposed by Lanaras et al. [24], which transformed the spatial-spectral fusion optimization problem into an estimation problem for a constrained spectral feature matrix and a mixing coefficient matrix. Li et al. [25] introduced regularization techniques based on both external and internal priors in the matrix representation. However, the unfolding operation disrupts the three-dimensional structure of HSI, which is detrimental to the reconstruction of HSI information. The TD-based methods treat the HSI as a three-dimensional tensor, overcoming the limitations of MF-based methods and demonstrating clear advantages. For instance, Dian et al. [9] proposed a generalized tensor nuclear norm to enforce low-rank constraints in three spatial-spectral dimensions. + +# 2.2. Deep Learning-based Methods + +Deep Learning (DL)-based methods offer significant performance improvements in the field of hyperspectral reconstruction, thus gaining much attention [28, 29, 44, 45]. For example, Palsson et al. [32] first employed the PCA technique for data dimensionality reduction and then fed the reduced data into a fusion network based on 3D convolution. In addition, some methods utilize the observation model to improve the fusion performance. For example, Xie et al. [41] and Dong et al. [12] incorporated the observation model into the fusion network. Dian et al. [7] utilized the observation model to generate training data, which ensures the consistency of the imaging model between the test data and the training data. CNN-based methods often struggle with limited receptive fields, making them less effective in capturing global context. In contrast, the Multi-Head Self-Attention (MHSA) mechanism in Transformers allows for the effective capture of global information. Many Transformer-based methods [30, 37] have been proposed and have achieved better results. However, these methods are limited by quadratic computational complexity. Recently, Mamba, derived from state-space equation models in control theory, has also been introduced to the field of HSI reconstruction. For example, Peng et al. [33] designed the FusionMamba module capable of handling dual inputs, and embedded it into two U-shaped networks to achieve hyperspectral reconstruction. + +Existing methods treat all pixels equally. However, pixels in smooth regions can be learned by a shallow network, whereas only those in complex regions require a deeper network.
To leverage this property, we propose a selective re-learning mechanism that adaptively identifies distorted pixels for targeted refinement, thereby enhancing fusion quality while reducing computational cost. + +# 3. Our Proposed Method + +# 3.1. Motivation and Framework Overview + +**Motivation.** The process of image reconstruction in a network resembles how humans learn new things: first acquiring an overall understanding of the content, and then focusing on more complex details. Therefore, we propose that the network should first perform an initial feature extraction of the entire image and subsequently refine a few distorted feature points. This strategy reduces excessive learning on low-difficulty feature points, thereby mitigating the risk of overfitting and lowering FLOPs. + +**Framework Overview.** The structure of SRLF-Net is shown in Fig. 2. The input MSI and LRHSI are denoted as $Y$ and $X$ , respectively. First, they are concatenated along the channels and pass through an embedding layer to adjust the channel dimensions, resulting in $F_0$ . Next, $F_0$ undergoes $N$ SRLFs to produce $[F_1, F_2, \dots, F_N]$ . Finally, a convolutional layer serves as the Refine layer to fuse these features $[F_0, F_1, F_2, \dots, F_N]$ , resulting in the fused HSI. The SRLF consists of two parts: a Preliminary Fusion Module and a Selective Re-learning Module. The Preliminary Fusion Module generates a fusion feature, while the Selective Re-learning Module refines its distorted points based on spatial-spectral structure. + +# 3.2.
Preliminary Fusion Module + +Recently, Mamba [14] has been used in place of the Transformer to capture global information with linear computational complexity. Its underlying state-space model can be discretized as: + +$$ +y = \bar {K} u, \tag {1} +$$ + +$$ +\bar {K} = \left(C \bar {B}, C \bar {A} \bar {B}, \ldots, C \bar {A} ^ {L - 1} \bar {B}\right), +$$ + +where $y$ is the output, $\bar{K}$ represents the convolutional kernel, and $L$ is the length of the input sequence. $\bar{A}$ , $\bar{B}$ , and $C$ are parameter matrices, with $\bar{B}$ and $C$ varying according to the input $u$ . In practical computation, the Mamba module employs kernel fusion and recomputation to improve the efficiency of feature sequence scanning and reduce storage requirements. + +To extract features efficiently, we apply the SS2D block [16], an improved Mamba structure, as the base module. It uses a cross-scan method to obtain a global receptive field. The structure and Effective Receptive Field (ERF) of the SS2D block are shown in Fig. 3(a). The figure illustrates that a single SS2D block primarily focuses on a cross-shaped region, indicating that the module captures sequential information near the current position. The pixels outside the cross shape have less impact on the central point because they are farther away from it during the four traversal processes. This is because each scanning path follows Eq. 1, and the weights decay as the distance increases. Although the SS2D block can capture global information, its perception capability is limited, primarily focusing on the pixels that are closer along the scanning path. Owing to the principle of the SSM, the weights corresponding to each pixel are not as independent as they are in a convolutional kernel.
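The convolutional form of Eq. 1 can be checked numerically. Under the simplifying assumption of fixed $\bar{A}$, $\bar{B}$, and $C$ (dropping Mamba's input-dependence for brevity), the kernel $\bar{K} = (C\bar{B}, C\bar{A}\bar{B}, \ldots, C\bar{A}^{L-1}\bar{B})$ makes the SSM recurrence equivalent to a causal 1-D convolution. A minimal NumPy sketch (all names illustrative, single input/output channel):

```python
import numpy as np

def ssm_kernel(A, B, C, L):
    """Build K̄ = (CB̄, CĀB̄, ..., CĀ^{L-1}B̄) from Eq. 1 for a length-L sequence."""
    K = np.empty(L)
    M = B.copy()                 # holds Ā^k B̄, starting at k = 0
    for k in range(L):
        K[k] = C @ M             # C Ā^k B̄ is a scalar for 1-channel input/output
        M = A @ M
    return K

def ssm_recurrence(A, B, C, u):
    """Reference: unroll x_t = Ā x_{t-1} + B̄ u_t, y_t = C x_t step by step."""
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for t, ut in enumerate(u):
        x = A @ x + B * ut
        y[t] = C @ x
    return y

rng = np.random.default_rng(0)
N, L = 4, 8                       # toy state dimension and sequence length
A = 0.5 * rng.standard_normal((N, N))   # Ā (scaled small so its powers stay stable)
B = rng.standard_normal(N)              # B̄
C = rng.standard_normal(N)              # C
u = rng.standard_normal(L)              # input sequence

K = ssm_kernel(A, B, C, L)
y_conv = np.convolve(u, K)[:L]   # y = K̄ * u: causal convolution, y_t = Σ_k K[k] u[t-k]
y_rec = ssm_recurrence(A, B, C, u)
assert np.allclose(y_conv, y_rec)
```

This equivalence is what allows an SSM layer to be evaluated as a convolution over the whole sequence while behaving as a recurrence along each scanning path.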
+ +![](images/643f393767ed5f32e4f7a5628e379cca3b8d1752a1ffee7d6e157aa2057054c0.jpg) +(a) 2D-Selective Scan Block and Effective Receptive Field + +![](images/e7a7464ab21d45661df5f84f3d079a2f980dadef2cdf881aad431cd66d042755.jpg) +(b) Cascaded Structure and Effective Receptive Field +(c) Multi-Scale Cascaded Structure and Effective Receptive Field +Figure 3. Different structures of Preliminary Fusion Modules and Effective Receptive Fields. (a) Due to the limitations of the SSM principle, the global perception capability of a single SS2D block is limited. (b) The ERF of the cascade structure has slightly improved, but not significantly. (c) The ERF of the multi-scale cascaded structure has significantly improved, resulting in a stronger global perception capability. + +Even after training, the ERF still exhibits a cross-shaped pattern. Figure 3(b) shows the structure of simply stacking three SS2D blocks and their corresponding ERF. The highlighted areas of the ERF are still concentrated on a few pixels, which is insufficient for broader global perception. To enhance the global perception capability of the Preliminary Fusion Module, we adopt a multi-scale structure, which has a significantly larger highlighted central region in its ERF, as shown in Fig. 3(c). Denote the input of this module as $F_{in}$ , then the process of the Preliminary Fusion Module can be expressed as: + +$$ +T _ {1} = S S 2 D \left(F _ {i n}\right), +$$ + +$$ +T _ {1} ^ {d} = \operatorname {D o w n} (T _ {1}), +$$ + +$$ +T _ {2} = S S 2 D \left(T _ {1} ^ {d}\right), \tag {2} +$$ + +$$ +T _ {2} ^ {\prime} = P S \left(T _ {2}\right), +$$ + +$$ +F _ {o u t} = S S 2 D \left(\operatorname {C o n c a t} \left(T _ {2} ^ {\prime}, T _ {1}\right)\right), +$$ + +where $SS2D(\cdot)$ is the function of the SS2D block, and $Concat(\cdot)$ represents the concatenation operation along the channel dimension. 
$Down(\cdot)$ represents the downsampling operation implemented by convolution, and $PS(\cdot)$ is the PixelShuffle (PS) operation, which is an upsampling technique. $T_{1}$ and $T_{2}$ are the outputs of the first two SS2D blocks, respectively. $T_{1}^{d}$ is the outcome of the downsampling operation, and $T_{2}^{\prime}$ is the outcome of the PS operation. $F_{out}$ is the fusion result of the two scale features. + +# 3.3. Selective Re-learning Module + +As shown in Fig. 2, the Selective Re-learning Module consists of a Spatial Re-learning (Spa-RL) block, a Spectral Re-learning (Spe-RL) block, and a Re-fusion (RF) block. The Spa-RL block and Spe-RL block utilize the same Spectral Transformer structure, refining spatially and spectrally distorted feature points, respectively. The RF block is an SS2D block that aims to integrate the results of spatial and spectral re-learning. The input to the Selective Re-learning Module is a fused feature, where the representation of spatially smooth regions is accurate, with only a few points exhibiting significant distortion. Therefore, we propose a Spatial-Spectral Structure-Guided Selective Re-learning Mechanism (SSG- + +![](images/9577192a04451785761f47bfe2e728dcff06859f4ae2bf8ad7d9aee945d16bd5.jpg) +(b) Spectral Re-learning Block +Figure 4. The process of the Spatial Re-learning (Spa-RL) block and Spectral Re-learning (Spe-RL) block. The Spa-RL block and the Spe-RL block incorporate the SSG-SRL, enabling targeted re-learning of spatial-spectral distorted feature points for reconstruction. + +Table 1. Quantitative indexes of the test methods on the CAVE dataset and the Harvard dataset. The best result is marked in bold font.
| Method | FLOPs | Params (M) | CAVE PSNR | CAVE SAM | CAVE UIQI | CAVE SSIM | Harvard PSNR | Harvard SAM | Harvard UIQI | Harvard SSIM |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| Hysure [36] | / | / | 42.9303 | 8.1108 | 0.9321 | 0.9814 | 45.2041 | 3.6180 | 0.8689 | 0.9807 |
| NSSR [11] | / | / | 46.6246 | 3.2505 | 0.9563 | 0.9910 | 47.1459 | 2.9396 | 0.8899 | 0.9843 |
| DAEM [17] | 417.5G | 10.02 | 41.5822 | 4.1965 | 0.9283 | 0.9796 | 43.3745 | 4.3586 | 0.8467 | 0.9728 |
| FusionMamba [33] | 129.62G | 2.58 | 46.8107 | 2.6511 | 0.9710 | 0.9937 | 47.6266 | 2.7612 | 0.8939 | 0.9851 |
| DHIF-Net [20] | 3.616T | 22.6695 | 47.8613 | 2.3849 | 0.9732 | 0.9948 | 47.7735 | 2.6987 | 0.8953 | 0.9855 |
| DSPNet [37] | 422.189G | 6.0551 | 47.5665 | 2.5881 | 0.9722 | 0.9941 | 47.5973 | 2.7785 | 0.8938 | 0.9851 |
| MIMO-SST [13] | 98.1653G | 4.983 | 48.3609 | 2.5834 | 0.9706 | 0.9944 | 47.6915 | 2.7804 | 0.8941 | 0.9852 |
| Mog-DCN [12] | 4.390T | 7.071 | 48.5910 | 2.2541 | 0.9748 | 0.9953 | 47.8910 | 2.6964 | **0.8961** | 0.9855 |
| LRTN [30] | 132.586G | 3.535 | 47.7819 | 2.4387 | 0.9675 | 0.9939 | 47.7088 | 2.7330 | 0.8938 | 0.9855 |
| Ours | **81.22G** | **1.33** | **49.3881** | **2.1816** | **0.9752** | **0.9954** | **47.8954** | **2.6876** | 0.8956 | **0.9856** |
+ +SRL) in the Spa-RL block and Spe-RL block. + +Specifically, the output of the Preliminary Fusion Module is denoted as $Z'$ . We then use the Structural Similarity Index (SSIM) to evaluate the quality of each feature point in terms of spatial structure. However, directly comparing $Z'$ with the Ground Truth (GT) $Z$ is infeasible. To address this, we acquire the pseudo-MSI $Y'$ using the spectral response function of the observation model as: + +$$ +Y ^ {\prime} = R Z ^ {\prime}, \tag {3} +$$ + +where $R$ represents the spectral response function, which is assumed to be known. Next, we calculate the SSIM map $M_{spa}$ between $Y'$ and $Y$ to assess the spatial quality of each feature point: + +$$ +M _ {spa} = \mathrm {SSIM} \left(Y, Y ^ {\prime}\right). \tag {4} +$$ + +We then select a few low-scoring feature points for re-learning as: + +$$ +\begin{array}{l} M _ {spa} ^ {\prime} = \mathrm {Sort} _ {p} (\mathrm {Flatten} (M _ {spa})), \\ V _ {Y ^ {\prime}}, V _ {Y} = \mathrm {Select} \left(Y ^ {\prime}, Y; M _ {spa} ^ {\prime}, r\right), \tag {5} \\ V = RL _ {spa} \left(V _ {Y ^ {\prime}} + V _ {Y}\right), \\ Z _ {spa} ^ {\prime} = V + Y ^ {\prime} + Y, \end{array} +$$ + +where $Sort_{p}(\cdot)$ represents sorting in ascending order, and $Flatten(\cdot)$ indicates unfolding along the spatial dimension. $Select(\cdot)$ selects feature points based on $M_{spa}'$ , and $RL_{spa}(\cdot)$ denotes the re-learning block for spatial refinement. $M_{spa}'$ is the sorted SSIM map. $V_{Y'}$ and $V_{Y}$ are the spatially distorted feature points in $Y'$ and $Y$ , respectively. $V$ represents the refined result of these distorted feature points, and $Z_{spa}'$ is the output of the Spa-RL block.
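The gather-refine-scatter pattern of Eq. 5, combined with the adaptive selection ratio $r$ of Eq. 6, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: `relearn` is an identity stand-in for the Spectral Transformer block $RL_{spa}$, the SSIM map is taken as given, and all names are ours.

```python
import numpy as np

def select_distorted(ssim_map, r):
    """Eq. 5 selection: indices of the lowest-scoring ratio-r feature points."""
    scores = ssim_map.ravel()                    # Flatten along the spatial dimension
    n_pick = max(1, int(round(r * scores.size)))
    order = np.argsort(scores)                   # Sort_p: ascending SSIM score
    return order[:n_pick]

def spa_rl_step(Y_pseudo, Y, ssim_map, lam=0.5, relearn=lambda v: v):
    """Sketch of the Spa-RL block; `relearn` stands in for RL_spa."""
    r = lam * (1.0 - ssim_map.mean())            # Eq. 6: adaptive selection ratio
    idx = select_distorted(ssim_map, r)
    n_ch = Y.shape[0]                            # channels-first layout assumed
    Vy_p = Y_pseudo.reshape(n_ch, -1)[:, idx]    # V_{Y'}: distorted points of pseudo-MSI
    Vy = Y.reshape(n_ch, -1)[:, idx]             # V_Y: same positions in the real MSI
    V = relearn(Vy_p + Vy)                       # refined residual for those points only
    Z = (Y_pseudo + Y).reshape(n_ch, -1)         # Z'_spa = Y' + Y everywhere ...
    Z[:, idx] += V                               # ... plus V at the selected points
    return Z.reshape(Y.shape), idx

rng = np.random.default_rng(1)
Y = rng.random((3, 8, 8))                        # toy MSI with 3 channels
Y_pseudo = Y + 0.1 * rng.standard_normal(Y.shape)
M_spa = rng.random((8, 8))                       # stand-in SSIM map
Z_spa, idx = spa_rl_step(Y_pseudo, Y, M_spa)
```

With the identity stand-in, only the selected ratio-$r$ positions receive the extra residual $V$; every other position keeps $Y' + Y$ unchanged, which is what keeps the re-learning cost proportional to $r$ rather than to the full spatial size.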
$r$ is the proportion of spatially distorted feature points, which is determined based on the average SSIM score as: + +$$ +r = \lambda (1 - \mathrm{Avg}(M_{spa})), \tag{6} +$$ + +where $\lambda$ is a hyperparameter that adjusts the proportion of feature points identified as distorted, and $\mathrm{Avg}(\cdot)$ denotes the average operation. A high $\mathrm{Avg}(M_{spa})$ indicates a more accurate feature representation, which means that fewer feature points need to be re-learned. Conversely, a low $\mathrm{Avg}(M_{spa})$ suggests that more feature points need re-learning. + +![](images/9933f752ee35df8e72d4aabd8e7e456afa8f10232e9c4e0d15c9be592b6a778e.jpg) +Figure 5. The pseudo-color images and spectral angle error maps of reconstructed jelly_beans_ms (a test image of the CAVE dataset) by different methods. + +![](images/65f62bfff1224f8c88f213d7ea72b3e7bdc5792e9aaeaf6e28f941e774514129.jpg) +Figure 6. The pseudo-color images and spectral angle error maps of reconstructed imgf7 (a test image of the Harvard dataset) by different methods. + +Similarly, we degrade the result of spatial re-learning, $Z_{spa}'$, into the pseudo-LRHSI $X'$ as: + +$$ +X' = \left(Z_{spa}' * C\right) \downarrow, \tag{7} +$$ + +where $C$ is the Gaussian blur kernel, and $*$ and $\downarrow$ denote the convolution and spatial downsampling operations, respectively. We then evaluate the spectral quality of each feature point via the SAM map $M_{spe}$ as: + +$$ +M_{spe} = \mathrm{SAM}\left(X', X\right). \tag{8} +$$ + +We then select the feature points with high SAM scores, at a ratio of $r$, as those exhibiting spectral distortion.
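The per-pixel SAM map of Eq. (8) is the spectral angle between corresponding pixel vectors of the two cubes. A hedged sketch (my own minimal implementation, returning angles in radians; the paper does not specify units or its exact numerical safeguards):

```python
import numpy as np

def sam_map(x_pred, x_ref, eps=1e-8):
    """Per-pixel spectral angle (radians) between two (C, H, W) cubes;
    a larger angle indicates stronger spectral distortion at that pixel."""
    dot = np.sum(x_pred * x_ref, axis=0)                                 # <x, x'> per pixel
    norm = np.linalg.norm(x_pred, axis=0) * np.linalg.norm(x_ref, axis=0)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)                         # guard arccos domain
    return np.arccos(cos)                                                # (H, W) angle map
```

Identical spectra give an angle of (numerically) zero, while orthogonal spectra give $\pi/2$.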
We then refine these feature points as: + +$$ +\begin{array}{l} M_{spe}' = \mathrm{Sort}_{r}(\mathrm{Flatten}(M_{spe})), \\ U_{X'}, U_{X} = \mathrm{Select}\left(X', X; M_{spe}', r\right), \tag{9} \\ U = RL_{spe}\left(U_{X'} + U_{X}\right), \\ Z_{spe}' = U + X' + X, \end{array} +$$ + +where $\mathrm{Sort}_{r}(\cdot)$ represents sorting in descending order. $RL_{spe}(\cdot)$ denotes the re-learning block for spectral refinement, sharing the same structure as $RL_{spa}(\cdot)$. $M_{spe}'$ is the sorted SAM map. $U_{X'}$ and $U_{X}$ are the spectrally distorted feature points in $X'$ and $X$, respectively. $U$ represents the refined result of these distorted feature points. $Z_{spe}'$ is the output of the Spe-RL block. Finally, the RF block fuses $Z_{spe}'$ and $Z_{spa}'$ to produce the output of the entire Selective Re-learning Module. + +# 3.4. FLOPs Analysis of SSG-SRL + +The spectral response and spatial response can be regarded as downsampling operations in the network, which are common. Therefore, we ignore this part and only account for the FLOPs incurred by the SSIM and SAM computations as: + +$$ +\begin{array}{l} FLOPs_{SSIM} = C_{1}\left(5 k^{2} + 10\right) H W, \\ FLOPs_{SAM} = 6 \frac{H W}{s^{2}} + 2 C_{2} \frac{H W}{s^{2}}, \tag{10} \end{array} +$$ + +where $s$ represents the downsampling factor of the spatial response, and $k$ denotes the size of the Gaussian blur kernel. $C_1$ and $C_2$ are the numbers of spectral channels of the MSI and HSI, respectively. $H$ and $W$ denote the spatial height and width of the input MSI. In our experiments, $H = W = 512$, + +![](images/1a6f92e8164c52061d28ebba029a0756c5b5c2f4a2980882560b4562c246568f.jpg) +(a) LRHSI +(b) NSSR +Figure 7. The pseudo-color images of reconstructed Gaofen5 (part) by different methods. +Table 2. No-reference quality assessment for Gaofen5. The best result is marked in bold font.
+ +![](images/5cff52f8a3edd8db2696d866465f72e42940a0c1d64cc8718a9644605148f836.jpg) +(c) Hysure + +![](images/c130ca0ba922e3df43132febd9899a51ae84869bd454a292e7dab3a05f7354c6.jpg) +(d) DHIF-Net + +![](images/905a4548f935d48ac1c46ac9759bae2d22e557a12a195757a6037b9b3c3d5ec9.jpg) +(e)DSPNet + +![](images/1f2294d143b15b2eaeeee098aa8b22e36581143fa1d6a5564a90205f1e560c13.jpg) +(f) MIMO-SST + +![](images/ed56652a29d1bcec552cbc4753797359d528082e5eaa37e6a2085ee837143221.jpg) +(g)Mog-DCN + +![](images/34b74bb2ba5edad566495efb296f44b36f4eaa16b5e87101ac2c6ab951970c21.jpg) +(h) LRTN + +![](images/5d50d8ee531ba7e6b0ff7420e8518ce7306c2db2b757cda6e1e179f503340647.jpg) +(i) Ours + +
| Metric | Hysure [36] | NSSR [11] | FusionMamba [33] | DHIF-Net [20] | DSPNet [37] | MIMO-SST [13] | Mog-DCN [12] | LRTN [30] | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| QNR | 0.9456 | 0.8944 | 0.9626 | 0.9717 | 0.9722 | 0.9714 | 0.9720 | 0.9718 | **0.9733** |
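For reference, QNR is a no-reference index built from a spectral distortion term $D_{\lambda}$ and a spatial distortion term $D_{s}$; its standard definition from the pansharpening literature (not restated in this paper, so the exponents shown are the conventional defaults) is

$$
QNR = \left(1 - D_{\lambda}\right)^{\alpha} \left(1 - D_{s}\right)^{\beta},
$$

with $\alpha = \beta = 1$ in common practice, so QNR lies in $[0, 1]$ and higher is better.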
+ +Table 3. Ablation study on network depth. The best result is marked in bold font. + +
| Model | PSNR | SAM | UIQI | SSIM |
| --- | --- | --- | --- | --- |
| N = 1 | 48.2703 | 2.3074 | 0.9731 | 0.9943 |
| N = 2 | 48.7428 | 2.2172 | 0.9745 | 0.9950 |
| N = 3 | 49.0562 | 2.2477 | 0.9750 | 0.9951 |
| N = 4 | 49.3881 | 2.1816 | 0.9752 | **0.9954** |
| N = 5 | **49.5214** | **2.1693** | **0.9753** | **0.9954** |
+ +Table 4. Ablation study on key modules. The best result is marked in bold font. + +
| Model | FLOPs (G) | PSNR | SAM | UIQI | SSIM |
| --- | --- | --- | --- | --- | --- |
| w/o multi-scale | 84.58 | 48.0473 | 2.4111 | 0.9713 | 0.9937 |
| w/o selection | 86.65 | 49.3135 | 2.2193 | 0.9751 | 0.9953 |
| w/o B and R | **81.22** | 49.3412 | 2.2054 | **0.9752** | 0.9953 |
| Ours | **81.22** | **49.3881** | **2.1816** | **0.9752** | **0.9954** |
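Plugging the experimental constants stated in the text ($H = W = 512$, $C_1 = 3$, $C_2 = 31$, $s = 8$, $k = 5$) into Eq. (10) confirms the negligible-cost claim; a quick arithmetic check:

```python
# Sanity check of Eq. (10) with the constants reported in the paper.
H = W = 512
C1, C2, s, k = 3, 31, 8, 5

flops_ssim = C1 * (5 * k**2 + 10) * H * W                # C1 (5k^2 + 10) HW
flops_sam = 6 * H * W // s**2 + 2 * C2 * H * W // s**2   # (6 + 2*C2) HW / s^2

total_g = (flops_ssim + flops_sam) / 1e9
print(round(total_g, 4))  # → 0.1064, below the claimed 0.11 GFLOPs
```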
+ +$C_1 = 3$, $C_2 = 31$, $s = 8$, and $k = 5$. Therefore, the FLOPs for a single SSG-SRL are less than 0.11 GFLOPs, which is almost negligible. + +# 4. Experiments + +# 4.1. Datasets and Evaluation Metrics + +We train the proposed SRLF-Net on three publicly available datasets: CAVE [43], Harvard [6], and Gaofen5. For quantitative evaluation, we employ four image assessment metrics: Peak Signal-to-Noise Ratio (PSNR), Spectral Angle Mapper (SAM), Universal Image Quality Index (UIQI [39]), and Structural Similarity Index Measure (SSIM [40]). + +# 4.2. Compared Methods + +We compare SRLF-Net with two classic traditional methods, Hysure [36] and NSSR [11], as well as several state-of-the-art DL-based methods: DAEM [17], FusionMamba [33], LRTN [30], Mog-DCN [12], DHIF-Net [20], DSPNet [37], and MIMO-SST [13]. + +![](images/0af434fc49a3913244b3417599a820c21be72a721be5e7cc1fad69f89cd0a4b0.jpg) +Figure 8. Robustness analysis of the ratio coefficient $\lambda$. + +![](images/9eab7b93994053307d482a2d372e4d91056c2b22c03f58afb83b24bcc44.jpg) + +# 4.3. Experimental Results on Simulated Data + +Table 1 records the test results of all methods on the CAVE and Harvard datasets. Our method achieves the best performance with fewer FLOPs and Params. Figures 5 and 6 illustrate the pseudo-color images and spectral error maps of the fusion results obtained from traditional approaches and some competitive DL-based methods. The magnified local regions of the images reveal that the results produced by our method have clearer texture details and higher spectral fidelity. Although Mog-DCN also achieves good visual effects on the Harvard dataset, its computational cost is relatively high. + +# 4.4. Experimental Results on Real Data + +For the Gaofen5 dataset, we perform spatial downsampling on the existing LRHSI and MSI according to [15] and [8] to obtain the input data for training, using the LRHSI as the GT.
During testing, we input the original LRHSI and MSI to obtain the HRHSI. Since the Gaofen5 dataset does not have a corresponding HRHSI, we use the no-reference image quality assessment metric QNR to quantitatively evaluate the performance of different methods, as shown in Table 2. Our method achieves the highest QNR. Additionally, Fig. 7 presents the visual results of the reconstructed images generated by traditional approaches and some competitive deep learning-based methods. The figure shows that the reconstruction visual quality of the deep learning methods is significantly better than that of the traditional methods. + +![](images/8a807cc568f9cdb4bdd7461dee4b3964e3d440024ad0fddc468fd3ebd505a6e4.jpg) +Figure 9. Visualization of the positions of distorted feature points under varied ratio coefficient $\lambda$ for different stages. Visualization of selection based on spatial structure shows the values of selected feature point positions in the original MSI, while the visualization based on spectral structure displays the corresponding values in the original LRHSI. Black areas represent unselected feature points. + +# 4.5. Ablation Study + +Impact of network depth $N$. The network depth $N$ determines the number of SRLFs. Table 3 records the results for $N$ ranging from 1 to 5. It shows that when $N \leq 4$, the evaluation metrics improve significantly as $N$ increases. For instance, the PSNR increases by at least 0.3134 dB (49.0562 dB - 48.7428 dB). However, when $N = 5$, the quality improvement of the fused image is not substantial, with the PSNR increasing by only 0.1333 dB compared to $N = 4$. Considering the computational cost and fusion quality, $N = 4$ is the optimal choice. + +Impact of Preliminary Fusion Module. In the Preliminary Fusion Module, we employ a multi-scale Mamba structure to enhance the global perception ability. To assess the performance of this structure, we substitute it with the structure shown in Fig. 3(b).
As indicated in Table 4, the PSNR of the model without the multi-scale structure decreases by $1.3408\,\mathrm{dB}$. + +Impact of SSG-SRL. SSG-SRL is designed to filter out distorted feature points for targeted learning. To verify the effect of this selective re-learning, we retain the network structure of the Selective Re-learning Module but remove the SSG-SRL. The experimental results are presented in Table 4. Removing the SSG-SRL not only decreases the PSNR but also increases the FLOPs by 5.43 GFLOPs. + +Impact of priors $C$ and $R$. SRLF-Net involves the spectral response function $R$ and the spatial response (Gaussian blur kernel) $C$ of the observation model (Eq. 3 and Eq. 7), which are assumed to be known. Here, we test the fusion results when $R$ and $C$ are unknown. We substitute a $5 \times 5$ strided convolutional layer for the spatial response function and a linear layer without bias for the spectral response function. As shown in Table 4, SRLF-Net achieves satisfactory fusion results even when $R$ and $C$ are unknown, with the PSNR reaching 49.3412 dB. + +Impact of the ratio coefficient $\lambda$. The ratio coefficient $\lambda$ determines the number of feature points entering the re-learning blocks. We adjust the value of $\lambda$ during inference from 0 to 1 to test its effect on network performance and FLOPs. As shown in Fig. 8, SAM remains relatively stable when $0.1 \leq \lambda \leq 0.3$. When $\lambda > 0.3$, SAM begins to fluctuate, and when $\lambda > 0.5$, SAM gradually increases. Meanwhile, the FLOPs grow linearly as $\lambda$ increases. In Fig. 9, we further visualize the selection results at different stages for various values of $\lambda$. From the figure, it can be seen that in all four stages, the feature points selected based on spatial quality tend to be located in areas with significant texture variation. In contrast, the feature points selected based on spectral quality differ across stages. + +# 5. Conclusion + +In this paper, we propose a novel Selective Re-learning Fusion (SRLF) module, which selects distorted feature points based on a preliminary fusion feature for re-learning. Specifically, SRLF first performs feature fusion through the Preliminary Fusion Module and then refines distorted feature points using the Selective Re-learning Module. To learn distorted feature points purposefully, we introduce a Spatial-Spectral Structure-Guided Selective Re-learning Mechanism (SSG-SRL), which mitigates the overfitting of accurate feature points, thereby enhancing fusion accuracy while reducing computational cost. Based on SRLF, we develop SRLF-Net for spatial-spectral fusion, which outperforms several state-of-the-art methods with minimal computational cost across multiple datasets. + +# References + +[1] Bruno Aiazzi, Luciano Alparone, Stefano Baronti, Andrea Garzelli, and Massimo Selva. Mtf-tailored multiscale fusion of high-resolution ms and pan imagery. Photogrammetric Engineering & Remote Sensing, 72(5):591-596, 2006. 2 +[2] Bruno Aiazzi, Stefano Baronti, and Massimo Selva. Improving component substitution pansharpening through multivariate regression of ms + pan data. IEEE Transactions on Geoscience and Remote Sensing, 45(10):3230-3239, 2007. 2 +[3] Naveed Akhtar and Ajmal Mian. Hyperspectral recovery from rgb images using gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1): 100-113, 2020. 2 +[4] Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17502-17511, 2022. 2 +[5] Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, and Luc Van Gool. Mst++: Multi-stage spectral-wise transformer for efficient spectral reconstruction.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 745-755, 2022. 2 +[6] Ayan Chakrabarti and Todd Zickler. Statistics of real-world hyperspectral images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 193-200, 2011. 7 +[7] Renwei Dian, Anjing Guo, and Shutao Li. Zero-shot hyperspectral sharpening. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(10):12650-12666, 2023. 3 +[8] Renwei Dian, Anjing Guo, and Shutao Li. Zero-shot hyperspectral sharpening. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(10):12650-12666, 2023. 7 +[9] Renwei Dian, Yuanye Liu, and Shutao Li. Hyperspectral image fusion via a novel generalized tensor nuclear norm regularization. IEEE Transactions on Neural Networks and Learning Systems, 2024. 3 +[10] Renwei Dian, Yuanye Liu, and Shutao Li. Spectral superresolution via deep low-rank tensor representation. IEEE Transactions on Neural Networks and Learning Systems, 2024. 2 +[11] Weisheng Dong, Fazuo Fu, Guangming Shi, Xun Cao, Jinjian Wu, Guangyu Li, and Xin Li. Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Transactions on Image Processing, 25(5):2337-2352, 2016. 5, 7 +[12] Weisheng Dong, Chen Zhou, Fangfang Wu, Jinjian Wu, Guangming Shi, and Xin Li. Model-guided deep hyperspectral image super-resolution. IEEE Transactions on Image Processing, 30:5754-5768, 2021. 3, 5, 7 +[13] Jian Fang, Jingxiang Yang, Abdolraheem Khader, and Liang Xiao. Mimo-sst: Multi-input multi-output spatial-spectral transformer for hyperspectral and multispectral image fu + +sion. IEEE Transactions on Geoscience and Remote Sensing, 62:1-20, 2024. 5, 7 +[14] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces, 2024. 3 +[15] Anjing Guo, Renwei Dian, and Shutao Li. Unsupervised blur kernel learning for pansharpening. 
In IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, pages 633-636, 2020. 2, 7 +[16] Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. Mambair: A simple baseline for image restoration with state-space model. In European conference on computer vision, pages 222-241. Springer, 2024. 3 +[17] Wen-jin Guo, Weiying Xie, Kai Jiang, Yunsong Li, Jie Lei, and Leyuan Fang. Toward stable, interpretable, and lightweight hyperspectral super-resolution. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22272-22281, 2023. 5, 7 +[18] Xian Guo, Xin Huang, Lefei Zhang, Liangpei Zhang, Antonio Plaza, and Jón Atli Benediktsson. Support tensor machines for classification of hyperspectral remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing, 54(6):3248-3264, 2016. 1 +[19] Danfeng Hong, Bing Zhang, Xuyang Li, Yuxuan Li, Chenyu Li, Jing Yao, Naoto Yokoya, Hao Li, Pedram Ghamisi, Xiuping Jia, Antonio Plaza, Paolo Gamba, Jon Atli Benediktsson, and Jocelyn Chanussot. Spectralgpt: Spectral remote sensing foundation model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(8):5227-5244, 2024. 1 +[20] Tao Huang, Weisheng Dong, Jinjian Wu, Leida Li, Xin Li, and Guangming Shi. Deep hyperspectral image fusion network with iterative spatio-spectral regularization. IEEE Transactions on Computational Imaging, 8:201-214, 2022. 5, 7 +[21] Jianxin Jia, Jinsong Chen, Xiaorou Zheng, Yueming Wang, Shanxin Guo, Haibin Sun, Changhui Jiang, Mika Karjalainen, Kirsi Karila, Zhiyong Duan, Tinghuai Wang, Chong Xu, Juha Hyyppä, and Yuwei Chen. Tradeoffs in the spatial and spectral resolution of airborne hyperspectral imaging systems: A crop identification case study. IEEE Transactions on Geoscience and Remote Sensing, 60:1-18, 2022. 1 +[22] Xiangtao Kong, Hengyuan Zhao, Yu Qiao, and Chao Dong. Classsr: A general framework to accelerate superresolution networks by data characteristic. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12016-12025, 2021. 2 +[23] Zeqiang Lai, Ying Fu, and Jun Zhang. Hyperspectral image super resolution with real unaligned rgb guidance. IEEE Transactions on Neural Networks and Learning Systems, 2024. 2 +[24] Charis Lanaras, Emmanuel Baltsavias, and Konrad Schindler. Hyperspectral super-resolution by coupled spectral unmixing. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 3586-3594, 2015. 2 +[25] Shutao Li, Renwei Dian, and Haibo Liu. Learning the external and internal priors for multispectral and hyperspectral + +image fusion. Science China Information Sciences, 66(4): 140303, 2023. 3 +[26] JG Liu. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. International Journal of remote sensing, 21(18): 3461-3472, 2000. 2 +[27] Jie Liu, Qisheng Feng, Tiangang Liang, Jianpeng Yin, Jinlong Gao, Jing Ge, Mengjing Hou, Caixia Wu, and Wenlong Li. Estimating the forage neutral detergent fiber content of alpine grassland in the tibetan plateau using hyperspectral data and machine learning algorithms. IEEE Transactions on Geoscience and Remote Sensing, 60:1-17, 2022. 1 +[28] Jinyang Liu, Renwei Dian, Shutao Li, and Haibo Liu. Sgfusion: A saliency guided deep-learning framework for pixel-level image fusion. Information Fusion, 91:205-214, 2023. 3 +[29] Jinyang Liu, Shutao Li, Haibo Liu, Renwei Dian, and Xiaohui Wei. A lightweight pixel-level unified image fusion network. IEEE Transactions on Neural Networks and Learning Systems, 2023. 3 +[30] Yuanye Liu, Renwei Dian, and Shutao Li. Low-rank transformer for high-resolution hyperspectral computational imaging. International Journal of Computer Vision, pages 1-16, 2024. 3, 5, 7 +[31] Shaohui Mei, Ge Zhang, Nan Wang, Bo Wu, Mingyang Ma, Yifan Zhang, and Yan Feng. Lightweight multiresolution feature fusion network for spectral super-resolution. 
IEEE Transactions on Geoscience and Remote Sensing, 61:1-14, 2023. 2 +[32] Frosti Palsson, Johannes R. Sveinsson, and Magnus O. Ulfarsson. Multispectral and hyperspectral image fusion using a 3-d-convolutional neural network. IEEE Geoscience and Remote Sensing Letters, 14(5):639-643, 2017. 3 +[33] Siran Peng, Xiangyu Zhu, Haoyu Deng, Liang-Jian Deng, and Zhen Lei. Fusionmamba: Efficient remote sensing image fusion with state space model. IEEE Transactions on Geoscience and Remote Sensing, 62:1-16, 2024. 3, 5, 7 +[34] María D. Raya-Sereno, María Alonso-Ayuso, José L. Pancorbo, José L. Gabriel, Carlos Camino, Pablo J. Zarco-Tejada, and Miguel Quemada. Residual effect and n fertilizer rate detection by high-resolution vnir-swir hyperspectral imagery and solar-induced chlorophyll fluorescence in wheat. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-17, 2022. 1 +[35] Vittala K Shettigara. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogrammetric Engineering and Remote Sensing, 58(5):561-567, 1992. 2 +[36] Miguel Simoes, José Bioucas-Dias, Luis B Almeida, and Jocelyn Chanussot. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Transactions on Geoscience and Remote Sensing, 53 (6):3373-3388, 2014. 5, 7 +[37] Yucheng Sun, Han Xu, Yong Ma, Minghui Wu, Xiaoguang Mei, Jun Huang, and Jiayi Ma. Dual spatial-spectral pyramid network with transformer for hyperspectral image fusion. IEEE Transactions on Geoscience and Remote Sensing, 2023. 3, 5, 7 + +[38] Yan Wang, Yi Liu, Shijie Zhao, Junlin Li, and Li Zhang. CAMixerSR: Only details need more "attention". In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25837-25846, 2024. 2 +[39] Zhou Wang and A.C. Bovik. A universal image quality index. IEEE Signal Processing Letters, 9(3):81-84, 2002. 7 +[40] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P.
Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612, 2004. 7 +[41] Qi Xie, Minghao Zhou, Qian Zhao, Zongben Xu, and Deyu Meng. Mhf-net: An interpretable deep network for multispectral and hyperspectral image fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1457-1473, 2022. 3 +[42] Kazuhiro Yamawaki and Xian-Hua Han. Lightweight hyperspectral image reconstruction network with deep feature hallucination. In Proceedings of the Asian Conference on Computer Vision, pages 164-178, 2022. 2 +[43] Fumihito Yasuma, Tomoo Mitsunaga, Daisuke Iso, and Shree K. Nayar. Generalized assorted pixel camera: Post-capture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing, 19(9):2241-2253, 2010. 7 +[44] Chong Zhang, Wenjing Liu, Juntao Li, Siqi Li, Lizhi Wang, Hua Huang, Yuanjin Zheng, Yongtian Wang, Jinli Suo, and Weitao Song. Tunable optimally-coded snapshot hyperspectral imaging for scene adaptation. *Laser & Photonics Reviews*, page 2401921, 2025. 3 +[45] Chong Zhang, Xianglei Liu, Lizhi Wang, Shining Ma, Yuanjin Zheng, Yue Liu, Hua Huang, Yongtian Wang, and Weitao Song. Lensless efficient snapshot hyperspectral imaging using dynamic phase modulation. Photonics Research, 13(11): 511, 2025. 3 +[46] Shipeng Zhang, Lizhi Wang, Lei Zhang, and Hua Huang. Learning tensor low-rank prior for hyperspectral image reconstruction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12006-12015, 2021. 2 +[47] Yaqiong Zhang, Yongming Xu, Wencheng Xiong, Ran Qu, Jiahua Ten, Qijia Lou, and Na Lv. Inversion study of heavy metals in soils of potentially polluted sites based on uav hyperspectral data and machine learning algorithms. In 2021 11th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), pages 1-5, 2021. 
1 \ No newline at end of file diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/images.zip b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..51f18276413956394948a632f6a2e7c3738ca4cc --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf9776a27a709054fa6945fe48dd03d9ea4a099cf9ef459ad8dcb8ebf5ed657e +size 914231 diff --git a/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/layout.json b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5a4cd5ad83abc3fb597ca80af381be382778854d --- /dev/null +++ b/CVPR/2025/A Selective Re-learning Mechanism for Hyperspectral Fusion Imaging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:607ce8af464c70ab8d29fe6c853c606d7a0f28da5acac47dcbb6bf26734cc6b7 +size 420093 diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_content_list.json b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..01d2a211989bc70294a229ce4e7d9846a25688c3 --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:355f3254cc93bb3d88b734da9f45a8e5b1d43ae952f2344e61f208c6daf119ad +size 82566 diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_model.json b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..28d4025056025f05a92a5e7b727d0d58633c9965 --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c94d78b5d86a008e06120c17fe280caba56cc1b47a46b7fc5ddbfc1d8f995486 +size 100823 diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_origin.pdf b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..40d9571e41a90d2bfa0ee8ee5484042a05a58cd3 --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/a07e730c-be1d-4068-a8a4-93e06f295be0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a263249e4a3de7cea24d0a1131dc275f4519087f1d7d6d2f23160013b653dca8 +size 959109 diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/full.md b/CVPR/2025/A Semantic Knowledge Complementarity 
based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..60943dba6f71a97ef411631aea703ceede4fb3b3 --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/full.md @@ -0,0 +1,322 @@ +# A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation + +Zheng Zhang†, Guanchun Yin†, Bo Zhang‡, Wu Liu‡, Xiuzhuang Zhou† and Wendong Wang‡ + +# Abstract + +The limited data annotations have made semi-supervised learning (SSL) increasingly popular in medical image analysis. However, the use of pseudo labels in SSL degrades the performance of decoders that heavily rely on high-accuracy annotations. This issue is particularly pronounced in class-imbalanced multi-organ segmentation tasks, where small organs may be under-segmented or even ignored. In this paper, we propose SKCDF, a semantic knowledge complementarity based decoupling framework for multi-organ segmentation in class-imbalanced medical images. SKCDF decouples the data flow based on the responsibilities of the encoder and decoder during model training to make the model effectively learn semantic features, while mitigating the negative impact of unlabeled data on the semantic segmentation task. We also design a semantic knowledge complementarity module that adopts labeled data to guide the generation of pseudo labels and enriches the semantic features of labeled data with unlabeled data, which improves the quality of generated pseudo labels and the robustness of the overall model. Furthermore, we design an auxiliary balanced segmentation head based training strategy to further enhance the segmentation performance of small organs. Experimental results on the Synapse and AMOS datasets show that our method significantly outperforms existing methods. 
+ +# 1. Introduction + +Computer-assisted diagnosis has been playing an increasingly significant role in the medical field, such as in surgical planning and radiation therapy. Due to the high efficiency of deep learning models, researchers have paid increasing attention to their application in medical data processing, such as image segmentation [5, 9, 20, 22, 24, 25, 29, 50]. However, deep learning models often require a large amount of annotated data for training, which is a labor-intensive and time-consuming process, especially in medical image segmentation, where annotators need to master certain professional knowledge. As a result, semi-supervised learning (SSL) has become increasingly popular in medical image segmentation due to its ability to reduce the time and effort required for image annotation. + +![](images/a968f849c683809c4d95be878643f082d793c99e24a6e8cae47c32798b897b1a.jpg) +Figure 1. (a-b) Comparison of architectures: previous methods used labeled and unlabeled data to train all parts of the model indiscriminately. Our method uses different data flows to train different parts of the model, suppressing the negative effects of unlabeled data on the model. (c-d) Comparison of training data flows: our method adopts unlabeled data to enrich the semantic features of labeled data, while using labeled data to guide the generation of pseudo-labels. + +Current SSL-based medical image segmentation methods [2, 4, 15, 34, 38, 41, 43, 48] mainly focus on how to make better use of unlabeled data and generate high-quality pseudo labels. However, as shown in Fig. 1 (a), these methods train all parts of the model using both labeled and unlabeled data indiscriminately, ignoring that unlabeled data may have negative effects due to the lack of accurate annotations.
In semantic segmentation tasks, the encoder is responsible for extracting the shape, texture, color, and other feature information from the input images, which is subsequently used by the decoder to generate segmentation results. Since the encoder itself does not directly participate in the segmentation decision-making process, it can still learn rich feature representations even without precise semantic segmentation labels. In contrast, the decoder is directly involved in segmentation decisions, so its training relies heavily on precise semantic segmentation labels. If the labels are inaccurate, the performance of the decoder will be severely affected. GenericSSL [32] proposed an aggregating & decoupling framework, which adopts a diffusion decoder to guide the diffusion encoder to capture distribution-invariant features, and it is the first to decouple the labeled and unlabeled data training flows to solve the overfitting issue. However, such processing is quite redundant in pure semi-supervised learning, and there is no detailed analysis of why decoupling data flows can improve the model's performance. Chen et al. [11] proposed an effective pseudo-labeling approach to segment unlabeled images in a prior-guided manner, but the complete independence between the two decoders may result in insufficient learning from the unlabeled data. Additionally, their practice of directly averaging the outputs from various decoders not only lacks interpretability but also reintroduces the negative effects of unlabeled data on semantic segmentation tasks. To address the above issues, as depicted in Fig. 1 (b), we propose a data flow decoupling framework, which adopts different data flows to train the different parts of the model so that the model can fully learn semantic features while suppressing the negative effects of unlabeled data on semantic segmentation tasks.
1 (c), most existing methods ignore the intrinsic relationship between labeled and unlabeled data, treating them as completely independent during training. In this way, deep learning models are likely to overfit the labeled data and fail to adequately learn from the large amount of unlabeled data. In fact, there is an intrinsic relationship between labeled and unlabeled data, especially in medical images, where the same organ in different individuals has a relatively fixed location and similar semantic information. Therefore, through the semantic knowledge complementarity between labeled and unlabeled data, the features of labeled data can be enriched, providing a more complex environment for learning from labeled data and preventing overfitting; conversely, because the labeled data has accurate manual labels, its more refined semantic knowledge can guide the generation of pseudo labels. Some studies [1, 7, 12, 33, 47] have noted this problem. AllSpark [33] adopted a channel-wise cross-attention mechanism to "reborn" labeled features from unlabeled ones. Unfortunately, this method generates pseudo labels through the unlabeled data flow and does not make full use of the existing high-accuracy annotations to guide pseudo-label generation. GuidedNet [47] proposed a 3D consistent Gaussian mixture model to leverage knowledge from labeled data to guide the training of unlabeled data. However, this method cannot solve the problem of overfitting to labeled data, resulting in suboptimal performance. In this paper, we design a semantic knowledge complementarity module, which adopts unlabeled data to enrich the features of labeled data, and labeled data to guide the generation of high-quality pseudo labels for unlabeled data. As shown in Fig. 1 (d), the features of both labeled and unlabeled data possess richer and more organ-distribution-related semantic knowledge.
Furthermore, the class-imbalance problem is severe in medical image segmentation tasks. Most previous methods [3, 8, 13, 18, 31, 39] are based on reweighting or resampling; reweighting requires additional calculations to iteratively determine new weights, and both strategies weaken the model's learning of the majority classes. Thus, inspired by ABC [17], we develop an auxiliary balanced segmentation head training strategy, which captures the features of small organs with auxiliary balanced segmentation heads and thereby improves the segmentation accuracy of small organs without sacrificing that of other organs. The main contributions can be summarized as follows:

- We propose a data flow decoupling framework for accurate multi-organ segmentation in class-imbalanced medical images, which can fully learn semantic features while suppressing the negative effects of unlabeled data on semantic segmentation tasks.
- We design a semantic knowledge complementarity module, which adopts unlabeled data to enrich the features of labeled data to improve the robustness of the model, and adopts labeled data to guide high-quality pseudo-label generation for unlabeled data.
- We develop an auxiliary balanced segmentation head training strategy, which improves the segmentation accuracy of small organs without losing the segmentation accuracy of other organs.
- We verify the effectiveness of our method on the Synapse and AMOS datasets. Experimental results show that our method achieves superior performance compared to several existing methods.

# 2. Related Work

# 2.1. Medical Image Segmentation

Ronneberger et al.
[24] proposed the U-Net network, which significantly improves the segmentation performance of medical images through an encoder-decoder structure and skip connections that fuse low-resolution and high-resolution semantic features, redefining the direction of deep learning in medical image segmentation. V-Net [23] proposed a volumetric, fully convolutional neural network for 3D image segmentation. Chen et al. [9] proposed TransUNet as a robust fusion of Transformer and U-Net, which is the first network to introduce transformers into the field of medical image segmentation. Subsequently, more and more models [5, 14, 20, 25, 44, 50] have been proposed for medical image segmentation, mainly focusing on designing network structures adapted to the task. Recently, several methods [26, 29, 30, 36, 45, 46, 49] have begun to train networks using medical prior knowledge. For example, Wang et al. [29] proposed a novel network for abdominal multi-organ segmentation, which incorporates radiologists' gaze information to boost high-precision segmentation. However, these methods often require a large amount of annotated data to train the model.

![](images/9a31c97a1ee572c77530309c7079145fd15c9ae5dae9c17b4222dc4586246a96.jpg)
Figure 2. Overview of the proposed semantic knowledge complementarity based decoupling framework.

# 2.2. Semi-supervised Medical Image Segmentation

Semi-supervised medical image segmentation methods can be roughly classified into two categories: pseudo-labeling methods [6, 16, 19, 28, 41] and consistency regularization based methods [4, 15, 27, 34, 38, 40]. The basic pseudo-labeling method [16] first trains the model on labeled data, then generates pseudo labels for the unlabeled data in each iteration and fine-tunes the model on them.
Mean teacher [27] is a widely adopted consistency regularization based method for semi-supervised medical image segmentation, which learns from unlabeled data through consistency training between the teacher model and the student model. UAMT [43] proposed a novel uncertainty-aware scheme that enables the student model to gradually learn from meaningful and reliable targets by exploiting uncertainty information. BCP [1] addressed the empirical mismatch between labeled and unlabeled data distributions by copying and pasting labeled and unlabeled data bidirectionally. In fact, most of the leading methods combine pseudo-labeling and consistency regularization. UMCT [42] proposed an uncertainty-aware multi-view co-training scheme to efficiently utilize unlabeled data. DHC [31] proposed a novel Dual-debiased Heterogeneous Co-training framework to solve the imbalanced class distribution problem. MCF [37] proposed a mutual correction framework to explore network bias correction in semi-supervised medical image segmentation.

# 3. Method

The overall framework is illustrated in Fig. 2. Our method adopts the basic pseudo-labeling framework as the baseline, where the unlabeled data is assigned pseudo labels generated from the current model's predictions. On top of this baseline, we adopt a data flow decoupling framework, which allows the encoder to learn from all the data, while the two decoders learn the features of the labeled data and the unlabeled data separately. To achieve mutual guidance between labeled and unlabeled data, we incorporate a semantic knowledge complementarity module. Then, we propose an auxiliary balanced segmentation head training strategy to alleviate the class imbalance problem in semi-supervised medical image segmentation.

# 3.1. 
Data Flow Decoupling Framework

In previous semi-supervised learning methods, both labeled and unlabeled data are processed by the same structure, the corresponding losses are calculated, and a weight coefficient balances the loss of labeled and unlabeled data. In general, this weight coefficient is set very small, so the model learns little from the unlabeled data and improves only marginally over training with labeled data alone. However, setting the weight coefficient too large can make the model's performance worse. This is because unlabeled data can still help the model learn more data features, but, lacking accurate ground truth, it is difficult for unlabeled data to make the model perform well on the semantic segmentation task. Therefore, we propose an end-to-end framework that makes full use of unlabeled data to train the model to learn data features, while reducing the negative impact of unlabeled data on the semantic segmentation task.

As illustrated in Fig. 2, we adopt a VNet consisting of an encoder, a labeled decoder, and an unlabeled decoder, denoted by $E(\cdot)$, $D_{l}(\cdot)$, and $D_{u}(\cdot)$, respectively. A batch of input data $X$ contains equal amounts of labeled data $(X_{L}, Y_{L})$ and unlabeled data $X_{U}$, and we first generate a prediction for the labeled data as:

$$
P_{L} = D_{l}\left(E\left(X_{L}\right)\right) \tag{1}
$$

Thus, the supervised loss is:

$$
L_{sup} = \frac{1}{|B_{l}|} \sum_{i=1}^{B_{l}} L_{DiceCE}\left(P_{L}, Y_{L}\right) \tag{2}
$$

where $B_{l}$ denotes the number of labeled data in a batch, and $L_{DiceCE}(x,y) = \frac{1}{2}[L_{Dice}(x,y) + L_{CE}(x,y)]$ is the combined Dice and cross-entropy loss.
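To make the decoupled data flow concrete, the following is a minimal PyTorch-style sketch (hypothetical module and variable names, not the authors' released implementation) covering the forward passes and losses of Eqs. (1)-(5) and the EMA-style decoder knowledge transfer of Eq. (6); the default $\lambda_u = 10$ matches the paper's setting:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    # L_DiceCE = (L_Dice + L_CE) / 2 -- a simplified sketch of the combined loss
    num_classes = logits.shape[1]
    ce = F.cross_entropy(logits, target)
    probs = logits.softmax(dim=1)
    one_hot = F.one_hot(target, num_classes).movedim(-1, 1).float()
    spatial = tuple(range(2, logits.dim()))
    inter = (probs * one_hot).sum(spatial)
    denom = probs.sum(spatial) + one_hot.sum(spatial)
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return 0.5 * (dice + ce)

def train_step(encoder, dec_l, dec_u, x_l, y_l, x_u, lambda_u=10.0):
    # Supervised branch: labeled data through the labeled decoder, Eqs. (1)-(2).
    loss_sup = dice_ce_loss(dec_l(encoder(x_l)), y_l)
    # Pseudo labels come from the labeled decoder; gradients are blocked, Eq. (3).
    with torch.no_grad():
        y_hat_u = dec_l(encoder(x_u)).argmax(dim=1)
    # Unsupervised branch: only the unlabeled decoder trains on them, Eq. (4).
    loss_unsup = dice_ce_loss(dec_u(encoder(x_u)), y_hat_u)
    return loss_sup + lambda_u * loss_unsup       # overall loss, Eq. (5)

@torch.no_grad()
def knowledge_transfer(dec_l, dec_u, w_ema=0.99):
    # xi_u <- w_ema * xi_u + (1 - w_ema) * xi_l, Eq. (6)
    for p_u, p_l in zip(dec_u.parameters(), dec_l.parameters()):
        p_u.mul_(w_ema).add_(p_l, alpha=1.0 - w_ema)
```

Calling `knowledge_transfer` after each iteration slowly syncs the unlabeled decoder toward the labeled one, so the unlabeled data never back-propagates into the labeled decoder while the labeled decoder's semantic knowledge still reaches the unlabeled branch.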
Then, we can generate the pseudo label $\hat{Y}_U$ for the unlabeled data and the prediction of the unlabeled data $P_U$ as:

$$
\hat{Y}_{U} = D_{l}\left(E\left(X_{U}\right)\right), \quad P_{U} = D_{u}\left(E\left(X_{U}\right)\right) \tag{3}
$$

Then, we can get the unsupervised loss as:

$$
L_{unsup} = \frac{1}{|B_{u}|} \sum_{i=1}^{B_{u}} L_{DiceCE}\left(P_{U}, \hat{Y}_{U}\right) \tag{4}
$$

where $B_{u}$ denotes the number of unlabeled data in a batch. The optimization target is to minimize the overall loss, which can be formulated as:

$$
L = L_{sup} + \lambda_{u} L_{unsup} \tag{5}
$$

where $\lambda_{u}$ is a hyper-parameter that adjusts the influence of the unsupervised loss on the model. Since the encoder is trained on both labeled and unlabeled data, it can learn rich semantic features. We decouple the labeled and unlabeled data flows into the two decoders: the unlabeled data does not pass through the labeled decoder, and therefore cannot degrade its performance on the semantic segmentation task. It is important to note that the semantic knowledge contained in unlabeled data may still improve the decoder's performance in other ways. An intuitive idea is to use unlabeled data to fine-tune the labeled decoder after pre-training, but this approach increases the training cost and inevitably weakens the semantic segmentation capability of the labeled decoder. So we use a knowledge transfer strategy for the decoder as:

$$
\xi_{u} = \omega_{ema} \times \xi_{u} + (1 - \omega_{ema}) \times \xi_{l} \tag{6}
$$

where $\xi_{l}$ and $\xi_{u}$ denote the parameters of the labeled and unlabeled decoders, respectively, and $\omega_{ema}$ is a hyper-parameter. In this way, we can obtain an encoder and a decoder with rich semantic information, which will be discussed in detail in Sec. 4.4.

# 3.2. 
Semantic Knowledge Complementarity Module

In order to make full use of unlabeled data to enrich the semantic features of labeled data, and to use labeled data to guide the generation of pseudo labels for unlabeled data, we propose a semantic knowledge complementarity module, which improves the model's ability to capture long- and short-range relationships, enabling efficient mutual learning of semantic knowledge between labeled and unlabeled data. As shown in Fig. 2, we decouple our semantic knowledge complementarity module into local channel-wise cross-attention and global channel-wise cross-attention, whose detailed architecture is shown in Fig. 3.

Local Channel-wise Cross-attention. Given the hidden features $h^l, h^u \in \mathbb{R}^{C \times D \times H \times W}$ of labeled and unlabeled data, we reshape and map them into $e^l, e^u \in \mathbb{R}^{\frac{DHW}{P^3} \times P^3 \times C}$, where $(P, P, P)$ is the resolution of each channel patch. Then, we calculate the query, key, and value as follows:

$$
q^{l} = e^{l} w_{q}, \quad k^{u} = e^{u} w_{k}, \quad v^{u} = e^{u} w_{v} \tag{7}
$$

$$
q^{u} = e^{u} w_{q}, \quad k^{l} = e^{l} w_{k}, \quad v^{l} = e^{l} w_{v} \tag{8}
$$

where $w_{q}, w_{k}, w_{v} \in \mathbb{R}^{C \times NC}$ are transformation weights and $N$ is the number of heads. The local channel-wise cross-attention is defined as:

$$
\hat{e}^{l} = \sigma\left[\psi\left(q^{l}\right)^{\top} k^{u}\right] \left(v^{u}\right)^{\top} w_{out} \tag{9}
$$

$$
\hat{e}^{u} = \sigma\left[\psi\left(q^{u}\right)^{\top} k^{l}\right] \left(v^{l}\right)^{\top} w_{out} \tag{10}
$$

where $\psi(\cdot)$ and $\sigma(\cdot)$ denote instance normalization and the softmax function, respectively, and $w_{out} \in \mathbb{R}^{NC \times C}$ is the output projection weight.

Global Channel-wise Cross-attention. 
Given the hidden features $h^l, h^u$ of labeled and unlabeled data, we calculate the query, key, value, and the global channel-wise cross-attention similarly to Eqs. (7) to (10). The only difference is that we reshape and map the hidden features into $e^l, e^u \in \mathbb{R}^{DHW \times C}$. Thus, the whole process of our semantic knowledge complementarity module can be expressed as follows:

$$
\hat{h} = \operatorname{Conv}_{L}\left(LN\left(\operatorname{Attention}_{L}(h) + h\right)\right) \tag{11}
$$

$$
\hat{\hat{h}} = \operatorname{Conv}_{G}\left(LN\left(\operatorname{Attention}_{G}(\hat{h}) + \hat{h}\right)\right) \tag{12}
$$

where $h$ denotes the hidden features of labeled and unlabeled data after the encoder, $LN(\cdot)$ denotes layer normalization, $\operatorname{Attention}_{L}(\cdot)$ denotes the local channel-wise cross-attention, $\operatorname{Attention}_{G}(\cdot)$ denotes the global channel-wise cross-attention, and $\operatorname{Conv}_{L}(\cdot)$ and $\operatorname{Conv}_{G}(\cdot)$ denote $1 \times 1$ convolutions, respectively. Then, we input the features into the labeled decoder to generate predictions for the labeled data and pseudo labels for the unlabeled data. When we generate predictions for the unlabeled data, we replace the above operations with the corresponding self-attention mechanism.

![](images/c8ce37435de644e8a92426ed1ac0825f0cdb4f3694ea6ea51dccb0d138367f48.jpg)
Figure 3. The detailed architecture of our proposed semantic knowledge complementarity module, which contains local channel-wise cross-attention and global channel-wise cross-attention. The different colors of $Q$, $K$, and $V$ indicate that they come from different data flows. For ease of visualization, the middle shows a 2D schematic. Panel (a) performs attention operations across different channels of the two data flows, and (b) performs attention operations within the same patch of different channels of the two data flows.
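To make the channel-wise attention pattern concrete, here is a minimal single-head ($N=1$) sketch of Eqs. (7)-(10) with hypothetical tensor names: the softmax mixes channels rather than tokens, and the transpose back to token-major layout that Eq. (9) leaves implicit is written out explicitly.

```python
import torch

def channel_cross_attention(e_q, e_kv, w_q, w_k, w_v, w_out, eps=1e-6):
    """Single-head channel-wise cross-attention, in the spirit of Eqs. (7)-(10).

    e_q:  (T, C) tokens of one data flow (supplies the query).
    e_kv: (T, C) tokens of the other flow (supplies key and value).
    w_q, w_k, w_v: (C, C') projections; w_out: (C', C) output projection.
    """
    q = e_q @ w_q                       # (T, C')
    k = e_kv @ w_k                      # (T, C')
    v = e_kv @ w_v                      # (T, C')
    # psi: normalize each query channel over the token dimension
    q = (q - q.mean(dim=0)) / (q.std(dim=0) + eps)
    # Attention weights live on a (C', C') channel-to-channel grid,
    # not on a (T, T) token grid as in standard self-attention.
    attn = torch.softmax(q.T @ k, dim=-1)       # (C', C')
    return (attn @ v.T).T @ w_out               # (T, C)
```

Calling this with `(e_q, e_kv) = (e^l, e^u)` gives $\hat{e}^l$ and the reverse gives $\hat{e}^u$; because the attention matrix scales with the number of channels rather than the number of voxels, this stays tractable on 3D feature maps.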
Through the above methods, we adopt labeled data to guide unlabeled data in generating high-quality pseudo labels, and unlabeled data to enrich the features of labeled data to improve the robustness of the model.

# 3.3. Auxiliary Balanced Segmentation Head Training Strategy

The class-imbalance problem is very serious in medical image segmentation. Thus, inspired by ABC [17], we propose a method to improve the model's segmentation accuracy on the minority organs without weakening the learning of the majority organs.

As shown in Fig. 2, we first replace the last layer of each of the two decoders with two segmentation heads. Thus, we get a total of four segmentation heads, $S_{l}^{m}(\cdot)$, $S_{l}^{a}(\cdot)$, $S_{u}^{m}(\cdot)$, and $S_{u}^{a}(\cdot)$. For the main segmentation heads $S_{l}^{m}(\cdot)$ and $S_{u}^{m}(\cdot)$, the loss function is calculated as in Eq. (2) and Eq. (4). The main difference lies in the loss calculation of the auxiliary balanced segmentation heads. The predictions of the two auxiliary balanced segmentation heads and the pseudo label are expressed as:

$$
P_{L}^{A} = S_{l}^{a}\left(D_{l}\left(E\left(X_{L}\right)\right)\right) \tag{13}
$$

$$
P_{U}^{A} = S_{u}^{a}\left(D_{u}\left(E\left(X_{U}\right)\right)\right) \tag{14}
$$

$$
\hat{Y}_{U}^{A} = S_{l}^{a}\left(D_{l}\left(E\left(X_{U}\right)\right)\right) \tag{15}
$$

We first calculate the number of voxels of each organ in all labeled data before training.
To train the auxiliary balanced segmentation head $S_{l}^{a}(\cdot)$ to be balanced, we generate a $0/1$ mask $M(X_{L})$ for each labeled data using a Bernoulli distribution $B(\cdot)$ with its parameter set to be inversely proportional to the number of voxels of each class, as follows:

$$
M\left(X_{L}\right) \sim B\left(\frac{N_{L}}{N_{C}}\right) \tag{16}
$$

where $N_{L}$ denotes the total number of voxels in the labeled data, and $N_{C}$ denotes the total number of voxels belonging to the class of the corresponding position in the labeled data. This setting makes $B(\cdot)$ generate mask value 1 with high probability for voxels in the minority organs, but with low probability for those in the majority organs. Then, the supervised balanced loss is multiplied by the generated mask, which can be expressed as:

$$
L_{sup}^{\text{balance}} = \frac{1}{|B_{l}|} \sum_{i=1}^{B_{l}} M\left(X_{L}\right) L_{DiceCE}\left(P_{L}^{A}, Y_{L}\right) \tag{17}
$$

To train the auxiliary balanced segmentation head $S_{u}^{a}(\cdot)$ to be balanced, we generate a $0/1$ mask $M(X_{U})$ for each unlabeled data using a Bernoulli distribution parameterized by the pseudo label. To take full advantage of the unlabeled voxels in the early stage of training, we gradually decrease the parameter of the Bernoulli distribution $B(\cdot)$ for $X_{U}$ from 1 to $\frac{N_L}{N_C}$ as:

$$
M(X_{U}) \sim B\left(1 - \frac{\mathrm{epochs}}{\mathrm{max\_epochs}}\left(1 - \frac{N_{L}}{N_{C}}\right)\right) \tag{18}
$$

where $\mathrm{epochs}$ and $\mathrm{max\_epochs}$ denote the current epoch and the maximum epoch, respectively.
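Note that a raw ratio of total-to-class voxel counts is at least 1, so a practical realization normalizes the inverse-frequency ratio into $[0, 1]$, e.g. $p_c = \min_c N_C / N_C$ in the style of ABC [17]. The sketch below (hypothetical names, not the authors' code) takes that reading and also implements the annealing of Eq. (18):

```python
import torch

def balanced_mask(labels, class_voxel_counts, epoch=None, max_epochs=None):
    """Sample a per-voxel 0/1 mask in the spirit of Eqs. (16) and (18).

    labels: integer label map (ground truth for X_L, pseudo labels for X_U).
    class_voxel_counts: 1-D tensor of per-class voxel counts in the labeled set.
    Minority classes get keep-probability close to 1; majority classes get
    min_count / count (an ABC-style normalization of the inverse frequency).
    """
    counts = class_voxel_counts.float().clamp(min=1.0)
    p_class = counts.min() / counts            # in (0, 1], inverse frequency
    p = p_class[labels]                        # per-voxel keep probability
    if epoch is not None:                      # unlabeled branch, Eq. (18):
        p = 1.0 - (epoch / max_epochs) * (1.0 - p)  # anneal from 1 down to p
    return torch.bernoulli(p)
```

The resulting mask multiplies the voxel-wise loss of the auxiliary heads (Eqs. (17) and (19)), so majority-organ voxels are stochastically down-weighted while minority-organ voxels are almost always kept.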
Then the unsupervised balanced loss can be expressed as:

$$
L_{unsup}^{\text{balance}} = \frac{1}{|B_{u}|} \sum_{i=1}^{B_{u}} M\left(X_{U}\right) L_{DiceCE}\left(P_{U}^{A}, \hat{Y}_{U}^{A}\right) \tag{19}
$$

The final loss function is updated as:

$$
L = L_{sup} + L_{sup}^{\text{balance}} + \lambda_{u}\left(L_{unsup} + L_{unsup}^{\text{balance}}\right) \tag{20}
$$

Through the above methods, the auxiliary segmentation heads can learn the semantic information of minority organs well, but the segmentation performance on the majority organs would be affected. Therefore, we adopt a knowledge transfer strategy similar to Sec. 3.1 for the two main segmentation heads.

# 4. Experiments

# 4.1. Dataset and Pre-processing

Synapse dataset. The Synapse dataset has 13 foreground classes, including spleen (Sp), right kidney (RK), left kidney (LK), gallbladder (Ga), esophagus (Es), liver (Li), stomach (St), aorta (Ao), inferior vena cava (IVC), portal & splenic veins (PSV), pancreas (Pa), right adrenal gland (RAG), and left adrenal gland (LAG), plus background, and contains 30 axial contrast-enhanced abdominal CT scans. Following DHC [31], we resample all the data to $80 \times 160 \times 160$, and randomly split them into 20, 4, and 6 scans for training, validation, and testing, respectively.

AMOS dataset. Compared with Synapse, the AMOS dataset excludes PSV but adds three new classes: duodenum (Du), bladder (Bl), and prostate/uterus (P/U). Its 360 scans are divided into 216, 24, and 120 scans for training, validation, and testing.

# 4.2. Implementation Details

We conduct our experiments using PyTorch 2.4.1, CUDA 12.6, and a single NVIDIA 3090 GPU.
The network parameters are optimized with SGD with a momentum of 0.9 and an initial learning rate (lr) of 0.3, decayed with a polynomial schedule: $lr = \text{base\_lr} \times (1 - \frac{\text{epochs}}{\text{max\_epochs}})^{0.9}$. In the training stage, we randomly crop volumes of size $64 \times 128 \times 128$ and do not use any other data augmentation. We train the networks for 1500 epochs with a batch size of 4, consisting of 2 labeled and 2 unlabeled samples. We empirically set $\lambda_u$ to 10 and $\omega_{ema}$ to 0.99. We run experiments on Synapse three times with different seeds to eliminate the effect of randomness due to the limited number of samples. In the inference stage, we use the unlabeled decoder for prediction. Final segmentation results are obtained using a sliding-window strategy with a stride of $32 \times 32 \times 16$. We choose two evaluation metrics: Dice Score (\%) and Average Surface Distance (ASD) in voxels. Given two object regions, Dice computes the percentage of overlap between them, while ASD computes the average distance between their boundaries.

# 4.3. Comparison with Existing Methods

We compare our method with several state-of-the-art semi-supervised segmentation methods [3, 6, 8, 10, 13, 18, 21, 31, 32, 35, 39, 41, 43], some of which include class-imbalance-aware designs [3, 8, 13, 18, 31, 32, 39].

In Tab. 1, we summarize the results on the $20\%$ labeled Synapse dataset. We can observe that general semi-supervised methods perform very poorly on small organs, where the average Dice may even be 0. The methods that consider the class imbalance problem achieve better results on some minority classes, but they still fail to capture the features of some other small organs, such as the esophagus, right adrenal gland, and left adrenal gland.
Our proposed method achieves the best results in terms of average Dice ($64.27\%$) and average ASD ($1.45$), with significant improvements on several complex and small organs, such as the stomach ($\uparrow 9.6\%$), pancreas ($\uparrow 7.5\%$), and left adrenal gland ($\uparrow 10.6\%$).

We then evaluate our method on the $5\%$ labeled AMOS dataset. As shown in Tab. 2, our method achieves the best average Dice ($53.81\%$), a $3.78\%$ improvement over the previous state-of-the-art result. As on the Synapse dataset, the class imbalance problem persists, but our method maintains superior performance on complex and small organs, such as the esophagus ($\uparrow 5.5\%$), stomach ($\uparrow 2.6\%$), and right adrenal gland ($\uparrow 2.8\%$).

The visual segmentation results of the various methods are presented in Fig. 4. It is evident that the compared methods tend to either over-segment or under-segment, while our method performs better, especially on minority organs.

# 4.4. Ablation Studies

Effectiveness of each component. We conduct ablation studies to show the impact of each component in our framework; the results are shown in Tab. 3. The first row in Tab. 3 represents the basic pseudo-labeling baseline. Compared to the baseline, employing the data flow decoupling framework improves the average Dice by $12.74\%$. When we add the semantic knowledge complementarity module and the auxiliary balanced segmentation head training strategy individually to the data flow decoupling framework, the average Dice increases by $1.77\%$ and $7.17\%$, respectively. It is worth pointing out that our ABSH brings a significant improvement on small organs, especially the esophagus, where the average Dice increases from $0.0\%$ to $62.0\%$. Compared to the baseline, our complete method improves the average Dice by $19.61\%$.
Design choices of inference decoder and knowledge
| Group | Method | Avg. Dice | Avg. ASD | Sp | RK | LK | Ga | Es | Li | St | Ao | IVC | PSV | Pa | RAG | LAG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | VNet (fully) | 62.09 ± 1.2 | 10.28 ± 3.9 | 84.6 | 77.2 | 73.8 | 73.3 | 38.2 | 94.6 | 68.4 | 72.1 | 71.2 | 58.2 | 48.5 | 17.9 | 29.0 |
| General | UA-MT [43] | 20.26 ± 2.2 | 71.67 ± 7.4 | 48.2 | 31.7 | 22.2 | 0.0 | 0.0 | 81.2 | 29.1 | 23.3 | 27.5 | 0.0 | 0.0 | 0.0 | 0.0 |
| General | URPC [21] | 25.68 ± 5.1 | 72.74 ± 15.5 | 66.7 | 38.2 | 56.8 | 0.0 | 0.0 | 85.3 | 33.9 | 33.1 | 14.8 | 0.0 | 5.1 | 0.0 | 0.0 |
| General | CPS [10] | 33.55 ± 3.7 | 41.21 ± 9.1 | 62.8 | 55.2 | 45.4 | 35.9 | 0.0 | 91.1 | 31.3 | 41.9 | 49.2 | 8.8 | 14.5 | 0.0 | 0.0 |
| General | SS-Net [41] | 35.08 ± 2.8 | 50.81 ± 6.5 | 62.7 | 67.9 | 60.9 | 34.3 | 0.0 | 89.9 | 20.9 | 61.7 | 44.8 | 0.0 | 8.7 | 4.2 | 0.0 |
| General | DST [6] | 34.47 ± 1.6 | 37.69 ± 2.9 | 57.7 | 57.2 | 46.4 | 43.7 | 0.0 | 89.0 | 33.9 | 43.3 | 46.9 | 9.0 | 21.0 | 0.0 | 0.0 |
| General | DePL [35] | 36.27 ± 0.9 | 36.02 ± 0.8 | 62.8 | 61.0 | 48.2 | 54.8 | 0.0 | 90.2 | 36.0 | 42.5 | 48.2 | 10.7 | 17.0 | 0.0 | 0.0 |
| Imbalance | Adsh [13] | 35.29 ± 0.5 | 39.61 ± 4.6 | 55.1 | 59.6 | 45.8 | 52.2 | 0.0 | 89.4 | 32.8 | 47.6 | 53.0 | 8.9 | 14.4 | 0.0 | 0.0 |
| Imbalance | CReST [39] | 38.33 ± 3.4 | 22.85 ± 9.0 | 62.1 | 64.7 | 53.8 | 43.8 | 8.1 | 85.9 | 27.2 | 54.4 | 47.7 | 14.4 | 13.0 | 18.7 | 4.6 |
| Imbalance | SimiS [8] | 40.0 ± 0.6 | 32.98 ± 0.5 | 62.3 | 69.4 | 50.7 | 61.4 | 0.0 | 87.0 | 33.0 | 59.0 | 57.2 | 29.2 | 11.8 | 0.0 | 0.0 |
| Imbalance | Basak et al. [3] | 33.24 ± 0.6 | 43.78 ± 2.5 | 57.4 | 53.8 | 48.5 | 46.9 | 0.0 | 87.8 | 28.7 | 42.3 | 45.4 | 6.3 | 15.0 | 0.0 | 0.0 |
| Imbalance | CLD [18] | 41.07 ± 1.2 | 32.15 ± 3.3 | 62.0 | 66.0 | 59.3 | 61.5 | 0.0 | 89.0 | 31.7 | 62.8 | 49.4 | 28.6 | 18.5 | 0.0 | 0.0 |
| Imbalance | DHC [31] | 48.61 ± 0.9 | 10.71 ± 2.6 | 62.8 | 69.5 | 59.2 | 66.0 | 13.2 | 85.2 | 36.9 | 67.9 | 61.5 | 37.0 | 30.9 | 31.4 | 10.6 |
| Imbalance | GenericSSL [32] | 60.88 ± 0.7 | 2.52 ± 0.4 | 85.2 | 66.9 | 67.0 | 52.7 | 62.9 | 89.6 | 52.1 | 83.0 | 74.9 | 41.8 | 43.4 | 44.8 | 27.2 |
| | SKCDF (Ours) | 64.27 ± 1.36 | 1.45 ± 0.09 | 79.5 | 72.1 | 67.6 | 59.8 | 60.7 | 93.3 | 61.7 | 85.4 | 78.5 | 41.8 | 50.9 | 46.4 | 37.8 |

Table 1. Quantitative comparison between our approach and SSL segmentation methods on the $20\%$ labeled Synapse dataset. 'General' or 'Imbalance' indicates whether the methods consider the class imbalance issue or not. Sp: spleen, RK: right kidney, LK: left kidney, Ga: gallbladder, Es: esophagus, Li: liver, St: stomach, Ao: aorta, IVC: inferior vena cava, PSV: portal & splenic veins, Pa: pancreas, RAG: right adrenal gland, LAG: left adrenal gland. Results of 3-times repeated experiments are reported in the 'mean ± std' format.
| Group | Method | Avg. Dice | Avg. ASD | Sp | RK | LK | Ga | Es | Li | St | Ao | IVC | Pa | RAG | LAG | Du | Bl | P/U |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | VNet (fully) | 76.50 | 2.01 | 92.2 | 92.2 | 93.3 | 65.5 | 70.3 | 95.3 | 82.4 | 91.4 | 85.0 | 74.9 | 58.6 | 58.1 | 65.6 | 64.4 | 58.3 |
| General | UA-MT [43] | 42.16 | 15.48 | 59.8 | 64.9 | 64.0 | 35.3 | 34.1 | 77.7 | 37.8 | 61.0 | 46.0 | 33.3 | 26.9 | 12.3 | 18.1 | 29.7 | 31.6 |
| General | URPC [21] | 44.93 | 27.44 | 67.0 | 64.2 | 67.2 | 36.1 | 0.0 | 83.1 | 45.5 | 67.4 | 54.4 | 46.7 | 0.0 | 29.4 | 35.2 | 44.5 | 33.2 |
| General | CPS [10] | 41.08 | 20.37 | 56.1 | 60.3 | 59.4 | 33.3 | 25.4 | 73.8 | 32.4 | 65.7 | 52.1 | 31.1 | 25.5 | 6.2 | 18.4 | 40.7 | 35.8 |
| General | SS-Net [41] | 33.88 | 54.72 | 65.4 | 68.3 | 69.9 | 37.8 | 0.0 | 75.1 | 33.2 | 68.0 | 56.6 | 33.5 | 0.0 | 0.0 | 0.0 | 0.2 | 0.2 |
| General | DST [6] | 41.44 | 21.12 | 58.9 | 63.3 | 63.8 | 37.7 | 29.6 | 74.6 | 36.1 | 66.1 | 49.9 | 32.8 | 13.5 | 5.5 | 17.6 | 39.1 | 33.1 |
| General | DePL [35] | 41.97 | 20.42 | 55.7 | 62.4 | 57.7 | 36.6 | 31.3 | 68.4 | 33.9 | 65.6 | 51.9 | 30.2 | 23.3 | 10.2 | 20.9 | 43.9 | 37.7 |
| Imbalance | Adsh [13] | 40.33 | 24.53 | 56.0 | 63.6 | 57.3 | 34.7 | 25.7 | 73.9 | 30.7 | 65.7 | 51.9 | 27.1 | 20.2 | 0.0 | 18.6 | 43.5 | 35.9 |
| Imbalance | CReST [39] | 46.55 | 14.62 | 66.5 | 64.2 | 65.4 | 36.0 | 32.2 | 77.8 | 43.6 | 68.5 | 52.9 | 40.3 | 24.7 | 19.5 | 26.5 | 43.9 | 36.4 |
| Imbalance | SimiS [8] | 47.27 | 11.51 | 77.4 | 72.5 | 68.7 | 32.1 | 14.7 | 86.6 | 46.3 | 74.6 | 54.2 | 41.6 | 24.4 | 17.9 | 21.9 | 47.9 | 28.2 |
| Imbalance | Basak et al. [3] | 38.73 | 31.76 | 68.8 | 59.0 | 54.2 | 29.0 | 0.0 | 83.7 | 39.3 | 61.7 | 52.1 | 34.6 | 0.0 | 0.0 | 26.8 | 45.7 | 26.2 |
| Imbalance | CLD [18] | 46.10 | 15.86 | 67.2 | 68.5 | 71.4 | 41.0 | 21.0 | 76.1 | 42.4 | 69.8 | 52.1 | 37.9 | 24.7 | 23.4 | 22.7 | 38.1 | 35.2 |
| Imbalance | DHC [31] | 49.53 | 13.89 | 68.1 | 69.6 | 71.1 | 42.3 | 37.0 | 76.8 | 43.8 | 70.8 | 57.4 | 43.2 | 27.0 | 28.7 | 29.1 | 41.4 | 36.7 |
| Imbalance | GenericSSL [32] | 50.03 | 5.21 | 73.1 | 76.0 | 76.5 | 29.1 | 44.9 | 82.5 | 49.0 | 72.8 | 61.7 | 48.5 | 30.2 | 19.7 | 36.4 | 32.9 | 18.2 |
| | SKCDF (Ours) | 53.81 | 5.97 | 77.1 | 77.9 | 71.2 | 34.1 | 50.4 | 88.6 | 51.6 | 80.9 | 58.9 | 48.8 | 33.0 | 30.2 | 32.2 | 45.9 | 26.4 |

Table 2. Quantitative comparison between our approach and SSL segmentation methods on the $5\%$ labeled AMOS dataset. Sp: spleen, RK: right kidney, LK: left kidney, Ga: gallbladder, Es: esophagus, Li: liver, St: stomach, Ao: aorta, IVC: inferior vena cava, Pa: pancreas, RAG: right adrenal gland, LAG: left adrenal gland, Du: duodenum, Bl: bladder, P/U: prostate/uterus.
In this sequence, the main segmentation head of the labeled decoder first learns + +
| Baseline | DF | SKCM | ABSH | Avg. Dice | Avg. ASD | Sp | RK | LK | Ga | Es | Li | St | Ao | IVC | PSV | Pa | RAG | LAG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✓ | | | | 44.66 ± 2.5 | 33.46 ± 1.99 | 66.5 | 62.2 | 47.7 | 41.8 | 0.0 | 91.6 | 52.5 | 81.7 | 74.0 | 31.9 | 30.8 | 0.0 | 0.0 |
| ✓ | ✓ | | | 57.4 ± 0.9 | 13.0 ± 2.08 | 74.1 | 69.3 | 61.8 | 61.8 | 0.0 | 90.0 | 60.3 | 82.7 | 79.1 | 41.3 | 48.4 | 48.3 | 29.1 |
| ✓ | ✓ | ✓ | | 59.17 ± 1.95 | 7.78 ± 4.61 | 76.9 | 67.2 | 66.9 | 49.9 | 17.9 | 92.8 | 63.2 | 84.4 | 78.0 | 40.2 | 50.1 | 42.9 | 38.8 |
| ✓ | ✓ | | ✓ | 63.57 ± 1.21 | 1.26 ± 0.14 | 79.0 | 70.7 | 64.1 | 59.9 | 62.0 | 93.7 | 64.2 | 84.5 | 78.1 | 39.5 | 47.1 | 46.9 | 36.7 |
| ✓ | ✓ | ✓ | ✓ | 64.27 ± 1.36 | 1.45 ± 0.09 | 79.5 | 72.1 | 67.6 | 59.8 | 60.7 | 93.3 | 61.7 | 85.4 | 78.5 | 41.8 | 50.9 | 46.4 | 37.8 |
![](images/6ef181f70ccd90bf1c67808242bc5e220557db3e1cffd79170c68f67455f4126.jpg)
Figure 4. Visual comparison on the $20\%$ labeled Synapse dataset: spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, portal & splenic veins, pancreas, right adrenal gland, and left adrenal gland.

Table 3. Ablation study for the effectiveness of each component on the $20\%$ labeled Synapse dataset. DF: data flow decoupling framework. SKCM: semantic knowledge complementarity module. ABSH: auxiliary balanced segmentation head training strategy.
| Method | Avg. Dice | Avg. ASD |
| --- | --- | --- |
| Labeled, w/o EMA | 53.71 ± 1.64 | 11.39 ± 0.04 |
| Unlabeled, w/o EMA | 50.53 ± 1.00 | 31.23 ± 0.78 |
| Labeled, EMA | 57.21 ± 0.85 | 11.56 ± 0.75 |
| Unlabeled, EMA | 57.40 ± 0.90 | 13.00 ± 2.08 |
the knowledge of the auxiliary balanced segmentation head, then passes it to the unlabeled decoder, and finally the main segmentation head of the unlabeled decoder can learn the knowledge of all the segmentation heads, so the best performance is achieved.

# 5. Conclusions

Current semi-supervised methods usually use both labeled and unlabeled data to train the model uniformly, without distinguishing between them, which can degrade model performance. To address this issue, we propose a data flow decoupling framework, which adopts both labeled and unlabeled data to train the encoder, while the decoders are trained separately. This makes full use of the unlabeled data and reduces its adverse impact caused by the lack of accurate ground truth. Meanwhile, in order to use labeled data to guide the segmentation of unlabeled data and use unlabeled data to enrich the features of labeled data,

Table 4. Comparison of different training strategies and decoders in the inference stage on the $20\%$ labeled Synapse dataset. Labeled/Unlabeled: inference with the labeled/unlabeled decoder; EMA: knowledge transfer strategy applied to the decoder during training.
| Method | Avg. Dice | Avg. ASD |
| --- | --- | --- |
| D | 54.79 ± 1.88 | 14.58 ± 4.91 |
| H → D | 62.51 ± 1.01 | 1.89 ± 0.76 |
| D → H → H | 60.90 ± 1.46 | 2.67 ± 0.64 |
| H → H → D | 62.40 ± 0.67 | 1.34 ± 0.12 |
| H → D → H | 63.57 ± 1.21 | 1.26 ± 0.14 |
+ +Table 5. Comparison of different knowledge transfer sequences between decoders and segmentation heads on $20\%$ labeled Synapse dataset. D: decoder, H: segmentation head, $\rightarrow$ : direction of sequence. + +we design a semantic knowledge complementarity module, which provides multi-scale features from different 3D medical image individuals and effectively improves the results of semi-supervised segmentation. We further develop a semi-supervised learning strategy based on auxiliary balanced segmentation head to improve the model's segmentation accuracy of the minority organs without weakening the learning of the majority organs. Finally, experiments on the Synapse and AMOS datasets show that our method significantly outperforms existing methods. + +# Acknowledgement + +This work was supported in part by the Proof of Concept Program of Zhongguancun Science City and Peking University Third Hospital (No. HDCXZHKC2022202), the Natural Science Foundation of Beijing (No. 4252046), and the National Natural Science Foundation of China (No. 61972046, 61802022 and 61802027). + +# References + +[1] Yunhao Bai, Duowen Chen, Qingli Li, Wei Shen, and Yan Wang. Bidirectional copy-paste for semi-supervised medical image segmentation. In CVPR'23, pages 11514-11524, 2023. 2, 3 +[2] Hritam Basak and Zhaozheng Yin. Pseudo-label guided contrastive learning for semi-supervised medical image segmentation. In CVPR'23, pages 19786-19797, 2023. 1 +[3] Hritam Basak, Sagnik Ghosal, and Ram Sarkar. Addressing class imbalance in semi-supervised image segmentation: A study on cardiac mri. In MICCAI'22, pages 224-233, 2022. 2, 6, 7 +[4] Gerda Bortsova, Florian Dubost, Laurens Hogeweg, Ioannis Katramados, and Marleen De Bruijne. Semi-supervised medical image segmentation via learning consistency under transformations. In MICCAI'19, pages 810-818, 2019. 1, 3 +[5] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. 
Swin-unet: Unet-like pure transformer for medical image segmentation. In ECCV'22, pages 205-218, 2022. 1, 3 +[6] Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, and Mingsheng Long. Debiased self-training for semi-supervised learning. NeurIPS'22, 35:32424-32437, 2022. 3, 6, 7 +[7] Duowen Chen, Yunhao Bai, Wei Shen, Qingli Li, Lequan Yu, and Yan Wang. Magicnet: Semi-supervised multi-organ segmentation via magic-cube partition and recovery. In CVPR'23, pages 23869-23878, 2023. 2 +[8] Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, and Bhiksha Raj. An embarrassingly simple baseline for imbalanced semi-supervised learning. arXiv preprint arXiv:2211.11086, 2022. 2, 6, 7 +[9] Jieneng Chen, Jieru Mei, Xianhang Li, Yongyi Lu, Qihang Yu, Qingyue Wei, Xiangde Luo, Yutong Xie, Ehsan Adeli, Yan Wang, et al. Transunet: Rethinking the u-net architecture design for medical image segmentation through the lens of transformers. Medical Image Analysis, 97:103280, 2024. 1, 3 +[10] Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In CVPR'21, pages 2613-2622, 2021. 6, 7 +[11] Yaxiong Chen, Yujie Wang, Zixuan Zheng, Jingliang Hu, Yilei Shi, Shengwu Xiong, Xiao Xiang Zhu, and Lichao Mou. Striving for simplicity: Simple yet effective prior-aware pseudo-labeling for semi-supervised ultrasound image segmentation. In MICCAI'24, pages 604-614, 2024. 2 +[12] Shengbo Gao, Ziji Zhang, Jiechao Ma, Zihao Li, and Shu Zhang. Correlation-aware mutual learning for semi-supervised medical image segmentation. In MICCAI'23, pages 98-108, 2023. 2 +[13] Lan-Zhe Guo and Yu-Feng Li. Class-imbalanced semi-supervised learning with adaptive thresholding. In ICML'22, pages 8082-8094, 2022. 2, 6, 7 +[14] Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation.
In WACV'22, pages 574-584, 2022. 3 +[15] Wei Huang, Chang Chen, Zhiwei Xiong, Yueyi Zhang, Xuejin Chen, Xiaoyan Sun, and Feng Wu. Semi-supervised neuron segmentation via reinforced consistency learning. IEEE Transactions on Medical Imaging, 41(11):3016-3028, 2022. 1, 3 +[16] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML'13, page 896, 2013. 3 +[17] Hyuck Lee, Seungjae Shin, and Heeyoung Kim. Abc: Auxiliary balanced classifier for class-imbalanced semi-supervised learning. NeurIPS'21, 34:7082-7094, 2021. 2, 5 +[18] Yiqun Lin, Huifeng Yao, Zezhong Li, Guoyan Zheng, and Xiaomeng Li. Calibrating label distribution for class-imbalanced barely-supervised knee segmentation. In MICCAI'22, pages 109-118, 2022. 2, 6, 7 +[19] Han Liu, Zhoubing Xu, Riqiang Gao, Hao Li, Jianing Wang, Guillaume Chabin, Ipek Oguz, and Sasa Grbic. Cosst: Multi-organ segmentation with partially labeled datasets using comprehensive supervisions and self-training. IEEE Transactions on Medical Imaging, 2024. 3 +[20] Wentao Liu, Tong Tian, Weijin Xu, Huihua Yang, Xipeng Pan, Songlin Yan, and Lemeng Wang. Phtrans: Parallelly aggregating global and local representations for medical image segmentation. In MICCAI'22, pages 235-244, 2022. 1, 3 +[21] Xiangde Luo, Wenjun Liao, Jieneng Chen, Tao Song, Yinan Chen, Shichuan Zhang, Nianyong Chen, Guotai Wang, and Shaoting Zhang. Efficient semi-supervised gross target volume of nasopharyngeal carcinoma segmentation via uncertainty rectified pyramid consistency. In MICCAI'21, pages 318-329, 2021. 6, 7 +[22] Zibo Ma, Bo Zhang, Zheng Zhang, Wu Liu, Wufan Wang, Hui Gao, and Wendong Wang. Addg: An adaptive domain generalization framework for cross-plane mri segmentation. In MM'24, pages 5384-5392, 2024. 1 +[23] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 3DV'16, pages 565-571, 2016.
2 +[24] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI'15, pages 234-241, 2015. 1, 2 +[25] Saikat Roy, Gregor Koehler, Constantin Ulrich, Michael Baumgartner, Jens Petersen, Fabian Isensee, Paul F Jaeger, and Klaus H Maier-Hein. Mednext: transformer-driven scaling of convnets for medical image segmentation. In MICCAI'23, pages 405-415, 2023. 1, 3 +[26] Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In CVPR'22, pages 20730-20740, 2022. 3 +[27] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS'17, 30, 2017. 3 +[28] Bethany H Thompson, Gaetano Di Caterina, and Jeremy P Voisey. Pseudo-label refinement using superpixels for semi-supervised brain tumour segmentation. In ISBI'22, pages 1-5, 2022. 3 +[29] Chong Wang, Daoqiang Zhang, and Rongjun Ge. Eye-guided dual-path network for multi-organ segmentation of abdomen. In MICCAI'23, pages 23-32, 2023. 1, 3 +[30] Fakai Wang, Kang Zheng, Le Lu, Jing Xiao, Min Wu, and Shun Miao. Automatic vertebra localization and identification in ct by spine rectification and anatomically-constrained optimization. In CVPR'21, pages 5280-5288, 2021. 3 +[31] Haonan Wang and Xiaomeng Li. Dhc: Dual-debiased heterogeneous co-training framework for class-imbalanced semi-supervised medical image segmentation. In MICCAI'23, pages 582-591, 2023. 2, 3, 6, 7 +[32] Haonan Wang and Xiaomeng Li. Towards generic semi-supervised framework for volumetric medical image segmentation. NeurIPS'24, 36, 2024. 2, 6, 7 +[33] Haonan Wang, Qixiang Zhang, Yi Li, and Xiaomeng Li. Allspark: Reborn labeled features from unlabeled in transformer for semi-supervised semantic segmentation. In CVPR'24, pages 3627-3636, 2024.
2 +[34] Liansheng Wang, Jiacheng Wang, Lei Zhu, Huazhu Fu, Ping Li, Gary Cheng, Zhipeng Feng, Shuo Li, and Pheng-Ann Heng. Dual multiscale mean teacher network for semi-supervised infection segmentation in chest ct volume for COVID-19. IEEE Transactions on Cybernetics, 53(10):6363-6375, 2022. 1, 3 +[35] Xudong Wang, Zhirong Wu, Long Lian, and Stella X Yu. Debiased learning from naturally imbalanced pseudo-labels. In CVPR'22, pages 14647-14657, 2022. 6, 7 +[36] Yan Wang, Xu Wei, Fengze Liu, Jieneng Chen, Yuyin Zhou, Wei Shen, Elliot K Fishman, and Alan L Yuille. Deep distance transform for tubular structure segmentation in ct scans. In CVPR'20, pages 3833-3842, 2020. 3 +[37] Yongchao Wang, Bin Xiao, Xiuli Bi, Weisheng Li, and Xinbo Gao. Mcf: Mutual correction framework for semi-supervised medical image segmentation. In CVPR'23, pages 15651-15660, 2023. 3 +[38] Ziyang Wang and Congying Ma. Dual-contrastive dual-consistency dual-transformer: A semi-supervised approach to medical image segmentation. In ICCV'23, pages 870-879, 2023. 1, 3 +[39] Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, and Fan Yang. Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning. In CVPR'21, pages 10857-10866, 2021. 2, 6, 7 +[40] Yicheng Wu, Minfeng Xu, Zongyuan Ge, Jianfei Cai, and Lei Zhang. Semi-supervised left atrium segmentation with mutual consistency training. In MICCAI'21, pages 297-306, 2021. 3 +[41] Yicheng Wu, Zhonghua Wu, Qianyi Wu, Zongyuan Ge, and Jianfei Cai. Exploring smoothness and class-separation for semi-supervised medical image segmentation. In MICCAI'22, pages 34-43, 2022. 1, 3, 6, 7 +[42] Yingda Xia, Dong Yang, Zhiding Yu, Fengze Liu, Jinzheng Cai, Lequan Yu, Zhuotun Zhu, Daguang Xu, Alan Yuille, and Holger Roth. Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation. Medical Image Analysis, 65:101766, 2020.
3 +[43] Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, and Pheng-Ann Heng. Uncertainty-aware self-ensembling model for semi-supervised 3d left atrium segmentation. In MICCAI'19, pages 605-613, 2019. 1, 3, 6, 7 +[44] Bo Zhang, YunPeng Tan, Zheng Zhang, Wu Liu, Hui Gao, Zhijun Xi, and Wendong Wang. Factorized omnidirectional representation based vision gnn for anisotropic 3d multimodal mr image segmentation. In MM'23, pages 1607-1615, 2023. 3 +[45] Zheng Zhang, Yushan Song, Yunpeng Tan, Shuo Yan, Bo Zhang, and Yufeng Zhuang. Segmentation assisted prostate cancer grading with multitask collaborative learning. Pattern Recognition Letters, 183:42-48, 2024. 3 +[46] Zheng Zhang, Guanchun Yin, Zibo Ma, Yunpeng Tan, Bo Zhang, and Yufeng Zhuang. Ida-net: Individual difference aware medical image segmentation with meta-learning. Pattern Recognition Letters, 187:21-27, 2025. 3 +[47] Haochen Zhao, Hui Meng, Deqian Yang, Xiaozheng Xie, Xiaozhe Wu, Qingfeng Li, and Jianwei Niu. Guidednet: Semi-supervised multi-organ segmentation via labeled data guide unlabeled data. In MM'24, pages 886-895, 2024. 2 +[48] Xiangyu Zhao, Zengxin Qi, Sheng Wang, Qian Wang, Xuehai Wu, Ying Mao, and Lichi Zhang. Rcps: Rectified contrastive pseudo supervision for semi-supervised medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 2023. 1 +[49] Yuan Zhong, Chenhui Tang, Yumeng Yang, Ruoxi Qi, Kang Zhou, Yuqi Gong, Pheng Ann Heng, Janet H Hsiao, and Qi Dou. Weakly-supervised medical image segmentation with gaze annotations. In MICCAI'24, pages 530-540, 2024. 3 +[50] Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Xiaoguang Han, Lequan Yu, Liansheng Wang, and Yizhou Yu. nnFormer: Volumetric medical image segmentation via a 3d transformer. IEEE Transactions on Image Processing, 2023.
1, 3 \ No newline at end of file diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/images.zip b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..72ccd0f2a92d96d7d5d9db337c6b092fe7035cae --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a37e40488ddd3057c34d413777de2680f614fc1f7916885416c84d81c0335ff +size 773918 diff --git a/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/layout.json b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3e5b5d91aed401d928d1154b22d133ae4691d169 --- /dev/null +++ b/CVPR/2025/A Semantic Knowledge Complementarity based Decoupling Framework for Semi-supervised Class-imbalanced Medical Image Segmentation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d61189cdd8346af4760ae42f4787913a9095279e0d82f6d43d95a13c5e86d1a +size 377402 diff --git a/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_content_list.json b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c837feeb7e2f506c9a4b6cbc3a12134dc2f6dba0 --- /dev/null +++ b/CVPR/2025/A Simple Data Augmentation for Feature 
Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3eb503feef86a88aedf375632a3bebba73779cf09b3ce2e33c0c026ec94b4e66 +size 85495 diff --git a/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_model.json b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..583c4d02b6b0d89daf92b26bc5de5dc7de1b8afc --- /dev/null +++ b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd1136a969cebf5b4433d542403c11547dfb77098761ba3c91c6f7fe87f03219 +size 105234 diff --git a/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_origin.pdf b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3dd856721208bc92951dd13a710cbcaee4b0f405 --- /dev/null +++ b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/e0ad170f-ffdd-4ee6-8dbc-8fd4c6ea1cbc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:486a90a15a48c32f38c9a447c309f4e68ae371a6350e7ff9e0afc24325474155 +size 1877715 diff --git a/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/full.md b/CVPR/2025/A Simple Data Augmentation for Feature Distribution Skewed Federated Learning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..04e8e0c9953c2c45e4885cf62d009a6fdad6e4f9 --- /dev/null +++ b/CVPR/2025/A 
Simple Data Augmentation for Feature Distribution Skewed Federated Learning/full.md @@ -0,0 +1,332 @@

# A Simple Data Augmentation for Feature Distribution Skewed Federated Learning

Yunlu Yan $^{1}$ , Huazhu Fu $^{2}$ , Yuexiang Li $^{3}$ , Jinheng Xie $^{4}$ , Jun Ma $^{1,6}$ , Guang Yang $^{5}$ , Lei Zhu $^{1,6*}$

$^{1}$ The Hong Kong University of Science and Technology (Guangzhou), $^{2}$ IHPC, A*STAR, $^{3}$ Guangxi Medical University, $^{4}$ National University of Singapore, $^{5}$ Imperial College London, $^{6}$ The Hong Kong University of Science and Technology

# Abstract

Federated Learning (FL) facilitates collaborative learning among multiple clients in a distributed manner and preserves data privacy. However, its performance inevitably degrades with non-Independent and Identically Distributed (non-IID) data. In this paper, we focus on the feature distribution skewed FL scenario, a common non-IID situation in real-world applications where data from different clients exhibit varying underlying distributions. This variation leads to feature shift, which is the key issue of this scenario. While previous works have made notable progress, few pay attention to the data itself, i.e., the root of this issue. The primary goal of this paper is to mitigate feature shift from the perspective of data. To this end, we propose a simple yet remarkably effective input-level data augmentation method, namely FedRDN, which randomly injects the statistical information of the local distributions from the entire federation into the client's data. This helps improve the generalization of local feature representations, thereby mitigating feature shift. Moreover, our FedRDN is a plug-and-play component, which can be seamlessly integrated into the data augmentation flow with only a few lines of code.
Extensive experiments on several datasets show that the performance of various representative FL methods can be further improved by integrating our FedRDN, demonstrating its effectiveness, strong compatibility and generalizability. Code is available at https://github.com/IAMJackYan/FedRDN. + +# 1. Introduction + +Federated Learning [17, 20, 25] (FL) has been a de facto solution for distributed learning and attracted wide attention from various communities [2, 3, 13, 29]. It utilizes a credible server to communicate the privacy irrelevant information, e.g., model parameters, thereby collaboratively training the model on multiple distributed data clients, while the + +local data of each client is invisible to others. Such a design is simple yet can achieve superior performance and preserve privacy. However, it is inevitable to suffer non-Independent and Identically Distributed (non-IID) data when we deploy FL in real-world applications, which will greatly degrade the performance of the trained model [9, 21, 26]. + +Since non-IID data is widely common in many real-world cases, many researchers try to address this issue and improve the practicability of FL [6, 9, 21, 31]. However, most of them try to design a general FL method that can handle different non-IID scenarios. In fact, this is suboptimal due to the discrepancy among various non-IID scenarios, such as feature distribution skew and label distribution skew [18]. Hence, recent studies try to design special FL algorithms for different non-IID scenarios [44, 48]. In this work, we focus on solving feature distribution skew (Definition in §3.1), a typical non-IID scenario in FL, as the data of discrete clients are always collected from different devices or environments, incurring different underlying distributions. For instance, different hospitals possess MR images scanned by different devices, and different phones store images in different styles (e.g., cartoon images and natural images). 
Such data situation leads to inconsistent feature distribution across different clients, namely feature shift [22, 48], resulting in serious performance degradation. To address this problem, FedBN [22] learns the specific batch normalization layer parameters for each client. HarmoFL [8] utilizes the potential knowledge of medical images in the frequency domain to mitigate feature shift. + +While feature distribution skew has been explored in FL, little attention has been paid to the data itself. When it comes to data processing, the common practice [8, 22, 48] is to integrate some traditional data operations in the data augmentation flow (e.g., transforms composing) in Pytorch of each client like flipping, cropping and rotating. This overlooks the effectiveness of data augmentation for FL at the input level, i.e., data augmentation flow, which leaves an alternative space to explore. Since the root cause of feature shift lies in the divergence of local data distributions, we ask + +two questions: 1) Why not proactively resolve it by directly processing the data? 2) Can we design an FL-specific data augmentation technology that can be integrated into the data augmentation flow? To answer these questions, we try to design a plug-and-play input-level data augmentation technique. + +Although it may appear straightforward, effectively implementing data augmentation in FL poses a significant challenge, due to the lack of direct access to the external data of other clients. Therefore, how to inject global information into augmented samples and thereby mitigate the distribution bias between different local datasets is the core of the challenge. In this regard, FedMix [40] extends Mixup to FL for label distribution skew by incorporating the mixing of averaged data across multiple clients. However, it is only suitable for the classification task and its effectiveness for feature distribution skew has not been demonstrated. 
Furthermore, permitting the exchange of averaged data introduces privacy concerns. In a more recent study, a data augmentation approach, namely FedFA [48], was proposed for feature distribution skew; it mitigates feature shift through feature augmentation based on the statistics of latent features. Even so, it requires modifications to the network structure, so it cannot be seamlessly integrated into the data augmentation flow. Besides, FedFA adds computational and communication overhead, which can be problematic in resource-limited scenarios.

In this work, we propose a novel federated data augmentation technique, called FedRDN. We argue that the local bias is rooted in the model being trained on a limited and skewed distribution. In centralized learning, we can collect data from many different distributions to learn generalized feature representations. This motivates us to augment the data toward a more abundant distribution, which indirectly reduces the difference between local distributions. To achieve that in the FL setting, our FedRDN extends standard data normalization to FL by randomly injecting statistics of local datasets into augmented samples, based on the insight that data statistics are the essential characteristic of a data distribution. It enables clients to access a wider range of data distributions, thereby enhancing the generalizability of local feature representations. Our FedRDN is a remarkably simple yet surprisingly effective method. It is a non-parametric approach that incurs negligible computation and communication overhead, requires no modifications to the network structure, and seamlessly integrates into the data augmentation pipeline with just a few lines of code. After employing our method, significant improvements have been observed across various typical FL methods.
In a nutshell, our contributions are summarized as follows:

- We explore the input-level data augmentation technique for feature distribution skew, which gives more insights into how to understand and solve this problem.
- We propose a novel plug-and-play data augmentation technique, FedRDN, which can be easily integrated into the data augmentation flow to mitigate feature shift for feature distribution skewed FL.
- We conduct extensive experiments on two classification datasets and an MRI segmentation dataset to demonstrate the effectiveness and generalization of our method, i.e., it outperforms traditional data augmentation techniques and improves the performance of various typical FL methods.

# 2. Related Work

Federated Learning with Non-IID Data. FL [17, 38, 39, 45, 46] allows multiple discrete clients to collaborate in training a global model while preserving privacy. The pioneering work, FedAvg [27], is the most widely used FL algorithm, yielding success in various applications. However, the performance of FedAvg inevitably degrades when suffering from non-IID data [9, 21], posing a fundamental challenge in this field. To address this, a variety of novel FL frameworks have been proposed, with representative examples such as FedProx [21], Scaffold [9], and FedNova [37]. They improved FedAvg by modifying the local training [1, 16] or model aggregation [23, 42] to increase stability in heterogeneous data environments. Despite the progress, most of them ignore the differences among various non-IID scenarios, such as the heterogeneity of label distribution skew and feature distribution skew, which lie in labels and images [20], respectively. Therefore, recent studies attempt to design more targeted methods for different non-IID scenarios. For example, FedLC [44] addressed label distribution skew by logits calibration.
As for feature distribution skew, the target of this work, FedBN [22] learned the heterogeneous distribution of each client through personalized batch normalization layer parameters. In addition, HarmoFL [8] investigated specialized knowledge of the frequency domain to mitigate feature shift. More recently, FedPCL [34] employed a pretrained model to reduce the number of learnable parameters and applied prototype-wise contrastive learning to regularize feature representation learning across different clients. This offers an effective solution for training large models in the feature distribution skewed FL scenario. Motivated by prototype learning, FPL [7] utilized clustering to acquire unbiased class prototypes and then alleviated feature shift through prototype and local embedding alignment. In contrast to the aforementioned methods, ADCOL [19] introduced a novel adversarial collaborative learning approach to mitigate feature shift by replacing the model-averaging scheme with adversarial learning. Although achieving significant progress, they only focus on mitigating feature shift at the local optimization or global aggregation stage, neglecting the data itself, which is the root of feature shift.

Different from them, we aim to address this issue from the perspective of data.

Data Augmentation. Data augmentation [11, 12, 30] is a widely used technique in machine learning, which can alleviate overfitting and improve the generalization of the model. For computer vision tasks, neural networks are typically trained with a data augmentation flow containing various techniques like random flipping, cropping, and data normalization. Different from these early label-preserving techniques [47], label-perturbing techniques such as MIXUP [43] and CUTMIX [41] have recently become popular. They augment samples by fusing not only two different images but also their labels.
Apart from the above input-level augmentation techniques, some feature-level augmentation techniques [14, 15, 36], which perform augmentation in feature space, have achieved success. Recently, some studies have started to introduce data augmentation techniques into FL. For example, FedMix [40] proposed a variant of MIXUP in FL, which shows its effectiveness in the label distribution skewed FL scenario. However, its effectiveness in the feature distribution skewed FL scenario has not been demonstrated. More importantly, FedMix requires sharing averaged local data, which increases privacy risk. For feature distribution skew, FedFA [48] augmented the features of clients using the statistics of features. However, it operates on the feature level, which requires modifications to the network structure and incurs additional computation and communication cost. Different from FedFA, our proposed augmentation method operates on the input level. As a result, our method can serve as a plug-and-play component, combined with different FL methods to further improve their performance, showing stronger compatibility and flexibility. In addition, our method can further improve the performance of FedFA because the two operate in different spaces.

# 3. Methodology

# 3.1. Preliminaries

Federated Learning. Suppose that an FL system is composed of $K$ clients $\{C_1,C_2,\ldots ,C_K\}$ and a central server. For client $C_k$ $(k\in [K])$ , there are $n_k$ supervised training samples $\{x_{i},y_{i}\}_{i = 1}^{n_{k}}$ , where image $x_{i}$ and label $y_{i}$ are drawn from a joint distribution $(x_{i},y_{i})\sim P_{k}(x,y)$ . Besides, each client trains a local model $f(\pmb {w}_k)$ only on its private dataset, where $\pmb{w}_k$ denotes the parameters of the local model.
The goal of FL is to learn a global model by minimizing the summed empirical risk of all clients:

$$
\min \mathcal{L} = \sum_{k=1}^{K} \gamma_{k} \mathcal{L}_{k}, \quad \text{where} \quad \gamma_{k} = \frac{n_{k}}{\sum_{i=1}^{K} n_{i}}. \tag{1}
$$

To achieve this goal, at each communication round $t \in [T]$ , the standard FL method, FedAvg [27], performs $E$ epochs of local training, and the local objective $\mathcal{L}_k$ can be written as:

$$
\mathcal{L}_{k} = \frac{1}{n_{k}} \sum_{\left(x_{i}, y_{i}\right) \sim P_{k}} \ell \left(f \left(x_{i}; \boldsymbol{w}_{k}^{t}\right); y_{i}\right), \tag{2}
$$

where $\ell$ is the loss function. After local training, it averages the parameters of all local models to update the parameters of the global model, which can be described as:

$$
\boldsymbol{w}_{G}^{t+1} = \sum_{k=1}^{K} \gamma_{k} \boldsymbol{w}_{k}^{t}. \tag{3}
$$

The updated parameters of the global model $\boldsymbol{w}_{G}^{t + 1}$ are returned to each client as the initialization for the next round of training.

Feature Distribution Skew. The underlying data distribution $P_{k}(x,y)$ can be rewritten as $P_{k}(y|x)P_{k}(x)$ , where $P_{k}(x)$ varies across clients while $P_{k}(y|x)$ is consistent for all clients. These different underlying data distributions lead to inconsistent feature distributions across clients, thereby degrading the performance of the global model.

Data Normalization. Normalization is a popular data preprocessing operation, which transforms the data distribution toward a standard normal distribution.
In detail, given a $C$ -channel image $x \in \mathbb{R}^{C \times H \times W}$ with spatial size $(H \times W)$ , it transforms the image as:

$$
\hat{x} = \frac{x - \mu}{\sigma}, \quad \mu, \sigma \in \mathbb{R}^{C}, \tag{4}
$$

where $\mu$ and $\sigma$ are channel-wise means and standard deviations, respectively; they are usually set manually to empirical values or to statistical values from the real dataset.

# 3.2. Federated Random Data Normalization

In this section, we present the details of the proposed Federated Random Data Normalization (FedRDN) method. Different from previous FL methods, FedRDN focuses on mitigating the distribution discrepancy through input-level data augmentation during the data processing stage. The goal of FedRDN is to let each client learn as many distributions as possible, instead of only its own biased distribution, which is beneficial to feature generalization. To achieve this, it performs explicit data augmentation by manipulating multiple clients' channel-wise data statistics during training at each client. We introduce the details of our method in the following.

Data Distribution Statistic. The approximate distribution of the data can be estimated using statistical methods. Therefore, we can obtain an approximate distribution by computing the statistics of the local dataset, i.e., $P_{k} \sim \mathcal{N}(\mu^{k}, (\sigma^{k})^{2})$ , where $\mu^{k}$ and $\sigma^{k}$ are the mean and standard deviation, respectively.
Specifically, to estimate such underlying distribution information for each client, we compute the channel-wise statistics within each local dataset on the client side before training starts:

$$
\mu^{k} = \sum_{i=1}^{n_{k}} \mu_{i}^{k} \in \mathbb{R}^{C}, \quad \sigma^{k} = \sum_{i=1}^{n_{k}} \sigma_{i}^{k} \in \mathbb{R}^{C}, \tag{5}
$$

where $\mu_i^k$ and $\sigma_i^k$ are sample-level channel-wise statistics, computed as:

$$
\mu_{i}^{k} = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} x_{i}^{k,(h,w)}, \quad \sigma_{i}^{k} = \sqrt{\frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \left(x_{i}^{k,(h,w)} - \mu_{i}^{k}\right)^{2}}, \tag{6}
$$

where $x_{i}^{k,(h,w)}$ denotes the image pixel at spatial location $(h,w)$ . Following this, all data distribution statistics are sent to the server, which aggregates them; the aggregated statistics $\{(\mu^k,\sigma^k)\}_{k = 1}^K$ are then shared among all clients.

# Algorithm 1: FedRDN

Input: number of clients $K$ , communication rounds $T$ , epochs $E$

1 Compute Statistics:
2 for client $k = 1,2,\dots,K$ in parallel do
3   for $x_i^k \sim P_k$ do
4     $\mu_i^k = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} x_i^{k,(h,w)}$
5     $\sigma_i^k = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}(x_i^{k,(h,w)} - \mu_i^k)^2}$
6   end
7   $\mu^k = \sum_{i=1}^{n_k}\mu_i^k$ , $\sigma^k = \sum_{i=1}^{n_k}\sigma_i^k$
8 end
9 Return $\{(\mu^k,\sigma^k)\}_{k=1}^K$
10 Data Augmentation:
11 for round $t = 1,2,\dots,T$ do
12   for epoch $e = 1,2,\dots,E$ do
13     for $(x_i^k,y_i^k)\sim P_k$ do
14       $(\mu^j,\sigma^j) \sim \{(\mu^k,\sigma^k)\}_{k=1}^K$
15       $\hat{x}_i^k = \frac{x_i^k - \mu^j}{\sigma^j}$ // training
16     end
17   end
18 end

Data Augmentation at Training Phase. After obtaining the statistical information of each client, we use it to augment the data during training. Consider an image $x_{i}^{k}$ : unlike standard data normalization, which transforms the image with a fixed statistic, we transform the image using a mean and standard deviation randomly selected from the statistics $\{(\mu^k,\sigma^k)\}_{k = 1}^K$ :

$$
\hat{x}_{i}^{k} = \frac{x_{i}^{k} - \mu^{j}}{\sigma^{j}}, \quad (\mu^{j}, \sigma^{j}) \sim \left\{\left(\mu^{k}, \sigma^{k}\right)\right\}_{k=1}^{K}. \tag{7}
$$

Notably, the statistic $(\mu^j,\sigma^j)$ is randomly reselected for each image at every training epoch. Consequently, after multiple rounds, the number of selections far exceeds the number of statistics. This implies that each client leverages the local distribution information of all clients to augment every image, injecting the local information of every client into client-side training. In this way, we seamlessly inject global information into the augmented samples. The local model can learn the distribution information of all clients, thereby making the learned features more generalized.

Data Processing at Testing Phase. During training, the random selection strategy integrates distribution information across all clients (i.e., local statistics). This allows the model to remain robust to variations in client-specific distributions.
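The two phases above, per-client statistic computation and randomized normalization, are compact enough to sketch directly. A minimal NumPy sketch, assuming images are float arrays of shape (N, C, H, W); the names `client_statistics` and `FedRDNTransform` are illustrative rather than taken from the paper's released code, and the aggregation in Eq. (5) is read here as an average of the per-sample statistics:

```python
import numpy as np

def client_statistics(images):
    """Channel-wise dataset statistics for one client.

    images: float array of shape (N, C, H, W).
    Returns (mu, sigma), each of shape (C,), aggregated from the
    per-sample channel-wise means and standard deviations (Eqs. 5-6).
    """
    mu_i = images.mean(axis=(2, 3))    # per-sample channel means, (N, C)
    sigma_i = images.std(axis=(2, 3))  # per-sample channel stds,  (N, C)
    # Aggregate over the local dataset (read as an average here).
    return mu_i.mean(axis=0), sigma_i.mean(axis=0)

class FedRDNTransform:
    """Normalize with statistics drawn at random from one of the K clients."""

    def __init__(self, stats, own_index, train=True):
        self.stats = stats    # list of (mu, sigma) pairs shared via the server
        self.own = own_index  # this client's index, used at test time
        self.train = train

    def __call__(self, x):    # x: single image of shape (C, H, W)
        if self.train:        # training: a random client's statistics (Eq. 7)
            mu, sigma = self.stats[np.random.randint(len(self.stats))]
        else:                 # testing: only the local statistics
            mu, sigma = self.stats[self.own]
        return (x - mu[:, None, None]) / sigma[:, None, None]
```

In a torchvision-style pipeline, such a transform would simply be appended to the client's existing augmentation composition, consistent with the paper's claim that integration takes only a few lines of code.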
However, at the testing stage, communication between clients should be disabled, and each client should operate independently, without access to the distribution information of others. Thus, at test time, we apply only the local statistics of each individual client for data processing. This ensures consistency between the training and testing data distributions. For client $k$ , this process is formalized as:

$$
\hat {x} _ {i} ^ {k} = \frac {x _ {i} ^ {k} - \mu^ {k}}{\sigma^ {k}}. \tag {8}
$$

An overview of the two processes, i.e., statistics computation and data augmentation, is presented in Algorithm 1.

# 3.3. Privacy Security

The previous input-level data augmentation method, FedMix [40], shares averaged images per batch, which increases the risk of privacy leakage. In contrast, our method shares only privacy-irrelevant information, i.e., the dataset-level mean and standard deviation. Moreover, individual images cannot be recovered from the shared information, because it is statistical information about the whole dataset. Therefore, our method offers a high level of privacy security.

# 4. Experiments

# 4.1. Experimental Setup

Datasets. We conduct extensive experiments on three real-world datasets: Office-Caltech-10 [4], DomainNet [28],

Table 1. The test accuracy (%) of all approaches on Office-Caltech-10 [4] and DomainNet [28]. For a detailed comparison, we present the test accuracy of each client, i.e., Office-Caltech-10: A (Amazon), C (Caltech), D (DSLR), W (Webcam); DomainNet: C (Clipart), I (Infograph), P (Painting), Q (Quickdraw), R (Real), S (Sketch), and the average result. $\uparrow$ and $\downarrow$ show the rise and fall of the average result before and after augmentation. We mark the best results in bold. (norm.: conventional data normalization)
| Method | A | C | D | W | Avg. | C | I | P | Q | R | S | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg [27] | 53.12 | 44.88 | 65.62 | 86.44 | 62.51 | 50.38 | 22.83 | 36.99 | 58.10 | 46.09 | 39.53 | 42.32 |
| + norm | 50.52 | 43.55 | 68.75 | 83.05 | 61.46 (1.05)↓ | 48.28 | 23.28 | 37.80 | 54.20 | 48.97 | 41.69 | 42.37 (0.05)↑ |
| + FedMix [40] | 49.47 | 41.77 | 75.00 | 88.13 | 63.59 (1.08)↑ | 48.66 | 23.43 | 38.12 | 55.10 | 49.46 | 41.33 | 42.68 (0.36)↑ |
| + FedRDN | 60.93 | 45.77 | 84.37 | 88.13 | 69.80 (7.29)↑ | 48.85 | 22.67 | 39.41 | 60.30 | 49.46 | 40.61 | 43.55 (1.23)↑ |
| FedProx [21] | 53.12 | 45.33 | 62.50 | 86.44 | 61.84 | 52.66 | 23.89 | 35.21 | 56.70 | 46.75 | 41.87 | 42.85 |
| + norm | 51.04 | 45.77 | 68.75 | 84.74 | 62.57 (0.73)↑ | 47.14 | 24.35 | 34.57 | 59.60 | 44.86 | 38.98 | 41.58 (1.27)↓ |
| + FedMix [40] | 47.39 | 38.66 | 78.12 | 91.52 | 63.92 (2.08)↑ | 47.90 | 22.37 | 37.31 | 53.90 | 48.47 | 43.14 | 42.18 (0.67)↓ |
| + FedRDN | 61.45 | 44.88 | 84.37 | 88.13 | 69.71 (7.87)↑ | 50.57 | 24.96 | 38.77 | 61.20 | 51.35 | 40.97 | 44.63 (1.78)↑ |
| FedNova [37] | 50.00 | 42.22 | 62.50 | 88.13 | 60.71 | 51.71 | 23.74 | 38.77 | 56.20 | 45.52 | 38.44 | 42.39 |
| + norm | 52.08 | 45.33 | 68.75 | 86.44 | 63.15 (2.44)↑ | 49.23 | 24.35 | 34.24 | 55.80 | 45.52 | 42.23 | 41.90 (0.49)↓ |
| + FedMix [40] | 48.95 | 42.66 | 78.12 | 83.05 | 63.20 (2.49)↑ | 47.90 | 24.04 | 36.67 | 59.10 | 46.67 | 42.41 | 42.80 (0.41)↑ |
| + FedRDN | 63.02 | 41.33 | 84.37 | 89.83 | 69.63 (8.71)↑ | 50.57 | 23.43 | 40.22 | 57.30 | 51.84 | 40.43 | 43.96 (1.57)↑ |
| Scaffold [9] | 52.60 | 42.66 | 53.12 | 81.35 | 57.43 | 46.95 | 22.83 | 34.57 | 46.50 | 47.00 | 40.97 | 39.80 |
| + norm | 46.87 | 40.00 | 59.37 | 86.44 | 58.17 (0.74)↑ | 47.33 | 22.83 | 33.11 | 58.30 | 46.01 | 42.05 | 41.61 (1.81)↑ |
| + FedMix [40] | 52.08 | 40.88 | 75.00 | 89.83 | 64.45 (7.02)↑ | 45.24 | 23.28 | 34.73 | 47.50 | 44.78 | 40.97 | 39.42 (0.38)↓ |
| + FedRDN | 65.10 | 41.77 | 81.25 | 86.44 | 68.64 (11.21)↑ | 51.52 | 23.89 | 37.96 | 56.20 | 48.97 | 38.80 | 42.89 (3.09)↑ |
| FedAvgM [6] | 48.43 | 45.33 | 62.50 | 83.05 | 59.83 | 45.81 | 22.52 | 37.96 | 50.10 | 48.23 | 41.87 | 41.08 |
| + norm | 51.04 | 44.88 | 62.50 | 86.44 | 61.21 (1.38)↑ | 46.95 | 24.96 | 35.86 | 49.70 | 45.43 | 40.43 | 40.55 (0.53)↓ |
| + FedMix [40] | 50.00 | 41.77 | 65.62 | 83.05 | 60.11 (0.29)↑ | 48.28 | 25.87 | 40.06 | 51.50 | 48.56 | 38.62 | 42.15 (1.07)↑ |
| + FedRDN | 62.50 | 43.11 | 84.37 | 88.13 | 69.53 (9.70)↑ | 48.09 | 22.98 | 41.03 | 63.80 | 49.79 | 38.08 | 43.96 (2.88)↑ |
| FedBN [22] | 67.18 | 44.00 | 84.97 | 86.44 | 70.65 | 49.23 | 24.96 | 36.32 | 63.60 | 47.74 | 39.53 | 43.56 |
| + norm | 64.84 | 41.07 | 84.11 | 87.87 | 69.47 (1.18)↓ | 48.47 | 23.81 | 36.67 | 63.76 | 49.71 | 40.79 | 43.88 (0.32)↑ |
| + FedMix [40] | 65.62 | 44.88 | 84.37 | 88.45 | 70.83 (0.18)↑ | 49.61 | 25.72 | 38.44 | 57.30 | 46.01 | 38.08 | 42.53 (1.03)↓ |
| + FedRDN | 67.74 | 43.11 | 84.37 | 89.35 | 71.14 (0.49)↑ | 50.76 | 25.26 | 37.15 | 61.98 | 48.97 | 43.14 | 44.54 (0.98)↑ |
| FedProto [33] | 55.72 | 44.44 | 68.75 | 86.44 | 63.84 | 48.28 | 25.11 | 35.86 | 51.30 | 43.79 | 37.18 | 40.25 |
| + norm | 53.64 | 44.88 | 56.25 | 86.44 | 60.30 (3.54)↓ | 45.81 | 23.43 | 35.70 | 58.30 | 45.27 | 40.79 | 41.55 (1.30)↑ |
| + FedMix [40] | 53.64 | 41.77 | 84.37 | 88.13 | 66.98 (3.14)↑ | 47.33 | 23.43 | 37.47 | 52.70 | 44.94 | 42.41 | 41.38 (1.13)↑ |
| + FedRDN | 66.14 | 46.22 | 84.37 | 89.83 | 71.64 (7.80)↑ | 49.42 | 22.37 | 41.51 | 57.90 | 51.43 | 38.44 | 43.51 (3.26)↑ |
| FedFA [48] | 60.93 | 48.44 | 81.25 | 89.83 | 70.11 | 48.09 | 23.74 | 39.58 | 64.20 | 48.06 | 43.14 | 44.47 |
| + norm | 60.93 | 50.66 | 81.25 | 84.74 | 69.40 (0.71)↓ | 49.04 | 24.20 | 38.28 | 60.70 | 45.93 | 42.59 | 43.46 (1.01)↓ |
| + FedMix [40] | 56.25 | 48.44 | 84.37 | 88.13 | 69.30 (0.81)↓ | 45.81 | 22.52 | 33.11 | 51.10 | 43.13 | 37.90 | 38.93 (5.54)↓ |
| + FedRDN | 62.50 | 48.88 | 90.62 | 91.52 | 73.38 (3.27)↑ | 52.85 | 24.50 | 37.80 | 61.90 | 50.45 | 42.59 | 45.01 (0.54)↑ |
and ProstateMRI [24], which are widely used in the feature-distribution-skewed FL scenario [8, 22, 48]. These cover two tasks: image classification (Office-Caltech-10 and DomainNet) and medical image segmentation (ProstateMRI). Following previous work [22, 48], we treat the subsets of each dataset as clients in our experiments.

Baselines. To demonstrate the effectiveness of our method, we build four data augmentation flows: ① base: basic data augmentation techniques such as random flipping; ② +norm: base plus conventional normalization; ③ +FedMix: base plus the FedMix [40] data augmentation technique; and ④ +FedRDN: base plus our proposed augmentation method. Since it is infeasible to deploy FedMix for segmentation tasks, we use it only for the image classification tasks. We then integrate these flows into typical FL methods. In detail, we employ eight state-of-the-art FL methods to demonstrate the generalizability of our method on the image classification task: FedAvg [27], FedAvgM [6], FedProx [21], Scaffold [9], FedNova [37], FedBN [22], FedProto [33], and FedFA [48]. Moreover, we select five of them that generalize across tasks to validate our method on the medical image segmentation task. For quantitative evaluation, we use top-1 accuracy for image classification, while medical segmentation is evaluated with the Dice coefficient.

Implementation Details. All methods are implemented in PyTorch, and we conduct all experiments on a single NVIDIA GTX 1080Ti GPU with 11GB of memory. Following previous work [22, 48], we employ AlexNet [10] as the image classification model and U-Net [32] as the medical image segmentation model. We use cross-entropy loss for image classification and Dice loss for medical image segmentation.
The batch size is 32 for the two image classification datasets and 16 for the ProstateMRI dataset. We adopt the SGD optimizer with a learning rate of 0.01 and weight decay of 1e-5 for the image classification datasets, and the Adam optimizer with a learning rate of 1e-3 and weight decay of 1e-4 for ProstateMRI. Furthermore, we run 100 communication rounds for the image classification tasks and 200 for the medical segmentation task, with 5 epochs of local training per round. For a fair comparison, we train all methods in the same environment and ensure that all methods have converged. Due to page limitations, we present additional experimental details and results in the Supplementary Material.

![](images/c551a1e3bfe06294fa986f4fe49c0006f67d3d1deb668909618cbb089b949b49.jpg)

![](images/e005130c0779fd821f0ece6119ceb6bea12841e3687eaba92bdd3a1daf255946.jpg)

![](images/694390196b2367076e6db8072a5325802eb2175be99c3f7a013e631fee1eacc7.jpg)

![](images/4ae7c126f5bdb9d1153baff3050b2f94013b1b508ed0e6ada616744e30097cbe.jpg)
(a) Office-Caltech-10

![](images/237f9b7a805d68a5cfd47e3e1b67f1df74ab47bc6888a2966de6da8ffecddf07.jpg)
(b) DomainNet

![](images/295b844056e49937506082cadc9c191270c5435a795fc5cabbc8663deedd2948.jpg)
(c) ProstateMRI
Figure 1. Illustration of test performance versus communication rounds on (a) Office-Caltech-10 [4], (b) DomainNet [28], and (c) ProstateMRI [24].

# 4.2. Main Results

In this section, we present the overall results on Office-Caltech-10 and DomainNet in Table 1 and on ProstateMRI in Table 2. For a detailed comparison, we present the test accuracy of each client and the average result.

All FL methods yield significant improvements when combined with FedRDN, consistently across the three datasets. As we can see, FedRDN leads to consistent performance improvements for all FL baselines on all three benchmarks compared with the basic data augmentation flow.
The improvements of FedRDN can be as large as $11.21\%$ on Office-Caltech-10, $3.26\%$ on DomainNet, and $1.37\%$ on ProstateMRI, respectively. Even FedFA, the state-of-the-art FL method for the feature-distribution-skewed setting, still gains improvements, e.g., $3.27\%$ on Office-Caltech-10. This indicates that input-level augmentation and feature-level augmentation are not contradictory and can be used simultaneously. Due to its use of personalized batch normalization layers, the gains achieved with FedBN are not as pronounced as with other methods; however, modest improvements are still present. Moreover, some weaker FL methods can even achieve better performance than others when using FedRDN. For example, after applying FedRDN, the accuracy of FedAvg becomes significantly higher than that of all other FL methods except FedFA, and FedProto even surpasses FedFA. These results demonstrate the effectiveness of the data-level solution, which can effectively mitigate feature shift. Besides, compared with other heterogeneous FL methods, our method has stronger flexibility and generalization ability.

Table 2. The dice score (%) of all approaches on ProstateMRI [24]. For a detailed comparison, we present the test results of six clients: BIDMC, HK, I2CVB, BMC, RUNMC, UCL, and the average result. $\uparrow$ and $\downarrow$ show the rise and fall of the average result before and after augmentation. We mark the best results in bold. (norm.: conventional data normalization)

| Method | BIDMC | HK | I2CVB | BMC | RUNMC | UCL | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg [27] | 84.16 | 94.51 | 94.60 | 88.43 | 92.78 | 85.64 | 90.02 |
| + norm | 86.20 | 92.53 | 94.74 | 89.85 | 92.18 | 87.91 | 90.57 (0.55)↑ |
| + FedRDN | 89.34 | 94.41 | 93.85 | 91.46 | 94.19 | 90.65 | 92.32 (2.30)↑ |
| FedProx [21] | 84.47 | 94.60 | 94.87 | 90.29 | 92.72 | 86.60 | 90.59 |
| + norm | 84.47 | 94.48 | 95.06 | 88.79 | 92.90 | 85.35 | 90.18 (0.41)↓ |
| + FedRDN | 88.87 | 94.19 | 95.09 | 90.99 | 93.03 | 89.17 | 91.89 (1.30)↑ |
| FedAvgM [6] | 87.02 | 94.32 | 94.29 | 91.35 | 92.83 | 86.75 | 91.09 |
| + norm | 89.05 | 93.59 | 94.75 | 89.93 | 93.52 | 88.22 | 91.51 (0.42)↑ |
| + FedRDN | 88.37 | 94.67 | 95.40 | 90.40 | 93.28 | 88.39 | 91.75 (0.66)↑ |
| FedBN [22] | 86.42 | 94.45 | 95.27 | 90.96 | 93.13 | 87.48 | 91.28 |
| + norm | 87.45 | 93.01 | 95.44 | 90.16 | 93.22 | 86.88 | 91.02 (0.26)↓ |
| + FedRDN | 89.47 | 93.61 | 95.65 | 90.99 | 93.26 | 87.72 | 91.78 (0.50)↑ |
| FedFA [48] | 89.18 | 92.77 | 94.18 | 92.62 | 93.63 | 89.04 | 91.90 |
| + norm | 89.12 | 94.40 | 95.22 | 91.95 | 93.42 | 89.28 | 92.23 (0.33)↑ |
| + FedRDN | 91.81 | 94.65 | 95.67 | 92.37 | 94.33 | 90.19 | 93.14 (1.24)↑ |

Table 3. Generalization performance of local models on Office-Caltech-10 [4]. We mark the best result in bold.

| Source-site | Target-site | FedAvg [27] | + norm | + FedMix [40] | + FedRDN |
| --- | --- | --- | --- | --- | --- |
| Amazon | Caltech | 48.00 | 48.09 | 42.22 | 48.88 |
| Amazon | DSLR | 46.87 | 44.41 | 59.37 | 81.25 |
| Amazon | Webcam | 77.96 | 77.96 | 84.74 | 83.05 |
| Caltech | Amazon | 58.33 | 58.85 | 51.56 | 63.02 |
| Caltech | DSLR | 68.75 | 68.75 | 75.00 | 84.37 |
| Caltech | Webcam | 83.05 | 83.05 | 91.52 | 86.35 |
| DSLR | Amazon | 42.70 | 41.14 | 33.85 | 60.93 |
| DSLR | Caltech | 33.33 | 35.11 | 35.55 | 35.56 |
| DSLR | Webcam | 79.66 | 77.96 | 74.57 | 84.74 |
| Webcam | Amazon | 53.64 | 55.20 | 47.91 | 65.10 |
| Webcam | Caltech | 41.33 | 43.55 | 40.00 | 48.00 |
| Webcam | DSLR | 68.75 | 68.75 | 78.12 | 87.50 |

FedRDN is superior to other input-level data augmentation techniques. As shown in Tables 1 and 2, FedRDN shows leading performance compared with conventional data normalization and FedMix, a previous input-level data augmentation technique. Moreover, these two augmentation techniques can even decrease performance in several cases, while FedRDN achieves consistent improvements. For instance, FedAvg and FedProto yield drops as large as $1.05\%$ and $3.54\%$ with conventional data normalization on Office-Caltech-10, respectively, and FedProx and FedFA show drops of $0.67\%$ and $5.54\%$ on DomainNet, respectively, when combined with FedMix. These results demonstrate the effectiveness of FedRDN and its stronger compatibility compared with other data augmentation methods.

# 4.3. Communication Efficiency

Convergence. To explore the impact of various data augmentation techniques on convergence, we plot the test performance curves of FedAvg and a state-of-the-art FL method, FedFA, over communication rounds on the three datasets in Fig. 1. FedRDN does not introduce any negative impact on convergence and even yields faster convergence at the early training stage ( $0 \sim 10$ rounds) in some cases. As training proceeds, FedRDN reaches a better solution. Besides, compared with the other methods, the convergence curves of FedRDN are more stable.

Communication Cost.
In addition to the existing communication overhead of FL methods, the additional communication cost of FedRDN covers only the statistical information. The dimension of the statistic is so small (mean and stan

Table 4. The performance of FedRDN over FedRDN-V on three datasets.

| Method | Office-Caltech-10 | DomainNet | ProstateMRI |
| --- | --- | --- | --- |
| FedAvg [27] | 62.51 | 42.32 | 90.02 |
| + FedRDN-V | 61.46 | 42.99 | 91.14 |
| + FedRDN (Ours) | 69.80 | 43.55 | 92.32 |
+ +![](images/7f37ab2c1a3db1ad275a79510747f2b42b44cbd9777f0e99b6b2fb49679ac808.jpg) +(a) + +![](images/68b75950fd6b50fc8e5691465611e4fd5aec9a91d353512c9cf2c678884d6815.jpg) +(b) +Figure 2. Illustration of test performance versus local epochs on (a) Office-Caltech-10 [4] and (b) DomainNet [28]. + +dard deviation are $\mathbb{R}^3$ for RGB images), that the increased communication cost can be neglected. This is much different from the FedMix, which needs to share the average images per batch. The increased communication cost of FedMix is as large as 156MB on Office-Caltech-10 and 567MB on DomainNet while the batch size of averaged images is 5, even larger than the size of model parameters. + +# 4.4. Cross-site Generalization performance + +As stated before, FedRDN augments the data with luxuri- ant distribution from all clients to learn the more generalized feature representation. Therefore, we further explore the generalization performance of local models by cross-site evaluation, and the results are presented in Table 3. As we can see, All local models of FedRDN yield a consistent improvement over FedAvg, and our method shows better generalization performance compared to other methods. The above results demonstrate that FedRDN can effectively improve the generalization of local feature representation, thereby mitigating feature shift across different clients. This is the reason why our method works. + +# 4.5. Analytical Studies + +FedRDN vs. FedRDN-V. To deeply explore FedRDN, we develop a variant, FedRDN-V. Instead of randomly transforming, it transforms the images with the average mean $\hat{u}$ and standard deviation $\hat{\sigma}$ of all clients during training and testing phases: + +$$ +\hat {u} = \sum_ {k = 1} ^ {K} \mu_ {k}, \quad \hat {\sigma} = \sum_ {k = 1} ^ {K} \sigma_ {k}. \tag {9} +$$ + +This degenerates into traditional data normalization with the hyper-parameter $\hat{u}$ and $\hat{\sigma}$ . 
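For contrast, the FedRDN-V variant of Eq. (9) can be sketched as follows; the function name and toy statistics are ours. Aggregating the shared per-client statistics once yields a single fixed $(\hat{u},\hat{\sigma})$, so every image is normalized identically.

```python
import numpy as np

def fedrdn_v_stats(shared_stats):
    """Eq. 9: sum the per-client channel statistics into one fixed
    (mu_hat, sigma_hat) pair, used at both training and test time."""
    mus, sigmas = zip(*shared_stats)
    return np.sum(mus, axis=0), np.sum(sigmas, axis=0)

# Toy statistics for two clients (3 channels each).
shared_stats = [(np.array([0.4, 0.5, 0.6]), np.array([0.2, 0.2, 0.2])),
                (np.array([0.6, 0.5, 0.4]), np.array([0.3, 0.3, 0.3]))]
mu_hat, sigma_hat = fedrdn_v_stats(shared_stats)

# With fixed statistics, the transform is ordinary normalization,
# i.e., FedRDN-V loses the per-image randomness of FedRDN.
x = np.full((2, 2, 3), 1.0)
x_hat = (x - mu_hat) / sigma_hat
```

Because $(\hat{u},\hat{\sigma})$ never changes, the augmentation no longer exposes each local model to the individual client distributions, which is the ingredient the comparison below isolates.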
The comparison results on the three datasets are presented in Table 4. FedRDN-V yields a significant drop compared with our method. This indicates that the effectiveness of our method comes not from traditional data normalization but from augmenting samples with information drawn from the multiple real local distributions. In this way, each local model becomes more generalized instead of being biased toward a skewed underlying distribution.

![](images/71d708bec4c0e2a32f9dc07f57a42c65d64119d69421b54281e1630b343bde1f.jpg)
(a) FedAvg (Local model)

![](images/d5aca4cb3a761a888bb9bf4bb6420c0ea38b05dcaf108b48344efd62ec505dbb.jpg)
(b) FedAvg + FedRDN (Local model)

![](images/0e6fff3b8cef0e9c9a669493932faea63b737d46dd3f86f21a3708b9986108dc.jpg)
(c) FedAvg (Global model)

![](images/77ada5d15a694e84cf99b7332a9e0b9b50450692eaf34c235152abd874094dec.jpg)
(d) FedAvg + FedRDN (Global model)
Figure 3. T-SNE visualization of features on Office-Caltech-10 [4]. The T-SNE is conducted on the test sample features of four clients. We use different colors to mark features from different clients.

Robust to Local Epochs. To explore the robustness of FedRDN to different numbers of local epochs, we vary the local epochs over $\{1, 5, 10, 15, 20\}$ and evaluate the trained models. The results are presented in Fig. 2. Generally, more epochs of local training increase the client discrepancy under non-IID data, leading to slower convergence and degraded performance at the same number of communication rounds; the result of FedAvg validates this. By contrast, our method obtains consistent improvements under different local epoch settings. Moreover, our approach performs stably across these settings because it effectively addresses feature shift.

Varying Architectures.
To validate the robustness of our method to different network architectures, we conduct additional experiments with the input-level augmentation methods using ResNet-18 [5] on Office-Caltech-10. We change only the network and keep the rest of the experimental setup unchanged. The results are presented in Table 5. Due to the small size of the dataset, the performance of ResNet-18 is not as good as that of AlexNet, which is consistent with the findings reported in [49]. However, the results still show consistent improvements from our proposed augmentation approach, indicating its strong robustness and adaptability to different network architectures.

Table 5. Accuracy (%) of input-level data augmentation methods with varying architectures on Office-Caltech-10 [4].

| Network | FedAvg [27] | + norm | + FedMix [40] | + FedRDN |
| --- | --- | --- | --- | --- |
| AlexNet | 62.51 | 61.46 | 63.59 | 69.80 |
| ResNet-18 | 43.74 | 43.60 | 53.81 | 56.70 |

Feature Distribution. To yield more insights into FedRDN, we utilize T-SNE [35], a popular analysis tool, to visualize the distribution of the features output by the local model (Fig. 3 (a) and (b)) and the global model (Fig. 3 (c) and (d)) before and after augmentation. Specifically, we visualize the features of test samples for each client and mark them with different colors: 0 (Amazon), 1 (Caltech), 2 (DSLR), and 3 (Webcam). Due to the different data distributions among these clients, their local feature distributions exhibit a shift (Fig. 3 (a)). This makes it challenging for the global model to learn consistent feature distributions across different clients. Intuitively, as shown in Fig. 3 (c), the features of client 2 (DSLR) are clustered in the bottom-left corner, showing a noticeable shift from the features of other clients. In contrast, our method significantly improves the feature generalization of local models, narrowing the distance between different local feature distributions (Fig. 3 (b)). This helps the global model learn a more consistent feature distribution. As shown in Fig. 3 (d), the global features of all four clients are uniformly distributed, indicating that the features from these clients now lie in a shared feature space. These results reveal the working principle of FedRDN and further confirm its effectiveness in mitigating feature shift.

# 5. Conclusion

In this paper, we focus on addressing feature distribution skew in FL. Different from previous insights into this problem, we tackle the challenge at the level of the input data. The proposed data augmentation technique, FedRDN, is a plug-and-play component that can be easily integrated into the data augmentation flow, thereby effectively mitigating feature shift. Our extensive experiments show that FedRDN further improves the performance of various state-of-the-art FL methods across three datasets, which demonstrates the effectiveness, compatibility, and generalizability of our method.

# 6.
Limitation

This work primarily focuses on visual tasks, where data statistics are privacy-agnostic information, as they capture only distribution information rather than individual-level information. Besides, we have mainly demonstrated the effectiveness of FedRDN in the feature-distribution-skewed FL scenario through empirical analysis. In the future, we plan to undertake an in-depth theoretical analysis to further enhance the understanding and explanation of our method. Considering the effectiveness, compatibility, and generalizability of FedRDN, we believe this work contributes substantively to advancing the development of FL.

# Acknowledgment

This work is supported by the Guangdong Science and Technology Department (No. 2024ZDZX2004), the Guangzhou-HKUST(GZ) Joint Funding Program (No. 2023A03J0671), the H. Fu's Agency for Science, Technology and Research (A*STAR) Central Research Fund ("Robust and Trustworthy AI system for Multi-modality Healthcare"), and the RIE2025 Industry Alignment Fund - Industry Collaboration Project (IAF-ICP) (Award No: I2301E0020), administered by A*STAR.

# References

[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. In International Conference on Learning Representations, 2021.
[2] Jiahua Dong, Lixu Wang, Zhen Fang, Gan Sun, Shichao Xu, Xiao Wang, and Qi Zhu. Federated class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10164-10173, 2022.
[3] Xiuwen Fang and Mang Ye. Robust federated learning with noisy and heterogeneous clients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10072-10081, 2022.
[4] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2066-2073. IEEE, 2012.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[6] Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.
[7] Wenke Huang, Mang Ye, Zekun Shi, He Li, and Bo Du. Rethinking federated learning with domain shift: A prototype view. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16312-16322. IEEE, 2023.
[8] Meirui Jiang, Zirui Wang, and Qi Dou. Harmofl: Harmonizing local and global drifts in federated learning on heterogeneous medical images. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1087-1095, 2022.
[9] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pages 5132-5143. PMLR, 2020.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017.
[11] Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy. arXiv preprint arXiv:1710.10686, 2017.
[12] Teerath Kumar, Muhammad Turab, Kislay Raj, Alessandra Mileo, Rob Brennan, and Malika Bendechache. Advanced data augmentation approaches: A comprehensive survey and future directions. arXiv preprint arXiv:2301.02830, 2023.
[13] Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, and Se-Young Yun. Preservation of the global knowledge by not-true distillation in federated learning. Advances in Neural Information Processing Systems, 2021.
[14] Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, and Kilian Q Weinberger.
On feature normalization and data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12383-12392, 2021. +[15] Pan Li, Da Li, Wei Li, Shaogang Gong, Yanwei Fu, and Timothy M Hospedales. A simple feature augmentation for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8886-8895, 2021. +[16] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10713-10722, 2021. +[17] Qinbin Li, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE Transactions on Knowledge and Data Engineering, 35(4):3347-3366, 2021. +[18] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 965-978. IEEE, 2022. +[19] Qinbin Li, Bingsheng He, and Dawn Song. Adversarial collaborative learning on non-IID features. In Proceedings of the 40th International Conference on Machine Learning, pages 19504-19526. PMLR, 2023. +[20] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE signal processing magazine, 37(3):50-60, 2020. +[21] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429-450, 2020. +[22] Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, and Qi Dou. Fedbn: Federated learning on non-iid features via local batch normalization. arXiv preprint arXiv:2102.07623, 2021. +[23] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. 
Advances in Neural Information Processing Systems, 33:2351-2363, 2020. +[24] Quande Liu, Qi Dou, Lequan Yu, and Pheng Ann Heng. Msnet: multi-site network for improving prostate segmentation with heterogeneous mri data. IEEE transactions on medical imaging, 39(9):2713-2724, 2020. + +[25] Yang Liu, Yan Kang, Tianyuan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, and Qiang Yang. Vertical federated learning: Concepts, advances, and challenges. IEEE Transactions on Knowledge and Data Engineering, 2024. +[26] Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Advances in Neural Information Processing Systems, 34:5972-5984, 2021. +[27] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273-1282. PMLR, 2017. +[28] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406-1415, 2019. +[29] Xingchao Peng, Zijun Huang, Yizhe Zhu, and Kate Saenko. Federated adversarial domain adaptation. In International Conference on Learning Representations, 2020. +[30] Rui Qian, Weiyao Lin, John See, and Dian Li. Controllable augmentations for video representation learning. Visual Intelligence, 2(1):1, 2024. +[31] Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, and Ali Jadbabaie. Robust federated learning: The case of affine distribution shifts. Advances in Neural Information Processing Systems, 33:21554-21565, 2020. +[32] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. 
In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234-241. Springer, 2015. +[33] Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8432-8440, 2022. +[34] Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. Advances in Neural Information Processing Systems, 35:19332-19344, 2022. +[35] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9 (11), 2008. +[36] Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, and Yannis Avrithis. Alignmixup: Improving representations by interpolating aligned features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19174-19183, 2022. +[37] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. Advances in Neural Information Processing Systems, 33, 2020. + +[38] Yunlu Yan, Hong Wang, Yawen Huang, Nanjun He, Lei Zhu, Yong Xu, Yuexiang Li, and Yefeng Zheng. Cross-modal vertical federated learning for mri reconstruction. IEEE Journal of Biomedical and Health Informatics, 28(11):6384-6394, 2024. +[39] Yunlu Yan, Lei Zhu, Yuexiang Li, Xinxing Xu, Rick Siow Mong Goh, Yong Liu, Salman Khan, and Chun-Mei Feng. A new perspective to boost performance fairness for medical federated learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 13-23. Springer, 2024. +[40] Tehrim Yoon, Sumin Shin, Sung Ju Hwang, and Eunho Yang. Fedmix: Approximation of mixup under mean augmented federated learning. 
arXiv preprint arXiv:2107.00233, 2021. +[41] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6023-6032, 2019. +[42] Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Nghia Hoang, and Yasaman Khazaeni. Bayesian nonparametric federated learning of neural networks. In International conference on machine learning, pages 7252-7261. PMLR, 2019. +[43] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018. +[44] Jie Zhang, Zhiqi Li, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, and Chao Wu. Federated learning with label distribution skew via logits calibration. In International Conference on Machine Learning, pages 26311-26329. PMLR, 2022. +[45] Lefeng Zhang, Tianqing Zhu, Ping Xiong, Wanlei Zhou, and S Yu Philip. A game-theoretic federated learning framework for data quality improvement. IEEE Transactions on Knowledge and Data Engineering, 35(11):10952-10966, 2022. +[46] Lefeng Zhang, Tianqing Zhu, Ping Xiong, Wanlei Zhou, and S Yu Philip. A robust game-theoretical federated learning framework with joint differential privacy. IEEE Transactions on Knowledge and Data Engineering, 35(4):3333-3346, 2022. +[47] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, pages 13001-13008, 2020. +[48] Tianfei Zhou and Ender Konukoglu. Fedfa: Federated feature augmentation. International Conference on Learning Representations, 2023. +[49] Weiming Zhuang and Lingjuan Lyu. Is normalization indispensable for multi-domain federated learning? arXiv preprint arXiv:2306.05879, 2023. 
diff --git a/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/full.md b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d04a08fea8e9202ad505d17909d9efdf215c0d16 --- /dev/null +++ b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/full.md @@ -0,0 +1,281 @@ +# A Simple yet Effective Layout Token in Large Language Models for Document
Understanding + +Zhaoqing Zhu $^{1*}$ , Chuwei Luo $^{1*†}$ , Zirui Shao $^{2*}$ , Feiyu Gao $^{1†}$ , Hangdi Xing $^{2}$ , Qi Zheng $^{1}$ , Ji Zhang $^{1}$ $^{1}$ Alibaba Group, $^{2}$ Zhejiang University +{zzhaoqing.z, luochuwei, zhengqisjtu}@gmail.com +{shaozirui, xinghd}@zju.edu.cn, {feiyu.gfy, zj122146}@alibaba-inc.com + +# Abstract + +Recent methods that integrate spatial layouts with text for document understanding in large language models (LLMs) have shown promising results. A commonly used method is to represent layout information as text tokens and interleave them with text content as inputs to the LLMs. However, such a method still demonstrates limitations, as it requires additional position IDs for tokens that are used to represent layout information. Due to the constraint on max position IDs, assigning them to layout information reduces those available for text content, reducing the capacity for the model to learn from the text during training, while also introducing a large number of potentially untrained position IDs during long-context inference, which can hinder performance on document understanding tasks. To address these issues, we propose LayTokenLLM, a simple yet effective method for document understanding. LayTokenLLM represents layout information as a single token per text segment and uses a specialized positional encoding scheme. It shares position IDs between text and layout tokens, eliminating the need for additional position IDs. This design maintains the model's capacity to learn from text while mitigating long-context issues during inference. Furthermore, a novel pre-training objective called Next Interleaved Text and Layout Token Prediction (NTLP) is devised to enhance cross-modality learning between text and layout tokens. Extensive experiments show that LayTokenLLM outperforms existing layout-integrated LLMs and MLLMs of similar scales on multi-page document understanding tasks, as well as most single-page tasks. + +# 1. 
Introduction

Document understanding [9] is currently an important area in both industry and academic research, driven by the critical need to efficiently process and understand complex documents.

![](images/8c6944ae02a889ca9cd11ea9a25c049c3a299eae3bfe5790bd9d7650addde9be.jpg)
Figure 1. Comparison with other Layout-as-Token methods. Previous Layout-as-Token methods require additional position IDs for layout information, which squeezes the learning space for text content, while LayTokenLLM eliminates the need for additional position IDs for layout information by sharing the first position ID of the corresponding text content.

In recent years, large language models (LLMs) and multimodal large language models (MLLMs) have made remarkable progress in this field, especially in tasks involving rich textual content, such as the document-oriented Visual Question Answering (DocVQA) [32] task and the visually-rich document information extraction (VIE) [19, 33, 48] task. Some works [20, 28, 44] suggest that the most valuable information for document understanding can be derived from both the text and its layout, treating spatial layouts as a form of lightweight visual information. Building on this idea, approaches [20, 28, 44] that integrate such spatial layouts as visual information with text for LLMs have shown promising results, and sometimes superior performance compared to MLLMs.

| Text and Layout Format | Extra Position IDs (in a Segment) | Extra Position IDs (Avg. on MP-DocVQA) | T-Ratio ($N_t/N$) |
|---|---|---|---|
| Plain Text (w/o Layout) | 0 | 0 | 100% |
| `{text:"text",Box:[123, 456, 133, 500]}` [13] | 27 | 8015 | 27.02% |
| `<ref>text</ref><box>(123,456),(133,500)</box>` [3, 27] | 21 | 7959 | 32.26% |
| `<ref>text</ref><box>[253, 231, 733, 787]</box>` [6] | 18 | 6894 | 35.71% |
| text + one box hidden representation [28] | 1 | 350 | 90.91% |
| text + layout_token (Ours) | 0 | 0 | 100% |

Table 1. Comparison of different paradigms for integrating layout information with text content. T-Ratio is defined as the ratio of the position utilization for text tokens ($N_t$) to the maximum trained position length ($N$). In the table, $N$ is set to 2048.

The integration of layout information can be broadly categorized into two types: layout-as-modality methods [31, 44] and layout-as-token methods [13, 20, 28, 35]. Layout-as-modality methods treat layout information as an additional modality [31, 44], modeling it alongside text within LLMs and training specialized LLMs. Although layout-as-modality methods have shown good performance, they require substantial modifications to the LLM architecture to incorporate layout information, making them less lightweight. On the other hand, layout-as-token methods represent the layout information as text tokens and interleave them with the corresponding text content as inputs to the LLMs, which provides a more lightweight and commonly used approach [20, 28] for document understanding.

However, existing layout-as-token methods still encounter a significant limitation. Due to the constraint on max position IDs, assigning them to layout information reduces those available for text content, reducing the capacity of the model to learn from the text during training. As illustrated in Fig. 1, the context window during training is constrained by the maximum position ID $N$. When tokens that represent layout information are integrated into an LLM, the allocation of additional position IDs ($m'$) for layout information reduces the number of position IDs available for the text content ($N - m'$), leaving less capacity for LLMs to learn from the text during training. To quantify the impact of incorporating additional layout information, a rough measure called T-Ratio, defined as the ratio of the position utilization for text tokens ($N_t$) to the maximum trained position length ($N$), is shown in Tab. 1.
As can be seen, assigning position IDs to tokens that represent layout information significantly lowers the T-Ratio, even when only a single position ID is used to represent the layout in a segment. Although the T-Ratio is a rough measure, it reflects the impact of introducing layout information: assigning position IDs to layout tokens reduces the number of position IDs available for text content, ultimately limiting the model's capacity to learn from the text during training. It can also be seen that existing methods allocate hundreds or even thousands of additional position IDs for layout information on the MP-DocVQA dataset; these additional position IDs are potentially unlearned during training, which may exacerbate the long-context inference problem [34, 37].

To address these issues, we propose a simple yet effective framework for document understanding, called LayTokenLLM. LayTokenLLM is a lightweight framework that represents the layout information of each text segment as a single layout token and employs a specially designed positional encoding scheme for the layout tokens. Notably, as shown in Tab. 1, LayTokenLLM incorporates layout information as a single layout token without allocating any additional position ID, ensuring comprehensive learning of text content ($100\%$ of max position IDs for text content) during training, while alleviating long-context issues introduced by layout information during inference. Additionally, a novel pre-training objective called Next Interleaved Text and Layout Token Prediction (NTLP) is proposed to improve the comprehension of the interleaved format and deepen the connection between these distinct types of information in LayTokenLLM. Different from previous methods that focus solely on either text or layout content for subsequent predictions [28, 44], NTLP leverages the auto-regressive traits of LLMs and additionally facilitates cross-prediction between text and layout.
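As a rough arithmetic check on the T-Ratio trend in Tab. 1, the ratio can be computed per segment. The sketch below assumes a hypothetical average text segment of about 10 tokens; the actual per-format token counts are dataset-dependent, so this is only an illustration:

```python
def t_ratio(text_tokens: int, extra_layout_ids: int) -> float:
    """T-Ratio = N_t / N for one segment: the fraction of position IDs
    that remain available for text when a segment of `text_tokens` text
    tokens spends `extra_layout_ids` additional position IDs on layout."""
    return text_tokens / (text_tokens + extra_layout_ids)

# Hypothetical 10-token segment, mirroring the trend in Tab. 1:
for fmt, extra in [("plain text / shared-ID layout token", 0),
                   ("JSON-style box string", 27),
                   ("one box hidden representation", 1)]:
    print(f"{fmt}: {t_ratio(10, extra):.2%}")
```

Under these assumed segment sizes the formula lands close to the reported T-Ratios, e.g. 10/(10+27) is roughly 27% and 10/11 is roughly 90.9%.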
Extensive experiments across widely used benchmarks for both single-page and multi-page document understanding demonstrate the effectiveness of the proposed LayTokenLLM.

Our contributions are summarized as follows:

1) This paper introduces LayTokenLLM, a simple yet effective method to integrate layout information into LLMs for document understanding. It represents layout information as a single token and uses a specially designed positional encoding scheme, avoiding the issues caused by allocating additional position IDs for the layout information.
2) A novel pre-training objective called Next Interleaved Text and Layout Token Prediction is introduced to enhance cross-modal prediction and relational learning between text and layout modalities.
3) Experimental results show that the proposed LayTokenLLM significantly outperforms existing methods utilizing LLMs/MLLMs for multi-page document understanding, while also achieving superior performance in most subtasks of single-page document comprehension.

# 2. Related Work

Recently, leveraging large language models (LLMs) and multimodal large language models (MLLMs) for document understanding has shown significant progress. Although existing MLLMs show promising results in document understanding, they still struggle with issues associated with high-resolution input, particularly in cases of dense or difficult-to-recognize text. Considering that layout information is vital for document understanding [2, 10, 12, 17, 22, 25, 29, 30, 36, 45, 47, 50], an alternative approach that integrates spatial layouts with text as lightweight visual information for LLMs has shown promising results, and sometimes even superior performance compared to MLLMs. These approaches can be categorized into two types: layout-as-modality methods and layout-as-token methods.

# 2.1. Multi-modal Large Language Models

Existing MLLMs [1, 4, 8, 26, 39, 49, 52] show exceptional performance for document understanding.
Models [4, 7, 8, 23, 51] such as InternVL and Qwen-VL have augmented MLLMs with advanced visual capabilities by introducing high-resolution visual input to better handle documents containing dense or difficult-to-recognize text. However, these methods require an excessive number of image tokens, adversely affecting inference speed [7, 23, 51]. In response to this challenge, a series of MLLMs [14, 15, 24] propose to reduce the token count by compressing image patches, but this may lead to the loss of critical textual information.

# 2.2. Layout-as-Modality Methods

Layout-as-Modality methods treat layout information as an additional modality, modeling it alongside text within LLMs [5, 40, 42] and training specialized LLMs for document understanding. Luo et al. [31] make pioneering attempts to combine layout information with text in LLMs. To fully exploit the document layout information, their method employs pre-trained document encoders, which represent the spatial layout of text as an additional modality, similar to previous pre-trained text layout models [18, 46]. Recently, Wang et al. [44] propose to further disentangle the text and layout modalities by considering the inter-dependency between them. However, these methods require modifying the model architecture and necessitate an additional pre-training stage, making them less lightweight.

# 2.3. Layout-as-Token Methods

Layout-as-Token methods represent layout information as text sequences and embed these sequences, interleaved with the corresponding text, as inputs to the LLMs, as shown in Tab. 1, providing a more natural and commonly used approach. Specifically, He et al. [13] introduce an in-context learning format like `{text:"text",Box:[123, 456, 133, 500]}`, which incorporates layout information (see Tab. 1, line 2) in the demonstrations to enable LLMs to understand positional relationships. Lamott et al. [20] design a novel document verbalizer to effectively encode the layout information in the prompt. Perot et al. [35] generate LLM prompts containing both the text content and coordinate tokens, which communicate the layout modality and act as unique identifiers of the text segments for information extraction and localization, while Lu et al. [28] use one hidden token to represent layout information. Despite their convenience and effectiveness, these methods introduce an excessive number of interleaved positions to represent the layout, leading to a dilution of the textual content (see Tab. 1). The extra interleaved positions not only squeeze the learning space for the text content but also increase the burden of comprehending it.

# 3. Method

In this section, our LayTokenLLM is presented: an LLM-based method that incorporates text together with spatial layout information, which can be viewed as lightweight visual information. To incorporate layout information while avoiding issues arising from extra position ID allocations, and to enhance the connection between text and layout within the same segment, two primary components are proposed: a simple yet effective Layout Token, and a pre-training objective designed for the interleaved text and layout format.

# 3.1. Model Architecture

The overall architecture of LayTokenLLM is shown in Fig. 2. Once the text segments with corresponding layout information are parsed from the document (e.g., by OCR), the bounding box coordinates of each text segment are first compressed into one single layout token with a layout tokenizer. Then the text tokens and their corresponding layout tokens are interleaved and input to the LLM. A simple yet effective layout positional encoding scheme is designed to address the issues of additional position IDs. Furthermore, a novel pre-training objective is proposed to enhance cross-modal connections within the same segment.

# 3.1.1 Details of Layout Token

As shown in the upper left part of Fig.
2, a learnable embedding $t \in \mathbb{R}^d$ is employed as a query, mapping the bounding box $Box$ of each text segment into a single layout token $b \in \mathbb{R}^d$:

$$
b = F_{Attn}\left(t, F_{B}(Box)\right), \tag{1}
$$

where $F_{B}$ represents a projector that encodes the bounding box defined by the four-dimensional coordinates $[x_1, y_1, x_2, y_2]$ into a high-dimensional embedding, and

![](images/5330eb1156768ee37596184edb3f482858b1689dc1333bed3d02a1723003c1b6.jpg)
Figure 2. The overall architecture of LayTokenLLM. Given the text segments with layouts parsed from the document (e.g., by OCR), LayTokenLLM first tokenizes the layout information (bounding box) of each text segment into a single layout token by leveraging a trainable projector and an attention module with a learnable query. Subsequently, the text tokens and layout tokens are interleaved, and the position IDs are assigned by sharing the first position ID of each text segment with the corresponding layout token, preserving the entire learning space for textual content. Finally, distinct training objectives are employed for the text and layout information, respectively.

$F_{Attn}$ represents an attention encoder that takes the learnable embedding as the query and the high-dimensional embedding of the bounding box as the key and value. Through the layout tokenizer, the layout information is significantly compressed, alleviating the burden of longer sequences while enhancing inference speed.

# 3.1.2 Positional Encoding Scheme for Layout Token

The most prevalent positional encoding method for LLMs is the Rotary Positional Encoding (RoPE) [38]. Let $T$ and $L$ denote the numbers of tokens used for text and layout information in an OCR segment; previous methods allocate additional position IDs for the interleaved layout information and set the position IDs of a segment $P$ as:

$$
P = [0, 1, \dots, T - 1, T, \dots, T + L - 1]. \tag{2}
$$

However, even when the layout information is compressed to a single layout token per text segment, an additional position ID must still be allocated. Moreover, the positional distance between adjacent text segments is stretched by the inserted layout tokens.

To address the issues of additional position IDs and the comprehension burden of stretched positional distances introduced by layout information, a straightforward and efficient positional encoding scheme is proposed that reuses the position IDs already utilized by the text tokens for the layout tokens. Considering the cross-modality alignment within the same text segment, each single layout token is assigned the position ID of the first text token of its corresponding text content (as illustrated in the lower left part of Fig. 2). The position IDs of a text segment $P$ are then expressed as:

$$
P = [0, 1, \dots, T - 1, 0]. \tag{3}
$$

Consequently, LayTokenLLM needs no additional position IDs for layout information, enabling the trained position IDs to be entirely dedicated to text content and achieving a $100\%$ T-Ratio. At the same time, the positional distance between adjacent text segments is preserved.

# 3.2. Pretraining Objective

Leveraging the autoregressive capabilities of LLMs and inspired by the "Next Token Prediction" objective of LLM pretraining, the Next Interleaved Text and Layout Token Prediction (NTLP) objective is proposed. Previous works, such as LayTextLLM [28], focus solely on the prediction of text tokens under the interleaved text and layout format without supervising layout information, even though they integrate layout information into LLMs. Considering the significant role of layout information in document understanding, as illustrated in Fig. 3, NTLP performs the next-token prediction task to reconstruct the document on all interleaved text and layout tokens, with training on both modalities.
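Before turning to the objective in detail, the contrast between the baseline allocation of Eq. (2) and the shared-ID scheme of Eq. (3) can be made concrete with a short sketch, assuming layout is already compressed to one token per segment (a simplified illustration with hypothetical segment lengths, not the released implementation):

```python
def interleaved_position_ids(segment_lengths):
    """Baseline (Eq. 2 with L = 1): each text segment of T tokens is
    followed by one layout token, and every token, layout included,
    consumes a fresh position ID."""
    ids, nxt = [], 0
    for T in segment_lengths:
        ids.extend(range(nxt, nxt + T))   # text tokens
        ids.append(nxt + T)               # layout token takes an extra ID
        nxt += T + 1
    return ids

def shared_position_ids(segment_lengths):
    """LayTokenLLM scheme (Eq. 3): the layout token reuses the first
    position ID of its text segment, so no extra IDs are consumed."""
    ids, nxt = [], 0
    for T in segment_lengths:
        first = nxt
        ids.extend(range(nxt, nxt + T))   # text tokens
        ids.append(first)                 # layout token shares the first ID
        nxt += T                          # counter advances by T only
    return ids

print(interleaved_position_ids([3, 2]))  # [0, 1, 2, 3, 4, 5, 6]
print(shared_position_ids([3, 2]))       # [0, 1, 2, 0, 3, 4, 3]
```

Under the shared scheme the largest position ID consumed is determined by the text alone, which is exactly the 100% T-Ratio property claimed for LayTokenLLM, and the distance between adjacent text segments is not stretched.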
Thus, NTLP enables

![](images/649aea53022b3802cab5a2b57c4ab5c3e3af5fe5cb10f80d084d2a0226e14b85.jpg)
Figure 3. Illustration of the Next Interleaved Text and Layout Token Prediction objective. The supervision is conducted on both text and layout tokens to reconstruct text content and layout information simultaneously.

effective learning of layout information, enhances cross-modal prediction, and improves relational learning between text and layout modalities.

Specifically, NTLP minimizes the loss between the ground truth of the next token and its prediction, whether the token is a text token or a layout token, and the loss function is defined as:

$$
\mathcal{L} = \frac{1}{N - 1} \sum_{i = 1}^{N - 1} \mathcal{L}_{i}\left(z^{i} \mid z^{0}, z^{1}, \dots, z^{i - 1}\right), \tag{4}
$$

where $z^i$ denotes the $i$-th token, while $\mathcal{L}_i$ represents the loss associated with predicting the token $z^i$ based on all preceding tokens $z^0, z^1, \dots, z^{i-1}$. For supervised training involving text, the cross-entropy (CE) loss commonly used with large language models (LLMs) is employed. Notably, given that the layout information has been encoded as a single token from the floating-point representation of the layout (bounding box) information, NTLP introduces a dedicated layout head $f_{lay}$ to map the layout hidden states to four-dimensional coordinates $[x_1, y_1, x_2, y_2]$, which serve as the predicted layout output for supervised training with the Mean Squared Error (MSE) loss. Thus, $\mathcal{L}_i$ can be expressed as:

$$
\mathcal{L}_{i} = \begin{cases} \mathcal{L}_{CE}\left(f_{text}(z^{i}), y_{text}^{i}\right), & z^{i} \in \mathcal{C}_{text}, \\ \mathcal{L}_{MSE}\left(f_{lay}(z^{i}), Box^{i}\right), & z^{i} \in \mathcal{C}_{lay}, \end{cases} \tag{5}
$$

where $f_{text}$ denotes the text head, and $y_{text}^{i}$ represents the one-hot encoding of the true label corresponding to the text token $z^{i} \in \mathcal{C}_{text}$. Additionally, $Box^{i}$ signifies the true four-dimensional coordinates for the layout token $z^{i} \in \mathcal{C}_{lay}$.

# 4. Experiments

# 4.1. Training Dataset Collection

The pre-training data of LayTokenLLM comes from the open-source document dataset called Layout-aware SFT data from LayoutLLM [31], which comprises an ensemble of diverse and high-quality data relevant to document understanding and information extraction tasks. For efficient pre-training, documents with token lengths of more than 2k are filtered out.

The SFT data of LayTokenLLM employs the datasets extensively used in single-page and multi-page document understanding tasks to ensure high-quality SFT. For the single-page document understanding task, the combined training sets of DocVQA [32] and SIBR [48] constitute the SFT dataset. DocVQA includes 50k question-answer pairs grounded on 12k document images. Meanwhile, SIBR is a real-world dataset for Visual Information Extraction tasks, covering challenging scenarios with difficult-to-recognize text such as blur, partial occlusions, and printing shifts. The SFT for multi-page document understanding leverages an ensemble of datasets that incorporates the training sets of MP-DocVQA [41] and DUDE [43].

# 4.2. Training Setup

In the experiments, two widely used models, Qwen1.5-7B [40] and Llama3-8B [11], are employed as the main LLM components of LayTokenLLM, referred to as LayTokenLLM-7B and LayTokenLLM-8B, respectively.
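For concreteness, the modality-routed NTLP loss of Eqs. (4)-(5) amounts to picking a per-position loss by token class. The numpy sketch below is a toy illustration under assumed shapes, with hypothetical heads `W_text` and `W_lay`; it is not the training code used in the paper:

```python
import numpy as np

def ntlp_loss(hidden, targets, is_layout, W_text, W_lay):
    """Route each next-token loss by modality (Eq. 5): cross-entropy via a
    text head for text tokens, MSE via a 4-dim layout head for layout
    tokens, then average over positions (Eq. 4)."""
    total = 0.0
    for h, tgt, lay in zip(hidden, targets, is_layout):
        if lay:                              # layout token: regress its box
            box_pred = h @ W_lay             # (d,) -> (4,)
            total += np.mean((box_pred - np.asarray(tgt)) ** 2)
        else:                                # text token: classify next id
            logits = h @ W_text              # (d,) -> (vocab,)
            m = logits.max()
            log_z = m + np.log(np.exp(logits - m).sum())
            total += log_z - logits[int(tgt)]  # CE against a one-hot target
    return total / len(hidden)

rng = np.random.default_rng(0)
d, vocab = 8, 16
W_text, W_lay = rng.normal(size=(d, vocab)), rng.normal(size=(d, 4))
hidden = [rng.normal(size=d) for _ in range(3)]
targets = [2, [0.10, 0.20, 0.65, 0.80], 5]       # token id, box, token id
loss = ntlp_loss(hidden, targets, [False, True, False], W_text, W_lay)
print(f"toy NTLP loss: {loss:.4f}")
```

Both branches are non-negative, so the averaged objective behaves like a standard next-token loss with an extra regression term at layout positions.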
Moreover, for a more comprehensive comparison with other Layout-as-Token methods, we also consider baselines that use the same training data and LLM backbone as ours but employ the different commonly used text and layout formats proposed by existing methods [4, 8, 13] as input, such as `{text:"text",Box:[123, 456, 133, 500]}`, as shown in Tab. 1. During both the pre-training and SFT phases, as illustrated in Fig. 2, the LLM is frozen, while the parameters of LoRA [16], the layout tokenizer, and the layout head are randomly initialized and updated to support lightweight training. The pre-training stage and single-page document SFT are trained for 3 epochs with a batch size of 64, a learning rate of $3\mathrm{e}{-4}$, and a maximum position ID of 2048. To enable full training on long-context content under computational constraints, multi-page document SFT employs a 2-stage strategy: first, documents up to 4k tokens (the maximum position ID) are processed with a batch size of 32; second, those exceeding 4k and up to 16k tokens are handled with a batch size of 8. The training is performed on 8 Nvidia A100 GPUs.

# 4.3. Evaluation Setup

For the single-page document understanding task, widely used benchmarks such as Document Visual Question Answering (Document VQA) and Visual Information Extraction (VIE) are employed, with only the test sets being utilized across all benchmarks. The Document VQA evaluation specifically utilizes the DocVQA test set, consisting of 5,188 questions. For the VIE task, which includes the SIBR [48], FUNSD [19], and CORD [33] benchmarks, the cleaned test
Single-page Document VQA: SIBR, FUNSD, CORD, DocVQA. Multi-page Document VQA: MP-DocVQA, DUDE.

| Setting | SIBR | FUNSD | CORD | DocVQA | MP-DocVQA | DUDE |
|---|---|---|---|---|---|---|
| *Plain Text* | | | | | | |
| Qwen1.5-7B-Chat [40] | 38.81 | 52.52 | 29.71 | 64.27 | 47.15 | 28.98 |
| Llama3-8B-Instruct [11] | 51.77 | 57.47 | 40.00 | 74.22 | 50.75 | 24.89 |
| *Text + Layout-as-Modality* | | | | | | |
| DocLLM-7B◇ [44] | - | (51.80) | (67.40) | 69.50 | - | - |
| LayoutLLM-7B◇ [31] | - | 79.98 | 63.10 | 74.27 | - | - |
| *Text + Layout-as-Token* | | | | | | |
| LayTextLLM-7B◇ [28] | - | 72.00 | 45.50 | 77.20 | - | - |
| text, [123, 456, 133, 500]★ | 91.44 | 79.89 | 67.77 | 81.16 | 59.17 | 41.01 |
| `{text:"text",Box:[123, 456, 133, 500]}`★ [13] | 91.45 | 79.98 | 68.57 | 81.98 | 55.96 | 37.96 |
| `<ref>text</ref><box>(123,456),(133,500)</box>`★ [3] | 91.43 | 79.56 | 69.62 | 81.37 | 57.81 | 39.67 |
| `<ref>text</ref><box>[253, 231, 733, 787]</box>`★ [6] | 88.24 | 78.17 | 56.32 | 80.18 | 56.16 | 40.82 |
| LayTokenLLM-llama2-7B◇ (Ours) | 90.13 | 76.10 (67.39) | 67.60 (73.39) | 79.98 | 56.30 | 36.59 |
| LayTokenLLM-7B★ (Ours) | 92.03 | 78.72 (69.47) | 73.79 (71.03) | 81.50 | 72.81 | 49.72 |
| LayTokenLLM-8B△ (Ours) | 92.20 | 81.62 (70.96) | 78.30 (75.35) | 85.11 | 74.31 | 52.00 |
Table 2. Comparison with LLMs that integrate layout information. Symbols $\diamond$, $\star$ and $\triangle$ represent the LLM backbones used: Llama2-7B, Qwen1.5-7B and Llama3-8B. Methods marked with $\star$ are trained identically to LayTokenLLM. ($\cdot$) shows F1-scores on uncleaned FUNSD and CORD, as used in DocLLM [44]. 'Bold' means the best in our series, while 'Underline' marks the best among all compared methods.
| Models | SIBR | FUNSD | CORD | DocVQA |
|---|---|---|---|---|
| QwenVL-7B [3] | 21.65 | 47.09 | 30.00 | 65.10 |
| InternVL2-8B [7] | 68.39 | 75.84 | 79.88 | 91.66 |
| TextMonkey-7B [27] | 51.30 | 65.49 | 67.54 | 66.70 |
| LayTokenLLM-7B | 92.03 | 78.72 | 73.79 | 81.50 |
| LayTokenLLM-8B | 92.20 | 81.62 | 78.30 | 85.11 |
+ +Table 3. Comparison with MLLMs on single-page document datasets. 'Bold' means the best in our series, while 'Underline' marks the best among all compared methods. + +
| Models | MP-DocVQA | DUDE |
|---|---|---|
| LongVA-7B [51] | 60.80 | 38.37 |
| Idefics3-8B [21] | 67.15 | 38.65 |
| LLaVA-NeXT-Interleave-7B [23] | 44.87 | 28.03 |
| InternVL2-8B [6] | 68.00 | 37.00 |
| MPLUG-DocOwl2-8B [15] | 69.42 | 46.77 |
| LayTokenLLM-7B | 72.81 | 49.72 |
| LayTokenLLM-8B | 74.31 | 52.00 |
Table 4. Comparison with MLLMs on multi-page document datasets.

sets of FUNSD and CORD provided by LayoutLLM [31] are used. SIBR's test set consists of 400 images, annotated with entity instances and links, to challenge visual information extraction models. The FUNSD dataset features a test collection of 50 form images, each meticulously labeled with entities such as headers, questions, answers, and others, complemented by annotations for entity linking. The CORD dataset encompasses a test suite of 100 receipt images, each enriched with annotations spanning 30 distinct entity categories, including but not limited to tax amounts and total prices. Following LayoutLLM [31], the VIE datasets are transformed into a question-answering format, and the QA for both the DocVQA and VIE tasks is evaluated by ANLS [32]. For the multi-page document understanding task, our experiments test on MP-DocVQA and DUDE, which are widely used for multi-page document understanding. Following the evaluation metric settings of the original datasets, MP-DocVQA is evaluated by ANLS, while DUDE adopts its own modified version of ANLS. The hyperparameters during inference (e.g., top-$k$, beam search, etc.) are set to their default values.

# 4.4. Main Results

# 4.4.1 Effectiveness Comparison

Comparison with LLMs combined with layout information is illustrated in Tab. 2. It can be seen that LLM variants that incorporate layout information consistently outperform plain-text models on all document comprehension tasks, confirming that layout is crucial for document understanding. Moreover, our method achieves competitive results in single-page document VQA (leading in 2 subtasks and with a higher average compared to other methods using the same LLM). Notably, LayTokenLLM outperforms other methods by a large margin in multi-page document VQA (more than $10\%$ improvement among the marked $\star$ approaches).
We believe the more significant improvement on multi-page document VQA arises because, in single-page documents, most cases do not exceed the trained maximum position ID. Consequently, the
| Text and Layout Format | FLOPs/MACs ↓ | SP Doc VQA ANLS Avg ↑ | MP Doc VQA ANLS Avg ↑ |
|---|---|---|---|
| Plain Text (w/o Layout) | 7.95/3.98 | 46.33 | 38.07 |
| `{text:"text",Box:[123, 456, 133, 500]}` [13] | 28.76/14.57 | 80.49 | 46.96 |
| `<ref>text</ref><box>(123,456),(133,500)</box>` [3, 27] | 28.69/14.34 | 80.50 | 48.74 |
| `<ref>text</ref><box>[253, 231, 733, 787]</box>` [6] | 32.81/16.40 | 75.73 | 48.49 |
| text+layout_token (LayTokenLLM) | 9.32/5.36 | 81.51 | 61.27 |
impact of additional layout information on the LLM can be largely alleviated through further fine-tuning. In contrast, in multi-page documents with extensive context that require the allocation of numerous position IDs, the introduction of additional position IDs for layout information may exacerbate the challenges associated with long-context processing. Our LayTokenLLM demonstrates remarkable performance by effectively circumventing the need for extra position IDs dedicated to layout information, underscoring its efficiency and superiority in handling such complex scenarios. Furthermore, experiments with different LLM backbone initializations consistently achieve superior results across all benchmarks, substantiating that LayTokenLLM can adapt to various LLMs.

Comparison with MLLMs is shown in Tab. 3 and Tab. 4. Considering the distinct advantages of existing MLLMs in both single-page and multi-page document understanding tasks, representative works in each task are selected for comparison. It can be seen that LayTokenLLM achieves performance comparable to the best model, InternVL2-8B, across most single-page tasks. Particularly in challenging scenarios like SIBR, which covers difficult-to-recognize text, LayTokenLLM achieves $92.20\%$ compared to InternVL2-8B's $68.39\%$, showcasing a significant advantage that is attributed to the enhanced preservation of the textual and layout information of the document. Furthermore, in multi-page document understanding, LayTokenLLM exceeds both InternVL2 and MPLUG-DocOwl2 by over $5\%$ on the DUDE dataset. This superiority may stem from the fact that MLLMs often compress images into fewer tokens for multi-page documents, which results in the loss of textual information. In contrast, LayTokenLLM retains a greater proportion of text, enhancing document representation and discernment.

# 4.4.2 Efficiency Comparison

Tab. 5 presents a comparative analysis of Layout-as-Token methods in terms of efficiency and performance.
Compared to the methods with a comparable number of parameters, LayTokenLLM demonstrates superior performance in both single-page and multi-page document understanding tasks while exhibiting better efficiency. Notably, due to its lightweight design, our LayTokenLLM exhibits a

Table 5. Comparison with Layout-as-Token methods on the multi-page document (MP Doc) understanding tasks, all initialized from Qwen1.5-7B-Chat. 'FLOPs/MACs' denotes the floating point operations (FLOPs) and multiply-accumulate operations (MACs) on DocVQA, which are broadly used to measure computational complexity and efficiency.
| # | Layout Tokenizer | LayPosID | NTLP | SP Doc ANLS Avg | MP Doc ANLS Avg |
|---|---|---|---|---|---|
| 0 | | | | 80.07 | 50.09 |
| 1 | ✓ | | | 78.89 | 58.22 |
| 2 | ✓ | ✓ | | 79.27 | 60.50 |
| 3 | ✓ | ✓ | ✓ | 81.51 | 61.27 |
+ +Table 6. Ablation study on single-page document (SP Doc) and multi-page document (MP Doc) understanding tasks. LayPosID represents the positional encoding scheme for our Layout Token. + +low processing time that is comparable to plain-text-only input, and less than half that of alternative methods integrating layout information. These results affirm that LayTokenLLM is both effective and efficient. + +# 4.5. Ablation Study + +To evaluate the effectiveness of the proposed Layout Token and pre-training objective in the document understanding task, an ablation study is conducted (see Tab. 6). + +Initial Baseline. The #0 baseline disables both the Layout Token and the NTLP objective. It feeds uncompressed layout information to the LLM as textual tokens, using the same fine-tuning data as the LayTokenLLM settings. Under this configuration, the baseline achieves a high average performance of $80.07\%$ in single-page document understanding tasks, but performs poorly in multi-page document scenarios because the layout information occupies context space that is critical for text learning and understanding. + +Effect of Layout Token. The proposed Layout Token in LayTokenLLM is generated by our Layout Tokenizer with LayPosID. In #1, the Layout Tokenizer is introduced. Compared to #0, the proposed compression of layout information enables more text information to be learned within a fixed window, leading to significant performance improvements in multi-page tasks. Meanwhile, in single-page document scenarios, there is a slight degradation in performance due to the information loss caused by layout information compression. In #2, the framework extends #1 via LayPosID (our positional encoding scheme), which further eliminates the need for extra layout positional indexing and achieves a $100\%$ T-ratio.
As a result, #2 demonstrates additional performance gains over #1, with a substantial improvement in multi-page document understanding. + +![](images/6bd7fa416005223e5d76c4b19d3b67d633b02cad77ac6678c39c568919306c59.jpg) +(a) Single-page QA + +![](images/8d0489a2c24dfc30cbbc21c3aa37ef5ac85bf4b05c1ad0d13df4b6510eb14227.jpg) +(b) Multi-page QA + +![](images/7a4b29345b7cce47f631fa3071e9b19a76125c4820d7f1950f5f9371bf2ebb4d.jpg) +Figure 4. Qualitative results on (a) single-page and (b) multi-page document QA, where “Qwen1.5-7B (Text+Layout)” is trained with the same data and LLM as LayTokenLLM-7B, but employs the normal text-and-layout format (“text, [123, 456, 133, 500]”) instead of the Layout Token. The yellow highlights denote the relevant areas or keys for QA, while the green highlights indicate the correct answers. (c) Distribution of ANLS with respect to the page of the posed questions on MP-DocVQA. (d) Comparison of layout-related performance using the single-page document dataset, DocVQA. + +Effect of NTLP Objective. Compared with #2, #3 further incorporates NTLP, which employs a next interleaved text-and-layout prediction task. The objective enhances both text and layout representation learning, as well as their interconnections. Performance improvements are observed in both single-page and multi-page document understanding tasks, with increases of $2.2\%$ and $0.8\%$, respectively. + +Overall, the ablation study confirms the effectiveness of the Layout Token and the NTLP pre-training objective. + +# 4.6. Qualitative Results + +To further study the effectiveness of our method, two examples from single-page and multi-page document QA scenarios, together with a statistical analysis related to page numbers, are presented in Fig. 4. In the context of key-value QAs that rely on spatial layouts, the Qwen1.5-7B model, which integrates standard text and layout formats, can accurately respond on single-page documents (Fig. 4(a)) but exhibits answer confusion on multi-page documents (Fig. 4(b)).
In contrast, LayTokenLLM achieves correct reasoning on both single-page and multi-page documents. We attribute the confusion on multi-page documents mainly to the position-ID overhead introduced by incorporating layout information, which leads to long-context issues. Therefore, we further conduct a statistical analysis of performance with respect to the page number of the posed questions, as depicted in Fig. 4(c). The performance of the Qwen1.5-7B model with direct integration of layout information declines significantly with an increasing number of pages. In contrast, our LayTokenLLM exhibits a marked performance advantage as pages increase, highlighting its superiority, especially in understanding long-context documents. Moreover, LayTokenLLM's layout representation performance is further evaluated under conditions excluding the impact of position-ID overhead (a short-context scenario), using the "table/list" and "layout" subsets of the DocVQA dataset (see Fig. 4(d)). The results show that LayTokenLLM not only avoids negative impacts but also improves results compared with Qwen1.5-7B (Text+Layout), demonstrating its effectiveness in re-expressing layout information. Overall, LayTokenLLM ensures comprehensive text learning while clearly preserving layout information, leading to a more complete document understanding. + +# 5. Limitations + +Although the proposed Layout Token demonstrates that LayTokenLLM can effectively address text-dense documents with rich layout information, it may overlook certain graphical elements, such as charts and icons. Additionally, although NTLP pre-training has been shown to enhance document understanding, future work could explore more granular tasks, such as fine-grained layout relationship prediction. Further research may focus on equipping LayTokenLLM with these capabilities. + +# 6.
Conclusion + +We propose LayTokenLLM, which incorporates a simple yet effective Layout Token to ensure comprehensive learning of text content while alleviating the long-context issues introduced by layout information. Furthermore, an interleaved text and layout token next prediction pre-training objective is utilized to enhance cross-modal prediction and relational learning between the text and layout modalities. Extensive experiments demonstrate the effectiveness of LayTokenLLM across diverse benchmarks for both single-page and multi-page document understanding. + +# References + +[1] Gpt-4v(ision) system card. 2023. 3 +[2] Srikar Appalaraju, Bhavan Jasani, and Bhargava Urala Kota. DocFormer: End-to-end transformer for document understanding. In ICCV, pages 4171-4186, 2021. 3 +[3] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023. 2, 6, 7 +[4] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. 2023. 3, 5 +[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, pages 1877-1901. Curran Associates, Inc., 2020.
3 +[6] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, and Jifeng Dai. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv preprint arXiv:2312.14238, 2023. 2, 6, 7 +[7] Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024. 3, 6 +[8] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185-24198, 2024. 3, 5 +[9] Lei Cui, Yiheng Xu, Tengchao Lv, and Furu Wei. Document ai: Benchmarks, models and applications. arXiv preprint arXiv:2111.08609, 2021. 1 +[10] Cheng Da, Chuwei Luo, Qi Zheng, and Cong Yao. Vision grid transformer for document layout analysis. In ICCV, pages 19462-19472, 2023. 3 +[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 5, 6 +[12] Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu, and Liqing Zhang. XYLayoutLM: Towards layout-aware multimodal networks for visually-rich document understanding. In CVPR, pages 4583-4592, 2022. 3 +[13] Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. Icl-d3ie: In-context learning with diverse demonstrations updating for document information extraction. ICCV, 2023.
2, 3, 5, 6, 7 +[14] Anwen Hu, Haiyang Xu, Jiabo Ye, Ming Yan, Liang Zhang, Bo Zhang, Chen Li, Ji Zhang, Qin Jin, Fei Huang, et al. mplug-docowl 1.5: Unified structure learning for OCR-free document understanding. arXiv preprint arXiv:2403.12895, 2024. 3 +[15] Anwen Hu, Haiyang Xu, Liang Zhang, Jiabo Ye, Ming Yan, Ji Zhang, Qin Jin, Fei Huang, and Jingren Zhou. mplug-docowl2: High-resolution compressing for ocr-free multi-page document understanding. arXiv preprint arXiv:2409.03420, 2024. 3, 6 +[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 5 +[17] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. Layoutlmv3: Pre-training for document ai with unified text and image masking. In ACM Multimedia, 2022. 3 +[18] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. Layoutlmv3: Pre-training for document ai with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4083-4091, New York, NY, USA, 2022. Association for Computing Machinery. 3 +[19] Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. Funsd: A dataset for form understanding in noisy scanned documents, 2019. 1, 5 +[20] Marcel Lamott, Yves-Noel Weweler, Adrian Ulges, Faisal Shafait, Dirk Krechel, and Darko Obradovic. Lapdoc: Layout-aware prompting for documents. 2024. 1, 2, 3 +[21] Hugo Laurençon, Andrés Marafioti, Victor Sanh, and Léo Tronchon. Building and better understanding vision-language models: insights and future directions. arXiv preprint arXiv:2408.12637, 2024. 6 +[22] Chenliang Li, Bin Bi, and Ming Yan. StructuralLM: Structural pre-training for form understanding. In ACL, 2021. 3 +[23] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava next-interleave: Tackling multi-image, video, and 3d in large multimodal models.
arXiv preprint arXiv:2407.07895, 2024. 3, 6 +[24] Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jianke Zhu, and Lei Zhang. Tokenpacker: Efficient visual projector for multimodal llm. arXiv preprint arXiv:2407.02392, 2024. 3 +[25] Yulin Li, Yuxi Qian, Yuechen Yu, Xiameng Qin, Chengquan Zhang, Yan Liu, Kun Yao, Junyu Han, Jingtuo Liu, and Errui Ding. StrucTexT: Structured text understanding with multimodal transformers. In ACM Multimedia, pages 1912-1920, 2021. 3 +[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances in Neural Information Processing Systems, pages 34892-34916. Curran Associates, Inc., 2023. 3 +[27] Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, and Xiang Bai. Textmonkey: An OCR-free large multimodal model for understanding document. arXiv preprint arXiv:2403.04473, 2024. 2, 6, 7 +[28] Jinghui Lu, Haiyang Yu, Yanjie Wang, Yongjie Ye, Jingqun Tang, Ziwei Yang, Binghong Wu, Qi Liu, Hao Feng, Han Wang, Hao Liu, and Can Huang. A bounding box is worth one token: Interleaving layout and text in a large language model for document understanding, 2024. 1, 2, 3, 4, 6 +[29] Chuwei Luo, Guozhi Tang, Qi Zheng, Cong Yao, Lianwen Jin, Chenliang Li, Yang Xue, and Luo Si. Bi-vldoc: Bidirectional vision-language modeling for visually-rich document understanding. arXiv preprint arXiv:2206.13155, 2022. 3 +[30] Chuwei Luo, Changxu Cheng, Qi Zheng, and Cong Yao. GeoLayoutLM: Geometric pre-training for visual information extraction. In CVPR, pages 7092-7101, 2023. 3 +[31] Chuwei Luo, Yufan Shen, Zhaoqing Zhu, Qi Zheng, Zhi Yu, and Cong Yao. LayoutLLM: Layout instruction tuning with large language models for document understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15630-15640, 2024. 2, 3, 5, 6 +[32] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In WACV, pages 2200-2209, 2021.
1, 5, 6 +[33] Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. CORD: A consolidated receipt dataset for post-OCR parsing. In Workshop on Document Intelligence at NeurIPS 2019, 2019. 1, 5 +[34] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023. 2 +[35] Vincent Perot, Kai Kang, Florian Luisier, Guolong Su, Xiaoyu Sun, Ramya Sree Boppana, Zilong Wang, Zifeng Wang, Jiaqi Mu, Hao Zhang, Chen-Yu Lee, and Nan Hua. Lmdx: Language model-based document information extraction and localization, 2024. 2, 3 +[36] Yufan Shen, Chuwei Luo, Zhaoqing Zhu, Yang Chen, Qi Zheng, Zhi Yu, Jiajun Bu, and Cong Yao. Proctag: Process tagging for assessing the efficacy of document instruction data. arXiv preprint arXiv:2407.12358, 2024. 3 +[37] Woomin Song, Seunghyuk Oh, Sangwoo Mo, Jaehyung Kim, Sukmin Yun, Jung-Woo Ha, and Jinwoo Shin. Hierarchical context merging: Better long context understanding for pre-trained llms. arXiv preprint arXiv:2404.10308, 2024. 2 +[38] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024. 4 +[39] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 3 + +[40] Qwen Team. Introducing qwen1.5, 2024. 3, 5, 6 +[41] Rubén Tito, Dimosthenis Karatzas, and Ernest Valveny. Hierarchical multimodal transformers for multipage docvqa. Pattern Recognition, 144:109834, 2023. 5 +[42] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971, 2023. 3 +[43] Jordy Van Landeghem, Ruben Tito, Łukasz Borchmann, Michal Pietruszka, Pawel Joziak, Rafal Powalski, Dawid Jurkiewicz, Mickaël Coustaty, Bertrand Anckaert, Ernest Valveny, et al. Document understanding dataset and evaluation (dude). In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19528-19540, 2023. 5 +[44] Dongsheng Wang, Natraj Raman, Mathieu Sibue, Zhiqiang Ma, Petr Babkin, Simerjot Kaur, Yulong Pei, Armineh Nourbakhsh, and Xiaomo Liu. DocLLM: A layout-aware generative language model for multimodal document understanding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8529-8548, Bangkok, Thailand, 2024. Association for Computational Linguistics. 1, 2, 3, 6 +[45] Yiheng Xu, Minghao Li, Lei Cui, and Shaohan Huang. LayoutLM: Pre-training of text and layout for document image understanding. In KDD, pages 1192-1200, 2020. 3 +[46] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021, 2021. 3 +[47] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In ACL, 2021. 3 +[48] Zhibo Yang, Rujiao Long, Pengfei Wang, Sibo Song, Humen Zhong, Wenqing Cheng, Xiang Bai, and Cong Yao. Modeling entities as semantic points for visual information extraction in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 5 +[49] Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, and Fei Huang.
mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13040–13051, 2024. 3 +[50] Yuechen Yu, Yulin Li, Chengquan Zhang, Xiaoqiang Zhang, Zengyuan Guo, Xiameng Qin, Kun Yao, Junyu Han, Errui Ding, and Jingdong Wang. StrucTexTv2: Masked visual-textual prediction for document image pre-training. In ICLR, 2023. 3 +[51] Peiyuan Zhang, Kaichen Zhang, Bo Li, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, and Ziwei Liu. Long context transfer from language to vision. arXiv preprint arXiv:2406.16852, 2024. 3, 6 +[52] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. In The Twelfth International Conference on Learning Representations, 2024. 3 \ No newline at end of file diff --git a/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/images.zip b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4d7e122a3b9353e8ddbeb52f6304a2b94c3177f1 --- /dev/null +++ b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8246234eafc0a3a325d3b4f48d1cbb97246eabc636f13afff20ea17af8a6a45e +size 504889 diff --git a/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/layout.json b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models for Document Understanding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..41ae6ae33b1981bdca317ba8fdda0b082f4de42a --- /dev/null +++ b/CVPR/2025/A Simple yet Effective Layout Token in Large Language Models
for Document Understanding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ed015e4e06ebff8fe6e4f835f04e29afaa3404eda88c0fccf697c1820861d1d +size 331476 diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_content_list.json b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..97c99bd726249139665c7beb0b3ff4aa1e7e33d0 --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ef26ff652373c9f67015a60b31a3f93f0fb037d370ff13302e7e35d68432049 +size 87036 diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_model.json b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a900cc3e471c8192b69aede351ab1386a42b5a0a --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36c53a1e3e9548d6f4eee0265e1d752ba332615a931e1d6e6ebcdd73ae813ed1 +size 108511 diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_origin.pdf b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_origin.pdf new file 
mode 100644 index 0000000000000000000000000000000000000000..38a3781f714f841244a9ae534677cd2268d29940 --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/81ce4c52-dac7-4add-aaa7-e959289e0870_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a25cb9d12557372fea53653a92107c2af0eff8003049fb5a9f5dac4eb314820e +size 4298795 diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/full.md b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..eff7b70eb45e0052af98cde51ea96c3eb5d1f142 --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/full.md @@ -0,0 +1,343 @@ +# A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs + +Wangbo Zhao $^{1*}$ Yizeng Han $^{2*}$ Jiasheng Tang $^{2,3}$ Zhikai Li $^{1}$ Yibing Song $^{2,3}$ +Kai Wang $^{1\dagger}$ Zhangyang Wang $^{4}$ Yang You $^{1\dagger}$ $^{1}$ National University of Singapore $^{2}$ DAMO Academy, Alibaba Group + $^{3}$ Hupan Lab $^{4}$ The University of Texas at Austin + +# Abstract + +Vision-language models (VLMs) have shown remarkable success across various multi-modal tasks, yet large VLMs encounter significant efficiency challenges due to processing numerous visual tokens. A promising approach to accelerating large VLM inference is using partial information, such as attention maps from specific layers, to assess token importance and prune less essential tokens. 
However, our study reveals three key insights: (i) Partial attention information is insufficient for accurately identifying critical visual tokens, resulting in suboptimal performance, especially at low token retention ratios; (ii) Global attention information, such as the attention map aggregated across all layers, more effectively preserves essential tokens and maintains comparable performance under aggressive pruning. However, the attention maps from all layers require a full inference pass, which increases computational load and is therefore impractical in existing methods; and (iii) The global attention map aggregated from a small VLM closely resembles that of a large VLM, suggesting an efficient alternative. Based on these findings, we introduce a training-free method, Small VLM Guidance for accelerating Large VLMs (SGL). Specifically, we employ the attention map aggregated from a small VLM to guide visual token pruning in a large VLM. Additionally, an early exiting mechanism is developed to fully use the small VLM's predictions, dynamically invoking the larger VLM only when necessary, yielding a superior trade-off between accuracy and computation. Extensive evaluations across 11 benchmarks demonstrate the effectiveness and generalizability of SGL, achieving up to $91\%$ pruning ratio for visual tokens while retaining competitive performance. The code is publicly available at https://github.com/NUS-HPC-AI-Lab/SGL. + +# 1. Introduction + +Building on the notable success of language models (LMs) [3, 20, 49, 55], vision-language models (VLMs) have become a focal point of research. Most current VLMs [7, 29, 30, 50, 57, 68] integrate visual tokens from a vision encoder alongside textual tokens within an LM. However, such integration introduces significant inference overhead due to the sheer volume of visual tokens. + +Visual token compression presents a compelling solution to improving the inference efficiency of VLMs. 
Recent works [26, 33, 58] aim to condense the information in visual tokens into fewer tokens or parameters, but generally require training, introducing additional overhead. Training-free alternatives, such as token merging in visual encoders [2, 44], offer a lighter solution but may overlook essential vision-language interactions. To bridge this gap, the methods of [5, 63] employ partial information from the LM, e.g., attention maps from specific layers, to prune less important visual tokens. These approaches can ideally integrate seamlessly with existing VLMs without fine-tuning, showing promising effectiveness. + +But how effective is visual token pruning across varying retention levels? To investigate this, we conduct an empirical study using FastV [5], a representative method that assesses visual token importance based on a single-layer attention map from the LM. For comparison, we also include an oracle method aggregating attention maps from all LM layers. As shown in Figure 1 (a), FastV struggles to maintain accuracy when the token retention ratio falls below $35\%$, whereas the oracle method remains competitive even with only $9\%$ of visual tokens retained. This highlights that the global information from the aggregated attention map accurately identifies essential visual tokens for VLM prediction. + +However, retrieving attention maps from all layers requires a full inference pass. One cannot obtain a precise assessment of token importance before inference. This accounts for FastV using a single layer's attention as a proxy. To accurately distinguish essential vision tokens with minimal computation, we innovatively resort to a small model's
FastV [5] prunes visual tokens using the attention map from a single layer, whereas FastV-oracle employs the aggregated attention map across all layers during inference. This approach allows for precise pruning of less significant visual tokens, maintaining performance with only $9\%$ of the tokens retained. (b) The small VLM exhibits a token retention pattern similar to the 26B model, preserving essential visual tokens relevant to the answer, regardless of the answer correctness. We drop the $80\%$ least significant visual tokens and mark those tokens with high attention scores. Thumbnails employed in InternVL [7] are presented in the left corner. (c) The performance gap between small and large VLMs is minimal compared to their computation disparity. The 2B model achieves competitive performance with significantly fewer FLOPs compared to the 26B one. This also validates the soundness of using a small model to guide early exiting and token pruning in the large one. + +![](images/75bab027fcba76ba113ae1631960fb441cf35a74854fab464f65144914577d47.jpg) + +![](images/25ccdf702829693cfb6a7b2892d38c02ff3258cb7d14c4e5332c33704be91a0d.jpg) + +global attention maps aggregated from all layers. As shown in Figure 1 (b), the token retention pattern derived from the small VLM closely mirrors that of the large VLM and consistently preserves tokens relevant to the answer, regardless of output correctness. This suggests that the small VLM's overall attention map serves as a more precise proxy to guide visual token pruning in large VLMs.
This global attention map enables the calculation of vision tokens' importance scores based on their interactions with prompt and generated tokens. The scores are then used to rank the visual tokens. Subsequently, the ranking results provide guidance for pruning less important visual tokens in the large VLM, significantly reducing computation while preserving the information essential for a correct answer. + +It can be easily observed that the additional small VLM introduces some computational overhead. Fortunately, we notice that the performance gap between small and large VLMs is relatively minimal compared to their computation disparity (Figure 1 (c)). In other words, most "easy" questions could be correctly answered by the small VLM. This observation prompts us to make full use of the computation spent by the small VLM. To this end, we introduce Small VLM Early Exiting (SEE). Specifically, after obtaining the small VLM's prediction, we evaluate its decision score and directly terminate the inference pipeline without activating
Advancements in language models (LMs) [3, 20, 24, 49, 55] have driven significant progress in vision-language models (VLMs) [7, 29, 30, 50, 68]. Most VLMs use a visual encoder, such as ViT [9], to extract visual tokens connected to an LM via a projection layer, significantly increasing token length and leading to high computational and memory demands. For instance, LLaVa [30] processes 576 tokens for a $336 \times 336$ image resolution, while an enhanced version [29] processes 2304 tokens at higher resolutions. InternVL [7] introduces up to 10,496 tokens through dynamic high-resolution techniques, and video understanding models [28, 54, 64] handle thousands of tokens across frames. + +Our method addresses the token overhead by using a small VLM to guide token reduction in larger VLMs, seamlessly applying to such models. + +Visual token compression. Compressing visual tokens is a promising approach to reduce computational and memory costs in transformer-based vision models. Methods such as token pruning [27, 42], token merging [2, 6], and token skipping [38, 65, 66] have been extensively studied for tasks including visual perception and generation. In VLMs, methods like Q-Former [26], token distillation [58], and parameter alignment [33] compress visual tokens but often require additional training. Training-free techniques [2, 5, 44, 63] propose merging or pruning tokens based on LM attention maps, but approaches that merge tokens in the visual encoder [2, 44] may miss key vision-language interactions. Other methods [5, 63] prune tokens using attention maps from specific LM layers, which may fail to accurately capture essential tokens at low retention levels. + +In comparison, this work proposes leveraging the aggregated attention map across all layers of a small VLM to comprehensively rank token importance, guiding more effective pruning in a larger VLM. + +Model uncertainty/confidence estimation. 
Assessing model uncertainty/confidence is essential for reliable predictions [8, 10, 14, 18, 32, 40, 56]. Recent work on large models focuses on estimating the confidence of generated text through information-based [12, 22], ensemble-based [34], density-based [43, 59], and reflexivity-based [21] methods. High-uncertainty content is often reviewed or processed by more advanced models for increased reliability [16, 62]. We recommend [11, 15, 17] for a more comprehensive review.

Our SEE measures the confidence of the small VLM's predictions to determine when to activate the larger VLM, improving the trade-off between performance and efficiency. In addition, a consistency criterion is proposed to facilitate the early-exiting decision-making procedure.

# 3. Small Guides Large (SGL) in VLMs

We first provide the preliminaries of VLMs in Section 3.1. Then, the proposed training-free Small VLM-Guided visual token Pruning (SGP) and Small VLM Early Exiting (SEE) techniques are detailed in Sections 3.2 and 3.3, respectively.

# 3.1. Preliminary of Vision-language Models

VLMs [7, 29, 30, 50, 68] primarily follow a framework where a vision encoder [9, 41, 47] converts an image into a sequence of visual tokens. These tokens are then combined with textual prompt tokens and fed into a language model [20, 48, 49, 55] to generate responses.

Specifically, an input image $\mathbf{I}$ is encoded into visual tokens by a vision encoder model VM:

$$
\mathbf {x} _ {\mathrm {I}} = \operatorname {V M} (\mathbf {I}) \in \mathbb {R} ^ {N _ {\mathrm {I}} \times C}, \tag {1}
$$

where $N_{\mathrm{I}}$ represents the image token number. The associated textual prompts are tokenized into prompt tokens $\mathbf{x}_{\mathrm{T}} \in \mathbb{R}^{N_{\mathrm{T}} \times C}$. Here, $C$ denotes the channel dimension of tokens. As observed in [5], we usually have $N_{\mathrm{I}} \gg N_{\mathrm{T}}$, especially for high-resolution images.
The tokens $\mathbf{x}_{\mathrm{I}}$ and $\mathbf{x}_{\mathrm{T}}$ are concatenated and fed into a language model LM to generate responses auto-regressively:

$$
\mathbf {p} _ {\mathrm {G}} ^ {i} = \operatorname {L M} \left(\mathbf {x} _ {\mathrm {I}}, \mathbf {x} _ {\mathrm {T}}, \mathbf {x} _ {\mathrm {G}} ^ {1: i - 1}\right) \in \mathbb {R} ^ {C _ {\mathrm {T}}}, \tag {2}
$$

where $\mathbf{p}_{\mathrm{G}}^{i}$ denotes the probability distribution over the vocabulary of size $C_{\mathrm{T}}$. The previously generated tokens $\mathbf{x}_{\mathrm{G}}^{1:i-1}$ are used to predict the next token. The probability $\mathbf{p}_{\mathrm{G}}^{i}$ is converted into the token embedding $\mathbf{x}_{\mathrm{G}}^{i}$ via a sampling strategy, such as the argmax operation.

# 3.2. Small VLM-Guided Visual Token Pruning

The inference efficiency of VLMs is greatly impacted by the large number of vision tokens. A promising approach to mitigate this involves pruning less essential visual tokens using attention maps. However, pruning based on a single-layer attention map, as in [5], falls short compared to using an oracle attention map aggregated from all layers (Figure 1 (a)). Yet, obtaining this oracle attention map requires a full, computationally costly inference pass, making it impractical for real-world use. The key challenge is thus:

How can we efficiently acquire a precise attention map for effective visual token pruning?

Based on our finding in Figure 1 (b) that the aggregated attention map of a small VLM closely resembles that of a large VLM, we introduce SGP (Figure 2 (a)): using the small VLM's aggregated attention map as an efficient and precise proxy to guide visual token pruning in a large VLM.

Aggregating attention maps in the small VLM. We initiate inference with a compact vision-language model, $\mathrm{VLM^S}$ (e.g., InternVL-2B [7]), comprising a small vision model, $\mathrm{VM^S}$, and a small language model, $\mathrm{LM^S}$.
This reduced model size significantly cuts computational costs compared to larger VLMs. We input visual tokens $\mathbf{x}_{\mathrm{I}} \in \mathbb{R}^{N_{\mathrm{I}} \times C}$ from the vision model $\mathrm{VM^S}$ and textual prompt tokens $\mathbf{x}_{\mathrm{T}} \in \mathbb{R}^{N_{\mathrm{T}} \times C}$ into $\mathrm{LM^S}$ to generate answers $\mathbf{x}_{\mathrm{G}} \in \mathbb{R}^{N_{\mathrm{G}} \times C}$, where $N_{\mathrm{G}}$ denotes the number of generated tokens.

The inference process of $\mathrm{LM^S}$ involves a pre-filling stage followed by a decoding stage. We update two attention maps $\mathbf{A}^{\mathrm{P}}, \mathbf{A}^{\mathrm{D}}$ for the two stages, respectively.

(i) Pre-filling. In this stage, attention maps are extracted from each layer and head, denoted as $\mathbf{A}_{j,k}^{\mathrm{P}}\in \mathbb{R}^{(N_{\mathrm{I}} + N_{\mathrm{T}})\times (N_{\mathrm{I}} + N_{\mathrm{T}})}$, where $j$ and $k$ denote the layer and head index, respectively. Due to the causal nature of attention in $\mathrm{LM^S}$, $\mathbf{A}_{j,k}^{\mathrm{P}}$ is a lower triangular matrix. We specifically focus on the attention scores that visual tokens receive from prompt tokens. Therefore, we retrieve the bottom-left block

$$
\mathbf {A} _ {j, k} ^ {\mathrm {P}} \in \mathbb {R} ^ {(N _ {\mathrm {I}} + N _ {\mathrm {T}}) \times (N _ {\mathrm {I}} + N _ {\mathrm {T}})} \Rightarrow \tilde {\mathbf {A}} _ {j, k} ^ {\mathrm {P}} \in \mathbb {R} ^ {N _ {\mathrm {T}} \times N _ {\mathrm {I}}}. \tag {3}
$$

We then sum up the $N_{\mathrm{T}}$ scores for each image token in $\tilde{\mathbf{A}}_{j,k}^{\mathrm{P}}$,
This global attention map is used to rank visual tokens and guide the visual token pruning in a large VLM. (b) Aggregation of attention maps in SGP. We aggregate the attention score of visual tokens received from prompt tokens and generated tokens across all heads and layers in the small LM. Higher scores indicate greater significance. (c) Inference with Small VLM Early Exiting (SEE). When the early exiting decision score from the small VLM is sufficient, the larger VLM will not be invoked. + +![](images/f150bb84d5de3ce0272656494139baaabf6ffc68cbe8f720d54df8ffa9cc1ebd.jpg) + +producing $\bar{\mathbf{A}}_{j,k}^{\mathrm{P}}\in \mathbb{R}^{N_1}$ . During the forward pass, these attention maps are aggregated across layers and heads: + +$$ +\mathbf {A} ^ {\mathrm {P}} = \sum_ {j = 1} ^ {L} \sum_ {k = 1} ^ {H} \bar {\mathbf {A}} _ {j, k} ^ {\mathrm {P}}, \tag {4} +$$ + +where $L$ and $H$ denote the number of layers and heads in $\mathrm{LM}_S$ , respectively. Note that we progressively update $\mathbf{A}^{\mathrm{P}}$ in an accumulative manner without caching all $\mathbf{A}_{j,k}^{\mathrm{P}}$ . This procedure is illustrated in Figure 2(b), left. + +(ii) Decoding. Attention scores of $N_{\mathrm{I}}$ visual tokens from the $i$ -th generated token can be denoted as $\mathbf{A}_{i,j,k}^{\mathrm{D}} \in \mathbb{R}^{N_{\mathrm{I}}}$ for head- $k$ in layer- $j$ . These scores are accumulated as + +$$ +\mathbf {A} ^ {\mathrm {D}} = \sum_ {i = 1} ^ {N _ {\mathrm {G}}} \sum_ {j = 1} ^ {L} \sum_ {k = 1} ^ {H} \mathbf {A} _ {i, j, k} ^ {\mathrm {D}}. \tag {5} +$$ + +The aggregation in the decoding phase is visualized in Figure 2(b), right. Upon inference completion, we calculate the overall attention scores for vision tokens via $\mathbf{A} = \mathbf{A}^{\mathrm{P}} + \mathbf{A}^{\mathrm{D}}$ . This comprehensive assessment $\mathbf{A}$ is further employed to rank and prune visual tokens. + +Visual token pruning in the large VLM. 
To improve the efficiency of a larger model $\mathrm{VLM}^{\mathrm{L}}$ (e.g., InternVL-26B [7]), we prune less important visual tokens based on the ranking obtained from $\mathbf{A}$. Specifically, the same image is fed into its vision model $\mathrm{VM}^{\mathrm{L}}$, producing visual tokens. Since $\mathrm{VLM}^{\mathrm{S}}$ and $\mathrm{VLM}^{\mathrm{L}}$ share the same architecture, $\mathrm{VM}^{\mathrm{L}}$ outputs the same number of visual tokens. These tokens, combined with prompt tokens, are fed into $\mathrm{LM}^{\mathrm{L}}$. Inspired by FastV [5], we retain only the top $R\%$ of important visual tokens in an intermediate layer of $\mathrm{LM}^{\mathrm{L}}$, as determined by the ranking. Leveraging the comprehensive importance scores from the small VLM, we can apply a low retention ratio (e.g., $5\%$) at an early layer (e.g., the 2nd layer), significantly reducing the computational cost of $\mathrm{LM}^{\mathrm{L}}$.

# 3.3. Small VLM Early Exiting

While SGP effectively reduces the token load in the large VLM, incorporating a small VLM does add some overhead relative to using the large VLM alone. Fortunately, the performance gap between small and large VLMs is relatively minor compared to their computational difference, indicating that the small VLM's outputs are often quite competitive. This inspires us to devise Small VLM Early Exiting (SEE), which maximizes the utility of the small VLM by assessing its outputs. For some "easy" questions, the inference pipeline exits early after obtaining the small VLM's prediction, further enhancing inference efficiency. We demonstrate its pipeline in Figure 2 (c).

During small VLM inference, the token generation probability can be recorded to estimate the answer confidence.
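Looking back at Section 3.2, the aggregation and pruning steps can be condensed into a short sketch. The following is a toy NumPy illustration with random stand-in attention maps and hypothetical sizes, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
L_layers, H_heads = 4, 2      # toy layer/head counts for the small LM
N_I, N_T, N_G = 16, 5, 3      # toy numbers of visual, prompt, and generated tokens

# Pre-filling (Eq. 4): sum the attention each visual token receives from
# prompt tokens, accumulated over every layer and head.
A_prefill = np.zeros(N_I)
for _ in range(L_layers * H_heads):
    attn = rng.random((N_I + N_T, N_I + N_T))  # stand-in for one head's attention map
    A_prefill += attn[N_I:, :N_I].sum(axis=0)  # bottom-left block (Eq. 3), summed over prompt rows

# Decoding (Eq. 5): accumulate each generated token's attention over visual tokens.
A_decode = np.zeros(N_I)
for _ in range(N_G * L_layers * H_heads):
    A_decode += rng.random(N_I)                # stand-in for one decoding-step attention row

# Overall importance A = A^P + A^D and top-R% retention for the large LM.
A = A_prefill + A_decode
retention = 0.25                               # keep 25% of visual tokens (hypothetical ratio)
k = max(1, int(N_I * retention))
keep = np.sort(np.argsort(A)[-k:])             # indices of the k most important visual tokens
print(k, keep.tolist())
```

In a real pipeline, `attn` would come from the per-head attention outputs of the small LM's forward pass, and the `keep` indices would be applied at an intermediate layer of the large LM.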
A straightforward yet effective method for confidence estimation involves calculating the length-normalized sequence probability [11, 39], which can be expressed as:

$$
\mathcal {S} _ {\text {confidence}} = \exp \left\{\frac {1}{N _ {\mathrm {G}}} \log P \left(\mathbf {x} _ {\mathrm {G}} ^ {1}, \dots, \mathbf {x} _ {\mathrm {G}} ^ {N _ {\mathrm {G}}}\right) \right\}, \tag {6}
$$

where

$$
P \left(\mathbf {x} _ {\mathrm {G}} ^ {1}, \dots, \mathbf {x} _ {\mathrm {G}} ^ {N _ {\mathrm {G}}}\right) = \prod_ {i = 1} ^ {N _ {\mathrm {G}}} P \left(\mathbf {x} _ {\mathrm {G}} ^ {i} \mid \operatorname {L M} ^ {\mathrm {S}} \left(\mathbf {x} _ {\mathrm {I}}, \mathbf {x} _ {\mathrm {T}}, \mathbf {x} _ {\mathrm {G}} ^ {1: i - 1}\right)\right). \tag {7}
$$

In our token pruning scenario, apart from the naive confidence metric $S_{\text{confidence}}$, we further propose a consistency score for making early-exiting decisions. Specifically, we can naturally hypothesize that the small VLM accurately identifies essential visual tokens when it provides a correct

| method | token ratio | TextVQA | ChartQA | DocVQA | GQA | SEED | MMBench | MM-Vet | MME | RC | RC+ | RC-g | score ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| InternVL-26B [7] | 100% | 82.45 | 84.92 | 92.14 | 64.89 | 76.78 | 83.46 | 64.00 | 2270 | 91.24 | 86.67 | 88.44 | 100.00% |
| InternVL-2B [7] | 100% | 73.19 | 76.24 | 85.93 | 61.16 | 71.62 | 72.93 | 43.30 | 1878 | 82.26 | 73.53 | 77.55 | 87.25% |
| 26B w/ ToMe [2] | 64% | 80.22 | 76.24 | 79.51 | 64.49 | 75.60 | 82.74 | 60.10 | 2235 | 84.02 | 78.91 | 80.35 | 94.24% |
| 26B w/ ToMe [2] | 35% | 75.74 | 62.44 | 66.79 | 63.61 | 73.84 | 81.28 | 52.50 | 2178 | 71.08 | 64.97 | 68.08 | 85.20% |
| 26B w/ ToMe [2] | 9% | 51.69 | 28.60 | 28.46 | 57.52 | 65.19 | 73.09 | 37.70 | 1933 | 20.33 | 17.74 | 19.36 | 54.28% |
| 26B w/ FastV [5] | 64% | 82.26 | 85.08 | 92.20 | 64.80 | 76.81 | 83.24 | 63.20 | 2270 | 91.30 | 86.66 | 88.30 | 99.84% |
| 26B w/ FastV [5] | 35% | 75.62 | 71.68 | 68.32 | 61.20 | 71.64 | 78.31 | 45.00 | 2140 | 85.06 | 77.61 | 81.39 | 88.28% |
| 26B w/ FastV [5] | 9% | 43.84 | 26.20 | 26.81 | 44.90 | 54.56 | 62.33 | 31.60 | 1799 | 19.65 | 16.66 | 17.22 | 46.99% |
| 26B w/ SGP (ours) | 64% | 82.41 | 85.04 | 92.12 | 65.07 | 76.71 | 83.30 | 65.60 | 2259 | 91.07 | 86.71 | 88.05 | 100.14% |
| 26B w/ SGP (ours) | 35% | 81.97 | 81.68 | 91.14 | 64.62 | 75.72 | 82.17 | 63.20 | 2258 | 89.38 | 84.35 | 86.07 | 98.36% |
| 26B w/ SGP (ours) | 9% | 78.98 | 72.96 | 87.26 | 62.10 | 72.23 | 75.56 | 52.10 | 2004 | 80.36 | 72.22 | 77.45 | 89.58% |

Table 1. Comparison between SGP and previous visual token pruning methods. "Token ratio" denotes the average ratio of retained visual tokens. "26B" denotes the original InternVL-26B. In "26B w/ SGP (ours)", we employ the attention map aggregated across all layers of InternVL-2B to guide the visual token pruning in InternVL-26B. For a fair comparison, we do not employ SEE in these experiments. The "score ratio" is obtained by calculating the ratio of each score relative to InternVL-26B, followed by averaging these ratios.

answer. Conversely, if the small VLM's answer is correct, a consistent prediction should be obtained when visual tokens are pruned by SGP (Section 3.2). On this basis, we introduce a consistency score $S_{\text{consistency}}$ to measure the consistency of the generation after visual token pruning. A higher score indicates a higher probability that the small VLM yields a correct answer, in which case early exiting is more reliable. Let $\mathrm{LM}^{\mathrm{S'}}$ represent the small language model with pruned visual tokens; the consistency score is obtained by

$$
\mathcal {S} _ {\text {consistency}} = \prod_ {i = 1} ^ {N _ {\mathrm {G}}} P \left(\mathbf {x} _ {\mathrm {G}} ^ {i} \mid \operatorname {L M} ^ {\mathrm {S} ^ {\prime}} \left(\mathbf {x} _ {\mathrm {I}}, \mathbf {x} _ {\mathrm {T}}, \mathbf {x} _ {\mathrm {G}} ^ {1: i - 1}\right)\right). \tag {8}
$$

It is worth noting that the calculation of $S_{\text{consistency}}$ is extremely efficient, because: (i) in Equation 8, the visual tokens $\mathbf{x}_{\mathrm{I}}$, text tokens $\mathbf{x}_{\mathrm{T}}$, and generated tokens $\mathbf{x}_{\mathrm{G}}$ have all been obtained in Section 3.2, so the consistency score can be computed in parallel rather than autoregressively; (ii) a high pruning ratio significantly reduces the computational cost. We find that removing $95\%$ of visual tokens is feasible in practice.
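Given per-token probabilities from the two small-VLM passes, Equations 6-9 reduce to a few lines. This is a toy sketch; the probability arrays and the threshold are made up for illustration:

```python
import numpy as np

# Toy per-token probabilities of the generated answer x_G: p_full from the
# small VLM's original pass, p_pruned from a single parallel (teacher-forced)
# pass with 95% of visual tokens pruned. Values are hypothetical.
p_full = np.array([0.9, 0.8, 0.85])
p_pruned = np.array([0.88, 0.75, 0.8])

N_G = len(p_full)
S_confidence = np.exp(np.log(p_full).sum() / N_G)   # Eq. (6): length-normalized sequence probability
S_consistency = p_pruned.prod()                     # Eq. (8): probability of the same answer after pruning
S = 0.5 * (S_confidence + S_consistency)            # Eq. (9): early-exiting decision score

threshold = 0.6  # hypothetical; in practice tuned to a target early-exiting ratio
exit_early = S > threshold
print(round(float(S), 3), bool(exit_early))
```

Because the generated tokens are already known, the pruned pass scores all of them in one forward call, which is what makes $S_{\text{consistency}}$ cheap to obtain.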
In this scenario, calculating $S_{\text{consistency}}$ requires $< 10\%$ of the initial inference time with the small VLM. Finally, we compute the final early-exiting decision score:

$$
\mathcal {S} = \frac {1}{2} \left(\mathcal {S} _ {\text {confidence}} + \mathcal {S} _ {\text {consistency}}\right). \tag {9}
$$

The inference pipeline exits early at the small VLM when the score is above a predefined threshold. In Section 4, we empirically show that our early-exiting criterion $S$ outperforms other early-exiting criteria, such as quantile [16], entropy [11], and either $S_{\text{consistency}}$ or $S_{\text{confidence}}$ alone.

To sum up, the small VLM in our SGL plays two roles. For any input, it first performs inference, producing

(i) the importance scores $\mathbf{A}$ for vision tokens (SGP);
(ii) an early prediction and the corresponding early-exiting decision score $\mathcal{S}$ (SEE).

If SEE decides to activate the large VLM, SGP prunes a large number of unimportant visual tokens to accelerate the inference of the large VLM.

# 4. Experiment

# 4.1. Experimental Setup

Models. We conduct experiments using InternVL [7], which provides checkpoints for various model sizes, facilitating the exploration of small VLMs in guiding visual token pruning in large VLMs. Specifically, we use InternVL-2B as the small VLM and InternVL-26B as the large VLM by default. We also include Qwen2-VL [50] and LLaVa-OV [25] to evaluate the generalizability of our method. The default setting in experiments is marked in color.

Evaluation benchmarks. We conduct experiments on four VQA benchmarks, including TextVQA [46], ChartQA [36], DocVQA [37], and GQA [19]. To evaluate performance in visual grounding, we include RefCOCO (RC) [60], RefCOCO+ (RC+) [60], and RefCOCOg (RC-g) [35].
Additionally, we assess the model's capability in general multimodal understanding on comprehensive benchmarks such as SEED [23], MMBench [31], MM-Vet [61], and MME [13].

# 4.2. Comparing SGP with Previous Methods

We first validate the effectiveness of our SGP without the early-exiting mechanism. The comparison with representative visual token compression methods, including ToMe [2] and FastV [5], is presented in Table 1, across different average visual token retention ratios. All experiments are conducted on the InternVL-26B model, which consists of 48 layers. For our method and FastV [5], we prune $60\%$, $80\%$, and $95\%$ of visual tokens at the 19th, 9th, and 2nd layers, achieving average token retention ratios of $64\%$, $35\%$, and $9\%$, respectively. ToMe [2] performs token merging prior to the language model, with the merging ratio adjusted to achieve similar average token retention ratios.

![](images/b59c75614d71b1522fcad999791973779475a96df619a78044c18db2c9d7a1b3.jpg)
Figure 3. Performance-efficiency curves of SGL (SGP + SEE). The results with $18\%$, $35\%$, $50\%$, and $64\%$ visual token retention ratios are presented as a curve. For the 26B and 40B models, we use an NVIDIA H20 GPU; the 76B model is sharded across two GPUs.

![](images/0445cd245a6b6713b639171f731a04f43094a07c193591bccdc55bda45068f4e.jpg)

![](images/462e5512e7df52ba755ef2c878976b1f7a30a9a9b31d731e4806068f80b5e3ee.jpg)

At a relatively high token retention ratio, such as $64\%$, all methods exhibit competitive performance across various tasks. This suggests significant visual token redundancy in VLMs, underscoring the importance of visual token pruning.

When the token retention ratio is decreased to $35\%$, the performance of ToMe and FastV starts to drop, particularly on OCR-related tasks, including TextVQA [46], ChartQA [36], and DocVQA [37], as well as on visual grounding tasks.
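The average token retention ratios reported in Section 4.2 can be derived from the pruning configuration: all visual tokens are processed up to the pruning layer, and only a $(1-p)$ fraction afterwards. A quick sanity check, assuming this simple accounting:

```python
def avg_retention(prune_layer: int, prune_ratio: float, total_layers: int = 48) -> float:
    """Average fraction of visual tokens processed per LM layer, assuming all
    tokens are kept before `prune_layer` and (1 - prune_ratio) afterwards."""
    kept_after = 1.0 - prune_ratio
    return (prune_layer + (total_layers - prune_layer) * kept_after) / total_layers

# Pruning 60%/80%/95% at layers 19/9/2 of the 48-layer InternVL-26B LM
for layer, ratio in [(19, 0.60), (9, 0.80), (2, 0.95)]:
    print(f"prune {ratio:.0%} at layer {layer}: avg retention ~ {avg_retention(layer, ratio):.0%}")
```

This reproduces the $64\%$, $35\%$, and $9\%$ average retention ratios quoted above.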
Their performance on MM-Vet [61] also drops significantly, since it also includes many OCR-related questions. These tasks require methods to accurately retain answer-related visual tokens in order to understand image details. This performance decline demonstrates that ToMe and FastV cannot accurately retain essential tokens. In contrast, our method maintains competitive performance across all tasks.

With only $9\%$ of visual tokens retained, FastV and ToMe collapse across all tasks, as critical visual tokens are lost due to inaccurate pruning. In this challenging scenario, our method experiences only a marginal performance drop, in contrast to the other methods, achieving over $89\%$ of the original InternVL-26B's performance. The visualization in Figure 5 also validates the superiority of our SGP, which successfully preserves the tokens most relevant to a correct answer, thanks to the global attention maps aggregated from the small VLM.

# 4.3. SGP with SEE Towards Improved Efficiency

We further validate the superiority of our SGL by incorporating both the SGP and SEE mechanisms. The performance-efficiency curves for varying token retention ratios ($18\%$, $35\%$, $50\%$, and $64\%$) across large VLMs of different sizes (InternVL-{26B, 40B, 76B}) on TextVQA are presented in Figure 3. As discussed in Section 3.3, we perform early exiting based on the decision score of the small VLM's answers to reduce the invocation of the large VLM. When the large VLM is activated, SGP is employed to reduce the visual token redundancy. Note that the early-exiting ratio can be flexibly controlled by adjusting the decision threshold: a smaller threshold induces more early exits, i.e., less invocation of the large VLM. Here we present the performance curves at $60\%$, $40\%$, and $20\%$ early-exiting ratios, denoted as $\mathrm{SGP}_{60\% \mathrm{SEE}}$, $\mathrm{SGP}_{40\% \mathrm{SEE}}$, and $\mathrm{SGP}_{20\% \mathrm{SEE}}$, respectively.

It can be observed that, with the 26B large VLM, our method SGP without SEE yields slower inference compared to FastV and ToMe, due to the overhead of the 2B small VLM. However, scaling the large VLM to 40B and 76B results in competitive inference speeds and superior performance relative to FastV and ToMe, particularly at low token retention ratios. Additionally, the proposed SEE enables SGP to maintain competitive performance at $20\%$ and $40\%$ early-exiting ratios while significantly reducing the average inference time across all VLM sizes. These results demonstrate the effectiveness of our SEE in identifying unreliable answers from the small VLM and appropriately invoking the large VLM. To summarize, our SGL offers a superior trade-off between efficiency and performance.

# 4.4. Ablation Study of Key Designs

Effectiveness of all-layer attention maps. To verify the superiority of aggregating attention maps from all layers of the small VLM, we experiment with different sources to guide visual token pruning. All experiments are conducted with a $9\%$ retention ratio without SEE.

| attention map source | TextVQA | SEED | RC | score ratio |
| --- | --- | --- | --- | --- |
| all layers of 26B (oracle) | 80.04 | 71.90 | 84.49 | 94.44% |
| one layer of 26B (FastV [5]) | 43.84 | 54.56 | 20.33 | 48.37% |
| 10% layers of 2B | 44.40 | 63.03 | 18.09 | 51.92% |
| 30% layers of 2B | 57.42 | 63.07 | 15.20 | 56.15% |
| 50% layers of 2B | 74.16 | 68.17 | 56.74 | 80.31% |
| 70% layers of 2B | 77.29 | 70.96 | 80.33 | 91.40% |
| all layers of 2B (ours) | 78.98 | 72.23 | 80.36 | 92.64% |

Table 2. Performance with attention maps from different sources. The visual token retention ratio is set to $9\%$ for all experiments. Aggregating attention maps from all layers of the small model (2B) achieves performance comparable to the oracle.

The results shown in Table 2 indicate that using attention maps aggregated from the small model outperforms using a
single layer from the large model, as in FastV [5]. Moreover, the average performance consistently improves as the number of aggregated layers increases. This demonstrates that leveraging attention maps from multiple layers helps accurately retain essential visual tokens, achieving performance comparable to the oracle. Notably, our method even slightly outperforms the oracle on the SEED benchmark, highlighting its effectiveness.

Key tokens used for attention aggregation. Our SGP adopts both prompt and generated tokens in the attention maps to assess the importance of visual tokens. We ablate this design choice without SEE in Table 3. It can be observed that using only generated or only prompt tokens may result in unstable performance. For example, generated tokens achieve the best performance on the RC dataset but perform poorly on the SEED benchmark. Similarly, using only the last prompt token, as in FastV [5], also leads to instability, particularly on the RC dataset. These results demonstrate that employing both prompt and generated tokens provides a comprehensive evaluation of visual tokens.

| used token | TextVQA | SEED | RC | score ratio |
| --- | --- | --- | --- | --- |
| last prompt token | 79.40 | 69.53 | 13.63 | 67.26% |
| prompt tokens | 76.15 | 72.25 | 59.90 | 84.03% |
| generated tokens | 79.38 | 63.47 | 83.51 | 90.16% |
| prompt + generated tokens | 78.98 | 72.23 | 80.36 | 92.64% |

Table 3. Performance of using different tokens in visual token importance evaluation. Prompt and generated tokens provide a comprehensive evaluation of visual tokens.

Different strategies used in SEE. We further investigate the effectiveness of the early-exiting criteria used in our SEE. The proposed strategy $S$ is compared with other strategies, including the length-normalized sequence probability alone ($S_{\text{confidence}}$ in Equation 6) [11, 39], the consistency score ($S_{\text{consistency}}$ in Equation 8), quantile [16], and entropy [11]. For the quantile strategy, the top 75th, 50th, and 25th percentile probabilities among generated tokens are used as confidence scores, denoted as $\text{Quantile}_{\text{Q1}}$, $\text{Quantile}_{\text{Q2}}$, and $\text{Quantile}_{\text{Q3}}$, respectively.
In the entropy strategy, we aggregate the entropy of each generated token, where higher entropy indicates lower confidence. The results are presented in Figure 4, where SGP is omitted in all experiments.

![](images/80c06009a145dcf161347225de7e5c8388fe74f86ac03a71cb537c17d7972be6.jpg)
Figure 4. Comparison of different early-exiting decision scores. We present the area between each strategy's curve and the 2B model score alongside their names. A larger area indicates a more effective criterion. At the same early-exiting ratio, a higher score reflects improved accuracy in identifying incorrect responses from the small VLM. Note that SGP is not adopted, for a clear comparison.

It is observed that $S_{\text{confidence}}$ outperforms the other baselines except for $S_{\text{consistency}}$. Building on this, we develop our strategy $S$ by integrating both $S_{\text{confidence}}$ and $S_{\text{consistency}}$, achieving outstanding performance. It is noteworthy that the calculation of $S_{\text{consistency}}$ is efficient, consuming $< 1/10$ of the initial small VLM inference time, as analyzed in Section 3.3.

# 4.5. Visualization of Token Pruning and Answers

To provide a deeper understanding of our SGP, we visualize the token pruning results in Figure 5. The first row demonstrates that the small VLM helps retain the essential visual tokens necessary for answering questions across various token retention ratios. The second row shows that, even when the answer is incorrect, the small VLM can still retain tokens related to the answer. This suggests that, although the small VLM lacks the precise reasoning and perception necessary for accurate answers, it possesses sufficient reasoning ability to identify target regions, thereby guiding visual token pruning in the large VLM. In the last row, we present a challenging example where the answer is located at the boundary of the image and appears in a small font size.
In this difficult scenario, the small VLM produces the same correct answer as the large VLM, verifying the competitive performance of the small VLM. This enables us to conduct SEE using the responses from the small VLM during inference.

We also present the visual token pruning results and answers from FastV [5]. We find that it can only preserve some of the tokens relevant to the answer in the thumbnail and fails to precisely retain them in the main image. Consequently, this limitation impairs the model's ability to perceive image details, leading to inaccurate predictions.

![](images/da03876941b9b11bb051bfd0a427faca1d753eac5136ae8c6c2f4a59bfc0c776.jpg)
Figure 5. Visualization of SGP under different visual token retention ratios and answers. Visual tokens are pruned by $60\%$, $80\%$, and $95\%$ at the 19th, 9th, and 2nd layers of the 26B large VLM, which comprises 48 layers. This results in average token retention ratios of $64\%$, $35\%$, and $9\%$, respectively. Retained tokens are highlighted with $\blacksquare$. Thumbnails employed in InternVL are presented in the left corner.

# 4.6. Generalization on Different Size Models

Small VLM. In Table 4, we evaluate the effectiveness of using small VLMs of varying sizes (InternVL-\{1B, 2B, 4B\}) to guide visual token pruning in the larger InternVL-26B model [7]. These models are built on different language models (LMs) from various sources. Specifically, InternVL-1B uses Qwen2 [55], while InternVL-2B and InternVL-4B adopt InternLM2 [4] and Phi3 [1], respectively. The results show that our method is robust to the choice of small model and maintains compatibility with different LMs. Surprisingly, InternVL-1B performs slightly better than InternVL-2B across three tasks, motivating further reduction of the small VLM size in future studies.

Large VLM. We further substitute the large VLM in SGL

| small VLM size | LM source | TextVQA | SEED | RC | score ratio |
| --- | --- | --- | --- | --- | --- |
| 1B | Qwen2 [55] | 79.38 | 72.27 | 82.06 | 93.44% |
| 2B | InternLM2 [4] | 78.98 | 72.23 | 80.36 | 92.64% |
| 4B | Phi3 [1] | 79.70 | 73.65 | 75.39 | 91.73% |

Table 4. Performance of leveraging small VLMs of different sizes. These small VLMs are based on various language models.

| large VLM size | w/ ours | TextVQA | SEED | RC | score ratio |
| --- | --- | --- | --- | --- | --- |
| 26B | ✗ | 82.45 | 76.78 | 91.24 | 100.00% |
| 26B | ✓ | 78.98 | 72.23 | 80.36 | 92.64% |
| 40B | ✗ | 83.11 | 78.15 | 93.00 | 100.00% |
| 40B | ✓ | 79.96 | 74.11 | 79.99 | 92.38% |
| 76B | ✗ | 84.33 | 78.17 | 92.20 | 100.00% |
| 76B | ✓ | 80.72 | 73.93 | 81.82 | 92.98% |

Table 5. Visual token pruning for different-sized large VLMs. The average retention ratio is set to $9\%$.

by InternVL-40B and InternVL-76B, with the small VLM fixed as InternVL-2B. The results in Table 5 indicate that our small model effectively guides significantly larger VLMs. This suggests that SGP is robust and has the potential to guide visual token pruning in huge VLMs.

# 4.7. Generalization on Various Architectures

We further assess the generalizability of SGL using Qwen2-VL [50] and LLaVa-OV [25]. The smallest models in these families, Qwen2-VL-2B and LLaVa-OV-0.5B, are used to guide visual token pruning in the largest models, Qwen2-VL-72B and LLaVa-OV-72B. The results in Table 6 reveal that SGL enables the large VLMs to maintain approximately $96\%$ of their original performance while achieving a retention ratio of $9\%$ for visual tokens. This underscores the potential applicability of our method to varied VLM architectures.

| method | token ratio | TextVQA | score ratio |
| --- | --- | --- | --- |
| Qwen2-VL-72B [50] | 100% | 85.50 | 100% |
| w/ SGP (ours) | 64% | 85.49 | 99.98% |
| w/ SGP (ours) | 35% | 85.13 | 99.56% |
| w/ SGP (ours) | 9% | 82.88 | 96.94% |
| LLaVa-OV-72B [25] | 100% | 79.30 | 100% |
| w/ SGP (ours) | 64% | 79.19 | 99.86% |
| w/ SGP (ours) | 35% | 78.65 | 99.18% |
| w/ SGP (ours) | 9% | 75.98 | 95.81% |

Table 6. Generalizability on Qwen2-VL and LLaVa-OV. We adopt Qwen2-VL-2B and LLaVa-OV-0.5B to guide the visual token pruning in Qwen2-VL-72B and LLaVa-OV-72B, respectively.

# 5. Conclusion

In this study, we explore the effectiveness of attention maps for visual token pruning in VLMs. Our findings reveal that the attention map aggregated from all layers of a small VLM exhibits patterns akin to those of a larger VLM. Based on this insight, we introduce SGP, which prunes visual tokens in large VLMs under the guidance of a small VLM. This small VLM is further exploited to perform early exiting (SEE) to make full use of its predictions. Both techniques are training-free. Comprehensive experiments across 11 benchmarks demonstrate the method's effectiveness, particularly at low visual token retention ratios.

Limitations and future works. Our method is primarily validated on multi-modal understanding tasks. Its application to recent VLMs [45, 51-53, 67], which unify both understanding and generation, is worth studying in the future.

# References

[1] Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. 7, 8
[2] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token merging: Your vit but faster. arXiv preprint arXiv:2210.09461, 2022. 1, 3, 5
[3] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 1, 2
[4] Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, et al. InternLM2 technical report. arXiv preprint arXiv:2403.17297, 2024.
7, 8 +[5] Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv preprint arXiv:2403.06764, 2024. 1, 2, 3, 4, 5, 6, 7 +[6] Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, and Ping Luo. Diffrate: Differentiable compression rate for efficient vision transformers. In ICCV, pages 17164-17174, 2023. 3 +[7] Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821, 2024. 1, 2, 3, 4, 5, 7 +[8] Jiwong Choi, Dayoung Chun, Hyun Kim, and Hyuk-Jae Lee. Gaussian yolov3: An accurate and fast object detector using localization uncertainty for autonomous driving. In ICCV, pages 502-511, 2019. 3 +[9] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 2, 3 +[10] Stefan Eggenreich, Christian Payer, Martin Urschler, and Darko Štern. Variational inference and bayesian cnns for uncertainty estimation in multi-factorial bone age prediction. arXiv preprint arXiv:2002.10819, 2020. 3 +[11] Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, Kirill Fedyanin, Daniil Vasilev, Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, et al. Lm-polygraph: Uncertainty estimation for language models. arXiv preprint arXiv:2311.07383, 2023. 3, 4, 5, 7 + +Acknowledgments. This work was supported by Damo Academy through Damo Academy Research Intern Program. This work also was supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-08-008). 
Yang You's group is being sponsored by NUS startup grant (Presidential Young Professorship), Singapore MOE Tier-1 grant, ByteDance grant, ARCTIC grant, SMI grant (WBS number: A8001104-00-00), Alibaba grant, and Google grant for TPU usage. +[12] Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, 8:539-555, 2020. 3 +[13] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.5 +[14] Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. A survey of uncertainty in deep neural networks. Artificial Intelligence Review, 56(Suppl 1):1513-1589, 2023. 3 +[15] Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. A survey of confidence estimation and calibration in large language models. In ACL, pages 6577-6595, 2024. 3 +[16] Neha Gupta, Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat, Aditya Krishna Menon, and Sanjiv Kumar. Language model cascades: Token-level uncertainty and beyond. arXiv preprint arXiv:2404.10136, 2024. 3, 5, 7 +[17] Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey. TPAMI, 44(11):7436-7456, 2021. 3 +[18] Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, Shiji Song, Junfeng Cao, Wenhui Huang, Chao Deng, and Gao Huang. Learning to weight samples for dynamic early-exiting networks. In ECCV, pages 362-378, 2022. 3 +[19] Drew A Hudson and Christopher D Manning. 
Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, pages 6700-6709, 2019. 5 +[20] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. 1, 2, 3 +[21] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022. 3 +[22] Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664, 2023. 3 +[23] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023. 5 +[24] Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, and Deva Ramanan. Naturalbench: Evaluating vision-language models on natural adversarial samples. arXiv preprint arXiv:2410.14669, 2024. 2 + +[25] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024. 2, 5, 8 +[26] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. In ECCV, pages 323–340. Springer, 2025. 1, 3 +[27] Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang, and Pengtao Xie. Not all patches are what you need: Expediting vision transformers via token reorganizations. arXiv preprint arXiv:2202.07800, 2022. 3 +[28] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. 
Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. 2 +[29] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023. 1, 2, 3 +[30] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 1, 2, 3 +[31] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In ECCV, pages 216-233. Springer, 2025. 5 +[32] Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters, 5(2):3153-3160, 2020. 3 +[33] Feipeng Ma, Hongwei Xue, Guangting Wang, Yizhou Zhou, Fengyun Rao, Shilin Yan, Yueyi Zhang, Siying Wu, Mike Zheng Shou, and Xiaoyan Sun. Visual perception by large language model's weights. arXiv preprint arXiv:2405.20339, 2024. 1, 3 +[34] Andrey Malinin and Mark Gales. Uncertainty estimation in autoregressive structured prediction. arXiv preprint arXiv:2002.07650, 2020. 3 +[35] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, pages 11-20, 2016. 5 +[36] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022. 5, 6 +[37] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In WACV, pages 2200-2209, 2021. 5, 6 +[38] Lingchen Meng, Hengduo Li, Bor-Chun Chen, Shiyi Lan, Zuxuan Wu, Yu-Gang Jiang, and Ser-Nam Lim. Adavit: Adaptive vision transformers for efficient image recognition. In CVPR, pages 12309-12318, 2022. 3 +[39] Kenton Murray and David Chiang. 
Correcting length bias in neural machine translation. arXiv preprint arXiv:1808.10006, 2018. 4, 7 +[40] Tanya Nair, Doina Precup, Douglas L Arnold, and Tal Arbel. Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Medical Image Analysis, 59:101557, 2020. 3 +[41] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763. PMLR, 2021. 3 +[42] Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, and Cho-Jui Hsieh. Dynamicvit: Efficient vision transformers with dynamic token sparsification. NeurIPS, 34:13937-13949, 2021. 3 +[43] Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, and Peter J Liu. Out-of-distribution detection and selective generation for conditional language models. In ICLR, 2022. 3 +[44] Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, and Yan Yan. Llava-prumerge: Adaptive token reduction for efficient large multimodal models. arXiv preprint arXiv:2403.15388, 2024. 1, 3 +[45] Md Fahim Sikder, Resmi Ramachandranpillai, and Fredrik Heintz. Transfusion: Generating long, high fidelity time series using diffusion models with transformers. arXiv preprint arXiv:2307.12667, 2023. 8 +[46] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In CVPR, pages 8317-8326, 2019. 5, 6 +[47] Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. Eva-clip: Improved training techniques for clip at scale. arXiv preprint arXiv:2303.15389, 2023. 3 +[48] InternLM Team. Internlm: A multilingual language model with progressively enhanced capabilities, 2023.
3 +[49] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 1, 2, 3 +[50] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 1, 2, 3, 5, 8 +[51] Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, et al. Emu3: Next-token prediction is all you need. arXiv preprint arXiv:2409.18869, 2024. 8 +[52] Yecheng Wu, Zhuoyang Zhang, Junyu Chen, Haotian Tang, Dacheng Li, Yunhao Fang, Ligeng Zhu, Enze Xie, Hongxu Yin, Li Yi, et al. Vila-u: A unified foundation model integrating visual understanding and generation. arXiv preprint arXiv:2409.04429, 2024. +[53] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024. 8 +[54] Mingze Xu, Mingfei Gao, Zhe Gan, Hong-You Chen, Zhengfeng Lai, Haiming Gang, Kai Kang, and Afshin Dehghan. Slowfast-llava: A strong training-free baseline for video large language models. arXiv preprint arXiv:2407.15841, 2024. 2 +[55] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024. 1, 2, 3, 7, 8 +[56] Le Yang, Yizeng Han, Xi Chen, Shiji Song, Jifeng Dai, and Gao Huang. Resolution adaptive networks for efficient inference. In CVPR, pages 2369-2378, 2020. 3 +[57] Le Yang, Ziwei Zheng, Boxu Chen, Zhengyu Zhao, Chenhao Lin, and Chao Shen.
Nullu: Mitigating object hallucinations in large vision-language models via halluspace projection. In CVPR, 2025. 1 +[58] Xubing Ye, Yukang Gan, Xiaoke Huang, Yixiao Ge, Ying Shan, and Yansong Tang. Voco-llama: Towards vision compression with large language models. arXiv preprint arXiv:2406.12275, 2024. 1, 3 +[59] KiYoon Yoo, Jangho Kim, Jiho Jang, and Nojun Kwak. Detection of word adversarial examples in text classification: Benchmark and baseline via robust density estimation. arXiv preprint arXiv:2203.01677, 2022. 3 +[60] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In ECCV, pages 69-85. Springer, 2016. 5 +[61] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023. 5, 6 +[62] Murong Yue, Jie Zhao, Min Zhang, Liang Du, and Ziyu Yao. Large language model cascades with mixture of thoughts representations for cost-efficient reasoning. arXiv preprint arXiv:2310.03094, 2023. 3 +[63] Yuan Zhang, Chun-Kai Fan, Junpeng Ma, Wenzhao Zheng, Tao Huang, Kuan Cheng, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, et al. Sparsevlm: Visual token sparsification for efficient vision-language model inference. arXiv preprint arXiv:2410.04417, 2024. 1, 3 +[64] Yuanhan Zhang, Bo Li, Haotian Liu, Yong Jae Lee, Liangke Gui, Di Fu, Jiashi Feng, Ziwei Liu, and Chunyuan Li. Llava-next: A strong zero-shot video understanding model, 2024. 2 +[65] Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, Fan Wang, and Yang You. Dynamic tuning towards parameter and inference efficiency for vit adaptation. NeurIPS, 2024. 3 +[66] Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Yibing Song, Gao Huang, Fan Wang, and Yang You. Dynamic diffusion transformer. In ICLR, 2025.
3 +[67] Pengfei Zhou, Xiaopeng Peng, Jiajun Song, Chuanhao Li, Zhaopan Xu, et al. Gate opening: A comprehensive benchmark for judging open-ended interleaved image-text generation. arXiv preprint arXiv:2411.18499, 2024. 8 +[68] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 1, 2, 3 \ No newline at end of file diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/images.zip b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ca3df9940564cef8d8a85c97d725636e84bccd46 --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffbcadde551bc45c4829bd9ba145325728b1b7cbc2bd577bc0154b637f44eecc +size 722828 diff --git a/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/layout.json b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cd34aee146cfba08202ca8fdeb13a0454417b0f4 --- /dev/null +++ b/CVPR/2025/A Stitch in Time Saves Nine_ Small VLM is a Precise Guidance for Accelerating Large VLMs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58a282365fcda22cb5aeaaeb3c814b56c3101d7048db8538c1432a6345a802a7 +size 456304 diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_content_list.json b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced 
Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6752c820743fc0182a82ecab134c2f11e9076ab3 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:165d1643a45bf8600dd3a2d031495647d333880461cd247d4dcc523092e89b3f +size 78000 diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_model.json b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9056eaaa860922b359545cca34c83535db381832 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ade884b8395c0a2102c6d921757bc0640e5f2de9967422ca04f56bda8639e56a +size 93874 diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_origin.pdf b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5e7b86867e1f78f61eeed40be3deea2387756260 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/7be69b4e-6240-4897-87a0-9e54582fed6f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a827158481c0f49638b4a0098d625361697da0639897485d7c1bda27c7db6e28 +size 
1554389 diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/full.md b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..159eb1f553a1d0d96426a21c6d6d911f7c2056c8 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/full.md @@ -0,0 +1,340 @@ +# A Tale of Two Classes: Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets + +David Mildenberger $^{1,2,*}$ , Paul Hager $^{1,*}$ , Daniel Rueckert $^{1,2,3}$ , Martin J. Menten $^{1,2,3}$ + +$^{1}$ Technical University of Munich, $^{2}$ Munich Center for Machine Learning, $^{3}$ Imperial College London + +{david.mildenberger, paul.hager, daniel.rueckert, martin.menten}@tum.de + +*These authors contributed equally. + +# Abstract + +Supervised contrastive learning (SupCon) has proven to be a powerful alternative to the standard cross-entropy loss for classification of multi-class balanced datasets. However, it struggles to learn well-conditioned representations of datasets with long-tailed class distributions. This problem is potentially exacerbated for binary imbalanced distributions, which are commonly encountered during many real-world problems such as medical diagnosis. In experiments on seven binary datasets of natural and medical images, we show that the performance of SupCon decreases with increasing class imbalance. To substantiate these findings, we introduce two novel metrics that evaluate the quality of the learned representation space. By measuring the class distribution in local neighborhoods, we are able to uncover structural deficiencies of the representation space that classical metrics cannot detect. 
Informed by these insights, we propose two new supervised contrastive learning strategies tailored to binary imbalanced datasets that improve the structure of the representation space and increase downstream classification accuracy over standard SupCon by up to $35\%$ . We make our code available. $^1$ + +# 1. Introduction + +Supervised contrastive learning (SupCon) has emerged as a powerful alternative to the cross-entropy loss for supervised deep learning [8, 17, 20, 21]. SupCon combines full label information with a contrastive loss to cluster samples of the same class in similar regions of the representation space. Conversely, embeddings of different classes are pushed apart. This results in a well-conditioned representation space that preserves discriminative features of each sample while separating semantic classes [2]. SupCon has been used to achieve state-of-the-art results across diverse fields, including genetics [3], out-of-distribution detection [31], object detection [30], video action recognition [10], and neuroscience [28]. + +![](images/986385359386014dee1a5652b2e79f09e1618f76fba0c6234dbc73d7f9b1c09a.jpg) + +![](images/9e47408f8e4f4145f1c7a2a513bbb2c9162f774b088cb14049f89a08007e27f4.jpg) + +![](images/f24a2ea5895a3be078668541891e9a0beca39c6348321b912a831c80663d60f1.jpg) +Figure 1. Supervised contrastive learning (SupCon) on multi-class balanced datasets returns a well-conditioned representation space, in which semantic classes are clearly separated. We show that for binary imbalanced datasets the prevalence of a dominant majority class causes the embeddings to collapse to a single point. Our proposed fixes restore the clear separation of semantic classes. + +SupCon has been predominantly developed on and applied to multi-class balanced benchmark datasets like ImageNet, which consist of numerous equally prevalent classes. In contrast, real-world datasets often significantly deviate from these idealized conditions.
There has been a growing focus on adapting supervised contrastive learning to long-tailed datasets, which are characterized by a few common "head" classes and many rare "tail" classes [7, 14, 21, 36, 42]. These works show that SupCon often yields representation spaces with dominating head classes when applied to long-tailed datasets, leading to reduced downstream utility. + +SupCon's shortcomings on long-tailed datasets are potentially exacerbated on distributions with only two underlying classes: a common majority class and a rare minority class. Such binary imbalanced distributions are common in real-world tasks, such as anomaly detection, fraud detection, and disease classification. For example, during medical screening, subjects are classified as either healthy or diseased, with the healthy cases typically outnumbering the pathological ones [13, 19]. A survey of representation learning for medical image classification found that 78 out of 114 studies focused on binary classification problems [13]. + +This work is the first to investigate the effectiveness of SupCon on binary imbalanced datasets, identifying limitations of existing supervised contrastive learning strategies and proposing algorithmic solutions to these issues. Our main contributions are: + +- We empirically demonstrate that SupCon is ineffective on binary imbalanced datasets. In controlled experiments on seven natural and medical imaging datasets, we observe that the performance of SupCon degrades with increasing class imbalance, falling behind the standard cross-entropy loss even at moderate levels of class imbalance. +- To investigate these findings, we introduce two novel metrics for diagnosing the structural deficiencies of the representation space. We show that at high class imbalance all embeddings are closely clustered in a small region of the representation space, preventing separation of semantic classes (see Fig. 1).
While canonical metrics fail to capture this problem, our proposed metrics detect this representation space collapse and diminished downstream utility. Furthermore, we theoretically substantiate our empirical observations in a proof. +- Informed by the insights gained through our metrics, we propose two new supervised contrastive learning strategies tailored to binary imbalanced datasets. These adjustments are easy to implement and incur minimal additional computational cost compared to the standard SupCon loss. We demonstrate that our fixes boost downstream classification performance by up to $35\%$ over SupCon and outperform leading strategies for long-tailed data by up to $5\%$ . + +# 2. Related works + +# 2.1. Analyzing representation spaces + +To evaluate the quality of representation spaces, Wang and Isola have introduced the notions of representation space alignment and uniformity [37]. Alignment measures how closely semantically similar samples are located in the representation space. Uniformity quantifies the utilization of the representation space's capacity. Both metrics have been empirically validated as strong indicators of the representation space's quality and downstream utility. However, high uniformity and alignment alone do not guarantee separability of classes, as shown by Wang et al. [38]. In a subsequent study, Li et al. proposed analyzing alignment and uniformity at the level of semantic classes instead of individual samples [21]. While this improves upon sample-wise analysis, it still fails to properly compare representations between classes, which is crucial for downstream classification performance. We address this limitation by introducing two novel metrics, enabling the evaluation of sample and class consistency within representation neighborhoods. + +# 2.2. Supervised contrastive learning for long-tailed datasets + +When applying SupCon to long-tailed datasets, samples from the majority class often occupy a disproportionate amount of the representation space [7, 14, 15, 21, 42]. In the most severe cases the representation space collapses completely, losing all utility [4, 8, 9, 39]. Many works thus aim to enhance latent space uniformity by spreading features evenly, irrespective of data imbalance. Zhu et al. balance gradient contributions of classes to achieve a regular simplex latent structure [42]. Hou et al. split the majority classes into smaller sub-classes according to their latent features [14]. Cui et al. leverage parametric class prototypes to adaptively balance the learning signal [7]. Kang et al. and Li et al. limit the number of positives that contribute to the loss, with Li et al. also using fixed class prototypes [15, 21]. Although these methods outperform SupCon on long-tailed distributions, they remain untested on binary imbalanced distributions whose unique characteristics are not explicitly addressed. + +# 3. Metrics to diagnose representation spaces of binary data distributions + +Alignment and uniformity are established metrics for evaluating the quality and structure of representation spaces obtained via unsupervised contrastive learning [37]. Alignment measures the average distance between positive pairs in the feature space. Low average distance, or high alignment, indicates consistent embeddings that are robust to noise. Uniformity measures the average Gaussian potential between embeddings on the unit hypersphere. Low average potential, or high uniformity, corresponds to expressive embeddings that fully utilize the entire representation space. Effective representation spaces exhibit both high alignment and high uniformity and should in theory yield good linear separability of the semantic classes.
+ +However, because these metrics were originally developed for unsupervised contrastive learning, they operate on a per-sample level and ignore latent class information. To address this limitation, Li et al. have extended alignment and uniformity to SupCon by analyzing the alignment of classes instead of samples [21]. Although this class-level alignment better reflects semantic separability, it still does not capture the relationships between classes. + +![](images/1b216a16e9342ed9aaa7a89b24526650135bec2766254556cf34f71cd5630e6.jpg) + +![](images/524b03d64eb8f158de44f23fc915f93dbfea962eee388b9b565828f2ce8cdfcd.jpg) +Class Alignment Distance (CAD) + +![](images/9d923e3cb25d6f9b7dd8b48c3e8c109ccca1e94ecbd68ed862c9bb6e437b0ebd.jpg) +Good SAD and CAD values can hide poor representation spaces. + +![](images/8320dee19d2725f5e9bb4e4260862b76e193aaa81611af8cee177fe267f7a7e3.jpg) +Good SAA and CAC values find good representation spaces. + +![](images/099555f206b5fe2f4431feb0d72608c61199a47dbccbddf2c0c1e514a84b24cb.jpg) +Class Alignment Consistency (CAC) + +![](images/5b40936b5d473654de7a59027c2784dbf38ad57fd10cec2a7fcff149d516e909.jpg) + +![](images/7496fd61d2e2dc9d265d0581fe5c3b28d5b18f8281e9f13c13f655ded2a83f3a.jpg) +Figure 2. Our novel sample alignment accuracy (SAA) and class alignment consistency (CAC) metrics capture the relationships between embeddings of different classes instead of just within one class. By more directly measuring the separability of latent classes, they are stronger indicators of a representation space's downstream utility. + +We therefore propose two new alignment metrics: sample alignment accuracy (SAA) and class alignment consistency (CAC). Our metrics compare the alignment both within and between samples and classes. This is in contrast to the canonical sample alignment distance (SAD) [37] and class alignment distance (CAD) [21], which only measure the alignment within one sample or class, as shown in Fig. 2. + +# 3.1. Definitions + +We define a dataset $\mathcal{X}$ containing $N$ images, $\mathcal{X} = \{x_{k}\}_{k = 1}^{N}$ . Given that the images in $\mathcal{X}$ are labeled with a binary class distribution, we denote the samples of class $i\in \{0,1\}$ as $x\in \mathcal{X}_i$ , with the function $g:\mathcal{X}\to \{0,1\}$ mapping each image to its label. Let $\tilde{x}_k$ and $\tilde{x}_k^+$ denote two randomly augmented instances of $x_{k}$ that form a positive pair. Conversely, any view that is not generated from $x_{k}$ , denoted by $\tilde{x}_k^-$ , forms a negative pair with $\tilde{x}_k$ . The set of all views is called $\mathcal{W}$ with $|\mathcal{W}| = 2N$ . The set of all views of class $i$ is $\mathcal{W}_i$ . The function $f:\mathcal{W}\rightarrow S^{d - 1}$ maps a view $\tilde{x}$ onto a $d$ -dimensional representation on the unit sphere $S^{d - 1}$ . + +# 3.1.1. Sample alignment distance (SAD) + +Sample alignment distance (SAD), as defined by Wang et al. [37], measures the average distance between representations of two augmented views of the same sample. Formally, SAD is computed as the average pairwise $\ell_2$ distance between sample-wise positive pairs: + +$$ +\mathrm{SAD} = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \| f(\tilde{x}) - f(\tilde{x}^{+}) \|_{2} \tag{1} +$$ + +A low SAD, or high alignment, implies that two different views of the same image produce similar embeddings. Obtaining consistent embeddings despite perturbations from augmentations typically indicates that generalized class-level semantic features are being captured, which is ultimately beneficial for downstream applications. + +# 3.1.2. Sample alignment accuracy (SAA) + +We introduce the concept of sample alignment accuracy (SAA) to determine if the embeddings of positive pairs are more closely aligned with each other compared to other samples.
SAA is the proportion of all sample-wise positive pairs for which the $\ell_2$ distance between their embeddings is smaller than that to all negative pairs: + +$$ +\mathrm{SAA} = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \mathbb{1}\left( \| f(\tilde{x}) - f(\tilde{x}^{+}) \|_{2} < \min_{\tilde{x}^{-} \in \mathcal{W} \backslash \{\tilde{x}, \tilde{x}^{+}\}} \| f(\tilde{x}) - f(\tilde{x}^{-}) \|_{2} \right) \tag{2} +$$ + +Here, $\mathbb{1}(\cdot)$ is the indicator function, which outputs 1 if the condition holds, and 0 otherwise. + +Compared to SAD, SAA is more insightful in cases in which many samples, both positives and negatives, are placed in close proximity to each other. While SAD would indicate high alignment despite low separability of semantic classes, SAA would correctly diagnose a partially degenerate representation space. + +# 3.1.3. Class alignment distance (CAD) + +Class alignment distance (CAD), introduced by Li et al. [21], calculates the average distance between all representations within a class to evaluate how well the learned representation space clusters samples according to their semantic labels. Let $C$ be the number of classes and $\tilde{x}, \tilde{x}' \in \mathcal{W}_i$ all unique pairs of samples in $\mathcal{W}_i$ : + +$$ +\mathrm{CAD} = \frac{1}{C} \sum_{i = 1}^{C} \frac{1}{\binom{|\mathcal{X}_{i}| + 1}{2}} \sum_{\tilde{x}, \tilde{x}^{\prime} \in \mathcal{W}_{i}} \| f(\tilde{x}) - f(\tilde{x}^{\prime}) \|_{2} \tag{3} +$$ + +Compared to SAD, CAD captures alignment across an entire class, indicating how well a class clusters on the representation space's hypersphere. + +# 3.1.4. Class alignment consistency (CAC) + +To measure how pure embedding neighborhoods are with respect to the latent class, we introduce class alignment consistency (CAC).
We define class alignment within a local neighborhood of the closest $r$ views to $\tilde{x}$ , which we call $\mathcal{W}_{\tilde{x}}$ . For our analysis we set $r$ to $5\%$ of all views. Let $\mathcal{D}$ denote the set of sets containing each $\tilde{x}$ and its local neighborhood $\mathcal{W}_{\tilde{x}}$ , with $(\tilde{x}, \mathcal{W}_{\tilde{x}}) \in \mathcal{D}$ . + +$$ +\mathrm{CAC} = \frac{1}{|\mathcal{D}|} \sum_{(\tilde{x}, \mathcal{W}_{\tilde{x}}) \in \mathcal{D}} \frac{1}{|\mathcal{W}_{\tilde{x}}|} \sum_{\tilde{x}^{\prime} \in \mathcal{W}_{\tilde{x}}} \mathbb{1}(g(\tilde{x}) = g(\tilde{x}^{\prime})) \tag{4} +$$ + +Unlike CAD, CAC also measures the distance of embeddings to those of the opposite class. This provides a more direct signal of the separability of classes that better correlates with downstream classification performance. + +# 3.1.5. Gaussian potential uniformity (GPU) + +Uniformity measures how evenly representations are distributed across the unit hypersphere. Wang et al. [37] define uniformity through the logarithm of the average pairwise Gaussian potential: + +$$ +\mathrm{GPU} = \log \left( \frac{1}{\binom{N + 1}{2}} \sum_{k = 1}^{N} \sum_{j = 1}^{N} e^{-\| f(\tilde{x}_{k}) - f(\tilde{x}_{j}) \|_{2}^{2}} \right) \tag{5} +$$ + +A lower GPU indicates that the embeddings are more evenly spread across the hypersphere. Utilizing large portions of the hypersphere for embeddings suggests a broader range of features learned and thus increased generalizability to unseen data. + +# 4. Methods + +# 4.1. SupCon + +SupCon is best understood as an extension of the original unsupervised contrastive NT-Xent loss. In essence, the unsupervised loss maximizes the cosine similarity between embeddings of positive pairs while minimizing the similarity between embeddings of negative pairs. Using the notation introduced in Sec.
3.1, the loss is: + +$$ +\mathcal{L}_{\text{NT-Xent}} = - \sum_{x \in \mathcal{X}} \log \frac{e^{f(\tilde{x}) \cdot f(\tilde{x}^{+}) / \tau}}{\sum_{\tilde{x}^{-} \in \mathcal{W} \setminus \{\tilde{x}\}} e^{f(\tilde{x}) \cdot f(\tilde{x}^{-}) / \tau}} \tag{6} +$$ + +![](images/1692a78b9b46d220e4e6b427af55b5ac2ccd9eb7babc2657ad23508b944eeb8b.jpg) +Figure 3. We introduce two fixes for supervised contrastive learning. Supervised Minority applies supervision exclusively to the minority class, preventing class collapse and enhancing alignment of the minority class. Supervised Prototypes attracts samples to fixed class prototypes, improving both class alignment and uniformity. + +Here, $\cdot$ denotes the dot product and $\tau \in \mathbb{R}^{+}$ is the temperature parameter. In practice, $\mathcal{X}$ and $\mathcal{W}$ are restricted to the elements and views of a single batch. + +SupCon extends the NT-Xent loss to include full label information. By considering all samples of the same class in the numerator, it aims to maximize the similarity between all projections of a class. For each class $i$, the SupCon loss is defined as: + +$$ +\mathcal{L}_{\mathrm{SupCon}}^{i} \triangleq - \sum_{x \in \mathcal{X}_i} \frac{1}{|\mathcal{W}_i \setminus \{\tilde{x}\}|} \sum_{p \in \mathcal{W}_i \setminus \{\tilde{x}\}} \log \frac{e^{f(\tilde{x}) \cdot f(p) / \tau}}{\sum_{a \in \mathcal{W} \setminus \{\tilde{x}\}} e^{f(\tilde{x}) \cdot f(a) / \tau}} \tag{7} +$$ + +The total loss in the binary case is then: + +$$ +\mathcal{L}_{\mathrm{SupCon}} = \mathcal{L}_{\mathrm{SupCon}}^{0} + \mathcal{L}_{\mathrm{SupCon}}^{1} \tag{8} +$$ + +# 4.2. Supervised Minority + +We introduce Supervised Minority, a novel supervised contrastive learning strategy designed specifically for binary imbalanced datasets.
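For concreteness, the NT-Xent and SupCon objectives of Eqs. 6–8 can be sketched in NumPy as follows. This is our own minimal batch-level illustration, not the paper's implementation; embeddings are assumed $\ell_2$-normalized, views `2i` and `2i+1` form a positive pair, and `labels` holds one class label per view.

```python
import numpy as np

def nt_xent(emb: np.ndarray, tau: float = 0.1) -> float:
    """Unsupervised NT-Xent loss (Eq. 6) over a batch of view embeddings."""
    n = emb.shape[0]
    sim = emb @ emb.T / tau                 # pairwise f(x~)·f(x~') / τ
    np.fill_diagonal(sim, -np.inf)          # exclude the anchor itself (W \ {x~})
    partner = np.arange(n) ^ 1              # view 2i pairs with 2i+1 and vice versa
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float((log_denom - sim[np.arange(n), partner]).sum())

def supcon(emb: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """SupCon loss (Eqs. 7-8): every other same-class view is a positive."""
    n = emb.shape[0]
    sim = emb @ emb.T / tau
    np.fill_diagonal(sim, -np.inf)
    log_denom = np.log(np.exp(sim).sum(axis=1))
    loss = 0.0
    for i in range(n):
        pos = np.where((labels == labels[i]) & (np.arange(n) != i))[0]
        loss += (log_denom[i] - sim[i, pos]).mean()
    return float(loss)
```

Supervised Minority, introduced next, combines the two objectives: the supervised form is applied only to the minority class, while majority-class anchors keep the unsupervised NT-Xent treatment.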
Supervised Minority applies supervision exclusively to the minority class (see Fig. 3). Formally, we combine SupCon in the minority (min) class with the NT-Xent [5] loss in the majority (maj) class: + +$$ +\mathcal{L}_{\mathrm{SupMin}} = \mathcal{L}_{\mathrm{SupCon}}^{\mathrm{min}} + \mathcal{L}_{\text{NT-Xent}}^{\mathrm{maj}} \tag{9} +$$ + +By using the NT-Xent loss for the majority class, we aim to guard against class collapse and increase uniformity. Additionally, by using SupCon in the minority class we enhance its alignment. + +# 4.3. Supervised Prototypes + +Our second approach, Supervised Prototypes, builds upon the concept of fixed prototypes [16, 21, 26, 41]. We initialize two fixed class prototypes at opposite ends of the representation space's hypersphere. Each prototype attracts samples of its respective class (see Fig. 3). While prototypes improve class alignment, they can reduce latent space uniformity [21]. To mitigate this, we attract samples towards the prototype only if their cosine similarity with that prototype is less than 0.5. When a sample's representation already has high similarity to its prototype, it is influenced only by the NT-Xent loss. + +We place the majority class prototype $p_{\mathrm{maj}}$ on the $S^{d-1}$ unit sphere so that it minimizes the average distance to all encodings of unaugmented training samples. This position is determined through gradient descent. Let $p_{\mathrm{min}} = -p_{\mathrm{maj}}$ represent the minority class prototype, ensuring maximal separation on the hypersphere from $p_{\mathrm{maj}}$.
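The prototype placement and thresholded attraction described above can be sketched as follows. This is our own illustrative NumPy code, not the authors' implementation: the paper specifies only gradient descent under a unit-norm constraint, so the learning rate, iteration count, and random initialization here are assumptions.

```python
import numpy as np

def fit_majority_prototype(enc: np.ndarray, lr: float = 0.1, steps: int = 500) -> np.ndarray:
    """Find p_maj on the unit sphere S^{d-1} that minimizes the average
    l2 distance to the encodings `enc` of unaugmented training samples,
    via projected gradient descent. p_min is then simply -p_maj."""
    rng = np.random.default_rng(0)
    p = rng.normal(size=enc.shape[1])
    p /= np.linalg.norm(p)
    for _ in range(steps):
        diff = p[None, :] - enc                        # (n, d)
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
        grad = (diff / dist).mean(axis=0)              # gradient of mean ||p - e||_2
        p = p - lr * grad
        p /= np.linalg.norm(p)                         # project back onto the sphere
    return p

def prototype_gate(emb: np.ndarray, proto: np.ndarray) -> np.ndarray:
    """Mask of samples still attracted to their prototype: the prototype
    term is active only while cosine similarity f(x~)·p_i <= 0.5."""
    return emb @ proto <= 0.5
```

With the gate, samples already well aligned with their prototype fall back to the plain NT-Xent term, which helps preserve uniformity.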
+ +The loss of a sample and its class prototype is given by: + +$$ +\mathcal{L}_{p_{\tilde{x}}}^{i} = - \log \frac{e^{f(\tilde{x}) \cdot p_i / \tau}}{\sum_{\tilde{x}^{-} \in \mathcal{W} \setminus \{\tilde{x}\}} e^{f(\tilde{x}) \cdot f(\tilde{x}^{-}) / \tau}} \tag{10} +$$ + +The complete contrastive loss with prototype alignment for class $i$, $\mathcal{L}_{\mathrm{SupConProto}}^{i}$, and the overall binary supervised contrastive loss with prototype alignment, $\mathcal{L}_{\mathrm{SupConProto}}$, are defined as: + +$$ +\mathcal{L}_{\mathrm{SupConProto}}^{i} \triangleq \sum_{x \in \mathcal{X}_i} \begin{cases} \mathcal{L}_{\text{NT-Xent}}(\tilde{x}) + \mathcal{L}_{p_{\tilde{x}}}^{i} & \text{if } f(\tilde{x}) \cdot p_i \leq 0.5 \\ \mathcal{L}_{\text{NT-Xent}}(\tilde{x}) & \text{otherwise} \end{cases} \tag{11} +$$ + +$$ +\mathcal{L}_{\mathrm{SupConProto}} = \mathcal{L}_{\mathrm{SupConProto}}^{0} + \mathcal{L}_{\mathrm{SupConProto}}^{1} \tag{12} +$$ + +# 5. Experimental setup + +# 5.1. Datasets + +We utilize a total of seven datasets with binary class distributions that can be grouped into two categories: subsets of the iNaturalist21 (iNat21) dataset [35], where we artificially control class imbalance, and real-world medical datasets that naturally exhibit binary distributions and class imbalances. Our three subsets of iNat21 comprise plants (oaks and flowering plants), insects (bees and wasps), and mammals (hooved animals and carnivores). For each subset, we fix the class ratio to $50\%-50\%$, $95\%-5\%$, and $99\%-1\%$, while keeping the total number of samples constant.
Our real-world medical datasets include a cardiac dataset curated from the UK Biobank population study [29], two datasets from the MedMNIST collection, BreastMNIST and PneumoniaMNIST [40], and the FracAtlas dataset [1]. Additional details about dataset characteristics and preprocessing are provided in supplementary Sec. S1. + +# 5.2. Network architecture and training + +In line with prior work and baselines, we use a ResNet-50 image encoder [11] and follow established pre-training protocols [5, 6, 17]. After pre-training, we fine-tune a linear layer using a balanced subset comprising $1\%$ of the training data and report accuracy on a balanced test set. As the medical datasets contain far fewer samples, we do not subsample them for fine-tuning and report the area under the receiver operating characteristic curve (AUC). Further information on the training protocols can be found in supplementary Sec. S2. + +# 5.3. Baselines + +In addition to standard SupCon [17] and weighted cross-entropy, we have included the five leading supervised contrastive learning methods for long-tailed datasets as baselines: parametric contrastive learning (PaCo) [7], $k$-positive contrastive learning (KCL) [15], targeted supervised contrastive learning (TSC) [21], subclass-balancing contrastive learning (SBC) [14], and balanced contrastive learning (BCL) [42]. We include results for KCL and TSC with 3 and 6 positives to fairly adapt them to the heavily imbalanced binary case. A brief description of each method can be found in Sec. 2.2. Further details about the setup and tuning of the baselines are included in supplementary Sec. S3. Additional baselines using classical, non-contrastive strategies to handle class imbalance, such as focal loss, oversampling, and undersampling, can be found in supplementary Sec. S4. + +# 6. Results + +First, we measure the performance of SupCon on binary imbalanced distributions, showing that its performance inversely correlates with the degree of class imbalance.
Next, we show how the newly introduced SAA and CAC can diagnose representation space collapse, an issue that canonical alignment metrics fail to detect. We substantiate these findings with a proof that provides a mathematical explanation for the observed behavior. Finally, we show the benefit of our proposed supervised contrastive learning strategies for binary imbalanced datasets compared to existing baselines for long-tailed distributions. + +# 6.1. SupCon performance on binary datasets degrades with increasing class imbalance + +We first evaluate SupCon on the three binary natural image datasets while varying the degree of class imbalance (see Tab. 1). We observe a sharp drop in linear probing accuracy as class imbalance increases. Specifically, models trained with $1\%$ and $5\%$ minority class representation achieve downstream accuracies between $50\%$ and $60\%$, compared to over $90\%$ accuracy in the balanced case. While + +Table 1. Balanced accuracy of all evaluated methods on three binary natural imaging datasets at varying degrees of class imbalance. We compare standard weighted cross-entropy loss and supervised contrastive learning (top rows) to five baselines for supervised contrastive learning on long-tailed distributions (middle rows) and our two proposed fixes (bottom rows). The Supervised Minority strategy does not apply to balanced settings and is therefore not reported there.
| Method | Plants 50% | Plants 5% | Plants 1% | Insects 50% | Insects 5% | Insects 1% | Animals 50% | Animals 5% | Animals 1% |
|---|---|---|---|---|---|---|---|---|---|
| Weighted CE | 81.1 | 61.4 | 60.1 | 82.4 | 63.4 | 62.8 | 70.7 | 61.9 | 57.3 |
| SupCon [17] | 93.7 ± 0.6 | 56.2 ± 1.6 | 54.4 ± 2.0 | 93.3 ± 0.1 | 62.6 ± 0.9 | 56.4 ± 0.1 | 80.8 ± 1.2 | 54.4 ± 1.6 | 56.9 ± 1.8 |
| PaCo [7] | 91.5 ± 0.9 | 59.2 ± 1.4 | 55.9 ± 2.2 | 92.4 ± 2.2 | 66.4 ± 0.6 | 53.7 ± 1.2 | 79.2 ± 1.5 | 65.3 ± 1.1 | 55.4 ± 1.6 |
| KCL (K=3) [15] | 90.6 ± 0.7 | 87.6 ± 0.4 | 81.1 ± 0.3 | 89.8 ± 0.3 | 81.1 ± 1.0 | 73.3 ± 1.4 | 81.8 ± 0.3 | 76.6 ± 0.6 | 71.2 ± 0.8 |
| KCL (K=6) [15] | 94.2 ± 0.4 | 86.6 ± 1.4 | 78.6 ± 0.5 | 91.5 ± 0.6 | 79.8 ± 1.2 | 69.2 ± 2.1 | 82.6 ± 0.6 | 75.3 ± 0.7 | 70.1 ± 2.0 |
| TSC (K=3) [21] | 93.4 ± 0.8 | 88.0 ± 0.5 | 79.4 ± 1.1 | 88.2 ± 0.6 | 81.2 ± 1.0 | 73.3 ± 1.6 | 82.4 ± 2.5 | 76.1 ± 0.9 | 71.2 ± 0.8 |
| TSC (K=6) [21] | 94.6 ± 0.4 | 87.5 ± 0.7 | 80.1 ± 0.4 | 91.1 ± 0.5 | 79.8 ± 1.3 | 71.2 ± 2.0 | 83.0 ± 1.3 | 75.2 ± 2.1 | 72.2 ± 0.9 |
| SBC [14] | 75.4 ± 0.9 | 57.2 ± 2.6 | 55.6 ± 1.9 | 77.6 ± 1.6 | 51.8 ± 2.3 | 54.0 ± 3.8 | 72.4 ± 1.1 | 55.1 ± 1.3 | 56.1 ± 2.1 |
| BCL [42] | 94.1 ± 0.3 | 85.0 ± 4.4 | 71.3 ± 2.9 | 94.5 ± 0.1 | 80.0 ± 2.5 | 74.0 ± 0.1 | 86.2 ± 0.2 | 76.5 ± 0.5 | 60.3 ± 0.7 |
| Sup Minority | - | 89.8 ± 0.6 | 85.4 ± 0.5 | - | 82.8 ± 1.1 | 78.8 ± 0.8 | - | 77.9 ± 0.9 | 75.3 ± 0.5 |
| Sup Prototypes | 95.1 ± 0.2 | 88.7 ± 0.7 | 83.4 ± 1.8 | 93.0 ± 0.3 | 81.2 ± 0.7 | 73.7 ± 1.3 | 82.9 ± 0.7 | 79.2 ± 1.3 | 73.0 ± 1.3 |
+ +SupCon outperforms the weighted cross-entropy baseline on balanced datasets by over $10\%$, it underperforms this simple baseline by over $5\%$ on binary imbalanced distributions. We find that the performance of SupCon drops below that of weighted cross-entropy around $20\%$ imbalance, before completely collapsing between $5\%$ and $1\%$ (see supplementary Sec. S5). + +# 6.2. Analyzing representation spaces using canonical metrics and novel metrics + +To understand the underlying causes of the observed degradation, we analyze the learned representation spaces using both the canonical and new metrics (see Fig. 4). Both the canonical SAD and CAD are close to zero across all imbalances, suggesting high alignment of the learned representation space. However, they fail to put these distances in context relative to samples from other instances or classes. In comparison, our novel SAA and CAC metrics indicate that embeddings do not form distinct class-wise clusters. An SAA of 0 shows that the learned embeddings cannot differentiate between samples by input semantics. A CAC close to $50\%$ suggests that both minority and majority class samples are almost equally mixed in local neighborhoods. Together, the new metrics confirm that SupCon's representation space collapses under heavy imbalance. + +To further substantiate this empirically observed behavior, we present a proof in supplementary Sec. S7. The proof shows that gradients in the final network layer are upper-bounded by the inverse of the number of positives for a given sample. When the majority class dominates training batches, the gradient quickly saturates, preventing meaningful updates for that class and causing collapse towards a single point in the representation space. + +# 6.3. Performance of fixes on natural and medical imaging datasets + +Next, we compare our two fixes, Supervised Minority and Supervised Prototypes, against five established baselines for long-tailed supervised contrastive learning.
Our Supervised Minority fix achieves the best linear probing performance across all iNat21 datasets (see Tab. 1), outperforming SupCon by $20\%$ to $35\%$ . + +Our Supervised Minority fix also surpasses the performance of the five baselines developed for long-tailed datasets. Compared to these, its effectiveness increases at higher class imbalance. At $5\%$ class imbalance it outperforms all baselines by at least $1\%$ , and at $1\%$ imbalance by a margin of $3\%$ . Supervised Prototypes performed second best on all natural imaging datasets. + +Supervised Prototypes performed best on three of the four medical datasets (see Tab. 2). Both of our fixes generally match or outperform all five baselines developed for long-tailed data distributions. On the infarction dataset which has the strongest imbalance $(4\%)$ we see the largest gain of $+2\%$ AUC over the best performing baseline. + +Extensive ablations across temperatures, batch sizes and varying degrees of supervision in both the minority and majority class can be found in supplementary Sec. S8. Visualizations of the learned embeddings via UMAP in supplementary Sec. S9 also corroborate that Supervised Minority and Supervised Prototypes avoid the representation collapse observed in standard SupCon. + +# 6.4. Our proposed metrics correlate with downstream classification performance + +Classical uniformity (GPU) and alignment (SAD) metrics focus on pairwise sample-level relationships without considering class-level context. While class alignment dis + +![](images/c4bb15b90f40cd964405404de4e2a98bdd54c5c3e91cb7846c1ee76a1fef5678.jpg) +Figure 4. Boxplots of metrics analysing SupCon's representation space learned from the plants dataset. As class imbalance grows the representation space collapses despite the canonical SAD and CAD metrics being low. In contrast, SAA and CAC correctly identify the collapse. Similar results are observed on the insects and animals datasets (see supplementary Sec. S6). 
+ +![](images/204946b0640cc443bf8e774a02b75a4911860af3398190c77377119c91ebd09c.jpg) + +![](images/112f897df9de8ffb80e655c787acf94e4cfd505bfd4661b581d88de6445be3f4.jpg) + +Table 2. Area under the curve (AUC) of all evaluated methods on four medical imaging datasets. We compare standard weighted cross-entropy loss and supervised contrastive learning (top rows) to five baselines for supervised contrastive learning for long-tailed distributions (middle rows) and our two proposed fixes (bottom rows). + +
| Method | UKBB: Infarction (4%) | MedMNIST: BreastMNIST (37%) | MedMNIST: PneumoniaMNIST (35%) | FracAtlas: Fractures (21%) |
|---|---|---|---|---|
| Weighted CE | 72.4 | 75.1 | 98.8 | 79.8 |
| SupCon [17] | 61.9 ± 1.9 | 75.1 ± 0.7 | 99.5 ± 0.1 | 84.8 ± 0.1 |
| PaCo [7] | 66.6 ± 1.3 | 66.0 ± 1.9 | 98.7 ± 0.2 | 83.7 ± 0.6 |
| KCL (K=3) [15] | 75.3 ± 0.4 | 89.9 ± 0.8 | 99.6 ± 0.1 | 88.2 ± 0.1 |
| KCL (K=6) [15] | 73.6 ± 0.2 | 89.5 ± 0.4 | 98.9 ± 0.1 | 86.5 ± 0.1 |
| TSC (K=3) [21] | 75.7 ± 0.1 | 89.2 ± 0.1 | 99.5 ± 0.1 | 87.1 ± 0.1 |
| TSC (K=6) [21] | 75.0 ± 0.1 | 88.5 ± 0.1 | 99.5 ± 0.1 | 86.3 ± 0.1 |
| SBC [14] | 70.0 ± 0.3 | 80.8 ± 0.7 | 99.3 ± 1.2 | 80.9 ± 5.3 |
| BCL [42] | 74.0 ± 0.1 | 90.5 ± 0.1 | 99.6 ± 0.1 | 84.9 ± 0.1 |
| Sup Minority | 77.7 ± 1.1 | 86.4 ± 0.2 | 99.6 ± 0.1 | 82.3 ± 0.7 |
| Sup Prototypes | 77.9 ± 0.4 | 90.7 ± 0.5 | 99.8 ± 0.1 | 86.0 ± 0.1 |
+ +tance (CAD) incorporates intra-class similarity for supervised contrastive learning, it does not account for the critical inter-class relationships that influence downstream separability. + +As shown in Fig. 5, the canonical metrics exhibit near-zero correlation with linear probing accuracy when plotted globally across all datasets. In contrast, our class alignment consistency achieves an $R^2$ value of 0.69, indicating a strong global linear relationship with downstream classification accuracy. + +When considering each dataset individually and averaging the correlations over all datasets, we find weak correlations between 0.15 and 0.3 for the classical metrics. In contrast, our novel metrics achieve mean $R^2$ values of 0.45 and 0.65 because they consider inter-sample and inter-class statistics. This shows the suitability of our metrics for evaluating representation space quality for downstream utility on both a global and a local scale. + +# 7. Discussion and conclusion + +Although imbalanced binary distributions are commonly found in real-world machine learning problems, and especially in medical applications, they have received little attention in the context of supervised contrastive learning. In extensive experiments on seven natural and medical imaging datasets, we have shown that SupCon on binary imbalanced distributions often results in collapsed representations, leading to poor downstream performance. + +To diagnose these failures, we introduced two new metrics: Sample Alignment Accuracy (SAA) and Class Alignment Consistency (CAC), which extend the notion of alignment to measure how well samples and classes are distinguished from each other in the learned space. These metrics uncovered shortcomings that canonical measures overlooked, showing that SupCon fails to form meaningful embeddings under high imbalance.
Crucially, CAC correlates well with linear probing accuracy and is thus a suitable metric for measuring representation space quality for down- + +![](images/9275180179453b7e53aa77a754d6856367fae45448d8cc82f9f6ebc967d61d6a.jpg) + +![](images/6c0d10c01a4e9b45474f87c9f0201def58c32954a0b3e3d749abd15f0641929e.jpg) + +![](images/e0a515388bb99f3196ae800f72514ee3eead0bba271c14a748c651098daef0b3.jpg) + +![](images/9289f404fbf5959f7d2f666ea40c0d5351900b997b74bee107ed9d397ad19fde.jpg) +Figure 5. Correlations between five representation-space metrics and linear probing performance across all datasets and all considered methods. The overall $R^2$ is calculated globally over all points, while the dataset mean $R^2$ is calculated per dataset and then averaged. As SAA and CAC are the only metrics that account for relationships between samples and classes instead of simply within them, they correlate much more strongly with downstream performance. + +![](images/490a1c0d8d426b7238f5da5038de1d556074d832a761ecc83504a862a5e41975.jpg) + +![](images/cfaaaa08855c8704d3c33681f109b482bd23feab89ef6aa2a8644dab61983c22.jpg) + +stream applications. + +Finally, we proposed two fixes, Supervised Minority and Supervised Prototypes, specifically tailored to address binary imbalance. Both solutions boost accuracy by up to $35\%$ over standard SupCon and surpass existing methods for long-tailed distributions by up to $5\%$. With minimal additions to the standard SupCon loss and negligible computational overhead, these fixes offer a straightforward path to improved performance on binary imbalanced classification problems. + +**Limitations** A limitation of our methods is that the Supervised Minority fix cannot be used when data is balanced, as there is no minority class. Moreover, there does not seem to be a particularly clear pattern for when Supervised Minority performs better than Supervised Prototypes or vice versa.
We observed that, of our two fixes, Supervised Prototypes consistently performed better on the medical datasets, while Supervised Minority usually performed better on the natural image datasets. We hypothesize that this could be due either to domain-specific data characteristics or to a dependence of both methods on the degree of class imbalance, which differed slightly across our experiments. When choosing an approach for a new dataset, it would be prudent to test both methods. + +**Conclusion** Our study complements and extends a series of previous works that have aimed to explore the theoretical foundations of contrastive learning and researched its application to long-tailed datasets. By focusing on the particularly challenging case of binary imbalanced datasets, we have improved the understanding of the dynamics of contrastive learning and developed tools to diagnose and enhance methods dealing with such datasets, which are very common in real-world applications such as medicine. + +**Acknowledgments** This research has been conducted using the UK Biobank Resource under Application Number 87802. This work was supported in part by the European Research Council grant Deep4MI (Grant Agreement no. 884622). Martin J. Menten is funded by the German Research Foundation under project 532139938. + +# References + +[1] Iftekharul Abedeen, Md Ashiqur Rahman, Fatema Zohra Prottyasha, Tasnim Ahmed, Tareque Mohmud Chowdhury, and Swakkhar Shatabda. FracAtlas: A dataset for fracture classification, localization and segmentation of musculoskeletal radiographs. Scientific Data, 10(1):521, 2023. 5, 11, 13 +[2] Mido Assran, Randall Balestriero, Quentin Duval, Florian Bordes, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, and Nicolas Ballas. The hidden uniform cluster prior in self-supervised learning. In *The Eleventh International Conference on Learning Representations*, 2023.
1 +[3] Antonio Pedro Camargo, Simon Roux, Frederik Schulz, Michal Babinski, Yan Xu, Bin Hu, Patrick SG Chain, Stephen Nayfach, and Nikos C Kyrpides. Identification of mobile genetic elements with geNomad. Nature Biotechnology, 42(8):1303-1312, 2024. 1 +[4] Mayee Chen, Daniel Y Fu, Avanika Narayan, Michael Zhang, Zhao Song, Kayvon Fatahalian, and Christopher Ré. Perfectly balanced: Improving transfer and robustness of supervised contrastive learning. In International Conference on Machine Learning, pages 3090-3122. PMLR, 2022. 2 +[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020. 4, 5, 14 +[6] Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, and Ming-Hsuan Yang. Incremental false negative detection for contrastive learning. In 10th International Conference on Learning Representations, ICLR 2022, 2022. 5, 14 +[7] Jiequan Cui, Zhisheng Zhong, Shu Liu, Bei Yu, and Jiaya Jia. Parametric contrastive learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 715-724, 2021. 1, 2, 5, 6, 7, 15 +[8] Florian Graf, Christoph Hofer, Marc Niethammer, and Roland Kwitt. Dissecting supervised contrastive learning. In International Conference on Machine Learning, pages 3821-3830. PMLR, 2021. 1, 2 +[9] Paul Hager, Martin J. Menten, and Daniel Rueckert. Best of both worlds: Multimodal contrastive learning with tabular and imaging data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23924-23935, 2023. 2, 12 +[10] Tengda Han, Weidi Xie, and Andrew Zisserman. Self-supervised co-training for video representation learning. Advances in Neural Information Processing Systems, 33:5679-5690, 2020. 1 +[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 5, 13, 17 +[12] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020. 15 +[13] Hee E. Kim et al. Transfer learning for medical image classification: a literature review. BMC Medical Imaging, 2022. 2 +[14] Chengkai Hou, Jieyu Zhang, Haonan Wang, and Tianyi Zhou. Subclass-balancing contrastive learning for long-tailed recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5395-5407, 2023. 1, 2, 5, 6, 7, 15 +[15] Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, and Jiashi Feng. Exploring balanced feature spaces for representation learning. In International Conference on Learning Representations, 2021. 2, 5, 6, 7, 14, 21 +[16] Tejaswi Kasarla, Gertjan Burghouts, Max Van Spengler, Elise Van Der Pol, Rita Cucchiara, and Pascal Mettes. Maximum class separation as inductive bias in one matrix. Advances in Neural Information Processing Systems, 35:19553-19566, 2022. 5 +[17] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in Neural Information Processing Systems, 33:18661-18673, 2020. 1, 5, 6, 7, 14, 17 +[18] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. 14 +[19] Vinod Kumar, Gotam Singh Lalotra, Ponnusamy Sasikala, Dharmendra Singh Rajput, Rajesh Kaluri, Kuruva Lakshmanna, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, and Mueen Uddin. Addressing binary classification over class imbalanced clinical datasets using computationally intelligent techniques. Healthcare, 10(7), 2022. 2 +[20] Shikun Li, Xiaobo Xia, Shiming Ge, and Tongliang Liu.
Selective-supervised contrastive learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 316-325, 2022. 1 +[21] Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio S Feris, Piotr Indyk, and Dina Katabi. Targeted supervised contrastive learning for long-tailed recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6918-6928, 2022. 1, 2, 3, 5, 6, 7 +[22] Victor Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Y Zou. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. Advances in Neural Information Processing Systems, 35:17612-17625, 2022. 17 +[23] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988, 2017. 15 +[24] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. 14 +[25] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. UMAP: Uniform manifold approximation and projection. Journal of Open Source Software, 3(29):861, 2018. 22 +[26] Pascal Mettes, Elise Van der Pol, and Cees Snoek. Hyperspherical prototype networks. Advances in Neural Information Processing Systems, 32, 2019. 5 +[27] Richard W Nesto. Screening for asymptomatic coronary artery disease in diabetes. Diabetes Care, 22(9):1393, 1999. 12 +[28] Steffen Schneider, Jin Hwa Lee, and Mackenzie Weygandt Mathis. Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960):360-368, 2023. 1 +[29] Cathie Sudlow, John Gallacher, Naomi Allen, Valerie Beral, Paul Burton, John Danesh, Paul Downey, Paul Elliott, Jane Green, Martin Landray, et al.
UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3):e1001779, 2015. 5, 11, 12 +[30] Bo Sun, Banghuai Li, Shengcai Cai, Ye Yuan, and Chi Zhang. FSCE: Few-shot object detection via contrastive proposal encoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7352-7362, 2021. 1 +[31] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pages 20827-20840. PMLR, 2022. 1 +[32] Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive learning, multi-view redundancy, and linear models. In Algorithmic Learning Theory, pages 1179-1206. PMLR, 2021. 11 +[33] Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, and Louis-Philippe Morency. Demystifying self-supervised learning: An information-theoretical framework. arXiv preprint arXiv:2006.05576, 2020. 11 +[34] Paul Valensi, Luc Lorgis, and Yves Cottin. Prevalence, incidence, predictive factors and prognosis of silent myocardial infarction: a review of the literature. Archives of Cardiovascular Diseases, 104(3):178-188, 2011. 12 +[35] Grant Van Horn, Elijah Cole, Sara Beery, Kimberly Wilber, Serge Belongie, and Oisin Mac Aodha. Benchmarking representation learning for natural world image collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12884-12893, 2021. 5, 11 +[36] Peng Wang, Kai Han, Xiu-Shen Wei, Lei Zhang, and Lei Wang. Contrastive learning based hybrid networks for long-tailed image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 943-952, 2021. 1 +[37] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939.
PMLR, 2020. 2, 3, 4 +[38] Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. In International Conference on Learning Representations, 2022. 2, 11 + +[39] Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, and Baharan Mirzasoleiman. Which features are learnt by contrastive learning? On the role of simplicity bias in class collapse and feature suppression. In Proceedings of the 40th International Conference on Machine Learning, pages 38938-38970. PMLR, 2023. 2 +[40] Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, and Bingbing Ni. MedMNIST v2: a large-scale lightweight benchmark for 2D and 3D biomedical image classification. Scientific Data, 10(1):41, 2023. 5, 11, 13 +[41] Yibo Yang, Shixiang Chen, Xiangtai Li, Liang Xie, Zhouchen Lin, and Dacheng Tao. Inducing neural collapse in imbalanced learning: Do we really need a learnable classifier at the end of deep neural network? Advances in Neural Information Processing Systems, 35:37991-38002, 2022. 5 +[42] Jianggang Zhu, Zheng Wang, Jingjing Chen, Yi-Ping Phoebe Chen, and Yu-Gang Jiang. Balanced contrastive learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6908-6917, 2022.
1, 2, 5, 6, 7 \ No newline at end of file diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/images.zip b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5117d506a556156c31d54d53635726bbe0c288f5 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8aa41837dd19c25d6232d42f2192dbe5763fb33d6d6f2fe8df17c7353efc7c3 +size 540149 diff --git a/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/layout.json b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0c43b2f7cc2641c379585710a4457549f79fd0f8 --- /dev/null +++ b/CVPR/2025/A Tale of Two Classes_ Adapting Supervised Contrastive Learning to Binary Imbalanced Datasets/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efc198590d95aeb3f08a84d66a80bb04ed7a71d11b35734213c8eb1ee6d3be01 +size 384038