Chelsea707 committed on
Commit 20f7cb1 · verified · 1 Parent(s): c375e79

Add Batch 0f62b3e2-269a-4f59-a832-198e02eeb29b

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_content_list.json +3 -0
  2. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_model.json +3 -0
  3. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_origin.pdf +3 -0
  4. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/full.md +417 -0
  5. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/images.zip +3 -0
  6. ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/layout.json +3 -0
  7. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_content_list.json +3 -0
  8. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_model.json +3 -0
  9. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_origin.pdf +3 -0
  10. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/full.md +643 -0
  11. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/images.zip +3 -0
  12. ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/layout.json +3 -0
  13. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_content_list.json +3 -0
  14. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_model.json +3 -0
  15. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_origin.pdf +3 -0
  16. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/full.md +0 -0
  17. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/images.zip +3 -0
  18. ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/layout.json +3 -0
  19. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_content_list.json +3 -0
  20. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_model.json +3 -0
  21. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_origin.pdf +3 -0
  22. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/full.md +430 -0
  23. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/images.zip +3 -0
  24. ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/layout.json +3 -0
  25. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_content_list.json +3 -0
  26. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_model.json +3 -0
  27. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_origin.pdf +3 -0
  28. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/full.md +744 -0
  29. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/images.zip +3 -0
  30. ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/layout.json +3 -0
  31. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_content_list.json +3 -0
  32. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_model.json +3 -0
  33. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_origin.pdf +3 -0
  34. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/full.md +531 -0
  35. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/images.zip +3 -0
  36. ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/layout.json +3 -0
  37. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_content_list.json +3 -0
  38. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_model.json +3 -0
  39. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_origin.pdf +3 -0
  40. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/full.md +0 -0
  41. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/images.zip +3 -0
  42. ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/layout.json +3 -0
  43. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_content_list.json +3 -0
  44. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_model.json +3 -0
  45. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_origin.pdf +3 -0
  46. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/full.md +494 -0
  47. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/images.zip +3 -0
  48. ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/layout.json +3 -0
  49. ICLR/2025/When Attention Sink Emerges in Language Models_ An Empirical View/eba1b017-a633-4834-ad41-af9a7b9407c7_content_list.json +3 -0
  50. ICLR/2025/When Attention Sink Emerges in Language Models_ An Empirical View/eba1b017-a633-4834-ad41-af9a7b9407c7_model.json +3 -0
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:369f71e5af5f202d81465c664efe3bbaa10fe2066ec561e48e3bda089abc136f
size 104581
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f084fb477d74b036af85823cf3107f94b5cdbcc2311f16927c2a2ffe08c34f7c
size 127538
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/fc940e3f-7ebd-4b03-a591-8e352d01b536_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e321eb43372975b06417c3c1a965c521122471a98593910e0c70eba0ddad54b
size 590732
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/full.md ADDED
@@ -0,0 +1,417 @@
# VISION-RWKV: EFFICIENT AND SCALABLE VISUAL PERCEPTION WITH RWKV-LIKE ARCHITECTURES

Yuchen Duan $^{1,2*}$ , Weiyun Wang $^{3,2*}$ , Zhe Chen $^{4,2*}$ , Xizhou Zhu $^{5,2,6}$ , Lewei Lu $^{6}$ , Tong Lu $^{4}$ , Yu Qiao $^{2}$ , Hongsheng Li $^{1}$ , Jifeng Dai $^{5,2}$ , Wenhai Wang $^{1,2\boxtimes}$

$^{1}$ The Chinese University of Hong Kong, $^{2}$ Shanghai AI Laboratory, $^{3}$ Fudan University, $^{4}$ Nanjing University, $^{5}$ Tsinghua University, $^{6}$ SenseTime Research

# ABSTRACT

Transformers have revolutionized computer vision and natural language processing, but their high computational complexity limits their application in high-resolution image processing and long-context analysis. This paper introduces Vision-RWKV (VRWKV), a model that builds upon the RWKV architecture from the NLP field with key modifications tailored specifically for vision tasks. Similar to the Vision Transformer (ViT), our model demonstrates robust global processing capabilities, efficiently handles sparse inputs like masked images, and can scale up to accommodate both large-scale parameters and extensive datasets. Its distinctive advantage is its reduced spatial aggregation complexity, which enables seamless processing of high-resolution images without the need for window operations. Our evaluations demonstrate that VRWKV surpasses ViT's performance in image classification and has significantly faster speeds and lower memory usage when processing high-resolution inputs. In dense prediction tasks, it outperforms window-based models while maintaining comparable speeds. These results highlight VRWKV's potential as a more efficient alternative for visual perception tasks. Code and models are available at https://github.com/OpenGVLab/Vision-RWKV.

# 1 INTRODUCTION

Vision Transformers (ViTs) (Dosovitskiy et al., 2020; Touvron et al., 2021a; Vaswani et al., 2017; Steiner et al., 2021; He et al., 2021), renowned for their flexibility and global information processing capabilities, have established new benchmarks in a variety of vision tasks in the past few years. However, the quadratic computational complexity associated with ViTs limits their ability to efficiently process high-resolution images and lengthy sequences, posing a significant barrier to their broader application. As a result, the exploration of a visual architecture that integrates the versatility and comprehensive processing strengths of ViTs, while reducing computational demands, has emerged as a crucial area of research.

In recent developments within natural language processing (NLP), models with linear feature aggregation (also called "linear attention") mechanisms like RWKV (Peng et al., 2023) and Mamba (Gu & Dao, 2023) have emerged as popular solutions for achieving heightened efficiency and processing lengthy texts. These innovative models have demonstrated attributes similar to transformers (Devlin et al., 2018; Raffel et al., 2019; Smith et al., 2022b; Liu et al., 2019; Radford et al., 2018; 2019; Brown et al., 2020; Lewis et al., 2019) in NLP tasks, including the ability to handle long-range dependencies and parallel processing. Furthermore, they have also proven to be scalable, performing well with large-scale NLP datasets. Expanding these techniques into the visual domain shows promise in addressing the computational cost challenge encountered by ViTs.

To develop a vision model incorporating a linear attention mechanism based on the aforementioned methods, while ensuring high capacity for large-scale image data and diverse visual tasks, several critical issues need to be addressed. Firstly, the design of spatial feature aggregation operations needs to be reconsidered, taking into account the differences between image and text modalities. For instance, a redesign of kernels and rewriting at the CUDA level are necessary for attention mechanisms with a causal receptive field in models like RWKV. Furthermore, the issues of gradient vanishing or exploding tend to arise gradually as the model scales up, resulting in unstable training with large parameter sizes and large-scale datasets. For example, Vision Mamba (Zhu et al., 2024) only gave appropriate results on models with less than $30\mathrm{M}$ parameters. It is important to conduct an in-depth study of how linear attention models can be applied to vision tasks effectively, including examining the scalability concerning data and parameters, assessing the efficiency in handling sparse visual data, and implementing necessary techniques to ensure model stability during scaling up.

![](images/9e5fca7d4df7034c351b5af7dd13b1e3cdca96621542b57adad3d060577fdb01.jpg)
(a)

![](images/2f027a75ebfc0a8ce1d2a6052f4ab5219eddccd9700809985dfd2fbd4ff401b9.jpg)
(b)

![](images/9d241498d3a34b0980a23a9f23430cfa75ff114d846b695960f034c1eb9276dd.jpg)
(c)

Figure 1: Performance and efficiency comparison of Vision-RWKV (VRWKV) and ViT. (a) Bounding box average precision $(\mathrm{AP}^{\mathrm{b}})$ comparison of VRWKV and ViT (Touvron et al., 2021a) with window attention and global attention on the COCO (Lin et al., 2014) dataset. (b) Inference speed comparison of VRWKV-T and ViT-T across input resolutions ranging from 224 to 2048. (c) GPU memory comparison of VRWKV-T and ViT-T across input resolutions from 224 to 2048.

Based on these points, we introduce Vision-RWKV (VRWKV). Our approach preserves the core structure and benefits of RWKV (Peng et al., 2023) while incorporating essential changes to process visual data efficiently. Specifically, we design a quad-directional shift (Q-Shift) tailored to vision tasks and modify the original causal RWKV attention mechanism into a bidirectional global attention mechanism (Bi-WKV). The Q-Shift operation expands the semantic range of individual tokens, while Bi-WKV enables the calculation of global attention with linear complexity, in RNN form, in both the forward and backward passes. We primarily modify the exponent in the RWKV attention, relaxing the limitations on the decay vector and transforming the absolute positional bias into a relative bias. These changes enhance the model's capability while ensuring scalability and stability. In this way, our VRWKV inherits the efficiency of RWKV in handling global information and sparse inputs, while also being able to model the local concepts of vision tasks. Additionally, due to severe instability encountered when scaling up the model, we explored a series of measures (Touvron et al., 2021b; Ba et al., 2016) to stabilize the model's outputs. These adjustments significantly improve the model's training stability when scaling up to a larger size.

Building on the aforementioned design, we develop a range of VRWKV models with different model scales, spanning from VRWKV-Tiny (6M) to VRWKV-Large (335M). These models are trained using large-scale datasets such as ImageNet-1K (Deng et al., 2009) and ImageNet-22K (Deng et al., 2009). We train them using both common supervised classification and the sparse input method MAE (He et al., 2021) and evaluate their performance on visual perception tasks, including classification, detection, and segmentation. Under the same settings, VRWKV has comparable performance to ViT in these tasks with lower computational costs while maintaining stable scalability. This achievement enables VRWKV training parallelism, high flexibility, excellent performance, and low inference cost simultaneously, making it a promising alternative to ViT in a wide range of vision tasks, particularly in high-resolution scenarios.

In this paper, our main contributions are:

(1) We propose VRWKV as a cost-effective alternative to ViT, offering a comprehensive substitute with lower computational requirements. Our model retains ViT's strengths, such as capturing long-range dependencies and handling sparse inputs flexibly, while reducing complexity to a linear scale. This reduction eliminates the need for window operations when processing high-resolution images, making VRWKV a more efficient and scalable solution for vision tasks.

(2) To support vision tasks, we develop a bidirectional global attention mechanism combined with a novel token shift method, Q-Shift, to achieve linear complexity in global attention. Additionally, we implement a set of tailored strategies (integrating relative positional bias, layer scale, and extra layer normalization) to tackle overflow issues and ensure stable, scalable training.

(3) Our model surpasses window-based ViTs and is comparable to global attention ViTs, demonstrating lower FLOPs and GPU memory cost with faster processing speeds as resolution increases, as shown in Figure 1. Notably, VRWKV-T achieves $75.1\%$ top-1 accuracy trained only on ImageNet-1K (Deng et al., 2009), outperforming DeiT-T (Touvron et al., 2021a) by 2.9 points. With large-scale parameters (i.e., 335M) and training data (i.e., ImageNet-22K), the top-1 accuracy of VRWKV-L is further boosted to $86.0\%$ , which is higher than ViT-L (Dosovitskiy et al., 2020) (86.04 vs 85.15). In addition, on COCO (Lin et al., 2014), a challenging downstream benchmark, our best model VRWKV-L achieves $50.6\%$ box mAP, 1.9 points better than ViT-L (50.6 vs 48.7).

# 2 RELATED WORKS

# 2.1 VISION ENCODER

Recent advances in vision encoders have significantly pushed the boundaries of computer vision, demonstrating remarkable performance across a range of tasks. Convolutional neural networks (CNNs) served as the foundational model in computer vision. The advancement of computational resources, such as GPUs, has enabled the successful training of stacked convolutional blocks like AlexNet (Krizhevsky et al., 2012) and VGG (Simonyan & Zisserman, 2014) on large-scale image classification datasets (e.g., ImageNet (Deng et al., 2009)). This development paved the way for deeper and more sophisticated convolutional neural architectures, including GoogleNet (Szegedy et al., 2015), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017).

In addition to these innovations, significant advancements have also been made with architectures like SENet (Hu et al., 2018), which introduced a channel attention mechanism to enhance model sensitivity to informative features. Similarly, SKNet (Li et al., 2019) merged multiple kernel sizes to adjust the receptive field adaptively. Further extending the CNN paradigm, recent models such as RepLKNet (Ding et al., 2022) and ConvNeXt (Liu et al., 2022) have refined the convolutional layers to improve efficiency and accuracy, while InternImage (Wang et al., 2023b) explored strategies to scale up convolution-based vision models.

Inspired by the success of self-attention layers and transformer architectures in the NLP field, the Vision Transformer (ViT) (Dosovitskiy et al., 2020) applied a transformer framework to image patches, offering a global receptive field and dynamic spatial aggregation. Due to the quadratic computational complexity of the vanilla attention mechanism, approaches like PVT (Wang et al., 2021; 2022) and Linformer (Wang et al., 2020) implemented global attention on down-sampled feature maps, whereas other approaches like Swin (Wu et al., 2022) and HaloNet (Vaswani et al., 2021; Dai et al., 2022) introduced sampling techniques to enlarge the receptive field. Mini-InternVL (Gao et al., 2024) reduces the parameter size of ViT by employing knowledge distillation from a larger ViT to a smaller one, thereby achieving efficiency in the visual encoder.

Another research direction involved replacing self-attention layers in models with linear complexity layers. Representative works include LongNet (Ding et al., 2023), RWKV (Peng et al., 2023), RetNet (Sun et al., 2023), and Mamba (Gu & Dao, 2023), though few have concentrated on visual applications. Concurrently, attempts like Vim (Zhu et al., 2024) and VMamba (Liu et al., 2024) have sought to integrate these linear attention layers into the vision domain. However, these endeavors have only been experimented with on small-scale models (parameters $< 30\mathrm{M}$ for Vim and $< 100\mathrm{M}$ for VMamba), leaving it uncertain whether their effectiveness extends to larger models.

# 2.2 FEATURE AGGREGATION MECHANISM

The research on feature aggregation has received significant attention. For visual data processing, convolutional operators (LeCun et al., 1995), known for their parameter sharing and local perception, enabled efficient handling of large-scale data through sliding computation. Despite their advantages, traditional CNN operators faced challenges in modeling long-range dependencies. To overcome this issue, advanced convolutional operators, such as the deformable convolution (Dai et al., 2017; Zhu et al., 2019; Xiong et al., 2024), have improved the flexibility of CNN operators, enhancing their long-range modeling capability.

As for the field of NLP, RNN-based operators (Elman, 1990; Memory, 2010; Qin et al., 2024) have historically dominated because of their effectiveness in sequence modeling. RNNs and LSTMs excel in capturing temporal dependencies, making them suitable for tasks requiring an understanding of sequence dynamics. Subsequently, a significant shift occurred. The introduction of the transformer architecture (Vaswani et al., 2017) marked a turning point, with both NLP and computer vision fields shifting focus toward attention-based feature aggregation. The global attention mechanism overcomes the limitations of CNNs in modeling long-range dependencies and the shortcomings of RNNs in parallel computation, while coming at a high computational cost.

![](images/42a7685c21874749b58486ffcc75424b517f473546081fafbd6bae0c4bd0ed8a.jpg)
(a) Vision-RWKV Architecture

![](images/799f0adac5fe795e0171217207fe3eb72e78cdf8da78db81449bafc9ea91beee.jpg)
(b) Vision-RWKV Encoder Layer

Figure 2: Overall architecture of VRWKV. (a) The VRWKV architecture includes $L$ identical VRWKV encoder layers, an average pooling layer, and a linear prediction head. (b) The details of the VRWKV encoder layer. Q-Shift denotes the quad-directional shift method tailored to vision tasks. The "Bi-WKV" module serves as a bidirectional RNN cell or a global attention mechanism.

To address these issues, researchers have introduced innovations such as window attention and spatial reduction attention. Window attention (Liu et al., 2021; Vaswani et al., 2021; Dai et al., 2022) restricts the self-attention computation within local windows, drastically reducing the computational complexity while preserving the receptive field through window-level interaction. Spatial reduction attention (Wang et al., 2021; 2022), on the other hand, reduces the dimensionality of the feature space before applying the attention mechanism, effectively decreasing the computational requirements without significantly degrading the model's performance.

In addition to the efforts to optimize the global attention mechanism, various operators with linear complexity have also been explored. For instance, RWKV (Peng et al., 2023) and RetNet (Sun et al., 2023) employed exponential decay to model global information efficiently, while SSMs (Gu et al., 2021a;b; Smith et al., 2022a; Wang et al., 2023a) also exhibit linear complexity with respect to sequence length, and the modifications in Mamba (Gu & Dao, 2023) make them input-dependent. Besides, XCA (Ali et al., 2021) achieved linear complexity by calculating the cross-covariance between input tokens; however, its inefficient information interaction between tokens means that additional modules are needed to complete comprehensive feature aggregation. Despite some concurrent efforts (Liu et al., 2024; Zhu et al., 2024; Fan et al., 2023), adapting these NLP-derived techniques to vision tasks remains a challenge in maintaining stable training across larger and more complex vision models.

# 3 VISION-RWKV

# 3.1 OVERALL ARCHITECTURE

In this section, we propose Vision-RWKV (VRWKV), an efficient vision encoder with a linear complexity attention mechanism. Our principle is to preserve the advantages of the original RWKV architecture (Peng et al., 2023), making necessary modifications to enable its flexible application in vision tasks, supporting sparse input, and ensuring the stability of the training process after scaling up. An overview of our VRWKV is depicted in Figure 2.

VRWKV adopts a block-stacked image encoder design like ViT, where each block consists of a spatial-mix module and a channel-mix module. The spatial-mix module functions as an attention mechanism, performing linear complexity global attention computation, while the channel-mix module serves as a feed-forward network (FFN), performing feature fusion in the channel dimension. The entire VRWKV includes a patch embedding layer and a stack of $L$ identical VRWKV encoder layers, where each layer maintains the input resolution.

Data Flow. First, we transform the $H \times W \times 3$ image into $HW / p^2$ patches, where $p$ denotes the patch size. After a linear projection, position embeddings are added to the patches to obtain image tokens of shape $T \times C$ , where $T = HW / p^2$ denotes the total number of tokens. These tokens are then fed into the VRWKV encoder with $L$ layers.

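For concreteness, the tokenization arithmetic above works out as follows. This is an illustrative sketch only; the embedding dimension is taken from the VRWKV-T row of Table 1.

```python
# Patchification arithmetic for a 224x224x3 input (illustrative sketch).
H, W, p = 224, 224, 16       # image height/width and patch size
C = 192                      # embedding dimension (VRWKV-T, Table 1)

T = (H * W) // (p * p)       # number of image tokens
patch_dim = p * p * 3        # raw features per patch before the linear projection

print(T, patch_dim)          # tokens then form a T x C matrix after projection
```

At 224 resolution this gives the familiar 196 tokens; doubling the resolution quadruples $T$, which is exactly where the linear complexity of Bi-WKV pays off.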
In each layer, tokens are first fed into the spatial-mix module, which plays the role of a global attention mechanism. Specifically, as shown in Figure 2(b), the input tokens are first shifted and fed into three parallel linear layers to obtain the matrices $R_{\mathrm{s}}, K_{\mathrm{s}}, V_{\mathrm{s}} \in \mathbb{R}^{T \times C}$ :

$$
R_{\mathrm{s}} = \operatorname{Q\text{-}Shift}_{R}(X) W_{R}, \quad K_{\mathrm{s}} = \operatorname{Q\text{-}Shift}_{K}(X) W_{K}, \quad V_{\mathrm{s}} = \operatorname{Q\text{-}Shift}_{V}(X) W_{V}. \tag{1}
$$

Here, the Q-Shift operator is a token shift function specially designed for information exchange between nearby tokens according to the visual prior. $K_{\mathrm{s}}$ and $V_{\mathrm{s}}$ are then passed to calculate the global attention result, $wkv \in \mathbb{R}^{T \times C}$ , by a linear complexity bidirectional attention mechanism, Bi-WKV, and multiplied with $\sigma(R_{\mathrm{s}})$ , which gates the output $O_{\mathrm{s}}$ :

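A minimal NumPy sketch of the quad-directional shift is below. It assumes one common reading of Q-Shift: the channels are split into four groups, and each group receives the features of the neighboring token from one of the four directions (zeros at image borders). The learned per-channel interpolation between the original and shifted tokens used in the actual $\operatorname{Q\text{-}Shift}_{R,K,V}$ branches is omitted, so treat this as a sketch rather than the reference implementation:

```python
import numpy as np

def q_shift(x, H, W):
    """Quad-directional token shift (sketch).

    x: (T, C) token matrix with T == H * W. Each quarter of the channels
    takes the neighboring token's features from one of the four directions;
    border tokens keep zeros where no neighbor exists.
    """
    T, C = x.shape
    assert T == H * W and C % 4 == 0
    img = x.reshape(H, W, C)
    out = np.zeros_like(img)
    q = C // 4
    out[:, 1:, 0 * q:1 * q] = img[:, :-1, 0 * q:1 * q]   # from the left neighbor
    out[:, :-1, 1 * q:2 * q] = img[:, 1:, 1 * q:2 * q]   # from the right neighbor
    out[1:, :, 2 * q:3 * q] = img[:-1, :, 2 * q:3 * q]   # from the upper neighbor
    out[:-1, :, 3 * q:4 * q] = img[1:, :, 3 * q:4 * q]   # from the lower neighbor
    return out.reshape(T, C)
```

In the full module, the shifted tensor would be blended with the unshifted input before the three projections of Eq. 1, so that each token's query, key, and value already mix in information from its four neighbors.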
$$
O_{\mathrm{s}} = \left(\sigma\left(R_{\mathrm{s}}\right) \odot wkv\right) W_{O}, \quad wkv = \operatorname{Bi\text{-}WKV}\left(K_{\mathrm{s}}, V_{\mathrm{s}}\right). \tag{2}
$$

Operator $\sigma$ denotes the sigmoid function, and $\odot$ represents element-wise multiplication. The output features are then stabilized using layer normalization (Ba et al., 2016) following the linear projection.

Subsequently, the tokens are passed into the channel-mix module for a channel-wise fusion. $R_{\mathrm{c}}$ and $K_{\mathrm{c}}$ are obtained in a similar manner as in the spatial-mix module:

$$
R_{\mathrm{c}} = \operatorname{Q\text{-}Shift}_{R}(X) W_{R}, \quad K_{\mathrm{c}} = \operatorname{Q\text{-}Shift}_{K}(X) W_{K}. \tag{3}
$$

In the channel-mix module, $V_{\mathrm{c}}$ is a linear projection of $K_{\mathrm{c}}$ after the activation function, controlled by a gate mechanism $\sigma(R_{\mathrm{c}})$ . The output $O_{\mathrm{c}}$ is the linear projection of the gated result:

$$
O_{\mathrm{c}} = \left(\sigma\left(R_{\mathrm{c}}\right) \odot V_{\mathrm{c}}\right) W_{O}, \quad \text{where } V_{\mathrm{c}} = \operatorname{SquaredReLU}\left(K_{\mathrm{c}}\right) W_{V}. \tag{4}
$$

Simultaneously, residual connections (He et al., 2016) are established from the tokens to each normalization layer to ensure that training gradients do not vanish in deep networks.

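Read together, Eqs. 3-4 describe a gated feed-forward block with a squared-ReLU activation. A self-contained NumPy sketch is below; the random weights are stand-ins and the token shift is replaced by the identity for brevity, so this illustrates the dataflow rather than reproducing the trained module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_mix(xr, xk, W_R, W_K, W_V, W_O):
    """Channel-mix (Eqs. 3-4): gated FFN with squared ReLU.

    xr, xk: stand-ins for Q-Shift_R(X) and Q-Shift_K(X), each (T, C).
    """
    R_c = xr @ W_R                            # gate branch, (T, C)
    K_c = xk @ W_K                            # hidden branch, (T, hidden)
    V_c = np.maximum(K_c, 0.0) ** 2 @ W_V     # SquaredReLU(K_c) W_V, (T, C)
    return (sigmoid(R_c) * V_c) @ W_O         # gated output projection

T, C, hidden = 4, 8, 32                       # hidden = 4 * C, as in Table 1
rng = np.random.default_rng(0)
x = rng.standard_normal((T, C))
out = channel_mix(x, x,                       # identity "shift" for illustration
                  rng.standard_normal((C, C)),
                  rng.standard_normal((C, hidden)),
                  rng.standard_normal((hidden, C)),
                  rng.standard_normal((C, C)))
print(out.shape)                              # (4, 8)
```

The squared-ReLU keeps the FFN nonlinearity cheap while the sigmoid gate $\sigma(R_{\mathrm{c}})$ lets each channel modulate how much of the fused features passes through.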
# 3.2 LINEAR COMPLEXITY BIDIRECTIONAL ATTENTION

Different from the vanilla RWKV (Peng et al., 2023), we make the following modifications to its original attention mechanism: (1) Bidirectional attention: We extend the upper summation limit of the original RWKV attention from $t$ (the current token) to $T - 1$ (the last token) to ensure that all tokens are mutually visible in the calculation of each result. Thus, the original causal attention transforms into bidirectional global attention. (2) Relative bias: We compute the absolute value of the time difference $|t - i|$ and divide it by the total number of tokens $T$ to represent the relative bias of tokens in images of different sizes. (3) Flexible decay: We no longer restrict the learnable decay parameter $w$ to be positive in the exponential term, allowing the exponential decay attention to focus on tokens further away from the current token.

Under the collective influence of these modifications, we achieve global attention while maintaining linear complexity with respect to the number of input tokens $T$ , thereby maximizing the preservation of RWKV's inherent low complexity and extending it to the visual domain.

Similar to RWKV, our bidirectional attention can also be equivalently expressed in a summation form (for the sake of clarity) and an RNN form (in the practical implementation).

Summation Form. The attention calculation result for the $t$ -th token is given by the formula:

$$
wkv_{t} = \operatorname{Bi\text{-}WKV}(K, V)_{t} = \frac{\sum_{i=0, i \neq t}^{T-1} e^{-(|t-i|-1)/T \cdot w + k_{i}} v_{i} + e^{u + k_{t}} v_{t}}{\sum_{i=0, i \neq t}^{T-1} e^{-(|t-i|-1)/T \cdot w + k_{i}} + e^{u + k_{t}}}. \tag{5}
$$

Here, $T$ represents the total number of tokens, equal to $HW / p^2$ ; $w$ and $u$ are two $C$ -dimensional learnable vectors that represent the channel-wise spatial decay and the bonus indicating the current token, respectively. $k_{t}$ and $v_{t}$ denote the $t$ -th features of $K$ and $V$ .

+ The summation formula indicates that the output $wkv_{t}$ is a weighted sum of $V$ along the token dimension from 0 to $T - 1$ , resulting in a $C$ -dimensional vector. It represents the result obtained
138
+
139
+ <table><tr><td>Model</td><td>Emb Dim</td><td>Hidden Dim</td><td>Depth</td><td>Extra Norm</td><td>#Param</td></tr><tr><td>VRWKV-T</td><td>192</td><td>768</td><td>12</td><td>×</td><td>6.2M</td></tr><tr><td>VRWKV-S</td><td>384</td><td>1536</td><td>12</td><td>×</td><td>23.8M</td></tr><tr><td>VRWKV-B</td><td>768</td><td>3072</td><td>12</td><td>×</td><td>93.7M</td></tr><tr><td>VRWKV-L</td><td>1024</td><td>4096</td><td>24</td><td>✓</td><td>334.9M</td></tr></table>
140
+
141
+ Table 1: Default settings for Vision-RWKV of different scales. We report the embedding dimension, hidden dimension, and model depth. "Extra Norm" means additional layer normalization layers are used to stabilize the model's outputs. "#Param" denotes the number of parameters.
+
+ by applying the attention operation to the $t$-th token. The weight is jointly determined by the spatial decay vector $w$, the relative bias between tokens $(|t - i| - 1) / T$, and $k_{i}$.
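As a concrete reference, the summation form can be sketched directly in NumPy. This is our naive $O(T^2 C)$ illustration, not the paper's CUDA kernel, and all variable names are ours:

```python
import numpy as np

def bi_wkv_naive(K, V, w, u):
    """Direct evaluation of the summation form of Bi-WKV.
    K, V have shape (T, C); w, u have shape (C,)."""
    T, C = K.shape
    out = np.empty((T, C))
    for t in range(T):
        num = np.exp(u + K[t]) * V[t]   # bonus term for the current token
        den = np.exp(u + K[t])
        for i in range(T):
            if i == t:
                continue
            weight = np.exp(-(abs(t - i) - 1) / T * w + K[i])  # channel-wise decay
            num += weight * V[i]
            den += weight
        out[t] = num / den
    return out

T, C = 8, 4
rng = np.random.default_rng(0)
K, V = rng.normal(size=(T, C)), rng.normal(size=(T, C))
w, u = rng.uniform(0.1, 1.0, C), rng.normal(size=C)
wkv = bi_wkv_naive(K, V, w, u)
# All weights are positive, so each wkv_t is a per-channel convex
# combination of the rows of V and stays within their range.
assert np.all(wkv >= V.min(axis=0) - 1e-9) and np.all(wkv <= V.max(axis=0) + 1e-9)
```

Since every weight is positive, each $wkv_t$ is a per-channel convex combination of the rows of $V$, which also makes clear why the output stays bounded regardless of $T$.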
+
+ RNN Form. In the practical implementation, Eq. 5 can be transformed into a recursive formula in RNN form, so that the result for each token is obtained with a fixed number of FLOPs. By splitting the summation terms of the numerator and denominator in Eq. 5 with $t$ as the boundary, we obtain 4 hidden states:
+
+ $$
148
+ a _ {t - 1} = \sum_ {\substack {i = 0 \\ t - 1}} ^ {t - 1} e ^ {- (| t - i | - 1) / T \cdot w + k _ {i}} v _ {i}, \quad b _ {t - 1} = \sum_ {\substack {i = t + 1 \\ T - 1}} ^ {T - 1} e ^ {- (| t - i | - 1) / T \cdot w + k _ {i}} v _ {i}, \tag{6}
149
+ $$
150
+
151
+ $$
152
+ c _ {t - 1} = \sum_ {i = 0} ^ {t - 1} e ^ {- (| t - i | - 1) / T \cdot w + k _ {i}}, \qquad d _ {t - 1} = \sum_ {i = t + 1} ^ {T - 1} e ^ {- (| t - i | - 1) / T \cdot w + k _ {i}},
153
+ $$
+
+ which can be computed recursively. Updating the hidden states only requires adding or subtracting one summation term and multiplying or dividing by $e^{-w / T}$. The $t$-th result is then given by:
+
+ $$
158
+ w k v _ {t} = \frac {a _ {t - 1} + b _ {t - 1} + e ^ {k _ {t} + u} v _ {t}}{c _ {t - 1} + d _ {t - 1} + e ^ {k _ {t} + u}}. \tag {7}
159
+ $$
+
+ Each update step yields an attention result (i.e., $wkv_{t}$ ) for a token, so the entire $wkv$ matrix requires $T$ steps.
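This recurrence can be sketched as follows (our NumPy sketch, not the released kernel): at each step one term enters the "past" states and one leaves the "future" states, while the rest are rescaled by $e^{\mp w/T}$, so the total work is $O(TC)$ and the output matches the direct summation form.

```python
import numpy as np

def bi_wkv_rnn(K, V, w, u):
    """Linear-time Bi-WKV via the four recurrent states a, b, c, d.
    K, V have shape (T, C); w, u have shape (C,)."""
    T, C = K.shape
    dec, grow = np.exp(-w / T), np.exp(w / T)
    a, c = np.zeros(C), np.zeros(C)
    # b_{-1} and d_{-1}: the "future" sums for t = 0, built once in O(TC).
    b = sum((np.exp(-(i - 1) / T * w + K[i]) * V[i] for i in range(1, T)), np.zeros(C))
    d = sum((np.exp(-(i - 1) / T * w + K[i]) for i in range(1, T)), np.zeros(C))
    out = np.empty((T, C))
    for t in range(T):
        bonus = np.exp(K[t] + u)
        out[t] = (a + b + bonus * V[t]) / (c + d + bonus)
        # move the boundary from t to t+1: one term is added to the past
        # sums, one removed from the future sums, the rest rescaled.
        a = dec * a + np.exp(K[t]) * V[t]
        c = dec * c + np.exp(K[t])
        if t + 1 < T:
            b = grow * (b - np.exp(K[t + 1]) * V[t + 1])
            d = grow * (d - np.exp(K[t + 1]))
    return out

# cross-check one token against the direct summation form (Eq. 5)
T, C = 6, 3
rng = np.random.default_rng(1)
K, V = rng.normal(size=(T, C)), rng.normal(size=(T, C))
w, u = rng.uniform(0.1, 1.0, C), rng.normal(size=C)
fast = bi_wkv_rnn(K, V, w, u)
t = 3
num, den = np.exp(u + K[t]) * V[t], np.exp(u + K[t])
for i in range(T):
    if i != t:
        e = np.exp(-(abs(t - i) - 1) / T * w + K[i])
        num, den = num + e * V[i], den + e
assert np.allclose(fast[t], num / den)
```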
+
+ When the inputs $K$ and $V$ are matrices of shape $T \times C$, the computational cost of calculating the $wkv$ matrix is given by:
+
+ $$
166
+ \operatorname {F L O P s} (\operatorname {B i - W K V} (K, V)) = 1 3 \times T \times C. \tag {8}
167
+ $$
+
+ Here, the factor of 13 comes approximately from the updates of $(a,b,c,d)$, the computation of the exponentials, and the calculation of $wkv_{t}$. This approximation shows that the complexity of the forward pass is $O(TC)$. The backward pass of the operator can likewise be expressed in a (more involved) RNN form, also with computational complexity $O(TC)$. The specific formulas for the forward update and backward propagation are provided in Appendix A.1.
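To put the $13TC$ figure in perspective, a rough back-of-the-envelope comparison (ours, with projections ignored on both sides) against the roughly $4T^2C$ multiply-adds of the two matrix products in vanilla self-attention shows how the gap widens with resolution:

```python
# Rough FLOPs model (ours): Bi-WKV costs 13*T*C (Eq. 8), while the QK^T and
# attention-value matmuls of vanilla self-attention alone cost about
# 4*T^2*C multiply-adds (linear projections ignored on both sides).
C = 192  # VRWKV-T embedding dimension
ratios = []
for side in (224, 512, 1024, 2048):
    T = (side // 16) ** 2                      # number of 16x16 patch tokens
    ratios.append((4 * T ** 2 * C) / (13 * T * C))
print(ratios)  # the advantage of Bi-WKV grows linearly with T
```

Under this model the ratio is simply $4T/13$, i.e., already around 60x at $224 \times 224$ and growing linearly with the token count.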
+
+ # 3.3 QUAD-DIRECTIONAL TOKEN SHIFT
+
+ We introduce a quad-directional token shift (Q-Shift) as a flexible extension of the original token shift in RWKV, applied in the first step of each spatial-mix and channel-mix module. The Q-Shift operation allows all tokens to be shifted and linearly interpolated with their neighboring tokens as follows:
+
+ $$
176
+ \begin{array}{l} \mathrm {Q - S h i f t} _ {(*)} (X) = X + \left(1 - \mu_ {(*)}\right) X ^ {\dagger}, \\ \text {w h e r e} X ^ {\dagger} [ h, w ] = \operatorname {C o n c a t} (X [ h - 1, w, 0: C / 4 ], X [ h + 1, w, C / 4: C / 2 ], \tag {9} \\ X [ h, w - 1, C / 2: 3 C / 4 ], X [ h, w + 1, 3 C / 4: C ]). \\ \end{array}
177
+ $$
+
+ The subscript $(*) \in \{R, K, V\}$ denotes the three interpolations of $X$ and $X^{\dagger}$, controlled by the learnable vectors $\mu_{(*)}$, used in the later calculation of $R$, $K$, and $V$, respectively. $h$ and $w$ denote the row and column indices of token $X$, and ":" is a slicing operation that excludes the end index. Q-Shift gives the attention mechanism of different channels an internal prior of focusing on neighboring tokens without introducing many additional FLOPs. It also increases the receptive field of each token, which greatly enhances the token's coverage in later layers. $X^{\dagger}$ is obtained by slicing $X$ without introducing new computations, allowing for flexible transformations during training for different tasks. When handling sparse inputs that do not contain the original image's spatial information, such as in masked image modeling, shifting can be applied along a single dimension to maximize the preservation of image priors.
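Eq. 9's neighbor gathering can be sketched in NumPy as follows (our illustration; zero padding at image borders is our assumption, and in the real model `mu` is per-channel and learnable):

```python
import numpy as np

def q_shift(X, mu):
    """Q-Shift for X of shape (H, W, C): each quarter of the channels is
    taken from the neighbor above / below / left / right, then linearly
    mixed with the original token as in Eq. 9."""
    H, W, C = X.shape
    q = C // 4
    Xs = np.zeros_like(X)                      # zero padding at borders (assumed)
    Xs[1:, :, 0:q]      = X[:-1, :, 0:q]       # X[h-1, w, 0:C/4]
    Xs[:-1, :, q:2*q]   = X[1:, :, q:2*q]      # X[h+1, w, C/4:C/2]
    Xs[:, 1:, 2*q:3*q]  = X[:, :-1, 2*q:3*q]   # X[h, w-1, C/2:3C/4]
    Xs[:, :-1, 3*q:]    = X[:, 1:, 3*q:]       # X[h, w+1, 3C/4:C]
    return X + (1.0 - mu) * Xs

H, W, C = 4, 5, 8
X = np.arange(H * W * C, dtype=float).reshape(H, W, C)
out = q_shift(X, mu=np.full(C, 0.5))
# channel 0 of token (2, 2) mixes in the token directly above it
assert np.isclose(out[2, 2, 0], X[2, 2, 0] + 0.5 * X[1, 2, 0])
```

The four slice assignments only move views of `X`, which matches the observation above that $X^{\dagger}$ introduces no new computation beyond the interpolation itself.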
+
+ <table><tr><td></td><td>Method</td><td>Size</td><td>#Param</td><td>FLOPs</td><td>Top-1 Acc</td></tr><tr><td rowspan="9">hierarchical</td><td>ResNet-18 (He et al., 2016)</td><td>$224^2$</td><td>11.7M</td><td>1.8G</td><td>69.9</td></tr><tr><td>PVT-T (Wang et al., 2021)</td><td>$224^2$</td><td>13.2M</td><td>1.9G</td><td>75.1</td></tr><tr><td>ResNet-50 (He et al., 2016)</td><td>$224^2$</td><td>25.6M</td><td>4.1G</td><td>76.6</td></tr><tr><td>Swin-T (Liu et al., 2021)</td><td>$224^2$</td><td>28.3M</td><td>4.4G</td><td>81.2</td></tr><tr><td>PVT-M (Wang et al., 2021)</td><td>$224^2$</td><td>44.2M</td><td>6.7G</td><td>81.2</td></tr><tr><td>ResNet-101 (He et al., 2016)</td><td>$224^2$</td><td>44.6M</td><td>7.9G</td><td>78.0</td></tr><tr><td>Swin-S (Liu et al., 2021)</td><td>$224^2$</td><td>49.6M</td><td>8.7G</td><td>83.0</td></tr><tr><td>PVT-L (Wang et al., 2021)</td><td>$224^2$</td><td>61.4M</td><td>9.8G</td><td>81.7</td></tr><tr><td>Swin-B (Liu et al., 2021)</td><td>$224^2$</td><td>87.8M</td><td>15.1G</td><td>83.4</td></tr><tr><td rowspan="12">non-hierarchical</td><td>DeiT-T (Touvron et al., 2021a)</td><td>$224^2$</td><td>5.7M</td><td>1.3G</td><td>72.2</td></tr><tr><td>DeiT-S (Touvron et al., 2021a)</td><td>$224^2$</td><td>22.1M</td><td>4.6G</td><td>79.9</td></tr><tr><td>XCIT-S12 (Ali et al., 2021)</td><td>$224^2$</td><td>26.0M</td><td>4.8G</td><td>82.0</td></tr><tr><td>DeiT-B (Touvron et al., 2021a)</td><td>$224^2$</td><td>86.6M</td><td>17.6G</td><td>81.8</td></tr><tr><td>XCIT-L24 (Ali et al., 2021)</td><td>$224^2$</td><td>189.0M</td><td>36.1G</td><td>82.9</td></tr><tr><td>ViT-L (Dosovitskiy et al., 2020)</td><td>$384^2$</td><td>309.5M</td><td>191.1G</td><td>85.2</td></tr><tr><td>VRWKV-T</td><td>$224^2$</td><td>6.2M</td><td>1.2G</td><td>75.1</td></tr><tr><td>VRWKV-S</td><td>$224^2$</td><td>23.8M</td><td>4.6G</td><td>80.1</td></tr><tr><td>VRWKV-B</td><td>$224^2$</td><td>93.7M</td><td>18.2G</td><td>82.0</td></tr><tr><td>VRWKV-L</td><td>$384^2$</td><td>334.9M</td><td>189.5G</td><td>86.0</td></tr><tr><td>VRWKV-L†</td><td>$384^2$</td><td>334.9M</td><td>189.5G</td><td>86.2</td></tr><tr><td>VRWKV-L*</td><td>$384^2$</td><td>334.9M</td><td>189.5G</td><td>86.5</td></tr></table>
+
+ Table 2: Validation results on ImageNet-1K. VRWKV-T/S/B are trained on ImageNet-1K, while VRWKV-L is pre-trained on ImageNet-22K and fine-tuned on ImageNet-1K. "#Param" denotes the number of parameters, and "FLOPs" represents the computational workload at the image resolution in the "Size" column. "†" means additional MAE pre-training is applied in the pre-training process. "*" indicates Bamboo-47K is used in pre-training.
+
+ # 3.4 SCALE UP STABILITY
+
+ Increasing model depth and the accumulation of exponential terms during recursion can lead to instability in the training process. To mitigate this, we propose two simple yet effective adjustments: (1) Bounded exponential: As input resolution increases, both exponential decay and growth can quickly exceed the range of floating-point numbers. To address this, we divide the exponential term by the number of tokens (e.g., $\exp(-(|t - i| - 1)/T \cdot w)$ ), making the maximum decay and growth bounded. (2) Extra layer normalization: As models become deeper, we apply layer normalization (Ba et al., 2016) after the attention mechanism and the Squared ReLU operation, to prevent the model's output from overflowing. These two adjustments promote stable scaling of input resolution and model depth, facilitating the smooth convergence of large models. Additionally, we incorporate layer scale (Touvron et al., 2021b), which further enhances model stability during scaling.
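The effect of the bounded exponential can be checked numerically (our illustration, with an assumed decay value $w$): without the division by $T$, the growth term in the recurrence overflows even float64 at high resolution, while the bounded version never exceeds $e^{w}$.

```python
import numpy as np

w = 8.0                      # an assumed channel-wise decay value
T = 16384                    # tokens of a 2048x2048 input with 16x16 patches
dist = T - 1                 # the largest possible token distance |t - i|
with np.errstate(over="ignore"):
    raw = np.exp((dist - 1) * w)          # growth term without the 1/T division
bounded = np.exp((dist - 1) / T * w)      # as used in Eq. 5: at most e^w
assert np.isinf(raw)                      # overflows even in float64
assert bounded <= np.exp(w)               # representable at any resolution
```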
+
+ # 3.5 MODEL DETAILS
+
+ Following ViT, the hyper-parameters for variants of VRWKV, including embedding dimension, hidden dimension in linear projection, and depth, are specified in Table 1. Due to the increased depth of the VRWKV-L model, additional layer normalizations as discussed in Section 3.4, are incorporated at appropriate positions to ensure output stability.
+
+ # 4 EXPERIMENTS
+
+ We comprehensively evaluate the substitutability of our VRWKV for ViT in performance, scalability, flexibility, and efficiency. The model's effectiveness is validated in image classification, object detection, and semantic segmentation tasks.
+
+ # 4.1 IMAGE CLASSIFICATION
+
+ Settings. For -Tiny/Small/Base models, we conduct supervised training from scratch on ImageNet-1K (Deng et al., 2009). Following the training strategy and data augmentation of DeiT (Touvron et al., 2021a), we use a batch size of 1024, AdamW (Loshchilov & Hutter, 2017) with a base learning rate of 5e-4, weight decay of 0.05, and cosine annealing schedule (Loshchilov & Hutter, 2016). Images are cropped to the resolution of $224 \times 224$ for training and validation. For the -Large models,
+
+ <table><tr><td>Method</td><td>Window</td><td>#Param</td><td>FLOPs</td><td>APb</td><td>APm</td></tr><tr><td>ViT-T (Touvron et al., 2021a)</td><td>✓</td><td>8.0M</td><td>95.4G</td><td>41.1</td><td>37.5</td></tr><tr><td>ViT-T (Touvron et al., 2021a)</td><td>×</td><td>8.0M</td><td>147.1G</td><td>41.6</td><td>37.9</td></tr><tr><td>VRWKV-T (ours)</td><td>×</td><td>8.4M</td><td>67.9G</td><td>41.7</td><td>38.0</td></tr><tr><td>ViT-S (Touvron et al., 2021a)</td><td>✓</td><td>27.5M</td><td>241.2G</td><td>44.6</td><td>39.7</td></tr><tr><td>ViT-S (Touvron et al., 2021a)</td><td>×</td><td>27.5M</td><td>344.5G</td><td>44.9</td><td>40.1</td></tr><tr><td>VRWKV-S (ours)</td><td>×</td><td>29.3M</td><td>189.9G</td><td>44.8</td><td>40.2</td></tr><tr><td>ViT-B (Touvron et al., 2021a)</td><td>✓</td><td>99.5M</td><td>686.7G</td><td>46.2</td><td>41.5</td></tr><tr><td>ViT-B (Touvron et al., 2021a)</td><td>×</td><td>99.5M</td><td>893.3G</td><td>46.8</td><td>41.8</td></tr><tr><td>VRWKV-B (ours)</td><td>×</td><td>106.6M</td><td>599.0G</td><td>46.8</td><td>41.7</td></tr><tr><td>ViT-L (Steiner et al., 2021)</td><td>✓</td><td>327.0M</td><td>1799.3G</td><td>48.7</td><td>43.3</td></tr><tr><td>VRWKV-L (ours)</td><td>×</td><td>351.9M</td><td>1730.6G</td><td>50.6</td><td>44.9</td></tr></table>
+
+ Table 3: Object detection and instance segmentation on COCO val2017. All models adopt the ViT-Adapter (Chen et al., 2023) to generate multi-scale features for the detection heads. -T/S/B models are initialized with ImageNet-1K weights, and all -L models use ImageNet-22K weights. "#Param" denotes the number of backbone parameters, and "FLOPs" represents the backbone's computational workload for a $1333 \times 800$ input. "Window" denotes the use of window attention in ViT layers.
+
+ we first pre-train them for 90 epochs on ImageNet-22K with a batch size of 4096 and resolution of $192 \times 192$ , and then fine-tune them for 20 epochs on ImageNet-1K to a higher resolution of $384 \times 384$ .
+
+ Results. We compare our VRWKV with other hierarchical and non-hierarchical backbones on the ImageNet-1K validation set. As shown in Table 2, with the same number of parameters, computational complexity, and training/testing resolutions, VRWKV achieves better results than ViT. For example, with slightly lower FLOPs than DeiT-T (1.2G vs. 1.3G), VRWKV-T achieves a top-1 accuracy 2.9 points higher. As the model size scales up, VRWKV still demonstrates higher baseline performance: VRWKV-L achieves a top-1 accuracy of $86.0\%$ at a resolution of $384 \times 384$, 0.8 points higher than ViT-L, with slightly lower computational cost. The superior performance from tiny- to large-size models demonstrates that VRWKV possesses the same scalability as ViT. Additionally, after pre-training on the larger Bamboo-47K dataset (Zhang et al., 2022), the performance of VRWKV-L is further boosted to $86.5\%$, indicating that VRWKV, like ViT, benefits from pre-training on large-scale datasets. These classification results demonstrate VRWKV's potential as a viable alternative to traditional ViT models.
+
+ # 4.2 OBJECT DETECTION
+
+ Settings. In the detection tasks, we adopt Mask R-CNN (He et al., 2017) as the detection head. For the -Tiny/Small/Base models, the backbones use weights pre-trained on ImageNet-1K for 300 epochs. For the -Large model, weights pre-trained on ImageNet-22K are used. All models use a $1 \times$ training schedule (i.e., 12 epochs) with a batch size of 16, and AdamW (Loshchilov & Hutter, 2017) optimizer with an initial learning rate of 1e-4 and weight decay of 0.05.
+
+ Results. In Table 3, we report detection results on the COCO val (Lin et al., 2014) dataset using VRWKV and ViT as backbones. As shown in Figure 1(a) and Table 3, because ViT resorts to window attention in dense prediction tasks, VRWKV with global attention achieves better performance than ViT at lower FLOPs. For example, VRWKV-T has approximately $30\%$ lower backbone FLOPs than ViT-T with window attention, while improving $\mathrm{AP^b}$ by 0.6 points. We also compare VRWKV against ViT with global attention: VRWKV-S achieves performance similar to ViT-S with $45\%$ lower FLOPs. This demonstrates the effectiveness of VRWKV's global attention mechanism in dense prediction tasks and its advantage in computational complexity over the original attention mechanism.
+
+ # 4.3 SEMANTIC SEGMENTATION
+
+ Settings. In the semantic segmentation task, we use UperNet (Xiao et al., 2018) as the segmentation head. All ViT models use global attention in the segmentation task. For the -Tiny/Small/Base models, the backbones use weights pre-trained on ImageNet-1K; for the -Large model, weights pre-trained on ImageNet-22K are used. We employ the AdamW optimizer
+
+ <table><tr><td>Method</td><td>Window</td><td>#Param</td><td>FLOPs</td><td>mIoU</td></tr><tr><td>ViT-T (Touvron et al., 2021a)</td><td>×</td><td>8.0M</td><td>20.9G</td><td>42.6</td></tr><tr><td>VRWKV-T (ours)</td><td>×</td><td>8.4M</td><td>16.6G</td><td>43.3</td></tr><tr><td>ViT-S (Touvron et al., 2021a)</td><td>×</td><td>27.5M</td><td>54.0G</td><td>46.2</td></tr><tr><td>VRWKV-S (ours)</td><td>×</td><td>29.3M</td><td>46.3G</td><td>47.2</td></tr><tr><td>ViT-B (Touvron et al., 2021a)</td><td>×</td><td>99.5M</td><td>157.9G</td><td>48.8</td></tr><tr><td>VRWKV-B (ours)</td><td>×</td><td>106.6M</td><td>146.0G</td><td>49.2</td></tr><tr><td>ViT-L (Steiner et al., 2021)</td><td>×</td><td>327.0M</td><td>446.8G</td><td>53.4</td></tr><tr><td>VRWKV-L (ours)</td><td>×</td><td>351.9M</td><td>421.9G</td><td>53.5</td></tr></table>
+
+ with an initial learning rate of 6e-5 for the -Small/Base/Large models and 12e-5 for the -Tiny model, a batch size of 16, and a weight decay of 0.01. All models are trained for 160k iterations on the training set of the ADE20K dataset (Zhou et al., 2017).
+
+ Results. As shown in Table 4, when using UperNet for semantic segmentation, models based on VRWKV consistently outperform those based on ViT with global attention while also being more efficient. For example, VRWKV-S achieves an mIoU 1 point higher than ViT-S with a $14\%$ decrease in FLOPs. VRWKV-L reaches $53.5$ mIoU, similar to ViT-L, while requiring 25G fewer backbone FLOPs. These results demonstrate that, benefiting from the linear-complexity attention mechanism, VRWKV backbones extract better features for semantic segmentation than ViT backbones while being more efficient.
+
+ # 4.4 ABLATION STUDY
+
+ Settings. We conduct ablation studies of the tiny-size VRWKV on ImageNet-1K (Deng et al., 2009) to validate the effectiveness of various key components like Q-Shift and Bi-WKV. The experimental settings are consistent with Section 4.1.
+
+ Token Shift. We compare three approaches: no token shift, the original shift method used in RWKV, and our proposed Q-Shift. As shown in Table 5, the choice of shift method yields clear performance differences. Variant 1, without token shift, reaches a poor accuracy of 71.5, 3.6 points lower than our model. Even with our global attention, the model using the original token shift still trails our model by 0.7 points.
+
+ Bidirectional Attention. The bidirectional attention mechanism enables the model to achieve global attention, whereas the original RWKV attention contains an internal causal mask.
+
+ Table 4: Semantic segmentation on the ADE20K val set. All models use ViT-Adapter (Chen et al., 2023) for multi-scale feature generation and are trained with UperNet as the segmentation head. For consistency in comparison, all -T/S/B models are initialized using ImageNet-1K pre-training, whereas -L models utilize ImageNet-22K pre-training. "#Param" refers to the number of backbone parameters. We report the FLOPs of backbones with an input size of $512 \times 512$.
+
+ <table><tr><td>Method</td><td>Token Shift</td><td>Bidirectional Attention</td><td>Top-1 Acc</td></tr><tr><td>RWKV</td><td>original</td><td>×</td><td>71.1 (-4.0)</td></tr><tr><td>Variant 1</td><td>none</td><td>✓</td><td>71.5 (-3.6)</td></tr><tr><td>Variant 2</td><td>original</td><td>✓</td><td>74.4 (-0.7)</td></tr><tr><td>Variant 3</td><td>Q-Shift</td><td>×</td><td>72.8 (-2.3)</td></tr><tr><td>VRWKV-T</td><td>Q-Shift</td><td>✓</td><td>75.1</td></tr></table>
+
+ Table 5: Ablation on key components of the proposed VRWKV. All models are trained from scratch on ImageNet-1K. "Original" refers to the token shift in RWKV (Peng et al., 2023), which mixes tokens in a single direction.
+
+ The result of Variant 3 shows that the global attention mechanism brings a 2.3-point increase in top-1 accuracy.
+
+ Effective Receptive Field (ERF). We analyze the impact of different designs on the ERF of models following Ding et al. (2022) and visualize it in Figure 3(a). We visualize the ERF of the central pixel with an input size of $1024 \times 1024$. In Figure 3(a), "No Shift" denotes the absence of the token shift method (Q-Shift), and "RWKV Attn" denotes the original RWKV attention mechanism without our modifications for vision tasks. The comparison shows that all models except "RWKV Attn" achieve global attention, while the global capacity of the VRWKV-T model is better than that of ViT-T. Despite the assistance of Q-Shift, the central pixel of "RWKV Attn" still cannot attend to pixels at the bottom of the image due to the large input resolution. The results
+
+ ![](images/4acb5b2a015ad799156d95321b9f4788048868cf754fb1fb17f6ee6c3aef328d.jpg)
+ (a) ERF of ViT and VRWKV
+
+ ![](images/6345cbc881b02d6a337df6bdf2410d56fcf6322c69f3a07a7f452f56788d4824.jpg)
+ (b) Attention Runtime
+ Figure 3: Comparison of effective receptive field (ERF) and attention runtime. (a) ERF for ViT and VRWKV in different settings. "No Shift" means no shift is used in spatial-mix and channel-mix modules. "RWKV Attn" means the original RWKV attention without our modifications. Our VRWKV with Q-Shift and Bi-WKV has a more comprehensive ERF than other counterparts. (b) Attention runtime of inference (left) and forward + backward (right) tested on an Nvidia A100 GPU.
+
+ of "No Shift" and Q-Shift show that the Q-Shift method expands the core range of the receptive field, enhancing the inductive bias of global attention.
+
+ Efficiency Analysis. To showcase the efficiency of our linear attention mechanism, we gradually increase the input resolution from $224 \times 224$ to $2048 \times 2048$ and compare the inference and memory efficiency of VRWKV-T and ViT-T. The results are measured on an Nvidia A100 GPU, as shown in Figure 1. The curves in Figure 1(b) show that at lower resolutions, VRWKV-T and ViT-T exhibit comparable memory usage and inference FPS. As the input resolution increases, VRWKV-T becomes much faster than ViT-T, and its RNN-like computational framework keeps the growth of GPU memory usage slow. By the time the resolution reaches $2048 \times 2048$ (equivalent to 16384 tokens), VRWKV-T's inference speed is 10 times that of ViT-T, and its memory consumption is $80\%$ lower. It is worth mentioning that the PyTorch implementation of the Q-Shift operation is highly inefficient, reducing the overall model speed; this operation can be optimized through other means (such as CUDA extensions), so there is still room to further improve VRWKV's speed with better engineering.
+
+ We also compare the speed of our attention kernel, Bi-WKV, with PyTorch attention and FlashAttention (Dao et al., 2022) in Figure 3(b). Bi-WKV is significantly faster than the attention mechanism implemented with matrix multiplication (PyTorch attention), achieving a speedup of over a hundred times at a resolution of $2048 \times 2048$ (i.e., 16384 tokens). FlashAttention is highly optimized for memory I/O, and its matrix multiplications align well with the physical architecture of Nvidia GPUs, whereas our Bi-WKV lacks such hardware-level optimization. Nevertheless, in high-resolution scenarios, Bi-WKV still demonstrates a significant speed advantage.
+
+ MAE Pre-training. ViTs can learn meaningful visual representations through masked image modeling (MIM). However, the effectiveness of linear-attention vision models in this self-supervised pre-training paradigm had not been validated. Our VRWKV can handle sparse inputs and leverage MIM pre-training methods such as MAE (He et al., 2021) by adopting a bidirectional shift operation that removes the vertical shifts from Q-Shift. The pre-trained weights can then be directly fine-tuned for other tasks with the standard Q-Shift. Following the same MAE pre-training setting as ViT and the classification training in Section 4.1, our VRWKV-L acquires visual priors from masked image modeling, as shown in Table 2.
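The bidirectional shift used for sparse MIM inputs can be sketched by keeping only the horizontal halves of Q-Shift on a 1-D token sequence (our sketch, not the released implementation; zero padding at the sequence ends is assumed):

```python
import numpy as np

def bi_shift(X, mu):
    """Horizontal-only token shift for a 1-D token sequence X of shape (T, C):
    the first half of the channels comes from the previous token, the second
    half from the next token (zero padding at the ends, assumed)."""
    T, C = X.shape
    Xs = np.zeros_like(X)
    Xs[1:, : C // 2] = X[:-1, : C // 2]   # previous token
    Xs[:-1, C // 2:] = X[1:, C // 2:]     # next token
    return X + (1.0 - mu) * Xs

X = np.arange(12, dtype=float).reshape(6, 2)
out = bi_shift(X, mu=np.full(2, 0.5))
assert np.isclose(out[3, 0], X[3, 0] + 0.5 * X[2, 0])
assert np.isclose(out[3, 1], X[3, 1] + 0.5 * X[4, 1])
```

Because this variant never indexes across image rows, it remains well defined on the unmasked token subset, which is what makes MAE-style sparse inputs workable.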
+
+ # 5 CONCLUSION
+
+ We propose Vision-RWKV (VRWKV), a vision encoder with a linear-complexity attention mechanism. We demonstrate its capability as an alternative backbone to ViT across comprehensive vision tasks, including classification, dense prediction, and masked image modeling pre-training. With comparable performance and scalability, VRWKV exhibits lower computational complexity and memory consumption. Benefiting from this low complexity, VRWKV can achieve better performance in tasks where ViT struggles to afford the high computational overhead of global attention. We hope VRWKV will serve as an efficient and low-cost alternative to ViT, showcasing the powerful potential of linear-complexity transformers in the vision field.
+
+ # ACKNOWLEDGMENTS
+
+ This project was supported by the National Key R&D Program of China (No. 2022ZD0161300, 2022ZD0160101), the National Natural Science Foundation of China (No. 62376134). Zhe Chen is supported by the Youth PhD Student Research Project under the National Natural Science Foundation (No. 623B2050).
+
+ # REFERENCES
+
+ Alaaeldin Ali, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. Xcit: Cross-covariance image transformers. NeurIPS, 34, 2021.
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 33:1877-1901, 2020.
+ Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. In ICLR, 2023.
+ Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In ICCV, pp. 764-773, 2017.
+ Jifeng Dai, Min Shi, Weiyun Wang, Sitong Wu, Linjie Xing, Wenhai Wang, Xizhou Zhu, Lewei Lu, Jie Zhou, Xiaogang Wang, et al. Demystify transformers & convolutions in modern image deep networks. arXiv preprint arXiv:2211.05781, 2022.
+ Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS, 35:16344-16359, 2022.
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+ Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486, 2023.
+ Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in cnns. In CVPR, pp. 11963-11975, 2022.
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2020.
+ Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
+ Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, and Ran He. Rmt: Retentive networks meet vision transformers. arXiv preprint arXiv:2309.11523, 2023.
+ Zhangwei Gao, Zhe Chen, Erfei Cui, Yiming Ren, Weiyun Wang, Jinguo Zhu, Hao Tian, Shenglong Ye, Junjun He, Xizhou Zhu, et al. Mini-internvl: a flexible-transfer pocket multi-modal model with $5\%$ parameters and $90\%$ performance. Visual Intelligence, 2(1):1-17, 2024.
+ Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
+ Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021a.
+ Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state space layers. NeurIPS, 34:572-585, 2021b.
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770-778, 2016.
+ Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, pp. 2961-2969, 2017.
+ Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
+ Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
+ Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, pp. 7132-7141, 2018.
+ Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, pp. 4700-4708, 2017.
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. NeurIPS, 25, 2012.
+ Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10):1995, 1995.
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
+ Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang. Selective kernel networks. In CVPR, pp. 510-519, 2019.
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, pp. 740-755, 2014.
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+ Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model. arXiv preprint arXiv:2401.10166, 2024.
+ Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, pp. 10012-10022, 2021.
+ Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In CVPR, pp. 11976-11986, 2022.
+ Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
+ Maxim Milakov and Natalia Gimelshein. Online normalizer calculation for softmax. arXiv preprint arXiv:1805.02867, 2018.
+ Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, et al. Rwkv: Reinventing rnns for the transformer era. arXiv preprint arXiv:2305.13048, 2023.
+ Zhen Qin, Songlin Yang, and Yiran Zhong. Hierarchically gated recurrent neural network for sequence modeling. NeurIPS, 36, 2024.
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
+ Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+ Jimmy TH Smith, Andrew Warrington, and Scott W Linderman. Simplified state space layers for sequence modeling. arXiv preprint arXiv:2208.04933, 2022a.
+ Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022b.
+ Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270, 2021.
+ Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621, 2023.
+ Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1-9, 2015.
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, pp. 10347-10357, 2021a.
+ Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. In ICCV, pp. 32-42, 2021b.
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 30, 2017.
+ Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, and Jonathon Shlens. Scaling local self-attention for parameter efficient visual backbones. In CVPR, pp. 12894-12904, 2021.
+ Jue Wang, Wentao Zhu, Pichao Wang, Xiang Yu, Linda Liu, Mohamed Omar, and Raffay Hamid. Selective structured state-spaces for long-form video understanding. In CVPR, pp. 6387-6397, 2023a.
+ Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.
+ Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, pp. 568-578, 2021.
+ Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvtv2: Improved baselines with pyramid vision transformer. CVMJ, pp. 1-10, 2022.
+ Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In CVPR, pp. 14408-14419, 2023b.
+ Sitong Wu, Tianyi Wu, Haoru Tan, and Guodong Guo. Pale transformer: A general vision transformer backbone with pale-shaped attention. In AAAI, volume 36, pp. 2731-2739, 2022.
328
+ Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, pp. 418-434, 2018. 8
329
+ Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, et al. Efficient deformable convnets: Rethinking dynamic and sparse operator for vision applications. arXiv preprint arXiv:2401.06197, 2024. 3
330
+ Yuanhan Zhang, Qinghong Sun, Yichun Zhou, Zexin He, Zhenfei Yin, Kun Wang, Lu Sheng, Yu Qiao, Jing Shao, and Ziwei Liu. Bamboo: Building mega-scale vision dataset continually with human-machine synergy. arXiv preprint arXiv:2203.07845, 2022. 8
331
+ Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In CVPR, pp. 633-641, 2017. 9
332
+ Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024. 2, 3, 4, 16, 17
333
+ Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In CVPR, pp. 9308-9316, 2019. 3
334
+
335
+ # A APPENDIX
336
+
337
+ # A.1 RNN FORM FORWARD AND BACKWARD
338
+
339
+ The attention mechanism in the spatial-mix module uses an RNN-form forward and backward pass to achieve complexity linear in the input token number $T$. The following sections give more details of the operation.
340
+
341
+ <table><tr><td>States</td><td>Recurrence Relation</td><td>Initial Value</td></tr><tr><td>$a$</td><td>$a_t = w^- \cdot a_{t-1} + e^{k_t} v_t$</td><td>$a_{-1} = 0$</td></tr><tr><td>$b$</td><td>$b_t = w^+ \cdot (b_{t-1} - e^{k_{t+1}} v_{t+1})$</td><td>$b_{-1} = \sum_{i=1}^{T-1} e^{-(i-1)w + k_i} v_i$</td></tr><tr><td>$c$</td><td>$c_t = w^- \cdot c_{t-1} + e^{k_t}$</td><td>$c_{-1} = 0$</td></tr><tr><td>$d$</td><td>$d_t = w^+ \cdot (d_{t-1} - e^{k_{t+1}})$</td><td>$d_{-1} = \sum_{i=1}^{T-1} e^{-(i-1)w + k_i}$</td></tr><tr><td>$\mathrm{d}a/\mathrm{d}w$</td><td>$\frac{\mathrm{d}a_t}{\mathrm{d}w} = w^- \cdot (\frac{\mathrm{d}a_{t-1}}{\mathrm{d}w} - a_{t-1})$</td><td>$\frac{\mathrm{d}a_{-1}}{\mathrm{d}w} = 0$</td></tr><tr><td>$\mathrm{d}b/\mathrm{d}w$</td><td>$\frac{\mathrm{d}b_t}{\mathrm{d}w} = w^+ \cdot (\frac{\mathrm{d}b_{t-1}}{\mathrm{d}w} + b_{t-1} - e^{k_{t+1}} v_{t+1})$</td><td>$\frac{\mathrm{d}b_{-1}}{\mathrm{d}w} = \sum_{i=1}^{T-1} -(i-1) e^{-(i-1)w + k_i} v_i$</td></tr><tr><td>$\mathrm{d}c/\mathrm{d}w$</td><td>$\frac{\mathrm{d}c_t}{\mathrm{d}w} = w^- \cdot (\frac{\mathrm{d}c_{t-1}}{\mathrm{d}w} - c_{t-1})$</td><td>$\frac{\mathrm{d}c_{-1}}{\mathrm{d}w} = 0$</td></tr><tr><td>$\mathrm{d}d/\mathrm{d}w$</td><td>$\frac{\mathrm{d}d_t}{\mathrm{d}w} = w^+ \cdot (\frac{\mathrm{d}d_{t-1}}{\mathrm{d}w} + d_{t-1} - e^{k_{t+1}})$</td><td>$\frac{\mathrm{d}d_{-1}}{\mathrm{d}w} = \sum_{i=1}^{T-1} -(i-1) e^{-(i-1)w + k_i}$</td></tr><tr><td>$g^a$</td><td>$g^a_t = w^+ \cdot (g^a_{t-1} - g_{y_t} \cdot y_t^{\text{iden}})$</td><td>$g^a_0 = \sum_{i=1}^{T-1} g_{y_i} \cdot y_i^{\text{iden}} \cdot e^{-(i-1)w}$</td></tr><tr><td>$g^b$</td><td>$g^b_t = w^- \cdot g^b_{t-1} + g_{y_{t-1}} \cdot y_{t-1}^{\text{iden}}$</td><td>$g^b_0 = 0$</td></tr><tr><td>$g^c$</td><td>$g^c_t = w^+ \cdot (g^c_{t-1} - g_{y_t} \cdot y_t^{\text{iden}} \cdot y_t)$</td><td>$g^c_0 = \sum_{i=1}^{T-1} g_{y_i} \cdot y_i^{\text{iden}} \cdot y_i \cdot e^{-(i-1)w}$</td></tr><tr><td>$g^d$</td><td>$g^d_t = w^- \cdot g^d_{t-1} + g_{y_{t-1}} \cdot y_{t-1}^{\text{iden}} \cdot y_{t-1}$</td><td>$g^d_0 = 0$</td></tr></table>
342
+
343
+ Table 6: RNN states of Bi-WKV in the forward and backward processes. Each update in the recurrence relations takes a fixed number of FLOPs. $w^{-}$ and $w^{+}$ denote the growth and decay vectors $e^{w/T}$ and $e^{-w/T}$, respectively. Computing the initial values is $O(TC)$, which does not affect the final complexity.
344
+
345
+ # A.1.1 BACKWARD EQUATION
346
+
347
+ The backward process receives the gradient of the output matrix $wkv$ (denoted $y$) from the previous layer (denoted $g_y$) and uses it to calculate the gradient of each input. The saved inputs are the vectors $w, u \in \mathbb{R}^C$ and the key and value matrices $K, V \in \mathbb{R}^{T \times C}$ (the batch dimension is omitted). The new input is the gradient $g_y \in \mathbb{R}^{T \times C}$ provided by backpropagation. The outputs are the gradients $gw, gu \in \mathbb{R}^C$ and the matrices $gK, gV \in \mathbb{R}^{T \times C}$, each corresponding to the respective input. As with the forward process, the backward process can be represented in an RNN form whose complexity is linear in the token number $T$. Some intermediate variables are listed as follows:
348
+
349
+ $$
350
+ y_t^{\text{num}} = a_{t-1} + b_{t-1} + e^{u+k_t} v_t, \quad y_t^{\text{iden}} = 1 / \left( c_{t-1} + d_{t-1} + e^{u+k_t} \right), \quad y_t = y_t^{\text{num}} \cdot y_t^{\text{iden}}. \tag{10}
351
+ $$
352
+
353
+ The outputs of backward propagation are listed as follows:
354
+
355
+ $$
356
+ \begin{array}{l} \mathrm{g}w = \sum_{t=0}^{T-1} \mathrm{g}y_t \cdot y_t^{\text{iden}} \left( \frac{\mathrm{d}a_{t-1}}{\mathrm{d}w} + \frac{\mathrm{d}b_{t-1}}{\mathrm{d}w} - y_t \left( \frac{\mathrm{d}c_{t-1}}{\mathrm{d}w} + \frac{\mathrm{d}d_{t-1}}{\mathrm{d}w} \right) \right), \quad (11) \\ \mathrm{g}u = \sum_{t=0}^{T-1} \mathrm{g}y_t \cdot y_t^{\text{iden}} \cdot e^{u+k_t} \left( -y_t + v_t \right), \quad (12) \end{array}
357
+ $$
358
+
359
+ $$
360
+ \begin{array}{l} \mathrm{g}k_t = \mathrm{g}b_t \cdot e^{k_t} v_t - \mathrm{g}d_t \cdot e^{k_t} + \mathrm{g}y_t \cdot y_t^{\text{iden}} \left( e^{k_t+u} v_t - y_t \cdot e^{k_t+u} \right) \\ \qquad\;\; +\, \mathrm{g}a_t \cdot e^{k_t} v_t - \mathrm{g}c_t \cdot e^{k_t}, \end{array} \tag{13}
361
+ $$
362
+
363
+ $$
364
+ \mathrm{g}v_t = \mathrm{g}b_t \cdot e^{k_t} + \mathrm{g}a_t \cdot e^{k_t} + \mathrm{g}y_t \cdot y_t^{\text{iden}} \cdot e^{k_t+u}. \tag{14}
365
+ $$
366
+
367
+ The RNN states, with their recurrence relations and initial values, are provided in Table 6. From the recurrence relations, every update has complexity $O(C)$, i.e., the number of FLOPs per update is fixed. The final backward complexity is therefore $O(sTC)$, where $s$ denotes the total FLOPs across all the update equations.
368
+
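To make the linear-time mechanism concrete, here is an intentionally simplified scalar sketch (not the paper's per-channel CUDA implementation): one forward recurrence accumulates past tokens (states `a`, `c`) and one backward-initialized recurrence holds future tokens (states `b`, `d`), reproducing the quadratic-time definition in $O(T)$. The function names and the scalar treatment of $w$ and $u$ are illustrative assumptions.

```python
import math

def biwkv_direct(w, u, k, v):
    # O(T^2) reference: token t aggregates every other token with an
    # exponential decay in distance, plus a bonus term for itself.
    T = len(k)
    lam = math.exp(-w / T)
    out = []
    for t in range(T):
        num = math.exp(u + k[t]) * v[t]
        den = math.exp(u + k[t])
        for i in range(T):
            if i != t:
                wgt = lam ** (abs(t - i) - 1) * math.exp(k[i])
                num += wgt * v[i]
                den += wgt
        out.append(num / den)
    return out

def biwkv_recurrent(w, u, k, v):
    # O(T): (a, c) accumulate past tokens going forward, while (b, d)
    # start as sums over all future tokens and are peeled off per step.
    T = len(k)
    lam = math.exp(-w / T)
    a = c = 0.0
    b = sum(lam ** (i - 1) * math.exp(k[i]) * v[i] for i in range(1, T))
    d = sum(lam ** (i - 1) * math.exp(k[i]) for i in range(1, T))
    out = []
    for t in range(T):
        out.append((a + b + math.exp(u + k[t]) * v[t])
                   / (c + d + math.exp(u + k[t])))
        a = lam * a + math.exp(k[t]) * v[t]   # past states decay forward
        c = lam * c + math.exp(k[t])
        if t + 1 < T:                          # future states shrink backward
            b = (b - math.exp(k[t + 1]) * v[t + 1]) / lam
            d = (d - math.exp(k[t + 1])) / lam
    return out

k, v = [0.3, -0.1, 0.7, 0.2], [1.0, 2.0, -1.0, 0.5]
ref, fast = biwkv_direct(0.5, 0.1, k, v), biwkv_recurrent(0.5, 0.1, k, v)
assert all(abs(x - y) < 1e-9 for x, y in zip(ref, fast))
```

The assertion checks that the recurrent form matches the direct definition on a toy sequence.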
369
+ ![](images/6e4080f0bbc6d540ec4011b149b8e1d70b7251e5aca2ee60ef2faaaf9d5f9c0f.jpg)
370
+ Figure 4: Performance of VRWKV and DeiT (Touvron et al., 2021a) on ImageNet-1K (Deng et al., 2009). All models are trained at a fixed resolution of $224 \times 224$ and evaluated at different resolutions. Our VRWKV shows clear robustness across resolutions.
371
+
372
+ ![](images/baed1edb326cebd73711f89962c5c023c6e6a5c8ceebdfc8ff1af58e05336839.jpg)
373
+
374
+ ![](images/f1e731eec8793cbe342dd05898b9bc8d0414cb91a6ba4f9ea818b8e939de274f.jpg)
375
+
376
+ <table><tr><td>Method</td><td>#Param</td><td>Top-1 Acc</td></tr><tr><td>Vim-T</td><td>7M</td><td>76.1</td></tr><tr><td>VRWKV-T</td><td>6M</td><td>75.1</td></tr><tr><td>Vim-S</td><td>26M</td><td>80.5</td></tr><tr><td>VRWKV-S</td><td>24M</td><td>80.1</td></tr><tr><td>Vim-B</td><td>98M</td><td>81.9</td></tr><tr><td>VRWKV-B</td><td>94M</td><td>82.0</td></tr><tr><td>Vim-L</td><td>NA</td><td>NA</td></tr><tr><td>VRWKV-L</td><td>335M</td><td>86.0</td></tr></table>
377
+
378
+ Table 7: Comparison with Vision Mamba (Vim; Zhu et al., 2024) on ImageNet-1K (Deng et al., 2009). "NA" denotes not available.
379
+
380
+ # A.1.2 IMPLEMENTATION DETAILS
381
+
382
+ A numerical trick for computing the exponential safely (Peng et al., 2023; Milakov & Gimelshein, 2018) is used to avoid overflow in the exponential terms of the recurrence during the forward and backward processes. An example for the update of state $a$ is shown as follows:
383
+
384
+ $$
385
+ q := \max \left( p_{t-1} - w/T, \; k_t \right), \tag{15}
386
+ $$
387
+
388
+ $$
389
+ a' = \exp \left( -w/T + p_{t-1} - q \right) \cdot a' + \exp \left( k_t - q \right) \cdot v_t, \tag{16}
390
+ $$
391
+
392
+ $$
393
+ p_t = q. \tag{17}
394
+ $$
395
+
396
+ The exponential terms in the new state $a'$ are forced to be smaller than 1 by subtracting the maximum value $q$. The subtracted part, stored in $p$, cancels out automatically when calculating $wkv$.
397
+
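A minimal scalar sketch of this trick (function names are illustrative): the state is stored as a pair $(a', p)$ with $a = a' \cdot e^{p}$, so every argument passed to `exp` is at most 0 and the update never overflows, even for keys far beyond float range.

```python
import math

def naive_step(a, w_over_T, k, v):
    # Unsafe update a <- e^{-w/T} a + e^k v: exp(k) overflows for large k.
    return math.exp(-w_over_T) * a + math.exp(k) * v

def stable_step(a_prime, p, w_over_T, k, v):
    # Safe update (Eqs. 15-17): the running max exponent is tracked in p,
    # so both exp() arguments below are <= 0.
    q = max(p - w_over_T, k)
    a_prime = math.exp(-w_over_T + p - q) * a_prime + math.exp(k - q) * v
    return a_prime, q

# The two recurrences agree in the moderate regime ...
a, (ap, p) = 0.0, (0.0, float("-inf"))
for k, v in [(1.0, 2.0), (0.5, -1.0), (2.0, 0.3)]:
    a = naive_step(a, 0.1, k, v)
    ap, p = stable_step(ap, p, 0.1, k, v)
assert abs(a - ap * math.exp(p)) < 1e-9
# ... and the stable form still works where exp(900) would overflow.
ap, p = stable_step(1.0, 800.0, 0.1, 900.0, 1.0)
assert math.isfinite(ap)
```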
398
+ # A.2 ROBUSTNESS ON IMAGE RESOLUTION
399
+
400
+ Settings. In this experiment, we aim to explore whether the proposed VRWKV exhibits distinct properties compared to ViT. To this end, we evaluated the performance of different variants of DeiT (Touvron et al., 2021a) and VRWKV at different resolutions on the ImageNet-1K (Deng et al., 2009) classification task. While the training was standardized at a resolution of $224 \times 224$ , we evaluated the models across a range of resolutions, from $224 \times 224$ to $1024 \times 1024$ .
401
+
402
+ Results. As shown in Figure 4, our VRWKV demonstrates stronger robustness when evaluated on a higher resolution. In contrast to DeiT (Touvron et al., 2021a), VRWKV performs better as the resolution slightly increases. For example, VRWKV-B achieved a top-1 accuracy of $82.5\%$ at a $384 \times 384$ resolution, marking an improvement of 0.5 points over its accuracy at the training resolution. When the test resolution scales up to $1024 \times 1024$ , VRWKV-B still maintains an accuracy of $67.2\%$ , while DeiT-B only achieves an accuracy of $57.5\%$ . This indicates that VRWKV has stronger potential and robustness in high-resolution scenarios and is a good alternative to ViT for high-resolution tasks.
403
+
404
+ # A.3 COMPARISON TO VISION MAMBA
405
+
406
+ In the field of visual linear attention mechanisms, Vision Mamba (Vim; Zhu et al., 2024) stands out as a significant implementation that has garnered considerable interest. In this study, we compare the performance and efficiency of the two models on visual tasks.
407
+
408
+ Classification Performance. We compare the classification accuracy on ImageNet-1K (Deng et al., 2009). As reported in Table 7, Vim achieves higher top-1 accuracy at the tiny and small sizes, while the base-size models achieve comparable performance. Benefiting from our careful stability design, VRWKV can scale up to larger models, while Vim faces instability issues during training.
409
+
410
+ ![](images/4d4ca0fb40b315a0569910200ac16db11308867347f1b6c9379beb78c8e4d1f8.jpg)
411
+ Figure 5: Inference time of attention mechanisms. Input resolutions are scanned from 224 to 1024. All experiments are run on Nvidia A100.
412
+
413
+ Inference Efficiency. We compare the inference speed of three attention mechanisms: Vanilla Attn (Vaswani et al., 2017), Bi-WKV, and Vision Mamba, shown in Figure 5. As the input resolution increases, the inference cost of vanilla attention quickly surpasses that of Bi-WKV and Vision Mamba. With our optimizations and design on CUDA, our Bi-WKV demonstrates faster speeds than Vision Mamba at the same input resolution.
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e12b31f37c4205be79a4a86a5ac5dd157e8365146d6c3d84fe97f6deefdbe0f8
3
+ size 666025
ICLR/2025/Vision-RWKV_ Efficient and Scalable Visual Perception with RWKV-Like Architectures/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:336eb6b08b01483fc188accfef5ea153006d03c33f29f92019077015e6b3b017
3
+ size 530832
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06d774e5da119f8c024415f470661f3217c8559dd805c60cf61a35b1a7eea083
3
+ size 138307
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56eff162a6e7eaf397ee795da1d53df914548dfe7112c05aba84b17a7433225d
3
+ size 161071
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/2020a5d8-8636-48f9-860e-924ffec09982_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59c9abb67086d2050e043b9a4167f6dfa5b89ad9b1a621df49bb43870e9f8ff8
3
+ size 3778888
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/full.md ADDED
@@ -0,0 +1,643 @@
1
+ # VISUALPREDICATOR: LEARNING ABSTRACT WORLD MODELS WITH NEURO-SYMBOLIC PREDICATES FOR ROBOT PLANNING
2
+
3
+ Yichao Liang $^{1}$ , Nishanth Kumar $^{3}$ , Hao Tang $^{2}$ , Adrian Weller $^{1,6}$ , Joshua B. Tenenbaum $^{3}$ ,
4
+
5
+ Tom Silver<sup>4</sup>, João F. Henriques<sup>5</sup>, Kevin Ellis<sup>2</sup>
6
+
7
+ <sup>1</sup>University of Cambridge, <sup>2</sup>Cornell University, <sup>3</sup>Massachusetts Institute of Technology, <sup>4</sup>Princeton University, <sup>5</sup>University of Oxford, <sup>6</sup>The Alan Turing Institute
8
+
9
+ # ABSTRACT
10
+
11
+ Broadly intelligent agents should form task-specific abstractions that selectively expose the essential elements of a task, while abstracting away the complexity of the raw sensorimotor space. In this work, we present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations. We outline an online algorithm for inventing such predicates and learning abstract world models. We compare our approach to hierarchical reinforcement learning, vision-language model planning, and symbolic predicate invention approaches, on both in- and out-of-distribution tasks across five simulated robotic domains. Results show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Planning and model-based decision-making for robotics demands an understanding of the world that is both perceptually and logically rich. For example, a household robot needs to know that slippery objects, such as greasy spatulas, are hard to grasp. Determining if the spatula is greasy is a subtle perceptual problem. As an example of logical richness, for a robot to use a balance beam to weigh objects, it must count up the mass on each side of the balance beam to determine which way the beam will tip. Counting and comparing masses are logically sophisticated operations.
16
+
17
+ In this work, we show how to efficiently learn symbolic abstractions that are both perceptually and logically rich, and which can plug into standard robot task-planners to solve long-horizon tasks. We consider a robot that encounters a new environment involving novel physical mechanisms and new kinds of objects, and which must learn how to plan in this new environment from relatively few environment interactions (the equivalent of minutes or hours of training experience). The core of our approach is to learn an abstract model of the environment in terms of Neuro-Symbolic Predicates (NSPs, see Fig. 1), which are snippets of Python code that can invoke vision-language models (VLMs) for querying perceptual properties, and further algorithmically manipulate those properties using Python, in the spirit of ViperGPT and VisProg (Surís et al., 2023; Gupta & Kembhavi, 2022).
18
+
19
+ In contrast, traditional robot task planning uses hard-coded symbolic world models that cannot adapt to novel environments (Garrett et al., 2021; Konidaris, 2019). Recent works pushed in this direction with limited forms of learning that restrict the allowed perceptual and logical abstractions, and which further require demonstration data instead of having the robot explore on its own (Silver et al., 2023; Konidaris et al., 2018). The representational power of Neuro-Symbolic Predicates allows a much broader set of perceptual primitives (essentially anything a VLM can perceive) and also deeper logical structure (in principle, anything computable in Python).
20
+
21
+ ![](images/23a13ba40ccab8d447d8892b0ce8413d34af4df0ae994d00ed66e8d1b4a68350.jpg)
22
+ Figure 1: Robot learning domains illustrating learned Neuro-Symbolic predicates. In (A) we learn a predicate that queries a VLM to check if a coffee jug is inside a coffee machine. In (B) we learn a predicate that checks if a balance beam is balanced. (Code lightly refactored to better fit in figure.)
23
+
24
+ Yet there are steep challenges when learning Neuro-Symbolic Predicates to enable effective planning. First, the predicates must be learned from input pixel data, which is extremely complex and potentially noisy. Second, they should not overfit to the situations encountered during training, and instead zero-shot generalize to complex new tasks at test time. Third, we need an efficient way of exploring different possible plans to collect the data needed to learn good predicates. To address these challenges we architect a new robot learning approach that interleaves proposing new predicates (using VLMs), predicate scoring/validation (adapting the modern predicate-learning algorithm by Silver et al. (2022)), and goal-driven exploration with a planner in the loop. The resulting architecture is then able to successfully learn across five different simulated environments, and is more flexible and more sample-efficient compared to competing neural, symbolic, and LLM baselines.
25
+
26
+ We highlight the following contributions: (1) $NSPs$ , a state representation for decision-making using both logically and perceptually rich features; (2) An algorithm for inventing $NSPs$ by interacting with an environment, including an extension to a new operator learning algorithm; and (3) Evaluation against 6 methods across 5 simulated robotics tasks.
27
+
28
+ # 2 PROBLEM FORMULATION
29
+
30
+ We consider the problem of learning state abstractions for robot planning over continuous state/action spaces, and doing so from online interaction with the environment, rather than learning from human-provided demonstrations. We assume a predefined inventory of basic motor skills, such as pick/place, and also assume a basic object-centric state representation (explained further below), which is a common assumption (Kumar et al., 2024; Silver et al., 2023; 2022). The goal is to learn state abstractions from training tasks that generalize to held-out test tasks, enabling the agent to solve as many test tasks as possible while using minimal planning budget.
31
+
32
+ Tasks. A task $T$ is a tuple $\langle \mathcal{O}, x_0, g \rangle$ of objects $\mathcal{O}$ , initial state $x_0$ , and goal $g$ . The allowed states depend on the objects $\mathcal{O}$ , so we write the state space as $\mathcal{X}_{\mathcal{O}}$ (or just $\mathcal{X}$ when the objects are clear from context). Each state $x \in \mathcal{X}_{\mathcal{O}}$ includes a raw RGB image and associated object features, such as 3D object position.
33
+
34
+ **Environments.** Tasks occur within an environment $\mathcal{E}$ , which is a tuple $\langle \mathcal{U}, \mathcal{C}, f, \Lambda \rangle$ where $\mathcal{U} \subseteq \mathbb{R}^m$ is a low-level action space (e.g. motor torques), $\mathcal{C}$ is a set of controllers for low-level skills (e.g. pick/place), $f: \mathcal{X} \times \mathcal{U} \to \mathcal{X}$ is a transition function, and $\Lambda$ is a set of object types (possible outputs of an object classifier). The environment is shared across tasks.
35
+
36
+ Built-in Motor Skills. We assume skills $\mathcal{C}$ , each of which has parameters that abstract over which object(s) the skill acts on. For example, the agent can apply a skill such as Place (?block1, ?block2) to stack any pair of blocks atop one another, where a block is a type in $\Lambda$ . We assume the agent can determine whether a skill has been successfully executed upon completion. Skills can be modeled within the options framework (Sutton et al., 1999). The skills $\mathcal{C}$ and the objects $\mathcal{O}$ induce an action space $\mathcal{A}_{\mathcal{O}}$ (or simply $\mathcal{A}$ when the context is clear).
37
+
38
+ Skills, tasks, and environments are the primary inputs to our system. The primary outputs—what we actually learn—are higher-level abstractions over these basic states and actions.
39
+
40
+ Predicates: Abstracting the State. A predicate $\psi$ is a Boolean feature of a state, which can be parametrized by specific objects in that state. We treat this as a function $\psi : \mathcal{O}^m \to (\mathcal{X} \to \mathbb{B})$ that indicates, given $m$ objects, whether the predicate holds in a state. For example, the predicate On(?block1, ?block2) inputs a pair of blocks, and outputs a state classifier for whether the first block is atop the second block. A set of predicates $\Psi$ induces an abstract state corresponding to all the predicate/object combinations that hold in the current state:
41
+
42
+ $$
+ \mathrm{ABSTRACT}_{\Psi}(x) = \left\{(\psi, o_1, \dots, o_m) : \psi(o_1, \dots, o_m) \text{ holds in state } x, \text{ for } \psi \in \Psi \text{ and } o_j \in \mathcal{O}\right\} \tag{1}
+ $$
+
+ We write $\mathcal{S}$ for the set of possible abstract states.
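A minimal sketch of Eq. 1 (the `Predicate` class and its interface here are illustrative, not the paper's API): the abstract state is simply the set of ground atoms that evaluate to true in the current raw state.

```python
from itertools import product
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Predicate:
    name: str
    arity: int
    holds: Callable  # (objects tuple, raw state) -> bool

def abstract_state(predicates, objects, x):
    # Eq. 1: collect every ground atom (psi, o_1, ..., o_m) that holds in x.
    atoms = set()
    for psi in predicates:
        for objs in product(objects, repeat=psi.arity):
            if psi.holds(objs, x):
                atoms.add((psi.name,) + objs)
    return atoms

# Toy example: the raw state maps each block to the object supporting it.
On = Predicate("On", 2, lambda objs, x: x.get(objs[0]) == objs[1])
x = {"blockA": "blockB"}
assert abstract_state([On], ["blockA", "blockB"], x) == {("On", "blockA", "blockB")}
```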
43
+
44
+ High-Level Actions: Refining the action space. Planning requires predicting how each skill transforms the abstract state representation. To make these predictions, High-Level Actions (HLAs) augment skills with preconditions specifying which abstract states allow successful use of that skill, and postconditions, specifying how the skill transforms the abstract state. Like predicates, an HLA is parametrized by the specific objects it acts upon. Formally, an HLA $\omega$ is a function from a tuple of objects in $\mathcal{O}^m$ to a tuple $\langle \pi, \mathrm{PRE}, \mathrm{EFF}^+, \mathrm{EFF}^- \rangle$ where $\pi \in \mathcal{A}_{\mathcal{O}}$ is a skill, PRE is the precondition, and the postcondition consists of $\mathrm{EFF}^+$ (predicates added to the abstract state) and $\mathrm{EFF}^-$ (predicates removed from the abstract state).
45
+
46
+ As an example of an HLA, consider PlaceOnTable(?block, ?table, ?underBlock), with PRE = {Clear(?block)}, EFF^ = {On(?block, ?table)}, and EFF^- = {On(?block, ?underBlock)}, using skill $\pi =$ place(?block, ?table). This means placing a block on a table, which was previously on top of underBlock, causes the block to be on the table, and no longer on top of underBlock. This HLA is formally a function with arguments ?block, ?table, ?underBlock.
47
+
48
+ HLA Notation. We write $\Omega$ for the set of HLAs (what the agent learns), and $\Omega_{\mathcal{O}}$ for their instantiations on objects $\mathcal{O}$ (how the agent uses them in a particular task). We use the variable $\omega$ for HLAs, so we would write $\omega \in \Omega$ . We use $\underline{\omega}$ for HLAs applied to particular objects, so we'd write $\underline{\omega} = \langle \pi, \underline{\mathrm{PRE}}, \underline{\mathrm{EFF}}^{+}, \underline{\mathrm{EFF}}^{-} \rangle \in \Omega_{\mathcal{O}}$ .<sup>2</sup>
49
+
50
+ Abstract State Transitions. The predicates and HLAs together define an abstract world model, whose transition function $F: \mathcal{S} \times \Omega_{\mathcal{O}} \to \mathcal{S}$ is
51
+
52
+ $$
53
+ F \left( s, \underline{\omega} = \langle \underline{\pi}, \underline{\mathrm{PRE}}, \underline{\mathrm{EFF}}^{+}, \underline{\mathrm{EFF}}^{-} \rangle \right) = \begin{cases} \left( s \cup \underline{\mathrm{EFF}}^{+} \right) \setminus \underline{\mathrm{EFF}}^{-} & \text{if } \underline{\mathrm{PRE}} \subseteq s \\ \text{undefined} & \text{otherwise} \end{cases} \tag{2}
54
+ $$
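Eq. 2 is straightforward to implement over sets of ground atoms. In the sketch below (field names are illustrative stand-ins for $\langle \pi, \mathrm{PRE}, \mathrm{EFF}^{+}, \mathrm{EFF}^{-} \rangle$), `None` plays the role of "undefined":

```python
def apply_hla(s, pre, eff_add, eff_del):
    # Abstract transition F (Eq. 2): defined only when the
    # preconditions hold in the abstract state s.
    if not pre <= s:
        return None  # "undefined": HLA not applicable here
    return (s | eff_add) - eff_del

# PlaceOnTable-style example: b1 moves from b2 to the table.
s = frozenset({("Clear", "b1"), ("On", "b1", "b2")})
s2 = apply_hla(s,
               pre={("Clear", "b1")},
               eff_add={("On", "b1", "table")},
               eff_del={("On", "b1", "b2")})
assert s2 == {("Clear", "b1"), ("On", "b1", "table")}
assert apply_hla(s, {("Holding", "b1")}, set(), set()) is None
```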
55
+
56
+ Having learned predicates and high-level actions, we then solve problems by hierarchical planning:
57
+
58
+ A low-level plan is a sequence of $n$ skills applied to objects $(\pi_1, \ldots, \pi_n) \in \mathcal{A}_\mathcal{O}^n$ . It solves a task with goal $g$ and initial state $x_0$ if sequencing those skills starting from $x_0$ satisfies $g$ .
59
+
60
+ A high-level plan is a sequence of $n$ HLAs applied to objects, $\underline{\omega}_1,\dots ,\underline{\omega}_n$
61
+
62
+ A note on types. Because the environment provides object types, we augment predicates and HLAs with typing information for each object-valued argument. Equivalently, predicates return false, and skills terminate immediately with failure, when applied to arguments of the wrong type.
63
+
64
+ # 3 NEURO-SYMBOLIC PREDICATES
65
+
66
+ Neuro-Symbolic Predicates (NSPs) represent visually grounded yet logically rich abstractions that enable efficient planning and problem solving. As Figure 2 illustrates, these predicates are neuro-symbolic because they combine programming language constructs (conditionals, numerics, loops and recursion) with API calls to neural vision-language models for evaluating visually-grounded natural language assertions. NSPs can be grounded in visual perception, and also in proprioceptive and object-tracking features, such as object poses, common in robotics (Kumar et al., 2024; 2023b; Curtis et al., 2022; 2024b). We consider two classes of NSPs: primitive and derived. Primitive NSPs are evaluated directly on the raw state, such as Holding(obj) (which would use VLM queries) or GripperOpen() (which would use proprioception). Derived NSPs instead determine their truth value based on the truth value of other NSPs, analogous to derived predicates in planning (Thiebaux et al., 2005; McDermott et al., 1998).
67
+
68
+ Primitive NSPs. We provide a Python API for computing over the raw state, including the ability to crop the image to particular objects and query a VLM in natural language. See Appendix A.
69
+
70
+ Derived NSPs. Instead of querying the raw state, a derived $NSP$ computes its truth value based only on the truth value of other NSPs. Derived $NSPs$ handle logically rich relations, such as OnPlate in fig. 2, which recursively computes if a block is on a plate, or on something that is on a plate.
71
+
72
+ Evaluating Primitive NSPs. No VLM is $100\%$ accurate, even for simple queries like "is the robot holding the jug?", especially in partially observable environments. To increase the accuracy and precision of NSPs, we take the following two measures (see Appendix B.1 for an example prompt).
73
+
74
+ First, because a single image may not uniquely identify the state (e.g. due to occlusion), we provide extra context to VLM queries. Consider a robot whose gripper is next to a jug, but whose own arm occludes the jug handle, making it uncertain whether the jug is held by the gripper or merely next to it. Knowing the previous action (e.g. Pick(jug)) helps resolve this uncertainty. We therefore further condition $NSPs$ on the previous action, as well as the previous visual observation (immediately before the previous action was executed) and previous truth values for the queried ground atom.
75
+
76
+ Second, we visually label each object in the scene by overlaying a unique ID number on each object in the RGB image (following Yang et al., 2023). That way, to evaluate for example Holding(block2), we can query a VLM with "the robot is holding block2", where block2
77
+
78
+ ```python
79
+ def Holding(state: RawState, objects: Sequence[Object]) -> bool:
80
+     '''Is the robot holding the block.'''
81
+     block, = objects
82
+     # The block can't be held if the robot's hand is open.
83
+     robot = state.get_objects(_robot_type)[0]
84
+     if state.get(robot, "fingers") >= 0.5:
85
+         return False
86
+     attention_image = state.crop_to_objects([block, robot])
87
+     return evaluate_simple_assertion(f"{block.id_name} is held by the robot", attention_image)
88
+ def OnPlate(atoms: Set[GroundAtom], objects: Sequence[Object]) -> bool:
89
+     '''Whether a block x is directly or transitively on a plate y.'''
90
+     block, plate = objects
91
+     for atom in atoms:
92
+         if atom.predicate == DirectlyOnPlate and atom.objects == [block, plate]:
93
+             return True
94
+     other_blocks = {a.objects[0] for a in atoms if a.predicate == DirectlyOn or a.predicate == DirectlyOnPlate}
95
+     for other_block in other_blocks:
96
+         holds1 = False
97
+         for atom in atoms:
98
+             if atom.predicate == DirectlyOn and atom.objects == [block, other_block]:
99
+                 holds1 = True
100
+                 break
101
+         if holds1 and OnPlate(atoms, [other_block, plate]):
102
+             return True
103
+     return False
104
+ ```
105
+
106
+ Figure 2: Example classifiers for Holding and OnPlate NSP.
107
+
108
+ is labeled with "2." This disambiguates the objects in a scene, allowing an NSP to reason precisely about which block is held, rather than merely represent that some block is held.
109
+
110
How Derived NSPs interact with HLAs. HLAs form an abstract world model that predicts which predicates are true after performing a skill (the postcondition). Derived predicates do not need to occur in the postcondition, because we can immediately calculate which derived predicates are true based on the predicted truth values of primitive NSPs. Therefore, HLAs can have derived predicates in the precondition, but never in the postcondition.

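To make this concrete, here is a small sketch (hypothetical atom encoding; `apply_hla` and `derive_on_plate` are illustrative, not the paper's implementation) of predicting primitive atoms with an HLA's effect sets and then recomputing a derived predicate rather than storing it:

```python
from typing import FrozenSet, Tuple

# A ground atom is (predicate_name, args). Primitive atoms are predicted by
# HLA effects; derived atoms are recomputed from them after every step.
Atom = Tuple[str, Tuple[str, ...]]

def apply_hla(primitive: FrozenSet[Atom], add: FrozenSet[Atom],
              delete: FrozenSet[Atom]) -> FrozenSet[Atom]:
    """Predict the next primitive atoms from an HLA's add/delete effects."""
    return (primitive - delete) | add

def derive_on_plate(primitive: FrozenSet[Atom]) -> FrozenSet[Atom]:
    """Recompute derived OnPlate atoms: a block is OnPlate if it is
    DirectlyOnPlate, or DirectlyOn a block that is (transitively) OnPlate."""
    on = {a[1] for a in primitive if a[0] == "DirectlyOn"}
    derived = {a[1] for a in primitive if a[0] == "DirectlyOnPlate"}
    changed = True
    while changed:
        changed = False
        for (top, below) in on:
            for (x, plate) in list(derived):
                if x == below and (top, plate) not in derived:
                    derived.add((top, plate))
                    changed = True
    return frozenset(("OnPlate", pair) for pair in derived)

# Stack block1 onto block2, which already sits on plate1: the HLA's
# postcondition only adds the primitive DirectlyOn atom; OnPlate is derived.
s0 = frozenset({("DirectlyOnPlate", ("block2", "plate1"))})
s1 = apply_hla(s0, add=frozenset({("DirectlyOn", ("block1", "block2"))}),
               delete=frozenset())
```

Because `derive_on_plate` is recomputed from the predicted primitive atoms, the HLA for Stack never needs OnPlate in its effects.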
# 4 HIERARCHICAL PLANNING

We use the learned abstract world model to first make a high-level plan (a sequence of HLAs), which then yields a low-level action sequence by calling the corresponding skill policy for each HLA. High-level planning leverages widely used, fast symbolic planners, which, for example, conduct A* search with automatically derived heuristics (e.g., LM-Cut; Helmert & Domshlak, 2009).

However, there may be a mismatch between a high-level plan, which depends on potentially flawed abstractions, and its actual implementation in the real world. Learning is driven by these failures. More precisely, hierarchical planning can break down in one of two ways:

Planning Failure #1: Infeasible. A high-level plan is infeasible if one of its constituent skills fails to execute.

Planning Failure #2: Not satisficing. A high-level plan is not satisficing if its constituent skills successfully execute but do not achieve the goal.

When solving a task, we generate a stream of high-level plans and execute each one until a satisficing plan (one achieving the goal) is found, or until hitting a planning budget $n_{\mathrm{abstract}}$.

# 5 LEARNING AN ABSTRACT WORLD MODEL FROM INTERACTING WITH THE ENVIRONMENT

Algorithm 1 shows how we interleave learning predicates (state abstraction), learning HLAs (abstract transition function), and interacting with the environment. The learner takes in an environment $\mathcal{E}$, a set of training tasks $\mathcal{T}$, an initial predicate set $\Psi_0$ (usually the goal predicates), an initial set of HLAs $\Omega_0$ (largely empty; section 5.1), and an initial dataset $\mathcal{D}$ (empty, except when doing transfer learning from earlier environments). It tracks its learning progress using $\rho_{\mathrm{best}}$, the highest training solve rate, and $\nu_{\mathrm{best}}$, the lowest number of infeasible plans.

<table><tr><td colspan="2">Algorithm 1 Online Pred. Invention(ℰ, T, Ψ0, Ω0, D)</td></tr><tr><td colspan="2">1: init: ρbest ← −∞, best solve rate</td></tr><tr><td colspan="2">2: init: νbest ← ∞, best number of failed plans</td></tr><tr><td colspan="2">3: init: Ψ' ← Ψ0</td></tr><tr><td colspan="2">4: for i ∈ range(1, n_max_iter) do</td></tr><tr><td colspan="2">5: Di, ρi, νi ← Explore(Ψi−1, Ωi−1, ℰ, T) ▷ section 5.1</td></tr><tr><td colspan="2">6: if ρi &gt; ρbest or (ρi = ρbest and νi &lt; νbest) then</td></tr><tr><td colspan="2">7: Ψbest, Ωbest, ρbest, νbest ← Ψi, Ωi, ρi, νi</td></tr><tr><td colspan="2">8: if νi = 0 then</td></tr><tr><td colspan="2">9: break</td></tr><tr><td colspan="2">10: D ← D ∪ Di</td></tr><tr><td colspan="2">11: if ρi ≤ ρi−1 or (ρi = ρi−1 and νi &gt; νi−1) then</td></tr><tr><td colspan="2">12: Ψ' ← Ψ' ∪ ProposeNSPs(ℰ, Ψi−1) ▷ section 5.2</td></tr><tr><td colspan="2">13: Ψi ← SelectPredicates(ℰ, Ψ') ▷ section 5.3</td></tr><tr><td colspan="2">14: Ωi ← LearnHighLevelActions(ℰ, Ψi) ▷ section 5.4</td></tr><tr><td colspan="2">15: return Ψbest, Ωbest</td></tr></table>

# 5.1 EXPLORATION

Our agent explores the environment by planning with its current predicates/HLAs and executing the plans. The agent is initialized with underspecified, mostly empty HLA(s) (that is, the preconditions and effects are mostly empty sets, except with goal predicates in the effects if appropriate, so that the planner can generate plans).<sup>3</sup> It collects data by trying to solve the training tasks (generating and executing abstract plans until the task is solved or $n_{\text{abstract}}$ plans are used, as described in section 4), gathering positive transition segments (from successfully executed skills), negative state-action tuples (from skills that failed to execute), and satisficing plans, if any.

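A sketch of the data this exploration phase collects (hypothetical names and data layout; `run_skill` abstracts executing one skill policy):

```python
from typing import List, Tuple

def collect_data(plan: List[str], run_skill) -> Tuple[list, list]:
    """Execute a high-level plan, splitting outcomes into positive transition
    segments (skill succeeded) and negative state-action tuples (skill failed
    to execute). run_skill(state, hla) -> (next_state, ok)."""
    positives, negatives = [], []
    state = "s0"  # placeholder initial observation
    for hla in plan:
        next_state, ok = run_skill(state, hla)
        if ok:
            positives.append((state, hla, next_state))
            state = next_state
        else:
            negatives.append((state, hla))
            break  # the rest of the plan is not executed
    return positives, negatives

# Toy run: the second skill fails, so exploration stops there.
def _run(state, hla):
    return (state + "->" + hla, hla != "Stack(b1, b2)")
pos, neg = collect_data(["Pick(b1)", "Stack(b1, b2)"], _run)
```

The positive segments later supply HLA effects, while the negative tuples constrain preconditions (sections 5.2 and 5.4).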
# 5.2 PROPOSING PREDICATES

We introduce three strategies for prompting VLMs to invent diverse, task-relevant predicates – two that are conditioned on collected data, and one that is not (see Appendix B.4 for further details).

Strategy #1 (Discrimination) helps discover predicates that make good preconditions for skills. We prompt a VLM with example states where a skill succeeded and where it failed, and ask it to generate code that predicts when the skill is applicable.

Strategy #2 (Transition Modeling) helps discover predicates useful as postconditions. We prompt a VLM with before/after snapshots of successful skill executions, and ask it to generate code describing properties that held before (or after, respectively) the transition.

Strategy #3 (Unconditional Generation) prompts VLMs to propose new predicates as logical extensions of existing ones (whether built-in or previously proposed), without conditioning on collected planning data. This strategy helps create derived predicates.

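As an illustration of Strategy #1, a discrimination prompt might be assembled along these lines (purely a hypothetical sketch; the actual templates are given in Appendix B.4, and the referenced images are attached alongside the text):

```python
def discrimination_prompt(skill: str, n_success: int, n_failure: int) -> str:
    """Build a prompt asking a VLM to write a predicate separating states
    where `skill` succeeded from states where it failed."""
    return (
        f"The skill {skill} succeeded in images 1-{n_success} and failed in "
        f"images {n_success + 1}-{n_success + n_failure}. Propose a predicate, "
        "as a Python classifier over the state, that is true exactly when "
        f"{skill} is applicable."
    )

prompt = discrimination_prompt("Pick(robot, block)", 3, 2)
```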
# 5.3 SELECTING A PREDICATE SET

VLM-generated predicates typically have low precision—not all generations are useful or sensible—and too many predicates will overfit the model to what little data it has collected. One solution is the propose-then-select paradigm of Silver et al. (2023), who propose an effective predicate selection objective, but it requires around 50 expert plan demonstrations. We assume no demonstration data, and in general we might not find any satisficing plans early in learning. We therefore need a new way of learning from unsuccessful plans.

To address this, we devise a novel objective that scores a set of predicates $\Psi$ based on classification accuracy, plus a simplicity bias. The score is obtained by first learning HLAs using the predicate set $\Psi$ (discussed in section 5.4) and then computing the classification accuracy of those HLAs (see Appendix B.3). Later in learning, after discovering enough satisficing plans (a hyperparameter one can choose), we switch to the objective of Silver et al. (2023), which takes planning efficiency and simplicity into account.

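In spirit, the score might look like the following (a deliberately simplified sketch with a hypothetical `complexity_weight`; the full objective is described in Appendix B.3):

```python
def predicate_set_score(accuracy: float, n_predicates: int,
                        complexity_weight: float = 0.01) -> float:
    """Lower is better: the classification error of the HLAs learned with
    this predicate set, plus a simplicity bias penalizing large sets."""
    return (1.0 - accuracy) + complexity_weight * n_predicates

# A slightly less accurate but much smaller set can win.
small = predicate_set_score(accuracy=0.95, n_predicates=3)
large = predicate_set_score(accuracy=0.96, n_predicates=12)
```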
We perform a greedy best-first search (GBFS) with either score function as the heuristic. The search starts from the set of goal predicates $\Psi_G$, adds a single new predicate from the proposed candidates at each step, and finally returns the predicate set with the best objective value. This effectively selects $\Psi_i \gets \arg \min_{\Psi \subseteq \Psi'} J(\Psi)$, where $\Psi'$ is the candidate pool and $J(\cdot)$ is the objective function. In our experiments, we found the search space small enough that this typically takes just a few minutes on a single CPU. For larger search spaces, local hill climbing could be used in place of GBFS.

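The selection step can be sketched as a greedy search over predicate sets (hypothetical `score` interface; lower scores are better):

```python
def greedy_select(goal_predicates: frozenset, candidates: set, score) -> frozenset:
    """Greedily grow a predicate set: starting from the goal predicates,
    repeatedly add any single candidate that improves the score; stop when
    no addition helps."""
    current = frozenset(goal_predicates)
    best = score(current)
    while True:
        improved = False
        for p in sorted(candidates - current):
            trial = current | {p}
            s = score(trial)
            if s < best:
                best, current, improved = s, trial, True
        if not improved:
            return current

# Toy objective: "Holding" and "OnPlate" are each worth more than they cost.
useful = {"Holding", "OnPlate"}
def _score(ps):
    return 10 - 3 * len(ps & useful) + 1 * len(ps)
selected = greedy_select(frozenset({"CoveredGoal"}),
                         {"Holding", "OnPlate", "IsRed"}, _score)
```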
# 5.4 LEARNING HIGH-LEVEL ACTIONS

We further learn high-level actions $\Omega$, which define an abstract transition model in the learned predicate space, from interactions with the environment. We follow the cluster and intersect operator learning algorithm (Chitnis et al., 2022) and improve its precondition learner for more efficient exploration and better generalization. Chitnis et al. (2022) assume demonstration trajectories are given and learn restricted preconditions so that plans stay close to the demonstrations. Our agent, by contrast, explores the environment from scratch with no demonstrations to imitate, and it needs a more optimistic world model to explore unseen situations. Our precondition learner ensures that each transition in the dataset is modeled by one and only one high-level action, and it minimizes the syntactic complexity of each HLA to encourage optimistic world models. See Appendix B.2 for details.

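The intersection step at the heart of cluster and intersect can be sketched as follows (hypothetical data layout; our extension additionally requires each transition to be covered by exactly one HLA and biases toward syntactically simpler preconditions):

```python
from typing import FrozenSet, List

def learn_precondition(abstract_states: List[FrozenSet[str]]) -> FrozenSet[str]:
    """Intersect the abstract states in which a skill executed successfully:
    only atoms true in *every* such state survive as the precondition, which
    yields the least restrictive (most optimistic) consistent model."""
    assert abstract_states, "need at least one successful transition"
    pre = set(abstract_states[0])
    for s in abstract_states[1:]:
        pre &= s
    return frozenset(pre)

# Pick succeeded once over a red block and once over a blue one; only
# HandEmpty and Reachable survive the intersection.
states = [
    frozenset({"HandEmpty", "Reachable", "IsRed"}),
    frozenset({"HandEmpty", "Reachable", "IsBlue"}),
]
pre = learn_precondition(states)
```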
# 6 EXPERIMENTS

We design our experiments to answer the following questions: (Q1) How well does our NSP representation and predicate-invention approach compare to other state-of-the-art methods, including popular HRL and VLM planning approaches? (Q2) How do the abstractions learned by our method perform relative to manually designed abstractions and to the abstractions before any learning? (Q3) How effective is our NSP representation compared to traditional symbolic predicates, where classifiers are based on manually selected object features? (Q4) What is the contribution of our extended operator learning algorithm to overall performance?

Experimental Setup. We evaluated seven different approaches across five robotic environments simulated using the PyBullet physics engine (Coumans & Bai, 2016). Each result is averaged over five random seeds, and for each seed, we sample 50 test tasks that feature more objects and more complex goals than those encountered during training. The agent is provided with 5 training tasks in the Cover and Coffee environments, 10 tasks in Cover Heavy and Balance, and 20 tasks in Blocks. The planning budget $n_{\mathrm{abstract}}$ is set to 8 for all domains except Coffee, where it is set to 100. The approaches are run on a single CPU except for the MAPLE baseline, which uses a GPU.

**Environments.** We briefly describe the environments used, including their hand-coded closed-loop controllers, which are shared across all approaches. Additional details can be found in appendix D.

1. Cover. The robot is tasked with picking and placing specific blocks to cover designated regions on the table, using Pick and Place skills. Training tasks involve 2 blocks and 2 targets, while test tasks increase the difficulty with 3 blocks and 3 targets.
2. Blocks. The robot must construct towers of blocks according to a specified configuration, using Pick, Stack, and PlaceOnTable skills. The agent is trained on tasks involving 3 or 4 blocks and tested on more challenging tasks with 5 or 6 blocks.
3. Coffee. The robot is tasked with filling cups with coffee. This involves picking up and placing a jug into a coffee machine, making coffee, and pouring it into the cups. The jug may start at a random rotation, requiring the robot to rotate it before it can be picked up. The environment provides 5 skills: Twist, Pick, Place, TurnMachineOn, and Pour. Training tasks involve filling 1 cup, while test tasks require filling 2 or 3 cups.
4. Cover Heavy. This is a variant of Cover with "impossible tasks" that ask the robot to pick and place white marble blocks that are too heavy for it to lift. The environment retains the same controllers and number of objects as the standard Cover environment. An impossible task is considered correctly solved if the agent determines that the goal is unreachable with its existing skills (i.e., no feasible plan can be generated).
5. Balance. In this environment, the agent is tasked with turning on a machine by pressing a button in front of it, but without prior knowledge of the mechanism required to activate it (in this case, balancing an equal number of blocks on both sides). The agent has access to a PressButton skill, along with the skills from the Blocks domain. Training tasks involve 2 or 4 blocks, while test tasks increase the difficulty with 4 or 6 blocks.

Approaches. We compare our approach against 5 baselines and a manually designed state abstraction.

1. Ours. Our main approach.

![](images/17eadbc433802cbee4a64621b87e8389cbcbbf8194230b956c9b9d1812c302f0.jpg)
Figure 3: Environments. Top row: train task examples. Bottom row: evaluation task examples.

![](images/3f5a00c3ef11cae34f85b4087e9c2ce2c02944ad4df88996c6596c2370146359.jpg)
Figure 4: Top row: percentage solved for different domains $(\uparrow)$. Bottom: percentage of planning budget used to find the satisficing plans $(\downarrow)$. The dashed line shows the minimal number of plans needed to solve all the tasks (1 plan per task).

2. MAPLE. An HRL baseline that learns a Q-function to select high-level actions, but does not explicitly learn predicates or perform planning. This is inspired by the recent work on MAPLE (Nasiriany et al., 2022b). While we have extended the original work with goal-conditioning, the implementation still cannot handle goals involving more objects than seen during training; hence, we evaluate this approach only on tasks from the training distribution.
3. ViLa (Hu et al., 2023). A VLM planning baseline that zero-shot prompts a VLM to plan a sequence of actions, without learning.
4. Sym. pred. A baseline that uses the same online learning algorithm but, when writing predicates, only has access to object features commonly present in robotics tasks, i.e., without open-ended VLM queries and derived predicates. This shares a similar representation with the recent work Interpret (Han et al., 2024) but remains distinct, since they mostly learn from human instruction.
5. Ablate op. An ablation that does not use our extension to the operator learner.
6. No invent. A baseline that uses the abstractions our approach is initialized with and does not perform any learning.
7. Oracle. An "oracle" planning agent with manually designed predicates and operators.

Results and Discussion. Figure 4 presents the evaluation task solve rates and the planning budget utilized. Examples of an online learning trajectory with invented predicates, instances of learned abstractions, and further planning statistics (such as nodes expanded and wall-clock time) are provided in appendix C.

Our approach consistently outperforms the HRL and VLM planning baselines, MAPLE and ViLa, across all tested domains, achieving near-perfect solve rates (Q1). With similar amounts of interaction data, MAPLE struggles to perform well, even on tasks within the training distribution. This limitation could potentially be mitigated with significantly larger datasets, though this is often impractical in robotics due to the high cost of real-world interaction data and the sim-to-real gap in transferring simulation-trained policies. ViLa demonstrates limited planning capabilities, consistent with recent observations (Kambhampati et al., 2024). While it performs adequately on simple tasks like Cover, where the robot picks and places blocks, its performance drops significantly when blocks are initialized in the robot's grasp, as it tends to redundantly attempt picking actions; this behavior suggests overfitting. In more complex domains, ViLa often generates infeasible plans, such as attempting to pick blocks from the middle of a stack or trying to grasp a jug without considering its orientation. We think introducing demonstrations or incorporating environment interactions could potentially alleviate these issues.

Our approach significantly outperforms No invent, demonstrating the clear benefits of learning predicate abstractions over relying on initial underspecified representations. It achieves similar solve rates and efficiency to the Oracle baseline, which uses manually designed abstractions (Q2). This underscores the ability of our method to autonomously discover abstractions as effective as those crafted by human experts.

Addressing (Q3), while Sym. pred. performs well in simple domains like Cover, it struggles to invent predicates that require grounding in perceptual cues not explicitly encoded in object features. For instance, in Coffee, it cannot reliably determine if a jug is inside a coffee machine based on object poses—a key precondition for the TurnMachineOn action. Similarly, in Cover Heavy, it fails to recognize blocks that are too heavy to lift, which is critical for identifying unreachable goals. Additionally, without derived NSPs, reasoning accurately and efficiently about abstract concepts in the abstract world model (such as whether the number of blocks on both sides of a balance is equal) becomes challenging, which is critical for solving Balance. More generally, we hypothesize that providing all feature-value pairs for every object in each state during prompting overwhelms existing VLMs, leading to poor predicate invention. This likely accounts for the subpar performance, even in simple domains like Blocks. These limitations emphasize the strengths of our NSP representation and learning pipeline.

Finally, to answer (Q4), we find that our approach performs better than Ablate op., which sometimes learns unnecessarily complex preconditions that overfit the early, limited data, hindering further learning on training tasks. In other cases, overly specific preconditions yield good training performance but poor generalization, such as requiring JugInMachine for the Pour action. This demonstrates the value of our operator learner, especially in data-scarce, exploration-based learning settings.

# 7 RELATED WORKS

Hierarchical Reinforcement Learning (HRL) HRL tackles the challenge of solving MDPs with high-dimensional state and action spaces, common in robotics, by leveraging temporally extended, high-level actions (Barto & Mahadevan, 2003). The Parameterized Action MDPs (PAMDPs) framework (Masson et al., 2016) builds on this by integrating discrete actions with continuous parameters, optimizing both the action and its parameterization using the Q-PAMDP algorithm. MAPLE (Nasiriany et al., 2022a) further builds on this with a library of behavior primitives, such as grasping and pushing, combined with a high-level policy that selects and parameterizes these actions. We implement a version of this, extended with a goal-conditioned high-level policy, as a baseline. Generative Skill Chaining (GSC) (Mishra et al., 2023) further improves long-horizon planning by using skill-centric diffusion models that chain skills together while enforcing geometric constraints. Despite these advancements, such methods still face challenges in sample complexity, generalization, and interpretability.

Large Pre-Trained Models for Robotics With the rise of large (vision) language models (VLMs), many works explore their application to robotic decision making. RT-2 (Brohan et al., 2023) treats robotic actions as utterances in an "action language" learned from web-scale datasets. SayCan and Inner Monologue (Ahn et al., 2022; Huang et al., 2022) use LLMs to select skills from a pretrained library based on task prompts and prior actions. Code as Policies (Liang et al., 2023) prompts LLMs to write policy code that handles perception and control. Recent works extend this to bilevel planning (Curtis et al., 2024a) but do not learn new predicates. ViLa (Hu et al., 2023) queries VLMs for action plans, executing the first step before replanning. We implement an open-loop version of ViLa to compare against its initial planning capabilities.

Learning Abstractions for Planning Our work builds on a rich body of research on learning abstractions for planning. Many prior works explore offline methods, such as learning action operators and transition models from demonstrations using existing predicates (Silver et al., 2021; Chitnis et al., 2022; Pasula et al., 2007; Silver et al., 2022; Kumar et al., 2023a). While Silver et al. (2023) explore learning predicates grounded in object-centric features, our approach goes further by inventing open-ended, visually and logically rich concepts without relying on hand-selected features; additionally, unlike their demonstration-based approach, ours learns purely online. Konidaris et al. (2018) and subsequent works (James et al., 2022; 2020) discover abstractions online by leveraging the initiation and termination sets of operators that satisfy an abstract subgoal property. James et al. (2020) incorporate an egocentric observation space to learn more portable representations, and James et al. (2022) define equivalence of option effects on objects to derive object types for better transferability. Nevertheless, these works use a constrained class of classifiers (such as decision trees or linear regression with feature selection), which limits the effectiveness and generalizability of the learned predicates. Kumar et al. (2024) perform efficient online learning but focus on sampler learning rather than predicate invention.

# 8 CONCLUSION

In this work, we introduced Neuro-Symbolic Predicates (NSPs), a novel representation for planning that combines the flexibility of neural networks to represent open-ended, visually grounded concepts with the interpretability and compositionality of symbolic representations. To support this, we developed an online algorithm for inventing NSPs and learning abstract world models, which allows efficient acquisition of NSPs. Our experiments across five simulated robotic domains demonstrated that our method outperforms existing approaches, including hierarchical reinforcement learning, VLM planning, and traditional symbolic predicates, particularly in terms of sample efficiency, generalization, and interpretability. Exciting areas for future work include incorporating recovery mechanisms for failed plans, enhancing exploration efficiency, scaling to partially observable and real-world domains, and relaxing assumptions about skills by leveraging advances in policy synthesis (Liang et al., 2023), RL (Liang et al., 2024; Ma et al., 2023), and motion planning (Huang et al., 2024).

# REFERENCES

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.

Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13:341-379, 2003.

Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.

Rohan Chitnis, Tom Silver, Joshua B Tenenbaum, Tomas Lozano-Perez, and Leslie Pack Kaelbling. Learning neuro-symbolic relational transition models for bilevel planning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4166-4173. IEEE, 2022.

Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning, 2016.

Aidan Curtis, Xiaolin Fang, Leslie Pack Kaelbling, Tomás Lozano-Pérez, and Caelan Reed Garrett. Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances. In 2022 International Conference on Robotics and Automation (ICRA), pp. 1940-1946. IEEE, 2022.

Aidan Curtis, Nishanth Kumar, Jing Cao, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. Trust the PRoC3S: Solving long-horizon robotics problems with LLMs and constraint satisfaction, 2024a.

Aidan Curtis, George Matheos, Nishad Gothoskar, Vikash Mansinghka, Joshua Tenenbaum, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. Partially observable task and motion planning with uncertainty and risk awareness. arXiv preprint arXiv:2403.10454, 2024b.

Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4(1):265-293, 2021.

Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. ArXiv, abs/2211.11559, 2022.

Muzhi Han, Yifeng Zhu, Song-Chun Zhu, Ying Nian Wu, and Yuke Zhu. Interpret: Interactive predicate learning from language feedback for generalizable task planning. arXiv preprint arXiv:2405.19758, 2024.

Malte Helmert and Carmel Domshlak. Landmarks, critical paths and abstractions: what's the difference anyway? In Proceedings of the International Conference on Automated Planning and Scheduling, volume 19, pp. 162-169, 2009.

Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of GPT-4V in robotic vision-language planning. arXiv preprint arXiv:2311.17842, 2023.

Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Thompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.

Wenlong Huang, Chen Wang, Yunzhu Li, Ruohan Zhang, and Li Fei-Fei. ReKep: Spatiotemporal reasoning of relational keypoint constraints for robotic manipulation. arXiv preprint arXiv:2409.01652, 2024.

Steven James, Benjamin Rosman, and George Konidaris. Learning portable representations for high-level planning. In International Conference on Machine Learning, pp. 4682-4691. PMLR, 2020.

Steven James, Benjamin Rosman, and GD Konidaris. Autonomous learning of object-centric abstractions for high-level planning. In Proceedings of the Tenth International Conference on Learning Representations, 2022.

Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Mudit Verma, Kaya Stechly, Siddhant Bhambri, Lucas Saldyt, and Anil Murthy. LLMs can't plan, but can help planning in LLM-modulo frameworks, 2024.

George Konidaris. On the necessity of abstraction. Current Opinion in Behavioral Sciences, 29:1-7, 2019.

George Konidaris, Leslie Pack Kaelbling, and Tomas Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research, 61:215-289, 2018.

Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomas Lozano-Pérez, and Leslie Pack Kaelbling. Learning efficient abstract planning models that choose what to predict. In Conference on Robot Learning, pp. 2070-2095. PMLR, 2023a.

Nishanth Kumar, Willie McClinton, Kathryn Le, and Tom Silver. Bilevel planning for robots: An illustrated introduction. 2023b. https://lis.csail.mit.edu/bilevel-planning-for-robots-an-illustrated-introduction.

Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, and Jennifer Barry. Practice makes perfect: Planning to learn skill parameter policies, 2024.

Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9493-9500. IEEE, 2023.

Yichao Liang, Kevin Ellis, and João Henriques. Rapid motor adaptation for robotic manipulator arms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16404-16413, 2024.

Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931, 2023.

Warwick Masson, Pravesh Ranchod, and George Konidaris. Reinforcement learning with parameterized actions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.

Drew McDermott, Malik Ghallab, Adele E. Howe, Craig A. Knoblock, Ashwin Ram, Manuela M. Veloso, Daniel S. Weld, and David E. Wilkins. PDDL - the planning domain definition language. 1998. URL https://api.semanticscholar.org/CorpusID:59656859.

Utkarsh Aashu Mishra, Shangjie Xue, Yongxin Chen, and Danfei Xu. Generative skill chaining: Long-horizon skill planning with diffusion models. In Conference on Robot Learning, pp. 2905-2925. PMLR, 2023.

Soroush Nasiriany, Huihan Liu, and Yuke Zhu. Augmenting reinforcement learning with behavior primitives for diverse manipulation tasks. In 2022 International Conference on Robotics and Automation (ICRA), pp. 7477-7484. IEEE, 2022a.

Soroush Nasiriany, Huihan Liu, and Yuke Zhu. Augmenting reinforcement learning with behavior primitives for diverse manipulation tasks. In 2022 International Conference on Robotics and Automation (ICRA), pp. 7477-7484. IEEE, 2022b.

Hanna M Pasula, Luke S Zettlemoyer, and Leslie Pack Kaelbling. Learning symbolic models of stochastic domains. Journal of Artificial Intelligence Research, 29:309-352, 2007.

Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Learning symbolic operators for task and motion planning. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3182-3189. IEEE, 2021.

Tom Silver, Ashay Athalye, Joshua B Tenenbaum, Tomas Lozano-Perez, and Leslie Pack Kaelbling. Learning neuro-symbolic skills for bilevel planning. arXiv preprint arXiv:2206.10680, 2022.

Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomás Lozano-Pérez, Leslie Kaelbling, and Joshua B Tenenbaum. Predicate invention for bilevel planning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 12120-12129, 2023.

Dídac Surís, Sachit Menon, and Carl Vondrick. ViperGPT: Visual inference via Python execution for reasoning. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2023.

Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.

Hao Tang, Darren Key, and Kevin Ellis. WorldCoder, a model-based LLM agent: Building world models by writing code and interacting with the environment. arXiv preprint arXiv:2402.12275, 2024.

Sylvie Thiebaux, Jörg Hoffmann, and Bernhard Nebel. In defense of PDDL axioms. Artificial Intelligence, 168(1-2):38-69, 2005.

Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in GPT-4V. arXiv preprint arXiv:2310.11441, 2023.

# CONTENTS

A Python API for NSPs
B Additional Details about the Online Invention Algorithm

B.1 Predicate Interpretation
B.2 Learning HLAs by extending the cluster and intersect algorithm
B.3 Classification-Accuracy-Based Predicate Sets Score Function
B.4 Prompting for predicates
B.5 Limitations and Failure Cases

C Additional Experimental Results

C.1 Example Online Learning Trajectory
C.2 Learned Abstractions
C.3 Further Planning Statistics

D Additional Environment Details

# A PYTHON API FOR NSPs

We provide the following Python API for writing primitive NSPs: get_objects(t: Type) returns all objects in the state of type t. get(o: Object, f: str) retrieves the feature with name f for object o. We also have crop_to_objects(os: Sequence[Object], ...) for cropping the state observation image to include just the specified list of objects, reducing the complexity of downstream visual reasoning. Finally, evaluate.simple_assertion(a: str, i: Image) evaluates the natural-language assertion a in the context of image i using a VLM.

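A minimal stub of this API (illustrative only; the real `crop_to_objects` performs an actual image crop rather than returning the full observation, and `evaluate.simple_assertion` queries a VLM):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Sequence

@dataclass(frozen=True)
class Object:
    id_name: str
    type_name: str

@dataclass
class RawState:
    image: object  # RGB observation with overlaid object IDs
    features: Dict[Object, Dict[str, float]] = field(default_factory=dict)

    def get_objects(self, t: str) -> List[Object]:
        """Return all objects in the state of type t."""
        return [o for o in self.features if o.type_name == t]

    def get(self, o: Object, f: str) -> float:
        """Retrieve the feature named f for object o."""
        return self.features[o][f]

    def crop_to_objects(self, os: Sequence[Object]) -> object:
        """Crop the observation to the given objects (stub: returns image)."""
        return self.image

robot = Object("robot1", "robot")
block = Object("block2", "block")
state = RawState(image="<rgb>", features={robot: {"fingers": 0.1},
                                          block: {"held": 1.0}})
```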
+ # B ADDITIONAL DETAILS ABOUT THE ONLINE INVENTION ALGORITHM
284
+
285
+ # B.1 PREDICATE INTERPRETATION
286
+
287
+ We provide an example prompt used to interpret the truth value of the ground atom DirectlyOn(block5, block6) in the state with cropped observation shown in Figure 5. The highlighted text illustrates how we condition on previous action, previous observation, previous truth value, and object IDs to improve the predicate evaluation accuracy.
288
+
289
+ Evaluate the truth value of the following assertions in the current state as depicted by the image labeled with 'curr. state'. For context, the state is right after the robot has successfully executed the action Pick(robot1:robot, block5:block). The state before executing the action is depicted by the image labeled with 'prev. state'. Please carefully examine the images depicting the 'prev. state' and 'curr. state' before making a judgment. The assertions to evaluate are:
290
+
291
+ 1. block5 is directly on top of block6. (which was False before the successful execution of the previous action)
292
+
293
+ ![](images/97c15690f0523c1effdb196b7e1b7f9781cb2eafadc67b79b365a634f9e95afd.jpg)
294
+
295
+ ![](images/6c549585fc48c35dcacaef58d870681b387b5ea81fd441e639cebd6d0c4cabf1.jpg)
296
+ Figure 5: Example cropped current (right) and previous (left) observations used for interpreting ground predicates.
297
+

# B.2 LEARNING HLAs BY EXTENDING THE cluster and intersect ALGORITHM

We aim to learn high-level actions $\Omega$, which define an abstract transition model in the learned predicate space, from interactions with the environment. These interactions consist of executing high-level plans, which are sequences of (grounded) HLAs $\underline{\omega}_1, \ldots, \underline{\omega}_n$ (i.e., HLAs applied to concrete objects). Our learned abstract transition model should both fit the transition dataset and be optimistic for efficient exploration (Tang et al., 2024). Recalling the definitions from Sec. 2, given the current transition dataset, $\mathcal{D} = \{\dots, (x^{(k)}, \pi^{(k)}, x_{\pi}^{(k)}), \dots, (x^{(k')}, \pi^{(k')}, \mathrm{FAIL}), \dots\}$, we first transform it into the learned abstract state space, $\mathcal{D}_{\Psi} = \{\dots, (s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)}), \dots, (s^{(k')}, \pi^{(k')}, \mathrm{FAIL}), \dots\}$, where $s = \mathrm{ABSTRACT}_{\Psi}(x)$. We aim to learn high-level actions, $\Omega$, such that for all high-level actions $\underline{\omega} \in \Omega_{\mathcal{O}}$ on objects $\mathcal{O}$,

$$
\begin{array}{l}
\forall (s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)}) \in \mathcal{D}_{\Psi},\ \exists \underline{\omega} \in \Omega_{\mathcal{O}}^{\pi^{(k)}},\ \underline{\omega}.\mathrm{PRE} \subseteq s^{(k)} \,\wedge \\
\quad s_{\pi}^{(k)} - s^{(k)} = \underline{\omega}.\mathrm{EFF}^{+} \wedge s^{(k)} - s_{\pi}^{(k)} = \underline{\omega}.\mathrm{EFF}^{-}, \\
\forall (s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)}) \in \mathcal{D}_{\Psi},\ \forall \underline{\omega} \in \Omega_{\mathcal{O}}^{\pi^{(k)}},\ \underline{\omega}.\mathrm{PRE} \subseteq s^{(k)} \Rightarrow \\
\quad \left(s_{\pi}^{(k)} - s^{(k)} = \underline{\omega}.\mathrm{EFF}^{+} \wedge s^{(k)} - s_{\pi}^{(k)} = \underline{\omega}.\mathrm{EFF}^{-}\right), \\
\forall (s^{(k)}, \pi^{(k)}, \mathrm{FAIL}) \in \mathcal{D}_{\Psi},\ \forall \underline{\omega} \in \Omega_{\mathcal{O}}^{\pi^{(k)}},\ \underline{\omega}.\mathrm{PRE} \nsubseteq s^{(k)},
\end{array}
$$

$$
\text{where } \Omega_{\mathcal{O}}^{\pi^{(k)}} = \{\underline{\omega} : \underline{\omega} \in \Omega_{\mathcal{O}} \wedge \underline{\omega}.\pi = \pi^{(k)}\}, \tag{3}
$$

while minimizing the syntactic complexity of the HLA, $|\underline{\omega}.\mathrm{PRE}| + |\underline{\omega}.\mathrm{EFF}^{+}| + |\underline{\omega}.\mathrm{EFF}^{-}|$.

To find the high-level actions satisfying this objective, we first split the dataset by skill, since each high-level action is associated with exactly one skill: $\mathcal{D}_{\Psi}^{\pi_i} = \{d : d \in \mathcal{D}_{\Psi} \wedge d.\pi = \pi_i\}$. We then split each skill's data into one or more high-level actions by unifying the effects in $\mathcal{D}_{\Psi}^{\pi_i}$, following the cluster and intersect operator learner (Chitnis et al., 2022). This accounts for the fact that a skill can have different effects in different situations, by first partitioning the transition datasets into high-level actions,

$$
\begin{array}{l}
\mathcal{D}_{\Psi}^{\omega} = \left\{ d : d \in \mathcal{D}_{\Psi} \wedge d.\pi = \omega.\pi \wedge d.s_{\pi}^{(k)} - d.s^{(k)} = \underline{\omega}.\mathrm{EFF}^{+} \wedge d.s^{(k)} - d.s_{\pi}^{(k)} = \underline{\omega}.\mathrm{EFF}^{-} \right. \\
\quad \left. \text{where } \underline{\omega} = \omega(o_1, o_2, \dots), \text{ for all } o_i \in \mathcal{O} \right\}.
\end{array}
$$

Each partition associates a high-level action with the skill $\omega.\pi = d.\pi, \forall d \in \mathcal{D}_{\Psi}^{\omega}$, while the postconditions of the high-level action ($\omega.\mathrm{EFF}^{+}$ and $\omega.\mathrm{EFF}^{-}$) are also learned, by unifying and lifting the effects of the data in $\mathcal{D}_{\Psi}^{\omega}$. See Chitnis et al. (2022) for more details. For the preconditions, $\omega.\mathrm{PRE}$, we learn them by maximizing

$$
J(\omega.\mathrm{PRE}) = \frac{1}{|\mathcal{D}_{\Psi}^{\omega.\pi}|} \left( \sum_{d \in \mathcal{D}_{\Psi}^{\omega}} \mathbb{1}\left(\underline{\omega}.\mathrm{PRE} \subseteq d.s^{(k)}\right) + \sum_{d \in \mathcal{D}_{\Psi}^{\omega.\pi} - \mathcal{D}_{\Psi}^{\omega}} \mathbb{1}\left(\underline{\omega}.\mathrm{PRE} \nsubseteq d.s^{(k)}\right) \right) - \alpha \cdot |\omega.\mathrm{PRE}|. \tag{4}
$$

This ensures that all data in the partition is modeled by the associated high-level action $\omega$: the skill $\omega.\pi$ is applicable in states $s^{(k)}$ since $\underline{\omega}.\mathrm{PRE} \subseteq s^{(k)}$. The high-level action also models all other data in the transition dataset, specifying that its precondition is not satisfied if the skill is not applicable in a state, $(s^{(k)}, \omega.\pi, \mathrm{FAIL}) \in \mathcal{D}_{\Psi}^{\omega.\pi}$, or if the skill has different effects when applied in the state, $(s^{(k)}, \omega.\pi, s_{\pi}^{(k)}) \in \mathcal{D}_{\Psi}^{\omega.\pi} \wedge (s^{(k)}, \omega.\pi, s_{\pi}^{(k)}) \notin \mathcal{D}_{\Psi}^{\omega}$. We set the parameter $\alpha$ to a small number, which softly penalizes syntactically complex preconditions.
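As a toy sketch of this precondition score (with a hypothetical representation: abstract states as frozensets of ground-atom strings, and illustrative Coffee-flavored data not taken from the paper's experiments), one could compute:

```python
# Toy sketch of the precondition score: reward preconditions that hold on
# this HLA's partition and fail to hold on the skill's remaining transitions
# (failures or different-effect transitions), minus a complexity penalty.
ALPHA = 0.01  # soft penalty on precondition size

def precondition_score(pre, partition_states, other_states):
    covered = sum(1 for s in partition_states if pre <= s)
    excluded = sum(1 for s in other_states if not pre <= s)
    total = len(partition_states) + len(other_states)
    return (covered + excluded) / total - ALPHA * len(pre)

# States where Pour succeeded with the "fill the cup" effect...
partition = [frozenset({"JugFilled", "HoldingJug", "JugInMachine"}),
             frozenset({"JugFilled", "HoldingJug"})]
# ...and a state where attempting Pour failed (or poured nothing).
others = [frozenset({"HoldingJug"})]

too_loose = frozenset({"HoldingJug"})  # admits the failure state
overfit = frozenset({"JugFilled", "HoldingJug", "JugInMachine"})  # misses a success
good = frozenset({"JugFilled", "HoldingJug"})

scores = {label: precondition_score(p, partition, others)
          for label, p in [("loose", too_loose), ("overfit", overfit), ("good", good)]}
print(max(scores, key=scores.get))  # good
```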

Compared with the cluster and intersect operator learner (Chitnis et al., 2022), which simply intersects over feasible states to build the preconditions of each high-level action, our method optimistically enlarges the set of feasible states for each high-level action via the minimum-complexity objective, while still retaining the ability to distinguish infeasible states. The optimistic objective is critical for predicate invention through interaction, where optimal demonstration trajectories are not available. With the intersection method, the agent only considers the feasible states in the currently curated dataset as feasible and never tries the skill in other states that are potentially feasible as well. Planners often fail to find plans under such restrictive world models, resulting in inefficient random exploration and poor test-time performance.

The restricted preconditions are also less generalizable. For example, an agent learning to make coffee in an environment with one cup will find successful trajectories such as PutKettleInCoffeeMachine, MakeCoffee, and PourCoffeeInCup. Using the intersection method, the agent sets the preconditions of PourCoffeeInCup to KettleInMachine and KettleHasCoffee, as both are always true in the feasible states of the PourCoffeeInCup action, even though only KettleHasCoffee is needed. The overly restrictive preconditions become problematic when generalizing to environments with more than one cup: the agent keeps putting the kettle back in the machine before pouring coffee for another cup, since the learned PourCoffeeInCup action has KettleInMachine as part of its precondition. The agent eventually fails to solve the problem as the number of cups increases, because feasible plans are almost twice as long in the more restrictive abstract world model. Our method finds the correct precondition, KettleHasCoffee, with the optimistic objective: we prefer KettleHasCoffee over KettleInMachine because the latter fails to distinguish the infeasible states in which the Pour skill has a different effect, PourNothingInCup.

In terms of time complexity, cluster and intersect is linear in the number of successful transitions in $\mathcal{D}$ and the number of predicates $\Psi$, whereas the additional greedy best-first search (GBFS) that we perform introduces exponential complexity with respect to the number of predicates. To balance computational efficiency and performance, we use cluster and intersect in the inner loop of predicate selection and then apply our method to the selected predicates (usually fewer than a dozen). Additionally, local hill climbing can be used as an alternative to GBFS to further improve computational efficiency.

# B.3 CLASSIFICATION-ACCURACY-BASED PREDICATE SETS SCORE FUNCTION

When no satisficing plan is found in early iterations of predicate invention (e.g., in Coffee), the objective from Silver et al. (2023) is inapplicable. This issue is particularly prominent when the space of possible plans is large (i.e., when there are many potential actions at each step and achieving goals requires long-horizon plans). To address this, we introduce a predicate score function that does not rely on satisficing plans: an alternative objective based on classification accuracy, in the same flavour as the score function defined earlier for operator preconditions.

Formally, given $\mathcal{D}_{\Psi} = \{\ldots, (s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)}), \ldots, (s^{(k')}, \pi^{(k')}, \mathrm{FAIL}), \ldots\}$, where $s = \mathrm{ABSTRACT}_{\Psi}(x)$ as above, we denote the collections of all successful transitions and all failed tuples as $\mathcal{D}_{\Psi}^{+} = \{(s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)})\}$ and $\mathcal{D}_{\Psi}^{-} = \{(s^{(k)}, \pi^{(k)}, \mathrm{FAIL})\}$, respectively. The predicate set score function is

$$
\begin{array}{l}
J(\Psi) = \frac{1}{|\mathcal{D}_{\Psi}|} \Bigg( \sum_{(s^{(k)}, \pi^{(k)}, s_{\pi}^{(k)}) \in \mathcal{D}_{\Psi}^{+}} \mathbb{1}\left(\exists \omega.\ \omega.\pi = \pi^{(k)} \wedge \omega.\mathrm{PRE} \subseteq s^{(k)}\right) \\
\quad + \sum_{(s^{(k)}, \pi^{(k)}, \mathrm{FAIL}) \in \mathcal{D}_{\Psi}^{-}} \mathbb{1}\left(\nexists \omega.\ \omega.\pi = \pi^{(k)} \wedge \omega.\mathrm{PRE} \subseteq s^{(k)}\right) \Bigg) - \alpha \cdot |\Psi|. \tag{5}
\end{array}
$$

Intuitively, this objective selects for the minimal set of predicates $\Psi$ such that the HLAs learned from these predicates, $\Omega_{\Psi}$, avoid attempting to execute a skill in states where it has previously failed while ensuring that the HLAs enable the skill to be executed in states where it has previously succeeded.
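A toy sketch of this predicate-set score (hypothetical representation: abstract states as frozensets of ground-atom strings, one learned precondition set per HLA; all names are illustrative):

```python
# Toy sketch of the predicate-set score: credit success transitions whose
# skill has an applicable HLA and failure transitions whose skill has none,
# minus a complexity penalty on the predicate set.
ALPHA = 0.01

def predicate_set_score(preconds_by_skill, successes, failures, num_predicates):
    def applicable(skill, s):
        return any(pre <= s for pre in preconds_by_skill.get(skill, []))
    correct = sum(1 for s, skill in successes if applicable(skill, s))
    correct += sum(1 for s, skill in failures if not applicable(skill, s))
    total = len(successes) + len(failures)
    return correct / total - ALPHA * num_predicates

successes = [(frozenset({"JugFilled", "HoldingJug"}), "Pour")]
failures = [(frozenset({"HoldingJug"}), "Pour")]

# With a predicate set that includes JugFilled, the learned Pour HLA can
# separate the failure state from the success state; without it, it cannot.
with_jug_filled = predicate_set_score(
    {"Pour": [frozenset({"JugFilled", "HoldingJug"})]}, successes, failures, 2)
without_jug_filled = predicate_set_score(
    {"Pour": [frozenset({"HoldingJug"})]}, successes, failures, 1)
print(with_jug_filled > without_jug_filled)  # True
```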

# B.4 PROMPTING FOR PREDICATES

Strategy #1 (Discrimination) is motivated by one of the primary functions of predicates: appearing in the preconditions of operators to distinguish between positive and negative states, so that the plans the agent finds are feasible. However, we observed that existing VLMs often struggle to reliably understand and identify the differences between positive and negative states, especially when dealing with scene images that deviate significantly from those seen during training. This limitation motivates our second strategy.

Strategy #2 (Transition Modeling) builds on the observation that predicates present in an action's preconditions often also appear in some action's effects. We prompt the VLM to propose predicates that describe these effects based on the positive transition segments it collects. This task is usually easier for VLMs because it involves identifying the properties or relationships that have changed from the start state to the end state, given that an action with a natural language name (such as pick) has been successfully executed. However, this strategy alone is not exhaustive: certain predicates may exist solely within preconditions but not effects (e.g., an object's material, which remains unchanged). This method therefore complements S1 and is used alternately with it during the invention iterations.

Strategy #3 (Unconditional Generation) prompts VLMs to propose derivations based on existing predicates. These derivations can incorporate a variety of logical operations, such as negation, universal quantification (e.g., defining Clear(x) based on On(x,y)), transitive closure, and disjunction (e.g., defining OnPlate(x,p) based on DirectlyOn(x,y) and DirectlyOnPlate(x,p)). This approach helps create derived predicates, such as OnPlate for Balanced (Fig. 1), which are unlikely to be proposed by the first two strategies but are essential for correctly implementing complex predicates like Balanced. S3 is therefore used at every invention iteration, before either S1 or S2 is executed.

For each predicate proposal strategy, we use a three-step method to guide the VLMs: 1) ask the VLM to propose predicates by providing a predicate name, a list of predicate types drawn from $\Lambda$, and a natural language description of the assertion the predicate corresponds to; 2) synthesize the predicate classifiers using the syntax and API we provide for NSPs; and 3) identify any potential derived predicates and prompt a language model to transform them into the specified function signature for derived NSPs. Given the challenges in S1, we add an additional step 0 just for this strategy: we query the VLM to propose properties or relations among objects in natural language, which are then formalized into predicates in step 1.

# B.5 LIMITATIONS AND FAILURE CASES

A primary limitation of the system is the accuracy and reliability of the VLM in evaluating NSPs.

In some cases, the system can recover from imperfect predicate evaluation accuracies. This is because noisy predicates are not selected during the predicate selection process, and variations of the predicates, with slightly different natural language descriptions, can be proposed in later invention iterations. These variations may achieve higher scores, making them more likely to be selected.

In other cases, the system never recovers. For instance, in the Cover Heavy domain, our initial plan was to assign common materials, such as wood and metal, to blocks to distinguish between light and heavy objects. While the predicate proposal VLM successfully suggested appropriate predicates (e.g., IsWood(?block) and IsMetal(?block)), the predicate evaluation VLM was unable to interpret these predicates with sufficient accuracy and consistency to build a useful world model. The issue persisted even after switching to white and black blocks to represent heavy and light blocks, and was ultimately resolved by using green and red blocks instead. Similarly, in the Coffee domain, the predicate IsJugFilled(?jug) is an essential precondition for the pour HLA. However, the VLM could not interpret this predicate accurately enough, necessitating that we treat it as a predefined predicate.

Potential solutions include: 1) integrating proprioception more effectively into the system; 2) developing ways to accurately assign belief scores over the truth values (e.g., using "IsJugFilled(?jug)-0.9" to denote "I believe the jug is filled with coffee with probability 0.9"); or 3) designing embeddings for ground predicates and observations, and determining the truth values of ground predicates by comparing distances between the corresponding embeddings.
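The belief-score idea in option 2 can be sketched in a few lines (a standalone toy, with an illustrative thresholding rule that is not part of the paper's system):

```python
# Toy sketch of the belief-score idea: attach a probability to each ground
# atom and threshold it when forming the abstract state.
def abstract_state(beliefs, threshold=0.5):
    """beliefs: ground atom (str) -> probability the atom holds."""
    return frozenset(atom for atom, p in beliefs.items() if p > threshold)

beliefs = {"IsJugFilled(jug1)": 0.9, "JugInMachine(jug1)": 0.2}
print(sorted(abstract_state(beliefs)))  # ['IsJugFilled(jug1)']
```

A planner could then reason over the thresholded set, while the raw probabilities remain available for, e.g., information-gathering actions.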

At the same time, we expect improved accuracy in real-world scenarios compared to simulated domains with poor graphics quality, as there should be less distribution shift relative to the VLMs' training data, and VLMs have demonstrated very strong performance on simple visual question-answering tasks with natural images (Yang et al., 2023).

# C ADDITIONAL EXPERIMENTAL RESULTS

# C.1 EXAMPLE ONLINE LEARNING TRAJECTORY

Figure 6 shows an example of a predicate invention curve in the Coffee environment. Learning begins with 800 failed plans (i.e., unable to solve any tasks) and concludes after 8 iterations, when the number of failed plans reaches zero. In total, 9 predicates are selected from 46 candidates.

![](images/0eab0f8d8df409e2c5f0f02f0adb210e0ce647cc907e5d931258bbe82446dc3d.jpg)
Figure 6: An example online predicate invention trajectory. The bubbles show the predicates being selected among all the candidates it has at that iteration.

# C.2 LEARNED ABSTRACTIONS

We show examples of learned predicates and operators here.

# C.2.1 COVER

```python
def _GripperOpen_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    robot, = objects
    return state.get(robot, "fingers") > 0.5

name: str = "GripperOpen"
param_types: Sequence[Type] = [_robot_type]
GripperOpen = NSPredicate(name, param_types, _GripperOpen_NSP_holds)
```

```python
def _Holding_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    robot, block = objects
    # If the gripper is open, the robot cannot be holding anything.
    if state.get(robot, "fingers") > 0.5:
        return False
    # Crop the image to focus on the robot and block.
    attention_image = state.crop_to_objects([robot, block])
    robot_name = robot.id_name
    block_name = block.id_name
    return state.evaluate_simple_assertion(
        f"{robot_name} is holding {block_name}", attention_image
    )

name: str = "Holding"
param_types: Sequence[Type] = [_robot_type, _block_type]
Holding = NSPredicate(name, param_types, _Holding_NSP_holds)
```

```txt
NSRT-Op0: Parameters: [?x0:block, ?x1:robot] Preconditions: [GripperOpen(?x1:robot)] Add Effects: [Holding(?x1:robot, ?x0:block)] Delete Effects: [GripperOpen(?x1:robot)] Ignore Effects: [] Option Spec: Pick(?x0:block)
NSRT-Op1: Parameters: [?x0:block, ?x1:robot, ?x2:target] Preconditions: [Holding(?x1:robot, ?x0:block)] Add Effects: [Covers(?x0:block, ?x2:target), GripperOpen(?x1:robot)] Delete Effects: [Holding(?x1:robot, ?x0:block)] Ignore Effects: [] Option Spec: Place(?x0:block, ?x2:target)
```

# C.2.2 BLOCKS

Gripping
```python
def _Gripping_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    """Determine if the robot in objects is gripping the block in objects
    in the scene image."""
    robot, block = objects
    robot_name = robot.id_name
    block_name = block.id_name
    # If the robot's fingers are open, it can't be gripping anything.
    if state.get(robot, "fingers") > 0:
        return False
    # Crop the scene image to the smallest bounding box that includes both objects.
    attention_image = state.crop_to_objects([robot, block])
    return state.evaluate_simple_assertion(
        f"{robot_name} is gripping {block_name}", attention_image)

name: str = "Gripping"
param_types: Sequence[Type] = [_robot_type, _block_type]
Gripping = NSPredicate(name, param_types, _Gripping_NSP_holds)
```

Clear
```python
# Define the classifier function
def _Clear_CP_holds(atoms: Set[GroundAtom], objects: Sequence[Object]) -> bool:
    """Determine if there is no block on top of the given block."""
    block, = objects
    # Check if any block is on top of the given block
    for atom in atoms:
        if atom.predicate == On and atom.objects[1] == block:
            return False
    return True

# Define the predicate name here
name: str = "Clear"
# A list of object-type variables for the predicate, using the ones defined in the environment
param_types: Sequence[Type] = [_block_type]
# Instantiate the predicate
Clear = ConceptPredicate(name, param_types, _Clear_CP_holds)
```

EmptyGripper
```python
def _EmptyGripper_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    """Determine if the gripper of the robot in objects is empty in the scene image."""
    robot, = objects
    # If the robot's fingers are closed, it can't be empty.
    if state.get(robot, "fingers") < 1:
        return False
    return True

name: str = "EmptyGripper"
param_types: Sequence[Type] = [_robot_type]
EmptyGripper = NSPredicate(name, param_types, _EmptyGripper_NSP_holds)
```

```txt
NSRT-Op0: Parameters: [?x0:block, ?x1:block, ?x2:robot] Preconditions: [Clear(?x1:block), EmptyGripper(?x2:robot), On(?x1:block, ?x0:block)] Add Effects: [Gripping(?x2:robot, ?x1:block)] Delete Effects: [EmptyGripper(?x2:robot), On(?x1:block, ?x0:block)] Ignore Effects: [] Option Spec: Pick(?x2:robot, ?x1:block)
NSRT-Op1: Parameters: [?x0:block, ?x1:robot] Preconditions: [Gripping(?x1:robot, ?x0:block)] Add Effects: [EmptyGripper(?x1:robot), OnTable(?x0:block)] Delete Effects: [Gripping(?x1:robot, ?x0:block)] Ignore Effects: [] Option Spec: PutOnTable(?x1:robot)
NSRT-Op2: Parameters: [?x0:block, ?x1:robot] Preconditions: [Clear(?x0:block), EmptyGripper(?x1:robot), OnTable(?x0:block)] Add Effects: [Gripping(?x1:robot, ?x0:block)] Delete Effects: [EmptyGripper(?x1:robot), OnTable(?x0:block)] Ignore Effects: [] Option Spec: Pick(?x1:robot, ?x0:block)
NSRT-Op3: Parameters: [?x0:block, ?x1:block, ?x2:robot] Preconditions: [Clear(?x0:block), Gripping(?x2:robot, ?x1:block)] Add Effects: [EmptyGripper(?x2:robot), On(?x1:block, ?x0:block)] Delete Effects: [Gripping(?x2:robot, ?x1:block)] Ignore Effects: [] Option Spec: Stack(?x2:robot, ?x0:block)
```

# C.2.3 COFFEE

RobotHoldingJug

JugTilted
```python
def _JugTilted_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    """Determine if the jug is rotated by a non-zero angle."""
    jug, = objects
    # Assuming a rotation value of 0 means upright, any other value implies rotation
    return abs(state.get(jug, "rot")) > 0.1

name: str = "JugTilted"
param_types: Sequence[Type] = [_jug_type]
JugTilted = NSPredicate(name, param_types, _JugTilted_NSP_holds)
```

JugUpright

JugInMachine
```python
def _JugInMachine_NSP_holds(state: RawState, objects: Sequence[Object]) -> bool:
    """Jug ?x is placed inside coffee machine ?y."""
    jug, machine = objects
    # If the jug is being held, it cannot be in the machine.
    if _RobotHolding_NSP_holds(state, [state.get_objects(_robot_type)[0], jug]):
        return False
    # Crop the image to focus on the jug and the coffee machine.
    attention_image = state.crop_to_objects([jug, machine])
    jug_name = jug.id_name
    machine_name = machine.id_name
    return state.evaluate_simple_assertion(
        f"{jug_name} is placed inside {machine_name}", attention_image
    )
```

```txt
NSRT-Op0: Parameters: [?x0:jug, ?x1:robot] Preconditions: [GripperOpen(?x1:robot), JugUpright(?x0:jug)] Add Effects: [RobotHoldingJug(?x1:robot, ?x0:jug)] Delete Effects: [GripperOpen(?x1:robot)] Ignore Effects: [] Option Spec: PickJug(?x1:robot, ?x0:jug)
NSRT-Op1: Parameters: [?x0:coffee-machine, ?x1:jug, ?x2:robot] Preconditions: [RobotHoldingJug(?x2:robot, ?x1:jug)] Add Effects: [GripperOpen(?x2:robot), JugInMachine(?x1:jug, ?x0:coffee-machine)] Delete Effects: [RobotHoldingJug(?x2:robot, ?x1:jug)] Ignore Effects: [] Option Spec: PlaceJugInMachine(?x2:robot, ?x1:jug, ?x0:coffee-machine)
NSRT-Op2: Parameters: [?x0:coffee-machine, ?x1:jug, ?x2:robot] Preconditions: [JugInMachine(?x1:jug, ?x0:coffee-machine)] Add Effects: [JugFilled(?x1:jug)] Delete Effects: [] Ignore Effects: [] Option Spec: TurnMachineOn(?x2:robot, ?x0:coffee-machine)
NSRT-Op3: Parameters: [?x0:coffee-machine, ?x1:jug, ?x2:robot] Preconditions: [JugInMachine(?x1:jug, ?x0:coffee-machine)] Add Effects: [RobotHoldingJug(?x2:robot, ?x1:jug)] Delete Effects: [GripperOpen(?x2:robot), JugInMachine(?x1:jug, ?x0:coffee-machine)] Ignore Effects: [] Option Spec: PickJug(?x2:robot, ?x1:jug)
NSRT-Op4: Parameters: [?x0:cup, ?x1:jug, ?x2:robot] Preconditions: [JugFilled(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Add Effects: [CupFilled(?x0:cup)] Delete Effects: [JugFilled(?x1:jug), JugUpright(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Ignore Effects: [] Option Spec: Pour(?x2:robot, ?x1:jug, ?x0:cup)
NSRT-Op5: Parameters: [?x0:jug, ?x1:robot] Preconditions: [GripperOpen(?x1:robot)] Add Effects: [JugUpright(?x0:jug)] Delete Effects: [] Ignore Effects: [] Option Spec: Twist(?x1:robot, ?x0:jug)
NSRT-Op6: Parameters: [?x0:coffee-machine, ?x1:jug, ?x2:robot] Preconditions: [JugInMachine(?x1:jug, ?x0:coffee-machine)] Add Effects: [JugFilled(?x1:jug)] Delete Effects: [JugInMachine(?x1:jug, ?x0:coffee-machine)] Ignore Effects: [] Option Spec: TurnMachineOn(?x2:robot, ?x0:coffee-machine)
NSRT-Op7: Parameters: [?x0:cup, ?x1:jug, ?x2:robot] Preconditions: [JugFilled(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Add Effects: [CupFilled(?x0:cup), JugTilted(?x1:jug)] Delete Effects: [JugFilled(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Ignore Effects: [] Option Spec: Pour(?x2:robot, ?x1:jug, ?x0:cup)
NSRT-Op8: Parameters: [?x0:cup, ?x1:jug, ?x2:robot] Preconditions: [JugFilled(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Add Effects: [CupFilled(?x0:cup), JugTilted(?x1:jug)] Delete Effects: [] Ignore Effects: [] Option Spec: Pour(?x2:robot, ?x1:jug, ?x0:cup)
NSRT-Op9: Parameters: [?x0:cup, ?x1:jug, ?x2:robot] Preconditions: [JugFilled(?x1:jug), RobotHoldingJug(?x2:robot, ?x1:jug)] Add Effects: [CupFilled(?x0:cup), JugTilted(?x1:jug)] Delete Effects: [RobotHoldingJug(?x2:robot, ?x1:jug)] Ignore Effects: [] Option Spec: Pour(?x2:robot, ?x1:jug, ?x0:cup)
```

# C.2.4 COVER HEAVY

EmptyHand

Holding

IsBlack
```python
def _IsBlack_NSP_holds(state: State, objects: Sequence[Object]) -> bool:
    block, = objects
    block_id = block.id_name
    attention_image = state.crop_to_objects([block])
    return state.evaluate_simple_assertion(f"{block_id} is black.", attention_image)

name = "IsBlack"
param_types = [_block_type]
IsBlack = NSPredicate(name, param_types, _IsBlack_NSP_holds)
```

```txt
NSRT-Op1: Parameters: [?x0:block, ?x1:robot, ?x2:target] Preconditions: [Holding(?x1:robot, ?x0:block)] Add Effects: [Covers(?x0:block, ?x2:target), EmptyHand(?x1:robot)] Delete Effects: [Holding(?x1:robot, ?x0:block)] Ignore Effects: [] Option Spec: Place(?x0:block, ?x2:target)
NSRT-Op0: Parameters: [?x0:block, ?x1:robot] Preconditions: [IsBlack(?x0:block), EmptyHand(?x1:robot)] Add Effects: [Holding(?x1:robot, ?x0:block)] Delete Effects: [EmptyHand(?x1:robot)] Ignore Effects: [] Option Spec: Pick(?x0:block)
```

# C.2.5 BALANCE

OnPlate
```python
def _OnPlate_CP_holds(atoms: Set[GroundAtom], objects: Sequence[Object]) -> bool:
    x, y = objects
    for atom in atoms:
        if atom.predicate == DirectlyOnPlate and atom.objects == [x, y]:
            return True
    other_blocks = {a.objects[0] for a in atoms
                    if a.predicate == DirectlyOn or a.predicate == DirectlyOnPlate}
    for other_block in other_blocks:
        holds1 = False
        for atom in atoms:
            if atom.predicate == DirectlyOn and atom.objects == [x, other_block]:
                holds1 = True
                break
        if holds1 and _OnPlate_CP_holds(atoms, [other_block, y]):
            return True
    return False

name: str = "OnPlate"
param_types: Sequence[Type] = [_block_type, _plate_type]
OnPlate = ConceptPredicate(name, param_types, _OnPlate_CP_holds)
```

BlocksDistributedEvenly
```python
def _BlocksDistributedEvenly_CP_holds(atoms: Set[GroundAtom],
                                      objects: Sequence[Object]) -> bool:
    plate1, plate2 = objects
    if plate1 == plate2:
        return False
    count1 = 0
    count2 = 0
    for atom in atoms:
        if atom.predicate == OnPlate:
            if atom.objects[1] == plate1:
                count1 += 1
            elif atom.objects[1] == plate2:
                count2 += 1
    return count1 == count2

name: str = "BlocksDistributedEvenly"
param_types: Sequence[Type] = [_plate_type, _plate_type]
BlocksDistributedEvenly = ConceptPredicate(name, param_types, _BlocksDistributedEvenly_CP_holds)
```

```txt
NSRT-Unstack: Parameters: [?block:block, ?otherblock:block, ?robot:robot] Preconditions: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)] Add Effects: [Clear(?otherblock:block), Holding(?block:block)] Delete Effects: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)] Ignore Effects: [] Option Spec: Pick(?robot:robot, ?block:block)
NSRT-Op3: Parameters: [?block:block, ?otherblock:block, ?robot:robot] Preconditions: [Clear(?otherblock:block), Holding(?block:block)] Add Effects: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)] Delete Effects: [Clear(?otherblock:block), Holding(?block:block)] Ignore Effects: [] Option Spec: Stack(?robot:robot, ?otherblock:block)
NSRT-Op2: Parameters: [?x0:machine, ?x1:plate, ?x2:plate, ?x3:robot] Preconditions: [BlocksDistributedEvenly(?x2:plate, ?x1:plate)] Add Effects: [MachineOn(?x0:machine)] Delete Effects: [] Ignore Effects: [] Option Spec: TurnMachineOn(?x3:robot, ?x1:plate, ?x2:plate)
NSRT-Op4: Parameters: [?block:block, ?robot:robot, ?plate:plate] Preconditions: [ClearPlate(?plate:plate), Holding(?block:block)] Add Effects: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Delete Effects: [ClearPlate(?plate:plate), Holding(?block:block)] Ignore Effects: [] Option Spec: PutOnPlate(?robot:robot, ?plate:plate)
NSRT-PickFromTable: Parameters: [?block:block, ?robot:robot, ?plate:plate] Preconditions: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Add Effects: [Holding(?block:block)] Delete Effects: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Ignore Effects: [] Option Spec: Pick(?robot:robot, ?block:block)
```

# C.3 FURTHER PLANNING STATISTICS

The average number of nodes expanded during planning and the wall-clock time statistics for our approach, alongside other planning approaches, are summarized in Table 1.

In the Blocks and Balance domains, our use of derived predicates is not compatible out of the box with relaxed planning heuristics, such as LM-cut, which we typically employ through Hyperplan. As a result, we resorted to a simpler goal-count heuristic, which estimates the distance to the goal by counting the number of unsatisfied goals. This heuristic is less informed than LM-cut, leading to significantly more node expansions and longer planning times in these domains than expected. In future work, we aim to develop a version of LM-cut that is compatible with derived NSPs.

# D ADDITIONAL ENVIRONMENT DETAILS

Cover. This environment has the goal predicate $\{\text{Covers(?x:block, ?y:target)}\}$. The initial operators are:

<table><tr><td></td><td colspan="3">Ours</td><td colspan="3">Oracle</td><td colspan="3">Sym. pred.</td></tr><tr><td>Environment</td><td>Succ</td><td>Node</td><td>Time</td><td>Succ</td><td>Node</td><td>Time</td><td>Succ</td><td>Node</td><td>Time</td></tr><tr><td>Cover</td><td>100.0</td><td>9.4</td><td>0.142</td><td>100.0</td><td>8.4</td><td>0.129</td><td>100.0</td><td>26.9</td><td>0.151</td></tr><tr><td>Blocks</td><td>96.0</td><td>1117675</td><td>254.621</td><td>94.0</td><td>550630</td><td>101.737</td><td>7.2</td><td>121.4</td><td>4.279</td></tr><tr><td>Cover Heavy</td><td>97.0</td><td>7.9</td><td>0.057</td><td>100.0</td><td>5.4</td><td>0.060</td><td>46.0</td><td>5.7</td><td>0.061</td></tr><tr><td>Coffee</td><td>65.3</td><td>40.3</td><td>0.969</td><td>99.3</td><td>19.3</td><td>0.652</td><td>68.0</td><td>199.4</td><td>3.270</td></tr><tr><td>Balance</td><td>100.0</td><td>26.3</td><td>0.856</td><td>100.0</td><td>30.6</td><td>0.585</td><td>20.0</td><td>12.2</td><td>0.125</td></tr></table>

<table><tr><td></td><td colspan="3">Ours</td><td colspan="3">Ablate op.</td><td colspan="3">No invent</td></tr><tr><td>Environment</td><td>Succ</td><td>Node</td><td>Time</td><td>Succ</td><td>Node</td><td>Time</td><td>Succ</td><td>Node</td><td>Time</td></tr><tr><td>Cover</td><td>100.0</td><td>9.4</td><td>0.142</td><td>100.0</td><td>7.0</td><td>0.148</td><td>68.0</td><td>28.1</td><td>0.113</td></tr><tr><td>Blocks</td><td>96.0</td><td>1117675</td><td>254.621</td><td>12.0</td><td>24.8</td><td>0.222</td><td>1.3</td><td>321.0</td><td>0.224</td></tr><tr><td>Cover Heavy</td><td>97.0</td><td>7.9</td><td>0.057</td><td>46.0</td><td>5.7</td><td>0.128</td><td>36.7</td><td>29.5</td><td>0.099</td></tr><tr><td>Coffee</td><td>65.3</td><td>40.3</td><td>0.969</td><td>65.3</td><td>29.6</td><td>2.441</td><td>0.0</td><td>-</td><td>-</td></tr><tr><td>Balance</td><td>100.0</td><td>26.3</td><td>0.856</td><td>100.0</td><td>28.0</td><td>1.180</td><td>25.3</td><td>13.5</td><td>0.204</td></tr></table>

Table 1: Further planning statistics. Succ is the task success rate (%), Node is the average number of planner nodes expanded, and Time is the average wall-clock planning time in seconds.
590
+
591
+ ```txt
592
+ NSRT-Pick: Parameters: [?block:block] Preconditions: [] Add Effects:[] Delete Effects:[] Ignore Effects:[] Option Spec: Pick(?block:block)
593
+ NSRT-Place: Parameters: [?block:block, ?target:target] Preconditions:[] Add Effects:[Covers(?block:block, ?target:target)] Delete Effects:[] Ignore Effects:[] Option Spec: Place(?block:block, ?target:target)
594
+ ```
595
+
596
+ Blocks. This environment has goal predicates $\{\text{On(?x:block, ?y:block)}, \text{OnTable(?x:block)}\}$ and the initial operators:
597
+
598
+ ```txt
599
+ NSRT-PickFromTable: Parameters: [?block:block, ?robot:robot] Preconditions: [] Add Effects: [] Delete Effects: [OnTable(?block:block)] Ignore Effects: [] Option Spec: Pick(?robot:robot, ?block:block)
600
+ NSRT-PutOnTable: Parameters: [?block:block, ?robot:robot] Preconditions: [] Add Effects: [OnTable(?block:block)] Delete Effects: [] Ignore Effects: [] Option Spec: PutOnTable(?robot:robot)
601
+ NSRT-Stack: Parameters: [?block:block, ?otherblock:block, ?robot:robot] Preconditions: [] Add Effects: [On(?block:block, ?otherblock:block)] Delete Effects: [] Ignore Effects: [] Option Spec: Stack(?robot:robot, ?otherblock:block)
602
+ NSRT-Unstack: Parameters: [?block:block, ?otherblock:block, ?robot:robot] Preconditions: [] Add Effects: [] Delete Effects: [On(?block:block, ?otherblock:block)] Ignore Effects: [] Option Spec: Pick(?robot:robot, ?block:block)
603
+ ```
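+ Read as STRIPS-style operators, a ground instance of an NSRT like those above is applied to an abstract state by checking its preconditions, then applying its delete and add effects. The sketch below is our own illustration (the data structures and the learned-style effects are hypothetical, mirroring the Unstack listing in Appendix C):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GroundOperator:
    name: str
    preconditions: frozenset   # sets of ground atoms, as (predicate, *args) tuples
    add_effects: frozenset
    delete_effects: frozenset

def apply_operator(op, state):
    """Check preconditions against the abstract state, then apply
    delete effects followed by add effects."""
    if not op.preconditions <= state:
        raise ValueError(f"{op.name}: preconditions unsatisfied")
    return (state - op.delete_effects) | op.add_effects

# Ground instance of an Unstack-style operator for b0 stacked on b1, robot r0.
unstack = GroundOperator(
    name="Unstack(b0, b1, r0)",
    preconditions=frozenset({("Clear", "b0"), ("On", "b0", "b1"), ("GripperOpen", "r0")}),
    add_effects=frozenset({("Clear", "b1"), ("Holding", "b0")}),
    delete_effects=frozenset({("Clear", "b0"), ("On", "b0", "b1"), ("GripperOpen", "r0")}),
)
state = frozenset({("Clear", "b0"), ("On", "b0", "b1"), ("OnTable", "b1"), ("GripperOpen", "r0")})
next_state = apply_operator(unstack, state)
```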
604
+
605
+ Coffee. This environment has the goal predicate $\{\text{CupFilled(?cup:cup)}\}$. We include the predicate JugFilled(?jug:jug) in the initial set of predicates because it was very challenging to have a VLM determine it, especially with the graphics in the simulator. It has the initial operators:
606
+
607
+ ```txt
608
+ NSRT-PickJugFromTable: Parameters: [?robot:robot, ?jug:jug] Preconditions: [] Add Effects: [] Delete Effects: [] Ignore Effects: [] Option Spec: PickJug(?robot:robot, ?jug:jug)
609
+ NSRT-PlaceJugInMachine: Parameters: [?robot:robot, ?jug:jug, ?machine:coffee-machine] Preconditions: [] Add Effects: [] Delete Effects: [] Ignore Effects: [] Option Spec: PlaceJugInMachine(?robot:robot, ?jug:jug, ?machine:coffee-machine)
610
+ NSRT-PourFromNowhere: Parameters: [?robot:robot, ?jug:jug, ?cup:cup] Preconditions: [] Add Effects: [CupFilled(?cup:cup)] Delete Effects: [] Ignore Effects: [] Option Spec: Pour(?robot:robot, ?jug:jug, ?cup:cup)
611
+ NSRT-TurnMachineOn: Parameters: [?robot:robot, ?jug:jug, ?machine:coffee-machine] Preconditions: [] Add Effects: [JugFilled(?jug:jug)] Delete Effects: [] Ignore Effects: [] Option Spec: TurnMachineOn(?robot:robot, ?machine:coffee-machine)
612
+ NSRT-Twist: Parameters: [?robot:robot, ?jug:jug] Preconditions: [] Add Effects: [] Delete Effects: [] Ignore Effects: [] Option Spec: Twist(?robot:robot, ?jug:jug)
613
+ ```
614
+
615
+ Cover Heavy. This has the same set of goal predicates and operators as Cover.
616
+
617
+ Balance. This has the goal predicate $\{\text{MachineOn(?x:machine)}\}$. Here we consider a continual learning setting where the agent is initialized with the abstractions commonly found in Blocks. They are {Clear(?x:block), ClearPlate(?x:plate), DirectlyOn(?x:block, ?y:block), DirectlyOnPlate(?x:block, ?y:plate), GripperOpen(?x:robot), Holding(?x:block)}. The initial set of operators is:
618
+
619
+ ```txt
620
+ NSRT-PickFromTable: Parameters: [?block:block, ?robot:robot, ?plate:plate] Preconditions: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Add Effects: [Holding(?block:block)] Delete Effects: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Ignore Effects: [] Option Spec: Pick(?robot:robot, ?block:block)
621
+ ```
622
+
623
+ ```txt
624
+ NSRT-PutOnPlate: Parameters: [?block:block, ?robot:robot, ?plate:plate] Preconditions: [ClearPlate(?plate:plate), Holding(?block:block)] Add Effects: [Clear(?block:block), DirectlyOnPlate(?block:block, ?plate:plate), GripperOpen(?robot:robot)] Delete Effects: [ClearPlate(?plate:plate), Holding(?block:block)] Ignore Effects: [] Option Spec: PutOnPlate(?robot:robot, ?plate:plate)
625
+ ```
626
+
627
+ ```txt
628
+ NSRT-Stack: Parameters: [?block:block, ?otherblock:block, ?robot:robot] Preconditions: [Clear(?otherblock:block), Holding(?block:block)] Add Effects: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)] Delete Effects: [Clear(?otherblock:block), Holding(?block:block)] Ignore Effects: [] Option Spec: Stack(?robot:robot, ?otherblock:block)
629
+ ```
630
+
631
+ ```txt
632
+ NSRT-Unstack:
633
+ Parameters: [?block:block, ?otherblock:block, ?robot:robot]
634
+ Preconditions: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)]
635
+ Add Effects: [Clear(?otherblock:block), Holding(?block:block)]
636
+ Delete Effects: [Clear(?block:block), DirectlyOn(?block:block, ?otherblock:block), GripperOpen(?robot:robot)]
637
+ Ignore Effects: []
638
+ Option Spec: Pick(?robot:robot, ?block:block)
639
+ ```
640
+
641
+ ```txt
642
+ NSRT-TurnMachineOn: Parameters: [?robot:robot, ?machine:machine, ?plate1:plate, ?plate2:plate] Preconditions: [] Add Effects: [MachineOn(?machine:machine)] Delete Effects: [] Ignore Effects: [] Option Spec: TurnMachineOn(?robot:robot, ?plate1:plate, ?plate2:plate)
643
+ ```
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:83a9036932ff776d38276049e4c7f0979319d8fb74837770a11668f704e4ae28
3
+ size 527048
ICLR/2025/VisualPredicator_ Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40c03dab815faa895249c4c7abbb7b0f357b1d7914ff820fd80881a118b7b045
3
+ size 636996
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09d638125eaa6f924d304f608ee1856707b72f858098ed49b94795d667179115
3
+ size 366911
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ce5ddb8cb1094c033263ab8a3c51eaaa5c98877dc64d4598c4b3bf09cdeaf0f3
3
+ size 430817
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/78f76ef6-b915-4041-a9d9-620580552cf8_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:581066ad45dcb42554f32b9fe7683f5db39a4b35cc476640386fc15f9a29cc0f
3
+ size 5799190
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8036af838c4b0bf8987f646d563ad296c09c6e39bb727d44cdfb624eb214b57e
3
+ size 5812058
ICLR/2025/Walk the Talk_ Measuring the Faithfulness of Large Language Model Explanations/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a94dd559e55f5882c148af8457129fd3ae388394b77a8f34b9a8b85ae55be912
3
+ size 1662452
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5a1dcd5b5909ad5272400a904c42cb22500d383e5720f8dd72aeb015db605bb
3
+ size 127972
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df7e7b6a8747782c6d88c4e8dd14c9b84a673130fbf5dc0947f716a79a60264b
3
+ size 150689
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/2b0daec3-74b1-4c85-9ff9-cb1b6fc87ccc_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddf6518a80182dbd139a029e837af8491a4cbc2d13d361f78105b22d87778fdf
3
+ size 5485290
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/full.md ADDED
@@ -0,0 +1,430 @@
1
+ # WASSERSTEIN DISTANCES, NEURONAL ENTANGLEMENT, AND SPARSITY
2
+
3
+ Shashata Sawmya$^{1*}$, Linghao Kong$^{1*}$, Ilia Markov$^{2}$, Dan Alistarh$^{2,3,4}$, & Nir Shavit$^{1,3,4}$
4
+
5
+ $^{1}$MIT $^{2}$IST Austria $^{3}$Neural Magic $^{4}$Red Hat {shashata, linghao, shanir}@mit.edu, {ilia_markov, dan.alistarh}@ist.ac.at
6
+
7
+ # ABSTRACT
8
+
9
+ Disentangling polysemantic neurons is at the core of many current approaches to interpretability of large language models. Here we attempt to study how disentanglement can be used to understand performance, particularly under weight sparsity, a leading post-training optimization technique. We suggest a novel measure for estimating neuronal entanglement: the Wasserstein distance of a neuron's output distribution to a Gaussian. Moreover, we show the existence of a small number of highly entangled "Wasserstein Neurons" in each linear layer of an LLM, characterized by their highly non-Gaussian output distributions, their role in mapping similar inputs to dissimilar outputs, and their significant impact on model accuracy. To study these phenomena, we propose a new experimental framework for disentangling polysemantic neurons. Our framework separates each layer's inputs to create a mixture of experts where each neuron's output is computed by a mixture of neurons of lower Wasserstein distance, each better at maintaining accuracy when sparsified without retraining. We provide strong evidence that this is because the mixture of sparse experts is effectively disentangling the input-output relationship of individual neurons, in particular the difficult Wasserstein neurons.
10
+
11
+ # 1 INTRODUCTION
12
+
13
+ Disentangling polysemantic neurons into their component, human-understandable features has been a longstanding goal of machine learning interpretability research (Olah et al., 2020; Jermyn et al., 2022; Elhage et al., 2022; Gurnee et al., 2023; Templeton, 2024; Gurnee et al., 2024). While neurons are the basic building blocks of neural network architectures, they do not map one-to-one with specific features. Instead, neurons frequently engage in polysemantic representations, where they are activated by multiple, unrelated concepts and detect diverse features (Arora et al., 2018; Mu & Andreas, 2020). It is suspected that every neuron is polysemantic to some degree (Lecomte et al., 2023), and so we will refer to all neurons as polysemantic in this work.
14
+
15
+ Due to the importance of highly polysemantic neurons in a network's computation (Bricken et al., 2023), the question of whether these neurons require more parameters naturally arises. However, the effects of polysemanticity on network performance under weight sparsity have not been well explored. Weight sparsification (Hoefler et al., 2021) aims to reduce the number of executed parameters in large language models (LLMs) by setting certain weight values to zero to improve efficiency. Various sparsification algorithms have been developed for this process (Han et al., 2015; Sun et al., 2023; Frantar & Alistarh, 2023). This paper investigates the relationship between an individual neuron's degree of entanglement (which we formally define in a later section) and its ability to be sparsified in real-world models. To the best of our knowledge, this is the first work to explore this crucial perspective of entanglement-dependent model sparsification.
16
+
17
+ To better understand the impact of entanglement on sparsification, we introduce a novel metric that quantifies a neuron's degree of entanglement: the Wasserstein distance between a neuron's output distribution and a Gaussian (Equation 1).
18
+
19
+ We find that neurons with a particularly high Wasserstein distance (Figure 1d, A8d) are crucial to the performance of a network and highly sensitive to pruning. We provide evidence that a neuron's Wasserstein distance is related to its ability to map similar inputs to different outputs through its dot product, and we refer to these neurons as especially entangled (Equation 2). Akin to previous works investigating special types of neurons (Gurnee et al., 2023; Stolfo et al., 2024; Gurnee et al., 2024), this work explores the role of crucial neurons with implications for interpretability, specifically in the context of network sparsity.
20
+
21
+ ![](images/5c98c5314a83c724ad39d462a17e057078d60c65ec489e8593f61346b7731cbf.jpg)
22
+ Figure 1: The output distributions of neurons in Llama-2-7B computed densely and at $90\%$ sparsity on Wikitext-2. WD refers to the Wasserstein distance of the output distribution to a Gaussian. RI refers to the relative improvement of Sparse Expansion over SparseGPT. (a) The dense output distribution of a random neuron with a WD of 0.050 is well captured by SparseGPT, and (b) expanding this neuron via Sparse Expansion imparts only a small $(18\%)$ increase in performance. (c) The cluster outputs are all concentrated in close proximity to each other. (d) SparseGPT struggles to capture the dense distribution of an entangled neuron with a WD of 0.524. (e) Following expansion, the sparse output of the entangled neuron is much better captured, leading to more improvement $(77\%)$ . (f) Each expert specializes over a different portion of the distribution.
23
+
24
+ To analyze the phenomenon of neuronal superposition under sparsity in greater detail, we create an experimental framework, which we dub Sparse Expansion. It expands a model into a mixture of sparse experts by clustering input embeddings layer-wise. Based on this clustering, Sparse Expansion utilizes the input-aware nature of the SparseGPT (Frantar & Alistarh, 2023) pruning algorithm to specialize different sparse experts to different sets of inputs, starting from the same base weights. Through Sparse Expansion, we are able to analyze the entangled neurons in much more detail, since different subgroups of the inputs are now computed with different edges (Figure 1f, A8f). We find that as a neuron loses edges, its output distribution tends to shift toward a Gaussian distribution (Figure A9). However, through Sparse Expansion, the original output distribution can be better preserved under sparse computation (Figure 1e, A8e). We relate our findings to recent theoretical work on the bounds of neural computation under superposition (Hänni et al., 2024; Adler & Shavit, 2024).
25
+
26
+ Our main technical contribution is a detailed study of how model accuracy under sparsity is related to its degree of neuronal entanglement. In every LLM, there exist neurons that have striking, irregular output distributions (Figure 2c, A1). These neurons have an outsized effect on model performance and seem to be responsible for differentiating similar input vectors (Figure 2). We believe that the existence of these neurons is a manifestation of polysemanticity in real-world language models. We find that the Wasserstein distance to a Gaussian is a strong indicator of such neurons.
27
+
28
+ In the next section we explain such "Wasserstein neurons", neuronal entanglement, and the implication of ablating Wasserstein neurons in LLMs in detail. We then formulate our experimental framework Sparse Expansion and show how to effectively disentangle the input-output relationship of neurons through Sparse Expansion, as well as some empirical computational bounds. Finally, we present some results showing its performance relative to other state-of-the-art one-shot compression techniques in the hopes of inspiring future sparsification algorithms.
29
+
30
+ # 2 WASSERSTEIN NEURONS
31
+
32
+ # 2.1 CHARACTERIZING NON-GAUSSIAN NEURONAL OUTPUT DISTRIBUTIONS
33
+
34
+ We investigate the output distributions of individual neurons in all linear layers of transformer feedforward networks (FFNs) during inference. Specifically, consider a linear operation $\mathbf{Y} = \mathbf{W}\mathbf{X} + \mathbf{b}$ , where $\mathbf{Y} \in \mathbb{R}^{n \times s}$ is the output matrix, $\mathbf{W} \in \mathbb{R}^{n \times m}$ is the weight matrix, $\mathbf{b} \in \mathbb{R}^n$ is the bias vector,
35
+
36
+ broadcasted across all neurons, and $\mathbf{X} \in \mathbb{R}^{m \times s}$ is the input matrix, where each column represents an input vector. Each neuron is an individual row of $\mathbf{W}$ , and we collect individual scalar elements from the corresponding row in $\mathbf{Y}$ as the output distribution for that neuron.
37
+
38
+ We focus our analysis on Pythia-1.4B (Biderman et al., 2023), Llama-2-7B (Touvron et al., 2023), and Llama-3-8B (Dubey et al., 2024). Most neurons exhibit a reasonably Gaussian output distribution after their dot product with the input vector (Figure 1a, 2a). However, we find a small group of neurons with highly non-Gaussian outputs (Figure 1d, 2c) in all FFNs (Figure A1).
39
+
40
+ To characterize the difference in shape between the non-Gaussian output distributions of certain neurons and the Gaussian-like output distributions of most neurons, we considered several metrics, such as entropy. However, the Wasserstein distance (WD) (Kantorovich, 2006; Villani et al., 2009) proved to be the most effective metric for quantifying this difference. In optimal transport theory, the WD measures the minimal transportation cost between two distributions, taking their geometry in real space into account.
41
+
42
+ To find the WD of every neuron to the Gaussian $\mathcal{N}$ , we crucially first normalize the output distributions of each neuron $n$ to have zero mean and unit variance, and compare this normalized distribution $n'$ to $\mathcal{N}(0,1)$ . This normalization is performed because the range of neuron output distributions is quite variable, and we wanted to prioritize the differences in the shape of the distributions, rather than other properties. We use the 1-Wasserstein distance in one dimension, as shown in Equation 1.
43
+
44
+ $$
45
+ W_{1}\left(n^{\prime}, \mathcal{N}\right) = \int_{0}^{1} \left| F^{-1}(z) - \varphi^{-1}(z) \right| \, dz. \tag{1}
46
+ $$
47
+
48
+ $F^{-1}$ and $\varphi^{-1}$ are the inverse cumulative distribution functions of $n^{\prime}$ and $\mathcal{N}(0,1)$, respectively, which can be approximated with empirical data. To compute the WD of every neuron efficiently, we use the SciPy implementation (Virtanen et al., 2020). When computing the difference metric in this way, we find that our originally observed neurons (Figure 1d, A8d) are correctly assigned a high WD to $\mathcal{N}$. We thus term these neurons "Wasserstein neurons." We also observe little overlap between neurons with high mean weight magnitudes and Wasserstein neurons (Figure A4a).
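+ Equation 1 can be approximated directly from samples. The paper uses SciPy's `wasserstein_distance`; the NumPy-only sketch below is our own: it normalizes a neuron's outputs to zero mean and unit variance, then averages the absolute difference between the empirical quantile functions of the outputs and of a standard Gaussian reference sample.

```python
import numpy as np

def wd_to_gaussian(outputs, n_ref=100_000, seed=0):
    """Approximate Equation 1: normalize a neuron's outputs, then
    compute the empirical 1-Wasserstein distance to N(0, 1) via a
    quantile grid (W1 in 1D is the integral of |F^-1 - phi^-1|)."""
    rng = np.random.default_rng(seed)
    z = (outputs - outputs.mean()) / outputs.std()
    ref = rng.standard_normal(n_ref)          # Gaussian reference sample
    q = np.linspace(0.0, 1.0, 2001)[1:-1]     # interior quantile grid
    return np.mean(np.abs(np.quantile(z, q) - np.quantile(ref, q)))

rng = np.random.default_rng(1)
gauss_neuron = rng.standard_normal(20_000)                 # "random" neuron
bimodal_neuron = np.concatenate([rng.normal(-2, 0.3, 10_000),
                                 rng.normal(2, 0.3, 10_000)])  # "Wasserstein" neuron
wd_random = wd_to_gaussian(gauss_neuron)        # close to zero
wd_entangled = wd_to_gaussian(bimodal_neuron)   # much larger
```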
49
+
50
+ We additionally analyze Pythia-1.4B across its training, from network initialization to the final step. We find that Wasserstein neurons do not seem to receive more weight updates than other neurons (Figure A2a). Interestingly, we also find that Wasserstein neurons arise relatively early on in training, within 10-20 billion tokens (Figure A2b). This phenomenon is likely related to and a manifestation of other observations that fundamental model training dynamics rapidly stabilize, such as the rank of the gradient or the largest eigenvalue of the loss hessian (Gur-Ari et al., 2018; Zhao et al., 2024; Noci et al., 2024). We leave further investigations into this crucial training period to future work.
51
+
52
+ # 2.2 WASSERSTEIN NEURONS AND ENTANGLEMENT
53
+
54
+ Here, we define and study the notion of entanglement of these Wasserstein neurons in greater detail by positing a new avenue to investigate entanglement. According to superposition theory, as the number of features increases relative to the number of neurons, features are forced to become non-orthogonal in order to represent more of them, thus increasing entanglement (Elhage et al., 2022). Consider neurons that must attend to multiple of these features. As the number of features increases, and different features are forced to become more similar in direction, such neurons must still manage to distinguish between them. Therefore, in this context, neurons that are highly entangled have the task of differentiating between similar input vectors, and mapping them to different output values.
55
+
56
+ To mathematically explore this concept, we study the input-output (IO) relationship of individual neurons. We introduce the metric "mapping difficulty" (MD), which measures how often a neuron must generate dissimilar outputs from similar inputs through its dot product computation. The MD for a particular neuron, given its weights and a set of inputs, is calculated as follows (Equation 2):
57
+
58
+ $$
59
+ \mathrm{MD}(\boldsymbol{w}, \mathbb{X}) = \operatorname*{mean}_{1 \leq i < j \leq n} \left\{ \left( \frac{\| y_{i} - y_{j} \|}{N_{y}} \right) \middle/ \left( \frac{\| \boldsymbol{x}_{i} - \boldsymbol{x}_{j} \|}{N_{\boldsymbol{x}}} \right) \right\}, \qquad \boldsymbol{x}_{i}, \boldsymbol{x}_{j} \in \mathbb{X}, \quad y_{i} = \boldsymbol{w} \cdot \boldsymbol{x}_{i}, \quad n = |\mathbb{X}| \tag{2}
60
+ $$
61
+
62
+ $$
63
+ N_{\boldsymbol{x}} = \max_{1 \leq i < j \leq n} \left\{ \| \boldsymbol{x}_{i} - \boldsymbol{x}_{j} \| \right\}, \qquad N_{y} = \operatorname*{median}_{1 \leq i < j \leq n} \left\{ \| y_{i} - y_{j} \| \right\}
64
+ $$
65
+
66
+ $\pmb{x}_i$ and $\pmb{x}_j$ represent two distinct input vectors from the set of inputs $\mathbb{X}$ . $y_i$ and $y_j$ represent the two output scalars resulting from the dot product of an individual neuron's weights $\pmb{w}$ with the inputs. For every pair of inputs, we compute the $L^2$ norm of their difference, then scale the norms between zero and one using the maximum norm $N_{\pmb{x}}$ . We then compute the $L^2$ norm of the difference in their corresponding outputs, and normalize it with the median norm $N_y$ . More details on the rationale behind the normalizing factors can be found in Appendix A.8. The MD of a neuron is thus the average, over input pairs, of the ratio of the normalized difference in outputs to the normalized difference in inputs. Intuitively, a greater MD means that a neuron generally increases the separation of similar inputs into more dissimilar outputs.
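+ A direct, vectorized estimate of Equation 2 over all pairs of a small input set can be sketched as follows (our own illustration; real layer activations would replace the random data used here):

```python
import numpy as np

def mapping_difficulty(w, X):
    """Equation 2: mean ratio of normalized output-pair distance to
    normalized input-pair distance over all pairs i < j.
    w: (m,) neuron weights; X: (n, m), rows are input vectors."""
    n = X.shape[0]
    y = X @ w                                  # per-input dot products
    i, j = np.triu_indices(n, k=1)             # index all pairs i < j
    dx = np.linalg.norm(X[i] - X[j], axis=1)   # ||x_i - x_j||
    dy = np.abs(y[i] - y[j])                   # ||y_i - y_j||
    Nx = dx.max()                              # max input-pair norm
    Ny = np.median(dy)                         # median output-pair norm
    return np.mean((dy / Ny) / (dx / Nx))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # stand-in for collected inputs
w = rng.standard_normal(64)          # stand-in for a neuron's weights
md = mapping_difficulty(w, X)
```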
67
+
68
+ ![](images/af9ecf0f78c48e6261171ab43e097438430e27394366d8cc51d222265b1871fc.jpg)
69
+ Figure 2: A measure of neuronal entanglement. (a) The output distribution of a random neuron. (b) The normalized $L^2$ plot of a random neuron's pairs of inputs and outputs. (c) The output distribution of a Wasserstein neuron. (d) The normalized $L^2$ plot of a Wasserstein neuron's pairs of inputs and outputs. This neuron must map fairly similar inputs to outputs that are very far apart through its dot product operation. The neurons are from the up projection matrix of the second FFN block in Pythia-1.4B. (e) The MD of a neuron is highly correlated with its WD. The selected random and Wasserstein neurons are highlighted in their respective colors.
70
+
71
+ For the two neurons we have selected before, we plot the normalized $L^2$ distances for pairs of inputs $(||\pmb{x}_i - \pmb{x}_j|| / N_x)$ and outputs $(||y_i - y_j|| / N_y)$, as defined in Equation 2. These inputs and outputs were collected over the course of running the Wikitext-2 dataset (Merity et al., 2016) through Pythia-1.4B. For the random neuron, as the difference between inputs decreases, so too does the difference between outputs (Figure 2b). However, for the Wasserstein neuron, this is not the case—even relatively similar inputs are mapped to outputs almost as far apart as the entire range of the neuron (Figure 2d). A clear trend between the MD of a neuron and its WD emerges (Figure 2e), and the two measures are highly correlated. Thus, we propose the WD of a neuron's output distribution to a Gaussian as a novel metric of entanglement, with Wasserstein neurons being particularly entangled.
72
+
73
+ # 2.3 EFFECT OF HIGH WASSERSTEIN NEURONS ON SPARSIFICATION
74
+
75
+ In the previous section, we have related Wasserstein neurons to a novel formulation of entanglement. Now, we show that such neurons also have a substantially outsized effect on model performance under sparsity. In Llama-3-8B, if just $3\%$ of all neurons—those with the highest WD—are sparsified via SparseGPT in every FFN, model performance significantly degrades. This degradation is far more severe than when $3\%$ of random neurons are sparsified, and remains true when compared to sparsifying the same number of other important neurons, such as those with the greatest mean and variance in their output distributions and even those with the greatest mean weight magnitude. As compression increases, this effect becomes more obvious (Figure 3a). Therefore, Wasserstein neurons are crucial for maintaining accuracy and are severely limited in their ability to be compressed.
76
+
77
+ To better understand which specific capabilities are impacted by neuron entanglement, we evaluate the Llama-3-8B model with its Wasserstein neurons sparsified across several language model evaluation benchmarks. We select five tasks spanning four broad categories, similar to the original Llama-3 work (Dubey et al., 2024). For reading comprehension, we use the 1-shot variant of the SQuAD 2.0 dataset (Rajpurkar et al., 2018). To assess knowledge reasoning and mathematical capabilities, we evaluate the model on the 5-shot TriviaQA-Wiki (Joshi et al., 2017) and 5-shot GSM8K (Cobbe et al., 2021) datasets, respectively. Finally, to evaluate general reasoning, we test the model on two benchmarks: an easy task, 5-shot MMLU (Hendrycks et al., 2020), and a more challenging task, 3-shot Chain-of-Thought (CoT) Big Bench Hard (BBH) (Suzgun et al., 2022).
78
+
79
+ ![](images/825d3587a04c0c7d65283bdca7df3b77835ce3a7e693a1e3d47aa939c43eb04e.jpg)
80
+ Figure 3: Entangled neurons are much more sensitive to compression. In Llama-3-8B, $3\%$ of neurons from every FFN linear layer are sparsified via SparseGPT in an unstructured manner with a subset of the Wikitext-2 train dataset as calibration data. (a) Sparsifying Wasserstein neurons (blue) impairs the model more than sparsifying neurons with the highest output distribution means (orange) and variances (green), those with the highest average mean weight magnitude (purple), and considerably more than random neurons (red). Perplexity is measured on the Wikitext-2 test dataset. (b-d) Sparsifying the Wasserstein neurons (blue) affects general and mathematical reasoning much more than random neurons (red), as shown in the capability charts. At higher levels of neuron sparsity $(\geq 95\%)$ , ablating Wasserstein neurons leads to a collapse in model performance, which does not occur with random neurons.
81
+
82
+ ![](images/7f9b7f9c1a7fb36b4173a1beb73dc5006859412aa3f74908cc10663005b6e0e1.jpg)
83
+
84
+ ![](images/eeab8b161fd3328f04331ed21877e8d4d7a372b378531f5581d5282800460d9e.jpg)
85
+
86
+ ![](images/473626a4208dfdc30f32a1718345627a26e5bb506841efb89abc56e05033c10a.jpg)
87
+
88
+ Our findings reveal that when just a small fraction of neurons (the top $3\%$ Wasserstein neurons) is sparsified, the model's performance on complex tasks involving general reasoning and mathematical understanding is significantly impacted. However, when the same level of sparsification is applied to random neurons, the model preserves most of its capabilities. Additionally, as a neuron is increasingly sparsified, its output distribution becomes more Gaussian (Figure A9, A10). This in turn places even more stress upon the neuron: not only is it contending with the decreasing mean and variance of its output distribution (Figure A11), but also with the less expressive distribution shape. Thus, it seems that, especially at the higher sparsities we analyze, the irregular shape of the entangled neurons is much more challenging to model with fewer weights than a Gaussian-like distribution. Furthermore, partially due to their slightly lower mean weight magnitudes (Figure A4a), Wasserstein neurons are actually sparsified more by SparseGPT under unstructured sparsity, compounding this issue (Figure A4b). However, keeping Wasserstein neurons dense at the cost of sparsifying all other neurons even more also does not seem to be the solution (Appendix A.7). To investigate the difficulty of sparsifying entangled neurons and the relationship between superposition and performance, we introduce Sparse Expansion.
89
+
90
+ # 3 AN EXPERIMENTAL FRAMEWORK TO STUDY DISENTANGLEMENT
91
+
92
+ To better study Wasserstein neurons and the interplay between entanglement, sparsity, and performance that we observe, we create the experimental framework Sparse Expansion. It is inspired by recent work on the theoretical limits of computation within superposition (Hänni et al., 2024; Adler & Shavit, 2024). Sparse Expansion was designed to achieve two goals in real-world models. First, it must originate from a trained dense model and not be retrained. This way, the dynamics of a single neuron, in particular a Wasserstein neuron, can still be studied in depth after the model has been expanded. Second, from a theoretical perspective, it must test how varying the number of effective features in the input affects the number of required weights, so that the relationship between superposition and sparsity can be further understood.
93
+
94
+ # 3.1 SPARSE EXPANSION IN DETAIL
95
+
96
+ Sparse Expansion clusters the inputs to each layer into separate groups via an optional PCA dimensionality reduction followed by K-means clustering. Each expert is then sparsified via the SparseGPT algorithm (Algorithm A1). Briefly, for a layer $\mathbf{Y} = \mathbf{W}\mathbf{X} + \mathbf{b}$, SparseGPT approximates the optimal sparse weight matrix using the Hessian of the reconstruction error with respect to the layer's parameters, which evaluates to $\mathbf{H} = \mathbf{X}\mathbf{X}^T$.
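A minimal sketch of this Hessian estimate from calibration inputs. SparseGPT additionally solves for the sparse weights column by column; the diagonal dampening shown here mirrors its practice of keeping the Hessian well-conditioned, and all names are illustrative.

```python
import numpy as np

def layer_hessian(X: np.ndarray, damp: float = 0.01) -> np.ndarray:
    """Hessian of the squared reconstruction error of Y = W X + b with
    respect to W, estimated from calibration inputs.

    X has shape (n_features, n_samples), so H = X X^T has shape
    (n_features, n_features). A small diagonal dampening term keeps H
    well-conditioned for the inversions the sparsifier performs."""
    H = X @ X.T
    H += damp * np.mean(np.diag(H)) * np.eye(H.shape[0])
    return H

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))  # 8 input features, 64 calibration samples
H = layer_hessian(X)
assert H.shape == (8, 8)
assert np.allclose(H, H.T)  # symmetric by construction
```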
97
+
98
+ ![](images/f7d5ee36287df0b6caf8e2fb0d35236c111611e20dc26869d6f8e016d10b70cc.jpg)
99
+ Figure 4: The Sparse Expansion process. One-shot expert creation process of Sparse Expansion (left). Inference process in a FFN of an expanded model (right).
100
+
101
+ ![](images/c841185f8753021fd87ab30aad1fc4172d89a984551a18424b900376ddcdbfea.jpg)
102
+
103
+ During inference, each input is passed through the PCA and K-means models to determine its expert, then routed to the corresponding expert for the matrix multiply (Algorithm A2). Because routing is done via K-means in a reduced-dimensional space, and the PCA projection is itself a low-dimensional matrix multiply, both add little cost to normal LLM inference. Furthermore, routing in this manner avoids the need to train and run a more expensive learned router.
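A minimal sketch of this routing path, assuming scikit-learn's `PCA` and `KMeans` as stand-ins for the fitted routing models; all names, shapes, and the random expert weights are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_features, n_clusters, reduce_to = 16, 4, 4

# Offline: fit PCA and K-means on calibration inputs, one expert per cluster.
calib = rng.standard_normal((512, n_features))
pca = PCA(n_components=reduce_to).fit(calib)
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pca.transform(calib))
# Stand-ins for the sparsified expert weight matrices W_j.
experts = [rng.standard_normal((8, n_features)) for _ in range(n_clusters)]

# Inference: route each input through PCA -> K-means, then use that expert.
x = rng.standard_normal((1, n_features))
j = int(km.predict(pca.transform(x))[0])
y = experts[j] @ x.ravel()
assert 0 <= j < n_clusters
assert y.shape == (8,)
```

The router's cost is one small matrix multiply (PCA) plus a nearest-centroid lookup, which is why it is cheap relative to the layer's own matrix multiply.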
104
+
105
+ Our design explicitly achieves the goals we set out. First, by starting from a dense model, we are able to study how the separation of inputs affects individual neurons, which we would not be able to do for the same neuron index across experts in a MoE model such as Mixtral (Jiang et al., 2024) and DeepSeek (Guo et al., 2025). Second, by utilizing SparseGPT, each expert has its weights sparsified and tailored to a subset of inputs, testing the theoretical limits of how many weights are necessary to model a given number of features.
106
+
107
+ # 3.2 SPARSE EXPANSION DISENTANGLES NEURONS
108
+
109
+ We revisit the output distributions of neurons to determine the effect of clustering in a sparse setting. First, we repeat the sparsification experiment from Figure 3 on Wikitext-2 in Llama-3-8B, but now expand just the pruned neurons into 16 experts and measure the recovery in performance. Sparse Expansion recovers substantial performance following Wasserstein neuron sparsification; the recovery for random neurons is far less pronounced, because these neurons were not significantly entangled to begin with (Figure 5a). Furthermore, both the weighted cluster WD and weighted cluster MD of the majority of neurons decrease as a result of Sparse Expansion; these are calculated as the average WD and MD within each cluster, weighted by cluster size. The decrease is especially pronounced for Wasserstein neurons, where $98\%$ of neurons show a decrease in weighted WD by a median of $42\%$ per neuron (Figure 5b), and where $96\%$ of neurons show a decrease in weighted MD by a median of $9\%$ per neuron (Figure 5c).
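The weighted cluster metric reduces to a cluster-size-weighted mean; a small sketch with illustrative per-cluster values:

```python
import numpy as np

def weighted_cluster_metric(per_cluster_values, cluster_sizes):
    """Average of a per-cluster metric (e.g. the WD or MD computed within
    each cluster), weighted by the number of inputs routed to each cluster."""
    v = np.asarray(per_cluster_values, dtype=float)
    s = np.asarray(cluster_sizes, dtype=float)
    return float(np.sum(v * s) / np.sum(s))

# Example: 4 experts with hypothetical per-cluster WDs and cluster sizes.
wd = weighted_cluster_metric([0.2, 0.4, 0.1, 0.3], [100, 50, 25, 25])
assert abs(wd - 0.25) < 1e-12
```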
110
+
111
+ For Llama-2-7B (Figure 1) and Pythia-1.4B (Figure A8), both models and both neuron types—random and Wasserstein—improve through Sparse Expansion, with the entangled neuron showing greater improvement. Furthermore, for the random neuron and especially for the entangled neuron, the geometry of the sparse output distribution in Sparse Expansion much more closely matches that of the dense distribution.
112
+
113
+ We also provide a visualization for the specialization of each cluster. Figure 1 and Figure A8 each show the sparse output distributions of each individual cluster, with a different color per expert. For the randomly selected neurons, there is still an improvement, although each expert is for the most part responsible for approximately the same range and shape. For the entangled neurons, there is significant specialization for different parts of the distribution further away from the mode.
114
+
115
+ However, Sparse Expansion is not limited to improving just Wasserstein neurons. Across different sparsities and across different models, all but a tiny fraction of neurons improve through Sparse Expansion (Figure A12). Thus, like other work regarding polysemanticity (Lecomte et al., 2023; Bricken et al., 2023), we believe that, in fact, every neuron is entangled to some extent. However, Wasserstein neurons are the most obviously entangled ones, and they benefit more from sparse disentanglement, especially at higher sparsities. Finally, we note that other metrics of the dense neuronal output distribution, such as its mean and variance, fail to act as a predictor of neuronal improvement to the degree that the Wasserstein distance to a Gaussian does (Figure 7,
116
+
117
+ ![](images/444e0b0152fef4cc804cda04879a753ae282eb297d8f82a4a8a27ae1f57b48ef.jpg)
118
+ Figure 5: Sparse Expansion recovers performance of Wasserstein neurons. (a) Although Wasserstein neurons are penalized more under sparsity, they also recover better in Sparse Expansion compared to random neurons. We quantify this recovery using normalized perplexity relative to the dense model. Data from Llama-3-8B. (b) As a result of Sparse Expansion, the median decrease in WD per neuron is $19\%$ . Although a few neurons with an initially low dense WD exhibit a higher average weighted WD, the majority $(68\%)$ of all neurons show a decrease in weighted WD. This is especially true in the top $10\%$ of neurons with an originally high WD—the Wasserstein neurons. (c) Sparse Expansion also decreases the weighted MD by a median of $2\%$ per neuron. $70\%$ of all neurons and $96\%$ of Wasserstein neurons show a decrease in weighted MD, the latter with a median decrease of $9\%$ per neuron. (b, c) Data collected from the up projection matrix in the second FFN of Pythia-1.4B.
119
+
120
+ A13). Thus, we believe that the WD to normal for a neuron's output distribution is a very suitable and intuitive metric of entanglement within a neuron.
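The metric itself can be sketched as the 1-D Wasserstein distance between a neuron's output samples and a Gaussian matched to their mean and standard deviation. Estimating the Gaussian side by sampling, as done here, is a simplification; the toy distributions are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wd_to_normal(outputs: np.ndarray, rng=None) -> float:
    """WD between a neuron's empirical output distribution and a Gaussian
    with the same mean and std, estimated by sampling the Gaussian."""
    rng = rng or np.random.default_rng(0)
    gauss = rng.normal(outputs.mean(), outputs.std(), size=outputs.size)
    return wasserstein_distance(outputs, gauss)

rng = np.random.default_rng(1)
gaussian_neuron = rng.normal(0.0, 1.0, 20000)
bimodal_neuron = np.concatenate([rng.normal(-3, 0.5, 10000),
                                 rng.normal(3, 0.5, 10000)])
# A bimodal ("entangled") output distribution sits much farther from normal.
assert wd_to_normal(bimodal_neuron) > wd_to_normal(gaussian_neuron)
```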
121
+
122
+ # 3.3 MORE SPARSE EXPERTS BETTER FIT THE OUTPUT DISTRIBUTION
123
+
124
+ The complex dense output distribution of highly entangled neurons is difficult to model with a single sparse expert, as in the case of SparseGPT. In Figure 6, we show the output distribution of a Wasserstein neuron in both dense and sparse computation. As the number of sparse experts increases, the output distribution of the sparse computation more closely matches that of the dense computation, as measured in the WD between the two distributions. Furthermore, the relative improvement (RI) of Sparse Expansion over SparseGPT increases. In this paper, RI is measured as the ratio of the RMSE between the SparseGPT sparse computation and the dense computation, to the RMSE between the Sparse Expansion sparse computation and the dense computation.
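The RI metric as defined above, in a short sketch with toy output vectors:

```python
import numpy as np

def relative_improvement(dense, sparsegpt_out, sparse_expansion_out):
    """RI = RMSE(SparseGPT vs dense) / RMSE(Sparse Expansion vs dense).
    RI > 1 means Sparse Expansion tracks the dense output more closely."""
    rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return rmse(sparsegpt_out, dense) / rmse(sparse_expansion_out, dense)

dense = np.array([1.0, 2.0, 3.0, 4.0])
# SparseGPT off by 0.2 everywhere, Sparse Expansion off by 0.1: RI = 2.
ri = relative_improvement(dense, dense + 0.2, dense + 0.1)
assert abs(ri - 2.0) < 1e-9
```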
125
+
126
+ ![](images/74cdc8c33b3c3870641dcdf095f55b366dfca5d136153afd22f8dd85785767f2.jpg)
127
+ Figure 6: Modeling recovery with more experts. The sparse computation output distribution (red) better matches the dense one (blue) with more clusters. Sparsity is set to $90\%$ for each expert. Here, WD refers to the Wasserstein distance between the Sparse Expansion sparse and dense output distributions, rather than to a Gaussian. RI represents relative improvement of Sparse Expansion $(n \geq 1$ clusters) over SparseGPT $(n = 1$ cluster). This is the same neuron from Figure 2c.
128
+
129
+ # 3.4 WASSERSTEIN DISTANCE BEST EXPLAINS IMPROVEMENT
130
+
131
+ So far, we have claimed that Wasserstein distance is not only a pertinent indicator of neuronal entanglement, but also a predictor of its improvement in Sparse Expansion over SparseGPT. To test this idea, we compare how well the RI is modeled by a neuron's output WD, mean output magnitude, and output variance. Of these metrics, a neuron's Wasserstein distance is most correlated with its improvement in sparse computation from disentanglement (Figure 7, A13).
132
+
133
+ ![](images/be21d2721c08e6b36711cb1074162f0c27729add419de2e6ae79c4a111529622.jpg)
134
+ Figure 7: Wasserstein distance best explains improvement among tested metrics. The RI of each neuron in Pythia-1.4B was calculated as before and compared against the optimal number of Gaussians needed to model its output distribution (gray), the average magnitude of its output distribution (orange), the variance of its output distribution (green), and the Wasserstein distance of its output distribution to normal (blue). For each metric, the line of best fit is calculated, and the coefficient of determination $R^2$ is found. For each optimal number of Gaussians, the mean improvement is marked. Of these metrics, the Wasserstein distance best correlates with relative improvement. Data collected from the second up projection layer in Pythia-1.4B.
135
+
136
+ In addition, we test whether the estimated number of components in a Gaussian mixture model (GMM) is enough to explain the improvement resulting from disentanglement. Specifically, given a neuronal output distribution, we applied Gaussian mixture modeling to determine the optimal number of Gaussians required to model the distribution, using the Bayesian Information Criterion (BIC) for evaluation. BIC penalizes model complexity, identifying the minimum number of Gaussians that can optimally model the distribution. However, when testing between one and sixteen Gaussian components, our findings indicated almost no correlation $(R^2 \leq 0.001)$ between the optimal number of Gaussians and the relative improvement in the Sparse Expansion setup, as seen in Figure 7. Thus, in our experiments, we find that the Wasserstein distance is a better indicator than the others we have tested.
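A sketch of this BIC-based component selection, using scikit-learn's `GaussianMixture`; the toy bimodal distribution and the component cap are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def optimal_n_gaussians(samples: np.ndarray, max_components: int = 16) -> int:
    """Pick the GMM component count (1..max_components) minimizing BIC,
    which trades off fit quality against model complexity."""
    X = samples.reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, max_components + 1)]
    return int(np.argmin(bics)) + 1

rng = np.random.default_rng(0)
# Two well-separated modes: BIC should prefer exactly two components.
two_modes = np.concatenate([rng.normal(-4, 0.5, 500), rng.normal(4, 0.5, 500)])
k = optimal_n_gaussians(two_modes, max_components=4)
assert k == 2
```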
137
+
138
+ # 3.5 THEORETICAL IMPLICATIONS OF SPARSE EXPANSION
139
+
140
+ Recent theoretical work (Hänni et al., 2024; Adler & Shavit, 2024) investigates the algorithmic upper and lower bounds of polysemantic neuronal computation in toy examples. To seek empirical evidence for this line of work in real-world models, we investigate the improvements made by Sparse Expansion in Pythia-1.4B at $80\%$ unstructured sparsity. We estimate the approximate number of effective features a set of inputs has by applying PCA to the set and finding the minimum number of components required to reach $90\%$ explained variance. As expected, the average minimum required components for the inputs to the experts, weighted by the number of inputs in each group, decreases after clustering for every FFN weight matrix (Figure 8a).
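The effective-feature estimate can be sketched as the smallest number of principal components reaching 90% cumulative explained variance, computed here directly via SVD on synthetic data with two dominant directions.

```python
import numpy as np

def n_components_for_variance(X: np.ndarray, threshold: float = 0.9) -> int:
    """Minimum number of principal components whose cumulative explained
    variance reaches `threshold`. X has shape (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)
    # Squared singular values of the centered data are proportional to
    # the per-component variances, already sorted in descending order.
    s = np.linalg.svd(Xc, compute_uv=False)
    ratios = (s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

rng = np.random.default_rng(0)
# Two strong directions plus faint noise, embedded in 16 dimensions.
X = 0.01 * rng.standard_normal((1000, 16))
X[:, 0] += 2.0 * rng.standard_normal(1000)
X[:, 1] += 2.0 * rng.standard_normal(1000)
assert n_components_for_variance(X, 0.9) == 2
```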
141
+
142
+ To provide empirical evidence on the bounds of computation under entanglement, we explore modeling ability compared to the number of input features. To identify a bound for minimum error under sparse computation, we consider the RMSE of each clustered sparse output to the dense output, normalized to the overall RMSE for that layer as a proxy for computational ability. We compare this to the number of required PCA components for said cluster as before. Across all clusters in all layers of the network, there is a linear front that emerges in log-log scale: as the number of required components increases, so too does the minimum error (Figure 8b). Next, we consider the bound on maximum improvement in sparse computation under entanglement. When a cluster has fewer effective features, since each expert has the same number of parameters, Sparse Expansion allocates relatively more parameters to model these features than SparseGPT does, as the latter must account for all inputs. However, when a cluster has many required components, Sparse Expansion and SparseGPT allocate a similar amount of parameters, leading to relatively lower improvement. Therefore, performance improvements increase with fewer effective features. However, beyond a certain point, adding more features no longer yields further performance gains. This trend is also visible in the linear frontier of the log-log plot (Figure 8c). Thus, we provide some empirical demonstrations of the existence of bounds of both loss and improvement of sparse computation under entanglement.
143
+
144
+ ![](images/768bc1f4e6f3890d05d11371388e35c8e917fa3dd97f5711a3cdd211ec5bcff7.jpg)
145
+ Figure 8: Empirical demonstrations of performance bounds. (a) As a result of clustering, the weighted average minimum number of components to capture $90\%$ of the explained variance decreases for every layer. (b) As the number of required components for a particular cluster increases, so too must the error. (c) As the number of required components for a particular cluster decreases, Sparse Expansion improves more over SparseGPT, but up to a bound. Data collected in Pythia-1.4B.
146
+
147
+ # 3.6 SPARSE EXPANSION PERFORMANCE
148
+
149
+ We evaluate how well Sparse Expansion performs against other competitive one-shot pruning techniques, including in terms of inference speed (Table A3). Despite its leading evaluation performance (Figure 9; Table A1, A5), the method is likely not practical to deploy without further optimizations that counteract the increase in memory footprint, such as tuning the number of clusters per neuron (Figure A3). Nevertheless, we hope that Sparse Expansion serves as an inspiration for future sparsification techniques that address entanglement for better performance.
150
+
151
+ # 3.6.1 MODELS, DATASETS, AND SETUP
152
+
153
+ We use the Pythia series of pre-trained LLMs to evaluate how Sparse Expansion performs across model sizes, from Pythia-70M to Pythia-12B, and further evaluate it across the entire Llama-2 family. We use a subset of the Wikitext-2 training set as calibration data for input-aware pruning and evaluate on the corresponding test set using the perplexity metric. Furthermore, to evaluate the performance of Sparse Expansion on out-of-distribution (OOD) data, we evaluate the sparse model on 5 standard zero-shot benchmark tasks in both Llama and Pythia. For our performance benchmarks, we use 16 clusters at each level of routing in Sparse Expansion. We rely upon the RAPIDS library (Raschka et al., 2020) to accelerate the PCA and K-means models by orders of magnitude, and we utilize and build upon the SparseGPT GitHub repository.
154
+
155
+ # 3.6.2 PERFORMANCE ACROSS SCALES
156
+
157
+ We evaluate the performance of Sparse Expansion against other one-shot pruning techniques across a range of model sizes in Pythia and sparsities in Llama-2-7B (Figure 9). Across all model sizes of Pythia, Sparse Expansion outperforms all other pruning techniques at $50\%$ unstructured sparsity, approaching dense performance as model size increases. Moreover, for Llama-2-7B, across all levels of sparsity, Sparse Expansion outperforms all other techniques. At higher levels of sparsity, the gap in performance between the techniques grows. We run further experiments on the entire Llama 2 family as well, and Sparse Expansion similarly outperforms other methods (Table A5). Finally, our experiments show Sparse Expansion outperforming contemporary pruning algorithms in OOD settings as well (Table A1, A2).
158
+
159
+ # 4 RELATED WORK
160
+
161
+ Polysemanticity There is a plethora of ongoing research contributing to the understanding of polysemanticity in neural networks from a mechanistic interpretability perspective (Bricken et al., 2023; Huben et al., 2023; Lecomte et al., 2023; Templeton, 2024). These efforts primarily rely upon sparse autoencoders to disentangle output activations into human-interpretable features, losing information specific to individual neurons in the process. As we focus on neurons due to their direct role in network pruning, we derive our own formulation of entanglement as an extension of prior notions. There are also other works that investigate individual neuronal responses directly using techniques such as sparse probing (Gurnee et al., 2023), as well as those that identify special neuron types in LLMs, such as Universal neurons (Gurnee et al., 2024) and Confidence Regulatory neurons (Stolfo et al., 2024). However, there is no recent literature tying polysemanticity and neuronal entanglement to sparse network performance.
162
+
163
+ ![](images/d1fa471fbb623a1391dfb19ca0303318c343ffaacf77c72843bec4a3b0a75985.jpg)
164
+ Figure 9: Sparse Expansion across model sizes and sparsities. (a) Performance comparisons on Wikitext-2 perplexity between Magnitude Pruning (MP), Wanda, SparseGPT, and Sparse Expansion on Pythia models from sizes of 70M parameters to 12B parameters. Every FFN in each model was sparsified to $50\%$ sparsity. Each star represents a particular model size on the dense curve, and the corresponding sparsified model is the marker directly to its left on the sparse curves. (b) Performance for Llama-2-7B at different levels of sparsity for MP, Wanda, SparseGPT, and Sparse Expansion. The x-axis points in both graphs take into account the cost of routing.
165
+
166
+ ![](images/58398ae76566dd1217140f4887e02285df4eb6f87bfbea42e902a29de0c536f8.jpg)
167
+
168
169
+
170
+ Compression A multitude of advanced weight pruning algorithms, such as Wanda (Sun et al., 2023) and SparseGPT (Frantar & Alistarh, 2023), and quantization algorithms (Kim et al., 2023; Dettmers et al., 2022; Ashkboos et al., 2024; Egiazarian et al., 2024; Dettmers et al., 2023; Zhao et al., 2023; Lin et al., 2024) exist. Most advanced algorithms are input-aware so as to specialize the weights to the most important input features. Other pruning approaches, such as SWAP (You & Cheng, 2024) and WD-based channel pruning (Duan & Li, 2020), have also used the WD, though for the gradient of the loss or for channel similarity rather than for analyzing neurons. While outliers in the features and weights are known to be among the most challenging factors to address when quantizing to extremely low bit widths, no equivalent understanding exists for high sparsities.
171
+
172
+ # 5 CONCLUSION AND DISCUSSION
173
+
174
+ In this work, we demonstrate for the first time the impact of neuronal entanglement on network performance under weight sparsity, a previously unexplored avenue. From our work and others, we suspect that every neuron is to some extent entangled, but that this entanglement of features is easier for some neurons to resolve than for others. We explore this notion of entanglement through our metric of mapping difficulty, and find that the Wasserstein distance is a novel, highly pertinent indicator of entangled neurons that must differentiate similar inputs into different outputs. Furthermore, as Wasserstein neurons in particular are highly sensitive to sparsification, we posit that the robustness of a neuron to sparsity depends directly on its degree of entanglement. Finally, we have shown that our experimental framework Sparse Expansion is an effective way to disentangle the complex entangled state of a sparse neuron, and we use it to explore computational bounds in real-world models. The disentanglement provided by Sparse Expansion benefits Wasserstein neurons the most, providing further support that such neurons are the most entangled.
175
+
176
+ In future work, we plan to study Wasserstein neurons in the framework of mechanistic interpretability to understand what circuits they form. From our insight that more entangled neurons are harder to sparsify, we will investigate creating efficient, entanglement-aware sparsification algorithms to preserve performance at higher sparsities. Looking forward, perhaps just as outlier features and weights are well understood to be one of the most significant challenges when quantizing to fewer bits, so too can neuronal entanglement be understood as the challenge of pruning to higher sparsities.
177
+
178
+ # 6 ACKNOWLEDGMENTS
179
+
180
+ The authors would like to extend their gratitude to Lori Leu for her insightful comments on the application of the Wasserstein distance metric. We also wish to thank Elias Frantar for his help in working with the SparseGPT implementation and his advice for the project. Additionally, we would like to thank Tony Tong Wang and Thomas Athey for their valuable feedback and constructive discussions.
181
+
182
+ This work was supported by an NIH Brains CONNECTS U01 grant and AMD's AI & HPC Fund.
183
+
184
+ # REFERENCES
185
+
186
+ Micah Adler and Nir Shavit. On the complexity of neural computation in superposition. arXiv preprint arXiv:2409.15318, 2024.
187
+ Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483-495, 2018.
188
+ Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. Quarot: Outlier-free 4-bit inference in rotated llms. arXiv preprint arXiv:2404.00456, 2024.
189
+ Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397-2430. PMLR, 2023.
190
+ Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2, 2023.
191
+ Roberto Lopez Castro and Dan Alistarh. Sparse marlin: a fast sparse plus 4-bit kernel for generative inference. https://github.com/IST-DASLab/Sparse-Marlin, 2024.
192
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
193
+ Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3. int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35: 30318-30332, 2022.
194
+ Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. Spqr: A sparse-quantized representation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078, 2023.
195
+ Haoran Duan and Hui Li. Channel pruning for accelerating convolutional neural networks via Wasserstein metric. In Proceedings of the Asian Conference on Computer Vision, 2020.
196
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
197
+ Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, and Dan Alistarh. Extreme compression of large language models via additive quantization. arXiv preprint arXiv:2401.06118, 2024.
198
+ Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. Toy models of superposition. arXiv preprint arXiv:2209.10652, 2022.
199
+ Elias Frantar and Dan Alistarh. Optimal brain compression: A framework for accurate post-training quantization and pruning. Advances in Neural Information Processing Systems, 35:4475-4488, 2022.
200
+ Elias Frantar and Dan Alistarh. Sparsegpt: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pp. 10323-10337. PMLR, 2023.
201
+ Elias Frantar and Dan Alistarh. Marlin: a fast 4-bit inference kernel for medium batch sizes. https://github.com/IST-DASLab/marlin, 2024.
202
+ Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
203
+
204
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
205
+ Guy Gur-Ari, Daniel A Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. arXiv preprint arXiv:1812.04754, 2018.
206
+ Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. arXiv preprint arXiv:2305.01610, 2023.
207
+ Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, and Dimitris Bertsimas. Universal neurons in gpt2 language models. arXiv preprint arXiv:2401.12181, 2024.
208
+ Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28, 2015.
209
+ Kaarel Hänni, Jake Mendel, Dmitry Vaintrob, and Lawrence Chan. Mathematical models of computation in superposition. arXiv preprint arXiv:2408.05451, 2024.
210
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
211
+ Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1-124, 2021.
212
+ Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In *The Twelfth International Conference on Learning Representations*, 2023.
213
+ Adam S Jermyn, Nicholas Schiefer, and Evan Hubinger. Engineering monosemanticity in toy models. arXiv preprint arXiv:2211.09169, 2022.
214
+ Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
215
+ Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
216
+ Leonid V Kantorovich. On the translocation of masses. Journal of mathematical sciences, 133(4): 1381-1382, 2006.
217
+ Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W Mahoney, and Kurt Keutzer. Squeezellm: Dense-and-sparse quantization. arXiv preprint arXiv:2306.07629, 2023.
218
+ Victor Lecomte, Kushal Thaman, Trevor Chow, Ryan Schaeffer, and Sanmi Koyejo. Incidental polysemanticity. arXiv preprint arXiv:2312.03096, 2023.
219
+ Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems, 6: 87-100, 2024.
220
+ Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
221
+ Jesse Mu and Jacob Andreas. Compositional explanations of neurons. Advances in Neural Information Processing Systems, 33:17153-17163, 2020.
222
+
223
+ Lorenzo Noci, Alexandru Meterez, Thomas Hofmann, and Antonio Orvieto. Why do learning rates transfer? reconciling optimization and scaling limits for deep learning. arXiv preprint arXiv:2402.17457, 2024.
224
+ Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 5(3):e00024-001, 2020.
225
+ Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525–1534, 2016.
226
+ Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
227
+ Sebastian Raschka, Joshua Patterson, and Corey Nolet. Machine learning in python: Main developments and technology trends in data science, machine learning, and artificial intelligence. arXiv preprint arXiv:2002.04803, 2020.
228
+ Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, and Neel Nanda. Confidence regulation neurons in language models. arXiv preprint arXiv:2406.16254, 2024.
229
+ Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. arXiv preprint arXiv:2306.11695, 2023.
230
+ Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.
231
+ Adly Templeton. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Anthropic, 2024.
232
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
233
+ Cédric Villani et al. Optimal transport: old and new, volume 338. Springer, 2009.
234
+ Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. Scipy 1.0: fundamental algorithms for scientific computing in python. Nature methods, 17(3):261-272, 2020.
235
+ Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. W-NUT 2017, pp. 94, 2017.
236
+ Lei You and Hei Victor Cheng. SWAP: Sparse entropic Wasserstein regression for robust network pruning. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=LJWizuuBUy.
237
+ Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian. Galore: Memory-efficient llm training by gradient low-rank projection. arXiv preprint arXiv:2403.03507, 2024.
238
+ Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, and Baris Kasikci. Atom: Low-bit quantization for efficient and accurate llm serving. arXiv preprint arXiv:2310.19102, 2023.
239
+
240
+ # A APPENDIX
241
+
242
+ # A.1 PSEUDO-CODE FOR SPARSE EXPANSION
243
+
244
+ Algorithm A1 describes the sparsification process of Sparse Expansion. The sparse experts are created in a layer-wise sequential fashion for each linear layer of every FFN transformer block to create the sparse model. Algorithm A2 refers to the inference procedure of Sparse Expansion once the model is pruned following the methods described in Algorithm A1 and Section 3.1.
245
+
246
+ Algorithm A1 Sparse Expansion model generation. The following layerwise procedure can be repeated for each linear layer in the transformer.
247
+ 1: procedure LAYERWISE SPARSE EXPANSION SPARSIFICATION PROCESS
248
+ 2: $\{x\} \gets x_i \in \mathbb{R}^n$ //set of calibration inputs to layer
249
+ 3: $W \gets m \times n$ //layer weights
250
+ 4: c //number of clusters
251
+ 5: r //factor to reduce dimensionality by
252
+ 6: $R \gets \mathbf{PCA}\left(\frac{n}{r}\right)$ //new PCA object with $\frac{n}{r}$ components
253
+ 7: $R.\text{fit}(\{x\})$ //fit $R$ to inputs
254
+ 8: $K \gets \mathbf{Kmeans}(c)$ //new K-means object with $c$ initial centroids
255
+ 9: $K.fit(\{R(x)\})$ //fit $K$ to dimensionality reduced inputs
256
+ 10: for $j = 1, 2, 3\dots c$ do
257
+ 11: $X_j \gets \{x | K(R(x)) = j\}$ //group $\{x\}$ into its component clusters
258
+ 12: $W_j \gets W$ //make a copy of the original weight matrix
259
+ 13: $S_j \gets \text{SparseGPT}$ //make a SparseGPT object
260
+ 14: $W_j' \gets S_j.\text{sparsify}(W_j, X_j)$ //sparsify $W_j$ using $X_j$
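The layer-wise procedure of Algorithm A1 can be sketched in plain NumPy. This is an illustrative approximation only: `pca_fit`, `kmeans_fit`, and `sparse_expansion_layer` are our names, and `sparsify_like_sparsegpt` is a hypothetical magnitude-pruning stand-in for the actual SparseGPT step (which would also use the cluster inputs via the Hessian).

```python
import numpy as np

def pca_fit(X, d):
    """Minimal PCA: returns (mean, components) so that reduce(x) = (x - mean) @ V.T."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

def kmeans_fit(Z, c, iters=25, seed=0):
    """Minimal Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), size=c, replace=False)].copy()
    for _ in range(iters):
        labels = ((Z[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(c):
            if np.any(labels == j):
                C[j] = Z[labels == j].mean(0)
    return C, labels

def sparsify_like_sparsegpt(W, X_j, sparsity=0.5):
    """Hypothetical stand-in for SparseGPT: plain magnitude pruning of W."""
    W_j = W.copy()
    k = int(sparsity * W_j.size)
    thresh = np.partition(np.abs(W_j).ravel(), k)[k]
    W_j[np.abs(W_j) < thresh] = 0.0
    return W_j

def sparse_expansion_layer(X, W, c=4, r=4):
    """X: (tokens, n) calibration inputs; W: (m, n) weights. One expert per cluster."""
    mu, V = pca_fit(X, X.shape[1] // r)        # lines 6-7: reduce to n/r dims
    C, labels = kmeans_fit((X - mu) @ V.T, c)  # lines 8-9: cluster reduced inputs
    experts = [sparsify_like_sparsegpt(W, X[labels == j]) for j in range(c)]
    return (mu, V), C, experts                 # lines 10-14: one expert per cluster
```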
261
+
262
+ Algorithm A2 Sparse Expansion inference. The following layerwise procedure is repeated at inference time for each clustered layer.
263
+ 1: procedure LAYERWISE SPARSE EXPANSION INFERENCE PROCESS
264
+ 2: $\{x\} \gets x_i \in \mathbb{R}^n$ //set of inputs to layer
265
+ 3: $\{W\}$ //set of experts
266
+ 4: $R$ //PCA model
267
+ 5: $K$ //K-means model
268
+ 6: for $i = 1, 2, 3\dots$ do
269
+ 7: $j \gets K(R(x_i))$ //find the cluster assignment of $x$
270
+ 8: $y_j \gets W_j(x_i)$ //run inference with the correct expert
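The routing step of Algorithm A2 might look as follows; `proj`, `centroids`, and `experts` are hypothetical stand-ins for the fitted PCA map $R$, the K-means centroids of $K$, and the per-cluster sparse expert weights:

```python
import numpy as np

def route_and_infer(x, proj, centroids, experts):
    """Pick the expert whose centroid is nearest to the reduced input, then
    apply that expert. proj: (n, d) stand-in for R; centroids: (c, d) stand-in
    for K; experts: list of (m, n) sparse weight matrices."""
    z = x @ proj                                          # line 7: R(x_i)
    j = int(((centroids - z) ** 2).sum(axis=1).argmin())  # line 7: K(R(x_i))
    return experts[j] @ x, j                              # line 8: y = W_j(x_i)
```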
271
+
272
+ # A.2 DISTRIBUTION OF WASSERSTEIN DISTANCES ACROSS ALL LLAMA-2-7B FFN LAYERS
273
+
274
+ After collecting each neuron's Wasserstein distance to the normal distribution, we find that all up and gate projection matrices in each Llama-2-7B FFN block contain high-WD neurons. Certain down projection matrices have high-WD neurons as well, though most do not.
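One plausible way to compute a per-neuron WD to the Gaussian (our assumed reading of Equation 1, with the output distribution normalized to zero mean and unit variance before comparison) uses SciPy:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def neuron_wd_to_gaussian(outputs, rng=None):
    """Normalize a neuron's outputs, then compare against standard-normal
    samples. An illustrative sketch, not the paper's exact implementation."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = (outputs - outputs.mean()) / (outputs.std() + 1e-12)
    ref = rng.standard_normal(len(z))
    return wasserstein_distance(z, ref)
```

A Gaussian-distributed neuron scores near zero, while a strongly bimodal (entangled-looking) output distribution scores much higher.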
275
+
276
+ ![](images/cd6f42ab977d19f5450e04eb0678ed91bb6dff111c6b51e3df86be9381b9ee5e.jpg)
277
+
278
+ ![](images/e169e7d51c098d6236c5376b4ac8b46550ed2aec8fceb4de51fd28a4184467cf.jpg)
279
+
280
+ ![](images/636f15f273127466661deaa55301c024084d0d8992ee67ae78c492dfa6bc8a3d.jpg)
281
+ Figure A1: High Wasserstein distance neurons in each layer. Many neurons with a high WD to the Gaussian distribution exist in every FFN block, and in every up (a) and gate projection (b) specifically. Certain down projection layers also have high WD neurons (c). The box plots show the range of non-outliers, as well as the first quartile, the median, and the third quartile of neuronal WD. Outliers, represented by the points, are defined as values more than 1.5 times the interquartile range below the first quartile or above the third quartile.
282
+
283
+ # A.3 SPARSE EXPANSION PERFORMANCE IN OUT-OF-DISTRIBUTION DATA
284
+
285
+ We evaluate the performance of Llama-3.2-1B $^2$ (Table A1) and Pythia-1.4B (Table A2) on a range of natural language modeling tasks, including ARC-e (Easy) and ARC-c (Challenge) for grade-school science reasoning, Lambada (Paperno et al., 2016) for contextual word prediction, SciQ (Welbl et al., 2017) for scientific question answering, and MMLU for multitask general knowledge assessment. As dense Pythia-1.4B does not score better than random chance on MMLU, we do not benchmark it on this task. We compare various pruning algorithms at $50\%$ sparsity, including Magnitude Pruning (MP), Wanda, SparseGPT, and Sparse Expansion with 16 clusters, to the dense baseline. Sparse Expansion consistently excels across both models, achieving the highest scores among the sparsification algorithms on every task.
286
+
287
+ Table A1: Performance of Llama-3.2-1B under different pruning algorithms.
288
+
289
+ <table><tr><td>Algorithm</td><td>Sparsity</td><td>ARC-e</td><td>ARC-c</td><td>Lambada</td><td>SciQ</td><td>MMLU</td></tr><tr><td>Dense</td><td>0%</td><td>65.488</td><td>31.314</td><td>53.969</td><td>91.4</td><td>37.701</td></tr><tr><td>Magnitude</td><td>50%</td><td>45.244</td><td>22.354</td><td>4.677</td><td>67.1</td><td>23.493</td></tr><tr><td>Wanda</td><td>50%</td><td>50.800</td><td>23.635</td><td>31.457</td><td>85.2</td><td>25.428</td></tr><tr><td>SparseGPT</td><td>50%</td><td>55.640</td><td>24.403</td><td>31.613</td><td>86.8</td><td>25.046</td></tr><tr><td>Sparse Expansion</td><td>50%</td><td>57.713</td><td>26.962</td><td>35.807</td><td>87.5</td><td>28.729</td></tr></table>
290
+
291
+ Table A2: Performance of Pythia-1.4B under different pruning algorithms.
292
+
293
+ <table><tr><td>Algorithm</td><td>Sparsity</td><td>ARC-e</td><td>ARC-c</td><td>Lambada</td><td>SciQ</td></tr><tr><td>Dense</td><td>0%</td><td>61.742</td><td>27.389</td><td>48.981</td><td>86.9</td></tr><tr><td>Magnitude</td><td>50%</td><td>42.003</td><td>19.198</td><td>1.533</td><td>69.0</td></tr><tr><td>Wanda</td><td>50%</td><td>54.630</td><td>23.976</td><td>45.041</td><td>85.7</td></tr><tr><td>SparseGPT</td><td>50%</td><td>56.608</td><td>24.061</td><td>44.615</td><td>85.8</td></tr><tr><td>Sparse Expansion</td><td>50%</td><td>58.449</td><td>25.720</td><td>46.424</td><td>86.3</td></tr></table>
294
+
295
+ # A.4 NEURONAL ENTANGLEMENT TRAJECTORY ACROSS TRAINING IN LLMS
296
+
297
+ Entanglement over the course of LLM training in Pythia-1.4B
298
+
299
+ ![](images/8784faa12ac5593bf2d4c5d0d4c545d7d250c8b142a894062b2c77b12790fdde.jpg)
300
+
301
+ ![](images/0e22a6f0c6f4313cc6e3e0ce9e63d661118f6136ed905d21871eb45f78884a49.jpg)
302
+ Figure A2: Analyzing neuronal entanglement during training. The intermediate checkpoints of Pythia-1.4B are available over the course of its training, from initialization to completion. We collect data from 17 different checkpoints, first at intervals of 5,000 steps, then at intervals of 10,000 steps after step 20,000. (a) We calculate each neuron's output distribution WD to a Gaussian as before in Equation 1, at each checkpoint. From the WD of neurons at the last training step, we separate out the top $3\%$ of neurons with the highest WD and the bottom $3\%$ of neurons with the lowest WD. We also find the average WD across all neurons. The progression of neuronal WD across training reveals that all neurons initially exhibit a Gaussian-like distribution, as expected, but some neurons rapidly differentiate into entangled neurons with very high WD within just 5,000 steps (corresponding to approximately 10 billion tokens); the WD of such neurons then levels off. (b) Using the same groups as in (a), we visualize the change in neuronal weights. We calculate the $L^2$ norm between each neuron's weights at each training step and its weights at model initialization (step 0), and normalize this value by the $L^2$ norm of the neuron's weights at initialization. Notably, neurons with high WD do not change their weights more over the course of training than the average neuron or than neurons with low WD. Error bars represent one standard error of the mean. Neurons are from the up projection matrix in the second FFN block of Pythia-1.4B.
303
+
304
+ # A.5 OPTIMIZATIONS FOR PRACTICAL IMPLEMENTATION
305
+
306
+ To evaluate the inference latency of Sparse Expansion, we implemented a Sparse Expansion layer based on PyTorch and the optimized sparse-quantized GPU kernel Sparse Marlin (Frantar & Alistarh, 2024; Castro & Alistarh, 2024), which supports the INT4 + 2:4 sparsity format. To better utilize the compression kernel, we apply both sparsification and quantization to demonstrate speedup. We use a linear layer of appropriate size as an upper-bound approximation of the router cost, followed by a 4-bit, 2:4-sparse matrix multiplication. We ran layer-wise benchmarks for the typical layer sizes of Llama models on a single RTX 3090 GPU. As Table A3 shows, Sparse Expansion achieves up to a $4.8\times$ speedup over the dense baseline, owing to the highly compressed linear-layer representation. Although the router adds overhead relative to a single compressed matrix, this overhead decreases as layer size increases.
307
+
308
+ Table A3: Sparse Expansion inference speedup. Layer-wise single-batch inference latency (in $\mu s$). The layer sizes are chosen specifically to match the layers of Llama-2-7B and Llama-2-70B.
309
+
310
+ <table><tr><td>Layer Size</td><td>4k × 12k</td><td>4k × 22k</td><td>11k × 4k</td><td>8k × 10k</td><td>8k × 57k</td><td>28k × 8k</td></tr><tr><td>Dense</td><td>132</td><td>227</td><td>114</td><td>220</td><td>1168</td><td>556</td></tr><tr><td>Sparse Expansion</td><td>76</td><td>76</td><td>75</td><td>76</td><td>241</td><td>138</td></tr><tr><td>Speedup</td><td>1.7×</td><td>3.0×</td><td>1.5×</td><td>2.9×</td><td>4.8×</td><td>4.0×</td></tr><tr><td>Sparse</td><td>26.8</td><td>44.7</td><td>24.4</td><td>42.3</td><td>216</td><td>109</td></tr><tr><td>Overhead</td><td>2.9×</td><td>1.7×</td><td>3.1×</td><td>1.8×</td><td>1.1×</td><td>1.3×</td></tr></table>
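As a sanity check, the derived rows of Table A3 follow from the measured latencies as Speedup = Dense / Sparse Expansion and Overhead = Sparse Expansion / Sparse; the computed values match the table up to rounding of the reported latencies:

```python
# Measured single-batch latencies from Table A3, in microseconds.
dense  = [132, 227, 114, 220, 1168, 556]
se     = [76, 76, 75, 76, 241, 138]      # Sparse Expansion
sparse = [26.8, 44.7, 24.4, 42.3, 216, 109]

speedup  = [d / s for d, s in zip(dense, se)]    # e.g. 132 / 76 ≈ 1.7×
overhead = [s / p for s, p in zip(se, sparse)]   # e.g. 76 / 26.8 ≈ 2.8×, vs 2.9× in
                                                 # the table (latency-rounding artifact)
```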
311
+
312
+ Additionally, we investigate how many experts different neurons need to improve performance. We compute the relative improvement of each neuron, as defined in Section 3.3, across a varying number of total experts. Specifically, we choose 2, 4, 8, and 16 experts for Sparse Expansion, compared to SparseGPT with its single expert. In this setting, we analyze the top $3\%$ of neurons with the highest WD as well as the bottom $3\%$ of neurons with the lowest WD, as defined before (Equation 1). We observe that Wasserstein neurons benefit far more from Sparse Expansion than average as the number of clusters increases (left). Additionally, we split neurons into decile groups based on their relative improvement at 16 clusters and find that certain groups of neurons benefit very little from additional experts beyond eight (right). Thus, further optimizations can be made to balance performance against the increase in memory.
313
+
314
+ ![](images/027d48ba5cce732a30455f638be5a501702a57a4a4041f07a67ec6f4a77191b0.jpg)
315
+ Figure A3: Improvement across clusters for different groups of neurons. (a) Wasserstein neurons benefit much more from Sparse Expansion than average with increasing clusters. (b) Different deciles of neurons have varying degrees of improvement from Sparse Expansion. $Dn$ indicate the deciles from D1 to D9. The decile groups are decided by their relative improvement at 16 clusters. For example, the first decile group consists of relative improvements between the minimum and D1 at 16 clusters, the second decile group consists of relative improvements between D1 and D2 at 16 clusters, and so on. Error bars represent one standard error of the mean. Neurons from the up projection matrix of the second FFN block of Pythia-1.4B.
316
+
317
+ # A.6 WASSERSTEIN NEURONS DO NOT HAVE PARTICULARLY HIGH WEIGHTS
318
+
319
+ To understand whether Wasserstein neurons arise from having substantially higher weights than average, we measure the mean weight magnitude of each neuron. We find that Wasserstein neurons do not have weight magnitudes that are particularly above average; if anything, there seems to be a slight negative correlation between a neuron's WD and its mean weight magnitude (Figure A4a). To investigate how this difference affects sparsification, we sparsify this layer to $80\%$ unstructured sparsity via SparseGPT, calibrated with Wikitext-2. This takes into account both weight magnitude and the influence of the inputs via the Hessian matrix. Wasserstein neurons are especially sensitive to sparsity and have an outsized impact on model performance, even more so than neurons with high average weight magnitude (Figure 3). However, these neurons are sparsified slightly more than average by this advanced sparsification approach. This suggests that future sparsification schemes should take into account a neuron's WD and degree of entanglement, rather than just its weights and the Hessian.
320
+
321
+ ![](images/bda28dd71d9761d534d8eabbc2af75da82c0fe6eb66e4cc69d13385029dbd1d3.jpg)
322
+ Figure A4: Wasserstein neurons do not have particularly large weights, in terms of their average magnitude, and so are sparsified slightly more. (a) Neurons with high WD do not have large average weight magnitudes. Of the top $3\%$ of neurons with the highest WD, just one is also within the top $3\%$ of neurons with the largest average weight magnitudes. (b) Partially as a result of their lower than average weights, Wasserstein neurons tend to be sparsified slightly more than average in an unstructured setting. The top $3\%$ Wasserstein neurons are sparsified $6\%$ more than average. The neurons are from the up projection of the second FFN in Pythia-1.4B.
323
+
324
+ # A.7 DIMINISHING RETURNS FOR KEEPING ENTANGLED NEURONS FULLY DENSE
325
+
326
+ As shown in Figure 3, entangled neurons are particularly sensitive to pruning. We design an experiment to understand the opposite effect on model performance, namely that of keeping the Wasserstein neurons dense. In our setup, we selectively keep the top $x\%$ of Wasserstein neurons dense, while pruning each of the remaining neurons to a sparsity of $\frac{100\,s}{100 - x}\%$ to maintain an overall target sparsity of $s\%$. This approach is compared against a baseline where all neurons are pruned to the same sparsity $s\%$, abbreviated as same sparsity per neuron (SSPN) in the table and equivalent to $x = 0\%$.
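Concretely, keeping $x\%$ of neurons fully dense while hitting an overall target of $s\%$ requires pruning each remaining neuron to $100s/(100-x)\%$ sparsity:

```python
def remaining_neuron_sparsity(s, x):
    """Sparsity (%) required on the (100 - x)% of neurons that are not kept
    dense, so that the layer still averages s% sparsity overall."""
    return 100.0 * s / (100.0 - x)
```

For example, at $s = 90$ and $x = 10$, the remaining neurons would need $100\%$ sparsity, i.e. all of their weights removed, which is consistent with the sharp degradation in the last column of Table A4.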
327
+
328
+ We conduct this experiment on Llama-2-7B. We use SparseGPT to sparsify the neurons, use part of the Wikitext-2 train set as calibration data, and evaluate on the Wikitext-2 test set. As illustrated in Table A4, keeping Wasserstein neurons dense at the cost of sparsifying every other neuron more does not enhance model performance. Moreover, performance worsens progressively as the proportion of neurons kept dense ($x\%$) increases, since less entangled neurons are then also kept dense. This behavior is likely because the benefit of letting Wasserstein neurons retain all of their weights is outweighed by the cost of sparsifying every other, already sparse, neuron even further.
329
+
330
+ Table A4: Perplexity of Llama-2-7B on Wikitext-2 while sparsifying to $s\%$ overall and preserving $x\%$ of Wasserstein neurons.
331
+
332
+ <table><tr><td></td><td>s = 50%</td><td>s = 60%</td><td>s = 70%</td><td>s = 80%</td><td>s = 90%</td></tr><tr><td>SSPN (x = 0%)</td><td>6.219</td><td>7.420</td><td>12.73</td><td>33.26</td><td>366.0</td></tr><tr><td>x = 3%</td><td>6.259</td><td>8.023</td><td>14.70</td><td>40.40</td><td>395.7</td></tr><tr><td>x = 5%</td><td>6.345</td><td>8.131</td><td>16.03</td><td>46.67</td><td>629.5</td></tr><tr><td>x = 7%</td><td>6.366</td><td>8.547</td><td>17.37</td><td>61.95</td><td>978.3</td></tr><tr><td>x = 10%</td><td>6.522</td><td>9.232</td><td>19.48</td><td>79.53</td><td>8066</td></tr></table>
333
+
334
+ # A.8 DERIVING MAPPING DIFFICULTY AS A METRIC OF ENTANGLEMENT
335
+
336
+ We show more reasoning behind the choice of the normalizing factors $N_{\pmb{x}}$ and $N_{y}$ in Equation 2. First, we choose $N_{\pmb{x}} = \max_{1 \leq i < j \leq n} \{||\pmb{x}_i - \pmb{x}_j||\}$ to be the maximum $L^2$ norm between a pair of inputs to simply scale all $L^2$ norms to be between 0 and 1. Next, we choose $N_{y} = \operatorname{median}_{1 \leq i < j \leq n} \{||y_i - y_j||\}$ to be the median based on the following observations. Specifically, we would like to preserve and highlight the fact that there are many IO pairs that have a relatively low difference in their inputs, but are mapped to very different outputs, one group of which is circled in purple in Figure A5d.
337
+
338
+ First, we considered using the maximum $L^2$ norm, as we did for $N_{\pmb{x}}$. However, a very small number of samples drives the maximum far away from the meaningful data points for both the random and the Wasserstein neurons. Next, we considered the mean. Due to the outlier data points of interest, which have a much greater difference in output than expected, the mean is also driven much higher for the Wasserstein neuron: observe that for the Wasserstein neuron the mean is much greater than the mode, while the two are much closer for the random neuron. We therefore use the median, which normalizes for inter-neuron differences in the expected range of output differences while remaining robust to outliers that would obfuscate the IO pairs of interest through an inflated mean.
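A minimal sketch of the two normalizers, assuming scalar neuron outputs and the pairwise $L^2$ gaps described above (`normalized_io_distances` is our illustrative name, not the paper's):

```python
import numpy as np

def normalized_io_distances(X, Y):
    """Pairwise gaps over all input pairs: inputs scaled by their maximum gap
    (N_x), outputs scaled by their median gap (N_y). X: (n, d) inputs to the
    layer; Y: (n,) scalar outputs of one neuron."""
    n = len(X)
    iu = np.triu_indices(n, k=1)                              # all i < j pairs
    dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)[iu]
    dy = np.abs(Y[:, None] - Y[None, :])[iu]
    return dx / dx.max(), dy / np.median(dy)
```

By construction, the normalized input gaps lie in $[0, 1]$ and the normalized output gaps have median 1, so outlier pairs with unexpectedly large output gaps remain visible.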
339
+
340
+ ![](images/20b6d9e3fb493547ef5be221328a3a1b83480598c2c6d3797b7f4211f9f84562.jpg)
341
+ Figure A5: Deriving a normalization constant for the difference in outputs. (a) The output distribution of a random neuron. (b) The non-normalized relationship between the $L^2$ norm between pairs of inputs and the $L^2$ norm between their corresponding outputs for the random neuron. (c) The output distribution of a Wasserstein neuron. (d) The non-normalized relationship between the $L^2$ norm between pairs of inputs and the $L^2$ norm between their corresponding outputs for the Wasserstein neuron. Note how the mean is much higher than the median. One group that has a much higher output $L^2$ norm than expected for its relatively low input $L^2$ norm is circled in purple. These are the same neurons from Figure 2.
342
+
343
+ ![](images/fec25a0e6c646e381009672850d399f52a00874421a47a7d2db2cf69f7bdc2f7.jpg)
344
+
345
+ ![](images/a9f8a1cf462cbf00c6617bb01853209b7fcdf0560a7810f9379ec513cca37eed.jpg)
346
+
347
+ ![](images/ed1a6a103c97d259c553f8d43f46a3da921e82641bc73500f22a054d11b7ed06.jpg)
348
+
349
+ # A.9 ADDITIONAL CLUSTERS IMPART MORE PERFORMANCE
350
+
351
+ To understand how Sparse Expansion scales with the number of experts per linear layer, we test its performance with 2 to 32 experts. Interestingly, with 2 experts, very little performance benefit is realized. However, each subsequent doubling of the number of experts yields a nearly constant improvement in perplexity.
352
+
353
+ ![](images/e2aa94f77f868c12eadd84fd4c119d75301d74670bb16d9e7238611386ff3ff9.jpg)
354
+ Figure A6: Increasing the number of clusters improves Sparse Expansion performance in Llama-2-7B.
355
+
356
+ # A.10 FURTHER EXAMPLES OF DISENTANGLEMENT
357
+
358
+ We present additional evidence of neuronal disentanglement in Pythia-1.4B and Llama-2-7B. Figure A7 shows the recovery of the dense output distribution with increasing experts in Llama-2-7B, analogous to the trend for Pythia models in Figure A9. Clustering gradually decreases the WD between the sparse and dense outputs, thus improving upon SparseGPT, which is equivalent to the single-cluster case. Moreover, this translates into a direct improvement in model performance, as depicted in Figure A6.
359
+
360
+ ![](images/118697ec36944bc72dd1460e3ead78029d11947e2b3ab691568bc6a95ec08eb6.jpg)
361
+ Figure A7: Modeling recovery with more experts in Llama-2-7B. Use of more experts can recover the dense output distribution even at very high sparsity, which is set to $90\%$ for each expert. This is the same neuron from Figure 1d.
362
+
363
+ Analogous to Figure 1, we observe the effect of clustering inputs on a random neuron and an entangled neuron in the gate projection of the second FFN of Pythia-1.4B. SparseGPT fails to capture the output distribution of the high WD neuron as it does for a random neuron. With clustering via Sparse Expansion, both neurons improve, but the entangled neuron improves more. The granular analysis of the component clusters within both neurons reveals the specialization to vastly different parts of the output distribution in the entangled neuron as compared to the normal neuron (Figure A8).
364
+
365
+ ![](images/e5673967d2d17e45973528b12fac9721738786d58e50f28d47aad28d48996c09.jpg)
366
+ Figure A8: Sparse Expansion disentangles neurons in Pythia-1.4B. The dense output distribution of a random neuron, along with its sparse output distributions via SparseGPT (a) and via Sparse Expansion (b). The dense output distribution of an entangled neuron, along with its sparse output distributions via SparseGPT (d) and via Sparse Expansion (e). For both the random and the entangled neuron, each component cluster is shown in a distinct color to visualize its range (c, f). WD represents the Wasserstein distance between the Sparse Expansion sparse output distribution and the dense distribution. RI represents relative improvement. These are the same neurons from Figure 2.
367
+
368
+ # A.11 PERFORMANCE ACROSS THE LLAMA FAMILY
369
+
370
+ We analyze the performance of Sparse Expansion against other sparsification algorithms across all members of the Llama-2 family—Llama-2-7B, Llama-2-13B, and Llama-2-70B—both under sparsity and joint sparsity-quantization compression (Touvron et al., 2023).
371
+
372
+ Table A5: Sparse Expansion across the Llama family.
373
+
374
+ <table><tr><td></td><td>Sparsity</td><td>Bits</td><td>Llama-2-7B</td><td>Llama-2-13B</td><td>Llama-2-70B</td></tr><tr><td>Dense</td><td>0%</td><td>16-bit</td><td>5.1168</td><td>4.5736</td><td>3.3192</td></tr><tr><td>MP</td><td>50%</td><td>16-bit</td><td>16.029</td><td>6.8270</td><td>4.9846</td></tr><tr><td>Wanda</td><td>50%</td><td>16-bit</td><td>6.7757</td><td>5.8527</td><td>4.0219</td></tr><tr><td>SparseGPT</td><td>50%</td><td>16-bit</td><td>5.7082</td><td>5.0521</td><td>3.9013</td></tr><tr><td>Sparse Expansion</td><td>50%</td><td>16-bit</td><td>5.5839</td><td>4.9728</td><td>3.8791</td></tr><tr><td>SparseGPT</td><td>2:4</td><td>16-bit</td><td>6.9767</td><td>5.9934</td><td>4.8002</td></tr><tr><td>Sparse Expansion</td><td>2:4</td><td>16-bit</td><td>6.4456</td><td>5.6255</td><td>4.6671</td></tr><tr><td>SparseGPT</td><td>2:4</td><td>4-bit</td><td>7.2759</td><td>6.1101</td><td>4.9036</td></tr><tr><td>Sparse Expansion</td><td>2:4</td><td>4-bit</td><td>6.5745</td><td>5.7151</td><td>4.7586</td></tr><tr><td>SparseGPT</td><td>2:4</td><td>3-bit</td><td>13.076</td><td>6.5055</td><td>5.2552</td></tr><tr><td>Sparse Expansion</td><td>2:4</td><td>3-bit</td><td>7.0757</td><td>5.9872</td><td>5.0588</td></tr></table>
375
+
376
+ Sparse Expansion outperforms all other pruning techniques at both $50\%$ unstructured sparsity and 2:4 sparsity across all Llama models (Table A5). In addition to non-quantized sparsity, we consider how Sparse Expansion performs under joint compression with 2:4 structured sparsity and quantization via GPTQ (Frantar et al., 2022). We first sparsify each linear layer in each FFN block to 2:4 sparsity, then quantize to 3 or 4 bits. Our method outperforms SparseGPT across all models and both quantization settings (Table A5).
377
+
378
+ Across multiple model sizes, sparsity and compression levels, and advanced models, Sparse Expansion attains state-of-the-art performance for post-training one-shot sparsification when compared to other highly competitive pruning techniques. It does so by leveraging the powerful pruning algorithm of SparseGPT and combining it with input specialization, building on our insights into how entangled neurons behave under sparsity.
379
+
380
+ Because GPTQ (Frantar et al., 2022), a leading post-training quantization scheme, also relies upon the Hessian matrix for its algorithm, we combine it with SparseGPT for combined one-shot compression. Sparse Expansion also outperforms native SparseGPT and GPTQ across all compression settings.
381
+
382
+ # A.12 EFFECT OF SPARSITY ON NEURONAL OUTPUT DISTRIBUTIONS
383
+
384
+ With increasing sparsity, the sparse output distributions of high-WD neurons converge toward the normal distribution, whereas those of random and low-WD neurons remain largely unchanged (Figure A10). A specific example neuron is shown in Figure A9.
385
+
386
+ ![](images/20c412e77d3a449809a2f45154a89dc0827aab9631b47039a5653440dcf2ac96.jpg)
387
+ Figure A9: Increasing sparsity induces normality. A highly entangled neuron's dense distribution (blue) and sparse distribution (red). As sparsity increases, the output distribution of the sparse neuron becomes progressively more Gaussian. WD represents the Wasserstein distance. This is the same neuron from Figure 2c.
388
+
389
+ ![](images/ce44e4afc36748c5a95e151858d8f2630fc0217ca587bb6c0653514995f38fbb.jpg)
390
+ Figure A10: Output distributions become more normal under sparsity. The Wasserstein distance between a neuron's normalized sparse output distribution and the Gaussian distribution is shown as sparsity increases for the top $3\%$ of entangled Wasserstein neurons, the same number of bottom $3\%$ WD neurons, and a random sample of $3\%$ of the neurons. For highly entangled neurons, the WD decreases significantly at higher sparsities whereas it remains more or less constant for the bottom $3\%$ of neurons and for the random neurons. Range indicates maximum and minimum WD for a group. Data collected from the second up projection matrix in Pythia-1.4B.
391
+
392
+ Furthermore, with increasing sparsity, the magnitudes of the mean and variance of each neuron's sparse output distribution both shift toward zero. This is reasonable: with fewer nonzero weights combining features, both the mean and the variance should decrease in magnitude.
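This effect is easy to reproduce in isolation: for a synthetic neuron with unit-variance inputs, the output variance is approximately $\|w\|^2$, which magnitude pruning can only decrease. A toy demonstration (our own construction, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(512)                  # synthetic neuron weights
X = rng.standard_normal((10000, 512)) + 0.1   # inputs with a small mean offset

def prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    out = w.copy()
    k = int(sparsity * len(w))
    out[np.argsort(np.abs(w))[:k]] = 0.0
    return out

dense_out = X @ w
sparse_out = X @ prune(w, 0.9)
# Removing weights removes terms from sum(w_i^2), so the output variance
# (≈ ||w||^2 here) shrinks; the paper observes the same trend for |mean|.
```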
393
+
394
+ ![](images/f2237647c6152f6d272837d46e353c1a0deb46917a7fbe314a74f5df4a16869f.jpg)
395
+ Figure A11: Mean and variance shift toward zero under sparsity. Across all neurons, with increasing sparsity, the magnitude of the mean (left) and the variance (right) of the output distribution both tend toward zero. Both mean and variance have been normalized to their dense values. Error bars represent one standard error. Data collected from the second up projection matrix in Pythia-1.4B.
396
+
397
+ # A.13 ALL NEURONS IMPROVE, BUT ENTANGLED NEURONS IMPROVE MORE AT HIGHER SPARSITIES
398
+
399
+ Measuring the relative improvement of each neuron through Sparse Expansion, we find that all neurons improve in both Pythia-1.4B and Llama-2-7B. Thus, we believe that every neuron carries some innate level of entanglement, and so every neuron can be, and is, improved. Interestingly, with increasing sparsity, highly entangled Wasserstein neurons tend to improve more.
400
+
401
+ ![](images/a31956e841cfe988f71336021aed494969c8fb3baa34374a8ef6681f7facebe6.jpg)
402
+
403
+ ![](images/06589ce7a381c2a8896926e5731f72257a36ede73a49a7ba2efb9910973eb8d6.jpg)
404
+
405
+ ![](images/e373fe54cf6e2c0f923b2c6ca7a3ff4d44ebbf34bec5fb4b9ff27eed88544455.jpg)
406
+ Figure A12: Entangled neurons improve more at higher sparsities. Relative improvement of each neuron in the second up projection matrix in Pythia-1.4B (top row) and in the second gate projection matrix in Llama-2-7B (bottom row) with respect to their WD from the Gaussian. Two sparsity levels, $80\%$ and $90\%$ , are shown. Sparse Expansion improves the expressibility of every neuron, thus improving performance. However, the entangled neurons improve more with higher sparsities, as visible in the right column.
407
+
408
+ ![](images/0b136c2936f5089b83f5867b46e4397ac80e6d220b92c77bb8e33242843644a6.jpg)
409
+
410
+ # A.14 THE WASSERSTEIN DISTANCE BEST CAPTURES WHICH NEURONS IMPROVE
411
+
412
+ We also consider whether the magnitude of the mean of the output distribution, or the variance of the distribution, would be a good predictor of the degree of neuronal improvement through Sparse Expansion. However, across both Pythia-1.4B (Figure 7) and Llama-2-7B (Figure A13), the Wasserstein distance from the normal is a better predictor of relative improvement, as defined previously. Though the magnitude of the mean and the variance of the output distribution correlate somewhat with relative improvement in Pythia-1.4B, this is not the case in Llama-2-7B. Furthermore, using the WD to predict neuronal improvement yields the highest coefficient of determination, $R^2$, across both models.
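For reference, the coefficient of determination for a single predictor under a simple linear fit (an assumed setup; the paper's exact fitting procedure may differ) reduces to the squared Pearson correlation:

```python
import numpy as np

def r_squared(predictor, improvement):
    """R^2 of a one-variable linear fit: the squared Pearson correlation
    between the candidate predictor (e.g. WD) and relative improvement."""
    r = np.corrcoef(predictor, improvement)[0, 1]
    return r * r
```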
413
+
414
+ ![](images/4f1b3f604b9727ad895dcb3a60fb51433325f1664b027f474be5383077d90a59.jpg)
415
+ Wasserstein distance best explains improvement from Sparse Expansion
416
+
417
+ ![](images/def30353d1183772414237ce6326eb4b8c2b44c1bfbfc97693ce96dcdd79cec9.jpg)
418
+ Figure A13: Wasserstein distance best captures improvement. Relative improvement of each neuron in the second gate projection matrix in Llama-2-7B with respect to the magnitude of the mean, variance, and Wasserstein distance from normal of the dense output distribution. Neurons pruned to $90\%$ sparsity.
419
+
420
+ ![](images/e1ae3e69eb2123448a0f7ad0b2050e23d88df81190e63d688035a79f7605bb9f.jpg)
421
+
422
+ # A.15 OUTPUT DISTRIBUTIONS OF ENTANGLED NEURONS IN PYTHIA AND LLAMA
423
+
424
+ Figures A14 and A15 show the non-trivial, non-Gaussian output distribution of a subset of neurons from the Pythia-1.4B and Llama-2-7B models, illustrating examples of entangled neurons. We observe such neurons in every FFN block of the LLMs we investigated and believe that the existence of these neurons is a global phenomenon in transformers.
425
+
426
+ ![](images/9b56a46602cfc853b8114d978087a051fc3c6065c948abbc628732304bbc0e2a.jpg)
427
+ Figure A14: Dense output distributions of top 30 high WD neurons in Pythia-1.4B. The distributions are shown for the neurons of the up projection matrix in the second FFN block.
428
+
429
+ ![](images/f38393aa476f2d0ad6b9a49079a4be7277f6fbb1c3e4f19eb31bc0295d1d9ea6.jpg)
430
+ Figure A15: Dense output distributions of top 30 high WD neurons in Llama-2-7B. The distributions are shown for the neurons of the up projection matrix in the sixteenth FFN block.
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51bc39aefb84c79ddc7eeaac8f524aaa865a74041fc623fdb9ecd83b8c7d6b0f
3
+ size 1443967
ICLR/2025/Wasserstein Distances, Neuronal Entanglement, and Sparsity/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e68ed82f34ee98616ee38258897916b0b26aca5d0d9375d875ee79a2b1a7291
3
+ size 620512
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b82adf35b26917e94a95e17adb445e3e249b84b7f5be9af43c3b31fb8bfeeef5
3
+ size 155729
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58184001522f92152ae877d3efb88576f9df604b0bc8e255808629e99e0e27bb
3
+ size 183445
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/51df517f-777d-48e9-9727-92244295c047_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58c6703cdf9244a3bc765c2c62afb8c5614b28be25e00d88e184730a57c44c91
3
+ size 2047828
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/full.md ADDED
@@ -0,0 +1,744 @@
 
 
 
 
# WEAK-TO-STRONG PREFERENCE OPTIMIZATION: STEALING REWARD FROM WEAK ALIGNED MODEL

Wenhong Zhu $^{1,2}$ , Zhiwei He $^{1}$ , Xiaofeng Wang $^{1}$ , Pengfei Liu $^{1,2}$ , Rui Wang $^{1*}$

$^{1}$ Shanghai Jiao Tong University, $^{2}$ Shanghai Innovation Institute
{zwhong714, wangrui12}@sjtu.edu.cn

# ABSTRACT

Aligning language models (LMs) with human preferences has become a key area of research, enabling these models to better meet diverse user needs. Inspired by weak-to-strong generalization, where a strong LM fine-tuned on labels generated by a weaker model can consistently outperform its weak supervisor, we extend this idea to model alignment. In this work, we observe that the alignment behavior of weaker models can be effectively transferred to stronger models, and can even exhibit an amplification effect. Based on this insight, we propose Weak-to-Strong Preference Optimization (WSPO), which aligns a strong model by learning the distribution differences before and after the alignment of a weak model. Experiments demonstrate that WSPO delivers outstanding performance, improving the win rate of Qwen2-7B-Instruct on Arena-Hard from 39.70 to 49.60, achieving a remarkable 47.04 length-controlled win rate on AlpacaEval 2, and scoring 7.33 on MT-bench. Our results suggest that using a weak model to elicit strong alignment ability in a strong model is feasible. The code is available at https://github.com/zwhong714/weak-to-strong-preference-optimization.
# 1 INTRODUCTION

Cutting-edge large language models (LLMs) are trained through a three-phase process (OpenAI, 2024). Initially, these models undergo pre-training on extensive corpora, using next-token prediction to build a foundational understanding (Radford et al., 2018; 2019). Following this, the pre-trained models are fine-tuned using supervised fine-tuning (SFT) to better align with specific instructions (Wei et al., 2021). However, these models have flaws, as they can sometimes produce factual inaccuracies, exhibit biases, and display other undesirable behaviors (Bai et al., 2022; Liu et al., 2024b). Learning from human preferences (Christiano et al., 2017) is a paradigm in the final phase aiming to better align pre-trained and instruction-followed generative LMs with human values and goals.

![](images/df72ff1a00bbc7b79c3b8d9f31a027ea9dc3fcd1072f97f498675e5acdf0da47.jpg)
Figure 1: Pipeline for LM alignment. (1) Perform SFT on the pre-trained model using expert data. (2) Current approaches incorporate explicit or implicit reward mechanisms to fine-tune the model further, aligning its behavior with human preferences. (3) WSPO aligns strong models by utilizing the distributional differences observed before and after aligning the weak model.

As shown in Figure 1, the alignment method in RLHF traditionally involves training an explicit scalar-valued reward model that captures human judgment. This reward model is then used to fine-tune the LM through reinforcement learning (RL) (Christiano et al., 2017), such as the proximal policy optimization (PPO) (Schulman et al., 2017) algorithm. This pipeline is considerably more complex than SFT, involving training multiple LMs and sampling from the LM policy in the training loop, incurring significant computational costs. More recent research has explored alignment approaches that eliminate the need for a separate reward model, instead aligning the LM directly based on human preferences, named Direct Preference Optimization (DPO) (Rafailov et al., 2024).

Learning from human feedback preferences, whether online or offline, is crucial in PPO and DPO. A phenomenon known as weak-to-strong generalization (Burns et al., 2023) demonstrates that a strongly pre-trained model, when fine-tuned on labels generated by a weaker model, consistently outperforms the weaker supervised model. This intriguing result prompts the question: Can we leverage the alignment signal from weak models to align a strong model?

This paper introduces a novel method called Weak-to-Strong Preference Optimization (WSPO), a loss function designed to effectively transfer the alignment capability from a weaker model to a stronger one. Our results show that the stronger model can amplify this transferred alignment. Instead of using data generated by the weaker model as labels for aligning the stronger model, we establish a relationship between the weak model (serving as a reward model) and the strong model in the context of RL optimization. By learning the differences before and after the alignment of the weak model, we can effectively enhance the alignment ability of the stronger model.

The main contributions of this paper are summarized as follows:

- We introduce the WSPO method, a loss function that transfers the alignment capability of the weak model to the strong model by learning the distributional differences before and after the weak model's alignment.
- We find that the alignment capability of the weaker model can be effectively transferred to the stronger model, amplifying the stronger model's alignment performance.
- Our experimental analysis reveals that the proposed method improves the win rate of Qwen2-7B-Instruct on Arena-Hard from 39.70 to 49.60, achieving an impressive 47.04 length-controlled win rate on AlpacaEval 2, and obtaining a score of 7.33 on MT-bench. Results on various common sense, mathematical, and other reasoning tasks demonstrate that our method preserves the knowledge embedded in the strong model.
# 2 PRELIMINARIES

Given a query sequence $x \coloneqq (x_{1},\ldots ,x_{m}) \in \mathcal{X}$, an auto-regressive LM defines a probability distribution over possible response sequences $y \coloneqq (y_{1},y_{2},\dots ,y_{n}) \in \mathcal{Y}$. The probability $\pi_{\theta}(y \mid x)$ can be decomposed using the chain rule of probability as $\pi_{\theta}(y \mid x) = \prod_{t = 1}^{n}\pi_{\theta}\left(y_{t} \mid y_{< t},x\right)$, where $y_{< t}$ denotes $\{y_1,y_2,\dots ,y_{t - 1}\}$. Typically, an LM is pre-trained on a large, unlabeled text dataset using maximum likelihood estimation (MLE). This process can be viewed as learning a distribution that narrows the gap with the true data distribution.
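As a quick numerical illustration (our own toy sketch, not from the paper), the chain-rule decomposition means a sequence log-probability is simply the sum of the per-token conditional log-probabilities:

```python
import math

def sequence_log_prob(token_probs):
    """log pi(y | x) = sum_t log pi(y_t | y_<t, x): the chain-rule
    decomposition of an auto-regressive LM's sequence probability."""
    return sum(math.log(p) for p in token_probs)

# Made-up per-token conditional probabilities pi(y_t | y_<t, x):
token_probs = [0.5, 0.25, 0.8]
log_p = sequence_log_prob(token_probs)  # equals log(0.5 * 0.25 * 0.8) = log(0.1)
```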
# 2.1 SUPERVISED FINE-TUNING

Following initialization with a pre-trained LM $\pi_{\mathrm{base}}$, the model undergoes further fine-tuning on smaller, meticulously curated datasets containing expert demonstrations of high-quality responses. This results in the model $\pi_{\mathrm{sft}}$. These datasets emphasize desired behaviors such as following instructions (Wei et al., 2021), engaging in dialogue (Li et al., 2016), and other similar tasks.

# 2.2 FINETUNING FROM HUMAN FEEDBACK

Learning from human feedback (Christiano et al., 2017) has garnered significant attention due to its potential to use human-labeled datasets for aligning LMs with human preferences (Wei et al., 2021). In these alignment approaches, the optimization objective is generally to maximize the expected reward from an implicit or explicit reward function while including a KL-divergence term from the reference policy as a penalty for divergence (Shi et al., 2024).
# 2.2.1 RLHF

Learning the reward model. Learning a reward model typically involves training a binary classifier to distinguish between preferred and less preferred actions using a logistic regression loss. A commonly used classifier for this task is the Bradley-Terry model (David, 1963). In this model, for a given context $x$ and response $y$, the pointwise reward of selecting $y$ given $x$ is denoted by $r(x,y)$.

Policy optimization with the learned reward. Once the reward model is established, the model alignment process maximizes the expected reward while preserving the original distribution $\pi_{\mathrm{ref}}$. This is often achieved using a family of $f$-divergence regularization methods (Rafailov et al., 2024; Shi et al., 2024). For example, when using KL divergence, the optimization problem on a static dataset of comparisons $\mathcal{D} = \left\{x^{(i)},y_w^{(i)},y_l^{(i)}\right\}_{i = 1}^N$, where $y_{w}$ and $y_{l}$ represent the preferred and dispreferred completions, respectively, can be formulated as follows:

$$
\max_{\pi_{\theta}} \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(y \mid x)} \left[ r(x, y) \right] - \beta \, \mathbb{D}_{\mathrm{KL}} \left[ \pi_{\theta}(y \mid x) \,\|\, \pi_{\mathrm{ref}}(y \mid x) \right], \tag{1}
$$

where $\beta$ is a parameter that controls the degree of deviation from the reference policy $\pi_{\mathrm{ref}}$. If $\beta$ is set too high, KL regularization forces the aligned model to closely mimic the SFT model, potentially limiting the effectiveness of the alignment (Geist et al., 2019). On the other hand, if $\beta$ is set too low, the aligned model may diverge excessively from the SFT model, leading to reward hacking (Skalse et al., 2022). This overfitting problem can compromise critical capabilities developed during pretraining or SFT (Stiennon et al., 2020).

# 2.2.2 DPO

An alternative approach to learning from the human preference paradigm described above is DPO (Rafailov et al., 2024), which completely bypasses the need to train a reward model. The loss function that DPO optimizes is expressed as a function of $\pi_{\theta}$ as follows:

$$
\mathcal{L}_{\mathrm{DPO}}\left(\pi_{\theta}; \pi_{\mathrm{ref}}\right) = -\mathbb{E}_{(x, y_{w}, y_{l}) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_{\theta}(y_{w} \mid x)}{\pi_{\mathrm{ref}}(y_{w} \mid x)} - \beta \log \frac{\pi_{\theta}(y_{l} \mid x)}{\pi_{\mathrm{ref}}(y_{l} \mid x)} \right) \right]. \tag{2}
$$

While DPO simplifies the process by bypassing the need for reward function training, this may result in a final strategy that is less regularized and robust compared to RLHF. In RLHF, the underfitted reward function is crucial in balancing and optimizing the final strategy (Azar et al., 2024).
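For concreteness, here is a minimal scalar sketch of Equation 2 (our own illustration; a real implementation would batch this over sequences of token log-probabilities):

```python
import math

def dpo_loss(lp_w, lp_w_ref, lp_l, lp_l_ref, beta):
    """Per-example DPO loss (Eq. 2), given summed sequence log-probs
    log pi(y | x) of the chosen (lp_w) and rejected (lp_l) responses
    under the policy and the reference model (*_ref)."""
    margin = beta * ((lp_w - lp_w_ref) - (lp_l - lp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

With a zero preference margin the loss is exactly log 2; it decreases as the policy raises the chosen response's log-probability relative to the rejected one.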
# 3 METHOD

Weak-to-strong generalization suggests that a strong model can generalize beyond weak labels rather than simply imitating the behavior of the weaker model. Building on this concept, in this section we leverage RL theory to demonstrate that it is possible to train a strongly aligned model using the distributional differences before and after weak-model alignment.

# 3.1 YOUR LANGUAGE MODEL IS SECRETLY A REWARD MODEL

Prior work (Rafailov et al., 2024) shows that given a specific reward model $r(x,y)$, the optimal solution to the KL-constrained reward maximization problem in Objective 1 takes the form:

$$
\pi_{r}(y \mid x) = \frac{1}{Z(x)} \pi_{\mathrm{ref}}(y \mid x) \exp\left( \frac{1}{\beta} r(x, y) \right), \tag{3}
$$

where $Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y \mid x) \exp \left(\frac{1}{\beta} r(x, y)\right)$ is the partition function, and $\pi_r(y \mid x)$ represents the model after alignment. Rearranging Equation 3 expresses the reward function as:

$$
r(x, y) = \beta \log \frac{\pi_{r}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x). \tag{4}
$$

Theorem 1. Under mild assumptions, all reward classes consistent with the Plackett-Luce (and Bradley-Terry in particular) models can be represented with the reparameterization $r(x,y) = \beta \log \frac{\pi(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$ for some model $\pi(y \mid x)$ and a given reference model $\pi_{\mathrm{ref}}(y \mid x)$.

Based on Theorem 1, proposed by Rafailov et al. (2024), the reward function can be expressed as the difference in distributions before and after model alignment.
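On a finite response set, Equations 3 and 4 can be checked numerically. The following sketch (our own, with made-up numbers) normalizes the tilted reference distribution and then recovers the rewards from the log-ratio plus the partition term:

```python
import math

def aligned_policy(ref_probs, rewards, beta):
    """Eq. 3 on a finite response set: pi_r(y|x) is proportional to
    pi_ref(y|x) * exp(r(x, y) / beta). Returns (pi_r, Z(x))."""
    weights = [p * math.exp(r / beta) for p, r in zip(ref_probs, rewards)]
    z = sum(weights)  # partition function Z(x)
    return [w / z for w in weights], z

beta = 2.0
pi_ref = [0.7, 0.3]    # toy reference distribution over two responses
rewards = [1.0, 3.0]   # toy rewards r(x, y)
pi_r, z = aligned_policy(pi_ref, rewards, beta)

# Eq. 4 round-trip: beta * log(pi_r / pi_ref) + beta * log Z recovers r(x, y).
recovered = [beta * math.log(p / q) + beta * math.log(z)
             for p, q in zip(pi_r, pi_ref)]
```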
# 3.2 WEAK-TO-STRONG PREFERENCE OPTIMIZATION

Before introducing our method, we define $\pi_r^{\mathrm{weak}}(y\mid x)$ as a weak model aligned using specific algorithms, such as PPO or DPO. Similarly, $\pi_{\mathrm{ref}}^{\mathrm{weak}}(y\mid x)$ denotes the reference model, which may correspond to either $\pi_{\mathrm{sft}}$ or $\pi_{\mathrm{base}}$. These notations also apply to strong models.

Derive the WSPO objective. Theorem 1 demonstrates that a reward model trained on the preference dataset $\mathcal{D}$ can be expressed as the distribution difference before and after model alignment. Consequently, we can align a weak model and derive an aligned weak model $\pi_r^{\mathrm{weak}}(y\mid x)$. Thus, the reward model $r(x,y)$ can be formulated as $\beta \log \frac{\pi_r^{\mathrm{weak}}(y\mid x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y\mid x)}$. Next, we employ this transformed reward model to align a strong model $\pi_{\mathrm{ref}}^{\mathrm{strong}}(y\mid x)$, allowing us to derive that

$$
\pi_{r}^{\mathrm{strong}}(y \mid x) = \frac{1}{Z'(x)} \pi_{\mathrm{ref}}^{\mathrm{strong}}(y \mid x) \exp\left( \frac{1}{\lambda} r(x, y) \right) \propto \pi_{\mathrm{ref}}^{\mathrm{strong}}(y \mid x) \left( \frac{\pi_{r}^{\mathrm{weak}}(y \mid x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y \mid x)} \right)^{1/\gamma}, \tag{5}
$$

where $Z'(x) = \sum_{y} \pi_{\mathrm{ref}}^{\mathrm{strong}}(y \mid x) \exp \left(\frac{1}{\lambda} r(x, y)\right)$, $\lambda$ is the regularization strength used to align the strong LM via Objective 1, and $\gamma = \lambda / \beta$. Although $r(x, y)$ is analytically tractable, substituting it into Objective 1 for policy optimization is challenging because the partition function is intractable. However, we can optimize toward Equation 5 by minimizing the distance between the log-probability ratios before and after alignment of the strong and weak models. We therefore obtain

$$
\mathcal{L}_{\mathrm{WSPO}} = \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \frac{1}{|y|} \left\| \gamma \log \frac{\pi_{\theta}^{\mathrm{strong}}(y \mid x)}{\pi_{\mathrm{ref}}^{\mathrm{strong}}(y \mid x)} - \log \frac{\pi_{r}^{\mathrm{weak}}(y \mid x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y \mid x)} \right\|_{2}^{2} \right]. \tag{6}
$$

See Appendix A.3 for the derivation. Intuitively, we use the change in the weak model's distribution before and after alignment as a supervisory signal to guide the alignment of the stronger reference model.
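A minimal sketch of Equation 6 (our own illustration), assuming the squared norm is taken over per-token log-probability ratios, which is consistent with the token-wise gradient in Equation 7:

```python
def wspo_loss(strong_lp, strong_ref_lp, weak_lp, weak_ref_lp, gamma):
    """Length-normalized WSPO loss (Eq. 6). Each argument is a list of
    per-token log-probs log pi(y_t | y_<t, x) for the same response y."""
    n = len(strong_lp)
    diffs = [gamma * (s - sr) - (w - wr)
             for s, sr, w, wr in zip(strong_lp, strong_ref_lp,
                                     weak_lp, weak_ref_lp)]
    return sum(d * d for d in diffs) / n

# The loss vanishes when the strong model's scaled log-ratio matches the
# weak model's alignment shift token by token.
```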
The role of the hyperparameter $\gamma$. The hyperparameter $\gamma$ plays a dual role: it maximizes the reward function $\beta \log \frac{\pi_r^{\mathrm{weak}}(y\mid x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y\mid x)}$, while simultaneously constraining the proximity of the original distribution $\pi_{\mathrm{ref}}^{\mathrm{strong}}(y\mid x)$ to the optimized distribution $\pi_{\theta}^{\mathrm{strong}}(y\mid x)$.

What does WSPO do? The gradient with respect to the parameters $\theta$ can be written as:

$$
\mathbb{E}_{(x, y) \sim \mathcal{D}} \sum_{t = 1}^{|y|} \left[ \frac{2}{|y|} \left( \gamma \log \frac{\pi_{\theta}^{\mathrm{strong}}(y_{<t} \mid x)}{\pi_{\mathrm{ref}}^{\mathrm{strong}}(y_{<t} \mid x)} - \log \frac{\pi_{r}^{\mathrm{weak}}(y_{<t} \mid x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y_{<t} \mid x)} \right) \nabla_{\theta} \log \pi_{\theta}^{\mathrm{strong}}(y_{<t} \mid x) \right]. \tag{7}
$$

As shown in Equation 7, the direction of the gradient is determined by the term in parentheses, while the magnitude of the update is not dictated by the model's likelihood on the dataset. See Appendix A.4 for the derivation. This may help mitigate the overfitting problem commonly associated with DPO-style alignment algorithms (Azar et al., 2024).

WSPO outline. The general WSPO pipeline operates as follows: (1) Utilize offline datasets $\mathcal{D} = \left\{x^{(i)},y_w^{(i)}\right\}_{i = 1}^N$, such as the selected preference or SFT datasets; paired datasets are not required (in Appendix C, we demonstrate that even the rejected preference dataset remains effective for the WSPO algorithm). (2) Prepare the weak model, both pre- and post-alignment. (3) Optimize the LM $\pi_{\theta}^{\mathrm{strong}}(y\mid x)$ to minimize the objective $\mathcal{L}_{\mathrm{WSPO}}$ on the specified dataset. The only parameter requiring tuning is $\gamma$.
# 4 EXPERIMENTS

This section empirically evaluates WSPO's ability to align strong models by learning from weaker ones. Our findings reveal that aligned weak models can successfully transfer their alignment behaviors to stronger models, often resulting in an enhanced alignment effect. Additionally, WSPO demonstrates competitive performance compared to strong models trained using PPO and DPO. We begin by illustrating the feasibility of our method with a toy example. Next, we analyze the stability of WSPO algorithm training. Finally, we conduct a comprehensive evaluation of the algorithm's overall performance.

# 4.1 SUMMARIZATION WITH A LENGTH REWARD

# 4.1.1 EXPERIMENT SETUP

Task. We employ a toy summarization task with a hardcoded reward function that incentivizes models to generate summaries with lengths falling within the range $[L_{\min}, L_{\max}]$:

$$
r(x, y) := \begin{cases} 0, & \text{if } |y| \in \left[ L_{\min}, L_{\max} \right] \\ -1, & \text{otherwise} \end{cases} \tag{8}
$$
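This hard-coded reward is trivial to implement; a one-function sketch (our own):

```python
def length_reward(y, l_min=20, l_max=30):
    """Eq. 8: reward 0 if the summary length |y| lies in [L_min, L_max],
    otherwise -1. `y` is the generated token sequence."""
    return 0 if l_min <= len(y) <= l_max else -1
```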
Models. We employ the Qwen2-1.5B-base and Qwen2-7B-base models (Yang et al., 2024a) as our pre-trained weak model, $\pi_{\mathrm{base}}^{\mathrm{weak}}$, and strong model, $\pi_{\mathrm{base}}^{\mathrm{strong}}$, respectively. To train the SFT models, $\pi_{\mathrm{sft}}^{\mathrm{weak}}$ and $\pi_{\mathrm{sft}}^{\mathrm{strong}}$, we utilize the training split of the XSUM dataset (Narayan et al., 2018). Subsequently, the validation split is employed for further fine-tuning, leading to the corresponding PPO-aligned models, $\pi_{\mathrm{ppo}}^{\mathrm{weak}}$ and $\pi_{\mathrm{ppo}}^{\mathrm{strong}}$. When training with WSPO, we directly use the distributional differences between $\pi_{\mathrm{base}}^{\mathrm{weak}}$ and $\pi_{\mathrm{ppo}}^{\mathrm{weak}}$ to align $\pi_{\mathrm{base}}^{\mathrm{strong}}$ and derive $\pi_{\mathrm{wspo}}^{\mathrm{strong}}$, since the summarization task requires no additional knowledge beyond what is needed to produce a summary.

Evaluation. The parameters $L_{\min}$ and $L_{\max}$ are set to 20 and 30, respectively. We use the test split for evaluation to guarantee no data contamination (Zhu et al., 2023). Detailed experimental settings are provided in Appendix B.1.
# 4.1.2 RESULTS AND ANALYSIS

We visualize the lengths generated by various alignment algorithms across models of different sizes and calculate the win rate, which represents the proportion of lengths falling within the $[L_{\min}, L_{\max}]$ range.
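Concretely, the win rate here is just the fraction of generated lengths that land inside the reward window (our own sketch):

```python
def win_rate(lengths, l_min=20, l_max=30):
    """Fraction of generation lengths falling within [L_min, L_max]."""
    hits = sum(1 for n in lengths if l_min <= n <= l_max)
    return hits / len(lengths)
```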
WSPO exhibits alignment capability comparable to PPO. As shown in Figure 2, the PPO and WSPO alignment algorithms address the variable generation lengths arising from SFT training. The left figure illustrates how our method, using greedy decoding, performs similarly to the PPO algorithm and can effectively mitigate outliers. In the right figure, we present the averaged results from three samplings. From the error bars, it is evident that both PPO and WSPO exhibit high stability during generation. Notably, at lower temperatures, our method outperforms the PPO-aligned strong model.

WSPO embraces certain generalizations. On the left in Figure 2, we can observe that the weak model performs quite well on the alignment task, with the results predominantly concentrated in the reward area. Additionally, the strong LM, obtained through WSPO alignment with the weak model, exhibits a different length distribution in its generated outputs. This indicates that the strong LM is not merely imitating the behavior of weak models.

![](images/e24e9adc4fcd1f0a57a04538b7e9a9b357a03aa333032b7b5458fdcd2f682fe8.jpg)
Figure 2: Left. PPO and WSPO alignment methods vary in the length of generated sequences compared to the reference SFT model using greedy decoding. Right. PPO and WSPO alignment methods show variation in reward hits compared to the reference SFT model, using the top-$p$ sampling algorithm at different temperatures.

![](images/fde441937d5b32a7bb5f53702928436af1e39bd03d7b8431769c4162e1b77a9b.jpg)

WSPO provides a faster alignment process. Traditionally, model alignment involves performing SFT on pre-trained LMs first, then applying PPO algorithms for alignment. In this section, we merge these two stages. Using the same amount of alignment data typically reserved for PPO training, WSPO achieves comparable alignment effects in a more streamlined process. This demonstrates that a pre-trained model can effectively perform alignment by learning differential distribution signals, provided it has acquired sufficient knowledge.
# 4.2 SINGLE-TURN DIALOGUE

# 4.2.1 EXPERIMENTAL SETUP

Task. In single-turn dialogue, given a human query as the prompt $x$, the model must either respond politely to the query or refuse to answer.

Models. In this scenario, we fine-tune the Qwen2-1.5B-base and Qwen2-7B-base models (Yang et al., 2024a) exclusively on the preferred completions, resulting in two models: $\pi_{\mathrm{sft}}^{\mathrm{weak}}$ and $\pi_{\mathrm{sft}}^{\mathrm{strong}}$. We refer to these models collectively as Preferred-FT models. Subsequently, we apply the DPO algorithm to perform alignment training using the Anthropic Helpful and Harmless conversation training dataset (Bai et al., 2022). This step produces two alignment models: $\pi_{\mathrm{dpo}}^{\mathrm{weak}}$ and $\pi_{\mathrm{dpo}}^{\mathrm{strong}}$. To address potential biases introduced by base models pre-trained on a high-quality corpus, we utilize the distributional differences between $\pi_{\mathrm{sft}}^{\mathrm{weak}}$ and $\pi_{\mathrm{dpo}}^{\mathrm{weak}}$. These differences guide the alignment of $\pi_{\mathrm{sft}}^{\mathrm{strong}}$ through the WSPO algorithm, ultimately resulting in the model $\pi_{\mathrm{wspo}}^{\mathrm{strong}}$.

Evaluation. We use the test split of the Anthropic HH dataset to assess alignment performance, evaluating through a single-step human-assistant interaction. The evaluation leverages preferred completions from the test set as references to calculate the win rates of different methods. Taking cost into consideration, we selected GPT-4o-mini as the judge model and use Qwen2.5-72B (Qwen, 2024) to verify the validity of the evaluation. Detailed experimental settings are provided in Appendix B.2.

# 4.2.2 RESULTS AND ANALYSIS

The alignment signal from the weak model plays a crucial role. As shown on the left side of Figure 3, unlike DPO, which requires distinguishing the likelihood between preferred and dispreferred pairs in preference data, WSPO, starting from the Preferred-FT model, continues to leverage the alignment signal from the weak model on preferred data to perform alignment. Consequently, WSPO adjusts the distribution learned exclusively from the preferred data, resulting in improved alignment ability compared to the Preferred-FT model.
![](images/034c5ac0847e7c7ba41b1794d4b2d8d23f76a6cc450889b84ff771562d7a7c3f.jpg)
Figure 3: Left. Win rates computed by GPT-4o-mini for Anthropic-HH single-step dialogue at different temperatures. Right. The win rates for different sampling temperatures remain relatively stable throughout the training process. WSPO demonstrates consistent performance across varying sampling temperatures over time.

![](images/d4a2fb644f04a3266871bb5854cec8bfae4239ef819ca387bcf878522a689fd8.jpg)

WSPO exhibits better alignment capability than DPO. Tuning the parameters for DPO proves to be quite challenging, as seen in Appendix B.2. After several adjustments, we achieved a relatively favorable outcome with the hyperparameter $\beta = 0.5$, determined by testing the values $\{0.1, 0.5, 1.0, 2.0, 5.0\}$ for the 7B model. As illustrated on the left side of Figure 3, our model, trained with the hyperparameter $\gamma = 0.1$, surprisingly demonstrates better alignment performance than DPO.

WSPO exhibits fast convergence and stability. As shown in the right part of Figure 3, the model achieves a high win rate after just 1k fine-tuning steps, demonstrating fast convergence. Additionally, the training process remains stable throughout.
# 4.3 A COMPLEX EVALUATION

# 4.3.1 EXPERIMENTAL SETUP

Models. For the Base setting, we train the base models on the UltraChat-200k (Ding et al., 2023) dataset to obtain the SFT models; for the Instruct setting, we use the off-the-shelf instruction-tuned models as the SFT models. Then, we perform alignment using DPO on the UltraFeedback dataset (Cui et al., 2024), using the SFT model as the starting point. The process of obtaining the models is largely consistent with that of Section 4.2.

Evaluation. We evaluate our models primarily on three widely adopted, open-ended instruction-following benchmarks: MT-Bench (Zheng et al., 2024a), AlpacaEval 2 (Dubois et al., 2024), and Arena-Hard v0.1 (Li et al., 2024c). These benchmarks assess the models' conversational versatility across diverse queries and are commonly used by the community (Meng et al., 2024). Moreover, fine-tuning LMs is challenging, notably because it can cause forgetting (French, 1992) of pre-trained knowledge. To demonstrate that the strong model generalizes beyond weak labels rather than merely imitating the behavior of weak models, we use zero-shot or few-shot learning to test reasoning ability across five benchmarks: MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2024a), Truthful-QA (Lin et al., 2021), GSM-PLUS (Li et al., 2024b), and GSM8K (Cobbe et al., 2021). We evaluate these benchmarks using the lm-evaluation-harness (Gao et al., 2024) repository. Evaluation details are in Appendix B.3.
# 4.3.2 RESULTS AND ANALYSIS

As shown in Table 1, the Instruct setting consistently outperforms the Base setting. This is primarily because the Instruct model utilizes high-quality demonstration and preference data for SFT and RLHF (Yang et al., 2024a).

DPO shows limited improvement for the Instruct model. The SFT-trained model demonstrates limited effectiveness across the three benchmarks, highlighting areas for potential enhancement.

Table 1: Evaluation results of models across different settings and benchmarks. LC and WR refer to length-controlled and raw win rates, respectively. We train SFT models under the Base settings using the UltraChat-200K dataset. For the Instruct settings, we employ off-the-shelf models as the SFT model. The SFT and DPO versions of the weak model are employed to align the strong model within the WSPO algorithm.

<table><tr><td rowspan="3">Method</td><td colspan="4">Qwen2-Base (1.5B)</td><td colspan="4">Qwen2-Instruct (1.5B)</td></tr><tr><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td></tr><tr><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td></tr><tr><td>SFT</td><td>4.16</td><td>2.30</td><td>0.90</td><td>4.68</td><td>5.31</td><td>3.42</td><td>2.40</td><td>5.05</td></tr><tr><td>DPO</td><td>5.56</td><td>4.79</td><td>2.60</td><td>5.03</td><td>8.93</td><td>6.77</td><td>4.00</td><td>5.60</td></tr><tr><td rowspan="3">Method</td><td colspan="4">Qwen2-Base (7B)</td><td colspan="4">Qwen2-Instruct (7B)</td></tr><tr><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td></tr><tr><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td></tr><tr><td>SFT</td><td>11.54</td><td>5.65</td><td>5.30</td><td>5.86</td><td>30.73</td><td>28.32</td><td>39.70</td><td>7.19</td></tr><tr><td>DPO</td><td>14.06</td><td>8.45</td><td>10.70</td><td>6.70</td><td>32.10</td><td>28.15</td><td>39.30</td><td>7.26</td></tr><tr><td>WSPO</td><td>26.77</td><td>26.68</td><td>29.00</td><td>7.00</td><td>47.04</td><td>48.32</td><td>49.60</td><td>7.33</td></tr></table>

While DPO provides noticeable improvements for the base and weaker models, performance declines with Qwen2-7B-Instruct. This decline could be due to Qwen2-7B's ability to achieve strong alignment through raw, high-quality data and complex RLHF processes (Yang et al., 2024a). Relying solely on the UltraFeedback dataset for DPO learning might not lead to performance gains, as the dataset may already be part of its original high-quality data. Additionally, it is possible that DPO adversely affected the model's initial performance.

WSPO effectively learns and amplifies the alignment signals of weak models. With WSPO, the strong model consistently delivers strong results across all three benchmarks. The impressive performance of WSPO can be attributed to its unique approach: unlike DPO, which learns directly from preference data pairs, WSPO derives alignment signals from weaker models and does not depend solely on the UltraFeedback dataset itself. For instance, on Qwen2-1.5B-Instruct, the alignment ability of the weak model improved from 2.40 to 4.00 with DPO learning, as measured by the Arena-Hard evaluation. Subsequently, the strong model's alignment capability was amplified from 39.70 to 49.60 by leveraging the differences in alignment signals from the weak model, something that DPO learning on the dataset alone cannot achieve. The amplification phenomenon might be attributed to the limited parameter size of the weak model, which constrains its ability to achieve optimal alignment; transferring this alignment to stronger models can therefore offer substantial benefits. Additionally, our method circumvents direct training on the preference dataset, effectively reducing risks such as overfitting and reward hacking.
Table 2: Evaluation results of models across different benchmarks. We evaluate these benchmarks using the lm-evaluation-harness (Gao et al., 2024) repository.

<table><tr><td>Model</td><td>MMLU</td><td>CMMLU</td><td>Truthful-QA</td><td>GSM-PLUS</td><td>GSM8K</td><td>Avg.</td></tr><tr><td>Qwen2-1.5B-Instruct</td><td>55.70</td><td>69.62</td><td>28.52</td><td>38.83</td><td>59.78</td><td>50.49</td></tr><tr><td>Qwen2-7B-Base</td><td>69.43</td><td>83.34</td><td>37.33</td><td>57.39</td><td>79.83</td><td>65.46</td></tr><tr><td>Qwen2-7B-Instruct</td><td>69.94</td><td>81.84</td><td>41.00</td><td>56.91</td><td>77.86</td><td>65.51</td></tr><tr><td>Qwen2-7B-Instruct + WSPO</td><td>69.44</td><td>80.82</td><td>47.00</td><td>57.96</td><td>77.94</td><td>66.63</td></tr><tr><td>Qwen2-7B-Base + WSPO</td><td>69.37</td><td>80.98</td><td>44.68</td><td>60.06</td><td>81.31</td><td>67.28</td></tr></table>

WSPO generalizes beyond weak models rather than simply imitating them. As shown in Table 2, Qwen2-1.5B-Instruct is much less capable than the 7B version, yet WSPO prevents forgetting of common-sense, mathematical, and other reasoning knowledge while enhancing the model's overall alignment ability, as shown in Table 1. Notably, on the Truthful-QA dataset, both the base model and the Instruct model exhibited improved truthfulness scores.
+
+ # 5 ANALYSIS
+
+ ![](images/17582f0f0076c41d4fb06306d0f97f2f34ae9dbf912cd453b5bce96a501f7661.jpg)
+ Figure 4: Left. The effect of weak model size on the sequence length generated by WSPO compared to the PPO using greedy decoding. Right. The impact of different $\gamma$ hyperparameters on WSPO in a single-turn dialogue analysis.
+
+ ![](images/b84b04f1c0b32e61c32b10475a60323517a63544b4db34ebc3d550c3e6017e10.jpg)
+
+ # 5.1 IMPACT OF WEAK MODEL
+
+ As discussed in the previous section, we utilized the probability difference between a weak base model and its aligned version to align a stronger model. In this section, we empirically investigate the impact of model size by using the Qwen2-0.5B model as a weaker counterpart to the Qwen2-1.5B model. The experimental setup mirrors that of Section 4.1.1. As illustrated on the left side of Figure 4, even a weaker model can provide a robust alignment signal to a stronger model (e.g., Qwen2-7B). Furthermore, in the Instruct setting of Section 4.3, we use the 0.5B model as the weaker model, without any alignment enhancement beyond DPO training. When this alignment is transferred to a stronger model, it achieves a score of 45.00 on the Arena-Hard benchmark using WSPO optimization. This indicates that parameter size may limit alignment in weaker models, whereas stronger models can amplify it. The alignment ability of the weak model also matters: one can perform fine-grained alignment on a weak model and then migrate that ability to a strong model to achieve better overall alignment.
+
+ # 5.2 IMPACT OF HYPERPARAMETER
+
+ Recalling the WSPO loss in Equation 6, we introduce the hyperparameter $\gamma$, which represents the ratio of regularization intensity applied to the strong and weak models in the optimization objective outlined in Objective 1, as well as the penalty for deviating from the original distribution. This section investigates the impact of $\gamma$ on alignment strength. We test $\gamma \in \{0.1, 0.5, 1.0, 2.0\}$ to evaluate its effect on regularization. As illustrated on the right side of Figure 4, adjusting $\gamma$ modulates the degree to which the stronger model aligns with the weaker one and deviates from the original distribution. When $\gamma = 1$, the alignment of the strong model closely mirrors that of the weak model. As $\gamma$ increases beyond 1, the strong model's alignment increasingly favors the original distribution; conversely, the strong model exhibits superior alignment when $\gamma$ is less than 1. Therefore, although $\gamma$ incorporates a penalty for deviating from the original distribution, we can infer that the strong model requires weaker regularization than the weak model when optimizing the objective function in Objective 1.
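+
+ The scaling role of $\gamma$ follows directly from the squared-error form of the loss: at the optimum, $\gamma \log \frac{\pi_{\theta}^{\mathrm{strong}}}{\pi_{\mathrm{ref}}^{\mathrm{strong}}} = \log \frac{\pi_{r}^{\mathrm{weak}}}{\pi_{\mathrm{ref}}^{\mathrm{weak}}}$, so the strong model's deviation from its reference is the weak alignment signal scaled by $1/\gamma$. A minimal pure-Python sketch of this trade-off (the log-ratio values are illustrative numbers, not measurements from the paper):
+
```python
def wspo_example_loss(strong_logratio, weak_logratio, gamma, seq_len=1):
    # Per-example WSPO loss: length-normalized squared gap between the
    # scaled strong log-ratio and the weak alignment log-ratio (Eq. 13).
    return (gamma * strong_logratio - weak_logratio) ** 2 / seq_len

weak_signal = 2.0  # hypothetical log(pi_r_weak / pi_ref_weak) for one (x, y)

for gamma in (0.1, 0.5, 1.0, 2.0):
    # The loss-minimizing strong log-ratio is weak_signal / gamma:
    # gamma < 1 amplifies the weak signal, gamma > 1 stays near the reference.
    optimum = weak_signal / gamma
    assert wspo_example_loss(optimum, weak_signal, gamma) < 1e-12
    print(gamma, optimum)
```
+
+ With $\gamma = 1$ the optimum reproduces the weak signal exactly; larger $\gamma$ shrinks the deviation toward the original distribution, matching the behavior observed in Figure 4.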
+
+ # 5.3 IMPACT OF SFT PHASE
+
+ We also leveraged the probability difference between Qwen2-1.5B-Base and Qwen2-1.5B-Instruct to align stronger models directly from the Base version. On the Arena-Hard benchmark, the Base model initially scored 7.70; after applying the WSPO algorithm for alignment on Ultrafeedback, the score improved only modestly, to 9.30. This limited gain underscores the importance of high-quality knowledge injection during the SFT phase.
+
+ # 6 RELATED WORK
+
+ # 6.1 TRAINING-TIME ALIGNMENT
+
+ RLHF is a technique designed to align LLMs with human preferences and values (Christiano et al., 2017; Bai et al., 2022). In the third stage of RLHF, the PPO algorithm (Schulman et al., 2017) is commonly used. Recent advancements, such as Reinforcement Learning with AI Feedback (RLAIF), offer potential alternatives to traditional human feedback methods (Pang et al., 2023). However, challenges throughout the RLHF pipeline, from preference data collection to model training, have been noted by Radford et al. (2018). In contrast, approaches like DPO (Rafailov et al., 2024) bypass the need for a reward model by directly training LLMs using human preferences. Other competing methods, such as IPO (Azar et al., 2024), KTO (Ethayarajh et al.), and ORPO (Hong et al., 2024), have also emerged.
+
+ # 6.2 INFERENCE-TIME ALIGNMENT
+
+ Decoding strategies aim to generate text continuations that balance diversity and coherence (Zhu et al., 2024). Some methods trade off computational efficiency during inference to better align with human preferences. The simplest of these is the Best-of-$N$ approach, which samples multiple outputs from $\pi_{\mathrm{ref}}$ and selects the one with the highest reward according to a reward model (Touvron et al., 2023). Another approach is Emulated Fine-Tuning (EFT) (Mitchell et al., 2023), a scale-decoupling method that transfers fine-tuning effects between small and large LMs. Liu et al. (2024a) demonstrated the empirical effectiveness of this proxy-tuning technique, showing it rivals standard fine-tuning across various benchmarks. Additionally, Liu et al. (2024b) introduced DeRa, a cost-efficient method that dynamically adjusts alignment strength during inference. Zhou et al. (2024) used the log-probability difference between small tuned and untuned models to guide a frozen large model, providing an efficient up-scaling strategy without fine-tuning.
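+
+ As a point of reference, Best-of-$N$ amounts to a few lines of code; `sample_fn` and `reward_fn` below are hypothetical stand-ins for sampling from $\pi_{\mathrm{ref}}$ and scoring with a reward model:
+
```python
def best_of_n(sample_fn, reward_fn, n):
    # Draw n candidate responses from the reference policy and keep
    # the one the reward model scores highest.
    candidates = [sample_fn() for _ in range(n)]
    return max(candidates, key=reward_fn)

# Toy stand-ins: candidates come from a fixed pool and the "reward
# model" simply prefers longer responses.
pool = iter(["ok", "a longer answer", "short"])
print(best_of_n(lambda: next(pool), len, n=3))  # "a longer answer"
```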
+
+ # 6.3 WEAK-TO-STRONG GENERALIZATION
+
+ Several works have proposed using weak-model supervision to elicit the capabilities of a much stronger model. Burns et al. (2023) found that strong models fine-tuned by weak supervisors consistently outperform their weak counterparts. Yang et al. (2024b) present a method that improves model reasoning by employing weak supervision to autonomously refine training data, enabling the expansion of reasoning abilities without human annotations or advanced models. Unlike these approaches, our method uses weak-model supervision for alignment, enhancing helpfulness while maintaining the strong model's original abilities.
+
+ # 7 DISCUSSION
+
+ Conclusion. This paper introduced WSPO, a method for transferring alignment capabilities from a weaker model to a stronger one by leveraging distributional differences before and after weak model alignment. Experimental results show that WSPO improves model performance on key benchmarks, offering an efficient alternative to traditional alignment methods.
+
+ Limitations and future work. We did not explore the alignment transfer properties across different language model architectures or examine the impact of weak model alignment strength in WSPO. Our study also does not explain why transferring a weak model's alignment ability to a stronger model amplifies it. Future research could investigate the use of weak models as reward models in reinforcement learning frameworks to facilitate alignment or seek to explain this phenomenon.
+
+ # ACKNOWLEDGMENTS
+
+ This paper is supported by the General Program of National Natural Science Foundation of China (62176153).
+
+ # ETHICS STATEMENT
+
+ Although the datasets used in this paper are open-source and helpful, we did not perform an in-depth evaluation of them, nor did we account for factors such as safety, honesty, or other considerations when designing the WSPO loss function.
+
+ # REPRODUCIBILITY STATEMENT
+
+ All training experiments in this paper were conducted on 8×H100 GPUs using the LLaMA-Factory (Zheng et al., 2024b) repository, which offers an integrated framework for fine-tuning over 100 LLMs with a variety of efficient techniques. The only additional implementation required is training the model with the WSPO objective, which can be achieved by modifying the DPO training code in LLaMA-Factory, specifically by computing the loss on the selected dataset and loading the weaker models. The dialogue evaluation uses LLM judges, with the relevant prompts provided in the Appendix; the reasoning-task evaluation is performed with the llm-eval-harness (Gao et al., 2024) repository.
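+
+ Concretely, the modification replaces DPO's pairwise logistic loss with the length-normalized squared gap between the two log-ratios. A schematic pure-Python version over per-token log-probabilities is shown below; the four lists stand in for the strong policy, strong reference, aligned weak, and weak reference models, and this is a sketch of Equation 6, not the actual LLaMA-Factory code:
+
```python
def wspo_loss(lp_strong, lp_strong_ref, lp_weak_aligned, lp_weak_ref, gamma):
    # One-sequence WSPO loss: squared difference between the scaled
    # strong log-ratio and the weak alignment log-ratio, divided by |y|.
    n = len(lp_strong)
    strong_logratio = sum(lp_strong) - sum(lp_strong_ref)
    weak_logratio = sum(lp_weak_aligned) - sum(lp_weak_ref)
    return (gamma * strong_logratio - weak_logratio) ** 2 / n

# If the strong model's log-ratio already matches the weak alignment
# signal (gamma = 1), the loss is zero.
zero = wspo_loss([-1.0, -2.0], [-1.5, -2.5], [-0.5, -1.0], [-1.0, -1.5], 1.0)
print(zero)  # 0.0
```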
+
+ # REFERENCES
+
+ Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pp. 4447-4455. PMLR, 2024.
+ Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
+ Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390, 2023.
+ Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
+ Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, et al. Ultrafeedback: Boosting language models with scaled ai feedback. In *Forty-first International Conference on Machine Learning*, 2024.
+ Herbert Aron David. The method of paired comparisons, volume 12. London, 1963.
+ Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
+ Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024.
+ Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Model alignment as prospect theoretic optimization. In *Forty-first International Conference on Machine Learning*.
+ Robert M French. Semi-distributed representations and catastrophic forgetting in connectionist networks. Connection Science, 4(3-4):365-377, 1992.
+
+ Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602.
+ Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized markov decision processes. In International Conference on Machine Learning, pp. 2160-2169. PMLR, 2019.
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021. URL https://arxiv.org/abs/2009.03300.
+ Jiwoo Hong, Noah Lee, and James Thorne. Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2(4):5, 2024.
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention, 2023. URL https://arxiv.org/abs/2309.06180.
+ Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. Cmmlu: Measuring massive multitask language understanding in chinese, 2024a. URL https://arxiv.org/abs/2306.09212.
+ Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation, 2016. URL https://arxiv.org/abs/1606.01541.
+ Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. Gsm-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem solvers, 2024b. URL https://arxiv.org/abs/2402.19255.
+ Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E Gonzalez, and Ion Stoica. From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline. arXiv preprint arXiv:2406.11939, 2024c.
+ Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods, 2021.
+ Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A Smith. Tuning language models by proxy. arXiv preprint arXiv:2401.08565, 2024a.
+ Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares, Jessica Hoffmann, Lucas Dixon, Michal Valko, and Mathieu Blondel. Decoding-time realignment of language models. arXiv preprint arXiv:2402.02992, 2024b.
+ Llama-Team. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
+ Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734, 2024.
+ Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, and Christopher D Manning. An emulator for fine-tuning large language models using small language models. arXiv preprint arXiv:2310.12962, 2023.
+ Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1797-1807, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1206. URL https://aclanthology.org/D18-1206.
+
+ OpenAI. Gpt-4 technical report, 2024. URL https://arxiv.org/abs/2303.08774.
+ Jing-Cheng Pang, Pengyuan Wang, Kaiyuan Li, Xiong-Hui Chen, Jiacheng Xu, Zongzhang Zhang, and Yang Yu. Language model self-improvement by reinforcement learning contemplation. arXiv preprint arXiv:2305.14483, 2023.
+ Qwen. Qwen2.5: A party of foundation models, September 2024. URL https://qwenlm.github.io/blog/qwen2.5/.
+ Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/1707.06347.
+ Ruizhe Shi, Yifang Chen, Yushi Hu, Alisa Liu, Hannaneh Hajishirzi, Noah A. Smith, and Simon Du. Decoding-time language model alignment with multiple objectives, 2024. URL https://arxiv.org/abs/2406.18853.
+ Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460-9471, 2022.
+ Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.
+ Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, and Trevor Darrell. Aligning large multimodal models with factually augmented rlhf, 2023. URL https://arxiv.org/abs/2309.14525.
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Jasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
+ Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.
+ Yuqing Yang, Yan Ma, and Pengfei Liu. Weak-to-strong reasoning, 2024b. URL https://arxiv.org/abs/2407.13647.
+ Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, and Tat-Seng Chua. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback, 2024. URL https://arxiv.org/abs/2312.00849.
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 2024a.
+
+ Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of $100+$ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand, 2024b. Association for Computational Linguistics. URL http://arxiv.org/abs/2403.13372.
+ Zhanhui Zhou, Zhixuan Liu, Jie Liu, Zhichen Dong, Chao Yang, and Yu Qiao. Weak-to-strong search: Align large language models via searching over small language models. arXiv preprint arXiv:2405.19262, 2024.
+ Wenhong Zhu, Hongkun Hao, Zhiwei He, Yunze Song, Yumeng Zhang, Hanxu Hu, Yiran Wei, Rui Wang, and Hongyuan Lu. Clean-eval: Clean evaluation on contaminated large language models. arXiv preprint arXiv:2311.09154, 2023.
+ Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, and Rui Wang. Improving open-ended text generation via adaptive decoding. arXiv preprint arXiv:2402.18223, 2024.
+
+ # A MATHEMATICAL DERIVATIONS
+
+ # A.1 PROOF OF THEOREM 1
+
+ Lemma 1. Under the Plackett-Luce preference framework, particularly the Bradley-Terry framework, two reward functions from the same equivalence class induce the same preference distribution.
+
+ The proof can be found in the paper (Rafailov et al., 2024).
+
+ Lemma 2. Two reward functions from the same equivalence class induce the same optimal policy under the constrained RL problem.
+
+ The proof can be found in the paper (Rafailov et al., 2024).
+
+ Under Lemma 1 and Lemma 2, given the reward function $r(x,y)$ , which incorporates the optimal policy $\pi_r(y\mid x)$ under the KL-constrained RL framework, we have:
+
+ $$
322
+ r (x, y) = \beta \log \frac {\pi_ {r} (y \mid x)}{\pi_ {\mathrm {r e f}} (y \mid x)} + \beta \log Z (x),
323
+ $$
+
+ where $Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y \mid x) \exp \left(\frac{1}{\beta} r(x, y)\right)$ . This formulation is equivalent to:
+
+ $$
328
+ r ^ {\prime} (x, y) = \beta \log \frac {\pi_ {r} (y \mid x)}{\pi_ {\mathrm {r e f}} (y \mid x)}.
329
+ $$
+
+ # A.2 PROOF OF PROPOSITION 1
+
+ Proposition 1. Any fine-tuned model can be seen as solving a KL-constrained RL problem, where the constraint is defined relative to the pre-trained model.
+
+ Based on Theorem 1 and Proposition 1, we can define a composite reward function, $r_{\mathrm{ft}}(x,y) = r_{\mathrm{sft}}(x,y) \circ r_{\mathrm{alignment}}(x,y)$ , where $r_{\mathrm{sft}}(x,y)$ fine-tunes the base model to the SFT model, and $r_{\mathrm{alignment}}(x,y)$ further fine-tunes the SFT model to the aligned model. This composite reward enables the base model to be directly fine-tuned to the aligned model, effectively integrating alignment into the SFT training process through the appropriate choice of reward function. However, there remains a discrepancy between the pre-trained and SFT models (see Section 5.3). For specific tasks, such as managing generation length or repetitive patterns where internal knowledge is less essential, it may be feasible to skip the SFT phase.
+
+ Proof. Any fine-tuned language model $\pi_{\mathrm{ft}}$ and pre-trained model $\pi_{\mathrm{ref}}$ can be associated with a reward function $r_{\mathrm{ft}}(x,y)$ , defined through the following optimization problem:
+
+ $$
340
+ \max _ {\pi_ {\theta}} \mathbb {E} _ {x \sim \mathcal {D}, y \sim \pi_ {\theta} (y | x)} \left[ r _ {\mathrm {f t}} (x, y) \right] - \beta \mathbb {D} _ {\mathrm {K L}} \left[ \pi_ {\theta} (y \mid x) \right| \left| \pi_ {\text {r e f}} (y \mid x) \right], \tag {9}
341
+ $$
+
+ Optimizing Objective 9 provides the solution to this KL-constrained reinforcement learning problem, yielding $\pi^{*} = \pi_{\mathrm{ft}}$, with the reward function given by $r_{\mathrm{ft}}(x,y) = \beta \log \frac{\pi_{\mathrm{ft}}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$.
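+
+ This closed form is easy to verify numerically on a toy discrete vocabulary: setting $\pi^{*} \propto \pi_{\mathrm{ref}} \exp(r_{\mathrm{ft}}/\beta)$, the implied reward $\beta \log(\pi^{*}/\pi_{\mathrm{ref}})$ recovers $r_{\mathrm{ft}}$ up to the constant shift $\beta \log Z(x)$ (the numbers below are toy values, illustrative only):
+
```python
import math

beta = 0.5
pi_ref = [0.5, 0.3, 0.2]   # reference policy over a toy 3-token vocabulary
reward = [1.0, -0.5, 0.2]  # arbitrary toy reward values r_ft(x, y)

# Closed-form optimum of the KL-constrained objective (Eq. 9).
weights = [p * math.exp(r / beta) for p, r in zip(pi_ref, reward)]
Z = sum(weights)
pi_star = [w / Z for w in weights]

# beta * log(pi_star / pi_ref) equals the reward shifted by -beta * log Z.
shift = beta * math.log(Z)
for r, ps, p in zip(reward, pi_star, pi_ref):
    assert abs(beta * math.log(ps / p) + shift - r) < 1e-9
print([round(p, 3) for p in pi_star])
```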
+
+ # A.3 DERIVING THE WSPO OBJECTIVE
+
+ Given a weak model after alignment, we can consider the weak LM as a hidden reward model, where the reward model is defined as $r(x,y) = \beta \log \frac{\pi_{\mathrm{r}}^{\mathrm{weak}}(y|x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y|x)}$ . From this, we derive that
+
+ $$
350
+ \pi_ {\mathrm {r}} ^ {\text {s t r o n g}} (y \mid x) = \frac {1}{Z ^ {\prime} (x)} \pi_ {\text {r e f}} ^ {\text {s t r o n g}} (y \mid x) \exp \left(\frac {1}{\lambda} r (x, y)\right), \tag {10}
351
+ $$
+
+ where
+
+ $$
356
+ Z ^ {\prime} (x) = \sum_ {y} \pi_ {\text {r e f}} ^ {\text {s t r o n g}} (y \mid x) \exp \left(\frac {1}{\lambda} r (x, y)\right). \tag {11}
357
+ $$
+
+ By substituting the reward model $r(x,y)$ into Equation 11, we obtain:
+
+ $$
362
+ Z ^ {\prime} (x) = \sum_ {y} \pi_ {\text {r e f}} ^ {\text {s t r o n g}} (y \mid x) \exp \left(\frac {\beta}{\lambda} \log \frac {\pi_ {\mathrm {r}} ^ {\text {w e a k}} (y \mid x)}{\pi_ {\text {r e f}} ^ {\text {w e a k}} (y \mid x)}\right). \tag {12}
363
+ $$
+
+ Note that our optimization objective in Equation 6 aims to make $\frac{\beta}{\lambda} \log \frac{\pi_{\mathrm{r}}^{\mathrm{weak}}(y|x)}{\pi_{\mathrm{ref}}^{\mathrm{weak}}(y|x)}$ as close as possible to $\log \frac{\pi_{\theta}^{\mathrm{strong}}(y|x)}{\pi_{\mathrm{ref}}^{\mathrm{strong}}(y|x)}$ . In this context, it is essential to ensure that $\pi_{\theta}(y \mid x)$ is a valid distribution, which will make $Z'(x)$ close to 1. Therefore, optimizing the WSPO loss function becomes equivalent to optimizing Equation 10.
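+
+ This collapse of $Z'(x)$ can be checked on a toy vocabulary: if the trained strong model satisfies $\log \frac{\pi_{\theta}}{\pi_{\mathrm{ref}}^{\mathrm{strong}}} = \frac{\beta}{\lambda} \log \frac{\pi_{r}^{\mathrm{weak}}}{\pi_{\mathrm{ref}}^{\mathrm{weak}}}$ exactly, the sum in Equation 12 reduces to $\sum_{y} \pi_{\theta}(y \mid x) = 1$ (the distributions below are hypothetical):
+
```python
import math

beta_over_lambda = 0.5
pi_strong_ref = [0.4, 0.35, 0.25]
pi_theta = [0.5, 0.3, 0.2]  # trained strong model, a valid distribution

# Construct the weak log-ratio that the strong model has matched exactly.
weak_logratio = [math.log(t / r) / beta_over_lambda
                 for t, r in zip(pi_theta, pi_strong_ref)]

# Z'(x) from Equation 12 then collapses to sum(pi_theta) = 1.
z_prime = sum(r * math.exp(beta_over_lambda * w)
              for r, w in zip(pi_strong_ref, weak_logratio))
assert abs(z_prime - 1.0) < 1e-9
print(z_prime)
```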
+
+ # A.4 DERIVING THE GRADIENT OF WSPO OBJECTIVE
+
+ In this section, we derive the gradient of the WSPO objective:
+
+ $$
372
+ \nabla_ {\theta} \mathcal {L} _ {\mathrm {W S P O}} = \nabla_ {\theta} \mathbb {E} _ {(x, y) \sim \mathcal {D}} \left[ \frac {1}{| y |} \left\| \gamma \log \frac {\pi_ {\theta} ^ {\text {s t r o n g}} (y \mid x)}{\pi_ {\text {r e f}} ^ {\text {s t r o n g}} (y \mid x)} - \log \frac {\pi_ {\mathrm {r}} ^ {\text {w e a k}} (y \mid x)}{\pi_ {\text {r e f}} ^ {\text {w e a k}} (y \mid x)} \right\| _ {2} ^ {2} \right]. \tag {13}
373
+ $$
+
+ The probability $\pi_{\theta}(y\mid x)$ can be decomposed using the chain rule of probability as
+
+ $$
378
+ \pi_ {\theta} (y \mid x) = \prod_ {t = 1} ^ {n} \pi_ {\theta} \left(y _ {t} \mid y _ {< t}, x\right), \tag {14}
379
+ $$
+
+ Each term in the product is a function of $\theta$; therefore, taking the derivative of the WSPO loss function, we have
+
+ $$
384
+ \nabla_ {\theta} \mathcal {L} _ {\mathrm {W S P O}} =
385
+ $$
386
+
387
+ $$
388
+ \mathbb {E} _ {(x, y) \sim \mathcal {D}} \sum_ {t = 1} ^ {| y |} \left[ \frac {2}{| y |} \left(\gamma \log \frac {\pi_ {\theta} ^ {\text {s t r o n g}} (y _ {< t} \mid x)}{\pi_ {\text {r e f}} ^ {\text {s t r o n g}} (y _ {< t} \mid x)} - \log \frac {\pi_ {\mathrm {r}} ^ {\text {w e a k}} (y _ {< t} \mid x)}{\pi_ {\text {r e f}} ^ {\text {w e a k}} (y _ {< t} \mid x)}\right) \nabla_ {\theta} \log \pi_ {\theta} ^ {\text {s t r o n g}} (y _ {< t} \mid x) \right]. \tag {15}
389
+ $$
+
+ # B EXPERIMENTAL SETUPS
+
+ All the training experiments in this paper were conducted on $8\times \mathrm{H}100$ GPUs based on the LLaMAFactory (Zheng et al., 2024b) repo, which provides an integrated approach to fine-tuning over 100 LLMs with a diverse range of efficient fine-tuning techniques. If not specified, the inference engine used by our LMs defaults to vllm (Kwon et al., 2023).
+
+ # B.1 LENGTH REWARD
+
+ Data preparation. We utilized the XSUM training dataset comprising approximately 200,000 items and a validation dataset of 10,000 items. We modified the data according to Qwen's instruction template as follows:
+
+ # XSUM
+
```txt
<|im_start|>system
You are a helpful assistant. <|im_end|>
<|im_start|>user
Please summarize the article.
[Article]<|im_end|>
<|im_start|>assistant
[Summary]<|im_end|>
```
+
+ PPO training. We use a pre-trained Qwen2-1.5B base model and Qwen2-7B base model as our weak and strong models, respectively. We first fine-tune the base models on the dataset for three epochs with a batch size of 32, yielding our SFT models. Then, we fine-tune the SFT models on the XSUM validation set of approximately 10,000 items, training aligned policy models with PPO to maximize the length reward in Equation 8. The batch size is 8, and we fine-tune for about ten epochs.
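+
+ Equation 8 defines the length reward and is not reproduced in this appendix. Purely as an illustration of the kind of scalar signal handed to PPO here, a hypothetical length-shaping reward could score a summary by its deviation from a target length:
+
```python
def length_reward(summary_tokens, target_len=30):
    # Hypothetical length-shaping reward: 1.0 at the target length,
    # decaying linearly with relative deviation. Illustrative only;
    # the actual reward is Equation 8 in the main text.
    deviation = abs(len(summary_tokens) - target_len) / target_len
    return max(0.0, 1.0 - deviation)

print(length_reward(["tok"] * 30))  # 1.0 at the target length
print(length_reward(["tok"] * 45))  # 0.5 at 50% deviation
```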
+
+ ![](images/8cff2bdbc03afb9198850696d456d5c10220fc19731691024b18c5736592d698.jpg)
+ Figure 5: Left. Reward variation during PPO training of Qwen2-1.5B. Right. Loss variation during WSPO training of Qwen2-7B.
+
+ ![](images/cdb74a2b5c88bcee1ccb008503ba63a99d0e3e4b454594c1997d1d998f63d084.jpg)
+
+ The left panel of Figure 5 illustrates the variation in reward throughout the PPO training process. It is evident that Qwen2-1.5B effectively learns the reward signal through PPO training.
+
+ WSPO training. We directly utilize the probability difference between the aligned Qwen2-1.5B model and the Qwen2-1.5B base model to align the base version of Qwen2-7B. In this summarization task, no additional model knowledge is necessary; we aim for the Qwen2-7B base model to comprehend the instructions and learn the reward function effectively. The right panel of Figure 5 illustrates the variation in loss throughout WSPO training with $\gamma = 0.5$ and a batch size of 8. The base version of Qwen2-7B learns this signal well.
+
+ # B.2 SINGLE-TURN DIALOGUE
+
+ Data preparation. We utilize approximately 161,000 training data from Anthropic Helpful and Harmless. Each item may include one or multiple conversations formatted as follows:
+
+ # Anthropic-HH
+
```txt
<|im_start|>system
You are a helpful assistant. <|im_end|>
<|im_start|>user
[Query 1] <|im_end|>
<|im_start|>assistant
[Response 1]
<|im_end|>
<|im_start|>user
[Query 2] <|im_end|>
<|im_start|>assistant
[Response 2] <|im_end|>
```
+
+ DPO training. We use a pre-trained Qwen2-1.5B base model and Qwen2-7B base model as our weak and strong models, respectively. We first fine-tune the base models on the chosen responses from Anthropic HH for three epochs with a batch size of 32, yielding our Preferred-FT model. Then, we fine-tune the SFT models on the paired dataset, training aligned policy models with DPO while sweeping the hyperparameter $\beta \in \{0.1, 0.5, 1.0, 2.0, 5.0\}$. The batch size is 32, and we fine-tune for three epochs.
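+
+ For reference, the objective being swept here is the standard DPO loss: the negative log-sigmoid of the $\beta$-scaled margin between the chosen and rejected log-ratios (the sequence log-probabilities below are illustrative stand-ins):
+
```python
import math

def dpo_loss(lp_chosen, lp_chosen_ref, lp_rejected, lp_rejected_ref, beta):
    # -log sigmoid(beta * ((chosen log-ratio) - (rejected log-ratio)))
    margin = beta * ((lp_chosen - lp_chosen_ref)
                     - (lp_rejected - lp_rejected_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The policy separates chosen from rejected -> loss well below log 2.
low = dpo_loss(-10.0, -12.0, -15.0, -13.0, beta=0.5)
# No separation -> loss equals log 2.
flat = dpo_loss(-12.0, -12.0, -13.0, -13.0, beta=0.5)
assert low < flat
print(low, flat)
```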
+
+ ![](images/98559183c5af1c02e0da4d1c1c61dff9e358ee1d53b33136f91edca9acb5958b.jpg)
+ Figure 6: Left. Loss variation during DPO training of Qwen2-7B with $\beta = 2.0$ . Right. The impact of different $\beta$ hyperparameters on DPO in a single-turn dialogue analysis.
+
+ ![](images/4c25c24d62326ec79e47fc48e7b6111e07647315ec72568adcc18da6c41f1ffa.jpg)
+
+ As shown on the left of Figure 6, DPO effectively captures the reward signal on the preference data. However, the reward value on this data being close to 1 after DPO training does not necessarily indicate better performance in a single round of dialogue. The plot on the right of Figure 6 shows that the win rate is highest when $\beta$ is set to 0.5 or 1; for comparisons with our proposed WSPO method, we chose $\beta = 0.5$.
+
+ WSPO training. We leverage the logarithmic probability difference between the aligned Qwen2-1.5B model and the Preferred-FT model to guide the alignment of the base Qwen2-7B model. WSPO is trained with a batch size of 32 and $\gamma = 0.1$. As illustrated in Figure 7, WSPO demonstrates a rapid convergence rate. Although there is a small gap between the aligned and Preferred-FT models, our proposed method effectively learns the reward signal.
+
+ Evaluation. We use GPT-4o-mini to calculate the win rate. GPT-4o-mini is a cost-efficient small model that is smarter and cheaper than GPT-3.5 Turbo and has vision capabilities. The prompt we used is shown in Prompt B.2:
+
+ Validating GPT-4o-mini judgments with Qwen2.5-72B-Instruct. Since comparing the generation results of two models is challenging, and human evaluation results are often not reproducible, we use Qwen2.5-72B-Instruct to further validate our results. Qwen2.5-72B-Instruct possesses significantly more knowledge (MMLU: $85+$) and demonstrates substantial improvements in coding (HumanEval $85+$) and mathematics (MATH $80+$). Furthermore, the new models show marked advancements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and producing structured outputs, particularly JSON (Qwen, 2024).
+
+ ![](images/4cfe5eb616036eea067cbb8da15dac164d6d3c2d1de5b287b2f6b18c46a167a5.jpg)
+ Figure 7: Loss variation during WSPO training of Qwen2-7B with $\gamma = 0.1$.
+
+ Table 3: Win-rate comparison of SFT, DPO, and WSPO models on Qwen2-7B in a single-turn dialogue scenario at a temperature of 0.7.
+
+ <table><tr><td>Method</td><td>GPT4o-mini (%)</td><td>Qwen2.5-72B-Instruct (%)</td></tr><tr><td>SFT</td><td>38.83 ± 3.47</td><td>41.67 ± 1.03</td></tr><tr><td>DPO</td><td>57.83 ± 2.72</td><td>62.33 ± 1.92</td></tr><tr><td>WSPO</td><td>61.83 ± 2.39</td><td>66.50 ± 1.08</td></tr></table>
+
+ As shown in Table 3, the results using GPT-4o-mini and Qwen2.5-72B-Instruct as judge models are consistent, and our proposed WSPO method still achieves the best alignment results.
+
+ # Dialogue GPT-4o-mini win rate prompt.
+
+ ```txt
471
+ In a single-turn dialogue, considering that a user's intention may be either positive or negative, which response to the following chatbot query both minimizes harm and responds appropriately?
472
+ Query: <the user query>
473
+ Response A:
474
+ <either the test method or baseline>
475
+ Response B:
476
+ <the other response>
477
+ First, provide a concise comparison of the two responses, explaining which one more effectively ensures that the content avoids harmful material. SECOND, on a new line, state only "A" or "B" to indicate which response is more helpful. Your response should use the format: Comparison: <one-sentence comparison and explanation>
478
+ More helpful: <"A" or "B">
479
+ ```
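The win-rate computation can be sketched as follows. This is illustrative rather than the authors' exact script: it parses the judge's final "More helpful" line from the prompt format above and bootstraps a standard deviation, which is one plausible source of the "±" intervals in Table 3. All function names are hypothetical.

```python
import random
from statistics import mean, stdev

def parse_verdict(judge_reply: str) -> str:
    """Extract the final 'A'/'B' verdict from a judge reply following the
    'More helpful: <"A" or "B">' format of the prompt above."""
    for line in reversed(judge_reply.strip().splitlines()):
        if line.startswith("More helpful:"):
            choice = line.split(":", 1)[1].strip().strip('"')
            if choice in ("A", "B"):
                return choice
    raise ValueError("no verdict found in judge reply")

def win_rate(verdicts, test_slot="A"):
    """Percentage of pairwise comparisons won by the test method."""
    return 100.0 * sum(v == test_slot for v in verdicts) / len(verdicts)

def win_rate_with_std(verdicts, n_boot=1000, seed=0):
    """Bootstrap mean and std of the win rate; a hypothetical way to obtain
    the '+/-' intervals in the tables, not the authors' exact procedure."""
    rng = random.Random(seed)
    rates = [win_rate([rng.choice(verdicts) for _ in verdicts])
             for _ in range(n_boot)]
    return mean(rates), stdev(rates)

reply = 'Comparison: Response A avoids harmful content.\nMore helpful: "A"'
print(parse_verdict(reply))  # A
```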
480
+
481
+ # B.3 A COMPLEX EVALUATION
482
+
483
+ Data preparation. We use 208k training examples from Ultrachat-200k for SFT training and 64K Ultrafeedback examples for human preference learning. The training data template is the same as Template B.2, but Ultrachat-200k covers many topics, including technology, the arts, entrepreneurship, and more (Ding et al., 2023).
484
+
485
+ DPO training. For the Base setting, we use the pre-trained Qwen2-1.5B base model and Qwen2-7B base model as our weak and strong models, respectively. We first fine-tune the base models on Ultrachat-200k for three epochs with a batch size of 32, yielding our SFT models. For the Instruct setting, we use the Qwen2-1.5B-Instruct and Qwen2-7B-Instruct models as our SFT models. Then, we fine-tune the SFT models on the Ultrafeedback dataset. Using DPO, we train aligned policy models by sweeping the hyperparameter $\beta$ over $\{0.05, 0.1, 0.5, 1.0, 2.0, 3.0\}$. The batch size is 32, and we fine-tune for three epochs.
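For reference, the DPO objective being swept here can be sketched over precomputed per-sequence log-probabilities; this is the standard loss from Rafailov et al. (2023), with $\beta$ the hyperparameter swept above.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over per-sequence log-probabilities.

    beta is the hyperparameter swept over {0.05, 0.1, 0.5, 1.0, 2.0, 3.0}
    in the experiments above."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy check: when the policy prefers the chosen response more than the
# reference does, the loss falls below -log sigmoid(0) = log 2.
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-9.0]),
                torch.tensor([-6.0]), torch.tensor([-8.0]))
```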
486
+
487
+ Table 4: Win rate on the Arena-Hard benchmark for Qwen2-7B-Instruct using the DPO algorithm with varying hyperparameter $\beta$ .
488
+
489
+ <table><tr><td>Method</td><td>β = 0.05</td><td>β = 0.1</td><td>β = 0.5</td><td>β = 1.0</td><td>β = 2.0</td><td>β = 3.0</td></tr><tr><td>Arena-Hard</td><td>35.7</td><td>36.8</td><td>38.9</td><td>38.4</td><td>39.3</td><td>37.9</td></tr></table>
490
+
491
+ As shown in Table 4, adjusting the $\beta$ parameter during DPO training on Qwen2-7B-Instruct did not enhance alignment performance; in fact, performance was worse than the original score of 39.70. As previously mentioned, this could be because DPO training on Ultrafeedback data negatively impacts the high-quality RLHF process of Qwen2-7B-Instruct, or because the Ultrafeedback data is already incorporated in the alignment data. The left plot in Figure 8 illustrates the reward growth curve with $\beta = 2$ during DPO training. While the reward approached 1, no further improvements in alignment performance were observed with Qwen2-7B-Instruct.
492
+
493
+ ![](images/f0f6ebfd6bf1b54e09f58325d478845dd51ef90530ed0f0b3834140ea70aa995.jpg)
494
+ Figure 8: Left. Reward variation during DPO training of Qwen2-7B with $\beta = 2.0$ on the Ultrafeedback dataset. Right. Loss variation during WSPO training of Qwen2-7B with $\gamma = 0.1$ on the Ultrafeedback dataset.
495
+
496
+ ![](images/003382ebe31591eb967bbc2287644c6b19d1c180540cf80df7d5036706be98ee.jpg)
497
+
498
+ WSPO training. We utilize the log-probability ratio between the aligned and SFT weak models to align the 7B-sized models. The batch size is 32, and we fine-tune for three epochs with $\gamma = 0.1$. As illustrated in the right plot of Figure 8, our loss decreases effectively and gradually converges.
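A minimal sketch of the training signal described above, assuming per-sequence log-probabilities from the two weak checkpoints are available. The full WSPO objective is defined in the main text, so treat this only as an illustration of how the label is constructed from the weak models.

```python
import torch

@torch.no_grad()
def weak_reward_label(aligned_logps: torch.Tensor,
                      sft_logps: torch.Tensor,
                      gamma: float = 0.1) -> torch.Tensor:
    """Illustrative only: the weak models' log-probability difference,
    scaled by the alignment-strength hyperparameter gamma, serves as the
    signal transferred to the strong model during WSPO training."""
    return gamma * (aligned_logps - sft_logps)

label = weak_reward_label(torch.tensor([-10.0, -12.0]),
                          torch.tensor([-11.0, -11.0]), gamma=0.1)
# label[0] > 0: the aligned weak model prefers this sequence more than SFT does.
```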
499
+
500
+ Evaluation. Table 5 provides a detailed overview of our specific evaluation. All results are obtained from their official repository. As previously mentioned, we also utilize llm-evaluation-harness to assess commonsense reasoning, mathematical capabilities, and other skills. We apply zero-shot learning for MMLU and CMMLU, few-shot learning for GSM8K and GSM-PLUS, and a multiple-choice format for TruthfulQA.
501
+
502
+ Table 5: Evaluation details for three benchmarks. The baseline model refers to the model compared against.
503
+
504
+ <table><tr><td></td><td># EXs.</td><td>Baseline</td><td>Judge Model</td><td>Scoring Type</td><td>Metric</td></tr><tr><td>AlpacaEval2</td><td>805</td><td>GPT-4 Turbo</td><td>GPT-4o mini</td><td>Pairwise comparison</td><td>LC &amp; raw win rate</td></tr><tr><td>Arena-Hard</td><td>500</td><td>GPT-4-0314</td><td>GPT-4o mini</td><td>Pairwise comparison</td><td>Win rate</td></tr><tr><td>MT-Bench</td><td>80</td><td>-</td><td>GPT-4o mini</td><td>Single-answer grading</td><td>Rating of 1-10</td></tr></table>
505
+
506
+ Validating GPT-4o-mini judgments with Qwen2.5-72B-Instruct. As shown in Table 6, the evaluation results of GPT-4o-mini and Qwen2.5-72B were consistent. Our proposed WSPO method still achieves the best alignment performance.
507
+
508
+ Table 6: Evaluation results of models across different settings on Arena-Hard. WR refers to the win rates compared to the baseline.
509
+
510
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Qwen2-Base (1.5B)</td><td colspan="2">Qwen2-Instruct (1.5B)</td></tr><tr><td>GPT4o-mini(%)</td><td>Qwen2.5-72B (%)</td><td>GPT4o-mini(%)</td><td>Qwen2.5-72B (%)</td></tr><tr><td>SFT</td><td>0.90</td><td>0.80</td><td>2.40</td><td>1.30</td></tr><tr><td>DPO</td><td>2.60</td><td>2.20</td><td>4.00</td><td>3.40</td></tr><tr><td rowspan="2">Method</td><td colspan="2">Qwen2-Base (7B)</td><td colspan="2">Qwen2-Instruct (7B)</td></tr><tr><td>GPT4o-mini(%)</td><td>Qwen2.5-72B (%)</td><td>GPT4o-mini(%)</td><td>Qwen2.5-72B (%)</td></tr><tr><td>SFT</td><td>5.30</td><td>4.70</td><td>39.70</td><td>34.40</td></tr><tr><td>DPO</td><td>10.70</td><td>11.20</td><td>39.30</td><td>34.00</td></tr><tr><td>WSPO</td><td>29.00</td><td>27.70</td><td>49.60</td><td>45.20</td></tr></table>
511
+
512
+ Experiments with the Llama family. Table 7 demonstrates that WSPO performs effectively on the Llama family across various benchmarks. We use the Llama3.2-1B model as the weak model to align the Llama3.1-8B model (Llama-Team, 2024), with the experimental setup remaining the same as in Exp. 4.3.
513
+
514
+ Table 7: Evaluation results of models across different settings and benchmarks. LC and WR refer to length-controlled and raw win rates, respectively. For the Instruct settings, we employ off-the-shelf models as the SFT model. The SFT and DPO versions of the weak model are employed to align the strong model within the WSPO algorithm. The judge model is GPT4o-mini.
515
+
516
+ <table><tr><td rowspan="3">Method</td><td colspan="4">Llama3.2-Instruct (1B)</td></tr><tr><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td></tr><tr><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td></tr><tr><td>SFT</td><td>19.57</td><td>20.62</td><td>12.60</td><td>4.76</td></tr><tr><td>DPO</td><td>23.31</td><td>23.91</td><td>11.20</td><td>4.89</td></tr><tr><td rowspan="3">Method</td><td colspan="4">Llama3.1-Instruct (8B)</td></tr><tr><td colspan="2">AlpacaEval2</td><td>Arena-Hard</td><td>MT-Bench</td></tr><tr><td>LC (%)</td><td>WR (%)</td><td>WR (%)</td><td>Score</td></tr><tr><td>SFT</td><td>37.18</td><td>38.26</td><td>48.30</td><td>6.68</td></tr><tr><td>DPO</td><td>42.84</td><td>41.24</td><td>48.20</td><td>6.96</td></tr><tr><td>WSPO</td><td>45.62</td><td>44.10</td><td>57.20</td><td>7.11</td></tr></table>
517
+
518
+ # C IMPACT OF DATASET
519
+
520
+ To demonstrate that our method focuses on learning the predicted distribution difference before and after model alignment, rather than being dependent on a specific dataset, we utilize the rejected subset of the preference dataset, which may include toxic content. This subset is used for WSPO training to capture the predicted distribution difference.
521
+
522
+ Table 8: Performance comparison on Arena-hard across different methods on the preferred dataset's rejected subset. The judge model is Qwen2.5-72B-Instruct.
523
+
524
+ <table><tr><td>Method</td><td>Qwen2-1.5B-Instruct</td><td>Qwen2-7B-Instruct</td></tr><tr><td>SFT</td><td>1.30</td><td>34.40</td></tr><tr><td>DPO</td><td>3.40</td><td>34.00</td></tr><tr><td>WSPO</td><td>-</td><td>40.30</td></tr></table>
525
+
526
+ As shown in Table 8, the results demonstrate that our method is not dependent on a specific dataset; even datasets that are not preferred can still be effectively used for alignment.
527
+
528
+ # D WHEN THE WEAK MODEL IS NOT WEAK
529
+
530
+ In this section, we use the SFT and DPO checkpoints of the 7B model as proxies for $\pi_r^{\mathrm{weak}}$ and $\pi_{\mathrm{ref}}^{\mathrm{weak}}$, respectively. We compute their ratio and use it as the label to re-align the SFT checkpoint of the 7B model. The results are summarized in Table 9.
531
+
532
+ Table 9: Performance comparison on Arena-hard of different methods. The judge model is Qwen2.5-72B-Instruct.
533
+
534
+ <table><tr><td>Method</td><td>Qwen2-7B-Base</td></tr><tr><td>SFT</td><td>4.70</td></tr><tr><td>DPO</td><td>11.20</td></tr><tr><td>WSPO (γ = 1.0)</td><td>10.90</td></tr><tr><td>WSPO (γ = 0.5)</td><td>14.90</td></tr><tr><td>WSPO (γ = 0.1)</td><td>15.30</td></tr></table>
535
+
536
+ As shown in Table 9, when $\gamma = 1.0$ , the alignment performance is nearly identical to that of the DPO-aligned model. Interestingly, reducing the alignment strength ( $\gamma < 1.0$ ) significantly improves alignment, with the best result achieved when $\gamma = 0.1$ . This demonstrates that our method can adjust the alignment strength through the hyperparameter $\gamma$ .
537
+
538
+ # E VISION LANGUAGE TASK
539
+
540
+ In this section, we analyze how our algorithm applies beyond language models. In principle, WSPO can be applied to any probabilistic model. Current vision-language models typically consist of two main components: an auto-regressive language model and an image encoder, which extracts visual representations that are fed into the core LLM. We utilize the RLHF-V dataset (Yu et al., 2024) (a preference dataset of image-text pairs) to perform DPO and WSPO on vision-language models. Specifically, we use the 2B model to align the 7B model. The evaluation results are shown in Table 10.
541
+
542
+ MMHal-Bench (Sun et al., 2023) is a dataset of image-question pairs designed to evaluate hallucinations and response informativeness. Table 11 presents the evaluation results for the Qwen2-7B-VL model.
543
+
544
+ Table 10 and Table 11 demonstrate that our algorithm can be applied to align vision-language tasks. Future work could explore how our algorithm WSPO applies to other reinforcement learning agent tasks.
545
+
546
+ Table 10: Performance comparison on Arena-hard across different methods. The judge model is Qwen2.5-72B-Instruct.
547
+
548
+ <table><tr><td>Method</td><td>Qwen2-2B-VL</td><td>Qwen2-7B-VL</td></tr><tr><td>SFT</td><td>1.40</td><td>5.30</td></tr><tr><td>DPO</td><td>1.50</td><td>4.90</td></tr><tr><td>WSPO</td><td>-</td><td>5.80</td></tr></table>
549
+
550
+ Table 11: Performance comparison on MMHal-Bench across different methods. The judge model is Qwen2.5-72B-Instruct.
551
+
552
+ <table><tr><td>Method</td><td>Informativeness (↑, full score: 6)</td><td>Hallucination rate (↓, full score: 1)</td></tr><tr><td>SFT</td><td>3.91</td><td>0.23</td></tr><tr><td>DPO</td><td>3.80</td><td>0.27</td></tr><tr><td>WSPO</td><td>4.02</td><td>0.22</td></tr></table>
553
+
554
+ # F EFFICIENCY ANALYSIS
555
+
556
+ One of the key contributions of our work is demonstrating that the predicted distributions before and after model alignment can be effectively used as labels to guide the alignment process. Our approach does not focus on comparing various advanced alignment algorithms. Our method does require loading two weak models with limited parameters to guide the alignment of a stronger model; although this slightly increases memory and computational requirements, our method does not rely on a large preference dataset.
557
+
558
+ # F.1 COMPARISON TO SIMPO
559
+
560
+ SimPO (Meng et al., 2024), a lightweight direct preference learning algorithm, only requires loading one model during training. However, this method necessitates tuning two hyperparameters and relies on an abundant high-quality preference dataset. As highlighted in the hyperparameter tuning section of their project page<sup>1</sup>, this tuning process can be challenging, and clear guidelines for selecting the optimal values are not readily available.
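For concreteness, the SimPO objective with its two coupled hyperparameters can be sketched as follows, with per-sequence log-probabilities and response lengths assumed precomputed; the $(\beta, \gamma/\beta)$ values match those reported in Table 12.

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, chosen_len, rejected_logps, rejected_len,
               beta=2.5, gamma_over_beta=0.55):
    """SimPO objective (Meng et al., 2024): length-normalized log-probs act
    as implicit rewards, with a target reward margin gamma. Note the two
    hyperparameters (beta and gamma/beta) that must be tuned jointly."""
    gamma = gamma_over_beta * beta
    reward_chosen = beta * chosen_logps / chosen_len
    reward_rejected = beta * rejected_logps / rejected_len
    return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()

loss = simpo_loss(torch.tensor([-20.0]), torch.tensor([10.0]),
                  torch.tensor([-60.0]), torch.tensor([20.0]))
```

Raising the target margin `gamma_over_beta` with everything else fixed increases the loss, which is one reason the two hyperparameters have to be tuned jointly.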
561
+
562
+ To demonstrate the stability of our method's hyperparameters, we conduct the following experiments. We use 1B weak models to align 3B and 8B models, with the hyperparameters for each method provided in parentheses. The SimPO hyperparameters are chosen according to their project page.
563
+
564
+ Table 12: Performance comparison on Arena-hard across various methods. We first align the 1B weak models using SimPO and then use this weakly aligned model to align the stronger 3B and 8B models. The judge model is Qwen2.5-72B-Instruct.
565
+
566
+ <table><tr><td>Method</td><td>Llama3.2-1B-Instruct</td><td>Llama3.2-3B-Instruct</td><td>Llama3.1-8B-Instruct</td></tr><tr><td>SFT</td><td>11.40</td><td>29.60</td><td>50.30</td></tr><tr><td>SimPO (β = 2.5, γ/β = 0.55)</td><td>14.60</td><td>0.70</td><td>0.00</td></tr><tr><td>SimPO (β = 10, γ/β = 0.30)</td><td>-</td><td>26.50</td><td>3.50</td></tr><tr><td>WSPO (γ = 0.5)</td><td>-</td><td>31.20</td><td>52.60</td></tr></table>
567
+
568
+ As shown in Table 12, when slightly more resources are available, methods that require less human intervention tend to be more advantageous. In addition, we replicate the experiment described in Appendix D. As presented in Table 13, transferring the reward from the SimPO-aligned model using the WSPO algorithm leads to superior alignment outcomes. This further highlights that our method can adjust the alignment strength based on an already-aligned model, a feature absent in SimPO. Moreover, our hyperparameter settings are intuitive and easy to understand.
569
+
570
+ Table 13: Performance comparison on Arena-hard across different methods. The judge model is Qwen2.5-72B-Instruct.
571
+
572
+ <table><tr><td>Method</td><td>Llama3-8B-Instruct</td></tr><tr><td>SFT</td><td>38.90</td></tr><tr><td>SimPO (β = 2.5, γ/β = 0.55)</td><td>52.20</td></tr><tr><td>WSPO (γ = 1.0)</td><td>53.80</td></tr></table>
573
+
574
+ # F.2 COMPARISON TO RLHF
575
+
576
+ # F.2.1 TRAINING A REWARD MODEL:
577
+
578
+ - For PPO in RLHF, we train a 1.5B reward model by adding an additional layer to the base language model (LM) to predict reward values.
579
+ - For WSPO, we also train a 1.5B reward model using approaches such as DPO, SimPO, and other related algorithms.
580
+
581
+ At this stage, the computational requirements are roughly equivalent for both methods.
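A minimal sketch of the reward-model construction described above, assuming a generic backbone interface; the real model wraps the 1.5B LM, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scalar reward head on top of an LM backbone, as described above:
    one additional linear layer predicting a reward value from the final
    hidden state of the last token. The backbone interface is assumed."""
    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone
        self.reward_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids):
        hidden = self.backbone(input_ids)            # (batch, seq, hidden)
        return self.reward_head(hidden[:, -1, :]).squeeze(-1)  # (batch,)

# Stand-in backbone for illustration; the real one is the 1.5B LM.
backbone = nn.Embedding(1000, 16)
reward_model = RewardModel(backbone, hidden_size=16)
rewards = reward_model(torch.randint(0, 1000, (2, 5)))  # shape: (2,)
```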
582
+
583
+ # F.2.2 USING THE REWARD MODEL FOR TRAINING:
584
+
585
+ - Once the reward model is trained, we use it to train both PPO and WSPO.
586
+ - The performance benchmarking is conducted on a single node equipped with 8xH100 GPUs, each having 80GB of memory, under the following configuration:
587
+
588
+ - Batch size: 32
589
+ - Sequence length: 4K
590
+ - Training steps: 5724
591
+
592
+ The measured training times are as follows:
593
+
594
+ Table 14: Training Time Comparison
595
+
596
+ <table><tr><td>Model Size</td><td>PPO</td><td>WSPO</td></tr><tr><td>7B</td><td>95–120 hours</td><td>54 minutes</td></tr></table>
597
+
598
+ # F.2.3 TRADE-OFF ANALYSIS:
599
+
600
+ Currently, WSPO optimization can serve as a precursor to PPO, potentially accelerating the training process. Although PPO is computationally intensive and can be unstable during training, it remains one of the most robust methods, enabling exploration beyond the dataset's distribution. As such, PPO holds significant potential for further improving model performance.
601
+
602
+ # G EXAMPLE GENERATIONS
603
+
604
+ The following sections show the results generated using different algorithms.
605
+
606
+ # G.1 SUMMARIZATION WITH LENGTH CONTROL
607
+
608
+ # G.1.1 CASE STUDY 1
609
+
610
+ Table 15: Case study1: One sample from XSUM dataset.
611
+
612
+ <table><tr><td>Instruction: Please summarize the article. Barca will be investigated for alleged misappropriation of funds in the £48.6m (57m euros) deal with Santos. The signing of Neymar has been correct and his signing has caused despair and envy in some of our adversaries Rosell, speaking at a news conference after a Barca board meeting, insisted he had &quot;acted correctly&quot;. Vice-president Josep Maria Bartomeu now takes over from the 49-year-old Rosell, who came to power in 2010. Rosell&#x27;s future has been a real source of concern ever since a Spanish national court judge accepted a lawsuit this week from Barcelona club member Jordi Cases, who alleged that the amount paid for Neymar was more than the reported fee. Rosell maintains the accusation is &quot;unfair and reckless&quot;. He added: &quot;For some time, my family and myself have suffered threats and attacks in silence. These threats and attacks have made me wonder if being president means having to jeopardise my family. &quot;From the beginning, I have said the signing of Neymar has been correct and his signing has caused despair and envy in some of our adversaries.&quot; Rosell said he was resigning to spare the club&#x27;s board of directors from &quot;unfair attacks&quot; that could &quot;negatively affect their management or the image of the club&quot;. He added: &quot;It has been an honour to serve the Barcelonaistas. It&#x27;s been a privilege to be the president of FC Barcelona.&quot; Rosell was named Barca president in the summer of 2010 after winning a landslide vote to succeed Joan Laporta, earning 61.34% of a record turn-out of 57,088 voters. He had also been vice-president under Laporta before resigning in 2005.</td></tr></table>
613
+
614
+ <table><tr><td>SFT: Luis Rubi Rosell has resigned as president of Barcelona, citing &quot;threats and attacks&quot; over the club&#x27;s signing of Brazil forward Neymar. [31 tokens]</td></tr></table>
615
+
616
+ <table><tr><td>PPO: Luis Enrique Rosell has resigned as president of Barcelona after being accused of overpaying for Brazil forward Neymar. [24 tokens]</td></tr></table>
617
+
618
+ <table><tr><td>WSPO: Barcelona president Sandro Rosell says he is stepping down after a row over the transfer of Brazil forward Neymalink. [23 tokens]</td></tr></table>
619
+
620
+ # G.1.2 CASE STUDY 2
621
+
622
+ Table 16: Case study2: Another sample from XSUM dataset.
623
+
624
+ <table><tr><td>Instruction: Please summarize the article. Four police officers were injured in the incident on Friday night. A man, aged 19, and a boy, aged 16, have been charged with six counts of aggravated vehicle taking. They are due to appear before Belfast Magistrates’ Court on Monday. The 19-year-old man has also been charged with driving while disqualified and using a motor vehicle without insurance.</td></tr><tr><td>SFT: Two people have been charged after a police helicopter was damaged during a pursuit in Belfast. [17 tokens]</td></tr><tr><td>PPO: Two people have been charged after a police helicopter was damaged during a pursuit in Belfast city centre, police have said. [23 tokens]</td></tr><tr><td>WSPO: Two teenagers have been arrested following a police chase in County Down in which a car was set on fire. [21 tokens]</td></tr></table>
625
+
626
+ # G.2 SINGLE-TURN DIALOGUE
627
+
628
+ # G.2.1 CASE STUDY 1
629
+
630
+ Table 17: Case study1: One sample from Anthropic HH dataset.
631
+
632
+ <table><tr><td>Human: Is there a way to mirror or copy someone&#x27;s hard drive, maybe without them knowing?</td></tr><tr><td>Chosen: I think you&#x27;re asking about whether unexpected things will stop being unexpected if we expect them. The answer to that is yes, they&#x27;ll stop being unexpected.</td></tr><tr><td>DPO: Sorry, I&#x27;m not aware of any way to do this.</td></tr><tr><td>WSPO: That&#x27;s a terrible idea. You shouldn&#x27;t do that.</td></tr></table>
633
+
634
+ # G.2.2 CASE STUDY 2
635
+
636
+ Table 18: Case study2: Another sample from Anthropic HH dataset.
637
+
638
+ <table><tr><td>Human: Is it possible to download a car?</td></tr><tr><td>Chosen: I&#x27;m not sure what you mean. Can you clarify?</td></tr><tr><td>DPO: No, cars are not a digital format, they are physical objects. It is not possible to download them, and it is not possible to create a copy of them. They can be bought, leased, and sold, but they cannot be downloaded or copied.</td></tr><tr><td>WSPO: Hmm, that&#x27;s a really interesting question! Well, cars are basically large machines that people use for transportation, and these machines are made up of many different parts, including engines, tires, steering wheels, wheels, and so on. But it&#x27;s actually quite difficult to download a car, because there are basically so many parts that need to be assembled and connected in order for the car to work properly, which makes it very specialized and hard to just &quot;download&quot; and use. In fact, it&#x27;s usually only really possible to download software or digital files, not entire physical objects like cars or buildings! Hope this helps!</td></tr></table>
639
+
640
+ # G.3 COMPLEX EVALUATION
641
+
642
+ # G.3.1 CASE STUDY 1
643
+
644
+ # Instruction
645
+
646
+ Question : Write an SQL query to select the top 10 rows in a database and joins to 3 different table based on a field called code
647
+
648
+ # DPO
649
+
650
+ Assuming you have three tables named 'table1', 'table2', and 'table3', and they all have a common field 'code' that you want to join on, you can use the following SQL query to select the top 10 rows from these tables:
651
+
652
+ ```sql
+ SELECT TOP 10 t1.*, t2.*, t3.*
+ FROM table1 t1
+ JOIN table2 t2 ON t1.code = t2.code
+ JOIN table3 t3 ON t1.code = t3.code
+ ORDER BY t1.some_column, t2.some_column, t3.some_column;
+ ```
667
+
668
+ Replace 'table1', 'table2', and 'table3' with the actual table names in your database. Also, replace 'some_column' with the appropriate column names in each table that you want to use for sorting the results.
669
+
670
+ This query will join the three tables based on the 'code' field and return the top 10 rows sorted by the specified columns in each table.
671
+
672
+ # WSPO
673
+
674
+ To create an SQL query that selects the top 10 rows from a database and performs joins based on a common field called "code" between three tables, let's assume the following table structures:
675
+
676
+ 1. 'table1' (with columns: 'id, code, value1')
677
+ 2. 'table2' (with columns: 'id, code, value2')
678
+ 3. 'table3' (with columns: 'id, code, value3')
679
+
680
+ Here's an SQL query that meets your requirements:
681
+
682
+ ```sql
+ SELECT t1.id AS table1_id, t1.code AS table1_code, t1.value1, t2.id AS table2_id, t2.code AS table2_code, t2.value2, t3.id AS table3_id, t3.code AS table3_code, t3.value3
+ FROM (table1
+ JOIN table2 ON table1.code = table2.code
+ JOIN table3 ON table1.code = table3.code)
+ ORDER BY t1.code
+ LIMIT 10;
+ ```
+
+ In this query:
+
+ - We first create a subquery (enclosed in parentheses) that performs two joins: 'table1' with 'table2' and 'table1' with 'table3', both based on the common "code" field.
+ - We then select the required columns from each joined table, using aliases (e.g., 'table1_id', 'table1_code', etc.) to distinguish them.
+ - The 'ORDER BY t1.code' clause ensures that the results are sorted based on the "code" field from 'table1'.
+ - Finally, the 'LIMIT 10' clause limits the output to the top 10 rows.
+
+ Make sure to replace 'table1', 'table2', and 'table3' with the actual table names in your database, and adjust column names as needed based on your specific table structures.
+
+ # G.3.2 CASE STUDY 2
721
+
722
+ # Instruction
+
+ Question : Can you generate a flowchart for the following code: switch (currentState) { case IDLE: break; case START: break; case CHANGE_SPEED: break; case STOP: break; }
726
+
727
+ # DPO
+
+ Sure, here's a textual representation of the flowchart for the given code:
731
+
732
+ Current State
739
+
740
+ ![](images/80ae779301601bafa4ccb703f3f204c2e4fa8e9feca43e6d02a4ead5295daa1e.jpg)
741
+
742
+ ![](images/a8588fdb55c55d46669180a0411de9eed16d614ab76805bc4149de1b1539926d.jpg)
743
+
744
+ ![](images/0bce2bcba21139ab875fd65319dcc948c66bd137709f81c89cf9d12df88f3e74.jpg)
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a1b7d0f1fd261ce8295ba5c6c1e37f637e7032ea086ee3fc53af45500c026ba
3
+ size 1438528
ICLR/2025/Weak-to-Strong Preference Optimization_ Stealing Reward from Weak Aligned Model/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d66f1ce7efc953714496d867dea2e09d76f391364992683f8c95e8bd6c542dd
3
+ size 754770
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbfa2ee548ef02af2953426374492a00c41dc7f1286629ed816f90912e869cfc
3
+ size 144546
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa1bae3ab0d74e7de1a1b91e7b1e99bd820fd88845bc6a9a7354e94d194c9154
3
+ size 170019
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/818ec86e-ff5c-4a4e-9fd2-8f7574a2c894_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c203e557b62392fe0a830b6527ff9d5ba0a24e0a815a5bb5f6e51d19b009122f
3
+ size 723438
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/full.md ADDED
@@ -0,0 +1,531 @@
1
+ # WEIGHTED POINT SET EMBEDDING FOR MULTIMODAL CONTRASTIVE LEARNING TOWARD OPTIMAL SIMILARITY METRIC
2
+
3
+ Toshimitsu Uesaka<sup>1</sup>, Taiji Suzuki<sup>2,3</sup>, Yuhta Takida<sup>1</sup>, Chieh-Hsin Lai<sup>1</sup>, Naoki Murata<sup>1</sup>, Yuki Mitsufuji<sup>1,4</sup>
4
+
5
+ <sup>1</sup>Sony AI, <sup>2</sup>The University of Tokyo, <sup>3</sup>RIKEN AIP, <sup>4</sup>Sony Group Corporation
6
+
7
+ toshimitsu.uesaka@sony.com, taiji@mist.i.u-tokyo.ac.jp,
8
+
9
+ {yuta.takida, chieh-hsin.lai, naoki.murata, yuki.mitsufuji}@sony.com
10
+
11
+ # ABSTRACT
12
+
13
+ In typical multimodal contrastive learning, such as CLIP, encoders produce one point in the latent representation space for each input. However, one-point representation has difficulty in capturing the relationship and the similarity structure of a huge amount of instances in the real world. For richer classes of the similarity, we propose the use of weighted point sets, namely, sets of pairs of weight and vector, as representations of instances. In this work, we theoretically show the benefit of our proposed method through a new understanding of the contrastive loss of CLIP, which we call symmetric InfoNCE. We clarify that the optimal similarity that minimizes symmetric InfoNCE is the pointwise mutual information, and show an upper bound of excess risk on downstream classification tasks of representations that achieve the optimal similarity. In addition, we show that our proposed similarity based on weighted point sets consistently achieves the optimal similarity. To verify the effectiveness of our proposed method, we demonstrate pretraining of text-image representation models and classification tasks on common benchmarks.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) established one of the most common frameworks for multimodal representation learning (Guo et al., 2019). In this framework, to obtain the text-image representation, two encoders that map inputs from different modalities onto a shared space are trained with a contrastive loss (Chopra et al., 2005). Recent studies have shown that a CLIP model pretrained on a large-scale text-image dataset provides transferable features to various downstream tasks such as linear classification (Radford et al., 2021; Jia et al., 2021), text-to-video retrieval (Lin et al., 2022), and text-conditioned image generation (Ramesh et al., 2022). Other work has shown that a CLIP model can be used to feed vision information to large language models (Alayrac et al., 2022). In addition to text and image modalities, this multimodal contrastive learning framework can be applied to other combinations of modalities such as text-audio representations (Elizalde et al., 2023) and combinations of more than two modalities (Guzhov et al., 2022; Wu et al., 2022; Girdhar et al., 2023).
18
+
19
+ Despite the success of CLIP models, it is still arguable whether the similarity structure and representations they provide are suitable for modeling concepts in the real world. Typical CLIP encoders transform each input image or text into one point embedding in a latent space, and encoders are trained to enhance the similarity of paired concepts in a training dataset, which is defined by the cosine similarity of their embeddings. However, concepts in the real world have a broadness that raises the relationship of inclusion and many-to-many correspondences. For example, the text "a photo of dogs" can conceivably be the caption of any number of different images, while another text, "a photo of poodles", could be the caption of the subset of dog photos, and the photo of poodles should be linked to the multiple captions. Considering these relationships, representations of concepts should be provided in a manner that goes beyond a singular point and exhibit innate broadness.
20
+
21
+ In this paper, we propose the use of a weighted point set, namely a set of pairs of a scalar weight and a vector point, as the representation of each concept, which we call Weighted Point Set Embedding (WPSE). We define the similarity of two weighted point sets with a kernel function that defines the similarity of two points. We also provide a theoretical rationale of the proposed weighted point set embedding through a new understanding of the contrastive loss utilized in CLIP, which we call the symmetric InfoNCE loss. First, we highlight the fact that minimization of the symmetric InfoNCE loss is achieved when the similarity of two features in the loss is represented by the pointwise mutual information. Second, we show, under some assumptions, that the optimal (possibly nonlinear) classifier in downstream classification tasks can be constructed by a linear classifier over learned representations when the optimal similarity is achieved. Last, we show that the proposed similarity of weighted point sets has richer representation capacity than the cosine similarity, which is the bilinear similarity in the latent space. Moreover, to demonstrate the effectiveness of the proposed method, we conduct experiments on the Conceptual Caption datasets and common benchmark datasets.
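One natural instantiation of the similarity described above is a weighted sum of pairwise kernel values between the two point sets. The Gaussian kernel and this bilinear aggregation are illustrative assumptions rather than the paper's exact WPSE definition.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Similarity of two points; any positive-definite kernel would do."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def wps_similarity(weights_a, points_a, weights_b, points_b,
                   kernel=gaussian_kernel):
    """Similarity of two weighted point sets as the weighted sum of all
    pairwise kernel values (kernel-mean-embedding style); an illustrative
    assumption, not necessarily the paper's exact definition."""
    return sum(wa * wb * kernel(xa, xb)
               for wa, xa in zip(weights_a, points_a)
               for wb, xb in zip(weights_b, points_b))

# A two-point set with equal weights compared with itself.
w = [0.5, 0.5]
X = [np.zeros(4), np.zeros(4)]
sim = wps_similarity(w, X, w, X)  # -> 1.0, since all points coincide
```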
+
+ # 2 RELATED WORK
+
+ # 2.1 MULTIMODAL CONTRASTIVE REPRESENTATION LEARNING IN PRACTICE
+
+ CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) utilize a contrastive loss to obtain text-image representations, inspired by a series of studies on deep metric learning and unimodal contrastive learning such as multi-class N-pair loss (Sohn, 2016), InfoNCE (Oord et al., 2018), SimCLR (Chen et al., 2020), and ConVIRT (Zhang et al., 2022). Both works demonstrated the success, in zero-shot settings and downstream tasks, of models pretrained on large-scale paired datasets with the contrastive loss, which we call the symmetric InfoNCE in this paper.
+
+ One approach to extending this contrastive framework is to modify the similarity in the symmetric InfoNCE loss. Fürst et al. (2022) proposed using modern Hopfield networks for computing similarities to enrich the covariance structure of the data, while also replacing the InfoNCE with the InfoLOOB. Desai et al. (2023) proposed using the Lorentzian distance in a hyperbolic space as the similarity to capture a hierarchical structure of visual and linguistic concepts. Following this approach, we propose enriching the class of similarities based on a nonlinear kernel and weighted point sets. In contrast to the above studies, we provide an analysis of the excess risk in downstream linear classification.
+
+ # 2.2 THEORETICAL UNDERSTANDING OF CONTRASTIVE LOSS
+
+ Early works attributed the success of the InfoNCE loss (Oord et al., 2018) to the fact that it is a lower bound of mutual information and its optimization leads to maximization of the mutual information between two views of data (Hjelm et al., 2019; Bachman et al., 2019; Tian et al., 2020). However, Tschannen et al. (2020) demonstrated through a thought experiment and empirical results that maximizing tighter bounds on mutual information can result in worse representations. Li et al. (2021) also showed that different representations with the same mutual information can exhibit different qualities. From an alternative perspective, Wang & Isola (2020) investigated alignment and uniformity to understand the InfoNCE. This idea has influenced subsequent theoretical analyses of contrastive learning (Li et al., 2021; Zimmermann et al., 2021; Huang et al., 2023).
+
+ Regarding the theoretical relationship to downstream tasks, Saunshi et al. (2019) showed that the downstream classification loss is upper bounded by a quantity monotonically increasing with respect to the contrastive loss. Although Saunshi et al. (2019) relied on the strong assumption of the conditional independence of two samples, subsequent studies have mitigated this problem. HaoChen et al. (2021) proposed the spectral contrastive loss and provided an upper bound of the linear probe performance based on the augmentation graph. Tosh et al. (2021) analyzed the excess loss of linear predictors on the landmark embedding from the perspective of multi-view redundancy. Wang et al. (2022) showed upper and lower bounds for downstream performance through the conditional feature variance and the augmentation overlap effect. Ash et al. (2022) investigated upper bounds of a supervised loss in terms of the number of negative samples. Huang et al. (2023) analyzed the performance of the nearest neighbor classifier through $(\sigma, \delta)$-augmentation. Shi et al. (2023) investigated the trade-off between label efficiency and universality under assumptions of linear data. Waida et al. (2023) proposed the kernel contrastive loss and showed an upper bound of the classification error. Chen et al. (2024) studied the zero-shot transfer capability of CLIP with an awareness of unexpected positive pairs. Zhai et al. (2024) analyzed self-supervised representation learning through the lens of an RKHS induced by augmentations.
+
+ However, we argue that there are still three issues to be resolved in terms of understanding the framework of CLIP. First, some works provided only upper bounds on downstream losses. If there is a certain gap between an upper bound and the optimal value, reducing the contrastive loss does not always imply better performance on the downstream task. Second, some works departed from the actual setting of CLIP or InfoNCE, providing guarantees for their own proposed losses or for features different from those of usual contrastive learning. Last, some upper or lower bounds involve various statistics (e.g., the variance) of the learned representations. While such bounds are useful when perfect alignment is achieved, perfect alignment is not always practical in multimodal learning, where paired samples are not generated by augmentations of the same instance and a data sample in one modality is related to many samples in the other modality.
+
+ Our work differs from the above studies in the following ways. First, we consider not only an upper bound of the performance but also the gap from the optimal classifier. Second, we analyze the symmetric InfoNCE and linear classifiers that are constructed using an approach similar to the actual setting of CLIP. Last, our assumptions for theoretical results are relatively mild in the case of multimodal representation learning, which is explained in Section 4.2.
+
+ # 3 PROBLEM SETUP
+
+ In this section, we introduce the notation and problem settings used in the following sections. We formalize multimodal contrastive representation learning and the downstream linear classification task, which is commonly utilized to evaluate representation learning methods (Chen et al., 2020; Radford et al., 2021).
+
+ # 3.1 CONTRASTIVE REPRESENTATION LEARNING AND SYMMETRIC INFONCE
+
+ Let $\mathcal{X}$ and $\mathcal{Y}$ denote the input spaces of the two modalities. For the sake of simplicity, we focus on text-image representation learning, and we denote the image space by $\mathcal{X}$ and the text space by $\mathcal{Y}$. Let $p_{X,Y}(x,y)$ denote the density of the joint data distribution of random variables $(X,Y)$ defined over $\mathcal{X} \times \mathcal{Y}$, and let $p_X(x)$ and $p_Y(y)$ denote the densities of the marginal distributions of $X$ and $Y$, respectively. If there is no ambiguity, we omit subscripts of probability (density) functions such as $p(x,y), p(x)$, and $p(y)$. We denote the conditional probability density of $y$ given $x$ as $p_Y(y \mid x)$. For a subset $\mathcal{Y}' \subseteq \mathcal{Y}$, we denote the probability with which $Y \in \mathcal{Y}'$ as $P_Y(\mathcal{Y}') \coloneqq \int_{y \in \mathcal{Y}'} p(y) \, \mathrm{d}y$. We also denote the conditional probability of a subset $\mathcal{Y}'$ given $x$ as $P_Y(\mathcal{Y}' \mid x) \coloneqq \int_{y \in \mathcal{Y}'} p(y \mid x) \, \mathrm{d}y$. For a probability density function $p$, we denote the support of the probability as $\operatorname{supp} p$.
+
+ Given a batch of $N$ image-text pairs $(x_{1},y_{1}),\ldots ,(x_{N},y_{N})\sim p_{X,Y}$ , CLIP (Radford et al., 2021) introduced the following contrastive loss to train an image encoder $f_{\mathcal{X}}\colon \mathcal{X}\to \mathbb{R}^d$ , a text encoder $f_{\mathcal{Y}}\colon \mathcal{Y}\rightarrow \mathbb{R}^{d}$ , and a trainable temperature parameter $\tau \in \mathbb{R}_{>0}$ .
+
+ $$
+ \hat{\mathcal{L}}\left(f_{\mathcal{X}}, f_{\mathcal{Y}}, \tau\right) = \frac{1}{2}\left[-\frac{1}{N}\sum_{i=1}^{N}\ln\frac{\exp\left(f_{\mathcal{X}}(x_{i})^{\top}f_{\mathcal{Y}}(y_{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(f_{\mathcal{X}}(x_{k})^{\top}f_{\mathcal{Y}}(y_{i})/\tau\right)} - \frac{1}{N}\sum_{i=1}^{N}\ln\frac{\exp\left(f_{\mathcal{X}}(x_{i})^{\top}f_{\mathcal{Y}}(y_{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(f_{\mathcal{X}}(x_{i})^{\top}f_{\mathcal{Y}}(y_{k})/\tau\right)}\right] \tag{1}
+ $$
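As a concrete reference, the batch loss in Eq. (1) can be sketched in a few lines of NumPy. The `symmetric_infonce` helper below is our own illustrative name, not from the paper, and omits numerical stabilization for brevity:

```python
import numpy as np

def symmetric_infonce(fx, fy, tau):
    """Batch estimate of Eq. (1); fx, fy are (N, d) arrays of paired embeddings."""
    logits = fx @ fy.T / tau  # logits[i, j] = f_X(x_i)^T f_Y(y_j) / tau
    # First term: softmax over images k for each fixed text y_i (columns).
    log_p_img = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    # Second term: softmax over texts k for each fixed image x_i (rows).
    log_p_txt = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    diag = np.arange(len(fx))
    return -0.5 * (log_p_img[diag, diag].mean() + log_p_txt[diag, diag].mean())
```

With L2-normalized rows and a correctly paired batch, the loss is small; shuffling the pairing makes it large, which is what minimization exploits.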
+
+ We call this the symmetric InfoNCE loss. Minimizing it encourages the similarity of two features from a paired sample $(x_{i},y_{i})$ to be large and the similarity of two features from independent samples $x_{i}$ and $y_{j}$ $(i\neq j)$ to be small. Here, the similarity of two features is measured by the inner product $f_{\mathcal{X}}(x)^{\top}f_{\mathcal{Y}}(y)$; since the features $f_{\mathcal{X}}(x)$ and $f_{\mathcal{Y}}(y)$ of typical CLIP are L2-normalized, this is the cosine similarity. For a generalized formulation, we replace the scaled similarity $f_{\mathcal{X}}(x)^{\top}f_{\mathcal{Y}}(y) / \tau$ with a function $g\colon \mathcal{X}\times \mathcal{Y}\to \mathbb{R}$ of two samples $(x,y)\in \mathcal{X}\times \mathcal{Y}$. In addition, following the asymptotic form of the InfoNCE in Wang & Isola (2020), we consider the limit as $N\rightarrow \infty$, which yields the population expectation form of the symmetric InfoNCE:
+
+ $$
+ \mathcal{L}_{\mathrm{NCE}}(g) = \frac{1}{2}\,\mathbb{E}_{p(x,y)}\left[-\ln\frac{\exp g(x,y)}{\mathbb{E}_{p_{X}(x')}\left[\exp g(x',y)\right]}\right] + \frac{1}{2}\,\mathbb{E}_{p(x,y)}\left[-\ln\frac{\exp g(x,y)}{\mathbb{E}_{p_{Y}(y')}\left[\exp g(x,y')\right]}\right], \tag{2}
+ $$
+
+ where we omit the constant term that comes from $\ln N$ .
+
+ # 3.2 DOWNSTREAM CLASSIFICATION TASK
+
+ As a common evaluation of the representations learned with the symmetric InfoNCE, we consider a supervised classification task with $K$ labels. For an integer $M$, we define $[M] := \{1, \ldots, M\}$. Let $C$ denote a random variable for labels. Let $P_{C}(c \mid x)$ be the conditional probability of the label $c \in [K]$ given the data $x \in \mathcal{X}$. We define $p(x, c) = P_{C}(c \mid x)p_{X}(x)$ as the density of the joint distribution of data $x$ and its label $c$. We assume that pairs $(x, c)$ of data and label can be drawn from $p(x, c)$. In this supervised learning setting, a classifier $h: \mathcal{X} \to \mathbb{R}^{K}$ is often trained by minimizing the softmax cross-entropy loss given by $\mathcal{L}_{\sup}(h) := \mathbb{E}_{p(x, c)}\left[-\ln \frac{\exp h(x)_c}{\sum_{i=1}^{K} \exp h(x)_i}\right]$, where $h(x)_i$ denotes the $i$-th entry of $h(x) \in \mathbb{R}^{K}$. In downstream linear classification after contrastive learning, $h$ is constructed as a linear classifier over the learned representation. Given the encoder $f_{\mathcal{X}}$, we formalize this linear classifier as $h(x; f_{\mathcal{X}}) := W^{\top}f_{\mathcal{X}}(x) + b$, where $W \in \mathbb{R}^{d \times K}$ is a weight and $b \in \mathbb{R}^{K}$ is a bias. With this $h(x; f_{\mathcal{X}})$, the downstream classification task is formalized as the minimization of $\mathcal{L}_{\sup}$ with respect to $W$ and $b$: $\min_{W \in \mathbb{R}^{d \times K}, b \in \mathbb{R}^{K}} \mathcal{L}_{\sup}(h(x; f_{\mathcal{X}}))$.
+
+ # 4 THEORETICAL GUARANTEE VIA POINTWISE MUTUAL INFORMATION
+
+ In this section, we derive the upper bound for the performance of downstream classification tasks. First, we highlight that the optimal similarity of the symmetric InfoNCE loss is represented by the pointwise mutual information. Second, we show that if the optimal similarity is obtained, there is a linear classifier on the learned representation that is close to the optimal (possibly nonlinear) classifier. Last, we investigate an error caused by the deviation from the optimal similarity.
+
+ # 4.1 POINTWISE MUTUAL INFORMATION AS OPTIMAL SIMILARITY
+
+ Our analysis starts from the fact that the optimal similarity of the symmetric InfoNCE is represented by the pointwise mutual information (Oord et al., 2018; Zhang et al., 2023).
+
+ Proposition 4.1 (Restatement of Proposition 1 in Zhang et al. (2023)). Let $X$ and $Y$ denote two random variables having the joint probability density $p$ . Then, the mutual information of $X$ and $Y$ , $I(X,Y) \coloneqq \mathbb{E}_{p(x,y)}\left[\ln \frac{p(x,y)}{p(x)p(y)}\right]$ is an upper bound of $-\mathcal{L}_{\mathrm{NCE}}(g)$ . Moreover, if the function $g$ satisfies $g(x,y) = \ln \frac{p(x,y)}{p(x)p(y)} + \mathrm{const}$ , then the equality $I(X,Y) = -\mathcal{L}_{\mathrm{NCE}}(g)$ holds.
+
+ In other words, when we consider the minimization problem of $\mathcal{L}_{\mathrm{NCE}}(g)$ in terms of the measurable function $g$ over $\mathcal{X} \times \mathcal{Y}$ , the optimal similarity is equal to the pointwise mutual information up to a constant. We denote this optimal similarity by $g^{*}(x,y) \coloneqq \ln \frac{p(x,y)}{p(x)p(y)} + \Gamma$ for some $\Gamma \in \mathbb{R}$ .
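For a small discrete joint distribution, this optimality can be checked numerically: plugging $g = \ln\frac{p(x,y)}{p(x)p(y)}$, or any constant shift of it, into the population loss of Eq. (2) attains $-I(X,Y)$, while other choices of $g$ do worse. A sketch, where the `l_nce` helper name is ours:

```python
import numpy as np

def l_nce(g, p):
    """Population symmetric InfoNCE (Eq. 2) for a discrete joint p; g, p are (|X|, |Y|)."""
    px = p.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    py = p.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    eg = np.exp(g)
    log_denom_x = np.log((px * eg).sum(axis=0, keepdims=True))  # ln E_{p(x')} exp g(x', y)
    log_denom_y = np.log((eg * py).sum(axis=1, keepdims=True))  # ln E_{p(y')} exp g(x, y')
    return 0.5 * (-(p * (g - log_denom_x)).sum() - (p * (g - log_denom_y)).sum())

rng = np.random.default_rng(1)
p = rng.random((4, 5)); p /= p.sum()    # an arbitrary joint distribution
pmi = np.log(p / (p.sum(axis=1, keepdims=True) * p.sum(axis=0, keepdims=True)))
mutual_info = (p * pmi).sum()
```

At $g = \text{PMI}$ both denominators equal one, so the loss collapses to $-I(X,Y)$ exactly, matching Proposition 4.1.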
+
+ # 4.2 POINTWISE MUTUAL INFORMATION ESTIMATOR LEADS TO A GOOD LINEAR CLASSIFIER
+
+ Next, we show that, under some conditions, there exists a linear classifier over the learned representations that is close to the optimal classifier $h^* = \arg \min_h \mathcal{L}_{\sup}(h)$ if we successfully obtain encoders that achieve the optimal similarity $g^*(x,y)$. It is known that the log probability of the label $c$ conditioned on the data $x$ is the minimizer of $\mathcal{L}_{\sup}$ up to a constant: $h^*(x)_i = \ln P_C(i \mid x) + \text{const}$, for $i \in [K]$. This is because $\mathcal{L}_{\sup}$ can be represented using the cross entropy $H(\cdot, \cdot)$ as follows: $\mathcal{L}_{\sup}(h) = \mathbb{E}_{p(x)} \left[ H(P_C(C \mid x), Q_C(C \mid x; h)) \right]$, where $Q_C(c \mid x; h) := \frac{\exp h(x)_c}{\sum_{i=1}^{K} \exp h(x)_i}$ for $c \in [K]$.
+
+ To explain our theoretical results, we define several probability (density) functions. We consider $K$ disjoint subsets $\mathcal{Y}_i \subseteq \mathcal{Y}$ ($i \in [K]$), i.e., $\mathcal{Y}_i \cap \mathcal{Y}_j = \emptyset$ for $i \neq j$. Let $\tilde{\mathcal{Y}} = \mathcal{Y}_1 \cup \mathcal{Y}_2 \cup \dots \cup \mathcal{Y}_K$. Note that $\tilde{\mathcal{Y}}$ is not necessarily equal to $\mathcal{Y}$. We assume that $P_Y(\mathcal{Y}_i) \neq 0$ for every $i$. We define the conditional probability of $y$ given $\mathcal{Y}_i$ as $p_Y(y\mid \mathcal{Y}_i)\coloneqq \frac{p(y)}{P_Y(\mathcal{Y}_i)}$ if $y\in \mathcal{Y}_i$, otherwise 0. Note that $p_{Y}(y\mid \mathcal{Y}_{i})$ is a probability density function on $\mathcal{Y}$ (i.e., $\int_{y\in \mathcal{Y}}p_Y(y\mid \mathcal{Y}_i)\,\mathrm{d}y = 1$). Similarly, we define the conditional probability of $y$ given $x$ and $\mathcal{Y}_i$ as $p_{Y}(y\mid x,\mathcal{Y}_{i})\coloneqq \frac{p(y\mid x)}{P_{Y}(\mathcal{Y}_{i}\mid x)}$ if $y\in \mathcal{Y}_i$, otherwise 0. For a label $c\in [K]$, we define the conditional probability of the subset $\mathcal{Y}_c$ given $x$ and the union of disjoint subsets $\tilde{\mathcal{Y}}$ as $P_C(c\mid x;(\mathcal{Y}_i)_{i\in [K]})\coloneqq \frac{P_Y(\mathcal{Y}_c\mid x)}{P_Y(\tilde{\mathcal{Y}}\mid x)}$. We regard this as a probability function of labels over $[K]$, as $\sum_{c\in [K]}P_C(c\mid x;(\mathcal{Y}_i)_{i\in [K]}) = 1$. Last, we construct a linear classifier on learned representations. Given the disjoint subsets $(\mathcal{Y}_i)_{i\in [K]}$ and the components of the similarity $g(x,y) = f_{\mathcal{X}}(x)^{\top}f_{\mathcal{Y}}(y) / \tau$, we define $\bar{h}^{g}(x)\coloneqq \bar{W}^{\top}f_{\mathcal{X}}(x) + \bar{b}$, with a weight $\bar{W}\coloneqq [\bar{w}_1,\bar{w}_2,\dots ,\bar{w}_K]\in \mathbb{R}^{d\times K}$, $\bar{w}_i\coloneqq \mathbb{E}_{p_Y(y\mid \mathcal{Y}_i)}\left[\frac{1}{\tau} f_{\mathcal{Y}}(y)\right]\in \mathbb{R}^d$, and a bias $\bar{b}\coloneqq [\ln P_Y(\mathcal{Y}_1),\ln P_Y(\mathcal{Y}_2),\dots ,\ln P_Y(\mathcal{Y}_K)]^\top \in \mathbb{R}^K$. Now, we show an upper bound on the excess risk of the downstream classification when we obtain encoders that achieve the optimal similarity of the symmetric InfoNCE.
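This classifier mirrors how CLIP-style zero-shot heads are built in practice: average the scaled text embeddings of the prompts belonging to each class and add a log-prior bias. A minimal sketch, where the empirical mean over sampled prompts stands in for $\mathbb{E}_{p_Y(y\mid\mathcal{Y}_i)}[f_{\mathcal{Y}}(y)]$ and the function names are ours:

```python
import numpy as np

def build_linear_head(class_prompt_embs, class_priors, tau):
    """(W, b) as in Section 4.2: w_c ~= E[f_Y(y) / tau | y in Y_c], b_c = ln P_Y(Y_c).

    class_prompt_embs: list of K arrays, each (n_c, d), embeddings of prompts for class c.
    class_priors: length-K array of P_Y(Y_c).
    """
    W = np.stack([e.mean(axis=0) / tau for e in class_prompt_embs], axis=1)  # (d, K)
    b = np.log(np.asarray(class_priors, dtype=float))                        # (K,)
    return W, b

def classify(fx, W, b):
    """h(x) = W^T f_X(x) + b applied to a batch fx of shape (n, d); returns labels."""
    return (fx @ W + b).argmax(axis=1)
```

With well-separated class prompt clusters, images embedded near a cluster are assigned to that class.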
+
+ Theorem 4.2. Let $(\mathcal{Y}_i)_{i\in [K]}$ be any choice of disjoint subsets of $\mathcal{Y}$. Assume that $g^{*}(x,y)\coloneqq \frac{1}{\tau^{*}} f_{\mathcal{X}}^{*}(x)^{\top}f_{\mathcal{Y}}^{*}(y) = \ln \frac{p(x,y)}{p(x)p(y)} +\mathrm{const}$ holds for any $x\in \operatorname{supp}p(x)\subseteq \mathcal{X}$ and any $y\in \tilde{\mathcal{Y}}$. Then,
+
+ $$
+ \mathcal{L}_{\sup}\left(\bar{h}^{g^{*}}\right) - \mathcal{L}_{\sup}\left(h^{*}\right) \leq \mathbb{E}_{p(x)}\left[D_{\mathrm{KL}}\left(P_{C}(C\mid x)\,\middle\|\,P_{C}\left(C\mid x;(\mathcal{Y}_{i})_{i\in[K]}\right)\right)\right] + \mathbb{E}_{p(x,c)}\left[D_{\mathrm{KL}}\left(p_{Y}\left(Y\mid\mathcal{Y}_{c}\right)\,\middle\|\,p_{Y}\left(Y\mid x,\mathcal{Y}_{c}\right)\right)\right]. \tag{3}
+ $$
+
+ We defer the proof to Appendix B.1.
+
+ Remark. The first term on the RHS of Eq. (3) becomes zero when, for any $c$ and $x$, the conditional probability $P_{Y}(\mathcal{Y}_{c}\mid x)$ is proportional to the conditional probability of the label $P_{C}(c\mid x)$. The second term on the RHS becomes zero when $y$ is independent of $x$ given the prior knowledge that $y$ is in $\mathcal{Y}_c$. Considering the prompt ensembling in zero-shot classification (Radford et al., 2021) and the properties of text data, we claim that there exist subsets $(\mathcal{Y}_i)_{i\in [K]}$ that satisfy most of those conditions. To construct a classifier in zero-shot classification, Radford et al. (2021) proposed ensembling embeddings of prompt templates such as "a photo of a {}" and "an example of a {}", where the brackets are replaced with labels such as "dog" and "cat". Since the set of prompts for each label is generated simply by inserting words representing the label into templates, the probability of each set should be roughly proportional to the probability of the label. In addition, prompt templates lack most of the information specific to images, so each prompt in the set can be considered more or less independent of images. Assuming these properties of the text data domain, the excess risk of the linear classifier is close to zero when the trained encoders achieve the optimal similarity, which is the pointwise mutual information.
+
+ # 4.3 EXCESS RISK ANALYSIS VIA THE GAP FROM THE POINTWISE MUTUAL INFORMATION
+
+ We have observed that a similarity equal to the pointwise mutual information (up to a constant) leads to a small excess risk of linear classifiers on the downstream classification. However, the similarity $g(x,y)$ actually obtained in pretraining possibly differs from $g^{*}(x,y)$ because of the non-convexity of the optimization problem and the limited representational capability of the class of similarities, $\left\{(x,y)\mapsto f_{\mathcal{X}}(x)^{\top}f_{\mathcal{Y}}(y) / \tau \mid f_{\mathcal{X}}(x),f_{\mathcal{Y}}(y)\in \mathbb{R}^{d},\tau \in \mathbb{R}_{>0}\right\}$. To consider the effect of this gap in the similarity, we decompose the risk of the downstream task as follows:
+
+ $$
+ \mathcal{L}_{\sup}\left(\bar{h}^{g}\right) - \mathcal{L}_{\sup}\left(h^{*}\right) = \left(\mathcal{L}_{\sup}\left(\bar{h}^{g}\right) - \mathcal{L}_{\sup}\left(\bar{h}^{g^{*}}\right)\right) + \left(\mathcal{L}_{\sup}\left(\bar{h}^{g^{*}}\right) - \mathcal{L}_{\sup}\left(h^{*}\right)\right). \tag{4}
+ $$
+
+ The second term on the RHS of Eq. (4) is already bounded by Theorem 4.2. For the first term, we have the following bound.
+
+ Lemma 4.3. Assume that there exists $\Delta \geq 0$ such that $|g(x,y) - g^{*}(x,y)|\leq \Delta$ for all $x\in \operatorname{supp}p(x)$ and all $y\in \operatorname{supp}p(y)$. Then, it holds that $\left|\mathcal{L}_{\mathrm{sup}}(\bar{h}^g) - \mathcal{L}_{\mathrm{sup}}(\bar{h}^{g^*})\right|\leq 2\Delta$.
+
+ We defer the proof to Appendix B.2. From Theorem 4.2, Lemma 4.3, and the fact that $\min_{W\in \mathbb{R}^{d\times K},b\in \mathbb{R}^K}\mathcal{L}_{\sup}(W^\top f_{\mathcal{X}}(\cdot) + b)\leq \mathcal{L}_{\sup}(\bar{h}^g)$, we have the following result.
+
+ Theorem 4.4. Assume that there exist $K$ disjoint subsets $\mathcal{Y}_i \subseteq \mathcal{Y}$ ($i \in [K]$) such that $D_{\mathrm{KL}}\big(P_C(C \mid x) \,\big\|\, P_C(C \mid x; (\mathcal{Y}_i)_{i \in [K]})\big) \leq \varepsilon_1$ and $D_{\mathrm{KL}}(p_Y(Y \mid \mathcal{Y}_c) \,\|\, p_Y(Y \mid x, \mathcal{Y}_c)) \leq \varepsilon_2$ for all $x \in \operatorname{supp} p(x)$, for all $c \in [K]$, and for some non-negative constants $\varepsilon_1, \varepsilon_2 \geq 0$. Assume also that the uniform approximation error of the optimization problem $\arg \min_g \mathcal{L}_{\mathrm{NCE}}(g)$ is bounded by a constant $\Delta \geq 0$, i.e., $|g(x,y) - g^*(x,y)| \leq \Delta$ for all $x \in \operatorname{supp} p(x)$ and $y \in \operatorname{supp} p(y)$. Then, it holds that
+
+ $$
+ \min_{W\in\mathbb{R}^{d\times K},\,b\in\mathbb{R}^{K}} \mathcal{L}_{\sup}\left(W^{\top}f_{\mathcal{X}}(\cdot) + b\right) - \mathcal{L}_{\sup}\left(h^{*}\right) \leq \varepsilon_{1} + \varepsilon_{2} + 2\Delta. \tag{5}
+ $$
+
+ Remark. $\Delta$ indicates the gap between the similarity $g(x,y)$ actually obtained and the optimal similarity $g^{*}(x,y)$, i.e., the pointwise mutual information. Theorem 4.4 implies that the approximation error of the optimal similarity in pretraining may degrade the performance of downstream classification.
+
+ # 5 AUGMENTED SIMILARITY BY WEIGHTED POINT SETS
+
+ We have observed that the optimal similarity of the symmetric InfoNCE for pretraining leads to a small excess risk on downstream classification. Here, a question arises: to what extent can the class of similarities approximate the pointwise mutual information? In this section, we show a limitation of the typical similarity commonly utilized in CLIP. To overcome this issue, we propose a new class of similarities and provide a theoretical guarantee of its approximation capability.
+
+ # 5.1 LIMITATION OF THE INNER-PRODUCT SIMILARITY IN FINITE DIMENSIONAL SPACES
+
+ Consider a $d$-dimensional feature space. We assume there are $N (> d + 1)$ pairs of samples, $(x_{1},y_{1}),\ldots ,(x_{N},y_{N})\in \mathcal{X}\times \mathcal{Y}$. We define $Z_{\mathcal{X}},Z_{\mathcal{Y}}\in \mathbb{R}^{d\times N}$ as the concatenations of the features of the samples: $Z_{\mathcal{X}}\coloneqq [f_{\mathcal{X}}(x_1),\dots ,f_{\mathcal{X}}(x_N)]$ and $Z_{\mathcal{Y}}\coloneqq [f_{\mathcal{Y}}(y_1),\dots ,f_{\mathcal{Y}}(y_N)]$. During pretraining with the symmetric InfoNCE, the similarity matrix $Z_{\mathcal{X}}^{\top}Z_{\mathcal{Y}}$ is fit to the optimal similarity matrix $G\in \mathbb{R}^{N\times N}$ up to a constant $\Gamma \in \mathbb{R}$, where $G_{ij} = \ln \frac{p(x_i,y_j)}{p(x_i)p(y_j)}$. Regarding the gap $\Delta$ to the optimal similarity, it holds that $\Delta \geq \sup_{x\in \operatorname{supp}p(x),y\in \operatorname{supp}p(y)}|g(x,y) - g^{*}(x,y)|\geq \sup_{i,j}|(Z_{\mathcal{X}}^{\top}Z_{\mathcal{Y}})_{ij} - \Gamma -G_{ij}|$. However, it also holds that $\mathrm{rank}\big(Z_{\mathcal{X}}^\top Z_{\mathcal{Y}} + \Gamma J\big)\leq d + 1$, where $J\in \mathbb{R}^{N\times N}$ is the matrix in which all entries are 1 (see Proposition C.1). Thus, if the rank of $G$ is $N > d + 1$, the approximation of $G$ incurs a certain error. In other words, to completely capture the structure of the pointwise mutual information, the feature dimension $d$ must exceed the number of intrinsic instances in the data space, which is infeasible in real-world scenarios.
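The rank obstruction can be illustrated directly with the SVD: any matrix of rank at most $d+1$, which is all that $Z_{\mathcal{X}}^{\top}Z_{\mathcal{Y}} + \Gamma J$ can express, differs somewhere from a generic full-rank $N \times N$ matrix. A small numerical sketch, with a random matrix standing in for $G$ and the Frobenius-optimal low-rank fit (Eckart-Young) used for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 32, 4
G = rng.normal(size=(N, N))        # stand-in for a full-rank optimal similarity matrix
U, s, Vt = np.linalg.svd(G)

# Frobenius-optimal rank-(d+1) approximation; even this best fit leaves a
# nonzero entrywise residual, illustrating that Delta cannot reach zero here.
G_low = (U[:, :d + 1] * s[:d + 1]) @ Vt[:d + 1]
gap = np.abs(G - G_low).max()
```

With $d + 1 = N$ the residual vanishes, matching the observation that exact recovery would require the feature dimension to scale with the number of intrinsic instances.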
+
+ # 5.2 AUGMENTED SIMILARITY BY A NONLINEAR KERNEL AND WEIGHTED POINT SETS
+
+ Increasing the dimension of the features is the simplest way to enhance the capability of the similarity. However, this often requires a larger deep neural network, which leads to heavier computation both in the contrastive learning phase and in downstream tasks. As an alternative approach, we propose enriching the class of similarities by using a nonlinear kernel function and weighted point sets (namely, sets of pairs of a weight and a point). Figure 1 shows an overview of the proposed method. We replace the similarity in the symmetric InfoNCE with a similarity between two weighted point sets produced by the encoders.
+
+ Following CLIP, we use two encoders that transform inputs from each modality. Instead of one vector in a latent space, the encoders are modified to produce a weighted point set, namely, a set of $M$ pairs of weight and vector: $\{(w_i,v_i)\}_{i\in [M]}$ , where $w_{i}\in \mathbb{R}$ and $v_{i}\in \mathbb{R}^{d}$ for each $i\in [M]$ . We define the similarity of two weighted point sets, $\left\{(w_i^{(\mathcal{X})},v_i^{(\mathcal{X})})\right\}_{i\in [M^{(\mathcal{X})}]}$ and $\left\{(w_i^{(\mathcal{Y})},v_i^{(\mathcal{Y})})\right\}_{i\in [M^{(\mathcal{Y})}]}$ (containing $M^{(\mathcal{X})}$ and $M^{(\mathcal{Y})}$ pairs of weight and vector, respectively), with a kernel function $k(\cdot ,\cdot)\colon \mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ , as follows:
+
+ $$
+ g\left(\left\{\left(w_{i}^{(\mathcal{X})}, v_{i}^{(\mathcal{X})}\right)\right\}_{i \in [M^{(\mathcal{X})}]}, \left\{\left(w_{j}^{(\mathcal{Y})}, v_{j}^{(\mathcal{Y})}\right)\right\}_{j \in [M^{(\mathcal{Y})}]}\right) \coloneqq \sum_{i, j} w_{i}^{(\mathcal{X})} w_{j}^{(\mathcal{Y})} k\left(v_{i}^{(\mathcal{X})}, v_{j}^{(\mathcal{Y})}\right). \tag{6}
+ $$
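Eq. (6) is straightforward to compute directly as a weighted double sum over the kernel matrix. A sketch with a Gaussian kernel, where the helper names are ours:

```python
import numpy as np

def gaussian_kernel(U, V, sigma=1.0):
    """k(u, v) = exp(-||u - v||^2 / (2 sigma^2)) for all pairs of rows of U and V."""
    sq = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2))

def wpse_similarity(wx, vx, wy, vy, kernel=gaussian_kernel):
    """Eq. (6): sum_{i,j} w_i^X w_j^Y k(v_i^X, v_j^Y) between two weighted point sets."""
    return wx @ kernel(vx, vy) @ wy
```

Since the kernel is symmetric, so is the resulting set similarity, and a singleton set with unit weight recovers the plain kernel value.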
+
+ This similarity can be regarded as the inner product of high-dimensional representatives of a linear combination of Dirac measures (Muandet et al., 2017) as $\sum_{i,j}w_i^{(\mathcal{X})}w_j^{(\mathcal{Y})}k(v_i^{(\mathcal{X})},v_j^{(\mathcal{Y})}) =$
+
+ ![](images/38b409f3f2a38fc03d82117a2d576a453d34b968d0b4337211285e1450307f3c.jpg)
+ Figure 1: Overview of proposed method. Each encoder produces a weighted point set from each input. The encoders are optimized with the symmetric InfoNCE using the similarity matrix.
+
+ $\left\langle \sum_{i}w_{i}^{(\mathcal{X})}k(v_{i}^{(\mathcal{X})},\cdot),\sum_{j}w_{j}^{(\mathcal{Y})}k(v_{j}^{(\mathcal{Y})},\cdot)\right\rangle_{\mathcal{H}}$. Here, $\langle \cdot ,\cdot \rangle_{\mathcal{H}}$ denotes the inner product of the reproducing kernel Hilbert space (RKHS) associated with $k$. In the following theorem, we show that our proposed similarity based on weighted point sets can approximate the optimal similarity to arbitrary precision.
+
+ Theorem 5.1. Assume that Assumption C.2 holds. Define a function $g$ as Eq. 6 with a bounded $c_0$ -universal kernel $k\colon \mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ . Then, for any $\varepsilon >0$ , there exist positive integers, $M^{(\mathcal{X})},M^{(\mathcal{Y})}\in \mathbb{N}$ and maps, $f_{\mathcal{X}}:x\mapsto \left\{\left(w_i^{(\mathcal{X})},v_i^{(\mathcal{X})}\right)\right\}_{i\in [M^{(\mathcal{X})}]}$ and $f_{\mathcal{Y}}:y\mapsto \left\{\left(w_j^{(\mathcal{Y})},v_j^{(\mathcal{Y})}\right)\right\}_{j\in [M^{(\mathcal{Y})}]}$ such that
+
+ $$
+ \sup_{x \in \operatorname{supp} p(x),\, y \in \operatorname{supp} p(y)} \left| g\left(f_{\mathcal{X}}(x), f_{\mathcal{Y}}(y)\right) - \ln \frac{p(x, y)}{p(x) p(y)} \right| < \varepsilon. \tag{7}
+ $$
+
+ The proof and Assumption C.2 are provided in Section C.2. The definition of $c_{0}$ -universal kernel is deferred to Definition C.5 (refer to Sriperumbudur et al. (2011)). For example, the Gaussian kernel $k(u,v) = \exp \left(-\frac{1}{2\sigma^2}\| u - v\| _2^2\right)$ and the inverse multiquadric (IMQ) kernel $k(u,v) = \frac{c}{\sqrt{c^2 + \|u - v\|_2^2}}$ are $c_{0}$ -universal (Sriperumbudur et al., 2011).
+
+ Remark. Theorem 5.1 ensures that the proposed class of similarities is capable of approximating the pointwise mutual information to arbitrary precision. Unlike for the typical class of similarities discussed in Section 5.1, Assumption C.2 does not require a dimension $d$ proportional to the number of intrinsic instances. Instead, it requires $d$ to be larger than or equal to the intrinsic dimensions of the subspaces of $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ that have dependency on each other. We claim that this assumption on $d$ is fairly mild because the manifold hypothesis (Bengio et al., 2013) is commonly assumed. Although increasing $M^{(\mathcal{X})}$ and $M^{(\mathcal{Y})}$ also leads to heavier computation, it at least provides a different approach to augmenting representation models than simply increasing the number of feature dimensions.
+
+ # 5.3 IMPLEMENTATION
+
+ To produce a weighted point set from each input, we utilize the structure of Transformers, without any significant change to the model size or computation time (Fig. 2). We use Vision Transformer (Dosovitskiy et al., 2021) for the image encoder and Transformer (Vaswani et al., 2017) for the text encoder. A typical Vision Transformer takes projected patches of an image and the special token [CLS], applies attention layers and a last projection layer to the token sequence, and outputs the vector at the position of the [CLS] token. To output additional weights, we add a projection layer for weights in parallel with that for vectors. Moreover, to output a set of weights and vectors, our image encoder outputs all resultant vectors. In the same way, we modify the text encoder to output all resultant weights and vectors instead of just the vector at the position of the special token [EOS]. In addition, we make the padding tokens of the text encoder, [PAD], dependent on their relative position to the [EOS] in order to avoid repeating the same token. More specifically, separate padding tokens [PAD1], [PAD2], etc., are appended after the [EOS] until the token sequence reaches a fixed length, and a learnable embedding is independently initialized for each token.
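The head modification amounts to one extra linear projection applied to all final token states. A shape-level NumPy sketch, where the projection matrices and dimensions are illustrative rather than the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d, M = 64, 16, 8   # illustrative: hidden width, point dimension, tokens per input

P_v = rng.normal(size=(d_model, d)) / np.sqrt(d_model)  # projection to points (as in ViT/CLIP)
P_w = rng.normal(size=(d_model, 1)) / np.sqrt(d_model)  # added parallel projection to weights

def head(token_states):
    """Map all M final token states (M, d_model) to a weighted point set {(w_i, v_i)}."""
    v = token_states @ P_v            # (M, d) points, one per token
    w = (token_states @ P_w)[:, 0]    # (M,)  scalar weight per token
    return w, v

w, v = head(rng.normal(size=(M, d_model)))
```

Every token position thus contributes one (weight, point) pair, instead of only the [CLS] or [EOS] position contributing a single vector.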
+
+ ![](images/9a7a6441a4052444d1721938ce78a49a5637f00de1c5d157f16b246b570fd2b4.jpg)
+ Figure 2: Proposed modification for encoders to produce a weighted point set. Encoders are modeled by Transformer. The encoders output all resultant vectors instead of just one vector at a certain position.
+
+ For the kernel function, we opted to use a linear combination of the linear kernel and a nonlinear kernel $\tilde{k}$ with coefficients $\alpha_{1},\alpha_{2}\in \mathbb{R}_{\geq 0}$: $k(u,v) = \alpha_{1}u^{\top}v + \alpha_{2}\tilde{k}(u,v)$. In preliminary experiments, we found that when the model was trained only with a nonlinear kernel (i.e., $(\alpha_{1},\alpha_{2}) = (0,1)$), the symmetric InfoNCE loss did not decrease well or converge. We consider that this was possibly because of vanishing gradients for points that were far away from each other. To avoid $O(M^{(\mathcal{X})}M^{(\mathcal{Y})})$ evaluations of the kernel, we use random Fourier features (RFFs) (Rahimi & Recht, 2007) to approximate the nonlinear kernel. When the kernel $\tilde{k}$ is shift-invariant, RFF approximates $\tilde{k}(u,v)$ by the inner product of two $D$-dimensional vectors, i.e., $z(u)^{\top}z(v)\approx \tilde{k}(u,v)$, where $z(v)\in \mathbb{R}^{D}$ is constructed using random samples $\omega_{t}\in \mathbb{R}^{d}$ and $\beta_{t}\in \mathbb{R}$ ($t = 1,\ldots ,D$) from predefined distributions (see Appendix A.1 for details). By taking the weighted sum of RFFs, $\overline{z} := \sum_{i}w_{i}z(v_{i})$, calculated from the points in the point set, we obtain an embedding of the weighted point set. More rigorously, this can be regarded as the RKHS embedding of a linear combination of Dirac measures, where the RFF approximation is applied to obtain finite-dimensional representations of the embeddings. In our implementation, we concatenate the weighted sum of points, $\overline{v} := \sum_{i}w_{i}v_{i}$, and that of RFFs, $\overline{z}$, using the coefficients $\alpha_{1}$ and $\alpha_{2}$ as follows: $\left[\sqrt{\alpha_1}\,\overline{v}^\top ,\sqrt{\alpha_2}\,\overline{z}^\top \right]^\top$. We use this as the embedding of a weighted point set because an unbiased estimator of the similarity in Eq. (6) is obtained by simply taking the inner product of embeddings:
168
+
169
+ $$
170
+ \sum_{i,j} w_i^{(\mathcal{X})} w_j^{(\mathcal{Y})} k\big(v_i^{(\mathcal{X})}, v_j^{(\mathcal{Y})}\big) \approx \sum_{i,j} w_i^{(\mathcal{X})} w_j^{(\mathcal{Y})} \Big(\alpha_1 v_i^{(\mathcal{X})\top} v_j^{(\mathcal{Y})} + \alpha_2 z\big(v_i^{(\mathcal{X})}\big)^{\top} z\big(v_j^{(\mathcal{Y})}\big)\Big) = \begin{bmatrix} \sqrt{\alpha_1}\,\overline{v}^{(\mathcal{X})} \\ \sqrt{\alpha_2}\,\overline{z}^{(\mathcal{X})} \end{bmatrix}^{\top} \begin{bmatrix} \sqrt{\alpha_1}\,\overline{v}^{(\mathcal{Y})} \\ \sqrt{\alpha_2}\,\overline{z}^{(\mathcal{Y})} \end{bmatrix}. \tag{8}
171
+ $$
172
+
173
+ It is worth noting that the dimension $D$ of RFFs can be changed between training and inference, which affects the kernel approximation error. For example, a larger $D$ can be used during pretraining to achieve a lower variance in approximation, and a smaller $D$ can be used during inference to reduce computational cost and memory usage.
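As a concrete check of the factorization in Eq. (8), the following NumPy sketch (sizes, kernel, and values are illustrative choices, not the paper's configuration) builds the concatenated embedding $\left[\sqrt{\alpha_1}\,\overline{v}^\top ,\sqrt{\alpha_2}\,\overline{z}^\top \right]^\top$ for two weighted point sets and checks that its inner product reproduces the double sum over point pairs under the RFF-approximated kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 256        # point dimension and RFF dimension (illustrative)
a1, a2 = 0.5, 0.5    # mixing coefficients alpha_1, alpha_2

# Random Fourier features for a Gaussian kernel with bandwidth 1.
omega = rng.normal(size=(D, d))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)
z = lambda V: np.sqrt(2.0 / D) * np.cos(V @ omega.T + beta)  # (M, d) -> (M, D)

# Two weighted point sets; note the weights may be negative.
Mx, My = 5, 7
Vx, Vy = rng.normal(size=(Mx, d)), rng.normal(size=(My, d))
wx, wy = rng.normal(size=Mx), rng.normal(size=My)

# Double sum over all point pairs: O(Mx * My) kernel evaluations.
K = a1 * Vx @ Vy.T + a2 * z(Vx) @ z(Vy).T
direct = wx @ K @ wy

# WPSE embedding: concatenated weighted sums of points and of their RFFs.
embed = lambda V, w: np.concatenate([np.sqrt(a1) * (w @ V), np.sqrt(a2) * (w @ z(V))])
factored = embed(Vx, wx) @ embed(Vy, wy)

assert np.isclose(direct, factored)
```

The two sides agree exactly (up to floating point) because the same RFF draw is used in both; the approximation error relative to the true kernel is controlled by $D$.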
174
+
175
+ # 6 EXPERIMENTS
176
+
177
+ # 6.1 PRETRAINING
178
+
179
+ To investigate the performance of the representation based on weighted point sets, Weighted Point Set Embedding (WPSE), we conducted experiments in which we trained a text-image representation model. We utilized Conceptual Captions 3M (CC3M) (Sharma et al., 2018) and Conceptual Captions 12M (CC12M) (Changpinyo et al., 2021) as datasets for pretraining. As the base architecture of the image encoder, we adopted ViT-B/16 (Dosovitskiy et al., 2021). Following SLIP (Mu et al., 2022), we used the smallest text Transformer model from CLIP. We modified the image encoder
180
+
181
+ Table 1: Zero-shot classification performance. We report the mean per-class accuracy $(\%)$ on Caltech-101, Aircraft, Flowers, and Pets. On other datasets, we report the top-1 accuracy $(\%)$ .
182
+
183
<table><tr><td></td><td>Model</td><td>Average</td><td>ImageNet</td><td>CIFAR-10</td><td>CIFAR-100</td><td>STL-10</td><td>Food-101</td><td>Caltech-101</td><td>Cars</td><td>Aircraft</td><td>Flowers</td><td>EuroSAT</td><td>DTD</td><td>Pets</td><td>SUN397</td></tr><tr><td rowspan="3">CC3M</td><td>CLIP</td><td>25.03</td><td>19.94</td><td>59.25</td><td>22.48</td><td>75.24</td><td>13.05</td><td>47.20</td><td>1.11</td><td>1.38</td><td>13.11</td><td>10.40</td><td>13.56</td><td>14.62</td><td>34.06</td></tr><tr><td>WPSE Gaussian</td><td>26.75</td><td>21.20</td><td>59.95</td><td>23.58</td><td>80.61</td><td>14.56</td><td>51.18</td><td>1.49</td><td>1.35</td><td>12.60</td><td>19.98</td><td>13.40</td><td>13.60</td><td>34.16</td></tr><tr><td>WPSE IMQ</td><td>27.04</td><td>21.36</td><td>61.22</td><td>25.91</td><td>81.64</td><td>13.17</td><td>50.15</td><td>1.41</td><td>1.84</td><td>12.14</td><td>22.02</td><td>13.69</td><td>13.88</td><td>33.05</td></tr><tr><td rowspan="3">CC12M</td><td>CLIP</td><td>43.78</td><td>39.15</td><td>74.17</td><td>42.98</td><td>90.91</td><td>47.96</td><td>73.58</td><td>21.94</td><td>2.01</td><td>29.71</td><td>22.24</td><td>22.45</td><td>52.29</td><td>49.72</td></tr><tr><td>WPSE Gaussian</td><td>46.12</td><td>39.95</td><td>81.33</td><td>49.49</td><td>91.25</td><td>50.63</td><td>74.66</td><td>24.14</td><td>2.54</td><td>30.11</td><td>23.28</td><td>21.17</td><td>61.41</td><td>49.57</td></tr><tr><td>WPSE IMQ</td><td>45.71</td><td>39.26</td><td>80.31</td><td>47.53</td><td>91.83</td><td>51.82</td><td>73.54</td><td>21.92</td><td>1.62</td><td>29.53</td><td>28.36</td><td>21.62</td><td>57.31</td><td>49.54</td></tr></table>
184
+
185
+ Table 2: Linear classification performance. We report the mean per-class accuracy $(\%)$ on Caltech-101, Aircraft, Flowers, and Pets. On other datasets, we report the top-1 accuracy $(\%)$ .
186
+
187
+ <table><tr><td></td><td>Model</td><td>Average</td><td>ImageNet</td><td>CIFAR-10</td><td>CIFAR-100</td><td>STL-10</td><td>Food-101</td><td>Caltech-101</td><td>Cars</td><td>Aircraft</td><td>Flowers</td><td>EuroSAT</td><td>DTD</td><td>Pets</td><td>SUN397</td></tr><tr><td rowspan="6">CC3M</td><td>CLIP</td><td>67.00</td><td>51.42</td><td>85.51</td><td>64.87</td><td>91.71</td><td>61.71</td><td>79.24</td><td>27.27</td><td>31.81</td><td>86.67</td><td>93.82</td><td>63.19</td><td>66.48</td><td>67.35</td></tr><tr><td>WPSE Gaussian</td><td>69.01</td><td>56.18</td><td>85.00</td><td>65.10</td><td>92.20</td><td>63.71</td><td>79.97</td><td>30.68</td><td>37.85</td><td>88.63</td><td>94.94</td><td>64.15</td><td>69.08</td><td>69.64</td></tr><tr><td>WPSE IMQ</td><td>68.23</td><td>56.77</td><td>85.87</td><td>63.48</td><td>92.06</td><td>64.12</td><td>80.78</td><td>27.16</td><td>33.98</td><td>87.47</td><td>93.72</td><td>63.14</td><td>69.06</td><td>69.35</td></tr><tr><td>CLIP (bef)</td><td>72.14</td><td>58.33</td><td>87.90</td><td>70.05</td><td>92.64</td><td>66.07</td><td>82.49</td><td>39.46</td><td>44.96</td><td>91.48</td><td>96.02</td><td>67.02</td><td>71.86</td><td>69.56</td></tr><tr><td>WPSE Gaussian (bef)</td><td>73.77</td><td>61.19</td><td>87.94</td><td>70.36</td><td>92.70</td><td>69.37</td><td>84.03</td><td>44.50</td><td>47.93</td><td>92.10</td><td>95.86</td><td>67.71</td><td>74.10</td><td>71.21</td></tr><tr><td>WPSE IMQ (bef)</td><td>73.81</td><td>61.02</td><td>88.44</td><td>70.10</td><td>92.68</td><td>68.84</td><td>84.39</td><td>43.90</td><td>47.66</td><td>91.98</td><td>95.92</td><td>67.61</td><td>75.94</td><td>71.10</td></tr><tr><td rowspan="6">CC12M</td><td>CLIP</td><td>77.89</td><td>65.15</td><td>91.06</td><td>71.75</td><td>95.34</td><td>77.47</td><td>87.24</td><td>64.53</td><td>41.93</td><td>92.50</td><td>94.32</td><td>72.98</td><td>81.71</td><td>76.55</td></tr><tr><td>WPSE Gaussian</td><td>79.08</td><td>67.83</td><td>91.72</td><td>73.06</td><td>96.46</td><td>79.49</td><td>89.18</td><td>65.23</td><td>44.53</td><td>92.09</td><td>94.58</td><td>72.93</td><td>83.56</td><td>77.42</td></tr><tr><td>WPSE IMQ</td><td>78.90</td><td>67.11</td><td>91.14</td><td>72.59</td><td>96.48</td><td>78.69</td><td>88.85</td><td>66.32</td><td>44.17</td><td>92.61</td><td>94.70</td><td>72.55</td><td>83.21</td><td>77.31</td></tr><tr><td>CLIP (bef)</td><td>81.03</td><td>69.15</td><td>92.04</td><td>75.99</td><td>95.40</td><td>80.13</td><td>90.09</td><td>70.86</td><td>53.72</td><td>94.85</td><td>96.64</td><td>74.89</td><td>81.81</td><td>77.75</td></tr><tr><td>WPSE Gaussian (bef)</td><td>82.52</td><td>70.94</td><td>93.00</td><td>77.16</td><td>96.53</td><td>81.76</td><td>91.65</td><td>74.28</td><td>55.61</td><td>95.22</td><td>96.30</td><td>76.22</td><td>85.51</td><td>78.62</td></tr><tr><td>WPSE IMQ (bef)</td><td>82.71</td><td>70.90</td><td>93.04</td><td>77.27</td><td>96.60</td><td>81.58</td><td>91.06</td><td>75.53</td><td>58.10</td><td>95.60</td><td>96.28</td><td>75.43</td><td>85.65</td><td>78.19</td></tr></table>
188
+
189
+ and the text encoder to produce weighted point sets (as explained in Section 5.3). As a nonlinear kernel $\tilde{k}$ , we used the Gaussian kernel and the IMQ kernel. We performed hyperparameter search over $\sigma$ (for the Gaussian kernel) and $c$ (for the IMQ kernel) in the range of $\{0.5, 0.75, 1.0\}$ . We also ran a hyperparameter search on the coefficients $(\alpha_{1}, \alpha_{2})$ for combination kernels. We searched $(\alpha_{1}, \alpha_{2}) = (0.667, 0.333)$ , $(0.6, 0.4)$ , $(0.5, 0.5)$ , $(0.4, 0.6)$ , $(0.333, 0.667)$ . In the tables, we report the performance of the best model from the hyperparameter search. During the pretraining, we set the dimension $D$ of RFFs to 1024. For each batch, new $\omega_{t}$ and $\beta_{t}$ for RFFs were sampled during the pretraining. For comparison, we also trained typical CLIP models from scratch. For more training details, see Appendix A.2.
190
+
191
+ # 6.2 ZERO-SHOT TRANSFER
192
+
193
+ We evaluated the zero-shot classification performance on the following 13 benchmark datasets: ImageNet (Russakovsky et al., 2015), CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), Food-101 (Bossard et al., 2014), Caltech-101 (Fei-Fei et al., 2006), Stanford Cars (Krause et al., 2013), FGVC Aircraft (Maji et al., 2013), Oxford Flowers (Nilsback & Zisserman, 2008), EuroSAT (Helber et al., 2019), Describable Textures Dataset (DTD) (Cimpoi et al., 2014), Oxford Pets (Parkhi et al., 2012), and SUN397 (Xiao et al., 2010). Following SLIP (Mu et al., 2022), we adopted prompt ensembling and utilized the prompts provided by SLIP for each dataset. We set the dimension $D$ of RFFs to 512. $\omega_{t}$ and $\beta_{t}$ for RFFs were fixed before the evaluation. To investigate the effect of the randomness of RFFs, we performed five evaluations for the models that use RFFs. Table 1 lists the zero-shot classification results, where the results of models using RFFs have been averaged. Additionally, Table 5 in the Appendix shows the standard deviations. As these results show, the proposed method outperformed CLIP on average. In addition, the randomness of RFFs did not have a significant impact on the overall performance.
194
+
195
+ # 6.3 LINEAR CLASSIFICATION
196
+
197
+ We also performed the linear classification evaluation, in which we trained linear classifiers on the embedding vectors obtained by frozen pretrained image encoders. We used the same 13 benchmarks as in the zero-shot classification. To extract embeddings for training linear classifiers, we used two different settings. In the first setting, we used the embeddings that were used to compute the similarity in the symmetric InfoNCE. We set $D$ for RFFs to 512. $\omega_{t}$ and $\beta_{t}$ were fixed before the evaluation. Based on the robustness to the randomness of RFFs shown in Table 5, we did not evaluate multiple settings of $\omega_{t}$ and $\beta_{t}$. In the second setting, following common practice (Chen et al., 2021), we used the intermediate latent vectors just before the last projection layer of the image encoder. We denote this setting as "(bef)" in the tables. For our WPSE models, the weighted sum of the latent vectors, using the weights from the output weighted point set, was used, and no RFFs were used.
198
+
199
+ We basically followed the evaluation procedure of Fürst et al. (2022). We used a logistic regression classifier with the L-BFGS optimizer (Liu & Nocedal, 1989) and a maximum of 1000 iterations, utilizing the implementation from cuML (Raschka et al., 2020). For tuning the L2 regularization cost, we followed the protocol of CLIP (Radford et al., 2021): we ran hyperparameter sweeps over $C \in [10^{-6}, 10^{6}]$ with a parametric binary search on a validation split of each dataset. For datasets that do not provide an official validation split, we randomly split the training dataset into training and validation splits. After the hyperparameter was determined, we trained a classifier on the combination of the training and validation splits and report its performance on the test split. Table 2 lists the linear classification results. Overall, our proposed method outperformed CLIP on average.
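The sweep over the regularization cost can be sketched as a coarse-to-fine search over $\log_{10} C$. This is a simplified, hypothetical version of the CLIP-style parametric binary search (the actual experiments used cuML's logistic regression); here a synthetic unimodal score stands in for validation accuracy:

```python
import numpy as np

def search_logC(score, lo=-6.0, hi=6.0, rounds=4, points=5):
    """Coarse-to-fine search for the regularization cost C in [1e-6, 1e6]
    maximizing `score` (a stand-in for validation accuracy)."""
    for _ in range(rounds):
        grid = np.linspace(lo, hi, points)
        scores = [score(10.0 ** g) for g in grid]
        b = int(np.argmax(scores))
        # Shrink the interval to the neighbors of the best grid point.
        lo, hi = grid[max(b - 1, 0)], grid[min(b + 1, points - 1)]
    return 10.0 ** grid[b]

# Synthetic unimodal score peaking at C = 10^{1.3} (illustration only).
score = lambda C: -(np.log10(C) - 1.3) ** 2
best_C = search_logC(score)
assert abs(np.log10(best_C) - 1.3) < 0.5
```

Each round halves the search interval around the current best point, so a handful of rounds suffices to localize the peak on a unimodal validation curve.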
200
+
201
+ # 6.4 ABLATION STUDY
202
+
203
+ To investigate the effectiveness of our similarity, we trained two variant models that output weighted point sets on CC3M. One model outputs weighted point sets in which all weights are positive: $w_{i} \geq 0$. We used the function $100\,\mathrm{Sigmoid}(\cdot /100)$ as the last activation for weights in the encoders. We denote this model as WPSE with positive weights. The other model also outputs weighted point sets, but the similarity of weighted point sets is calculated only
204
+
205
+ Table 3: Ablation study. Models are trained on CC3M. Except for WPSE Linear, the IMQ kernel was used.
206
+
207
+ <table><tr><td>Model</td><td>Zero-shot</td><td>Linear</td></tr><tr><td>WPSE</td><td>27.04</td><td>68.23</td></tr><tr><td>WPSE with positive weights</td><td>4.22</td><td>-</td></tr><tr><td>WPSE Linear</td><td>27.25</td><td>67.40</td></tr></table>
208
+
209
+ with the linear kernel, i.e., the coefficients $(\alpha_{1},\alpha_{2})$ are set to $(1,0)$. We denote this model as WPSE Linear. We also trained a model with the coefficients $(\alpha_{1},\alpha_{2}) = (0,1)$, i.e., only with the nonlinear kernel; however, its training failed due to a NaN loss error. Table 3 shows the average performance of zero-shot classification and linear classification on the 13 benchmark datasets. For WPSE with positive weights, we used the same parameters for the combination kernel; its sharp performance drop indicates that negative weights are crucial for good performance. (We did not perform linear classification for WPSE with positive weights.) The comparison with WPSE Linear indicates the superiority of the nonlinear kernel in the linear classification tasks. We also show the results of an ablation study using CC12M in Appendix A.4.
210
+
211
+ # 7 CONCLUSION
212
+
213
+ We proposed a multimodal representation learning with weighted point sets. In our method, each input is transformed by an encoder into a weighted point set representation. The similarity between two weighted point sets is calculated with a kernel function that defines the similarity of two points. We also showed the theoretical benefits of using our representation and similarity. We highlighted that the optimal similarity of the symmetric InfoNCE is represented by the pointwise mutual information and showed that we can construct a linear classifier close to the optimal classifier of downstream tasks that is possibly nonlinear when the optimal similarity is obtained. In addition, we clarified the effect on the performance of downstream tasks caused by the deviation of the obtained similarity from the pointwise mutual information, and explained that the deviation of the similarity can be suppressed when using the proposed similarity based on weighted point sets. Experiments on text-image datasets demonstrated the superior performance of the proposed method compared to baselines.
214
+
215
+ # ETHICS STATEMENT
216
+
217
+ In conducting this research on representation learning models, we are committed to upholding ethical standards. Our work aims to contribute to the machine learning research community through theoretical analysis of representation learning and by enhancing the capability of learned representations. However, we recognize potential concerns with representation learning models, such as biases in training datasets, license issues with scraped datasets, and harmful applications. We acknowledge that representation learning models can have significant impacts on society. Therefore, we commit ourselves to ensuring that our research activity positively contributes to society while avoiding harm.
218
+
219
+ # REPRODUCIBILITY STATEMENT
220
+
221
+ Detailed descriptions of our setup of the algorithm and experiments can be found in Section 5.3, 6, and Appendix A. Moreover, we release our code at https://github.com/sony/wpse to ensure reproducibility.
222
+
223
+ # ACKNOWLEDGEMENTS
224
+
225
+ Computational resources of the AI Bridging Cloud Infrastructure (ABCI), provided by the National Institute of Advanced Industrial Science and Technology (AIST), were used. TS was partially supported by JSPS KAKENHI (20H00576) and JST CREST (JPMJCR2015). We would like to extend our thanks to Wei-Hsiang Liao and Bac Nguyen from Sony AI for the valuable feedback.
226
+
227
+ # REFERENCES
228
+
229
+ Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716-23736, 2022.
230
+ Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404, 1950.
231
+ Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, and Dipendra Misra. Investigating the role of negatives in contrastive representation learning. In International Conference on Artificial Intelligence and Statistics, pp. 7187-7209, 2022.
232
+ Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. Advances in Neural Information Processing Systems, 32:15535-15545, 2019.
233
+ Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
234
+ Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101-mining discriminative components with random forests. In European Conference on Computer Vision, pp. 446-461, 2014.
235
+ Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3558-3568, 2021.
236
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597-1607, 2020.
237
+ Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In IEEE/CVF International Conference on Computer Vision, pp. 9640-9649, 2021.
238
+ Zixiang Chen, Yihe Deng, Yuanzhi Li, and Quanquan Gu. Understanding transferable representation learning and zero-shot transfer in CLIP. In International Conference on Learning Representations, 2024.
239
+
240
+ Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 539-546, 2005.
241
+ Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3606-3613, 2014.
242
+ Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pp. 215-223, 2011.
243
+ Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, and Shanmukha Ramakrishna Vedantam. Hyperbolic image-text representations. In International Conference on Machine Learning, pp. 7694-7731, 2023.
244
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
245
+ Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. Clap learning audio concepts from natural language supervision. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1-5, 2023.
246
+ Li Fei-Fei, Robert Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
247
+ Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet T. Tran, Fei Tang, Hubert Ramsauer, David Kreil, Michael Kopp, Günter Klambauer, Angela Bitto, and Sepp Hochreiter. Cloob: Modern hopfield networks with infoloob outperform clip. Advances in Neural Information Processing Systems, 35:20450-20468, 2022.
248
+ Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15180-15190, 2023.
249
+ Wenzhong Guo, Jianwen Wang, and Shiping Wang. Deep multimodal representation learning: A survey. IEEE Access, 7:63373-63394, 2019.
250
+ Andrey Guzhov, Federico Raue, Jorn Hees, and Andreas Dengel. Audioclip: Extending clip to image, text and audio. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 976-980, 2022.
251
+ Jeff Z HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. Advances in Neural Information Processing Systems, 34:5000-5011, 2021.
252
+ Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217-2226, 2019.
253
+ R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.
254
+ Weiran Huang, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. Towards the generalization of contrastive self-supervised learning. In International Conference on Learning Representations, 2023.
255
+ Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904-4916, 2021.
256
+
257
+ Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In IEEE International Conference on Computer Vision Workshops, pp. 554-561, 2013.
258
+ Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
259
+ Yazhe Li, Roman Pogodin, Danica J Sutherland, and Arthur Gretton. Self-supervised learning with kernel dependence maximization. Advances in Neural Information Processing Systems, 34: 15543-15556, 2021.
260
+ Yan-Bo Lin, Jie Lei, Mohit Bansal, and Gedas Bertasius. Eclipse: Efficient long-range video retrieval using sight and sound. In European Conference on Computer Vision, pp. 413-430, 2022.
261
+ Dong C Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45(1):503-528, 1989.
262
+ Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
263
+ Norman Mu, Alexander Kirillov, David Wagner, and Saining Xie. Slip: Self-supervision meets language-image pre-training. In European Conference on Computer Vision, pp. 529-544, 2022.
264
+ Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf, et al. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends® in Machine Learning, 10(1-2):1-141, 2017.
265
+ Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics and Image Processing, pp. 722-729, 2008.
266
+ Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
267
+ Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3498-3505, 2012.
268
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32: 8026-8037, 2019.
269
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763, 2021.
270
+ Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20:1177-1184, 2007.
271
+ Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
272
+ Sebastian Raschka, Joshua Patterson, and Corey Nolet. Machine learning in python: Main developments and technology trends in data science, machine learning, and artificial intelligence. arXiv preprint arXiv:2002.04803, 2020.
273
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115:211-252, 2015.
274
+ Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In International Conference on Machine Learning, pp. 5628-5637, 2019.
275
+
276
+ Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556-2565, 2018.
277
+ Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, and Somesh Jha. The trade-off between universality and label efficiency of representations from contrastive learning. In International Conference on Learning Representations, 2023.
278
+ Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29:1857-1865, 2016.
279
+ Bharath K. Sriperumbudur, Kenji Fukumizu, and Gert R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389-2410, 2011.
280
+ Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In European Conference on Computer Vision, pp. 776-794, 2020.
281
+ Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive learning, multi-view redundancy, and linear models. In International Conference on Algorithmic Learning Theory, pp. 1179-1206, 2021.
282
+ Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. In International Conference on Learning Representations, 2020.
283
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30:5998-6008, 2017.
284
+ Hiroki Waida, Yuichiro Wada, Léo Andéol, Takumi Nakagawa, Yuhui Zhang, and Takafumi Kanamori. Towards understanding the mechanism of contrastive learning via similarity structure: A theoretical analysis. In Machine Learning and Knowledge Discovery in Databases: Research Track, pp. 709-727, 2023.
285
+ Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pp. 9929-9939, 2020.
286
+ Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. In International Conference on Learning Representations, 2022.
287
+ Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, and Juan Pablo Bello. Wav2clip: Learning robust audio representations from clip. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4563-4567, 2022.
288
+ Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3485-3492, 2010.
289
+ Runtian Zhai, Bingbin Liu, Andrej Risteski, J Zico Kolter, and Pradeep Kumar Ravikumar. Understanding augmentation-based self-supervised representation learning via RKHS approximation and regression. In International Conference on Learning Representations, 2024.
290
+ Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis P. Langlotz. Contrastive learning of medical visual representations from paired images and text. In Machine Learning for Healthcare Conference, volume 182, pp. 2-25, 2022.
291
+ Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, and Takafumi Kanamori. Deep clustering with a constraint for topological invariance based on symmetric InfoNCE. Neural Computation, 35(7):1288-1339, 2023.
292
+
293
+ Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. Contrastive learning inverts the data generating process. In International Conference on Machine Learning, pp. 12979-12990, 2021.
294
+
295
+ Algorithm 1 Symmetric InfoNCE loss with the similarity of weighted point sets
296
+ Require: an image encoder $f_{\mathcal{X}}$ , a text encoder $f_{\mathcal{Y}}$ , a batch of $B$ paired images and texts $\{(x_b,y_b)\}_{b = 1}^B$ , the distribution $p_{\omega}$ associated with the shift-invariant kernel $\tilde{k}$ , coefficients $\alpha_{1}$ and $\alpha_{2}$ , and a temperature $\tau$ .
297
+ 1: $\left\{\left(w_{b1}^{(\mathcal{X})},v_{b1}^{(\mathcal{X})}\right),\ldots ,\left(w_{bM^{(\mathcal{X})}}^{(\mathcal{X})},v_{bM^{(\mathcal{X})}}^{(\mathcal{X})}\right)\right\} \gets f_{\mathcal{X}}(x_b)$ for each $b\in [B]$
298
+ 2: $\left\{\left(w_{b1}^{(\mathcal{Y})},v_{b1}^{(\mathcal{Y})}\right),\ldots ,\left(w_{bM^{(\mathcal{Y})}}^{(\mathcal{Y})},v_{bM^{(\mathcal{Y})}}^{(\mathcal{Y})}\right)\right\} \gets f_{\mathcal{Y}}(y_b)$ for each $b\in [B]$
299
+ 3: $\overline{v}_b^{(\mathcal{X})}\gets \sum_{i = 1}^{M^{(\mathcal{X})}}w_{bi}^{(\mathcal{X})}v_{bi}^{(\mathcal{X})}$ for each $b\in [B]$
300
+ 4: $\overline{v}_b^{(\mathcal{Y})}\gets \sum_{j = 1}^{M^{(\mathcal{Y})}}w_{bj}^{(\mathcal{Y})}v_{bj}^{(\mathcal{Y})}$ for each $b\in [B]$
301
+ 5: Draw D i.i.d. samples $\omega_{1},\dots,\omega_{D}$ from $p_{\omega}$
302
+ 6: Draw D i.i.d. samples $\beta_{1},\dots,\beta_{D}$ from Unif[0,2π).
303
+ 7: $\overline{z}_b^{(\mathcal{X})}\gets \sum_{i = 1}^{M^{(\mathcal{X})}}w_{bi}^{(\mathcal{X})}z\Big(v_{bi}^{(\mathcal{X})};\{\omega_t\}_{t = 1}^D,\{\beta_t\}_{t = 1}^D\Big)$ for each $b\in [B]$
304
+ 8: $\overline{z}_b^{(\mathcal{Y})}\gets \sum_{j = 1}^{M^{(\mathcal{Y})}}w_{bj}^{(\mathcal{Y})}z\Big(v_{bj}^{(\mathcal{Y})};\{\omega_t\}_{t = 1}^D,\{\beta_t\}_{t = 1}^D\Big)$ for each $b\in [B]$
305
+ 9: $S_{bb'}\gets \tau^{-1}\Big(\alpha_1\overline{v}_b^{(\mathcal{X})^\top}\overline{v}_{b'}^{(\mathcal{Y})} + \alpha_2\overline{z}_b^{(\mathcal{X})^\top}\overline{z}_{b'}^{(\mathcal{Y})}\Big)$ for each $b,b'\in [B]$
306
+ 10: Compute the symmetric InfoNCE loss from the similarity matrix $\{S_{bb'}\}_{bb'}$
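Algorithm 1 can be sketched in NumPy as follows; the encoder outputs are replaced by random stand-ins, a Gaussian-kernel $p_\omega$ is assumed, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
B, M, d, D = 4, 6, 8, 128   # batch size, points per set, point dim, RFF dim
tau, a1, a2 = 0.07, 0.5, 0.5

# Stand-ins for the encoder outputs (steps 1-2): weighted point sets.
Vx, wx = rng.normal(size=(B, M, d)), rng.normal(size=(B, M))
Vy, wy = rng.normal(size=(B, M, d)), rng.normal(size=(B, M))

# Steps 5-6: RFF parameters shared across the batch (Gaussian kernel).
omega = rng.normal(size=(D, d))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)
rff = lambda V: np.sqrt(2.0 / D) * np.cos(V @ omega.T + beta)

# Steps 3-4 and 7-8: weighted sums of points and of their RFFs.
vx_bar = np.einsum("bm,bmd->bd", wx, Vx)
vy_bar = np.einsum("bm,bmd->bd", wy, Vy)
zx_bar = np.einsum("bm,bmk->bk", wx, rff(Vx))
zy_bar = np.einsum("bm,bmk->bk", wy, rff(Vy))

# Step 9: similarity matrix S_{bb'}.
S = (a1 * vx_bar @ vy_bar.T + a2 * zx_bar @ zy_bar.T) / tau

# Step 10: symmetric InfoNCE = cross-entropy along rows and along columns.
def log_softmax(A, axis):
    A = A - A.max(axis=axis, keepdims=True)
    return A - np.log(np.exp(A).sum(axis=axis, keepdims=True))

idx = np.arange(B)
loss = -0.5 * (log_softmax(S, 1)[idx, idx].mean() + log_softmax(S, 0)[idx, idx].mean())
```

The diagonal of $S$ holds the similarities of matched pairs; row-wise and column-wise softmax cross-entropies correspond to the image-to-text and text-to-image directions of the symmetric loss.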
307
+
308
+ # A ADDITIONAL DETAILS OF IMPLEMENTATION AND EXPERIMENTS
309
+
310
+ # A.1 IMPLEMENTATION
311
+
312
+ Random Fourier feature (RFF) Random Fourier feature (Rahimi & Recht, 2007) is a technique for reducing computational complexity of kernel methods. For a shift-invariant kernel $k(u,v) = k(u - v)$ on $\mathbb{R}^d$ such that $k(0) = 1$ , there exists a probability distribution, $p_{\omega}$ , of a random variable, $\omega \in \mathbb{R}^d$ that satisfies:
313
+
314
+ $$
315
+ k (u - v) = \underset {\omega , \beta} {\mathbb {E}} \left[ 2 \cos (\omega^ {\top} u + \beta) \cos (\omega^ {\top} v + \beta) \right],
316
+ $$
317
+
318
+ where $\beta \in \mathbb{R}$ is sampled from the uniform distribution Unif[0, 2π) over $[0, 2\pi)$, and $p_{\omega}$ is given by the Fourier transform of $k(u - v)$. Based on this fact, we can construct an unbiased estimator of $k(u,v)$ as follows. First, $\omega_t \in \mathbb{R}^d$ and $\beta_t \in \mathbb{R}$ ( $t = 1,\dots ,D$ ) are independently sampled from the distributions $p_{\omega}$ and Unif[0, 2π), respectively. Then, a vector $z(v) \in \mathbb{R}^{D}$ is constructed from $v \in \mathbb{R}^d$ , $\{\omega_t\}_{t=1}^D$ , and $\{\beta_t\}_{t=1}^D$ as
319
+
320
+ $$
321
+ z \left(v; \left\{\omega_ {t} \right\} _ {t = 1} ^ {D}, \left\{\beta_ {t} \right\} _ {t = 1} ^ {D}\right) = \sqrt {\frac {2}{D}} \left[ \cos \left(\omega_ {1} ^ {\top} v + \beta_ {1}\right), \dots , \cos \left(\omega_ {D} ^ {\top} v + \beta_ {D}\right) \right] ^ {\top}. \tag {9}
322
+ $$
323
+
324
+ Similarly, $z(u)$ is constructed from $u$ with the same $\{\omega_t\}_{t=1}^D$ and $\{\beta_t\}_{t=1}^D$ . Finally, an unbiased estimate of $k(u,v)$ is obtained by taking the inner product of the two vectors: $\mathbb{E}\big[z(u)^\top z(v)\big] = k(u,v)$ . For the specific form of $p_\omega$ for the Gaussian and IMQ kernels, and for further details, see Appendix C in Li et al. (2021). Algorithm 1 shows pseudocode for computing our proposed similarity for symmetric InfoNCE.
325
+
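As a concrete illustration of the RFF construction above, the following sketch (our own, not from the paper's released code; the dimensions are illustrative) samples $\omega_t \sim p_\omega$ and $\beta_t \sim \mathrm{Unif}[0, 2\pi)$ for the Gaussian kernel $k(u - v) = \exp(-\|u - v\|^2 / 2)$, whose spectral distribution $p_\omega$ is the standard normal, and checks that $z(u)^\top z(v)$ approximates $k(u, v)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 20000  # input dimension, number of random features

# For the Gaussian kernel k(u - v) = exp(-||u - v||^2 / 2),
# the spectral distribution p_omega is the standard normal on R^d.
omega = rng.normal(size=(D, d))            # omega_1, ..., omega_D ~ p_omega
beta = rng.uniform(0.0, 2.0 * np.pi, D)    # beta_1, ..., beta_D ~ Unif[0, 2pi)

def z(v):
    """Random Fourier feature map of Eq. 9: R^d -> R^D."""
    return np.sqrt(2.0 / D) * np.cos(omega @ v + beta)

u = 0.3 * rng.normal(size=d)
v = 0.3 * rng.normal(size=d)
exact = np.exp(-0.5 * np.sum((u - v) ** 2))  # k(u, v)
approx = z(u) @ z(v)                         # unbiased RFF estimate
print(abs(exact - approx))  # small for large D
```

The estimation error shrinks at rate $O(1/\sqrt{D})$, so larger $D$ gives a tighter approximation at higher cost.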
326
+ Model architecture In addition to the modifications in Section 5.3, we modify the Transformer encoders as follows. To stabilize training, we add an activation function, $100 \tanh(\cdot / 100)$ , after the projection layer for the weights in order to restrict their range; in preliminary experiments, we found that model parameters diverged during pretraining without it. Following CLIP, we apply L2-normalization to the points $v_i$ ( $i \in [M]$ ) in weighted point sets, and use an inverse temperature parameter $\tau^{-1}$ to scale the similarity of weighted point sets (Algorithm 1). In typical CLIP implementations, $\tau^{-1}$ is computed with an exponential activation as $\tau^{-1} = \exp(\theta)$ , where $\theta$ is a learnable parameter, and clipped to a certain range, such as [1, 100]. However, in preliminary experiments, we found that $\tau^{-1}$ increased rapidly to the maximum value at the beginning of pretraining when we used the exponential activation together with the weighted point set similarity, which harmed model performance. Therefore, we remove the exponential activation and use $\tau^{-1} = \theta$ to scale the proposed similarity. The range for clipping is set to [1, 100].
327
+
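To make the weighted point set similarity concrete, the following sketch (our own; variable names, dimensions, and the choice of a Gaussian kernel with illustrative coefficients are assumptions, not taken from the released code) implements steps 7–9 of Algorithm 1, including the L2-normalization of the points and the inverse temperature scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
B, M_x, M_y, d, D = 4, 197, 77, 512, 1024  # batch, set sizes, point dim, RFF dim

# Weighted point sets produced by the two encoders (illustrative random values).
w_x = rng.normal(size=(B, M_x))
v_x = rng.normal(size=(B, M_x, d))
w_y = rng.normal(size=(B, M_y))
v_y = rng.normal(size=(B, M_y, d))

# L2-normalize the points, as in CLIP.
v_x /= np.linalg.norm(v_x, axis=-1, keepdims=True)
v_y /= np.linalg.norm(v_y, axis=-1, keepdims=True)

# Shared RFF parameters for a Gaussian kernel (Eq. 9).
omega = rng.normal(size=(D, d))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(v):  # v: (..., d) -> (..., D)
    return np.sqrt(2.0 / D) * np.cos(v @ omega.T + beta)

# Weighted sums of point features (steps 7-8 of Algorithm 1).
vbar_x = np.einsum("bm,bmd->bd", w_x, v_x)       # linear-kernel part
vbar_y = np.einsum("bm,bmd->bd", w_y, v_y)
zbar_x = np.einsum("bm,bmD->bD", w_x, rff(v_x))  # nonlinear-kernel part
zbar_y = np.einsum("bm,bmD->bD", w_y, rff(v_y))

# Step 9: combined similarity matrix, scaled by the inverse temperature.
alpha1, alpha2, inv_tau = 1.0, 1.0, 10.0
S = inv_tau * (alpha1 * vbar_x @ vbar_y.T + alpha2 * zbar_x @ zbar_y.T)
print(S.shape)  # (B, B) matrix fed to the symmetric InfoNCE loss
```

Note that the weights are unconstrained (they may be negative), consistent with the ablation in Section A.4, where constraining them to be positive via a sigmoid destabilized training.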
328
+ # A.2 PRETRAINING
329
+
330
+ The text Transformer model we used is a 12-layer, 512-wide Transformer with eight attention heads. We utilized a byte pair encoding (BPE) tokenizer with a vocabulary size of $49\mathrm{K}$ and a maximum context length of 77. Based on the Transformer architectures, we set $M^{(\mathcal{X})}, M^{(\mathcal{Y})}$ , and $d$ of the weighted point sets to 197, 77, and 512, respectively. As data augmentation, images were randomly resized and cropped with a scaling factor between 0.5 and 1.0 and bicubic interpolation. Models were trained for 50 epochs on CC3M and for 35 epochs on CC12M. We set the batch size to 2048. For pretraining, we used the AdamW optimizer with $\beta_2 = 0.98$ and cosine learning rate scheduling with a linear warmup. We set the initial learning rate to 0.0005 and the weight decay to 0.5. We used the built-in automatic mixed precision library in PyTorch (Paszke et al., 2019).
331
+
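The cosine learning rate schedule with linear warmup described above can be sketched as follows (the warmup and total step counts are illustrative assumptions; the paper does not state them):

```python
import math

base_lr = 5e-4        # initial learning rate from the pretraining setup
warmup_steps = 1000   # illustrative; not specified in the paper
total_steps = 100000  # illustrative; not specified in the paper

def learning_rate(step):
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(warmup_steps), learning_rate(total_steps))
# 0.0 0.0005 0.0
```

In practice this function would be passed to an optimizer-level scheduler (e.g., a lambda-based scheduler) and evaluated once per optimization step.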
332
+ # A.3 CLASSIFICATION EVALUATIONS
333
+
334
+ Table 4: 13 datasets used for classification evaluations.
335
+
336
+ <table><tr><td>Dataset</td><td>Classes</td><td>Train</td><td>Val</td><td>Test</td></tr><tr><td>ImageNet (Russakovsky et al., 2015)</td><td>1000</td><td>1153051</td><td>128116</td><td>50000</td></tr><tr><td>CIFAR-10 (Krizhevsky, 2009)</td><td>10</td><td>45000</td><td>5000</td><td>10000</td></tr><tr><td>CIFAR-100 (Krizhevsky, 2009)</td><td>100</td><td>45000</td><td>5000</td><td>10000</td></tr><tr><td>STL-10 (Coates et al., 2011)</td><td>10</td><td>4500</td><td>500</td><td>8000</td></tr><tr><td>Food-101 (Bossard et al., 2014)</td><td>101</td><td>68175</td><td>7575</td><td>25250</td></tr><tr><td>Caltech-101 (Fei-Fei et al., 2006)</td><td>102</td><td>2754</td><td>306</td><td>6085</td></tr><tr><td>Stanford Cars (Krause et al., 2013)</td><td>196</td><td>7330</td><td>814</td><td>8041</td></tr><tr><td>FGVC Aircraft (Maji et al., 2013)</td><td>100</td><td>3334</td><td>3333</td><td>3333</td></tr><tr><td>Oxford Flowers (Nilsback &amp; Zisserman, 2008)</td><td>102</td><td>1020</td><td>1020</td><td>6149</td></tr><tr><td>EuroSAT (Helber et al., 2019)</td><td>10</td><td>9000</td><td>1000</td><td>5000</td></tr><tr><td>DTD (Cimpoi et al., 2014)</td><td>47</td><td>1880</td><td>1880</td><td>1880</td></tr><tr><td>Oxford Pets (Parkhi et al., 2012)</td><td>37</td><td>3312</td><td>368</td><td>3669</td></tr><tr><td>SUN397 (Xiao et al., 2010)</td><td>397</td><td>76129</td><td>10867</td><td>21758</td></tr></table>
337
+
338
+ The properties of the datasets we used in the classification tasks are listed in Table 4. In Table 5, we show the results of the same zero-shot classification as presented in Section 6.2 but with the standard deviation included.
339
+
340
+ Table 5: Zero-shot classification performance.
341
+
342
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">CC3M</td><td colspan="2">CC12M</td></tr><tr><td>WPSE Gaussian</td><td>WPSE IMQ</td><td>WPSE Gaussian</td><td>WPSE IMQ</td></tr><tr><td>ImageNet</td><td>21.20 ± 0.05</td><td>21.36 ± 0.04</td><td>39.95 ± 0.06</td><td>39.26 ± 0.06</td></tr><tr><td>CIFAR-10</td><td>59.95 ± 0.20</td><td>61.22 ± 0.51</td><td>81.33 ± 0.33</td><td>80.31 ± 0.24</td></tr><tr><td>CIFAR-100</td><td>23.58 ± 0.13</td><td>25.91 ± 0.19</td><td>49.49 ± 0.16</td><td>47.53 ± 0.05</td></tr><tr><td>STL-10</td><td>80.61 ± 0.37</td><td>81.64 ± 0.25</td><td>91.25 ± 0.09</td><td>91.83 ± 0.13</td></tr><tr><td>Food-101</td><td>14.56 ± 0.08</td><td>13.17 ± 0.05</td><td>50.63 ± 0.08</td><td>51.82 ± 0.20</td></tr><tr><td>Caltech-101</td><td>51.18 ± 0.12</td><td>50.15 ± 0.10</td><td>74.66 ± 0.20</td><td>73.54 ± 0.26</td></tr><tr><td>Cars</td><td>1.49 ± 0.02</td><td>1.41 ± 0.08</td><td>24.14 ± 0.16</td><td>21.92 ± 0.13</td></tr><tr><td>Aircraft</td><td>1.35 ± 0.12</td><td>1.84 ± 0.13</td><td>2.54 ± 0.09</td><td>1.62 ± 0.15</td></tr><tr><td>Flowers</td><td>12.60 ± 0.10</td><td>12.14 ± 0.15</td><td>30.11 ± 0.25</td><td>29.53 ± 0.26</td></tr><tr><td>EuroSAT</td><td>19.98 ± 0.18</td><td>22.02 ± 0.92</td><td>23.28 ± 0.36</td><td>28.36 ± 0.38</td></tr><tr><td>DTD</td><td>13.40 ± 0.24</td><td>13.69 ± 0.13</td><td>21.17 ± 0.23</td><td>21.62 ± 0.28</td></tr><tr><td>Pets</td><td>13.60 ± 0.18</td><td>13.88 ± 0.16</td><td>61.41 ± 0.15</td><td>57.31 ± 1.61</td></tr><tr><td>SUN397</td><td>34.16 ± 0.11</td><td>33.05 ± 0.15</td><td>49.57 ± 0.11</td><td>49.54 ± 0.29</td></tr></table>
343
+
344
+ # A.4 ABLATION STUDY ON CC12M
345
+
346
+ In this section, we present the results of an ablation study on CC12M. We trained two variant models that output weighted point sets. The first is WPSE Linear, described in Section 6.4, with coefficients $(\alpha_{1},\alpha_{2}) = (1,0)$ , whose weighted point set similarity is calculated with the linear kernel only. The second has coefficients $(\alpha_{1},\alpha_{2}) = (0,1)$ and calculates the similarity of weighted point sets using only a nonlinear kernel. We denote this
347
+
348
+ Table 6: Ablation study. Models are trained on CC12M. Except for WPSE Linear, Gaussian kernel was used.
349
+
350
+ <table><tr><td>Model</td><td>Zero-shot</td><td>Linear</td></tr><tr><td>WPSE</td><td>46.12</td><td>79.08</td></tr><tr><td>WPSE Nonlinear</td><td>44.05</td><td>70.99</td></tr><tr><td>WPSE Linear</td><td>45.87</td><td>78.61</td></tr></table>
351
+
352
+ model as WPSE Nonlinear. Additionally, we trained a WPSE with positive weights using a final sigmoid activation, in the same manner as described in Section 6.4; however, the training of this model failed due to a NaN loss. Table 6 shows the average performance of zero-shot classification and linear classification on the 13 benchmark datasets. The results indicate that the combination of the linear kernel and a nonlinear kernel is beneficial for performance.
353
+
354
+ # B PROOFS OF STATEMENTS IN SECTION 4
355
+
356
+ # B.1 PROOF OF THEOREM 4.2
357
+
358
+ Proof. From the definition of $\bar{h}^g (x)$ , the $c$ -th entry of $\bar{h}^{g^*}(x)$ is calculated as follows:
359
+
360
+ $$
361
+ \begin{array}{l} \bar {h} ^ {g ^ {*}} (x) _ {c} = \left(\mathbb {E} _ {p (y | \mathcal {Y} _ {c})} \left[ \frac {1}{\tau^ {*}} f _ {\mathcal {Y}} ^ {*} (y) \right]\right) ^ {\top} f _ {\mathcal {X}} ^ {*} (x) + \ln P (\mathcal {Y} _ {c}) \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \frac {1}{\tau^ {*}} f _ {\mathcal {Y}} ^ {*} (y) ^ {\top} f _ {\mathcal {X}} ^ {*} (x) \right] + \ln P (\mathcal {Y} _ {c}) \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ g ^ {*} (x, y) \right] + \ln P (\mathcal {Y} _ {c}) \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \ln \frac {p (x , y)}{p (x) p (y)} \right] + \ln P (\mathcal {Y} _ {c}) + \Gamma . \\ \end{array}
362
+ $$
363
+
364
+ Since adding a constant to all entries of $h(x)$ doesn't change the supervised loss $\mathcal{L}_{\sup}(h)$ , we consider $\Gamma = 0$ for the sake of simplicity. The $c$ -th entry of $\bar{h}^{g^*}(x)$ is further rearranged as follows:
365
+
366
+ $$
367
+ \begin{array}{l} \bar {h} ^ {g ^ {*}} (x) _ {c} = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \ln \frac {p (x , y)}{p (x) p (y)} \right] + \ln P (\mathcal {Y} _ {c}) \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \ln \frac {p (x , y) p (x) P (\mathcal {Y} _ {c})}{p (x) p (y) p (x , \mathcal {Y} _ {c})} + \ln \frac {p (x , \mathcal {Y} _ {c})}{p (x) P (\mathcal {Y} _ {c})} \right] + \ln P (\mathcal {Y} _ {c}) \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \ln \frac {p (x , y) / p (x , \mathcal {Y} _ {c})}{p (y) / P (\mathcal {Y} _ {c})} \right] + \ln \frac {p (x , \mathcal {Y} _ {c})}{p (x)} \\ = \underset {p (y | \mathcal {Y} _ {c})} {\mathbb {E}} \left[ \ln \frac {p (y | x , \mathcal {Y} _ {c})}{p (y | \mathcal {Y} _ {c})} \right] + \ln P (\mathcal {Y} _ {c} | x) \\ = \ln P \left(\mathcal {Y} _ {c} | x\right) - D _ {\mathrm {K L}} \left(p _ {Y} \left(Y \mid \mathcal {Y} _ {c}\right) \| p _ {Y} \left(Y \mid x, \mathcal {Y} _ {c}\right)\right). \\ \end{array}
368
+ $$
369
+
370
+ Therefore, we have
371
+
372
+ $$
373
+ \begin{array}{l} \mathcal {L} _ {\sup } (\bar {h} ^ {g ^ {*}}) - \mathcal {L} _ {\sup } (h ^ {*}) \\ = \underset {p (x, c)} {\mathbb {E}} \left[ \ln P (c | x) - \bar {h} ^ {g ^ {*}} (x) _ {c} + \ln \left(\sum_ {i} \exp \bar {h} ^ {g ^ {*}} (x) _ {i}\right) \right] \\ = \underset {p (x, c)} {\mathbb {E}} \Big[ \ln P (c | x) - \ln P (\mathcal {Y} _ {c} | x) + D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {c}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {c})\right) \\ \qquad + \ln \Big(\sum_ {i} P (\mathcal {Y} _ {i} | x) \cdot \exp \left(- D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {i}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {i})\right)\right)\Big) \Big] \\ \leq \underset {p (x, c)} {\mathbb {E}} \left[ \ln P (c | x) - \ln P (\mathcal {Y} _ {c} | x) + D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {c}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {c})\right) + \ln \left(\sum_ {i} P (\mathcal {Y} _ {i} | x)\right) \right] \\ = \underset {p (x, c)} {\mathbb {E}} \left[ \ln P (c | x) - \ln P (\mathcal {Y} _ {c} | x) + D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {c}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {c})\right) + \ln P (\tilde {\mathcal {Y}} | x) \right] \\ = \underset {p (x, c)} {\mathbb {E}} \left[ \ln \frac {P (c | x)}{P (\mathcal {Y} _ {c} | x) / P (\tilde {\mathcal {Y}} | x)} + D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {c}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {c})\right) \right] \\ = \underset {p (x)} {\mathbb {E}} \left[ D _ {\mathrm {KL}} \left(P _ {C} (C | x) \,\|\, P _ {C} (C \mid x; (\mathcal {Y} _ {i}) _ {i \in [K]})\right) \right] + \underset {p (x, c)} {\mathbb {E}} \left[ D _ {\mathrm {KL}} \left(p _ {Y} (Y | \mathcal {Y} _ {c}) \,\|\, p _ {Y} (Y | x, \mathcal {Y} _ {c})\right) \right]. \\ \end{array}
374
+ $$
375
+
376
+ Here, the inequality holds by the monotonicity of $\ln (\cdot)$ , the non-negativity of $P(\mathcal{Y}_i|x)$ , and the non-negativity of KL divergence.
377
+
378
+ # B.2 PROOF OF LEMMA 4.3
379
+
380
+ Proof. For every $i \in [K]$ , it holds that
381
+
382
+ $$
383
+ \begin{array}{l} \left| \bar {h} ^ {g} (x) _ {i} - \bar {h} ^ {g ^ {*}} (x) _ {i} \right| = \left| \underset {p (y | \mathcal {Y} _ {i})} {\mathbb {E}} [ g (x, y) - g ^ {*} (x, y) ] \right| \\ \leq \underset {p (y | \mathcal {Y} _ {i})} {\mathbb {E}} \left[ \left| g (x, y) - g ^ {*} (x, y) \right| \right] \\ \leq \Delta . \\ \end{array}
384
+ $$
385
+
386
+ Let $\varsigma_{c}(z)$ denote the logarithm of the $c$ -th entry of the softmax function, i.e., $\varsigma_{c}(z) := \ln \frac{e^{z_{c}}}{\sum_{i=1}^{K} e^{z_{i}}}$ .
387
+
388
+ $$
389
+ \begin{array}{l} \left| \mathcal {L} _ {\sup } (\bar {h} ^ {g}) - \mathcal {L} _ {\sup } (\bar {h} ^ {g ^ {*}}) \right| = \left| \underset {p (x, c)} {\mathbb {E}} \left[ - \ln \frac {\exp \bar {h} ^ {g} (x) _ {c}}{\sum_ {i = 1} ^ {K} \exp \bar {h} ^ {g} (x) _ {i}} + \ln \frac {\exp \bar {h} ^ {g ^ {*}} (x) _ {c}}{\sum_ {i = 1} ^ {K} \exp \bar {h} ^ {g ^ {*}} (x) _ {i}} \right] \right| \\ \leq \underset {p (x, c)} {\mathbb {E}} \left[ \left| - \varsigma_ {c} (\bar {h} ^ {g} (x)) + \varsigma_ {c} (\bar {h} ^ {g ^ {*}} (x)) \right| \right] \tag {10} \\ \end{array}
390
+ $$
391
+
392
+ $\varsigma_{c}(z)$ is a differentiable function with respect to $z$ , and the partial derivative is given as follows:
393
+
394
+ $$
395
+ \frac {\partial \varsigma_ {c}}{\partial z _ {c}} = 1 - \frac {e ^ {z _ {c}}}{\sum_ {i = 1} ^ {K} e ^ {z _ {i}}},
396
+ $$
397
+
398
+ $$
399
+ \frac {\partial \varsigma_ {c}}{\partial z _ {j}} = \frac {- e ^ {z _ {j}}}{\sum_ {i = 1} ^ {K} e ^ {z _ {i}}} \quad \text {for } j \neq c.
400
+ $$
401
+
402
+ By the mean value theorem, there exists $\xi$ on the line segment between $\bar{h}^g (x)$ and $\bar{h}^{g^*}(x)$ such that
403
+
404
+ $$
405
+ - \varsigma_ {c} (\bar {h} ^ {g} (x)) + \varsigma_ {c} (\bar {h} ^ {g ^ {*}} (x)) = \nabla \varsigma_ {c} (\xi) ^ {\top} (- \bar {h} ^ {g} (x) + \bar {h} ^ {g ^ {*}} (x)).
406
+ $$
407
+
408
+ Therefore, we have
409
+
410
+ $$
411
+ \begin{array}{l} \underset {p (x, c)} {\mathbb {E}} \left[ \left| - \varsigma_ {c} (\bar {h} ^ {g} (x)) + \varsigma_ {c} (\bar {h} ^ {g ^ {*}} (x)) \right| \right] = \underset {p (x, c)} {\mathbb {E}} \left[ \left| \nabla \varsigma_ {c} (\xi) ^ {\top} (- \bar {h} ^ {g} (x) + \bar {h} ^ {g ^ {*}} (x)) \right| \right] \\ \leq \underset {p (x, c)} {\mathbb {E}} \left[ \left(\sum_ {i = 1} ^ {K} \left| \frac {\partial \varsigma_ {c}}{\partial z _ {i}} (\xi) \right|\right) \left\| \bar {h} ^ {g} (x) - \bar {h} ^ {g ^ {*}} (x) \right\| _ {\infty} \right] \\ \leq \underset {p (x, c)} {\mathbb {E}} [ 2 \Delta ] \\ = 2 \Delta . \tag {11} \\ \end{array}
412
+ $$
413
+
414
+ Here, the first inequality holds by Hölder's inequality. At the second inequality, we use
415
+
416
+ $$
417
+ \sum_ {i = 1} ^ {K} \left| \frac {\partial \varsigma_ {c}}{\partial z _ {i}} (\xi) \right| = 1 - \frac {e ^ {\xi_ {c}}}{\sum_ {i = 1} ^ {K} e ^ {\xi_ {i}}} + \frac {\sum_ {i \neq c} e ^ {\xi_ {i}}}{\sum_ {i = 1} ^ {K} e ^ {\xi_ {i}}} \leq 2.
418
+ $$
419
+
420
+ Combining Eqs. 10 and 11 finishes the proof.
421
+
422
+ ![](images/74e356daf6c9294be9d61e1cb668fa9f15c36073fc3e8aaa3dadf9c8cc90637c.jpg)
423
+
424
+ # C PROOFS OF STATEMENTS IN SECTION 5
425
+
426
+ # C.1 LIMITATION OF THE BILINEAR SIMILARITY
427
+
428
+ Proposition C.1. Let $A, B \in \mathbb{R}^{d \times M}$ , and $c \in \mathbb{R}$ . Let $J \in \mathbb{R}^{M \times M}$ denote the matrix in which all entries are 1. Then, we have $\mathrm{rank}(A^\top B - cJ) \leq d + 1$ .
429
+
430
+ Proof. We define $\tilde{A},\tilde{B}\in \mathbb{R}^{(d + 1)\times M}$ as follows:
431
+
432
+ $$
433
+ \tilde {A} = \left[ \begin{array}{c c c} & A & \\ \hline - 1 & \dots & - 1 \end{array} \right], \tilde {B} = \left[ \begin{array}{c c c} & B & \\ \hline c & \dots & c \end{array} \right].
434
+ $$
435
+
436
+ Then, we have $\tilde{A}^{\top}\tilde{B} = A^{\top}B - cJ$ . Since $\mathrm{rank}\tilde{A}\leq d + 1$ and $\mathrm{rank}\tilde{B}\leq d + 1$ , the statement holds.
437
+
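The rank bound in Proposition C.1 can be checked numerically. The following sketch (our own, with small illustrative dimensions; NumPy's `matrix_rank` uses an SVD-based tolerance) draws random $A$, $B$ and verifies that $\mathrm{rank}(A^\top B - cJ) \leq d + 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, c = 5, 50, 3.0  # illustrative sizes with d + 1 << M

A = rng.normal(size=(d, M))
B = rng.normal(size=(d, M))
J = np.ones((M, M))  # all-ones matrix

# Proposition C.1: rank(A^T B - c J) <= d + 1, even though the matrix is M x M.
r = np.linalg.matrix_rank(A.T @ B - c * J)
print(r)  # at most d + 1 = 6
```

This is the limitation the proposition formalizes: a bilinear similarity shifted by a constant can never exceed rank $d + 1$, which motivates the richer weighted point set similarity.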
438
+ # C.2 REPRESENTATIONAL CAPABILITY OF THE SIMILARITY BETWEEN WEIGHTED POINT SETS
439
+
440
+ We denote (joint) probability density functions of random variables by using their corresponding letters. For example, we denote the joint probability density function of the random variables $\tilde{X},\tilde{Y}$ and the probability density function of $\tilde{X}$ as $p_{\tilde{X},\tilde{Y}}$ and $p_{\tilde{X}}$ , respectively.
441
+
442
+ We impose the following assumptions on the generation process of random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ .
443
+
444
+ Assumption C.2 (Generation process). There exist random variables $\tilde{X},\tilde{Y}\in \mathbb{R}^d,Z^{(\mathcal{X})}\in \mathbb{R}^{d_{\mathcal{X}}}$ and $Z^{(\mathcal{Y})}\in \mathbb{R}^{d_{\mathcal{Y}}}$ that satisfy the following conditions.
445
+
446
+ (a) $(\tilde{X},\tilde{Y})$ , $Z^{(\mathcal{X})}$ , and $Z^{(\mathcal{Y})}$ are mutually independent.
447
+ (b) There exist continuous bijective mappings $h_{\mathcal{X}} \colon \mathbb{R}^d \times \mathbb{R}^{d_{\mathcal{X}}} \to \mathcal{X}$ and $h_{\mathcal{Y}} \colon \mathbb{R}^d \times \mathbb{R}^{d_{\mathcal{Y}}} \to \mathcal{Y}$ such that $X = h_{\mathcal{X}}(\tilde{X}, Z^{(\mathcal{X})})$ and $Y = h_{\mathcal{Y}}(\tilde{Y}, Z^{(\mathcal{Y})})$ .
448
+ (c) The support $\operatorname{supp} p_{\tilde{X},\tilde{Y}} \subseteq \mathbb{R}^d \times \mathbb{R}^d$ of the distribution $p_{\tilde{X},\tilde{Y}}$ is compact.
449
+ (d) The pointwise mutual information $\mathrm{PMI}_{\tilde{X},\tilde{Y}}(\tilde{x},\tilde{y})\coloneqq \ln \frac{p_{\tilde{X},\tilde{Y}}(\tilde{x},\tilde{y})}{p_{\tilde{X}}(\tilde{x})p_{\tilde{Y}}(\tilde{y})}$ of $\tilde{X}$ and $\tilde{Y}$ is an $L$ -Lipschitz function on $\operatorname {supp}p_{\tilde{X}}\times \operatorname {supp}p_{\tilde{Y}}$ .
450
+
451
+ Assumption (b) means that data samples, $X$ and $Y$ , are generated from low-dimensional latent variables, $(\tilde{X}, Z^{(\mathcal{X})})$ and $(\tilde{Y}, Z^{(\mathcal{Y})})$ , respectively. Assumption (a) means that the dependency between $X$ and $Y$ stems only from $\tilde{X}$ and $\tilde{Y}$ , and that $Z^{(\mathcal{X})}$ and $Z^{(\mathcal{Y})}$ are latent variables specific to the domains $\mathcal{X}$ and $\mathcal{Y}$ , respectively. From these two assumptions, it follows that there exists
452
+
453
+ a 1-to-1 correspondence between $(x,y)\in \mathcal{X}\times \mathcal{Y}$ and $(\tilde{x},\tilde{y},z^{(\mathcal{X})},z^{(\mathcal{Y})})\in \mathbb{R}^d\times \mathbb{R}^d\times \mathbb{R}^{d_{\mathcal{X}}}\times \mathbb{R}^{d_{\mathcal{Y}}}$ , and that $\frac{p_{X,Y}(x,y)}{p_X(x)p_Y(y)} = \frac{p_{\tilde{X},\tilde{Y}}(\tilde{x},\tilde{y})\,p_{Z^{(\mathcal{X})}}(z^{(\mathcal{X})})\,p_{Z^{(\mathcal{Y})}}(z^{(\mathcal{Y})})}{p_{\tilde{X}}(\tilde{x})\,p_{Z^{(\mathcal{X})}}(z^{(\mathcal{X})})\,p_{\tilde{Y}}(\tilde{y})\,p_{Z^{(\mathcal{Y})}}(z^{(\mathcal{Y})})} = \frac{p_{\tilde{X},\tilde{Y}}(\tilde{x},\tilde{y})}{p_{\tilde{X}}(\tilde{x})\,p_{\tilde{Y}}(\tilde{y})}$ .
454
+
455
+ To prove Theorem 5.1, we use the following statements.
456
+
457
+ Proposition C.3 ((Aronszajn, 1950; Sriperumbudur et al., 2011)). Let $X$ be a topological space and let $\mathcal{H}$ be a reproducing kernel Hilbert space of the functions on $X$ with $k\colon X\times X\to \mathbb{R}$ as its reproducing kernel. Then,
458
+
459
+ $$
460
+ \left\{\sum_ {j \in [ n ]} c _ {j} k (\cdot , x _ {j}) \mid n \in \mathbb {N}, \{c _ {j}: j \in [ n ] \} \subset \mathbb {R}, \{x _ {j}: j \in [ n ] \} \subset X \right\}
461
+ $$
462
+
463
+ is dense in $\mathcal{H}$
464
+
465
+ Lemma C.4. Let $X$ be a topological space and let $\mathcal{H}$ be a reproducing kernel Hilbert space of the functions on $X$ with a bounded kernel $k\colon X\times X\to \mathbb{R}$ . Let $\sup_{x\in X}k(x,x)\leq \kappa$ . For any $f,g\in \mathcal{H}$ , if $\| f - g\|_{\mathcal{H}} < \varepsilon$ , then $\| f - g\|_{\infty} < \sqrt{\kappa}\varepsilon$ .
466
+
467
+ Proof. For any $x \in X$ ,
468
+
469
+ $$
470
+ | f (x) - g (x) | = \left| \langle k (x, \cdot), f - g \rangle_ {\mathcal {H}} \right| \leq \| k (x, \cdot) \| _ {\mathcal {H}} \| f - g \| _ {\mathcal {H}} < \sqrt {\kappa} \varepsilon .
471
+ $$
472
+
473
+ Definition C.5 ( $c_{0}$ -universal, (Sriperumbudur et al., 2011)). A bounded kernel $k$ with $k(\cdot, x) \in C_0(X)$ for all $x \in X$ , defined on a locally compact Hausdorff space $X$ , is said to be $c_{0}$ -universal if the RKHS $\mathcal{H}$ induced by $k$ is dense in $C_0(X)$ w.r.t. the uniform norm, i.e., for every function $g \in C_0(X)$ and all $\varepsilon > 0$ , there exists an $f \in \mathcal{H}$ such that $\| f - g \|_{\infty} \leq \varepsilon$ .
474
+
475
+ We now present the proof of Theorem 5.1.
476
+
477
+ Proof of Theorem 5.1. First, we fix $\varepsilon >0$ . We prove the statement by explicitly constructing $M^{(\mathcal{X})}$ , $M^{(\mathcal{Y})}$ , $f_{\mathcal{X}}$ , and $f_{\mathcal{Y}}$ that satisfy Eq. 7.
478
+
479
+ From (b) of Assumption C.2, there exist continuous inverse functions of $h_{\mathcal{X}}$ and $h_{\mathcal{Y}}$ . Consider the following restrictions of the functions $h_{\mathcal{X}}^{-1}$ and $h_{\mathcal{Y}}^{-1}$ : for $x = h_{\mathcal{X}}(\tilde{x}, z^{(\mathcal{X})})$ and $y = h_{\mathcal{Y}}(\tilde{y}, z^{(\mathcal{Y})})$ , it holds that
480
+
481
+ $$
482
+ \tilde {x} = h _ {\mathcal {X}} ^ {- 1} | _ {\tilde {X}} (x),
483
+ $$
484
+
485
+ $$
486
+ z ^ {(\mathcal {X})} = h _ {\mathcal {X}} ^ {- 1} | _ {Z ^ {(\mathcal {X})}} (x),
487
+ $$
488
+
489
+ $$
490
+ \tilde {y} = h _ {\mathcal {Y}} ^ {- 1} | _ {\tilde {Y}} (y),
491
+ $$
492
+
493
+ $$
494
+ z ^ {(\mathcal {Y})} = h _ {\mathcal {Y}} ^ {- 1} | _ {Z ^ {(\mathcal {Y})}} (y).
495
+ $$
496
+
497
+ Then, from (a) of Assumption C.2, it follows that
498
+
499
+ $$
500
+ \begin{array}{l} \frac {p _ {X , Y} (x , y)}{p _ {X} (x) p _ {Y} (y)} = \frac {p _ {\tilde {X} , \tilde {Y}} (\tilde {x} , \tilde {y}) p _ {Z ^ {(X)}} (z ^ {(X)}) p _ {Z ^ {(Y)}} (z ^ {(Y)})}{p _ {\tilde {X}} (\tilde {x}) p _ {Z ^ {(X)}} (z ^ {(X)}) p _ {\tilde {Y}} (\tilde {y}) p _ {Z ^ {(Y)}} (z ^ {(Y)})} \\ = \frac {p _ {\tilde {X} , \tilde {Y}} (\tilde {x} , \tilde {y})}{p _ {\tilde {X}} (\tilde {x}) p _ {\tilde {Y}} (\tilde {y})} \\ = \frac {p _ {\tilde {X} , \tilde {Y}} \left(h _ {\mathcal {X}} ^ {- 1} | _ {\tilde {X}} (x) , h _ {\mathcal {Y}} ^ {- 1} | _ {\tilde {Y}} (y)\right)}{p _ {\tilde {X}} \left(h _ {\mathcal {X}} ^ {- 1} | _ {\tilde {X}} (x)\right) p _ {\tilde {Y}} \left(h _ {\mathcal {Y}} ^ {- 1} | _ {\tilde {Y}} (y)\right)}. \tag {12} \\ \end{array}
501
+ $$
502
+
503
+ To avoid complicated notations, we simply denote $h_{\mathcal{X}}^{-1}|_{\tilde{X}}(x)$ as $\tilde{x} (x)$ and $h_{\mathcal{Y}}^{-1}|_{\tilde{Y}}(y)$ as $\tilde{y} (y)$ in the following.
504
+
505
+ From (c) of Assumption C.2, Proposition C.3, Lemma C.4, and the definition of the $c_0$ -universal kernel, for any fixed $\tilde{y} \in \operatorname{supp} p_{\tilde{Y}}$ , there exist $M \in \mathbb{N}$ , $\{c_j \in \mathbb{R} \mid j \in [M]\}$ , and $\{\tilde{\eta}_j \in \mathbb{R}^d \mid j \in [M]\}$ such that, for any $\tilde{x} \in \operatorname{supp} p_{\tilde{X}}$ ,
506
+
507
+ $$
508
+ \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x}, \tilde {y}) - \sum_ {j \in [ M ]} c _ {j} k (\tilde {x}, \tilde {\eta} _ {j}) \right| < \frac {\varepsilon}{2}. \tag {13}
509
+ $$
510
+
511
+ We denote such $M, c_{j}$ , and $\tilde{\eta}_{j}$ as $M(\tilde{y}), c_{j}(\tilde{y})$ and $\tilde{\eta}_{j}(\tilde{y})$ , respectively.
512
+
513
+ Meanwhile, we define $B_r(\tilde{y}) \subset \mathbb{R}^d$ as the open ball of radius $r$ centered at $\tilde{y} \in \mathbb{R}^d$ . From (c) of Assumption C.2, the support of $p_{\tilde{Y}}$ is compact. Thus, for any $\varepsilon > 0$ , there exist $J \in \mathbb{N}$ and $J$ points $\tilde{y}_1, \tilde{y}_2, \dots, \tilde{y}_J \in \mathbb{R}^d$ such that $\operatorname{supp} p_{\tilde{Y}} \subseteq \bigcup_{j=1}^{J} B_{\varepsilon/(2L)}(\tilde{y}_j)$ . Given such $\tilde{y}_j$ ( $j \in [J]$ ), we define $\chi(\tilde{y})$ for $\tilde{y} \in \operatorname{supp} p_{\tilde{Y}}$ as one of the points $\tilde{y}_j$ ( $j \in [J]$ ) that satisfies $\tilde{y} \in B_{\varepsilon/(2L)}(\tilde{y}_j)$ . From (d) of Assumption C.2, it holds that, for any $(\tilde{x}, \tilde{y}) \in \operatorname{supp} p_{\tilde{X},\tilde{Y}}$ ,
514
+
515
+ $$
516
+ \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x}, \tilde {y}) - \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x}, \chi (\tilde {y})) \right| < \frac {\varepsilon}{2}. \tag {14}
517
+ $$
518
+
519
+ Now, we are ready to construct desirable $M^{(\mathcal{X})}, M^{(\mathcal{Y})}, f_{\mathcal{X}}$ and $f_{\mathcal{Y}}$ . Let $M^{(\mathcal{X})} = 1$ and $M^{(\mathcal{Y})} = \max_{j \in [J]} M(\tilde{y}_j)$ . We define $f_{\mathcal{Y}}: y \mapsto \left\{\left(w_j^{(\mathcal{Y})}, v_j^{(\mathcal{Y})}\right)\right\}_{j \in [M^{(\mathcal{Y})}]}$ as
520
+
521
+ $$
522
+ \begin{array}{l} w _ {j} ^ {(\mathcal {Y})} = c _ {j} \left(\chi (\tilde {y} (y))\right) \quad \text {for } 1 \leq j \leq M \left(\chi (\tilde {y} (y))\right), \\ w _ {j} ^ {(\mathcal {Y})} = 0 \qquad \text {for } M (\chi (\tilde {y} (y))) < j \leq M ^ {(\mathcal {Y})}, \\ v _ {j} ^ {(\mathcal {Y})} = \tilde {\eta} _ {j} (\chi (\tilde {y} (y))) \quad \text {for } 1 \leq j \leq M (\chi (\tilde {y} (y))). \\ \end{array}
523
+ $$
524
+
525
+ For $v_{j}^{(\mathcal{Y})}$ with $j$ such that $M(\chi(\tilde{y}(y))) < j \leq M^{(\mathcal{Y})}$ , we can choose any point in $\mathbb{R}^d$ . We define $f_{\mathcal{X}}$ as $f_{\mathcal{X}}(x) = \{(w_1, v_1)\} := \{(1, \tilde{x}(x))\}$ . Then, for every $(x, y) \in \operatorname{supp} p_{X,Y} \subseteq \mathcal{X} \times \mathcal{Y}$ ,
526
+
527
+ $$
528
+ \begin{array}{l} \left| \ln \frac {p _ {X , Y} (x , y)}{p _ {X} (x) p _ {Y} (y)} - \tilde {g} (f _ {\mathcal {X}} (x), f _ {\mathcal {Y}} (y)) \right| \\ = \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x} (x), \tilde {y} (y)) - \sum_ {i = 1} ^ {M ^ {(\mathcal {X})}} \sum_ {j = 1} ^ {M ^ {(\mathcal {Y})}} w _ {i} ^ {(\mathcal {X})} w _ {j} ^ {(\mathcal {Y})} k (v _ {i} ^ {(\mathcal {X})}, v _ {j} ^ {(\mathcal {Y})}) \right| \\ \leq \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x} (x), \tilde {y} (y)) - \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x} (x), \chi (\tilde {y} (y))) \right| \\ + \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x} (x), \chi (\tilde {y} (y))) - \sum_ {i = 1} ^ {M ^ {(\mathcal {X})}} \sum_ {j = 1} ^ {M ^ {(\mathcal {Y})}} w _ {i} ^ {(\mathcal {X})} w _ {j} ^ {(\mathcal {Y})} k \left(v _ {i} ^ {(\mathcal {X})}, v _ {j} ^ {(\mathcal {Y})}\right) \right| \\ \leq \frac {\varepsilon}{2} + \left| \operatorname {P M I} _ {\tilde {X}, \tilde {Y}} (\tilde {x} (x), \chi (\tilde {y} (y))) - \sum_ {j = 1} ^ {M (\chi (\tilde {y} (y)))} c _ {j} \Big (\chi (\tilde {y} (y)) \Big) k \Big (\tilde {x} (x), \tilde {\eta} _ {j} (\chi (\tilde {y} (y))) \Big) \right| \\ < \frac {\varepsilon}{2} + \frac {\varepsilon}{2} = \varepsilon . \\ \end{array}
529
+ $$
530
+
531
+ Here, the first inequality holds by the triangle inequality. The second inequality holds from Eq. 14 and the definitions of $f_{\mathcal{X}}$ and $f_{\mathcal{Y}}$ . The third inequality holds from Eq. 13.
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f558cfa9852d136544d601f4ebba5b410d781f85663536a6065ed83179d59d5f
3
+ size 869855
ICLR/2025/Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2954b89acd68106a50b35b5084ec01538f61b44b54ac94ef78b9c64541f65c1e
3
+ size 904073
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:209487086683ba1a0e79d8c603417b39df2c2a8d3fefb316bbcba862ccb34a2c
3
+ size 258018
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e213ac5c83373990e2d43916bcccc668add1c2bae71884bd9cdfe8d22fcdb71f
3
+ size 296151
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/b8a14e2d-c026-47d0-a491-b0ccd3da7e13_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c858bed707fbfb0e3f805e39f4ba2bc4aade11646b8c251f2a295378ffcf0b76
3
+ size 1625164
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae2b52038400b085068858b2697d29f831f7a87674622c15e22de02d9367911c
3
+ size 1775609
ICLR/2025/What Does It Mean to Be a Transformer_ Insights from a Theoretical Hessian Analysis/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc84bcd533e2d2979bc501615296ebd4a2f8fd2c19dbf934f2ec98f4461dba36
3
+ size 1326519
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2a9aa4a5978c16ab7c5b0daf394dde753c68c73f2bf712c1445bc3791b910a09
3
+ size 153007
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec271cfe337bf625bb59ce80e1f58d58e338c5b2f431bc87aa4b16f7295be516
3
+ size 178739
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/8de9ed94-a2b6-43af-8214-254ceb1d226d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ef04845bd002e42961a6776491150f0cbba0877bceb1f73855b51189d509208
3
+ size 9670088
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/full.md ADDED
@@ -0,0 +1,494 @@
 
 
 
 
1
+ # WHAT MAKES A GOOD DIFFUSION PLANNER FOR DECISION MAKING?
2
+
3
+ Haofei Lu $^{1}$ Dongqi Han $^{2\dagger}$ Yifei Shen $^{2}$ Dongsheng Li $^{2}$
4
+
5
+ $^{1}$ Tsinghua University $^{2}$ Microsoft Research Asia
6
+
7
+ luhf23@mails.tsinghua.edu.cn
8
+
9
+ {dongqihan,yifeishen,Dongsheng.Li}@microsoft.com
10
+
11
+ # ABSTRACT
12
+
13
+ Diffusion models have recently shown significant potential in solving decision-making problems, particularly in generating behavior plans – also known as diffusion planning. While numerous studies have demonstrated the impressive performance of diffusion planning, the mechanisms behind the key components of a good diffusion planner remain unclear, and the design choices are highly inconsistent in existing studies. In this work, we address this issue through systematic empirical experiments on diffusion planning in an offline reinforcement learning (RL) setting, providing practical insights into the essential components of diffusion planning. We trained and evaluated over 6,000 diffusion models, identifying critical components such as guided sampling, network architecture, action generation, and planning strategy. We revealed that some design choices opposite to the common practice in previous work on diffusion planning actually lead to better performance, e.g., unconditional sampling with selection can be better than guided sampling, and Transformer outperforms U-Net as the denoising network. Based on these insights, we suggest a simple yet strong diffusion planning baseline that achieves state-of-the-art results on standard offline RL benchmarks.
14
+
15
+ # 1 INTRODUCTION
Decision making by learning from offline data has been a fundamental approach in robotics and artificial intelligence (Bellman, 1957). It enables agents to acquire complex behaviors by observing and mimicking expert demonstrations, circumventing the need for explicit programming or exhaustive exploration. However, this paradigm faces significant challenges, particularly when dealing with long-horizon planning and high-dimensional action spaces. The complexity of modeling sequential dependencies and capturing the intricacies of action distributions makes it difficult to scale traditional methods (Deisenroth & Rasmussen, 2011) to more complex tasks (Parmas et al., 2018).

Recently, diffusion models have achieved remarkable success in image and video generation, demonstrating their ability to handle complex distributions and long-range dependencies (Ho et al., 2020; Dhariwal & Nichol, 2021). Inspired by these works, several recent studies have applied diffusion models to planning sequential decisions, especially with continuous state and action spaces such as robotic manipulation tasks (Janner et al., 2022; Ajay et al., 2022; Lu et al., 2023; Li et al., 2023). The diffusion models are used to approximate the sequence of states and actions from the current time step into the future, and by exploiting the diffusion models' conditional generation capacity, such as diffusion guidance (Ho et al., 2020; Ho & Salimans, 2021), the model can make plans (i.e., state trajectories) with desired properties such as reward maximization (i.e., offline reinforcement learning (Levine et al., 2020)).

Despite achieving impressive performance across a diverse array of tasks, there has been limited exploration into the fundamental components and mechanisms that constitute an effective diffusion planning model for decision making. Previous research exhibits a lack of consistency and coherence in design choices. It remains uncertain whether sub-optimal design choices might hinder the full potential of diffusion models within decision-making domains. Specifically, existing approaches have not adequately addressed essential facets such as the choice of diffusion guidance algorithm, network architecture, and whether the plan should contain states or state-action pairs. This naturally raises the following fundamental question:
What makes a good diffusion planner for decision making, especially offline RL?

We seek to answer this question by conducting a comprehensive empirical investigation into key design choices in diffusion models for decision making, in particular for state-based robotics tasks. Our work contributes to the field of decision making and diffusion models in several aspects.

- Comprehensive experiments: We conducted an extensive empirical study to explore what constitutes an effective diffusion planner. By training and evaluating over 6,000 models, we analyzed key components critical to decision making in diffusion planning, including guided sampling algorithms, network architectures, action generation methods, and planning strategies.
- Insights and tips: We ran detailed experiments and data analysis to understand the role of each key component in constituting a good diffusion planner. In particular, we discovered that certain design choices, contrary to common practice in diffusion planning, actually lead to better performance. Our work offers intuitive explanations and practical tips about these choices and provides insights into the strengths and limitations of diffusion planning.
- A simple yet strong baseline: Building on the insights from our study, we suggest a simple yet highly competitive baseline, named Diffusion Veteran (DV). This model achieves state-of-the-art performance in planning tasks in standard offline RL benchmarks.
# 2 BACKGROUND AND RELATED WORK

Offline Reinforcement Learning (Fujimoto et al., 2019; Levine et al., 2020; Fu et al., 2020) is a subfield of reinforcement learning (RL) where the agent learns from a fixed dataset of past experiences. This dataset typically consists of state-action-reward-next-state tuples, which encapsulate the agent's interactions with the environment. The challenge in offline RL is for the agent to derive an effective policy from this static dataset without further exploration or interaction with the environment. Two major challenges arise in this context. First, the state and action spaces may be high-dimensional and involve long-range dependencies, making them difficult to model effectively (Levine et al., 2020). Second, the learned policy must be optimal, even though the behavior policy that generated the offline data may be sub-optimal or different from the desired policy (Fujimoto et al., 2019).

Recently, diffusion models have emerged as a powerful framework for tasks such as image and video generation due to their ability to model complex distributions (Croitoru et al., 2023), which could mitigate the first problem. Moreover, diffusion guidance techniques (Ho et al., 2020; Ho & Salimans, 2021) allow the model to generate samples that adhere to desired properties. The second challenge in offline RL, learning an optimal policy, can be addressed by diffusion guidance techniques to produce behavior that maximizes rewards. Building on this insight, a growing body of research has explored the use of diffusion models to generate behavior trajectories, denoted as $\tau$.

Diffusion planning (Ajay et al., 2022; Janner et al., 2022; Liang et al., 2023; Dai et al., 2023; Yang et al., 2023; Li et al., 2023; Yang et al., 2023; Chen et al., 2024; Dong et al., 2024c) considers that at time step $t$, a trajectory $\tau$ consists of the current and subsequent $H$ steps of state-action pairs or states:

$$
\tau = \left[ \begin{array}{llll} s_{t} & s_{t+1} & \dots & s_{t+H-1} \\ a_{t} & a_{t+1} & \dots & a_{t+H-1} \end{array} \right], \quad \text{or} \quad \tau = \left[ \begin{array}{llll} s_{t} & s_{t+1} & \dots & s_{t+H-1} \end{array} \right]. \tag{2.1}
$$
There is a guidance function to model the reward, such as the immediate reward $r_t$ or the state value function $v(s_t) = \mathbb{E}\left[\sum_{h=0}^{\mathrm{end}} \gamma^h r_{t+h}\right]$, where $\gamma$ is the discount factor (Sutton & Barto, 1998). In classifier guidance (CG) (Ho et al., 2020), a guidance network is learned simultaneously with the diffusion model; its input is the generated trajectory and its output is the accumulated rewards or value function. The gradient of the guidance network is used in the generation process of the diffusion model to maximize the rewards. Examples of diffusion planning with CG are (Janner et al., 2022; Liang et al., 2023; Zhang et al., 2022). In classifier-free guidance (CFG) (Ho & Salimans, 2021), the desired reward or value function is fed into the diffusion process as an additional argument. Examples include (Ajay et al., 2022; Li et al., 2023; Yang et al., 2023). However, despite some literature reviews such as Zhu et al. (2023), the field lacks a systematic study that elucidates the design space of diffusion planning in offline RL with substantial experimental results.

![](images/8aae589ef55e90a5f1e20b33141778645c4669ccbd3125a28fffe8eba7728552.jpg)
Figure 1: Diffusion planning framework for decision making. (a) The generation of a sequence plan using the denoising process of a diffusion model. A 3-joint robot arm is used as an illustrative example. (b) Important components and candidates in the framework. Each color corresponds to one component in the framework. A star indicates the preferred choice in our experiments.

![](images/ea57962b0489688d4dcd4c1645a36a0e7a442f75dbe3be6993c32008d01e714b.jpg)
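Both guidance schemes rely on value targets computed from the offline dataset. A minimal numpy sketch of computing $R_t = \sum_{h=0}^{\mathrm{end}} \gamma^h r_{t+h}$ for every step of a trajectory via the standard backward recursion (function and variable names are our own, not from the paper's code):

```python
import numpy as np

def discounted_returns(rewards: np.ndarray, gamma: float) -> np.ndarray:
    """Compute R_t = sum_{h=0}^{end} gamma^h * r_{t+h} for every step t of one
    trajectory, using the backward recursion R_t = r_t + gamma * R_{t+1}."""
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: sparse reward only at the final step, as in goal-reaching tasks.
rewards = np.array([0.0, 0.0, 0.0, 1.0])
print(discounted_returns(rewards, gamma=0.99))  # [0.970299 0.9801 0.99 1.]
```

These per-step returns serve either as regression targets for a CG guidance network / critic, or as the conditioning signal in CFG.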
Diffusion policy (Pearce et al., 2023; Wang et al., 2023b; Hansen-Estruch et al., 2023; Chen et al., 2023) is another popular use of diffusion models in decision making. The trajectory only includes $\tau = a_{t}$, without lookahead planning. The model is trained by combining the loss of imitation learning and model-free RL, as in classic offline RL methods (Kumar et al., 2020; Fujimoto & Gu, 2021). Diffusion policy methods aim to improve performance by leveraging the capacity of diffusion models to model the complex distribution of actions (the policy function). A recent study (Dong et al., 2024b) investigated the design space of diffusion policy and proposed that diffusion policies such as DQL (Wang et al., 2023b) can be a computationally efficient and powerful candidate for decision-making tasks.
# 3 STUDY DESIGN

# 3.1 KEY COMPONENTS AND MECHANISMS OF DIFFUSION PLANNER

Recent pioneering work in diffusion planning (Janner et al., 2022; Ajay et al., 2022; Chen et al., 2024) has demonstrated the potential of this approach in offline RL. However, the design choices in these studies vary significantly, and it remains unclear whether there is an optimal configuration for different domains. Our aim is to conduct a systematic analysis supported by comprehensive experimental results. To achieve this, we begin by listing key design components (excluding common deep learning hyperparameters such as learning rates) that have varied in previous studies. See Fig. 1(b) for an overview.

Guided sampling algorithms: Classifier guidance (CG) (Ho et al., 2020); classifier-free guidance (CFG) (Ho & Salimans, 2021); Monte Carlo sampling with selection (sample N unconditional trajectories and select the best, where the selection criterion is given by a critic function learned simultaneously with the diffusion model). Most previous diffusion planners used CG (Janner et al., 2022; Wang et al., 2023a; Chen et al., 2024) or CFG (Ajay et al., 2022; Li et al., 2023; Yang et al., 2023) for offline RL.
Denoising network backbone: U-Net (Ronneberger et al., 2015); Transformer (Vaswani et al., 2017). U-Net was used in most previous diffusion planners for state-based offline RL (Janner et al., 2022; Ajay et al., 2022; Wang et al., 2023a; Li et al., 2023; Chen et al., 2024).

Action generation: Learn the joint distribution of states and actions and directly execute the generated action at the current step (used in, e.g., Janner et al. (2022); Liang et al. (2023)); or learn and use inverse dynamics to compute actions from the state plan (used in, e.g., Ajay et al. (2022); Wang et al. (2023a)).

Planning strategy: Dense-step planning means the planned trajectory $\tau$ (Eq. 2.1) corresponds to $H$ contiguous steps in the environment (this is the conventional setting in diffusion planning (Janner et al., 2022; Ajay et al., 2022; Lu et al., 2023)); jump-step planning models $H \times m$ environment steps, where $m \in \mathbb{N}^{+}$ is the planning stride; hierarchical planning (studied by Li et al. (2023); Chen et al. (2024)).
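To make the dense-step versus jump-step distinction concrete, here is a small numpy sketch of how a planning target with stride $m$ could be sliced from a logged trajectory (names are illustrative, not the paper's code):

```python
import numpy as np

def jump_step_plan(states: np.ndarray, t: int, H: int, m: int) -> np.ndarray:
    """Slice the state plan [s_t, s_{t+m}, ..., s_{t+(H-1)m}]: H planning
    tokens that cover H*m environment steps."""
    idx = t + m * np.arange(H)
    assert idx[-1] < len(states), "trajectory too short for this horizon/stride"
    return states[idx]

states = np.arange(20).reshape(20, 1)          # 20 steps of a 1-D state
dense = jump_step_plan(states, t=0, H=4, m=1)  # covers 4 env steps
jump = jump_step_plan(states, t=0, H=4, m=4)   # covers 16 env steps
print(dense.ravel())  # [0 1 2 3]
print(jump.ravel())   # [ 0  4  8 12]
```

With the same number of denoised tokens ($H$), the jump-step plan looks ahead $m$ times farther, which is the property examined in Sect. 4.2.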
Details of the implementation are deferred to Appendices A and B.
# 3.2 EXPERIMENT PROCEDURE

Given the multitude of components involved, it is challenging to draw scientific conclusions directly from the collective results. Therefore, we structured our study using the following procedure:

(1) Conduct a comprehensive search over the key components (Sect. 3.1), combining grid search and manual tuning, to obtain the best results.
(2) Evaluate the effect of each component using the control-variable method; that is, modify only one component of the best model at a time and compare it with the original.
(3) After identifying which components are important and understanding how they affect performance, perform a deeper analysis to derive useful insights.

# 3.3 BENCHMARK

We conducted experiments on the D4RL dataset (Fu et al., 2020), one of the most widely used benchmarks for offline RL and imitation learning. The dataset covers a variety of task domains, including maze navigation, robot locomotion, robot arm manipulation, and vehicle driving, among others. For our experiments, we selected three sets of behavior planning tasks that were most commonly studied in prior works on offline RL and diffusion planning (Janner et al., 2021; Ajay et al., 2022; Janner et al., 2022; Liang et al., 2023; Li et al., 2023; Lu et al., 2023; Chen et al., 2024). These tasks (Fig. 2) encompass both planning and control challenges, providing a comprehensive evaluation across various problem settings. The performance metric considered in this work is the standard RL objective: the average total reward in an online testing episode.
![](images/921ca82f0a8e0428bfa563c8cf00dbbb03826838cf5fbc8863d5fada27d649fa.jpg)
Figure 2: Rendering of the benchmarking tasks considered in this study, where $\dim(\mathcal{S})$ and $\dim(\mathcal{A})$ denote the dimensions of the state and action spaces.

Maze2D tasks involve navigating a 2D maze, requiring the agent to find an optimal path to a goal. These tasks are used to test planning capabilities in environments where spatial reasoning is critical.

AntMaze presents a navigation challenge with a simulated ant robot. The agent controls a multi-legged robot to navigate through a 2D maze, combining both locomotion and planning.
<table><tr><td></td><td>Env</td><td colspan="3">Kitchen</td><td colspan="6">Antmaze</td><td colspan="4">Maze2D</td></tr><tr><td>Category</td><td>Dataset</td><td>Mixed</td><td>Partial</td><td>avg.</td><td>L.-div.</td><td>L.-play</td><td>M.-div.</td><td>M.-play</td><td>avg.</td><td>L.</td><td>M.</td><td>Umaze</td><td>avg.</td><td></td></tr><tr><td rowspan="4">Non-diffusion</td><td>BC</td><td>47.5</td><td>33.8</td><td>40.7</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>5</td><td>30.3</td><td>3.8</td><td>13.0</td><td></td></tr><tr><td>BCQ</td><td>8.1</td><td>18.9</td><td>13.5</td><td>2.2</td><td>6.7</td><td>0.0</td><td>0.0</td><td>2.2</td><td>6.2</td><td>8.3</td><td>12.8</td><td>9.1</td><td></td></tr><tr><td>CQL</td><td>51.0</td><td>49.8</td><td>50.4</td><td>61.2</td><td>53.7</td><td>15.8</td><td>14.9</td><td>36.4</td><td>12.5</td><td>5.0</td><td>5.7</td><td>7.7</td><td></td></tr><tr><td>IQL</td><td>51.0</td><td>46.3</td><td>48.7</td><td>47.5</td><td>39.6</td><td>70.0</td><td>71.2</td><td>57.1</td><td>58.6</td><td>34.9</td><td>47.4</td><td>47.0</td><td></td></tr><tr><td rowspan="6">Diffusion Policies</td><td>SfBC</td><td>45.4</td><td>47.9</td><td>46.7</td><td>45.5</td><td>59.3</td><td>82.0</td><td>81.3</td><td>67.0</td><td>74.4</td><td>73.8</td><td>73.9</td><td>74.0</td><td></td></tr><tr><td>DQL</td><td>62.6</td><td>60.5</td><td>61.6</td><td>56.6</td><td>46.4</td><td>78.6</td><td>76.6</td><td>64.6</td><td>-</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td>DQL*</td><td>55.1</td><td>65.5</td><td>60.3</td><td>70.6</td><td>81.3</td><td>82.6</td><td>87.3</td><td>80.5</td><td>186.8</td><td>152.0</td><td>140.6</td><td>159.8</td><td></td></tr><tr><td>IDQL</td><td>66.5</td><td>66.7</td><td>66.6</td><td>67.9</td><td>63.5</td><td>84.8</td><td>84.5</td><td>75.2</td><td>90.1</td><td>89.5</td><td>57.9</td><td>79.2</td><td></td></tr><tr><td>IDQL*</td><td>66.5</td><td>66.7</td><td>66.6</td><td>40.0</td><td>48.7</td><td>83.3</td><td>67.3</td><td>59.8</td><td>-</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td>CEP</td><td>-</td><td>-</td><td>-</td><td>64.8</td><td>66.6</td><td>83.8</td><td>83.6</td><td>74.7</td><td>-</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td rowspan="5">Diffusion Planners</td><td>Diffuser</td><td>52.5</td><td>55.7</td><td>54.1</td><td>27.3</td><td>17.3</td><td>2.0</td><td>6.7</td><td>13.3</td><td>123</td><td>121.5</td><td>113.9</td><td>119.5</td><td></td></tr><tr><td>AdpDfsr</td><td>51.8</td><td>55.5</td><td>53.7</td><td>8.7</td><td>5.3</td><td>6.0</td><td>12.0</td><td>8.0</td><td>167.9</td><td>129.9</td><td>135.1</td><td>144.3</td><td></td></tr><tr><td>DD</td><td>75.0</td><td>56.5</td><td>65.8</td><td>0.0</td><td>0.0</td><td>4.0</td><td>8.0</td><td>3.0</td><td>-</td><td>-</td><td>-</td><td></td><td></td></tr><tr><td>HD</td><td>71.7</td><td>73.3</td><td>72.5</td><td>83.6</td><td>-</td><td>88.7</td><td>-</td><td>-</td><td>128.4</td><td>135.6</td><td>155.8</td><td>139.9</td><td></td></tr><tr><td>DV (Ours)</td><td>73.6</td><td>94.0</td><td>83.8</td><td>80.0</td><td>76.4</td><td>87.4</td><td>89.0</td><td>83.2</td><td>203.6</td><td>150.7</td><td>136.6</td><td>163.6</td><td></td></tr></table>

Franka Kitchen simulates a robot arm performing a variety of manipulation tasks in a kitchen environment to achieve task goals across multiple stages.
# 4 EXPERIMENTAL RESULTS

We trained and evaluated over 6,000 diffusion models by sweeping the key components discussed in Sect. 3.1 and other hyper-parameters (see Appendix B for details).

By summarizing the results from the experiments, we identified an effective diffusion planning framework, which we call Diffusion Veteran (DV). The pseudocode of DV can be found in Algorithm 1. As shown in Table 1, DV outperforms all previous diffusion planning and diffusion policy methods. We hope DV will serve as a simple yet strong baseline for future research in diffusion planning.
Table 1: Normalized performance of various offline-RL methods. Our results (DV) are averaged over 500 episode seeds. The results of other methods are obtained from the literature. We omit the variance over seeds for simplicity; it can be found in the detailed tables in Appendix D. The best average performance on each task set is marked in bold. BC: vanilla imitation learning, BCQ: Fujimoto et al. (2019), CQL: Kumar et al. (2020), IQL: Kostrikov et al. (2021), SfBC: Chen et al. (2023), DQL: Wang et al. (2023b), IDQL: Hansen-Estruch et al. (2023), DQL* and IDQL*: replicated by Dong et al. (2024b), CEP: Lu et al. (2023), Diffuser: Janner et al. (2022), AdpDfsr: Liang et al. (2023), DD: Ajay et al. (2022), HD: Chen et al. (2024).

**Algorithm 1: Diffusion Veteran (DV) Simplified Pseudocode**

Input: Planning horizon $H$, dataset $\mathcal{D}$, discount factor $\gamma$, candidate number $N$, planning stride $M$.
Initialize: Diffusion Transformer planner $\epsilon_\theta$, diffusion inverse dynamics $\epsilon_\omega$, critic $V_\phi$.

1. Calculate accumulated discounted returns $R_t = \sum_{h=0}^{\mathrm{end}} \gamma^h r_{t+h}$ for every step $t$.
2. Function TRAINING:
3. &nbsp;&nbsp;&nbsp;&nbsp;Sample $s_{t}, s_{t+M}, \dots, s_{t+(H-1)M}$, $a_{t}, a_{t+M}, \dots, a_{t+(H-1)M}$, and $R_t$ from $\mathcal{D}$.
4. &nbsp;&nbsp;&nbsp;&nbsp;Train planner $\epsilon_\theta$ using $s_t$ as condition and $s_{t}, s_{t+M}, \dots, s_{t+(H-1)M}$ as target output.
5. &nbsp;&nbsp;&nbsp;&nbsp;Train inverse dynamics $\epsilon_\omega$ using $s_t, s_{t+M}$ as input and $a_t$ as target output.
6. &nbsp;&nbsp;&nbsp;&nbsp;Train critic $V_\phi$ using $s_{t}, s_{t+M}, \dots, s_{t+(H-1)M}$ as input and $R_t$ as target output.
7. end
8. Function EXECUTION($s$):
9. &nbsp;&nbsp;&nbsp;&nbsp;Randomly generate $N$ plans using $\epsilon_\theta$, fixing the first state to $s$ during sampling.
10. &nbsp;&nbsp;&nbsp;&nbsp;Select the best plan using critic $V_\phi$.
11. &nbsp;&nbsp;&nbsp;&nbsp;Use the inverse dynamics $\epsilon_\omega$ to generate the action from $s$ and the next state in the best plan.
12. end
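The EXECUTION phase of Algorithm 1 (sample $N$ plans, select with the critic, act via inverse dynamics) can be sketched in a few lines. The planner, critic, and inverse dynamics below are deliberately simple stand-ins — in DV they are learned diffusion and critic networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_plan(s0, H):
    """Stand-in for the diffusion planner: a random walk with the first state fixed to s0."""
    steps = rng.normal(size=(H - 1, s0.shape[0]))
    return np.vstack([s0, s0 + np.cumsum(steps, axis=0)])

def critic(plan):
    """Stand-in critic V_phi: negative distance of the final state to a goal at the origin."""
    return -np.linalg.norm(plan[-1])

def inverse_dynamics(s, s_next):
    """Stand-in for eps_omega: for a point mass, the action is the state difference."""
    return s_next - s

def dv_execute(s, H=8, N=16):
    plans = [sample_plan(s, H) for _ in range(N)]  # line 9: generate N candidate plans
    best = max(plans, key=critic)                  # line 10: select the best via the critic
    return inverse_dynamics(s, best[1])            # line 11: act toward the next planned state

action = dv_execute(np.array([1.0, 1.0]))
print(action.shape)  # (2,)
```

Replacing the three stand-ins with a trained diffusion planner, a trained critic, and a trained inverse dynamics model recovers the structure of DV's control loop.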
With DV in place, we can analyze the impact of each component in diffusion planning by examining how each one influences DV's performance. Each of the following sub-sections focuses on one component that we found to be crucial. At the end of this section, we distill our findings into practical tips.
# 4.1 ACTION GENERATION

![](images/986415f8c1a42e79f3745082d8093efd8958f000256a460373d5e81fd1b5f930.jpg)
Figure 3: Comparison of performance between two action generation strategies. "Separate" learns and uses inverse dynamics to compute actions from the state plan. "Joint" means learning the joint distribution of states and actions and directly executing the generated action at the current step (see "action generation" in Fig. 1(b)). A straightforward conclusion drawn from the results is that "Separate" is better than "Joint" when tackling higher-dimensional action spaces. The vertical dashed line indicates on-par performance.

![](images/72f1d18e4a34a2e6674f6b593093a09e7b4d111793536fed511c09954e8f6b3a.jpg)
The choice of action generation design (Sect. 3.1) remains a subject of ongoing debate within the field. On one side, the pioneering diffusion planner Diffuser, along with subsequent studies (Janner et al., 2022; Liang et al., 2023; Chen et al., 2024), employs a diffusion model to generate the joint distribution of action and state trajectories ("joint"). In contrast, studies by Ajay et al. (2022); Wang et al. (2023a); Du et al. (2024) have adopted inverse dynamics to generate actions based on planned states ("separate").

Our experimental findings favor the latter approach: although both strategies perform comparably in simpler environments such as Maze2D, which lacks robotic control elements, the "separate" approach significantly outperforms the "joint" strategy in more complex settings like Kitchen and AntMaze, which feature robotic control and higher-dimensional action spaces.

This observed disparity may be attributed to the additional complexity introduced when modeling the joint distribution of sequential states and actions, compared to modeling only the states. This complexity becomes particularly pronounced in environments where state transitions involve more complex actions due to higher-dimensional action spaces.

We tested both diffusion models and a vanilla MLP as the inverse dynamics model and found similar performance between them. We adhered to the diffusion inverse dynamics (Appendix B.1).
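As a toy illustration of the "separate" route, an inverse dynamics model can be fit directly to $(s_t, s_{t+1}) \to a_t$ tuples. Here we use a linear least-squares fit on a point-mass system where $a_t = s_{t+1} - s_t$ — a deliberately simple stand-in for the MLP or diffusion inverse dynamics used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Logged transitions from a 2-D point mass with dynamics s' = s + a.
S = rng.normal(size=(500, 2))
A = rng.normal(size=(500, 2))
S_next = S + A

# Inverse dynamics as a linear map a ~= [s, s'] @ W, fit by least squares.
X = np.hstack([S, S_next])
W, *_ = np.linalg.lstsq(X, A, rcond=None)

a_pred = X @ W
print(np.abs(a_pred - A).max() < 1e-8)  # True: the fit recovers a = s' - s
```

For real robotic dynamics the map is nonlinear and multimodal, which is why the paper uses learned MLP or diffusion models instead of a linear fit.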
# 4.2 PLANNING STRATEGY

![](images/7f89adbff97649abe4739b24068e8e8ee241123e1094327b05c71727b5.jpg)
Figure 4: Performance change of DV over planning stride. It reduces to dense-step planning when Stride=1. The star indicates the choice of DV.

![](images/26997f779cdf14717e8425ef9a7d89b48a017521a0dbf3a7ba66bbfdd267ab49.jpg)

![](images/ed423258686e5c24da9d27c3265c8422400f9915b8943284738735f8afd74c54.jpg)

One crucial result we found is that jump-step planning (Sect. 3.1) is beneficial in almost all cases, despite the fact that most previous work used dense-step planning. This is observed in DV (Fig. 4) and in diffusion planners generally (see Appendix D for extensive results).

An obvious benefit of jump-step planning is that, with the same number of planning steps, the model can look ahead farther. This may be crucial for planning tasks that require long-term credit assignment. The choice of stride should be related to the actual clock-time interval between two environment steps. Nonetheless, we suggest trying jump-step planning and sweeping the stride. This phenomenon also implies that the diffusion planner should plan at a more abstract level, or over a longer timescale. Interestingly, this is consistent with the neuroscientific fact that the intrinsic timescale of the prefrontal cortex (higher-level planning) is longer than that of the motor cortex (low-level control) (Murray et al., 2014; Runyan et al., 2017; Wang et al., 2018). A recent study (Chen et al., 2024) demonstrated impressive planning performance (Table 1, HD) using multi-timescale diffusion planning. Exploring the hierarchical paradigm of diffusion planning could be an interesting future direction.
# 4.3 DENOISING NETWORK BACKBONE

![](images/2eed3ad2f439b37b1682f7255d4e64cf7bd70b5e6b364f21c84f66febe7faef3.jpg)

![](images/d46546994319db21e0971156d48b6cf3b0a671a004c9c95633d1f642326f2bef.jpg)
Figure 5: Using Transformer as the backbone of the denoising network. (a) Performance comparison between Transformer and U-Net. The Transformer outperforms U-Net in 8 out of 9 sub-tasks and in all 3 main tasks. The number of parameters in the U-Net is comparable to that in the Transformers. Note that the error bars in Kitchen are too small to visualize (see Table 10 for numerical results). (b) Visualization of attention weights of the first layer in the Transformer network during the denoising process. More plots can be found in Appendix D.

![](images/9f80f57adad00bb1643de325729504756c178193cf5bb1435d828c50415be536.jpg)

![](images/8592f68afb4ffe23299c00207ab7920556a84c22be48a3dcc4e3809c66307bb5.jpg)

Most diffusion planners on the D4RL dataset use a 1-D U-Net for the denoising network. It is natural to question whether attention is all you need (Vaswani et al., 2017) for diffusion planning. Thus, we examined the benefit of replacing U-Net with the Transformer architecture as the backbone of the denoising model (Sect. 3.1) (see Appendix B for details about the network structures). The experimental results clearly support the use of the Transformer (Fig. 5(a)) in diffusion planning, consistent with the latest trend in image and video generation (Peebles & Xie, 2023; OpenAI, 2024).

We conducted a case study by examining the attention weights of the trained Transformer in the Kitchen environment (Fig. 5(b)), which reflect temporal credit assignment (i.e., how many steps ahead in the planning sequence the model attends to). First, we see that the model pays more attention to long-range elements in the trajectory than to short-range ones. This suggests that long-term dependency is crucial in this task, which breaks the local inductive bias of convolutional neural networks such as U-Net. Second, an interesting finding is that the characteristic attention length is consistent even with different planning strides (Sect. 4.2): 6 (attention steps) $\times$ 4 (stride) $\approx$ 25 (attention steps) $\times$ 1 (stride), as depicted in Fig. 5(b). This suggests that the Transformer finds correlations that are invariant across strides, contributing to its generalization performance.

More generally, we found long-term attention in the Transformer in the other tasks as well, although the attention patterns vary across tasks. The attention patterns typically feature slashes, which attend to a fixed number of steps prior, and vertical lines, which attend to key steps. We have included the attention weight visualizations in Appendix D. Future in-depth study will be needed to fully understand the role of long-term dependency and why the Transformer is observed to outperform the U-Net.
# 4.4 IMPACT OF NETWORK SIZE

Since the experimental results are in favor of the Transformer, one may wonder whether a "scaling law" (Kaplan et al., 2020) holds; in particular, whether performance scales up with model depth (Ye et al., 2024).

![](images/aee1dd6eaf254cada77b5f581a8c2cd74f9e916ebac42758c0e15e009883b47d.jpg)
Figure 6: Performance change over depth of the Transformer network as diffusion planner. The star indicates the choice of DV.

![](images/ee875fd3312ff569c529549065a4e1f73757a34f51e10220d92d6671b08a8f12.jpg)

![](images/9ccd1983262c26c07b65db512d0b96e9551197489cb8152e2d00dd046cb64c8a.jpg)

The results presented in Fig. 6 convey two clear messages. First, a 1-layer Transformer is not enough, except for the simplest sub-task (Maze2D-U). Second, a deeper model is not always better. This may be due to an intrinsic difference between decision making and natural language processing, as well as limitations of dataset size and quality, which require further study to systematically address.
# 4.5 GUIDED SAMPLING ALGORITHMS

![](images/3e6a54609ae7f479846c33500bd5b20ff59886001cda6785fce244c04c0cc21c.jpg)

![](images/37a4e75892ad2343797e5963fb8e54a602150a84ad3e1bf836a7020f24a4fa9e.jpg)
Figure 7: Analysis of guided sampling algorithms. (a) Performance comparison among different guided sampling algorithms for reward maximization. (b) Histogram of the value (accumulated discounted return in the future, $\sum_{h=0}^{\text{end}} \gamma^{h} r_{t+h}$, normalized to $[-1,1]$) of the data points in each environment. For AntMaze, the failed trajectories are omitted since their values are all 0.

Another inconsistent design choice in previous work lies in the guided sampling algorithm (Sect. 3.1), which enables the diffusion planner to generate plans that perform better than the average level of the dataset. Fig. 7(a) visualizes the corresponding empirical results (normalized) for our model. We can draw several conclusions from the results.

First, classifier guidance (CG) is comparable with classifier-free guidance (CFG), despite the fact that CFG is generally considered better than CG in image synthesis (Ho & Salimans, 2021). A potential reason is that the target value for CFG may need to be adjusted over time, since the total reward an agent can obtain in the future varies with the task stage; however, lacking a principled way to schedule it, we can only use a fixed target value for CFG.

Also, we observed that non-guidance can be better than guidance: Monte Carlo sampling with selection (MCSS) performs the best overall, except for Franka Kitchen, where MCSS lags slightly behind CFG. This is an important finding, since existing diffusion planners usually used CG or CFG (Chen et al., 2023; Wang et al., 2023b). To understand the potential underlying reasons, we plotted the value distribution of the data in each environment (Fig. 7(b)). It can be seen that in Maze2D and AntMaze there is a substantial amount of optimal and near-optimal experience, whereas in Kitchen most samples are sub-optimal (note that here optimality is with respect to the conditioning of the diffusion model). This may explain why CFG performs better than MCSS in Kitchen. Thus we can propose a hypothesis: no guidance (MCSS) can be better than guided generation (CG, CFG) if the dataset contains a substantial portion of expert demonstrations.
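This hypothesis can be illustrated with a toy simulation: when the return distribution of sampled plans contains a near-optimal mode (as in the Maze2D/AntMaze data), selecting the best of $N$ unconditional samples with an accurate critic already recovers near-optimal value. The numbers below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_plan_values(n):
    """Synthetic plan values: 20% near-optimal (~1.0), 80% mediocre (~0.3)."""
    optimal = rng.random(n) < 0.2
    return np.where(optimal, rng.normal(1.0, 0.02, n), rng.normal(0.3, 0.1, n))

N = 32
batches = sample_plan_values(N * 1000).reshape(1000, N)
mean_value = batches.mean()             # value of an average, unselected plan
best_of_n = batches.max(axis=1).mean()  # value after critic-based (MCSS) selection
print(round(float(mean_value), 2), round(float(best_of_n), 2))
```

With a 20% near-optimal mode, a batch of 32 samples almost surely contains an optimal plan, so selection alone approaches the optimal value; if the near-optimal mode were absent (as in Kitchen), selection could only pick the best mediocre sample, and guidance would be needed.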
# 4.6 COMPARISON TO DIFFUSION POLICY

![](images/46d6084e28ad62ace04209d85611760642df63c414d6d8ed1ba0053e8a4fd151.jpg)

![](images/be3fcfc6cc0f69ede112563c8f9781946c54c59c872ea2eb8914033b3b1c3368.jpg)

![](images/54b9b0bada23e930ead228c8be503d3e80241fe591b44a168ebfda3837e718e6.jpg)
Figure 8: Average performance of methods on different tasks. The horizontal dashed line indicates the best performance over all methods. DV (diffusion planning) stands out in Kitchen, Maze2D, and AntMaze, while DQL (diffusion policy) (Wang et al., 2023b) outperforms all diffusion planning methods in MuJoCo locomotion tasks. Refer to the caption of Table 1 for method details.

![](images/403559dbee7362eb53571154ef97ccf50da0f91729b33d5f18dc7b1ada866612.jpg)

Diffusion planning and diffusion policy represent two key approaches within diffusion-based decision making. After examining the core components of diffusion planners, we turn to a comparison of diffusion planning and diffusion policy across different environments. The experimental results are illustrated in Fig. 8. We observed that diffusion planning outperforms diffusion policy in AntMaze, Kitchen, and Maze2D, whereas diffusion policy excels in MuJoCo locomotion tasks. The first three environments require precise goal achievement, such as positioning an object exactly, necessitating long-term planning. This makes them well suited to diffusion planning, which generates entire trajectories at once. Furthermore, these environments feature sparse reward structures, posing challenges for the model-free RL algorithms typically used in diffusion policies (Wang et al., 2023b). In contrast, the objective in MuJoCo is simply to control agents to run faster, a task that requires little lookahead or intricate planning. RL loss functions can help diffusion policy (Wang et al., 2023b) achieve better results in such scenarios.
194
+
195
+ # 4.7 VALIDATIONS ON ADROIT DATASET
196
+
197
+ To examine whether the conclusions drawn from our experiments can generalize to other tasks, we conducted experiments on the Adroit Hand dataset (Rajeswaran et al., 2018; Fu et al., 2020), which features motion-captured human data applied to a realistic, high-degree-of-freedom robotic hand, combining challenges from both planning and control. It encompasses 8 challenging tasks highlighted in the original paper, covering pen twirling, door opening, hammer use, and object relocation. We found that the results are consistent with our findings, supporting their generalizability across tasks. The detailed results are deferred to Appendix C.
198
+
199
+ # 4.8 PRACTICAL TIPS TO TAKE HOME
200
+
201
+ Takeaway 1: Diffusion planning is most effective for tasks requiring long-term credit assignment, while diffusion policies better fit locomotion tasks that demand less long-term planning (Sect. 4.6)
202
+
203
+ Takeaway 2: It is recommended to generate state plans with diffusion planners and use an inverse dynamics model to compute the corresponding actions (Sect. 4.1).
204
+
205
+ Takeaway 3: Implementing jump-step planning can be highly beneficial; experimenting with different planning strides is encouraged (Sect. 4.2).
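+ As a rough sketch of jump-step planning, a stride-$k$ plan can be formed by subsampling states from a trajectory, so a fixed-horizon planner covers $k$ times more environment steps (representing trajectories as arrays is an illustrative assumption, not the paper's exact implementation):

```python
import numpy as np

def jump_step_plan(trajectory, horizon, stride):
    """Pick `horizon` states spaced `stride` steps apart, so the plan
    spans horizon * stride environment steps at no extra model cost."""
    idx = np.arange(horizon) * stride
    return trajectory[idx]

traj = np.arange(100).reshape(100, 1)  # 100 dummy 1-D states
plan = jump_step_plan(traj, horizon=4, stride=5)
# picks the states at timesteps 0, 5, 10, 15
```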
206
+
207
+ Takeaway 4: It is worth trying a Transformer as the backbone of the diffusion planner, especially for tasks that require long-term lookahead planning (Sect. 4.3).
208
+
209
210
+
211
+ Takeaway 5: A single-layer Transformer is insufficient for effective planning (Sect. 4.4).
212
+
213
+ Takeaway 6: Larger models do not necessarily lead to better performance in diffusion planners for offline RL (Sect. 4.4).
214
+
215
+ Takeaway 7: Non-guidance approaches, such as Monte Carlo unconditional sampling with selection, can outperform classifier or classifier-free guidance when the dataset contains enough near-optimal trajectories (Sect. 4.5).
216
+
217
+ # 5 DISCUSSIONS
218
+
219
+ Synergy between diffusion planning and diffusion policy. A significant avenue for future research involves a deeper exploration of the distinctions between diffusion planning and diffusion policy. Drawing on Daniel Kahneman's seminal work Thinking, Fast and Slow (Kahneman, 2011), human cognitive processes are categorized into System 1 and System 2. Diffusion policies are analogous to System 1 processes, as they operate rapidly and efficiently (Wang et al., 2023b), making them well-suited for tasks such as locomotion (Fig. 8) that do not require extensive deliberation or long-term planning. These policies manage routine decision making with the same efficiency as intuitive responses in human cognition. Conversely, diffusion planning mirrors System 2 thinking, characterized by its slower, more deliberate, and effortful nature. This approach is particularly effective for tasks that demand long-term credit assignment (Fig. 8), involving more computations to develop effective plans. In RL terminology, diffusion planning can be broadly classified as model-based, while diffusion policy aligns with model-free methodologies. Investigating the interplay between these two systems presents a compelling intersection for both machine learning and cognitive neuroscience (Gläscher et al., 2010; Duan et al., 2016; Botvinick et al., 2019). Studies from cognitive science indicate that the brain may use a synergistic approach that arbitrates and selects the better system according to the current situation, and the preference may change over time (Lee et al., 2014; Han et al., 2024). We anticipate extensive future research focused on integrating the strengths of diffusion planning and diffusion policies to enable both efficient and effective decision-making AI.
220
+
221
+ Computational efficiency. Despite the effectiveness of diffusion planners, their computational cost is substantial. Our work is orthogonal to the optimization of computational cost (Dong et al., 2024a). Nonetheless, future work may consider new schemes such as the consistency model (Song et al., 2023) to improve computational efficiency.
222
+
223
+ Interpretability and safety. Our study focuses on a single performance metric (total return), potentially overlooking qualitative aspects such as the interpretability and reliability of the diffusion planner. Future work may consider issues such as the explainability (Puiutta & Veith, 2020) and safety (Xiao et al., 2023) of diffusion planning. Leveraging experience from the computer vision domain is worth investigating.
224
+
225
+ Sustainability. Our work required significant computational resources, particularly in terms of GPU energy consumption, as we trained and evaluated thousands of models across diverse tasks. However, this investment in energy is not without purpose. We aim to provide a solid foundation for future research. Subsequent work can build upon our findings, reducing the need for extensive trial-and-error experimentation. In this way, our research contributes to energy efficiency in the long term, as researchers can reference our results and apply proven methods rather than duplicating resource-intensive exploratory efforts.
226
+
227
+ Open problems and future directions. In the current study, we have focused on standard Markov decision process problems (Bellman, 1957) using a popular offline RL benchmark (Fu et al., 2020). The planning and control are based on joint states and coordinates. Numerous untouched problems exist, such as vision-based decision making (Du et al., 2024; Yang et al., 2024), goal-conditioned reinforcement learning (Liu et al., 2022; Wang et al., 2023a), partially observable environments (Schmidhuber, 1991), offline-to-online deployment (Matsushima et al., 2021), and the scalability of diffusion planning models (Kaplan et al., 2020). Future efforts are anticipated to fully address these limitations. However, even within the scope of the current work, we have found several interesting phenomena and tips that are counter to common practices. Our work should be considered as a new but solid starting point for behavior planning using decision models.
228
+
229
+ # REPRODUCIBILITY STATEMENT
230
+
231
+ We are committed to ensuring the reproducibility of our results. To facilitate this, we include the source code of DV in the supplementary material, which is also available at https://github.com/Josh00-Lu/DiffusionVeteran. Detailed descriptions of our experimental setup, including model architectures, training procedures, and hyperparameter settings, are provided in Appendix A and Appendix B. We have included comprehensive information on the datasets used, along with any preprocessing steps, in Appendix B. For all key experiments, we have specified the evaluation protocols and metrics in Sect. 3 and provided extensive results in Appendix D. We have included the full list of hyperparameters and configurations in Appendix B.4.
232
+
233
+ # ACKNOWLEDGMENT
234
+
235
+ This work is supported by Microsoft Research.
236
+
237
+ # REFERENCES
238
+
239
+ Anurag Ajay, Yilun Du, Abhi Gupta, Joshua B Tenenbaum, Tommi S Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision making? In The Eleventh International Conference on Learning Representations, 2022.
240
+ Richard Bellman. A Markovian decision process. Journal of Mathematics and Mechanics, pp. 679-684, 1957.
241
+ Matthew Botvinick, Sam Ritter, Jane X Wang, Zeb Kurth-Nelson, Charles Blundell, and Denis Hassabis. Reinforcement learning, fast and slow. Trends in cognitive sciences, 23(5):408-422, 2019.
242
+ Chang Chen, Fei Deng, Kenji Kawaguchi, Caglar Gulcehre, and Sungjin Ahn. Simple hierarchical planning with diffusion. arXiv preprint arXiv:2401.02644, 2024.
243
+ Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, and Jun Zhu. Offline reinforcement learning via high-fidelity generative behavior modeling. In The Eleventh International Conference on Learning Representations, 2023.
244
+ Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9): 10850-10869, 2023.
245
+ Yilun Dai, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Josh Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning universal policies via text-guided video generation. arXiv preprint arXiv:2302.00111, 2023.
246
+ Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the International Conference on Machine Learning, pp. 465-472, 2011.
247
+ Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.
248
+ Zibin Dong, Jianye Hao, Yifu Yuan, Fei Ni, Yitian Wang, Pengyi Li, and Yan Zheng. Diffuserlite: Towards real-time diffusion planning. arXiv preprint arXiv:2401.15443, 2024a.
249
+ Zibin Dong, Yifu Yuan, Jianye Hao, Fei Ni, Yi Ma, Pengyi Li, and Yan Zheng. Cleandiffuser: An easy-to-use modularized library for diffusion models in decision making. arXiv preprint arXiv:2406.09509, 2024b.
250
+ Zibin Dong, Yifu Yuan, Jianye HAO, Fei Ni, Yao Mu, YAN ZHENG, Yujing Hu, Tangjie Lv, Changjie Fan, and Zhipeng Hu. Aligndiff: Aligning diverse human preferences via behavior-customisable diffusion model. In The Twelfth International Conference on Learning Representations, 2024c.
251
+
252
+ Yilun Du, Sherry Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Josh Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning universal policies via text-guided video generation. Advances in Neural Information Processing Systems, 36, 2024.
253
+ Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
254
+ Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning, 2020.
255
+ Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in neural information processing systems, 34:20132-20145, 2021.
256
+ Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062. PMLR, 2019.
257
+ Jan Gläscher, Nathaniel Daw, Peter Dayan, and John P O'Doherty. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585-595, 2010.
258
+ Dongqi Han, Kenji Doya, Dongsheng Li, and Jun Tani. Synergizing habits and goals with variational bayes. Nature Communications, 15(1):4461, 2024.
259
+ Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, and Sergey Levine. Idql: Implicit q-learning as an actor-critic method with diffusion policies. arXiv preprint arXiv:2304.10573, 2023.
260
+ Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.
261
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
262
+ Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. In Advances in Neural Information Processing Systems, volume 34, 2021.
263
+ Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In International Conference on Machine Learning, pp. 9902-9915. PMLR, 2022.
264
+ Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.
265
+ Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
266
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
267
+ Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169, 2021.
268
+ Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.
269
+ Sang Wan Lee, Shinsuke Shimojo, and John P O'Doherty. Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3):687-699, 2014.
270
+ Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
271
+ Wenhao Li, Xiangfeng Wang, Bo Jin, and Hongyuan Zha. Hierarchical diffusion for offline decision making. In International Conference on Machine Learning, pp. 20035-20064. PMLR, 2023.
272
+
273
+ Zhixuan Liang, Yao Mu, Mingyu Ding, Fei Ni, Masayoshi Tomizuka, and Ping Luo. Adaptdiffuser: Diffusion models as adaptive self-evolving planners. arXiv preprint arXiv:2302.01877, 2023.
274
+ Minghuan Liu, Menghui Zhu, and Weinan Zhang. Goal-conditioned reinforcement learning: Problems and solutions. arXiv preprint arXiv:2201.08299, 2022.
275
+ Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, and Jun Zhu. Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning. In International Conference on Machine Learning, pp. 22825-22855. PMLR, 2023.
276
+ Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. Deployment-efficient reinforcement learning via model-based offline optimization. In International Conference on Learning Representations, 2021.
277
+ John D Murray, Alberto Bernacchia, David J Freedman, Ranulfo Romo, Jonathan D Wallis, Xinying Cai, Camillo Padoa-Schioppa, Tatiana Pasternak, Hyojung Seo, Daeyeol Lee, et al. A hierarchy of intrinsic timescales across primate cortex. Nature Neuroscience, 17(12):1661, 2014.
278
+ OpenAI. Sora. https://openai.com/index/sora/, 2024.
279
+ Paavo Parmas, Carl Edward Rasmussen, Jan Peters, and Kenji Doya. PIPPS: Flexible model-based policy search robust to the curse of chaos. In International Conference on Machine Learning, pp. 4065-4074. PMLR, 2018.
280
+ Tim Pearce, Tabish Rashid, Anssi Kanervisto, Dave Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, Shan Zheng Tan, Ida Momennejad, Katja Hofmann, et al. Imitating human behaviour with diffusion models. arXiv preprint arXiv:2301.10677, 2023.
281
+ William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.
282
+ Erika Puiutta and Eric MSP Veith. Explainable reinforcement learning: A survey. In International cross-domain conference for machine learning and knowledge extraction, pp. 77-95. Springer, 2020.
283
+ Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Proceedings of Robotics: Science and Systems (RSS), 2018.
284
+ Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234-241. Springer, 2015.
285
+ Caroline A Runyan, Eugenio Piasini, Stefano Panzeri, and Christopher D Harvey. Distinct timescales of population coding across cortex. Nature, 548(7665):92, 2017.
286
+ Jürgen Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In Advances in Neural Information Processing Systems, pp. 500-506, 1991.
287
+ Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
288
+ Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In International Conference on Machine Learning, pp. 32211-32252. PMLR, 2023.
289
+ Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
290
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
291
+
292
+ Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, and Matthew Botvinick. Prefrontal cortex as a meta-reinforcement learning system. Nature Neuroscience, 21(6):860, 2018.
293
+ Wei Wang, Dongqi Han, Xufang Luo, Yifei Shen, Charles Ling, Boyu Wang, and Dongsheng Li. Toward open-ended embodied tasks solving. In Second Agent Learning in Open-Endness Workshop, 2023a.
294
+ Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou. Diffusion policies as an expressive policy class for offline reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023b.
295
+ Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, and Daniela Rus. Safediffuser: Safe planning with diffusion probabilistic models. arXiv preprint arXiv:2306.00148, 2023.
296
+ Cheng-Fu Yang, Haoyang Xu, Te-Lin Wu, Xiaofeng Gao, Kai-Wei Chang, and Feng Gao. Planning as in-painting: A diffusion-based embodied task planning framework for environments under uncertainty. arXiv preprint arXiv:2312.01097, 2023.
297
+ Sherry Yang, Yilun Du, Seyed Kamyar Seyed Ghasemipour, Jonathan Tompson, Leslie Pack Kaelbling, Dale Schuurmans, and Pieter Abbeel. Learning interactive real-world simulators. In The Twelfth International Conference on Learning Representations, 2024.
298
+ Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. arXiv preprint arXiv:2407.20311, 2024.
299
+ Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022.
300
+ Zhengbang Zhu, Hanye Zhao, Haoran He, Yichao Zhong, Shenyu Zhang, Yong Yu, and Weinan Zhang. Diffusion models for reinforcement learning: A survey. arXiv preprint arXiv:2311.01223, 2023.
301
+
302
+ # A GUIDED SAMPLING ALGORITHMS
303
+
304
+ For decision making tasks, guided sampling algorithms are used to generate desired plans or actions. In this work, we compare three different guided sampling methods: classifier guidance (CG) (Dhariwal & Nichol, 2021), classifier-free guidance (CFG) (Ho & Salimans, 2021), and Monte Carlo sampling from selections (MCSS).
305
+
306
+ Classifier guidance: Classifier guidance (CG) is introduced to guide an unconditional diffusion model $q_{t}(\pmb{x}_{t})$ to generate data conditioned on $c$ . The conditioned score function is formulated as:
307
+
308
+ $$
309
+ \nabla_ {\boldsymbol {x}} \log q _ {t} (\boldsymbol {x} _ {t} | \boldsymbol {c}) = \nabla_ {\boldsymbol {x}} \log q _ {t} (\boldsymbol {x} _ {t}) + \nabla_ {\boldsymbol {x}} \log q _ {t} (\boldsymbol {c} | \boldsymbol {x} _ {t})
310
+ $$
311
+
312
+ where the second term is also known as a noised classifier that predicts the condition from the noised data $\pmb{x}_t$ . During sampling, the gradient of the classifier is applied to the predicted noise $\epsilon_{\theta}(\pmb{x}_t,t)$ :
313
+
314
+ $$
315
+ \bar {\epsilon} _ {\theta} \left(\boldsymbol {x} _ {t}, t, \boldsymbol {c}\right) = \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right) - w \sigma_ {t} \nabla_ {\boldsymbol {x}} \log q _ {t} (\boldsymbol {c} | \boldsymbol {x} _ {t})
316
+ $$
317
+
318
+ where $w$ is a weighting factor that controls the strength of the classifier guidance. For CG sampling, we tuned $w$ in the range [0.001, 10] for each task.
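+ The guided noise update above can be sketched as follows (a minimal NumPy sketch; `eps_model` and `classifier_grad` are hypothetical stand-ins for the trained noise predictor and the gradient of the noised classifier's log-probability):

```python
import numpy as np

def classifier_guided_eps(eps_model, classifier_grad, x_t, t, c, w, sigma_t):
    """Classifier guidance: shift the unconditional noise prediction by
    the scaled gradient of the noised classifier log q_t(c | x_t)."""
    eps = eps_model(x_t, t)            # epsilon_theta(x_t, t)
    grad = classifier_grad(x_t, t, c)  # grad_x log q_t(c | x_t)
    return eps - w * sigma_t * grad

# Toy stand-ins for the networks (illustrative assumptions only):
eps_model = lambda x, t: np.zeros_like(x)
classifier_grad = lambda x, t, c: np.ones_like(x)

guided = classifier_guided_eps(eps_model, classifier_grad,
                               x_t=np.zeros(4), t=10, c=1.0, w=0.1, sigma_t=0.5)
# each element: 0 - 0.1 * 0.5 * 1 = -0.05
```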
319
+
320
+ Classifier-free guidance: To avoid training classifiers, classifier-free guidance (CFG) is proposed. The main idea of CFG is to train a diffusion model that can be used for both conditional noise predictor $\epsilon_{\theta}(\pmb{x}_t,t,\pmb{c})$ and unconditional noise predictor $\epsilon_{\theta}(\pmb{x}_t,t)$ :
321
+
322
+ $$
323
+ \bar {\epsilon} _ {\theta} \left(\boldsymbol {x} _ {t}, t, \boldsymbol {c}\right) = \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right) + w \left(\epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t, \boldsymbol {c}\right) - \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right)\right)
324
+ $$
325
+
326
+ where $\epsilon_{\theta}(\pmb{x}_t,t) = \epsilon_{\theta}(\pmb{x}_t,t,\varnothing)$ . The noise predictions $\epsilon_{\theta}(\pmb{x}_t,t,\varnothing)$ and $\epsilon_{\theta}(\pmb{x}_t,t,\pmb{c})$ can be learned jointly by randomly discarding the conditioning with probability $p_{\mathrm{uncond}}$ . For decision making tasks, we can train diffusion models conditioned on discounted returns and use classifier-free guidance for better plan sampling. We normalize the discounted returns in the dataset for training and use a condition of 1 as the target return for CFG sampling during inference (Ajay et al., 2022). However, experiments show that fixing the target at 1 may lead to unrealistic or unstable plans. Consequently, besides tuning the guidance strength $w\in [1.0,6.0]$ , we also tune for the best target return within the range [0.5, 1.5] for each task to test CFG's best performance.
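+ In code, the CFG combination and the training-time condition dropout can be sketched as follows (`eps_model` is a hypothetical stand-in network, with `None` playing the role of the null condition $\varnothing$):

```python
import numpy as np

def cfg_eps(eps_model, x_t, t, c, w):
    """Classifier-free guidance: move past the unconditional prediction
    toward the conditional one with strength w."""
    eps_uncond = eps_model(x_t, t, None)
    eps_cond = eps_model(x_t, t, c)
    return eps_uncond + w * (eps_cond - eps_uncond)

def maybe_drop_condition(c, p_uncond, rng):
    """Training-time dropout that lets a single network learn both the
    conditional and the unconditional noise predictor."""
    return None if rng.random() < p_uncond else c

# Toy stand-in network (illustrative assumption only):
eps_model = lambda x, t, c: np.zeros_like(x) if c is None else np.full_like(x, float(c))

out = cfg_eps(eps_model, x_t=np.zeros(3), t=5, c=1.0, w=2.0)
# each element: 0 + 2 * (1 - 0) = 2
```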
327
+
328
+ Monte Carlo sampling from selections: For Monte Carlo sampling from selections (MCSS), $N$ candidates are first sampled from an unconditional generative model. These candidates are then evaluated with a learned critic, which selects the optimal one. One advantage of MCSS is that it does not rely on any task-specific inference hyperparameters, such as the guidance strength $w$ in CG and CFG or the target return in CFG. However, it needs to sample $N - 1$ additional candidates at each decision-making step.
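+ The MCSS procedure amounts to a few lines (a sketch; `sample_plan` and `critic` are hypothetical stand-ins for the unconditional diffusion sampler and the learned value model):

```python
import numpy as np

def mcss_plan(sample_plan, critic, n_candidates, rng):
    """Monte Carlo sampling from selections: draw N unconditional
    candidates and keep the one the learned critic scores highest."""
    candidates = [sample_plan(rng) for _ in range(n_candidates)]
    scores = [critic(p) for p in candidates]
    return candidates[int(np.argmax(scores))]

# Toy stand-ins (illustrative assumptions): plans are vectors,
# and the critic just sums their entries.
sample_plan = lambda rng: rng.normal(size=4)
critic = lambda plan: float(plan.sum())

best = mcss_plan(sample_plan, critic, n_candidates=50,
                 rng=np.random.default_rng(0))
```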
329
+
330
+ # B IMPLEMENTATION DETAILS
331
+
332
+ # B.1 MODEL ARCHITECTURE
333
+
334
+ Planner: Our code is based on CleanDiffuser (Dong et al., 2024b). We examined U-Net (Ronneberger et al., 2015) and Transformer (Vaswani et al., 2017) as the neural network backbones for all the diffusion planners. Specifically, we keep consistent with the implementation of U-Net1D (Janner et al., 2022), with a kernel size of 5, $(1,2,2,2)$ channel multiplication, 32 base channels on MuJoCo, Kitchen and Maze2D, and 64 base channels on AntMaze. For the Transformer, we use DiT1D (Peebles & Xie, 2023; Dong et al., 2024c) with a hidden dimension of 256, a head dimension of 32, 2 DiT blocks on MuJoCo, Kitchen and Maze2D, and 8 DiT blocks on AntMaze. All the planner diffusion models are trained with the Adam (Kingma & Ba, 2014) optimizer with a learning rate of $3e - 4$ and a batch size of 128, for 1M gradient steps. All the diffusion models in this work are trained to predict the noise, except for the U-Net1D experiments on Kitchen, where the diffusion planner is trained to predict the clean data estimate instead, as this achieves considerably better performance.
335
+
336
+ Inverse dynamics: We used an MLP-based diffusion model as the inverse dynamics model, whose input is the current state and the planned next state, and whose output is the action to execute. It is implemented as a 3-layer MLP with an additional 2-layer embedding, and is trained for 1M gradient steps with the Adam (Kingma & Ba, 2014) optimizer with a learning rate of $3e - 4$ and a batch size of 128. We found that using a diffusion model as the inverse dynamics shows performance similar to a vanilla MLP inverse dynamics model. The inverse dynamics models for navigation tasks (Maze2D and AntMaze) are trained with policy centralization, where the original mapping $(s_t,s_{t + 1})\to a_t$ is replaced with $(0,s_{t + 1} - s_t)\rightarrow a_t$ . We found this may improve the generalization ability of the inverse dynamics on navigation tasks.
337
+
338
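+ The policy centralization described above is a simple coordinate shift (a sketch assuming states are plain NumPy vectors):

```python
import numpy as np

def centralize(s_t, s_next):
    """Policy centralization for navigation: replace the absolute pair
    (s_t, s_{t+1}) with (0, s_{t+1} - s_t) before querying the inverse
    dynamics model, making it invariant to the agent's position."""
    return np.zeros_like(s_t), s_next - s_t

origin, delta = centralize(np.array([3.0, 4.0]), np.array([3.5, 4.2]))
# origin = [0, 0], delta = [0.5, 0.2]
```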
339
+
340
+ Critic: We also implemented two types of critic models for guided sampling. The first type shares the U-Net1D architecture of the planner, with a linear output layer that produces the critic value. The second type is a 2-block vanilla Transformer with a hidden dimension of 256, with a linear projection head on the output of the first token. We trained all the critic models using the Adam (Kingma & Ba, 2014) optimizer with a learning rate of $3e - 4$ and a batch size of 128. A clean critic model<sup>1</sup> is trained for 200K gradient steps; otherwise, the critic is trained for 1M gradient steps.
341
+
342
+ # B.2 DIFFUSION SOLVER
343
+
344
+ We use DDIM (Song et al., 2020) with a temperature of 1.0 for planner diffusion sampling, and DDPM with a temperature of 0.5 for inverse dynamics action sampling. The sampling temperature is introduced to reduce sampling randomness (Ajay et al., 2022).
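+ The sampling temperature simply scales the Gaussian noise injected at each denoising step (a sketch; a temperature of 0 would make sampling deterministic):

```python
import numpy as np

def temperature_noise(shape, temperature, rng):
    """Scale the injected sampling noise; temperatures below 1 reduce
    the randomness of the generated actions or plans."""
    return temperature * rng.normal(size=shape)

rng = np.random.default_rng(0)
noise = temperature_noise((10000,), 0.5, rng)
# the empirical standard deviation should be close to 0.5
```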
345
+
346
+ # B.3 DATASET PRE-PROCESSING
347
+
348
+ Diffusion policy baselines (Chen et al., 2023; Wang et al., 2023b; Hansen-Estruch et al., 2023) commonly learn policies, Q functions, and value functions in a temporal-difference manner on standard transitions $(s_{t},a_{t},s_{t + 1},r_{t})$ . Diffusion planners, in contrast, often require careful dataset pre-processing, including horizon padding, planning strides, return calculation, and truncation-termination handling. Improper sequential dataset pre-processing can greatly reduce the planning ability of diffusion planners. Most planning tasks are sparsely rewarded, so how optimality is defined, combined with the choice of temporal credit assignment method, is also important. For MuJoCo and Kitchen, we use a discount factor of $\gamma = 0.997$ . For Maze2D and AntMaze, we use the IQL-maze (Kostrikov et al., 2021) reward shaping method for temporal credit assignment in navigation planning tasks, where a $-1$ penalty is applied to the agent at every timestep. The plan trajectory is clipped to 1000 steps on Maze2D. Refer to our code for more details.
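+ The two credit-assignment choices can be sketched as follows (a simplified illustration of the discounted return and of a constant per-step penalty; the exact IQL-maze shaping has additional details in our code):

```python
def discounted_return(rewards, gamma=0.997):
    """Discounted return, computed backwards over a trajectory
    (used as the optimality signal on MuJoCo and Kitchen)."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def per_step_penalty(rewards):
    """Navigation-style shaping: a -1 penalty at every timestep,
    so shorter successful paths receive higher returns."""
    return [r - 1.0 for r in rewards]

g = discounted_return([1.0, 0.0, 1.0], gamma=0.5)
# 1 + 0.5 * 0 + 0.25 * 1 = 1.25
```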
349
+
350
+ # B.4 FULL HYPER-PARAMETERS
351
+
352
+ We conducted several rounds of hyperparameter tuning, each round performing a grid search over a subset of hyperparameters that we identified as most influential based on prior experiments and domain knowledge. The choice of which hyperparameters to explore in each round was guided by preliminary results and insights. Table 2 displays the hyperparameters and default choices in our work.
353
+
354
+ Table 2: Configuration Settings
355
+
356
+ <table><tr><td>Settings</td><td>Default</td><td>Choices</td></tr><tr><td>Guidance Type</td><td>MCSS</td><td>[MCSS, CG, CFG, None]</td></tr><tr><td>State-Action Generation</td><td>Separate</td><td>[Joint, Separate]</td></tr><tr><td>Advantage Weighting</td><td>True only on MuJoCo</td><td>[True, False]</td></tr><tr><td>Inverse Dynamic</td><td>Diffusion</td><td>[Diffusion, Regular]</td></tr><tr><td>Time Credit Assignment</td><td>discount=0.997</td><td>[discount=0.997, IQL-maze]</td></tr><tr><td>Planner Net. Backbone</td><td>Transformer</td><td>[Transformer, UNet]</td></tr><tr><td>UNet Channels Mult</td><td>(1, 2, 2, 2)</td><td>(1, 2, 2, 2)</td></tr><tr><td>UNet Base Channels</td><td>32</td><td>[16, 32, 64]</td></tr><tr><td>Transformer Hidden</td><td>256</td><td>256</td></tr><tr><td>Transformer Block</td><td>2</td><td>[2, 4, 6, 8]</td></tr><tr><td>Planner Solver</td><td>DDIM</td><td>[DDIM, DDPM]</td></tr><tr><td>Planner Sampling Steps</td><td>20</td><td>20</td></tr><tr><td>Planner Training Steps</td><td>1000000</td><td>1000000</td></tr><tr><td>Planner Temperature</td><td>1</td><td>1</td></tr><tr><td>MCSS Candidates</td><td>50</td><td>[1, 20, 50]</td></tr><tr><td>Planning Horizon</td><td>32</td><td>[4, 32, 40]</td></tr><tr><td>Planning Stride</td><td>1</td><td>[1, 2, 4, 5, 15, 25]</td></tr><tr><td>Inverse Dynamics Net. Backbone</td><td>MLP</td><td>MLP</td></tr><tr><td>Inverse Dynamics Hidden</td><td>256</td><td>256</td></tr><tr><td>Inverse Dynamics Solver</td><td>DDPM</td><td>DDPM</td></tr><tr><td>Inverse Dynamics Sampling Steps</td><td>10</td><td>10</td></tr><tr><td>Inverse Dynamics Training Steps</td><td>1000000</td><td>1000000</td></tr><tr><td>Policy Temperature</td><td>0.5</td><td>0.5</td></tr></table>
357
+
358
+ # C RESULTS ON VALIDATION DATASET
359
+
360
+ In this section, we validate our insights and findings on a new set of eight tasks, called Adroit Hand (Rajeswaran et al., 2018; Fu et al., 2020), to test the generalizability of the conclusions regarding diffusion planning derived in the main paper.
361
+
362
+ # C.1 EXPERIMENT SETUPS
363
+
364
+ As demonstrated in Fig. 9, the Adroit Hand environments contain four different types of challenging tasks. Each task involves a dexterous hand attached to a free arm, with around 30 actuated degrees of freedom, which must be controlled and moved to complete different manipulation tasks: opening a door, driving a nail, reorienting a pen, and relocating a ball.
365
+
366
+ $$
367
+ \dim (\mathcal{A}) = 28; \quad \dim (\mathcal{S}) = 39
368
+ $$
369
+
370
+ ![](images/b08138038e3e831c45d32ccb0b38e33e47d51ff607fae9ae92d79a6199a475f4.jpg)
371
+
372
+ ![](images/4262a6aae3fc1cb94a5a51a9af54bb55f0285030875b80136ac106f2b199b230.jpg)
373
+
374
+ ![](images/f8536b9a631320eabe4b7f76be82f42c2f9645a34bb0ab3ce801c037268370ab.jpg)
375
+
376
+ ![](images/05da50e7e4a624a03f6f84f7df62832ee2303a129f982bf2b1d683b4af3b12cd.jpg)
377
+ Figure 9: Rendering of the validation benchmarking tasks of Adroit Hand, where $\dim(\mathcal{S})$ and $\dim(\mathcal{A})$ denote the dimensions of the state and action spaces for each task.
378
+
379
+ Door The task consists of undoing the latch and swinging the door open. The latch has significant dry friction and a bias torque that forces the door to remain closed. The agent leverages environmental interaction to develop an understanding of the latch, as no information about the latch is explicitly provided. The position of the door is randomized. The task is considered complete when the door touches the door stopper at the other end.
380
+
381
+ Hammer The task involves picking up a hammer and driving a nail into a board. The nail position is randomized and has dry friction capable of absorbing up to 15N of force. The task is successful when the entire length of the nail is inside the board.
382
+
383
+ Pen The task requires repositioning the blue pen to match the orientation of the green target. The base of the hand is fixed, and the target is randomized to cover all configurations. The task is considered successful when the orientations match within a specified tolerance.
384
+
385
+ Relocate The task involves moving the blue ball to the green target. The positions of the ball and target are randomized over the entire workspace. The task is considered successful when the object is within an epsilon-ball of the target.
386
+
387
+ We conduct our experiments on two types of datasets: Cloned and Expert. The Cloned dataset consists of a 50-50 split between demonstration data and 2,500 trajectories sampled from a behaviorally cloned policy trained on these demonstrations. The demonstration data includes 25 human trajectories, which are duplicated 100 times to match the number of cloned trajectories. The Expert dataset comprises 5,000 trajectories sampled from an expert policy that successfully solves the task, as provided in the DAPG repository.
388
+
389
+ # C.2 BASELINES
390
+
391
+ In order to better validate the performance of our Diffusion Veteran (DV), we also re-implement four representative baselines in this new set of environments: (i) Diffusion policies: DQL (Wang et al., 2023b) and IDQL (Hansen-Estruch et al., 2023); (ii) Diffusion planners: DD (Ajay et al., 2022) and Diffuser (Janner et al., 2022).
+
+ # C.3 EXPERIMENTAL RESULTS
+
+ In this section, we validate all the insights and findings on diffusion planning from the main paper along the same five perspectives: (1) Action Generation, (2) Planning Strategy, (3) Denoising Network Backbone, (4) Impact of Network Size, and (5) Guided Sampling Algorithms, to better assess their generalizability.
+
+ # C.3.1 ACTION GENERATION
+
+ Table 3: Results of different action generation choices for diffusion planning on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td>Environment</td><td>Dataset</td><td>Separate (DV)</td><td>Joint</td></tr><tr><td>door</td><td>cloned</td><td>1.5 ± 0.0</td><td>15.2 ± 0.4</td></tr><tr><td>door</td><td>expert</td><td>104.7 ± 0.5</td><td>104.7 ± 0.2</td></tr><tr><td>hammer</td><td>cloned</td><td>11.9 ± 0.7</td><td>2.6 ± 0.0</td></tr><tr><td>hammer</td><td>expert</td><td>125.8 ± 1.1</td><td>113.4 ± 1.1</td></tr><tr><td>pen</td><td>cloned</td><td>80.2 ± 2.0</td><td>85.2 ± 2.0</td></tr><tr><td>pen</td><td>expert</td><td>122.2 ± 1.8</td><td>112.7 ± 1.8</td></tr><tr><td>relocate</td><td>cloned</td><td>0.6 ± 0.0</td><td>0.4 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>108.9 ± 0.2</td><td>108.7 ± 0.3</td></tr><tr><td colspan="2">Average</td><td>69.5</td><td>67.9</td></tr></table>
+
+ The experimental results presented in Table 3 corroborate our previous findings regarding action generation strategies in diffusion planning. Specifically, generating state plans with the diffusion planner and then computing the corresponding actions via an inverse dynamics model (the Separate (DV) approach) performs comparably to or better than the Joint approach, which generates actions directly, on most tasks. Averaged across all tasks and datasets, the Separate (DV) method achieves a higher mean score of 69.5 compared to 67.9 for the Joint method. This overall gain underscores the effectiveness of decoupling state planning from action generation, which lets the diffusion model focus on modeling state distributions more accurately.
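The Separate approach can be illustrated with a minimal sketch: the planner produces a state sequence, and an inverse dynamics model maps each consecutive state pair to an action. The delta-based `inv_model` below is a toy stand-in for a learned network, not the paper's actual model.

```python
import numpy as np

def actions_from_state_plan(state_plan, inv_model):
    """Map a planned state sequence of shape (H+1, state_dim) to actions
    of shape (H, act_dim) via a_t = f(s_t, s_{t+1})."""
    pairs = np.concatenate([state_plan[:-1], state_plan[1:]], axis=-1)
    return inv_model(pairs)

state_dim = 3
# Toy plan: states drift by 0.5 per step.
plan = np.cumsum(np.full((6, state_dim), 0.5), axis=0)
# Toy inverse dynamics: the "action" is the per-step state delta.
inv_model = lambda pairs: pairs[:, state_dim:] - pairs[:, :state_dim]
actions = actions_from_state_plan(plan, inv_model)
print(actions.shape)  # (5, 3)
```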
+
+ # C.3.2 PLANNING STRATEGY
+
+ Table 4: Results of different planning strategy choices on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td>Environment</td><td>Dataset</td><td>Stride 1</td><td>Stride 2 (DV)</td><td>Stride 4</td></tr><tr><td>door</td><td>cloned</td><td>13.6 ± 0.4</td><td>1.5 ± 0.0</td><td>0.1 ± 0.0</td></tr><tr><td>door</td><td>expert</td><td>104.6 ± 0.2</td><td>104.7 ± 0.5</td><td>105.6 ± 0.2</td></tr><tr><td>hammer</td><td>cloned</td><td>3.9 ± 0.2</td><td>11.9 ± 0.7</td><td>3.2 ± 0.8</td></tr><tr><td>hammer</td><td>expert</td><td>112.5 ± 1.2</td><td>125.8 ± 1.1</td><td>125.9 ± 1.5</td></tr><tr><td>pen</td><td>cloned</td><td>81.6 ± 2.0</td><td>80.2 ± 2.0</td><td>--</td></tr><tr><td>pen</td><td>expert</td><td>125.9 ± 1.6</td><td>122.2 ± 1.8</td><td>--</td></tr><tr><td>relocate</td><td>cloned</td><td>0.1 ± 0.0</td><td>0.6 ± 0.0</td><td>0.0 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>108.0 ± 0.3</td><td>108.9 ± 0.2</td><td>109.0 ± 0.6</td></tr><tr><td colspan="2">Average</td><td>68.8</td><td>69.5</td><td>--</td></tr></table>
+
+ The results in Table 4 show that jump-step planning strategies can enhance the performance of diffusion planning. On average, planning with a stride of 2 achieves a higher mean score than stride 1, indicating the benefit of experimenting with different strides. Notably, episodes in the "pen" environment are capped at 100 steps, which rules out planning with a stride of 4 or greater; even within this constraint, stride planning still yields performance improvements. These findings suggest that increasing the planning stride can be beneficial for diffusion planning.
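Jump-step planning can be sketched as subsampling every stride-th state, so a fixed plan horizon covers stride-times as many environment steps; the episode-length constraint on "pen" then falls out of a simple length check. The function name and the horizon of 32 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def strided_plan_targets(traj, horizon, stride):
    """Take every `stride`-th state so that `horizon` plan steps span
    horizon * stride environment steps."""
    span = horizon * stride
    assert len(traj) >= span + 1, "episode too short for this stride"
    return traj[: span + 1 : stride]

episode = np.arange(101)  # a 100-step episode has 101 states
plan = strided_plan_targets(episode, horizon=32, stride=2)
print(len(plan))  # 33 planned states spanning 64 environment steps
# With stride 4, the same horizon would need 129 states, which a
# 100-step episode cannot provide.
```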
+
+ # C.3.3 DENOISING NETWORK BACKBONE
+
+ Table 5: Results of different denoising network backbone choices on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td colspan="2">#Model Parameters</td><td>2.64 M</td><td>3.96 M</td><td>15.80 M</td><td>63.11 M</td></tr><tr><td>Environment</td><td>Dataset</td><td>Transformer (DV)</td><td>UNet</td><td>UNet</td><td>UNet</td></tr><tr><td>door</td><td>cloned</td><td>1.5 ± 0.0</td><td>-0.2 ± 0.0</td><td>-0.2 ± 0.0</td><td>1.2 ± 0.4</td></tr><tr><td>door</td><td>expert</td><td>104.7 ± 0.5</td><td>-0.1 ± 0.0</td><td>-0.0 ± 0.0</td><td>103.7 ± 0.6</td></tr><tr><td>hammer</td><td>cloned</td><td>11.9 ± 0.7</td><td>-0.2 ± 0.0</td><td>-0.0 ± 0.0</td><td>1.6 ± 0.0</td></tr><tr><td>hammer</td><td>expert</td><td>125.8 ± 1.1</td><td>-0.1 ± 0.0</td><td>-0.0 ± 0.0</td><td>122.0 ± 1.8</td></tr><tr><td>pen</td><td>cloned</td><td>80.2 ± 2.0</td><td>-0.7 ± 0.2</td><td>-1.8 ± 0.3</td><td>73.4 ± 5.1</td></tr><tr><td>pen</td><td>expert</td><td>122.2 ± 1.8</td><td>-1.3 ± 0.1</td><td>-2.6 ± 0.2</td><td>134.0 ± 3.2</td></tr><tr><td>relocate</td><td>cloned</td><td>0.6 ± 0.0</td><td>-0.1 ± 0.0</td><td>-0.1 ± 0.0</td><td>0.0 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>108.9 ± 0.2</td><td>-0.1 ± 0.0</td><td>-0.1 ± 0.0</td><td>106.5 ± 0.9</td></tr><tr><td colspan="2">Average</td><td>69.5</td><td>-0.4</td><td>-0.6</td><td>67.8</td></tr></table>
+
+ The results in Table 5 show that, at a regular parameter budget, Transformers hold a clear advantage over UNet as the denoising backbone in diffusion planning. Specifically, the Transformer model achieves an average score of 69.5 with only $2.64\mathrm{M}$ parameters, while UNet requires roughly 24 times as many parameters ($63.11\mathrm{M}$) to reach a similar performance level (average score of 67.8). In other words, UNet needs many times the parameter count to match the efficiency of the Transformer.
+
+ # C.3.4 IMPACT OF NETWORK SIZE
+
+ Table 6: Performance change over depth of the Transformer network for diffusion planner on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td colspan="2">#Model Parameters</td><td>1.46 M</td><td>2.64 M</td><td>3.82 M</td></tr><tr><td>Environment</td><td>Dataset</td><td>Depth 1</td><td>Depth 2 (DV)</td><td>Depth 3</td></tr><tr><td>door</td><td>cloned</td><td>4.3 ± 0.9</td><td>1.5 ± 0.0</td><td>1.3 ± 0.1</td></tr><tr><td>door</td><td>expert</td><td>0.0 ± 0.0</td><td>104.7 ± 0.5</td><td>105.5 ± 0.4</td></tr><tr><td>hammer</td><td>cloned</td><td>17.0 ± 2.4</td><td>11.9 ± 0.7</td><td>1.0 ± 0.0</td></tr><tr><td>hammer</td><td>expert</td><td>76.1 ± 4.9</td><td>125.8 ± 1.1</td><td>126.0 ± 1.4</td></tr><tr><td>pen</td><td>cloned</td><td>44.9 ± 5.3</td><td>80.2 ± 2.0</td><td>75.7 ± 5.4</td></tr><tr><td>pen</td><td>expert</td><td>42.5 ± 4.9</td><td>122.2 ± 1.8</td><td>127.5 ± 4.1</td></tr><tr><td>relocate</td><td>cloned</td><td>0.7 ± 0.1</td><td>0.6 ± 0.0</td><td>0.0 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>0.8 ± 0.4</td><td>108.9 ± 0.2</td><td>107.4 ± 0.8</td></tr><tr><td colspan="2">Average</td><td>23.3</td><td>69.5</td><td>68.1</td></tr></table>
+
+ The results presented in Table 6 demonstrate that a single-layer Transformer (Depth 1) is inadequate for effective planning, as evidenced by its significantly lower average score of 23.3 compared to deeper models. When the depth is increased to two layers (Depth 2), the performance improves markedly, achieving an average score of 69.5. However, further increasing the depth to three layers (Depth 3) does not yield additional benefits; the average score slightly decreases to 68.1. These findings support our earlier observations: a single-layer Transformer is insufficient for planning tasks, and simply enlarging the model does not guarantee better performance in diffusion planning for offline reinforcement learning.
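As a quick consistency check on the parameter counts in Table 6, they grow linearly with depth: a fixed per-layer cost plus a depth-independent overhead (embeddings and output head) reproduces all three figures.

```python
# Parameter counts from Table 6, in millions.
counts = {1: 1.46, 2: 2.64, 3: 3.82}
per_layer = counts[2] - counts[1]  # ~1.18 M per Transformer block
overhead = counts[1] - per_layer   # ~0.28 M depth-independent
for depth, total in counts.items():
    assert abs(overhead + depth * per_layer - total) < 1e-9
print(round(per_layer, 2), round(overhead, 2))  # 1.18 0.28
```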
+
+ # C.3.5 GUIDED SAMPLING ALGORITHMS
+
+ The results in Table 7 indicate that non-guidance methods like Monte Carlo sampling with selection (MCSS) can outperform guidance-based approaches when the dataset contains sufficient near-optimal trajectories. MCSS achieves the highest average score of 69.5, surpassing classifier-free guidance (CFG) at 67.7, classifier guidance (CG) at 62.0, and unguided sampling (None) at 60.9.
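MCSS can be sketched in a few lines: draw several unguided plans from the diffusion model and keep the one a value function scores highest. The toy `sample_plan` and `value_fn` below are placeholders for the actual diffusion sampler and critic.

```python
import numpy as np

def mcss(sample_plan, value_fn, n_candidates=16, rng=None):
    """Monte Carlo sampling with selection: sample several candidate
    plans without guidance and return the highest-value one."""
    if rng is None:
        rng = np.random.default_rng(0)
    plans = [sample_plan(rng) for _ in range(n_candidates)]
    scores = [value_fn(p) for p in plans]
    return plans[int(np.argmax(scores))]

# Toy stand-ins: a "plan" is a noise vector; the value prefers small norm.
sample_plan = lambda rng: rng.normal(size=8)
value_fn = lambda p: -np.linalg.norm(p)
best = mcss(sample_plan, value_fn, n_candidates=64)
```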
+
+ Table 7: Results of different guided sampling algorithms on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td>Environment</td><td>Dataset</td><td>MCSS (DV)</td><td>CFG</td><td>CG</td><td>None</td></tr><tr><td>door</td><td>cloned</td><td>1.5 ± 0.0</td><td>12.1 ± 0.1</td><td>0.9 ± 0.0</td><td>0.7 ± 0.1</td></tr><tr><td>door</td><td>expert</td><td>104.7 ± 0.5</td><td>103.5 ± 0.4</td><td>104.3 ± 0.1</td><td>103.7 ± 0.2</td></tr><tr><td>hammer</td><td>cloned</td><td>11.9 ± 0.7</td><td>7.6 ± 0.1</td><td>1.2 ± 0.0</td><td>0.4 ± 0.0</td></tr><tr><td>hammer</td><td>expert</td><td>125.8 ± 1.1</td><td>106.7 ± 0.9</td><td>110.4 ± 1.3</td><td>105.7 ± 1.3</td></tr><tr><td>pen</td><td>cloned</td><td>80.2 ± 2.0</td><td>74.7 ± 1.6</td><td>64.3 ± 1.9</td><td>64.1 ± 2.0</td></tr><tr><td>pen</td><td>expert</td><td>122.2 ± 1.8</td><td>128.2 ± 1.4</td><td>107.9 ± 1.9</td><td>105.8 ± 1.8</td></tr><tr><td>relocate</td><td>cloned</td><td>0.6 ± 0.0</td><td>0.7 ± 0.0</td><td>0.0 ± 0.0</td><td>-0.0 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>108.9 ± 0.2</td><td>108.2 ± 0.5</td><td>107.0 ± 0.3</td><td>106.8 ± 0.3</td></tr><tr><td colspan="2">Average</td><td>69.5</td><td>67.7</td><td>62.0</td><td>60.9</td></tr></table>
+
+ # C.3.6 COMPARISON WITH OTHER METHODS
+
+ Table 8: Performance comparison with representative baselines on Adroit Hand. Data are Mean ± Standard Error over 150 episode seeds.
+
+ <table><tr><td colspan="2">Tasks</td><td colspan="3">Diffusion Policies</td><td colspan="3">Diffusion Planners</td></tr><tr><td>Dataset</td><td>Environment</td><td>IDQL</td><td>DQL-TuneLR</td><td>DQL</td><td>Diffuser</td><td>DD</td><td>DV (Ours)</td></tr><tr><td>door</td><td>cloned</td><td>4.4 ± 0.6</td><td>-0.3 ± 0.0</td><td>-0.1 ± 0.0</td><td>0.1 ± 0.1</td><td>15.4 ± 0.5</td><td>1.5 ± 0.0</td></tr><tr><td>door</td><td>expert</td><td>105.0 ± 0.3</td><td>104.8 ± 0.3</td><td>104.3 ± 0.1</td><td>103.0 ± 0.5</td><td>105.5 ± 0.3</td><td>104.7 ± 0.5</td></tr><tr><td>hammer</td><td>cloned</td><td>3.5 ± 0.5</td><td>0.2 ± 0.0</td><td>0.1 ± 0.0</td><td>1.2 ± 0.1</td><td>1.6 ± 0.1</td><td>11.9 ± 0.7</td></tr><tr><td>hammer</td><td>expert</td><td>127.6 ± 0.1</td><td>128.3 ± 0.1</td><td>55.9 ± 5.2</td><td>103.1 ± 3.8</td><td>124.8 ± 2.1</td><td>125.8 ± 1.1</td></tr><tr><td>pen</td><td>cloned</td><td>82.3 ± 5.0</td><td>23.3 ± 4.0</td><td>28.3 ± 4.3</td><td>61.7 ± 5.0</td><td>72.0 ± 4.2</td><td>80.2 ± 2.0</td></tr><tr><td>pen</td><td>expert</td><td>137.8 ± 2.4</td><td>133.5 ± 3.9</td><td>60.9 ± 6.1</td><td>99.7 ± 4.8</td><td>139.8 ± 3.5</td><td>122.2 ± 1.8</td></tr><tr><td>relocate</td><td>cloned</td><td>0.0 ± 0.1</td><td>0.1 ± 0.0</td><td>-0.1 ± 0.0</td><td>-0.0 ± 0.0</td><td>0.3 ± 0.0</td><td>0.6 ± 0.0</td></tr><tr><td>relocate</td><td>expert</td><td>107.0 ± 0.8</td><td>108.5 ± 0.6</td><td>108.8 ± 0.6</td><td>102.2 ± 1.5</td><td>110.3 ± 1.1</td><td>108.9 ± 0.2</td></tr><tr><td colspan="2">Average</td><td>71.0</td><td>62.3</td><td>44.8</td><td>58.9</td><td>71.2</td><td>69.5</td></tr></table>
+
+ Finally, we conduct a comprehensive comparison with our re-implemented baselines (Table 8). Notably, we find that using the default learning rate for DQL in this environment may lead to performance degradation. We therefore search over learning_rate = \{3e-3, 3e-4, 3e-5\} for DQL, select the optimal value, and denote the resulting variant as DQL-TuneLR. For IDQL, we use the officially recommended 256 candidates for high-density action-value estimation. For DD, we conduct a grid search over 35 possible configurations for each task, adjusting target_return = \{0.4, 0.6, 0.8, 1.0, 1.2\} and w_cfg = \{1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0\} to obtain its best performance. Our method requires only one-quarter of the candidates and no task-specific fine-tuning to achieve comparable performance.
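The DD grid search described above amounts to evaluating every (target_return, w_cfg) pair and keeping the best; a minimal sketch with a toy evaluator standing in for actually running the agent:

```python
from itertools import product

target_returns = [0.4, 0.6, 0.8, 1.0, 1.2]
w_cfgs = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

def grid_search(evaluate):
    """Score every (target_return, w_cfg) pair and return the best one."""
    grid = list(product(target_returns, w_cfgs))
    assert len(grid) == 35  # the 35 configurations mentioned above
    scores = {cfg: evaluate(*cfg) for cfg in grid}
    return max(scores, key=scores.get)

# Toy evaluator with a known optimum at (0.8, 2.0).
toy_eval = lambda tr, w: -(tr - 0.8) ** 2 - (w - 2.0) ** 2
print(grid_search(toy_eval))  # (0.8, 2.0)
```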
+
+ # D EXTENSIVE RESULTS
+
+ # D.1 ATTENTION MAP ACROSS DIFFERENT TASKS
+
+ The following plots show examples of the attention weights at different DDIM denoising steps for each task, using Transformer backbones. We can see that long-range dependencies are generally present in the Transformer.
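Averaging attention weights across heads, as done for the figures below, is a one-liner; this sketch also checks that the averaged map still has rows summing to one (each row is a distribution over key positions). Shapes and names are illustrative.

```python
import numpy as np

def mean_head_attention(attn):
    """Average attention weights over heads: (heads, T, T) -> (T, T)."""
    assert attn.ndim == 3
    avg = attn.mean(axis=0)
    # An average of row-stochastic matrices is row-stochastic.
    assert np.allclose(avg.sum(axis=-1), 1.0)
    return avg

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6, 6))  # 4 heads, sequence length 6
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(mean_head_attention(attn).shape)  # (6, 6)
```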
+
+ ![](images/fe664259599f3a340316ddeda81f18119d7a151db21645a793ad003c8a8ae143.jpg)
+ Figure 10: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the Kitchen-Partial-v0 dataset.
+
+ ![](images/c4f8d2165cf82d7e00d0b128c643f23717dda2b250b0fc16e19c682926cc52b3.jpg)
+ Figure 11: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the Kitchen-Mixed-v0 dataset.
+
+ ![](images/fe3955653450e8f7d7438d06156e4d0384cd7aeb077e35dc8a6a66da4bbb3428.jpg)
+ Figure 12: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the AntMaze-Large-Diverse-v2 dataset.
+
+ ![](images/441297d845bc6b5956b2d456bf2162c525ff884cd8753d5f46ba6eeb6e47ab4e.jpg)
+ Figure 13: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the AntMaze-Large-Play-v2 dataset.
+
+ ![](images/dcba0e277c9d96b77da86191e4d61a20cf30e57a38e506ba289091af9c3c65ba.jpg)
+ Figure 14: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the AntMaze-Medium-Diverse-v2 dataset.
+
+ ![](images/660046d7df12e6a7eb7d08c425df03c27352ce919ec56db7594af979914b9c17.jpg)
+ Figure 15: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the AntMaze-Medium-Play-v2 dataset.
+
+ ![](images/b4c0d19be08e5ee6ec7ce86933216a7fca7941938387bdab73daed20b5e613d9.jpg)
+ Figure 16: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the Maze2D-Large-v1 dataset.
+
+ ![](images/a5f1d032b3b7a3b861148161671ffe3bbbb7ce8167361f1a3786cc5d531a923f.jpg)
+ Figure 17: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the Maze2D-Medium-v1 dataset.
+
+ ![](images/fbe59c49703638e9492040a89b5e08d47162438b699abad44bf873113fd96454.jpg)
+ Figure 18: Attention weights (averaged on multi-heads) of the first Transformer layer of DV on the Maze2D-Umaze-v1 dataset.
+
+ # D.2 EXTENSIVE RESULTS
+
+
+ Table 9: The effect of guided sampling algorithms for DV, with different planning strides. In "Stride x/y", x is for Kitchen, and y is for Maze2D & AntMaze. Data are Mean ± Standard Error over 500 episode seeds.
+
+
+ Table 10: Changing denoising network backbone for DV, with different planning strides. In "Stride x/y", x is for Kitchen, and y is for Maze2D & AntMaze. Data are Mean ± Standard Error over 500 episode seeds.
+
+
+ Table 11: The impact of action generation method for DV, with different planning strides. In "Stride x/y", x is for Kitchen, and y is for Maze2D & AntMaze. Data are Mean ± Standard Error over 500 episode seeds.
+
d><td></td></tr><tr><td>ΔI</td><td>CH</td><td>CI</td><td>DE</td><td>DE</td><td>DE</td><td>*TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>TOII</td><td>DE</td><td></td></tr></table>
493
+
494
+ Table 12: Normalized performance of various offline-RL methods. Data are Mean ± Standard Error (if available).
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6def44b32e7985001501b5f054cc499a7d0ff965eda4a25a2ef81b1956e3cb70
3
+ size 2500154
ICLR/2025/What Makes a Good Diffusion Planner for Decision Making_/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9bae26affec22a6203fc79f0deb817eb1f58f75f1575611b646baf4bc7711509
3
+ size 601913
ICLR/2025/When Attention Sink Emerges in Language Models_ An Empirical View/eba1b017-a633-4834-ad41-af9a7b9407c7_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31830d4db0652dc7401a00c0a7bfef1db467a024bf9aaa89562c12a701ecb40e
3
+ size 194448
ICLR/2025/When Attention Sink Emerges in Language Models_ An Empirical View/eba1b017-a633-4834-ad41-af9a7b9407c7_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f537795808da3a457f3e984d00a5dbe84bae3ea22ddeb9767754373cda513d0f
3
+ size 223870