relationships. Using the Factual T5-based parser (Li et al., 2023b), these elements are extracted and then matched across candidate and reference captions through a three-stage strategy involving exact, synonym, and soft matching. The final evaluation score is computed as a weighted sum of F1 scores for each element type. This benchmark is particularly effective for assessing fine-grained visual grounding in generated captions. We adopt the official implementation for our evaluation.

B.3 Details of Recaptioning Experiment with GPT-4o and Human Annotators

In § 4.2, we analyze the recaptioning behavior for captions generated by an MLLM. Specifically, we randomly select 100 images from CapsBench and prompt Qwen2-VL-7B-Instruct to generate initial captions. These captions, along with the corresponding images, are then input to GPT-4o, which is asked to refine the captions. For human recaptioning, we recruit two well-educated researchers proficient in English and instruct them to refine the captions. We verify that their edits are consistent and of high quality. The instruction provided to both GPT-4o and the human annotators is shown in Tab. 7. For our RICO, to ensure a fair comparison, we set the number of refinement iterations N = 1, i.e., each caption is refined only once.

B.4 Details of Text-to-Image Generation

For the text-to-image generation experiment described in § 4.4.1, we adopt the FLUX.1-dev model (Labs, 2024). To accelerate training, we employ a LoRA-tuned (Hu et al., 2021) version of the model. The training dataset is primarily sourced from Hugging Face (https://huggingface.co/datasets/jackyhate/text-to-image-2M), and we collect a total of 30K images for our experiments. Training is conducted for 10,000 steps on 8 GPUs, each with a batch size of 10. The image resolution is set to 1024 × 1024. We use the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of $10^{-4}$.
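The scoring scheme described above — per-element-type F1 combined by a weighted sum — can be sketched in a few lines. This is a minimal illustration assuming the three-stage matcher has already produced match counts per element type; the function names and example weights are our own, not the benchmark's official API.

```python
def f1_score(matched, num_candidate, num_reference):
    """F1 from match counts: precision over candidate elements,
    recall over reference elements."""
    if num_candidate == 0 or num_reference == 0:
        return 0.0
    precision = matched / num_candidate
    recall = matched / num_reference
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def weighted_caption_score(per_type_counts, weights):
    """Weighted sum of per-type F1 scores.

    per_type_counts: {element_type: (matched, num_candidate, num_reference)}
    weights: {element_type: weight}, assumed to sum to 1.
    """
    return sum(weights[t] * f1_score(*counts)
               for t, counts in per_type_counts.items())
```

For example, if 2 of 2 candidate objects match against 4 reference objects, precision is 1.0, recall 0.5, and the object F1 is 2/3; the per-type weights then control how much each element type contributes to the final score.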
C More Implementation Details

C.1 Prompt in the Refinement Procedure

We provide the prompt used to query GPT-4o (OpenAI et al., 2024) for the refinement procedure described in § 3.3, as shown in Tab. 12.

C.2 Details of DPO Training

For training the DPO variant, we primarily use data from the DCE dataset (Sun et al., 2025), which spans a diverse range of image domains. From this dataset, we randomly sample 10K instances to construct preference pairs, as outlined in § 3.4. For the DPO experiments, we use the LLaMA-Factory toolkit (Zheng et al., 2024). We initialize the model with Qwen2-VL and set the preference scaling parameter to $\beta = 0.1$. The model is fine-tuned for 3 epochs using 8 GPUs. The batch size is set to 64, and the learning rate is $\eta = 1.0 \times 10^{-5}$. We use a cutoff length of 2048 tokens and a warmup ratio of 0.1.

D Basics for DPO

Direct Preference Optimization (DPO) (Rafailov et al., 2024) formulates preference learning as a probabilistic binary classification task, without the need to train an explicit reward model. Given a dataset of preference tuples $(x, y^+, y^-)$, where $x$ denotes a shared context (e.g., a prompt), and $y^+$ and $y^-$ represent the preferred and dispreferred responses respectively, DPO aims to train a policy $\pi_\theta(y|x)$ such
https://arxiv.org/abs/2505.22613v1
that:

$$\pi_\theta(y^+ \mid x) > \pi_\theta(y^- \mid x)$$

DPO defines an implicit reward function based on the log-likelihood ratio between the current policy $\pi_\theta$ and a fixed reference policy $\pi_0$ (e.g., the base model):

$$r(y \mid x) = \log \frac{\pi_\theta(y \mid x)}{\pi_0(y \mid x)}$$

This leads to a binary classification objective that maximizes preference likelihood with KL regularization:

$$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{(x, y^+, y^-)}\left[\log \frac{\exp(\beta r(y^+ \mid x))}{\exp(\beta r(y^+ \mid x)) + \exp(\beta r(y^- \mid x))}\right] - \mathrm{KL}(\pi \,\|\, \pi_0)$$

Substituting $r(y \mid x)$, the DPO training loss becomes:

$$\mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}_{(x, y^+, y^-)}\left[\log \frac{\pi_\theta(y^+ \mid x)^\beta}{\pi_\theta(y^+ \mid x)^\beta + \pi_\theta(y^- \mid x)^\beta}\right]$$

This loss encourages the model to prefer $y^+$ over $y^-$ while implicitly regularizing against the reference model $\pi_0$. Unlike traditional reinforcement learning, DPO requires no reward-model sampling or rollouts, offering both stability and efficiency. More mathematical details can be found in the original DPO paper.³

E Limitations

Our work still has several limitations. First, a key assumption of the proposed pipeline is that the text-to-image model must be sufficiently powerful to faithfully recover as many details as possible from the candidate caption. This places high demands on the capability of the text-to-image model. In this work, we adopt the FLUX model, which demonstrates strong performance but still leaves significant room for improvement. Second, given the discrepancies between the original and reconstructed images, multiple plausible caption revisions may exist. Determining how to refine the caption in a concise yet effective manner remains a significant challenge for the refinement model. Lastly, the iterative version of our method is resource-intensive. Although we propose a DPO-based variant to mitigate this issue, reducing the coupling within the pipeline and improving inference efficiency remain important directions for future work.
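As a concrete illustration of the objective in Appendix D, the per-pair DPO loss reduces to a logistic loss on the scaled log-probability margin. The sketch below is our own minimal rendering, not the training code used in this work; setting the reference log-probabilities to zero recovers the simplified form displayed above, while nonzero values give the standard reference-regularized DPO reward.

```python
import math


def dpo_loss(logp_pos, logp_neg, ref_logp_pos=0.0, ref_logp_neg=0.0, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(margin), where the margin is the
    difference of implicit rewards r = beta * log(pi_theta / pi_0)
    between the preferred and dispreferred responses."""
    margin = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
    # Numerically stable -log(sigmoid(margin)).
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

At zero margin the loss is log 2 ≈ 0.693, and it decreases monotonically as the preferred response becomes relatively more likely under $\pi_\theta$.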
³ https://arxiv.org/pdf/2305.18290

================ PROMPT IN THE REFINEMENT PROCEDURE ================

We are working on a project that involves generating captions for images and using these captions to reconstruct the images. The process follows these steps:
1. Original Image (First Image): A caption is generated based on this image.
2. Reconstructed Image (Second Image): The generated caption is used as input for a text-to-image model to create this image.

Your Task
Compare the original and reconstructed images, analyzing their differences to identify potential improvements for the original caption. Based on your observations, provide a revised caption that could enhance the reconstruction quality.

Guidelines for Comparison
• Visual Details: Color, shape, texture, and material of objects.
• Composition & Layout: Object positioning, spatial relationships, and overall scene structure.
• Human Attributes (if applicable): Pose, facial expression, skin tone, clothing, and hairstyle.
• Perspective & Style: Type of image, camera angle, depth of field, lighting, and artistic style.
• Text in the Image: Accuracy of any visible words, symbols, or signs.
• Image Quality: Blurriness, artifacts, or inconsistencies in object rendering.
• World Knowledge: Proper nouns or specific real-world references that should be preserved.
• Color Aesthetics: Color palette, grading, and overall mood consistency.

How to Improve the Caption
• Add missing details that were lost in reconstruction.
• Clarify ambiguous descriptions to provide more precise information.
• Correct any inaccuracies based on observed differences.
• Specify key attributes (e.g., "a red leather couch" instead of "a couch").

Your revised caption should aim to reduce discrepancies between the original
The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models

Ganqu Cui1∗, Yuchen Zhang1,4∗, Jiacheng Chen1∗, Lifan Yuan3, Zhi Wang5, Yuxin Zuo2, Haozhan Li2, Yuchen Fan1, Huayu Chen2, Weize Chen2, Zhiyuan Liu2, Hao Peng3, Lei Bai1, Wanli Ouyang1, Yu Cheng1,6†, Bowen Zhou1,2†, Ning Ding2,1†

1 Shanghai AI Laboratory  2 Tsinghua University  3 UIUC  4 Peking University  5 Nanjing University  6 CUHK

Code: https://github.com/PRIME-RL/Entropy-Mechanism-of-RL

Abstract

This paper aims to overcome a major obstacle in scaling reinforcement learning (RL) for reasoning with large language models (LLMs), namely the collapse of policy entropy. This phenomenon is consistently observed across vast RL runs without entropy intervention, where the policy entropy drops sharply at the early training stage, leading to an overly confident policy model. As a consequence, this diminished exploratory ability is always accompanied by the saturation of policy performance. In practice, we establish a transformation equation $R = -a \exp(H) + b$ between entropy $H$ and downstream performance $R$, where $a, b$ are fitting coefficients. This empirical law strongly indicates that policy performance is obtained by trading away policy entropy, and is thus bottlenecked by its exhaustion, with a fully predictable ceiling ($H = 0$, $R = -a + b$). Our finding necessitates entropy management for continuous exploration toward scaling compute for RL. To this end, we investigate entropy dynamics both theoretically and empirically. Our derivation highlights that the change in policy entropy is driven by the covariance between action probability and the change in logits, which is proportional to the advantage when using Policy Gradient-like algorithms (Williams, 1992). That is to say, a high-probability action with high advantage would reduce policy entropy, while a rare action with high advantage would increase policy entropy.
Empirical study shows that the values of the covariance term and the entropy differences match exactly, supporting the theoretical conclusion. Moreover, the covariance term stays mostly positive throughout training, further explaining why policy entropy decreases monotonically. Through understanding the mechanism behind entropy dynamics, we are motivated to control entropy by restricting the update of high-covariance tokens. Specifically, we propose two simple yet effective techniques, namely Clip-Cov and KL-Cov, which clip and apply a KL penalty to tokens with high covariances, respectively. Experiments show that these methods encourage exploration, thus helping the policy escape entropy collapse and achieve better downstream performance.

"Nature never undertakes any change unless her interests are served by an increase in entropy." —Max Planck

∗ Equal contribution. † Corresponding Authors.

Preprint. arXiv:2505.22617v1 [cs.LG] 28 May 2025

Contents

1 Introduction
2 The Predictable "Collapse" of Policy Entropy
  2.1 Preliminaries
  2.2 Settings
  2.3 A First Glance: Entropy Collapse and Performance Saturation
  2.4 Fitting the Curves between
Entropy and Performance
  2.5 Understanding the Coefficients
  2.6 Discussion
3 Dynamics Analysis of Policy Entropy
  3.1 Entropy Dynamics of Softmax Policy
  3.2 Entropy Dynamics under Policy Gradient / Natural Policy Gradient Algorithms
  3.3 Empirical Verification
4 Entropy Control by Covariance Regularization
  4.1 Effect of Entropy Regularization
  4.2 Suppressing Tokens with High Covariances
  4.3 Experiments
  4.4 Get Policy Entropy Controlled
  4.5 Discussion
5 Related Work
6 Conclusion
A Training Details for Different Models
B More Fitting Results
C Fitting Results of Training with Different Datasets
D Fitting Results of Instruct Models
E Proof
  E.1 Useful Lemmas
  E.2 Proof for Lemma 1
  E.3 Proof for Proposition 1
  E.4 Proof for Theorem 2

1 Introduction

Applied to recent reasoning-centric large language models (LLMs), reinforcement learning (RL) escapes narrow, task-specific confines: the models' sweeping generalization introduces a new axis that vastly enlarges the exploratory landscape. This shift has yielded impressive reasoning gains (OpenAI, 2024a; DeepSeek-AI et al., 2025), yet the dilemma persists: scaling training compute for learning from experience (reinforcement learning) (Silver & Sutton, 2025) rather than imitation learning (pre-training and fine-tuning) remains non-trivial. Among the challenges emerges a major obstacle, the diminishment of policy entropy.

[Figure 1: Left: Entropy collapse and performance saturation. Over 95% of the entropy drop/performance gains take place at the early stage of RL training. The model then reaches a plateau with little improvement. Right: The predictable relationship between validation performance and policy entropy. Without intervention, the policy "trades" entropy for performance exponentially, showing clear ceilings that hinder further policy enhancement.]

The core challenge in RL is the exploitation-exploration trade-off (Sutton, 1988): balancing the reuse of proven strategies against the search for novel ones. For exploration, a key concept quantifying the exploratory potential is policy entropy, which measures the uncertainty in the policy's action selection process.
In the RL literature, the ability to mitigate the decline of policy entropy is regarded as essential to most algorithms (Williams & Peng, 1991; Williams, 1992; Eysenbach & Levine, 2021), and policy entropy has been intensively steered and actively controlled via regularization (Ziebart et al., 2008; Schulman et al., 2017b; Haarnoja et al., 2018). For LLMs, while the typical behavior of policy entropy remains largely understudied (Yu et al., 2025; He et al., 2025), we find an intriguing and consistent pattern from broad experiments: policy entropy sharply declines to near 0 within a few training steps, demonstrating that the policy becomes extremely certain. Consequently, the inability to explore new paths leads to a performance plateau, where validation performance also struggles to improve. Quantitatively, we further reveal that, without entropy intervention such as an entropy loss or KL regularization, the downstream performance is fully predictable from policy entropy, and the fitted curve is a simple exponential function $R = -a \exp(H) + b$, as shown in Figure 1. Basically, the policy is trading uncertainty (entropy) for rewards (Yue et al., 2025) in
a predictable manner. This empirical law yields two critical corollaries: (1) Like Scaling Laws (Kaplan et al., 2020; Hoffmann et al., 2022), the exploitation-exploration curve is predetermined given the policy model and training data. This allows us to predict policy performance at the early stage of RL and to predict the performance of large models from small models (OpenAI, 2024b) (Sec. 2.4). (2) More importantly, this equation indicates that the upper bound of policy performance is also deterministic upon the exhaustion of policy entropy ($H = 0$, $R = -a + b$), so the return of scaling training compute for RL could be marginal. What's worse, naively applying entropy regularization methods is proven ineffective (Sec. 4.1). In short, scalable RL calls for breaking the entropy bottleneck.

Solving this issue requires a principled understanding of the mechanism behind this observation, i.e., why does policy entropy decrease monotonically? To this end, we analyze the dynamics of policy entropy both theoretically and empirically. Our key findings highlight that, for softmax policies like LLMs, the entropy change between two consecutive steps is proportional to the covariance of the log-probability and the corresponding logit change for an action (Liu, 2025). Furthermore, under Policy Gradient (Williams, 1992)-like and Natural Policy Gradient (Kakade, 2001)-like algorithms, the logit difference is proportional to the action advantage. Intuitively, an action with high advantage and high probability would reduce policy entropy, while a rare action with a high advantage would increase entropy. This theoretical conclusion is validated by experimental results. At the early stage, the policy demonstrates high covariance on the training data, indicating that the policy's confidence is well calibrated (Kadavath et al., 2022); it can thus safely exploit trajectories with high confidence, strengthening its beliefs and minimizing entropy (Zuo et al., 2025; Zhang et al., 2025; Agarwal et al., 2025).
As training progresses, the covariance gradually declines but remains positive, continually dragging policy entropy even lower. The analysis of entropy dynamics demonstrates that high covariance is detrimental to scalable RL, which provides a guideline for uplifting policy entropy: limit the step sizes of high-covariance tokens. We thereby design two corresponding strategies aimed at entropy control, namely Clip-Cov and KL-Cov, to replace the clip and PPO-KL methods in the surrogate loss (Schulman et al., 2017b). Clip-Cov randomly selects a small portion of tokens with positive covariances and detaches their gradients. KL-Cov, on the other hand, applies a KL penalty to the tokens with the largest covariances. Experimental results show that we can actively control policy entropy by tuning the threshold parameters. Consequently, the policy model escapes the low-entropy trap and achieves better performance on mathematical reasoning.

It has become a common belief that training compute will shift from pre-training to post-training, especially RL (Silver & Sutton, 2025). On the road to scaling RL with increased compute, it is pivotal to keep exploring, find novel paths, and continuously improve, thus utilizing the compute even better (Sutton, 2019). Our work provides an entropy-centric perspective for this purpose, which we hope will be beneficial for understanding, analyzing, and advancing the underlying mechanisms of RL for LLMs.

2 The Predictable "Collapse" of Policy Entropy
TAKEAWAY

Without intervention, e.g., entropy or KL regularization, policy entropy is traded for reward predictably during RL. The empirical quantitative relationship between validation reward $R$ and policy entropy $H$ can be expressed as $R = -a \exp(H) + b$. Then:
• It suggests the predictability of policy performance from entropy;
• The coefficients $a, b$ reflect internal characteristics of the policy and data;
• The performance ceiling of the policy ($H = 0$, $R = -a + b$) greatly burdens the scalability of RL for LLM reasoning.

In this section, we aim to answer the research question: What is the typical behavior of policy entropy during RL for LLMs? Through extensive experiments, we observe a consistent "entropy collapse" phenomenon, which is not favored in RL since the policy would struggle to explore new paths (Sec. 2.3). We further extend it to an empirically predictable relation between policy entropy and validation performance (Sec. 2.4), and analyze the coefficients in the equation (Sec. 2.5).

2.1 Preliminaries

We consider tuning LLMs with RL on verifiable tasks, such as math and coding, to avoid reward hacking. Given an input prompt $x$, an LLM $\pi_\theta$ autoregressively generates an output sequence $y$, which consists of $T$ tokens $\{y_1, \dots, y_t, \dots, y_T\}$. We use RL to optimize the LLM policy to maximize the cumulative reward $r$ received from the verifier:

$$\max_\theta J(\theta) := \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(x)}[r(y)] \quad (1)$$

where $\mathcal{D}$ is the training distribution. To optimize the objective function, it is common practice to use the Policy Gradient algorithm (Williams, 1992) for gradient estimation:

$$\nabla_\theta J(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(x)}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(y_t \mid y_{<t})\, A_t\right]. \quad (2)$$

Here $A_t$ is the advantage of the current action and is implemented differently across RL algorithms. If we only have rewards for the full trajectory, the vanilla REINFORCE algorithm (Williams, 1992) directly defines $A_t = r(y)$. To reduce variance, GRPO (Shao et al., 2024) and RLOO (Kool et al., 2019; Ahmadian et al., 2024) further incorporate group-wise normalization.
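The group-wise normalization can be sketched as follows; this is a minimal plain-Python illustration (the epsilon guard and function name are our own additions, not taken from the paper's codebase).

```python
import math


def grpo_advantages(rewards, eps=1e-6):
    """Group-normalized advantages: center the K sampled rewards on the
    group mean and scale by the group standard deviation."""
    k = len(rewards)
    mean = sum(rewards) / k
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / k)
    return [(r - mean) / (std + eps) for r in rewards]
```

Every response in a group receives a single scalar advantage, which is then broadcast to all of its tokens.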
For example, GRPO samples $K$ responses for each prompt and estimates the advantage as follows:

$$A_t = \frac{r(y) - \operatorname{mean}\left(r\left(y^{1:K}\right)\right)}{\operatorname{std}\left(r\left(y^{1:K}\right)\right)}. \quad (3)$$

To handle off-policy data and constrain the policy update size, PPO (Schulman et al., 2017b) proposed to optimize a surrogate loss:

$$\mathcal{L}(\theta) = \mathbb{E}_t\left[\min\left(\frac{\pi_\theta(y_t \mid y_{<t})}{\pi_{\theta_{\mathrm{old}}}(y_t \mid y_{<t})} A_t,\; \operatorname{clip}\left(\frac{\pi_\theta(y_t \mid y_{<t})}{\pi_{\theta_{\mathrm{old}}}(y_t \mid y_{<t})},\, 1-\epsilon,\, 1+\epsilon\right) A_t\right)\right] \quad (4)$$

Policy entropy. Policy entropy quantifies the predictability or randomness inherent in the actions selected by an agent. Given the policy model $\pi_\theta$ and training dataset $\mathcal{D}$, we measure the average token-level entropy of the policy model on the training data, defined as follows:

$$\mathcal{H}(\pi_\theta, \mathcal{D}) = -\mathbb{E}_{\mathcal{D}, \pi_\theta}[\log \pi_\theta(y_t \mid y_{<t})] = -\frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} \frac{1}{|y|} \sum_{t=1}^{|y|} \mathbb{E}_{y_t \sim \pi_\theta}[\log \pi_\theta(y_t \mid y_{<t}, x)] \quad (5)$$

This entropy quantifies the uncertainty level of the policy on the current prompts and is widely adopted in maximum-entropy RL as a regularization term (Ziebart et al., 2008; Haarnoja et al., 2017, 2018). In practice, we calculate the entropy for each batch of prompts randomly sampled from the training dataset.

2.2 Settings

We adopt a unified protocol covering 4 model families and 11 base models (0.5-32B parameters), verifiable task domains of math and coding evaluated on 8 public benchmarks, and 4 RL algorithms.

Models. The models adopted in our experiments span 4 model families and 11 widely used open-source base models. Specifically, these consist of the Qwen2.5 family (Qwen2.5-0.5B, 1.5B,
3B, 7B, 32B) (Qwen et al., 2025), the Mistral family (Mistral-7B-v0.3 (Jiang et al., 2023), Mistral-Nemo-Base-2407 (MistralAI-NeMo), Mistral-Small-3.1-24B-Base-2501 (MistralAI-Small-3)), the LLaMA family (LLaMA3.2-3B (Meta-Llama-3.2), LLaMA3.1-8B (Meta, 2024)), and DeepSeek-Math-7B-Base (Shao et al., 2024).

Tasks and datasets. We primarily focus on math and coding problems with verifiable rewards. Due to inherent differences in the initial reasoning abilities between model families, we train models using data of different difficulty levels to stabilize the RL process. Details can be found in Appendix A. Meanwhile, we use the same data during downstream performance evaluation to maintain consistency. For math tasks, the evaluation datasets include MATH500 (Hendrycks et al., 2021), AIME 2024 (Li et al., 2024), AMC (Li et al., 2024), OlympiadBench (He et al., 2024), and OMNI-MATH (Gao et al., 2024). For code tasks, we split the test sets of Eurus-2-RL-Code (Cui et al., 2025) and KodCode (Xu et al., 2025).

[Figure 2: Avg. entropy consumption/performance gain (%) in 11 RL runs with different models.]

RL training. We start RL from the base models following the "Zero" setting (DeepSeek-AI et al., 2025) with the veRL framework (Sheng et al., 2024). For RL algorithms, we employ GRPO (Shao et al., 2024), REINFORCE++ (Hu, 2025), and PRIME (Cui et al., 2025). For hyperparameters, we use a learning rate of $5 \times 10^{-7}$ for the policy model and $1 \times 10^{-6}$ for the implicit PRM (Yuan et al., 2025) in PRIME. Both policy and PRMs use a batch size of 256 and a micro-batch size of 128. The rollout stage collects 512 prompts with 8 sampled responses each. By default, we set the reference KL divergence coefficient to 0. The $\epsilon$ in the policy loss (Equation 4) is 0.2. We filter out prompts that receive all correct or all incorrect responses.
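The token-level entropy of Eq. (5) that we monitor during these runs can be computed per position directly from the model's logits; below is a minimal plain-Python sketch (our own illustrative functions, not the veRL implementation).

```python
import math


def softmax_entropy(logits):
    """Entropy (in nats) of the categorical distribution softmax(logits)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)


def mean_token_entropy(per_token_logits):
    """Average token-level entropy over the sampled positions of a batch."""
    return sum(softmax_entropy(l) for l in per_token_logits) / len(per_token_logits)
```

Uniform logits over a vocabulary of size V give the maximum entropy log V, while a near-deterministic distribution gives entropy near 0 — the regime the policy collapses into.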
2.3 A First Glance: Entropy Collapse and Performance Saturation

Across all experiments, we observe a consistent pattern: the policy entropy undergoes a sharp drop at the very beginning of training, and it keeps declining monotonically to near zero. Meanwhile, the policy's validation performance presents the inverse trend: it rises rapidly when training starts and then saturates at a certain level. Figure 2 illustrates the average normalized entropy consumption/performance gain in percentage throughout 2400-gradient-step RL runs with 11 different models. We can see that 73% of the entropy consumption and 76% of the performance gain occurred in just the first 200 gradient steps (1/12 of training), and the first 800 (1/3) steps account for over 93% of the performance gains together with 94% of the entropy losses. This means that over 2/3 of the training steps yielded marginal returns.

2.4 Fitting the Curves between Entropy and Performance

Motivated by the observed entropy collapse phenomenon, we conduct a more detailed quantitative analysis. Through extensive experiments, we find that the downstream performance (accuracy) and entropy can be fitted with the exponential function:

$$R = -a \exp(H) + b, \quad (6)$$

where $R$ represents the validation performance and $H$ is the entropy. The fitting results of different model families with GRPO are presented in Figures 3 and 4. It is worth noting that the fitted curves
precisely describe the performance-entropy relationships across all conducted experiments, with models spanning a wide range of sizes, families, and tasks. Only 2 coefficients are needed to fit a curve of over 200 data points, showing a high degree of regularity. The fitting results of instruct models and of training on different datasets can be found in Appendix B.

[Figure 3: Fitting curves between policy entropy and validation performance on math tasks. We conduct validation every 4 rollout steps until convergence.]

[Figure 4: Fitting curves between policy entropy and validation performance on coding tasks. We conduct validation every 4 rollout steps until convergence.]

Predicting the late stage from the early stage. As we can precisely fit a curve between policy entropy and validation performance, one straightforward application of this fitting is to predict policy performance at low entropy from observations at high-entropy data points. To verify that the functional form can be applied at the early stage of RL training, we take a step further by fitting the function within limited training steps and using the fitted function to predict the final performance.
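Since Eq. (6) is linear in the coefficients once we substitute $x = \exp(H)$, the fit reduces to a closed-form simple linear regression; the sketch below is our own illustration, not the fitting code used in the paper.

```python
import math


def fit_entropy_performance(entropies, performances):
    """Least-squares fit of R = -a * exp(H) + b: substituting x = exp(H)
    makes R linear in x, so simple linear regression suffices."""
    xs = [math.exp(h) for h in entropies]
    n = len(xs)
    x_mean = sum(xs) / n
    r_mean = sum(performances) / n
    sxx = sum((x - x_mean) ** 2 for x in xs)
    sxr = sum((x - x_mean) * (r - r_mean) for x, r in zip(xs, performances))
    slope = sxr / sxx  # slope of R against exp(H) equals -a
    a = -slope
    b = r_mean - slope * x_mean
    return a, b


def predict_performance(a, b, entropy):
    """Predicted validation performance; the ceiling is b - a at H = 0."""
    return -a * math.exp(entropy) + b
```

Fitting on early high-entropy observations and evaluating `predict_performance` at lower entropies is exactly the early-to-late extrapolation described above; the ceiling follows from plugging in H = 0.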
Taking the Qwen2.5 family as an example, we fit the functional form with coefficients $a$ and $b$ using only the first 36 training steps. Using this function, we make an advance prediction for the subsequent 200 training steps.

[Figure 5: Predicting the final performance of the Qwen2.5 family with only 15% of the training steps using the fitted function. The average RMSE is 0.9% and 1.2% over all predicted steps, and 0.5% and 1.9% for final-step performance, respectively.]

As shown in Figure 5, for the math and coding tasks, we achieve an average root mean square error (RMSE) of 0.9% and 1.2% during prediction, and 0.5% and 1.9% for final performance, respectively. This suggests that the late-stage performance of the policy can be estimated early in training, without the need to run the full RL process. Moreover, we can also obtain the final performance of the policy when it becomes static: with $H = 0$, $R = -a + b$, which is the upper bound of the policy given the training data.

2.5 Understanding the Coefficients

The coefficients are algorithm-irrelevant. We investigate whether different RL algorithms affect the fitted function. Figure 6 plots the fitted curves with GRPO, RLOO, and PRIME. We find that, although these algorithms apply distinct advantage estimation methods, they do not influence the fitted entropy-performance function. This indicates that the coefficients $a$,
$b$ reflect some intrinsic properties of the policy model and training data.

[Figure 6: Training Qwen2.5-7B with different RL algorithms.]

Predicting coefficients when scaling parameters. Taking a closer look at the coefficients $a, b$, their meanings are clear. By differentiating the equation, we derive $dR/dH = -a \exp(H)$, which means $a$ is the rate at which the model converts entropy into downstream performance. Also, as stated above, $-a + b$ is the maximum validation score the model can achieve when entropy is fully depleted. Intuitively, $a, b$ should be related to model size: larger models could trade entropy for reward more efficiently, as well as achieve higher performance. To validate this, we again adopt the Qwen2.5 model family, since its members share a similar architecture and undergo a similar training process. In Figure 7, we plot the model parameter count (without embeddings) versus $a, b$ on math and coding tasks. We observe that both $a$ and $b$ vary smoothly with policy size at a log-linear rate. This log-linear relationship between model sizes and coefficients is also observed in Gao et al. (2022). It allows us to extrapolate the coefficients of larger models based on the training dynamics of smaller models, extending the predictability to the dimension of model size. In other words, it enables us to predict the final performance of larger LMs through RL training without actually training them, once we train smaller models within the same family and obtain their coefficients. Figure 13 also illustrates that the coefficients are related to the training data.

2.6 Discussion

The predictability. So far, we have established predictability between (1) policy performance and entropy, and (2) the coefficients in (1) and model sizes. Such predictability reminds us of Scaling Laws for language models (Kaplan et al., 2020; Hoffmann et al., 2022) and RLHF (Gao et al., 2022).
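The log-linear relationship between model size and the coefficients can likewise be exploited with a closed-form regression. The sketch below is our own illustration, assuming that coefficient values from smaller models of the same family are available (the actual values would come from the fits in Sec. 2.4, not shown here).

```python
import math


def fit_log_linear(sizes_b, coef_values):
    """Fit coef = c0 + c1 * log(size) by simple least squares,
    mirroring the log-linear trend observed for the Qwen2.5 family."""
    xs = [math.log(s) for s in sizes_b]
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(coef_values) / n
    c1 = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, coef_values))
          / sum((x - x_mean) ** 2 for x in xs))
    c0 = y_mean - c1 * x_mean
    return c0, c1


def extrapolate_coef(c0, c1, size_b):
    """Predict the coefficient of a larger, untrained model from the fit."""
    return c0 + c1 * math.log(size_b)
```

Fitting `a` and `b` separately on the smaller models and extrapolating to a larger parameter count yields the predicted entropy-performance curve of the larger model before training it.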
It seems that RL with LLMs keeps trading entropy for reward throughout training. However, other works that adopt different policy models (Luo et al., 2025) or use off-policy data (Yan et al., 2025) observed distinct entropy patterns. Therefore, this predictability is arguably not universal, and we call for a more in-depth analysis of the entropy behavior under different conditions.

[Figure 7: Fitted curves between coefficients and model sizes of the Qwen2.5 model family. The model sizes are parameter counts (B) without embeddings. $a, b$ are obtained from the experiments in Sec. 2.4. We use a log-linear function to fit the curves.]

The ceiling. There is an intensive discussion questioning whether RL merely elicits latent behaviors that were already learned in pre-training, and thus cannot break the ceiling of the base model (Yue et al., 2025). Our results conditionally support this
claim: if policy entropy diminishes, the ceiling not only exists but can also be predicted. However, we argue that it is not an intrinsic limitation of RL that sets up the ceiling; rather, the entropy mechanism of LLMs leads to this result. Although LLMs provide us with strong policy priors, their output distributions are also narrowed, which might hinder the exploration potential during RL.

3 Dynamics Analysis of Policy Entropy

TAKEAWAY

(1) For softmax policies, including LLMs, the change of policy entropy is determined by the covariance between the log-probability and the change in logits of actions. (2) For Policy Gradient and Natural Policy Gradient, the change in logits is proportional to the action advantage, meaning that a high covariance leads to a quick decrease of policy entropy, as observed in RL for LLM reasoning.

We have unveiled that the entropy collapse issue greatly obstructs RL scaling for LLM reasoning. To solve it, we need a more principled understanding of the dynamics of policy entropy, i.e., when entropy decreases and when it increases. In this section, we focus on entropy dynamics, especially the step-wise entropy difference $\mathcal{H}(\pi_\theta^{k+1}) - \mathcal{H}(\pi_\theta^k)$. We start from a theoretical perspective, first deriving the first-order derivative of entropy for a softmax policy in Sec. 3.1, then extending further to the Policy Gradient and Natural Policy Gradient algorithms (Sec. 3.2). Afterwards, we validate our conclusions with experiments (Sec. 3.3).

3.1 Entropy Dynamics of Softmax Policy

For step $k$, we try to calculate the entropy difference before and after one parameter update, i.e., between $\mathcal{H}(\pi_\theta^{k+1})$ and $\mathcal{H}(\pi_\theta^k)$. For this, we first consider an intrinsic property of LLMs: they are softmax policies, meaning the policies are parameterized by

$$\pi_\theta(a \mid s) = \frac{\exp(z_{s,a})}{\sum_{a' \in \mathcal{A}} \exp(z_{s,a'})}. \quad (7)$$

Here $s \sim d_{\pi_\theta}$ and $a \sim \pi_\theta^k(\cdot \mid s)$ represent the state and action, and $z_{s,a}$ is the output logit of action $a$ given state $s$.
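The first-order entropy identities derived in this section can be checked numerically for the softmax parameterization of Eq. (7). The sketch below (plain Python, our own illustration) compares the exact entropy change after a small logit update against the negative-covariance prediction, and computes the expected policy-gradient logit update for a tabular softmax policy under the assumption that advantages are centered under the policy ($\mathbb{E}_\pi[A] = 0$, as holds for true advantages).

```python
import math


def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]


def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)


def entropy_change_vs_prediction(z, dz, eta=1e-4):
    """Exact entropy change after logits move z -> z + eta*dz, versus the
    first-order prediction -Cov_{a~pi}(log pi(a), eta*dz_a)."""
    p = softmax(z)
    logp = [math.log(q) for q in p]
    exact = entropy(softmax([zi + eta * di for zi, di in zip(z, dz)])) - entropy(p)
    e_lp = sum(pi * l for pi, l in zip(p, logp))
    e_dz = sum(pi * eta * di for pi, di in zip(p, dz))
    cov = sum(pi * l * (eta * di) for pi, l, di in zip(p, logp, dz)) - e_lp * e_dz
    return exact, -cov


def expected_pg_logit_update(z, advantages, eta=0.1):
    """Expected policy-gradient logit update eta * E_{a~pi}[grad_z log pi(a) * A(a)]
    for a tabular softmax policy, using grad_{z_b} log pi(a) = 1[a=b] - pi(b).
    With centered advantages this equals eta * pi(b) * A(b) for each logit b."""
    p = softmax(z)
    baseline = sum(pi * a for pi, a in zip(p, advantages))  # E_pi[A]
    return [eta * pi * (a - baseline) for pi, a in zip(p, advantages)]
```

With a small step size the exact entropy change and the covariance prediction agree to second order, and the expected update matches the $\eta\, \pi_\theta(a \mid s)\, A(s, a)$ form once the advantages are centered.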
For any softmax policy, we have the following Lemma:

Lemma 1 (Entropy difference of softmax policy) (Proof in Appendix E.2, adapted from Liu (2025)) Assume that policy $\pi_\theta$ is a tabular softmax policy, where each state-action pair $(s, a)$ is associated with an individual logit parameter $z_{s,a} = \theta_{s,a}$. The difference of policy entropy between two consecutive steps, under a first-order approximation, satisfies

$$\mathcal{H}(\pi_\theta^{k+1}) - \mathcal{H}(\pi_\theta^{k}) \approx \mathbb{E}_{s \sim d_{\pi_\theta}}\left[ \mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \right] \approx \mathbb{E}_{s \sim d_{\pi_\theta}}\left[ -\mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\, z^{k+1}_{s,a} - z^{k}_{s,a} \right) \right]$$

Here $z^{k+1}_{s,a} - z^{k}_{s,a}$ is the change in the output logits between step $k$ and step $k+1$. This Lemma indicates that the change of policy entropy approximately equals the negative covariance between the log-probability of an action and the change of its logit. That is, if an action $a$ receives a high probability from the policy before the update and its logit also increases after the update, then it decreases the policy entropy.

3.2 Entropy Dynamics under Policy Gradient / Natural Policy Gradient Algorithms

From Lemma 1, the step-wise difference of output logits $z^{k+1}_{s,a} - z^{k}_{s,a}$ drives the change of entropy, and this difference depends on the specific training algorithm in use. Here, we further derive the logit change under the Policy Gradient (Williams, 1992) and Natural
Policy Gradient (Kakade, 2001) algorithms. Assume that we update the actor policy via Policy Gradient; then $z^{k+1}_{s,a} - z^{k}_{s,a} = -\eta \cdot \nabla_z J(\theta)$, where $J(\theta)$ denotes the objective function and $\eta$ denotes the learning rate. With $\nabla_z J(\theta)$ calculated via Eq. 2, we have the following proposition:

Proposition 1 (Difference of policy logits in vanilla policy gradient) (Proof in Appendix E.3) Let the actor policy $\pi_\theta$ be a tabular softmax policy updated using Eq. 2 via a gradient step with learning rate $\eta$. The difference of $z_{s,a}$ between two consecutive steps satisfies

$$z^{k+1}_{s,a} - z^{k}_{s,a} = \eta \, \pi_\theta(a \mid s) A(s, a)$$

Applying Proposition 1 to Lemma 1, we can further describe the entropy change with the following theorem:

Theorem 1 (Entropy change under policy gradient) Let the actor policy $\pi_\theta$ be a tabular softmax policy updated via vanilla policy gradient. The difference of policy entropy given state $s$ between two consecutive steps satisfies

$$\mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \approx -\eta \cdot \mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\, \pi_\theta^{k}(a \mid s) \cdot A(s, a) \right)$$

Theorem 1 reveals how policy entropy changes under the policy gradient method. Intuitively, an action $a$ that receives both high (or both low) probability and advantage lowers the entropy, and vice versa. At the early stage of training, the policy demonstrates high covariance on the training data, indicating that its confidence is well calibrated (Kadavath et al., 2022); it can thus safely exploit trajectories with high confidence, strengthening its beliefs and minimizing entropy (Zuo et al., 2025; Zhang et al., 2025; Agarwal et al., 2025).

Liu (2025) conducted the derivation for Natural Policy Gradient. We present the conclusion below.

Theorem 2 (Entropy change under natural policy gradient) (Proof in Appendix E.4) Let the actor policy $\pi_\theta$ be a tabular softmax policy updated via natural policy gradient (Kakade, 2001). The difference of policy entropy given state $s$ between two consecutive steps satisfies

$$\mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \approx -\eta \cdot \mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\, A(s, a) \right)$$

Conclusion.
From Theorem 1 and Theorem 2, we obtain the intuitive insight that, in principle, a strong positive correlation between the action probability under the current policy and the corresponding advantage, on average, leads to a decrease in policy entropy, while a negative correlation tends to increase it. This deeper understanding of the dynamics of policy entropy provides a theoretical foundation for designing practical strategies for entropy control in policy optimization.

3.3 Empirical Verification

The preceding theoretical analysis provides insights into the factors influencing policy entropy when optimizing a softmax policy via a policy gradient algorithm. In this section, we conduct experiments to validate the theoretical conclusion, specifically Theorem 1.

Settings. We apply GRPO with policy gradient, i.e., on-policy learning without the PPO surrogate loss, on Qwen2.5-7B. In this context, we adopt the bandit setting where the prompt $x$ is the state and the whole response $y$ is the action. The covariance term then becomes:

$$\mathrm{Cov}_{a \sim \pi_\theta(\cdot \mid s)}\left( \log \pi_\theta(a \mid s),\, \pi_\theta(a \mid s) \cdot A(s, a) \right) = \mathrm{Cov}_{y \sim \pi_\theta(\cdot \mid x)}\left( \log \pi_\theta(y \mid x),\, \pi_\theta(y \mid x) \cdot A(y, x) \right) \tag{8}$$

During training, we calculate the group-wise covariance for each prompt and average across a batch of prompts. We further normalize the log-probability by the length of the response, which gives

$$\log \pi_\theta(y \mid x) = \frac{1}{|y|} \sum_{t=1}^{|y|} \log \pi_\theta(y_t \mid y_{<t}, x) \tag{9}$$
Figure 8: Left: the dynamics of policy entropy (step-wise entropy difference) and covariance during on-policy GRPO training; they show similar trends, as expected from the theoretical results. Right: different prompt groups show distinct covariance behaviors; easier prompts with higher accuracy have higher covariances, while harder prompts have lower covariances.

Experiment results. Based on the derivation above, we record two key metrics, $\mathrm{Cov}(\cdot)$ and $\mathcal{H}(\pi_\theta)$, across the training period and analyze their relationship and dynamics.

1) Similar dynamics of $\mathrm{Cov}(\cdot)$ and $-d(\mathcal{H})$. According to Theorem 1, we have the theoretical result $-d(\mathcal{H}) \propto \mathrm{Cov}(\cdot)$. As shown in the left panel of Figure 8, the empirical curves of $-d(\mathcal{H})$ and $\mathrm{Cov}(\cdot)$ exhibit highly similar dynamics, providing strong empirical support for the theorem. In particular, during the early stages of training, entropy $\mathcal{H}$ decreases rapidly, accompanied by a relatively large and positive $\mathrm{Cov}(\cdot)$. As RL training progresses, the entropy decay slows down and $\mathrm{Cov}(\cdot)$ stabilizes at a lower level, reflecting the gradual convergence of the policy. $\mathrm{Cov}(\cdot)$ also remains positive throughout training, resulting in a persistent decrease in entropy.

2) Variation in $\mathrm{Cov}(\cdot)$ dynamics across examples of different difficulty. Leveraging our group-based sampling strategy, we categorize training examples by difficulty based on their accuracy. The right panel of Figure 8 illustrates the covariance curves for three difficulty groups (accuracy 0.125, 0.5, and 0.875), where lower accuracy indicates higher difficulty. We observe that $\mathrm{Cov}(\cdot)$ tends to be smaller in magnitude for harder examples, which aligns with intuition: when the model struggles to learn, high-probability actions are not reliably associated with higher expected returns. In contrast, for easier examples, where the model is more confident and better calibrated, $\mathrm{Cov}(\cdot)$ tends to be higher, indicating stronger alignment between action probabilities and advantage estimates.
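The first-order relation of Theorem 1 can also be checked numerically in a toy tabular setting (a minimal sketch with synthetic logits and advantages, not the paper's GRPO experiment): one vanilla policy-gradient step changes the logits by $\eta\,\pi(a)A(a)$ (Proposition 1), and the resulting entropy change should match $-\eta \cdot \mathrm{Cov}(\log \pi,\ \pi \cdot A)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

# One-state tabular bandit: logits z, synthetic advantages A(a).
z = rng.normal(size=8)
A = rng.normal(size=8)
eta = 1e-4  # small step so the first-order approximation is accurate

p = softmax(z)
# Proposition 1: vanilla policy gradient changes logits by eta * pi(a|s) * A(s,a)
z_next = z + eta * p * A
exact = entropy(softmax(z_next)) - entropy(p)

# Theorem 1: dH ≈ -eta * Cov_{a~pi}(log pi(a|s), pi(a|s) * A(s,a)),
# with the covariance taken under the pre-update policy p itself.
logp = np.log(p)
g = p * A
cov = np.sum(p * (logp - np.sum(p * logp)) * (g - np.sum(p * g)))
approx = -eta * cov

print(exact, approx)
```

With a step size of $10^{-4}$, the exact entropy difference and the covariance approximation agree up to second-order terms in the logit update.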
4 Entropy Control by Covariance Regularization

TAKEAWAY We can control policy entropy by restricting the update of tokens with high covariances, e.g., by clipping (Clip-Cov) or applying a KL penalty (KL-Cov). These simple techniques prevent the policy from entropy collapse, thus promoting exploration.

The analysis of entropy dynamics gives us a guideline for entropy control: regularize the update step size of high-covariance actions. In this section, we introduce two simple yet effective techniques, Clip-Cov and KL-Cov, that control entropy precisely and achieve better downstream performance.

4.1 Effect of Entropy Regularization

A common approach in the RL literature to control policy entropy is to apply an entropy loss (Schulman et al., 2017b). We conduct experiments to see whether it is effective for LLMs. Figure 9 presents the results of adding

Figure 9: The policy entropy (with zoomed-in view) and validation accuracy when adding an entropy loss $L_{ent} = L - \alpha \mathcal{H}(\pi_\theta)$, where $L$ is the original loss and $\alpha \in \{0, 0.0001, 0.001, 0.005, 0.01\}$ is the coefficient of the entropy loss.

Figure 10: The policy entropy and validation accuracy of
adding a KL penalty between the policy and the reference model, $L_{KL} = L + \beta D_{KL}(\pi_\theta \,\|\, \pi_{ref})$, where $L$ is the original loss and $\beta \in \{0, 0.001, 0.01, 0.05, 0.1\}$ is the coefficient of the KL loss.

entropy loss. As demonstrated, the entropy loss is highly sensitive to its coefficient: small coefficients (0.0001, 0.001) have a minor influence on entropy, while a large coefficient (0.01) leads to entropy explosion. Although setting the coefficient to 0.005 successfully stabilizes policy entropy, it does not outperform the other baselines.

We also attempt to control entropy by adjusting the KL penalty between the policy model and the reference model; Figure 10 reports the results. Although the reference KL achieves stable entropy values, it fails to improve the policy and instead degrades performance.

To summarize, naively adopting entropy regularization techniques from conventional RL struggles to solve the entropy bottleneck of LLMs. These regularization terms are either hyperparameter-sensitive (He et al., 2025) or degrade policy performance. Accordingly, most recent works do not include them (Cui et al., 2025; Hu et al., 2025; Liu et al., 2025; Yu et al., 2025).

4.2 Suppressing Tokens with High Covariances

Table 1: Covariance distribution of Qwen2.5-7B at training step 1.

Group      Mean value
Top 0.02%  5.654
Top 0.2%   3.112
Top 2%     1.385
Top 20%    0.351
Top 50%    0.152
All        0.003

The unsuccessful attempt to incorporate entropy and reference-KL regularization into the policy loss drives us to seek a more fundamental approach to entropy control. As elaborated above, the policy entropy dynamics are closely connected to the covariance between action probability and advantage. Meanwhile, as shown in Table 1, a small portion of tokens exhibits extremely high covariance, far exceeding the average. That is to say, these outlier tokens play a dominant role in triggering entropy collapse. To mitigate their adverse effect, we aim to impose constraints on their contribution to the policy loss.
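Group statistics like those in Table 1 are straightforward to compute from per-token covariance values; a minimal sketch on synthetic data (the heavy-tailed distribution here is purely illustrative, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(3)

def top_fraction_means(covs, fractions):
    """Mean covariance among the top-f fraction of tokens, as in Table 1."""
    order = np.sort(covs)[::-1]  # descending
    stats = {f: order[:max(1, int(f * len(order)))].mean() for f in fractions}
    stats["all"] = covs.mean()
    return stats

# Synthetic heavy-tailed per-token covariances (illustrative only).
covs = rng.standard_t(df=3, size=100_000) * 0.05
stats = top_fraction_means(covs, [0.0002, 0.002, 0.02, 0.2, 0.5])
for group, mean in stats.items():
    print(group, round(float(mean), 4))
```

For any heavy-tailed distribution, the mean over the top 0.02% of tokens dwarfs the overall mean, which is the qualitative pattern Table 1 reports.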
In the RL literature, two variants of PPO employ either clipping or a KL penalty to constrain policy updates (Schulman et al., 2017b), preventing overly aggressive changes. Drawing inspiration from these approaches, we propose two simple but effective covariance-aware methods, Clip-Cov and KL-Cov, to achieve this goal.

Table 2: Detailed results of GRPO, GRPO with the clip-higher technique, and our methods. For AIME and AMC, the results are avg.@32. Bold denotes the best results.

Method          AIME24  AIME25  AMC   MATH-500  OMNI-MATH  OlympiadBench  Minerva  Avg.
Qwen2.5-7B
GRPO            21.2    9.6     58.7  78.8      27.9       40.7           36.7     38.6
w. Clip-higher  18.1    11.5    56.6  79.2      29.8       43.3           40.4     38.8
w. CLIP-Cov     22.1    15.8    58.2  80.4      30.5       44.1           41.1     40.4
w. KL-Cov       22.6    12.9    61.4  80.8      29.1       42.6           38.2     40.6
Qwen2.5-32B
GRPO            21.8    16.2    69.7  84.2      35.2       43.6           45.5     45.8
w. Clip-higher  35.6    22.3    69.5  77.2      35.1       42.5           43.0     47.2
w. CLIP-Cov     32.3    22.7    67.2  87.0      42.0       57.2           46.0     50.3
w. KL-Cov       36.8    30.8    74.5  84.6      39.1       49.0           46.3     52.2

Natural policy gradient is rarely used in the post-training of LLMs because of its time-consuming second-order optimization, but its KL-constrained objective shares a similar idea with TRPO (Schulman et al., 2015) and PPO. For this reason, we apply
Theorem 2 to algorithms like PPO later in this section.

Suppose a batch of $N$ rollout tokens, where $\pi_\theta(y_i)$ denotes the output probability of the policy model for token $y_i$ given its corresponding prefix. Following Theorem 2, we first define the token-wise centered cross-product between log-probability and advantage as:

$$\mathrm{Cov}(y_i) = \left( \log \pi_\theta(y_i) - \frac{1}{N} \sum_{j=1}^{N} \log \pi_\theta(y_j) \right) \cdot \left( A(y_i) - \frac{1}{N} \sum_{j=1}^{N} A(y_j) \right) \tag{10}$$

$\mathrm{Cov}(y_i)$ is the per-token contribution to the covariance over the $N$ tokens; its expectation is the covariance in Theorem 2.

Clip-Cov. In the Clip-Cov strategy, we clip a small fraction of high-covariance tokens out of the policy gradient update. With Eq. 10 calculated, we randomly select $\lfloor r \cdot N \rfloor$ high-covariance tokens according to their covariance values:

$$I_{\mathrm{clip}} \sim \mathrm{Uniform}\left( \{ i \mid \mathrm{Cov}(y_i) \in [\omega_{low}, \omega_{high}] \},\, \lfloor r \cdot N \rfloor \right) \tag{11}$$

where $I$ is short for index and $r$ denotes the clip ratio. $\omega_{low}, \omega_{high}$ are two predefined bounds on the covariance, both set much higher than the average covariance ($> 500\times$). Finally, tokens with the chosen indices are detached from the policy gradient:

$$L_{\text{Clip-Cov}}(\theta) = \begin{cases} \mathbb{E}_t\left[ \frac{\pi_\theta(y_t \mid y_{<t})}{\pi_{\theta_{old}}(y_t \mid y_{<t})} A_t \right], & t \notin I_{\mathrm{clip}} \\ 0, & t \in I_{\mathrm{clip}} \end{cases} \tag{12}$$

where $t$ is the $t$-th token in one rollout response and each $t$ uniquely corresponds to an index $i$ in $N$.

KL-Cov. The KL-Cov strategy is simpler. Similar to Clip-Cov, we first compute the covariance as in Eq. 10. Then, we rank the tokens and select those within the top-$k$ proportion of covariance:

$$I_{KL} = \{ i \mid \mathrm{Rank}(\mathrm{Cov}(y_i)) \leq k \cdot N \} \tag{13}$$

where $k$ denotes the proportion of tokens subjected to the KL penalty, with $k \ll 1$. Finally, we impose a KL penalty (the KL divergence between the current policy and the rollout policy) on the selected tokens, so the policy loss becomes:

$$L_{\text{KL-Cov}}(\theta) = \begin{cases} \mathbb{E}_t\left[ \frac{\pi_\theta(y_t \mid y_{<t})}{\pi_{\theta_{old}}(y_t \mid y_{<t})} A_t \right], & t \notin I_{KL} \\ \mathbb{E}_t\left[ \frac{\pi_\theta(y_t \mid y_{<t})}{\pi_{\theta_{old}}(y_t \mid y_{<t})} A_t - \beta D_{KL}\left( \pi_{\theta_{old}}(y_t \mid y_{<t}) \,\|\, \pi_\theta(y_t \mid y_{<t}) \right) \right], & t \in I_{KL} \end{cases} \tag{14}$$

where $\beta$ is the coefficient controlling the weight of the KL penalty. We present the pseudo-code in Listing 1.
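The token-selection rules of Eqs. 11 and 13 can also be sketched as runnable code (a minimal numpy sketch with synthetic covariances; the ratios and bounds follow the hyperparameters reported in Sec. 4.3, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def clip_cov_indices(covs, r, w_low, w_high, rng):
    """Eq. 11: uniformly sample floor(r*N) tokens whose covariance lies in
    [w_low, w_high]; these are detached from the policy gradient."""
    candidates = np.flatnonzero((covs >= w_low) & (covs <= w_high))
    n_sel = min(len(candidates), int(r * len(covs)))
    if n_sel == 0:
        return np.array([], dtype=int)
    return rng.choice(candidates, size=n_sel, replace=False)

def kl_cov_indices(covs, k):
    """Eq. 13: the top-k proportion of tokens by covariance get the KL penalty."""
    n_sel = max(1, int(k * len(covs)))
    return np.argsort(covs)[::-1][:n_sel]

# Synthetic heavy-tailed per-token covariances (illustrative only).
covs = rng.standard_t(df=3, size=10_000) * 0.05
clip_idx = clip_cov_indices(covs, r=2e-4, w_low=1.0, w_high=5.0, rng=rng)
kl_idx = kl_cov_indices(covs, k=2e-3)
print(len(clip_idx), len(kl_idx))
```

Both rules touch only a tiny fraction of the batch ($r, k \ll 1$), which is what allows entropy control with a minimal change to the policy loss.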
def compute_policy_loss(old_log_prob, log_prob, advantages,
                        select_ratio, method, **args):
    ratio = exp(log_prob - old_log_prob)
    pg_losses1 = -ratio * advantages
+   # calculate the token-wise centered cross-product (Eq. 10)
+   covs = (log_prob - log_prob.mean()) * (advantages - advantages.mean())
+   select_num = int(select_ratio * len(pg_losses1))
    if method == "clip_cov":
        pg_losses2 = -clip(ratio, args["clip_range_lb"],
                           args["clip_range_ub"]) * advantages
+       # randomly select indices to be detached (Eq. 11)
+       clip_idx = random_select(covs[(covs > args["cov_lb"]) & (covs < args["cov_ub"])],
+                                num=select_num)
+       pg_losses1[clip_idx].detach_()
+       pg_losses2[clip_idx].detach_()
        pg_loss = maximum(pg_losses1, pg_losses2).mean()
    if method == "kl_cov":
        kl_coef = args["kl_coef"]
        kl_penalty = (log_prob - old_log_prob).abs()
-       pg_losses = pg_losses1 + kl_coef * kl_penalty
+       # find the indices with the highest covariance (Eq. 13)
+       select_idx = topk(covs, k=select_num, largest=True)
+       # apply the KL penalty to these tokens (Eq. 14)
+       pg_losses1[select_idx] += kl_coef * kl_penalty[select_idx]
        pg_loss = pg_losses1.mean()
    return pg_loss

Listing 1: Pseudo-code of the policy loss computation with Clip-Cov and KL-Cov (lines marked + are added, - removed). The implementation only needs to modify a few lines of code.

4.3 Experiments

Settings. We train Qwen2.5 models on math tasks to validate Clip-Cov and KL-Cov, using the DAPO-MATH dataset (Yu et al., 2025) for training. In each rollout step, we sample 8 responses per prompt for a batch of 256 prompts at temperature 1, and subsequently perform 8 policy updates on the collected responses. We filter out prompts with all-correct or all-incorrect responses. The test datasets include MATH-500, AIME 2024, AIME 2025 (Li et al., 2024), AMC, OMNI-MATH, OlympiadBench, and Minerva (Lewkowycz et al., 2022). During evaluation, we set the rollout temperature to 0.6 for AIME and AMC, and use greedy decoding for all other test sets. As baselines, we compare against the original GRPO and GRPO with Clip-higher, which raises the upper threshold $\epsilon$ in the PPO loss to 0.28 (Yu et al., 2025). For Clip-Cov, the clip ratio $r$ is $2 \times 10^{-4}$, with $\omega_{low}$ and $\omega_{high}$ equal to 1 and 5, respectively. For KL-Cov, $k$ is set to $2 \times 10^{-3}$ and $2 \times 10^{-4}$ for Qwen2.5-7B and 32B, respectively, and the KL coefficient $\beta$ is set to 1. The maximum generation length is 8192.

Results and analysis. We present the experimental results in Table 2. Both of our approaches achieve non-trivial improvements across all benchmarks: compared to GRPO, our method outperforms it by 2.0% on average for the 7B model and by 6.4% for the 32B model. As shown in Figure 11, our method maintains a considerably higher level of entropy throughout training. For example, when the baseline's entropy reaches a plateau and can no longer be consumed, the KL-Cov method still sustains an entropy level over 10× higher. Meanwhile, the response length of the policy model steadily increases, and its performance on the test set consistently surpasses that of the baseline.
This indicates that our model is able to explore more "freely" during training, learning a better policy through RL. Compared to the clip-higher technique, which can also increase entropy and improve performance in the early stage of training but gradually becomes unstable, with performance saturating and then declining, our method obtains more stable entropy curves throughout training, ultimately achieving non-trivial improvements over the baselines.

Figure 11:
Training Qwen2.5-7B (top) / Qwen2.5-32B (bottom) with GRPO, with and without our methods. Left: entropy dynamics; our methods lift policy entropy away from collapse, enabling sustained exploration. Middle: our methods also incentivize longer responses compared with vanilla GRPO. Right: the policy model consistently outperforms the baseline on the test sets, avoiding performance plateaus.

Moreover, we observe that our method yields more substantial gains on the larger Qwen2.5-32B. Specifically, it achieves improvements of 15.0% and 14.6% over GRPO on the most challenging benchmarks, AIME24 and AIME25, respectively. We infer that this is because the 32B model possesses greater potential from pretraining than the 7B model: once the "exploration curse" caused by entropy collapse is lifted, the 32B model is able to explore more diverse and higher-quality policies.

Figure 12: Differences in the entropy dynamics of Qwen2.5-7B under varying clip ratios and KL coefficients, evaluated in the Clip-Cov (left) and KL-Cov (right) settings, respectively.

4.4 Get Policy Entropy Controlled

We also evaluate the capability of our methods to control policy entropy, as shown in Figure 12. For Clip-Cov, the level of policy entropy can be adjusted by tuning the ratio of clipped samples: more clipped samples result in higher entropy. For KL-Cov, we can modulate the entropy by controlling the KL coefficient $\beta$, i.e., the weight of the KL penalty; a larger coefficient brings higher entropy. Comparing the two, KL-Cov achieves stabler entropy curves than Clip-Cov, which might be preferable for stabilizing the training process.
Although the optimal value of entropy in different scenarios remains an open question, our methods demonstrate that policy entropy can be controlled simply by tuning hyperparameters, steering entropy and enabling the model to explore more effectively.

4.5 Discussion

Connection with clip-higher. Our main baseline, clip-higher (Yu et al., 2025), can also incentivize higher policy entropy; in fact, this technique shares similar functionality with our methods. By raising the upper threshold of the importance-sampling ratio, clip-higher includes more low-probability tokens in policy updates. Moreover, the upper threshold only affects tokens with positive advantages, which means clip-higher effectively adds more low-covariance tokens (low probability, high advantage, with an average covariance of about −0.03) to the gradient calculation. We take a step further by directly using the covariance as the threshold, thus controlling entropy more precisely.

The philosophy of entropy control. In our experiments, we find that policy entropy is sensitive to hyperparameter settings. Specifically, our methods interfere with only a very small fraction of tokens ($10^{-4}$ to $10^{-3}$), yet completely change the entropy curve. This suggests that a few "pivotal" tokens are crucial for the entropy of LLMs. We also do not observe a clear relationship between the intervened entropy and model performance; whether there exists an optimal entropy value that balances exploration and training stability remains open.

5 Related Work

Policy entropy in reinforcement learning. Rooted in information theory, entropy provides a principled mechanism to manage the exploitation-exploration tradeoff.
Entropy-regularized reinforcement learning, also referred to as maximum-entropy RL (Ziebart et al., 2008; Toussaint, 2009), adopts a regularization term in the reward to encourage high-entropy actions. This regularization term has been widely inherited in RL algorithms (Mnih et al., 2015, 2016; Schulman et al., 2017a,b; Haarnoja et al., 2017, 2018) and is viewed as a necessity. In RL for LLMs, by contrast, opinions differ on whether entropy regularization should be preserved (Ouyang et al., 2022; Shao et al., 2024; Hu et al., 2025; He et al., 2025). Our experiments indicate that it is necessary to control entropy, but that we can design better objectives than the entropy loss.

Predictability of reinforcement learning for reasoning language models. The first part of this work reveals the predictability of RL for LLM reasoning. The development of LLMs is largely guided by neural scaling laws, which bridge model performance with computational budget, model size, and the amount of training data (Hestness et al., 2017; Kaplan et al., 2020; Hoffmann et al., 2022): with scaling experiments on smaller models, the loss and task performance of larger models can be accurately predicted. In RL, Hilton et al. (2023) and Rybkin et al. (2025) studied the scaling behavior of policy performance versus compute on non-LLM models, but the predictability of RL for LLMs has yet to be investigated. Gao et al. (2022) proposed predicting reward scores from the KL divergence in RL on LLMs, which was used to model the overoptimization effect of a proxy reward model. This aligns with our conclusions, considering 1) that the verifiable reward eliminates the gap between a proxy reward model and the ground truth, and 2) the similarity between KL divergence and policy entropy.

Reinforcement learning for LLMs. Reinforcement learning has emerged as a major approach for LLM post-training (Ouyang et al., 2022; Meta, 2024; Team et al., 2023; Qwen et al., 2025; Jiang et al., 2023).
Recent works have achieved further breakthroughs in enhancing the reasoning capability of LLMs using RL with verifiable rewards (OpenAI, 2024a; Lambert et al., 2024; DeepSeek-AI et al., 2025; Team et al., 2025), drawing great attention in the research community (Cui et al., 2025; Liu et al., 2025; Hu et al., 2025; He et al., 2025). However, there is still no systematic study of the underlying mechanisms of RL for LLMs, which constitutes the primary goal of our work.

6 Conclusion

In this study, we address the challenge of policy entropy collapse in reinforcement learning for large language model reasoning. We empirically demonstrate that performance gains are often achieved by sacrificing exploratory capacity, which in turn imposes a foreseeable limit on improvement. To gain a deeper understanding, we conduct a theoretical investigation into entropy dynamics and introduce two simple regularization techniques, Clip-Cov and KL-Cov, which directly constrain high-covariance tokens and thereby counteract entropy collapse. Looking further, RL has been identified as the next scaling axis after pre-training, but scaling compute for RL requires more than entropy minimization. We hope this research provides valuable insights into the role of entropy, fostering RL to reach a higher level of
intelligence.

References

Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22(98):1–76, 2021.

Shivam Agarwal, Zimin Zhang, Lifan Yuan, Jiawei Han, and Hao Peng. The unreasonable effectiveness of entropy minimization in LLM reasoning. arXiv preprint arXiv:2505.15134, 2025.

Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. Back to basics: Revisiting REINFORCE-style optimization for learning from human feedback in LLMs. arXiv preprint arXiv:2402.14740, 2024.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou, and Ning Ding. Process reinforcement through implicit rewards, 2025. URL https://arxiv.org/abs/2502.01456.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Benjamin Eysenbach and Sergey Levine. Maximum entropy RL (provably) solves some robust RL problems. arXiv preprint arXiv:2103.06257, 2021.

Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, et al. Omni-MATH: A universal olympiad level mathematic benchmark for large language models. arXiv preprint arXiv:2410.07985, 2024.

Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, 2022.

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International Conference on Machine Learning, pp. 1352–1361. PMLR, 2017.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861–1870. PMLR, 2018.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, and Maosong Sun. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems.
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3828–3850, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.211. URL https://aclanthology.org/2024.acl-long.211/.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. Skywork open reaonser series. https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680, 2025. Notion Blog.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.

Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.

Jacob Hilton, Jie Tang, and John Schulman. Scaling laws for single-agent reinforcement learning. arXiv preprint arXiv:2301.13442, 2023.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

Jian Hu. REINFORCE++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-Reasoner-Zero: An open source approach to scaling up reinforcement learning on the base
https://arxiv.org/abs/2505.22617v1
model. arXiv preprint arXiv:2503.24290, 2025.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B, 2023. URL https://arxiv.org/abs/2310.06825.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.

Sham M Kakade. A natural policy gradient. Advances in Neural Information Processing Systems, 14, 2001.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

Wouter Kool, Herke van Hoof, and Max Welling. Buy 4 REINFORCE samples, get a baseline for free! In DeepRLStructPred@ICLR, 2019. URL https://api.semanticscholar.org/CorpusID:198489118.

Nathan Lambert, Jacob Daniel Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James Validad Miranda, Alisa Liu, Nouha Dziri, Xinxi Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, and Hanna Hajishirzi. Tülu 3: Pushing frontiers in open language model post-training. ArXiv, abs/2411.15124, 2024.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022.
Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Huang, Kashif Rasul, Longhui Yu, Albert Q Jiang, Ziju Shen, et al. NuminaMath: The largest public dataset in AI4Maths with 860k pairs of competition math problems and solutions. Hugging Face repository, 13:9, 2024.

Jiacai Liu. How does RL policy entropy converge during iteration? https://zhuanlan.zhihu.com/p/28476703733, 2025. URL https://zhuanlan.zhihu.com/p/28476703733.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding R1-Zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025.

Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Tianjun Zhang, Li Erran Li, et al. DeepScaleR: Surpassing o1-preview with a 1.5B model by scaling RL. Notion Blog, 2025.

Meta. The Llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.

Meta-Llama-3.2. Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. URL https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices.

MistralAI-NeMo. MistralAI-NeMo. URL https://mistral.ai/news/mistral-nemo.

MistralAI-Small-3. MistralAI-Small-3. URL https://mistral.ai/news/mistral-small-3.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937. PMLR, 2016.

OpenAI. OpenAI o1 system card. ArXiv, abs/2412.16720, 2024a.

OpenAI. GPT-4 technical report, 2024b. URL https://arxiv.org/abs/2303.08774.

Long Ouyang,
Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.

Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Snell, Pieter Abbeel, Sergey Levine, and Aviral Kumar. Value-based deep RL scales predictably. arXiv preprint arXiv:2502.04327, 2025.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897. PMLR, 2015.

John Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and soft Q-learning. arXiv preprint arXiv:1704.06440, 2017a.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017b.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.03300.

Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. HybridFlow: A flexible and efficient RLHF framework.
arXiv preprint arXiv:2409.19256, 2024.

David Silver and Richard S Sutton. Welcome to the era of experience. Google AI, 2025.

Richard Sutton. The bitter lesson. Incomplete Ideas (blog), 13(1):38, 2019.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.

Marc Toussaint. Robot trajectory optimization using approximate inference. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1049–1056, 2009.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.

Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991.

Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, and Radha Poovendran. KodCode: A diverse, challenging, and verifiable synthetic dataset for coding, 2025. URL https://arxiv.org/abs/2503.02951.

Jianhao Yan, Yafu Li, Zican Hu, Zhi Wang, Ganqu Cui, Xiaoye Qu, Yu Cheng, and Yue Zhang. Learning to reason under off-policy guidance. arXiv preprint arXiv:2504.14945, 2025.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan,
Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Honglin Yu, Weinan Dai, Yuxuan Song, Xiang Wei, Haodong Zhou, Jingjing Liu, Wei Ma, Ya-Qin Zhang, Lin Yan, Mu Qiao, Yong-Xu Wu, and Mingxuan Wang. DAPO: An open-source LLM reinforcement learning system at scale. 2025. URL https://api.semanticscholar.org/CorpusID:277104124.

Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. International Conference on Machine Learning, 2025.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? ArXiv, abs/2504.13837, 2025.

Huaye Zeng, Dongfu Jiang, Haozhe Wang, Ping Nie, Xiaotong Chen, and Wenhu Chen. AceCoder: Acing coder RL via automated test-case synthesis, 2025. URL https://arxiv.org/abs/2502.01718.

Qingyang Zhang, Haitao Wu, Changqing Zhang, Peilin Zhao, and Yatao Bian. Right question is already half the answer: Fully unsupervised LLM reasoning incentivization. arXiv preprint arXiv:2504.05812, 2025.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.

Yuxin Zuo, Kaiyan Zhang, Shang Qu, Li Sheng, Xuekai Zhu, Biqing Qi, Youbang Sun, Ganqu Cui, Ning Ding, and Bowen Zhou. TTRL: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025.
A Training Details for Different Models

Due to inherent differences in the initial reasoning abilities of different model families, we train models using data of different difficulty levels to stabilize the RL process. Specifically, for math tasks, we train the Qwen family and the Mistral-24B model using Eurus-2-RL-Math (Cui et al., 2025), while other model families are trained using GSM8K (Cobbe et al., 2021). The downstream performance is evaluated using MATH500 (Hendrycks et al., 2021), AIME 2024 (Li et al., 2024), AMC (Li et al., 2024), OlympiadBench (He et al., 2024), and OMNI-MATH (Gao et al., 2024). For code tasks, we train the Qwen family and the Mistral-24B model using AceCode (Zeng et al., 2025), Eurus-2-RL-Code (Cui et al., 2025), and KodCode3.

B More Fitting Results

In this section, we present more fitting experiment results.

C Fitting Results of Training with Different Datasets

Figure 13: Training Qwen2.5-7B with different data. (Validation accuracy (%) plotted against entropy, log scale; curves: DAPO-MATH, GSM8K, Eurus-RL-Data, Eurus-RL-Data-Difficult.)

D Fitting Results of Instruct Models

We also conduct fitting experiments on instruct models, and the fitting function remains valid in our experiments. We present the fitting results here.

Figure 14: Training Qwen2.5 instruct models on the math task. (Validation accuracy (%) plotted against entropy, log scale, for the Qwen2.5 model family: 1.5B-Instruct, 3B-Instruct, 7B-Instruct, with the fitted curve.)

3We process the data of the instruct and complete styles into a format that can be handled by unit tests. For the online-judge style, we removed this portion of the data as it was derived from instruct-style data.

E Proof

E.1 Useful Lemmas

Lemma 2 (Derivative of softmax function)

\[ \frac{\partial \log \pi_\theta(a \mid s)}{\partial \theta_{s,a'}} = \mathbb{1}\{a = a'\} - \pi_\theta(a' \mid s) \]

Lemma 3 (Expectation of Advantage
function given state s)

\[ \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}[A^{\pi_\theta}(s, a)] = \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}[Q^{\pi_\theta}(s, a) - V^{\pi_\theta}(s)] = \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}[Q(s, a)] - V(s) = V(s) - V(s) = 0 \]

E.2 Proof for Lemma 1

Lemma 1: Let the actor policy \(\pi_\theta\) be a tabular softmax policy. The difference of information entropy given state s between two consecutive steps satisfies

\[ \mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \approx -\mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\ z_{s,a}^{k+1} - z_{s,a}^{k} \right) \]

Proof adapted from (Liu, 2025). In a tabular softmax policy, each state-action pair (s, a) is associated with an individual logit parameter \(z_{s,a} = \theta_{s,a}\). We assume that we are updating the logits z via \(z^{k+1} = z^{k} + \eta \cdot \nabla J(\pi_\theta)\). Given that \(\eta\) is relatively small, leveraging Taylor's expansion under a first-order approximation, we have

\[ \mathcal{H}(\pi_\theta^{k+1} \mid s) \approx \mathcal{H}(\pi_\theta^{k} \mid s) + \langle \nabla \mathcal{H}(\pi_\theta^{k} \mid s),\ (z^{k+1} - z^{k}) \rangle \]

We then derive \(\nabla \mathcal{H}(\pi_\theta^{k} \mid s)\). According to the definition of \(\mathcal{H}\), we have

\begin{align*}
\nabla_\theta \mathcal{H}(\pi_\theta \mid s) &= \nabla_\theta \mathcal{H}(\pi_\theta(\cdot \mid s)) \\
&= \nabla_\theta \left( -\mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}[\log \pi_\theta(a \mid s)] \right) \\
&= -\mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\left[ \nabla_\theta \log \pi_\theta(a \mid s) + \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s) \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s) \right]
\end{align*}

Then we have

\begin{align*}
\langle \nabla_\theta \mathcal{H}(\theta^{k} \mid s),\ (z^{k+1} - z^{k}) \rangle
&= -\left\langle \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s) \right],\ (\theta^{k+1} - \theta^{k}) \right\rangle \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s)\, \langle \nabla_\theta \log \pi_\theta(a \mid s),\ \theta^{k+1} - \theta^{k} \rangle \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s) \sum_{a' \in \mathcal{A}} \frac{\partial \log \pi_\theta(a \mid s)}{\partial \theta_{s,a'}} \cdot (\theta^{k+1}_{s,a'} - \theta^{k}_{s,a'}) \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s) \sum_{a' \in \mathcal{A}} (\mathbb{1}\{a = a'\} - \pi(a' \mid s)) \cdot (\theta^{k+1}_{s,a'} - \theta^{k}_{s,a'}) \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s) \left[ (\theta^{k+1}_{s,a} - \theta^{k}_{s,a}) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s)(\theta^{k+1}_{s,a'} - \theta^{k}_{s,a'}) \right] \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s)(\theta^{k+1}_{s,a} - \theta^{k}_{s,a}) \right] + \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s) \cdot \mathbb{E}_{a' \sim \pi(\cdot \mid s)}\left[ \theta^{k+1}_{s,a'} - \theta^{k}_{s,a'} \right] \right] \\
&= -\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ \log \pi_\theta(a \mid s)(\theta^{k+1}_{s,a} - \theta^{k}_{s,a}) \right] + \mathbb{E}_{a \sim \pi(\cdot \mid s)}[\log \pi_\theta(a \mid s)] \cdot \mathbb{E}_{a' \sim \pi(\cdot \mid s)}\left[ \theta^{k+1}_{s,a'} - \theta^{k}_{s,a'} \right] \\
&= -\mathrm{Cov}_{a \sim \pi(\cdot \mid s)}\left( \log \pi(a \mid s),\ \theta^{k+1} - \theta^{k} \right) \\
&= -\mathrm{Cov}_{a \sim \pi(\cdot \mid s)}\left( \log \pi(a \mid s),\ z^{k+1} - z^{k} \right)
\end{align*}

E.3 Proof for Proposition 1

Proposition 1: Let the actor policy \(\pi_\theta\) be a tabular softmax policy updated using Eq. 2. The difference of \(z_{s,a}\) between two consecutive steps satisfies

\[ z^{k+1}_{s,a} - z^{k}_{s,a} = \eta \cdot \pi_\theta(a \mid s) \cdot A(s, a) \]

Proof. In a tabular softmax policy, each state-action pair (s, a) is associated with an individual logit parameter \(z_{s,a} = \theta_{s,a}\).
Through gradient backtracking, \(z_{s,a}\) is updated via \(z^{k+1}_{s,a} = z^{k}_{s,a} + \eta \cdot \nabla_{\theta_{s,a}} J(\theta)\); therefore, we have

\begin{align*}
z^{k+1}_{s,a} - z^{k}_{s,a} &= \eta \cdot \nabla_{\theta_{s,a}} J(\theta) \\
&= \eta \cdot \mathbb{E}_{a' \sim \pi_\theta(\cdot \mid s)}\left[ \nabla_{\theta_{s,a}} \log \pi_\theta(a' \mid s) \cdot A(s, a') \right] \\
&= \eta \cdot \mathbb{E}_{a' \sim \pi_\theta(\cdot \mid s)}\Big[ \underbrace{\frac{\partial \log \pi_\theta(a' \mid s)}{\partial \theta_{s,a}}}_{\text{Lemma 2}} \cdot A(s, a') \Big] \\
&= \eta \cdot \sum_{a' \in \mathcal{A}}\left[ \pi_\theta(a' \mid s) \cdot (\mathbb{1}\{a = a'\} - \pi_\theta(a \mid s)) \cdot A(s, a') \right] \\
&= \eta \cdot \pi_\theta(a \mid s) \cdot \left[ (1 - \pi_\theta(a \mid s)) \cdot A(s, a) - \sum_{a' \in \mathcal{A}, a' \neq a} \pi_\theta(a' \mid s) \cdot A(s, a') \right] \\
&= \eta \cdot \pi_\theta(a \mid s) \cdot \Big[ A(s, a) - \underbrace{\sum_{a' \in \mathcal{A}} \pi_\theta(a' \mid s) \cdot A(s, a')}_{\text{Lemma 3}} \Big] \\
&= \eta \cdot \pi_\theta(a \mid s) \cdot [A(s, a) - 0] \\
&= \eta \cdot \pi_\theta(a \mid s) \cdot A(s, a)
\end{align*}

E.4 Proof for Theorem 2

Theorem 2: Let the actor policy \(\pi_\theta\) be a tabular softmax policy updated via natural policy gradient (Kakade, 2001). The difference of information entropy given state s between two consecutive steps satisfies

\[ \mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \approx -\eta \cdot \mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\ A(s, a) \right) \]

Proof. According to Lemma 1, we first derive the difference of the logits z under natural policy gradient. We learn from (Agarwal et al., 2021) that, when updating the policy using natural policy gradient via gradient backtracking, \(z^{k+1}_{s,a} - z^{k}_{s,a}\) satisfies

\[ z^{k+1}_{s,a} - z^{k}_{s,a} = \eta \cdot A(s, a) \]

Applying this to Lemma 1, we have

\[ \mathcal{H}(\pi_\theta^{k+1} \mid s) - \mathcal{H}(\pi_\theta^{k} \mid s) \approx -\eta \cdot \mathrm{Cov}_{a \sim \pi_\theta^{k}(\cdot \mid s)}\left( \log \pi_\theta^{k}(a \mid s),\ A(s, a) \right) \]
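As a sanity check on Proposition 1 and Theorem 2, the following small numerical experiment (our illustration, not part of the paper: a single state, six actions, randomly drawn logits and advantages) verifies the vanilla policy-gradient logit update and the first-order entropy-change formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                  # |A|: number of actions for one state
z = rng.normal(size=n)                 # tabular logits z_{s,a}
eta = 1e-4                             # small step size (first-order regime)

def softmax(logits):
    p = np.exp(logits - logits.max())
    return p / p.sum()

pi = softmax(z)
A = rng.normal(size=n)
A -= np.sum(pi * A)                    # enforce E_{a~pi}[A(s,a)] = 0 (Lemma 3)

# Proposition 1: the policy gradient w.r.t. the logit z_{s,a} equals
# pi(a|s) * A(s,a). We check against central finite differences of the
# surrogate J(z) = sum_a pi_z(a|s) * A(s,a), whose gradient is the PG.
def J(logits):
    return np.sum(softmax(logits) * A)

h = 1e-6
num_grad = np.array([(J(z + h * np.eye(n)[i]) - J(z - h * np.eye(n)[i])) / (2 * h)
                     for i in range(n)])
assert np.allclose(num_grad, pi * A, atol=1e-6)

# Theorem 2: under the natural-PG logit update z' = z + eta * A, the entropy
# change is approximately -eta * Cov_{a~pi}(log pi(a|s), A(s,a)).
def entropy(p):
    return -np.sum(p * np.log(p))

H_old, H_new = entropy(pi), entropy(softmax(z + eta * A))
logp = np.log(pi)
cov = np.sum(pi * (logp - np.sum(pi * logp)) * A)  # Cov(log pi, A); E_pi[A] = 0
assert abs((H_new - H_old) + eta * cov) < 1e-6
```

The entropy check passes with a tolerance far below the entropy change itself because the neglected terms in the Taylor expansion are of order \(\eta^2\).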
arXiv:2505.22626v1 [cs.RO] 28 May 2025

SCIZOR: A Self-Supervised Approach to Data Curation for Large-Scale Imitation Learning

Yu Zhang1∗, Yuqi Xie1,2∗, Huihan Liu1†, Rutav Shah1†, Michael Wan‡2, Linxi "Jim" Fan2, Yuke Zhu1,2
1The University of Texas at Austin 2NVIDIA Research

Abstract: Imitation learning advances robot capabilities by enabling the acquisition of diverse behaviors from human demonstrations. However, large-scale datasets used for policy training often introduce substantial variability in quality, which can negatively impact performance. As a result, automatically curating datasets by filtering low-quality samples to improve quality becomes essential. Existing robotic curation approaches rely on costly manual annotations and perform curation at a coarse granularity, such as the dataset or trajectory level, failing to account for the quality of individual state-action pairs. To address this, we introduce SCIZOR, a self-supervised data curation framework that filters out low-quality state-action pairs to improve the performance of imitation learning policies. SCIZOR targets two complementary sources of low-quality data: suboptimal data, which hinders learning with undesirable actions, and redundant data, which dilutes training with repetitive patterns. For suboptimal data, SCIZOR leverages a self-supervised task progress predictor to remove samples lacking task progression; for samples with redundant patterns, it applies a deduplication module operating on a joint state-action representation. Empirically, we show that SCIZOR enables imitation learning policies to achieve higher performance with less data, yielding an average improvement of 15.4% across multiple benchmarks.
More information is available at: https://ut-austin-rpl.github.io/SCIZOR/

Keywords: Imitation Learning, Data Curation, Robot Foundation Models

1 Introduction

Imitation learning has shown promising signs in acquiring a wide range of motor behaviors by learning from expert demonstrations, a necessary step towards general-purpose robots. The success of imitation learning hinges on extensive datasets collected across a wide range of tasks and environments, often by different demonstrators. Such diverse, large-scale data collection inherently introduces variability in data quality [1, 2, 3], including mistakes made by operators leading to suboptimal actions (e.g., dropping an object), or redundancy in data leading to skewed distributions. Such a dataset can misguide models into learning incorrect behaviors [1, 4] and hinder diversity [3], reducing the impact of rare but informative actions. Therefore, effective data curation — the process of filtering data to improve the data quality [5, 6] — becomes critical for building robust and high-performing imitation learning policies.

Early efforts in robotic data curation have relied heavily on human annotations to label high- and low-quality data [1], but these methods have largely been confined to small-scale datasets. As data scales up, manual annotation becomes infeasible, making it important to automatically curate data, an approach that has already shown promise in fields like computer vision (CV) and natural language processing (NLP) [7, 8, 9, 10]. Specifically in robot learning, a key challenge in this process

*,†Equal contribution. ‡This work was done while Michael Wan was interning at NVIDIA.

Figure 1: SCIZOR overview. Each trajectory from the original robotic datasets is simultaneously passed through the suboptimal transitions
removal module and the redundant transitions removal module. Each module removes data based on its own threshold, resulting in a curated dataset. A policy trained on the curated dataset achieves a higher success rate.

to maximize data utilization is the need for curating at the finest granularity: we must evaluate the quality of individual state-action pairs. For instance, a trajectory may include an initial failed grasp attempt followed by a successful recovery, containing both suboptimal and valuable segments. Effective curation should isolate and remove only the uninformative or erroneous segments, like the failed grasp, while preserving segments that provide useful learning signals. The differing impact of individual data points on learning has also been underscored in prior work, such as weighted behavior cloning [4, 11, 12]. However, current large-scale imitation learning curation methods have yet to address data quality at the transition level. Existing approaches typically curate by reweighting entire dataset domains [13, 5] or discarding entire trajectories [6, 14], overlooking the contribution and quality of individual state-action pairs.

Effectively curating individual state-action pairs is challenging, as robot demonstrations typically lack dense reward annotations, making it difficult to assess the quality of every interaction step. To address this, we adopt a self-supervised approach for filtering low-quality state-action pairs, offering a scalable solution for improving data quality in imitation learning. Our work is motivated by two key observations: (1) suboptimal transitions, which contain undesirable actions like collision, jittering, and other erroneous actions, can degrade policy performance by reinforcing incorrect behaviors; and (2) redundant transitions, which repeat common patterns excessively, can dilute the learning signal by dominating other informative and diverse samples.
For effective large-scale imitation learning, we introduce SCIZOR, a self-supervised data curation method that reduces dataset size by filtering out suboptimal and redundant state-action pairs. First, to identify suboptimal data without access to reward information, we train a self-supervised task progress predictor using temporal distance classification [15, 16, 17], and remove frames that do not demonstrate meaningful progress toward the task goal. Second, to remove redundant data, a key insight is that some segments may appear visually similar while differing substantially in the executed actions. Therefore, both visual observations and their corresponding actions must be considered for effective deduplication. To this end, we apply deduplication [7] using joint representations of state and action to identify and filter redundant state-action pairs. We then filter out frames based on similarity scores to reduce repetition while preserving dataset diversity. The suboptimal frame filter targets harmful or noisy supervision, while the redundancy filter removes overrepresented patterns. Together, the two deletion strategies complement each other by targeting distinct modes of low-quality data. In summary, our key contributions are as follows:

• We propose a unified framework for data curation that filters both suboptimal and redundant state-action pairs in large-scale imitation learning datasets.

• We introduce a suboptimality detector based on self-supervised task progress estimation, and a deduplication module that removes repetitive data to preserve data diversity.

• We empirically demonstrate that SCIZOR improves policy
performance across diverse large-scale imitation learning benchmarks, showing on average 15.4% improvement.

2 Related Work

Imitation Learning on Large-Scale Robot Datasets. Imitation learning has been a popular approach to learning robot policies from human demonstrations [18, 19, 1, 20]. To scale up robot policy generalization and enable diverse behaviors, there has recently been progress in large-scale multi-task imitation learning [21, 22, 13, 23, 24, 25] trained on robot trajectory data from a wide variety of tasks. This progress is driven not only by advances in policy architectures [21, 1, 26, 27], but more importantly by the collection of large-scale datasets in both the real world [28, 29, 30, 31] and simulation [32, 33]. These datasets are often collected from multiple institutions using varied hardware configurations and teleoperation systems [34, 35, 36, 37, 38, 39], resulting in inconsistencies in quality and redundancy across different datasets. Although robotics datasets have been scaled to unprecedented sizes, the study of dataset quality and data curation methods remains preliminary.

Data Curation in Vision and Language Models. Data curation, which is the selection and filtering of data for better training results, has been extensively studied in both computer vision and language modeling to address the challenges posed by large-scale, heterogeneous datasets [8]. In vision, LAION-5B [9] uses pretrained encoders like CLIP to assign quality scores to samples. In language modeling, data mixture strategies like DoReMi [10] balance various data sources for distribution robustness, while deduplication methods like SemDeDup [7] remove near-duplicates using semantic embeddings. Data Filtering Networks [40] trains a neural network to distinguish informative versus less-informative data, while Ask-LLM [41] uses instruction-tuned LLMs to assess the quality of training examples directly.
Meanwhile, Less-Is-More-RL [42] shows how pruning suboptimal data can improve downstream policy performance in reinforcement learning settings.

Data Curation for Robotics. Data quality has been known to affect policy learning performance in robotics [1, 43]. There have been studies on improving human demonstration quality, albeit in small-scale tasks, by automatic ranking [2] or by eliciting compatible behavior from humans during the data collection process [44]. As progress in general-purpose, large-scale robot learning continues, there has been growing interest in curating large-scale datasets for robot learning [13, 23, 5, 6, 14]. Octo [13] and OpenVLA [23] perform ad-hoc dataset-level curation by heuristically tuning a set of weights for data mixtures, balancing the dataset composition; Re-Mix [5] automates this dataset-level curation with distributionally robust optimization. DemInf [6] performs trajectory-level curation with mutual information as a trajectory quality estimator, and Demo-SCORE [14] also performs trajectory-level curation, but relies on online rollout performance.

3 Self-Supervised Data Curation for Large-Scale Imitation Learning

We introduce our data curation framework, SCIZOR, which performs fine-grained filtering of low-quality data in a self-supervised manner to improve imitation learning policy performance. We begin by introducing key formulations and background, followed by two core components of our method: (1) a self-supervised suboptimal transitions removal module and (2) a similarity-based state-action deduplication module that filters redundant transitions.

3.1 Preliminaries and Formulations

We formulate a robot manipulation task as a Markov
Decision Process \(\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, p_0, \gamma)\), representing the state space, action space, reward function, transition probability, initial state distribution, and discount factor. Given the current state \(s_t \in \mathcal{S}\), the robot action \(a_t \in \mathcal{A}\) is drawn from the policy \(\pi(\cdot \mid s_t)\). The objective of imitation learning is to learn a policy \(\pi\)

Figure 2: SCIZOR's architecture. We apply two curation modules: (1) Suboptimal transition removal, where we estimate chunk progress from its first and last frames and discard those below a threshold; (2) State-action deduplication, where we encode all frames, cluster their features via K-means, and remove frames whose intra-cluster cosine similarity exceeds a threshold.

parameterized by \(\theta\) that maximizes the log-likelihood of actions a conditioned on the states s: \(\theta^* = \arg\max_\theta \mathbb{E}_{(s,a) \sim \mathcal{D}_{\text{expert}}}[\log \pi_\theta(a \mid s)]\), where (s, a) are samples from the human demonstration dataset \(\mathcal{D}_{\text{expert}}\). Our data curation objective is to refine \(\mathcal{D}_{\text{expert}}\) by filtering out suboptimal or redundant samples to improve downstream policy performance. This is achieved by assigning a quality score to each sample and excluding those below a threshold.

3.2 Suboptimal Transition Removal via Progress Estimation

Human demonstrations often contain both proficient and suboptimal segments in the same trajectory with no explicit signals. Manually labeling suboptimal segments is labor-intensive and not scalable.
We propose a self-supervised approach to detect suboptimal behaviors based on the intuition that progress toward task completion should increase steadily over time. By training a model to predict task progress between pairs of observations, we learn a proxy for how much progress the agent makes over a given interval. If the predicted progress for a segment is unexpectedly lower than the expected progress, it can serve as a signal of suboptimality. This allows us to automatically identify and filter out segments that deviate from making progress, without requiring any manual annotations.

Defining Suboptimality with Task Progress. Inspired by temporal distance classification in self-supervised representation learning [15], we evaluate action quality by estimating the extent of task progress between two timesteps, i and i+T. Specifically, we define a progress function \(f: S_{i,i+T} \to T_p\), which takes as input a sub-trajectory \(S_{i,i+T}\) from timestep i to i+T and predicts the progress \(T_p\) made over the sub-trajectory \(S_{i,i+T}\) towards completion. Intuitively, \(T_p\) measures the temporal distance by which the robot has moved the task forward over the sub-trajectory \(S_{i,i+T}\), measured in seconds. We then compare this predicted progress \(T_p\) to the actual elapsed time T. If the predicted progress is significantly lower than the elapsed time, the robot is behind schedule (i.e., progressing more slowly than expected), meaning that the sub-trajectory is suboptimal. The suboptimality score for the sub-trajectory \(S_{i,i+T}\) is defined as \(V_{i,i+T} = T - T_p\).

Predicting Task Progress. Rather than regressing a real-valued progress estimate, we cast progress prediction as classification over discrete temporal bins, which is empirically more robust [15]. We discretize the temporal gap
into B bins, where each bin is a time interval in seconds. To predict the task progress for a sub-trajectory \(S_{i,i+T}\), we train a task progress classifier to classify the bin corresponding to the time T between the start and end states. Empirically, we set B = 5, sample each sub-trajectory as 2 seconds, and set the bins to [0, 0.5), [0.5, 1.0), [1.0, 2.0), [2.0, 5.0), and [5.0, +∞).

Assigning Suboptimal Scores to Individual Samples. Although we first compute a suboptimality score at the sub-trajectory level, our ultimate goal is to assign a suboptimality score to every individual transition to enable more fine-grained data filtering. Therefore, we aim to assign a sample-level suboptimal score \(V_i\) to each transition at time i, based on the sub-trajectory-level suboptimal scores. We begin by computing sub-trajectory-level suboptimal scores \(V_{0,T}, V_{1,1+T}, \ldots, V_{N,N+T}\). Assuming that every sample within a sub-trajectory has equal probability of being suboptimal, each sub-trajectory score should be evenly distributed across its T samples; a sub-trajectory \(S_{i,i+T}\) with score \(V_{i,i+T}\) contributes \(\frac{1}{T} V_{i,i+T}\) to each transition within it. Therefore, the aggregated sample-level suboptimality score is \(\hat{V}_i = \sum_{t=i-T}^{i} \frac{1}{T} V_{t,t+T}\). Further, we apply temporal discounting, which gives \(V_i = \sum_{t=0}^{i} \gamma^{i-t} \hat{V}_t\), where \(\gamma \in [0, 1]\) is the discount factor. This captures the influence of past actions on the current state, while prioritizing more recent behavior. Lastly, we observe that the overall quality of a trajectory can shape the quality of each action; for example, a human operator who is consistently expert-level or error-prone affects the quality of every action in the demonstrations they collect. We therefore want each transition's score to reflect not only its local quality but also the overall quality of the trajectory it belongs to.
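As an illustration of this aggregation, the following sketch computes chunk scores, per-sample scores with temporal discounting, and the final transition/trajectory mixture (the chunk length, discount value, quantile-based cutoff, and progress numbers are ours, chosen only for the example; the paper measures progress in seconds and uses a fixed threshold):

```python
import numpy as np

# Illustrative sketch of the suboptimality scoring pipeline, at per-chunk
# granularity for brevity: chunk scores V_{i,i+T} = T - T_p, spread over
# samples, discounted, then mixed with the trajectory mean.
T, gamma = 4, 0.9
predicted_progress = np.array([4.2, 4.1, 1.0, 0.8, 4.3, 4.2])  # invented T_p
V_chunk = T - predicted_progress               # V_{i,i+T} = T - T_p
n = len(V_chunk)

# hat_V_i = sum_{t=i-T}^{i} (1/T) * V_{t,t+T}  (clipped at the boundary)
hat_V = np.array([sum(V_chunk[t] / T for t in range(max(0, i - T), i + 1))
                  for i in range(n)])

# V_i = sum_{t=0}^{i} gamma^(i-t) * hat_V_t    (temporal discounting)
V = np.array([sum(gamma ** (i - t) * hat_V[t] for t in range(i + 1))
              for i in range(n)])

# Mixture of transition and trajectory scores: 0.5 * mean(V) + 0.5 * V_i
final = 0.5 * V.mean() + 0.5 * V

# Curation: drop the highest-scoring (most suboptimal) transitions. A
# quantile cutoff is used here purely for illustration.
keep = final <= np.quantile(final, 0.8)
```

With these numbers, the stalled segment (chunks with low predicted progress) accumulates the largest scores and is the part that gets dropped.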
To compute the final curation score for each transition, we use a weighted combination of its suboptimality score with the mean score of the transitions across all timesteps, \(\frac{1}{N}\sum_{j=1}^{N} V_j\). We refer to this combination as the mixture of transition and trajectory scores. Specifically, the final score is computed as \(0.5 \times \frac{1}{N}\sum_{j=1}^{N} V_j + 0.5 \times V_i\).

Removing Suboptimal Samples. During policy training, we compute the suboptimality score for every sample as described above. During data curation, we define a suboptimal threshold \(\epsilon_s\) and exclude transitions with suboptimality scores above this threshold from the training process. Note that each sample here preserves an observation history and an action sequence for algorithms that are history-dependent (e.g., BC-Transformer) and that utilize action chunking.

3.3 Similarity-Based State-Action Deduplication

Large-scale imitation learning datasets often include many visually and behaviorally similar sequences, for example, repeated demonstrations of the same skill in nearly identical contexts. Training directly on all such data can hinder policy generalization by overemphasizing common patterns while underrepresenting rare but informative cases. To mitigate this, we introduce a similarity-based deduplication method that filters out redundant data. A key insight is that some segments may appear visually similar, yet differ in task intent or executed actions. To avoid discarding meaningful variations, effective deduplication must consider both the visual states and actions. To this end, we propose a similarity-based deduplication method that utilizes joint representations of visual states and
https://arxiv.org/abs/2505.22626v1
actions to identify and filter redundant state-action pairs.

Defining State-Action Duplicates. Prior work on semantic deduplication [7] has focused on curating large image datasets by removing semantically similar data pairs based solely on visual features. However, such visual-only deduplication methods are not well suited for sequential decision-making tasks like imitation learning in robotics, where action dynamics play a crucial role. In this work, we extend the idea of semantic deduplication to the imitation learning domain by incorporating both visual states and action information. Specifically, we define state-action duplicates as state-action chunks (S_{i,i+T}, a_{i,i+T}) that are visually similar and lead to comparable actions, reflecting redundant patterns that contribute little to learning diversity.

Generating State-Action Features. We first divide the dataset into non-overlapping sub-trajectories, each consisting of a state-action sequence (S_{i,i+T}, a_{i,i+T}), where each chunk spans a fixed duration T. Given the variations in recording frequency across datasets, we uniformly subsample N = 8 RGB images from each chunk for consistency. As raw visual data is high-dimensional and not directly suitable for similarity computation, we employ the Cosmos video encoder [45], a pre-trained model that encodes both temporal and semantic information from videos, to extract a compact 1D video feature vector z_v. We then concatenate the actions to the visual embedding to form a joint state-action feature z_{v+a}.

Removing Duplicated Samples. We begin by performing K-means clustering to group semantically similar state-action chunks. Within each cluster, we compute pairwise cosine distances among all chunks. For each chunk, its similarity score is defined via its minimum cosine distance to any other chunk in the same cluster (equivalently, its maximum cosine similarity).
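For illustration, the within-cluster scoring step might look like the following NumPy sketch (our own hypothetical code: cluster labels are assumed to come from a K-means run, and the feature rows stand in for the Cosmos-derived z_{v+a} vectors):

```python
import numpy as np

def duplicate_scores(z: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """z: one joint state-action feature per row; labels: cluster id per row.
    Returns each chunk's maximum cosine similarity to any other chunk in its
    cluster (equivalently, 1 minus its minimum cosine distance)."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    scores = np.full(len(z), -1.0)  # singleton clusters have no neighbor
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue
        sim = zn[idx] @ zn[idx].T        # pairwise cosine similarities
        np.fill_diagonal(sim, -np.inf)   # exclude self-similarity
        scores[idx] = sim.max(axis=1)
    return scores
```

Chunks whose score exceeds the deduplication threshold (ϵ_d = 0.99 in Section 3.4) would then be masked out during training.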
We identify as duplicates those chunks whose maximum similarity exceeds a defined threshold ϵ_d, as they are highly similar to at least one other sample in their cluster. These chunks are filtered out during policy training with a duplication mask.

3.4 Unified Threshold Across Datasets

When curating data, we remove all samples with a score above either ϵ_s or ϵ_d. To generalize SCIZOR to different datasets, we need to find a unified threshold. We run SCIZOR on the RoboMimic and OXE Magic datasets under different curation thresholds and find that ϵ_s = 0.58 and ϵ_d = 0.99 yield the best performance on both datasets. We then adopt this unified threshold for all experiments. For more details, please refer to Appendix A.

4 Experiments

In our experiments, we aim to address the following questions: 1) How much does SCIZOR improve imitation learning policy performance? 2) What advantage does SCIZOR's fine-grained state-action curation offer over trajectory- or dataset-level curation in prior work? 3) What design components contribute most to SCIZOR? 4) What types of low-quality samples can SCIZOR identify and remove?

4.1 Experimental Setup

Datasets and Training Details. We evaluate our method on three robotic benchmarks for imitation learning, chosen to represent a range of real-world scenarios: a large-scale crowdsourced dataset, a dataset featuring varying levels of human expertise, and a human-in-the-loop dataset with mixed data distributions. This selection enables us to evaluate SCIZOR's effectiveness across various scenarios and diverse data regimes. For full dataset details,
see Appendix B.1.

• Open-X-Embodiment (OXE) [29]: A large-scale collection of over one million real-world robotic trajectories. We use the Simpler environment [46] and benchmark on two tasks: Pick Can and Move Near. We train the Octo model [13] with two random seeds and use the same "Magic Soup" weighting. This setting evaluates SCIZOR's scalability to large and diverse datasets.

• RoboMimic [1]: A dataset and benchmark containing human-collected trajectories of varying proficiency. We use the simulated Multi-Human dataset for the Can and Square tasks to be consistent with the baseline comparison. We train the BC policy provided in the benchmark with three random seeds. This setting evaluates SCIZOR's ability to curate demonstrations of mixed quality.

• Sirius-Fleet [47]: A real-world multi-task dataset comprising 1,500 policy rollouts with human interventions. Our real-world evaluation spans four task sets comprising eight tasks. We train the BC-Transformer policy used in the paper with three random seeds. This setting evaluates SCIZOR's ability to curate mixed data from both autonomous policies and human corrections.

Baselines. We benchmark SCIZOR against 3 baselines, each highlighting a different aspect of data curation. We compare with Uniform to show the effectiveness of SCIZOR, and with DemoInf and Re-Mix to show that fine-grained curation offers advantages over coarser filtering strategies.

• Uniform: A baseline that uniformly deletes the same percentage of data as other methods to control for dataset size. This comparison ensures that the improvement observed with SCIZOR is attributed to which specific samples are removed, not simply to the reduced dataset size itself.
• DemoInf [6]: A trajectory-level method that estimates mutual information between states and actions for each trajectory as a quality score and removes low-quality trajectories with insufficient contribution. We apply it to the RoboMimic and the Sirius-Fleet dataset, following the same dataset settings as in the original paper. We delete the same percentage of data as other methods.

• Re-Mix [5]: A dataset-level method that learns data mixture weights for the "RT-X" variant of the OXE datasets. To ensure consistency, we train the Octo-small model on OXE RT-X for SCIZOR, while directly adopting their learned weights for Re-Mix.

Figure 3: Performance comparison across different datasets. We use the unified threshold for SCIZOR and report success rates on 4 datasets. We found that SCIZOR achieves the strongest performance and outperforms the baselines.

4.2 Experimental Results

RQ1: How much does SCIZOR improve imitation learning policy performance? Figure 3 summarizes SCIZOR's impact on policy success rates across all three benchmarks. Compared to training on the full dataset, SCIZOR delivers absolute gains of 5.4% on RoboMimic, 8.1% on OXE Magic, and 32.9% on the Sirius-Fleet real-robot tasks. It also surpasses uniform curation by 16.1% on average, indicating that SCIZOR makes a targeted selection of the samples to be deleted. These improvements demonstrate that SCIZOR's data curation consistently filters out low-quality samples and improves policy learning in both simulated and real-world robotic
environments.

RQ2: What advantage does SCIZOR's fine-grained state-action curation offer over trajectory- or dataset-level curation in prior work? To validate the effectiveness of fine-grained curation on state-action pairs, we compare SCIZOR with two baseline methods: a trajectory-level curation method, DemInf [6], and a dataset-level curation method, Re-Mix [10]. DemInf estimates the average contribution of a trajectory towards the mutual information between states and actions in the entire dataset. Re-Mix treats each subset of data as a different "domain" and uses a distributionally robust optimization technique to assign weights to sub-datasets. To ensure a fair comparison, we apply SCIZOR to the same RT-X mixture setting used by Re-Mix. As shown in Figure 3, SCIZOR outperforms Re-Mix by 3.5% on average. On the RoboMimic dataset, SCIZOR does not outperform DemInf, as the dataset is explicitly divided into three levels of trajectory quality, making trajectory-level filtering particularly effective. In contrast, SCIZOR significantly outperforms DemInf by 19.2% on the Sirius-Fleet dataset, where the mixed sources of policy and human actions result in an uneven data quality distribution. This suggests that fine-grained state-action curation may be especially beneficial for datasets with complex and uneven quality distributions.

Table 1: Ablation studies: Performance comparison across three datasets (RoboMimic, Sirius-Fleet, and OXE). Our approach consistently outperforms partial ablations, highlighting the importance of combining both components.

                          RoboMimic    Sirius-Fleet   OXE Magic
Suboptimal-Removal Only   60.9 ± 1.8   64.2 ± 2.6     25.3 ± 2.9
Deduplication Only        48.3 ± 0.8   63.3 ± 6.9     22.1 ± 0.9
SCIZOR (Ours)             62.3 ± 1.6   79.6 ± 1.4     28.1 ± 3.3

RQ3: What design components contribute most to SCIZOR? We first ablate the suboptimal data removal and the deduplication in Table 1.
We run the experiments removing only suboptimal data and only duplicated data, and remove the same amount of data in each dataset as SCIZOR. We find that both suboptimal removal and deduplication individually lead to improvements over the baseline, but neither alone is sufficient to match the full performance of SCIZOR. Suboptimal removal is generally more effective than deduplication, but combining both components leads to the largest gains across all datasets.

Table 2: Variations of SCIZOR's suboptimal data strategies: We evaluate different scoring strategies for suboptimal data removal: (i) without the mixture of transition and trajectory scores, (ii) without temporal discounting, and (iii) the full proposed method (Ours). Results are reported across four tasks, showing that the full version consistently outperforms the alternatives.

                       RoboMimic Can   RoboMimic Square   OXE RT-1 Pick   OXE RT-1 Move
SCIZOR w/o mixture     81.3 ± 0.6      36.0 ± 1.4         21.8 ± 7.9      12.4 ± 4.6
SCIZOR w/o discount    79.6 ± 1.4      31.5 ± 5.5         20.7 ± 6.4      9.4 ± 1.4
SCIZOR (Ours)          87.3 ± 0.7      37.2 ± 2.5         30.9 ± 8.4      17.5 ± 1.0

We further investigate SCIZOR's scoring strategy for the suboptimal data classifier in Table 2 by ablating two key components: (i) the transition-trajectory score mixture and (ii) the temporal discounting discussed in Section 3.2. We train Octo on the OXE "RT-1" variant [21] with three
seeds for faster iteration. Omitting either component consistently degrades performance across all four tasks, highlighting their importance. Temporal discounting lets SCIZOR propagate evidence of suboptimality backward in time, so that transitions leading to poorer future states can also be identified in addition to directly poor actions. The mixture of transition-level and trajectory-level scores balances these fine-grained penalties with an overall assessment of each demonstration's quality, making it easier to filter out inherently low-quality data (for example, trajectories recorded by non-expert operators). Together, these mechanisms yield the strongest gains in suboptimal data removal.

Figure 4: Breakdown of suboptimal types classified by SCIZOR. The three dominant failure modes predicted by SCIZOR's suboptimal classifier are Slow Motion, Manipulation Failure, and Pause, showing SCIZOR removes semantically meaningful transitions.

RQ4: What types of low-quality samples can SCIZOR identify and curate? To qualitatively analyze the suboptimal data identified, we examine the flagged low-quality data and investigate the types of low-quality behavior they represent. From each of the RoboMimic and Sirius-Fleet datasets, we randomly select 100 demonstrations flagged with at least one suboptimal segment. We then manually visualize and classify every suboptimal segment across these trials to generate the pie chart. Figure 4 illustrates the distribution of suboptimal transitions identified by SCIZOR. Manipulation Failure refers to errors during grasping, e.g., failed grasps or accidental drops of objects. Pause denotes transitions with very little or no movement. Stuck at Collision describes cases where the gripper or held object collides, leading to a halt.
Slow captures motion that proceeds noticeably below the normal task speed. Move Back and Forth indicates aimless motions that do not contribute to task progress. False Positive labels misclassified or ambiguous transitions. The most significant fractions are Slow Motion, Manipulation Failure, and Pause, indicating that the task-progress classifier identifies semantically meaningful errors rather than spurious noise. Appendix B.4 visualizes representative examples.

5 Conclusion

We introduce SCIZOR, a self-supervised data curation method that filters suboptimal and redundant state-action pairs to improve imitation learning performance. It combines a task progress predictor to remove suboptimal frames with a similarity-based deduplication module to eliminate over-represented patterns. By curating the dataset, SCIZOR consistently enhances policy performance across diverse imitation learning benchmarks and outperforms other data curation approaches on large datasets. Future work could explore more adaptive thresholding strategies to achieve optimal deletion ratios and improve the representation of state-action pairs for better curation performance.

6 Limitations

While SCIZOR improves policy success in imitation learning, it has several limitations, which we discuss in detail below:

Curation Threshold: The deletion threshold in SCIZOR is currently determined empirically. Future work could derive a theoretical framework to identify this threshold more systematically.

Deduplication Representation: SCIZOR's deduplication module currently concatenates action and state features. While it performs well in our experiments, future work could explore more expressive or learned representations [48,
49] that better integrate the action and state spaces.

Dependence on Demonstration Quality: SCIZOR assumes that most demonstrations within a trajectory are of good quality, as we rely on self-supervised learning to learn from the majority of the data. If poor-quality demonstrations dominate, the method may become less effective. Future work could focus on better leveraging low-quality data by identifying and utilizing useful segments.

Linear Task Progress Assumption: SCIZOR assumes linear task progression without slow or repetitive behaviors. However, real-world tasks, like stirring food repeatedly or waiting for it to cook, often involve such behaviors. Future work could adapt the method to better handle these behaviors.

References

[1] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y. Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation, 2021. URL https://arxiv.org/abs/2108.03298.

[2] D. S. Brown, W. Goo, and S. Niekum. Better-than-demonstrator imitation learning via automatically-ranked demonstrations, 2019. URL https://arxiv.org/abs/1907.03976.

[3] F. Lin, Y. Hu, P. Sheng, C. Wen, J. You, and Y. Gao. Data scaling laws in imitation learning for robotic manipulation, 2025. URL https://arxiv.org/abs/2410.18647.

[4] H. Liu, S. Nasiriany, L. Zhang, Z. Bao, and Y. Zhu. Robot learning on the job: Human-in-the-loop autonomy and learning during deployment. In Robotics: Science and Systems (RSS), 2023.

[5] J. Hejna, C. Bhateja, Y. Jian, K. Pertsch, and D. Sadigh. Re-Mix: Optimizing data mixtures for large scale imitation learning. arXiv preprint arXiv:2408.14037, 2024.

[6] J. Hejna, S. Mirchandani, A. Balakrishna, A. Xie, A. Wahid, J. Tompson, P. Sanketi, D. Shah, C. Devin, and D. Sadigh. Robot data curation with mutual information estimators. 2025. URL https://arxiv.org/abs/2502.08623.

[7] A. Abbas, K. Tirumala, D. Simig, S. Ganguli, and A. S. Morcos.
SemDeDup: Data-efficient learning at web-scale through semantic deduplication. arXiv preprint arXiv:2303.09540, 2023.

[8] A. Albalak, Y. Elazar, S. M. Xie, S. Longpre, N. Lambert, X. Wang, N. Muennighoff, B. Hou, L. Pan, H. Jeong, C. Raffel, S. Chang, T. Hashimoto, and W. Y. Wang. A survey on data selection for language models, 2024. URL https://arxiv.org/abs/2402.16827.

[9] C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman, M. Cherti, T. Coombes, A. Katta, C. Mullis, M. Wortsman, P. Schramowski, S. Kundurthy, K. Crowson, L. Schmidt, R. Kaczmarczyk, and J. Jitsev. LAION-5B: An open large-scale dataset for training next-generation image-text models. ArXiv, abs/2210.08402, 2022. URL https://api.semanticscholar.org/CorpusID:252917726.

[10] S. M. Xie, H. Pham, X. Dong, N. Du, H. Liu, Y. Lu, P. Liang, Q. V. Le, T. Ma, and A. W. Yu. DoReMi: Optimizing data mixtures speeds up language model pretraining. ArXiv, abs/2305.10429, 2023. URL https://api.semanticscholar.org/CorpusID:258741043.

[11] Z. Wang, A. Novikov, K. Zolna, J. T. Springenberg, S. Reed, B. Shahriari, N. Siegel, J. Merel, C. Gulcehre, N. Heess, and N. de Freitas. Critic regularized regression. Volume 33, pages 7768–7778, 2020.

[12] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit Q-learning. In ICLR, 2021.

[13]
O. M. Team, D. Ghosh, H. Walke, K. Pertsch, K. Black, O. Mees, S. Dasari, J. Hejna, T. Kreiman, C. Xu, J. Luo, Y. L. Tan, L. Y. Chen, P. Sanketi, Q. Vuong, T. Xiao, D. Sadigh, C. Finn, and S. Levine. Octo: An open-source generalist robot policy, 2024. URL https://arxiv.org/abs/2405.12213.

[14] A. S. Chen, A. M. Lessing, Y. Liu, and C. Finn. Curating demonstrations using online experience, 2025. URL https://arxiv.org/abs/2503.03707.

[15] Y. Aytar, T. Pfaff, D. Budden, T. Paine, Z. Wang, and N. De Freitas. Playing hard exploration games by watching YouTube. Advances in Neural Information Processing Systems, 31, 2018.

[16] P. Sermanet, K. Xu, and S. Levine. Unsupervised perceptual rewards for imitation learning, 2017. URL https://arxiv.org/abs/1612.06699.

[17] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. XIRL: Cross-embodiment inverse reinforcement learning, 2021. URL https://arxiv.org/abs/2106.03911.

[18] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In NeurIPS, 1989.

[19] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In ICRA, 2018.

[20] P. Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In CoRL, 2021.

[21] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. J. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, I. Leal, K.-H. Lee, S. Levine, Y. Lu, U. Malla, D. Manjunath, I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao, M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran, V. Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich.
RT-1: Robotics Transformer for real-world control at scale, 2023. URL https://arxiv.org/abs/2212.06817.

[22] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, X. Chen, K. Choromanski, T. Ding, D. Driess, A. Dubey, C. Finn, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.

[23] M. Kim, K. Pertsch, S. Karamcheti, T. Xiao, A. Balakrishna, S. Nair, R. Rafailov, E. Foster, G. Lam, P. Sanketi, Q. Vuong, T. Kollar, B. Burchfiel, R. Tedrake, D. Sadigh, S. Levine, P. Liang, and C. Finn. OpenVLA: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024.

[24] K. Black, N. Brown, D. Driess, A. Esmail, M. Equi, C. Finn, N. Fusai, L. Groom, K. Hausman, B. Ichter, S. Jakubczak, T. Jones, L. Ke, S. Levine, A. Li-Bell, M. Mothukuri, S. Nair, K. Pertsch, L. X. Shi, J. Tanner, Q. Vuong, A. Walling, H. Wang, and U. Zhilinsky. π0: A vision-language-action flow model for general robot control, 2024. URL https://arxiv.org/abs/2410.24164.

[25] NVIDIA, J. Bjorck, F. Castañeda,
N. Cherniadev, X. Da, R. Ding, L. J. Fan, Y. Fang, D. Fox, F. Hu, S. Huang, J. Jang, Z. Jiang, J. Kautz, K. Kundalia, L. Lao, Z. Li, Z. Lin, K. Lin, G. Liu, E. Llontop, L. Magne, A. Mandlekar, A. Narayan, S. Nasiriany, S. Reed, Y. L. Tan, G. Wang, Z. Wang, J. Wang, Q. Wang, J. Xiang, Y. Xie, Y. Xu, Z. Xu, S. Ye, Z. Yu, A. Zhang, H. Zhang, Y. Zhao, R. Zheng, and Y. Zhu. GR00T N1: An open foundation model for generalist humanoid robots, 2025. URL https://arxiv.org/abs/2503.14734.

[26] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models, 2020.

[27] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion Policy: Visuomotor policy learning via action diffusion, 2024.

[28] F. Ebert, Y. Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine. Bridge Data: Boosting generalization of robotic skills with cross-domain datasets, 2021.

[29] A. Padalkar, A. Pooley, A. Jain, A. Bewley, A. Herzog, A. Irpan, A. Khazatsky, A. Rai, A. Singh, A. Brohan, et al. Open X-Embodiment: Robotic learning datasets and RT-X models. arXiv preprint arXiv:2310.08864, 2023.

[30] A. Khazatsky, K. Pertsch, S. Nair, A. Balakrishna, S. Dasari, S. Karamcheti, S. Nasiriany, M. K. Srirama, L. Y. Chen, et al. DROID: A large-scale in-the-wild robot manipulation dataset, 2024.

[31] AgiBot-World-Contributors, Q. Bu, J. Cai, L. Chen, X. Cui, Y. Ding, S. Feng, S. Gao, X. He, X. Huang, S. Jiang, Y. Jiang, C. Jing, H. Li, J. Li, C. Liu, Y. Liu, Y. Lu, J. Luo, P. Luo, Y. Mu, Y. Niu, Y. Pan, J. Pang, Y. Qiao, G. Ren, C. Ruan, J. Shan, Y. Shen, C. Shi, M. Shi, M. Shi, C. Sima, J. Song, H. Wang, W. Wang, D. Wei, C. Xie, G. Xu, J. Yan, C. Yang, L. Yang, S. Yang, M. Yao, J. Zeng, C. Zhang, Q. Zhang, B. Zhao, C. Zhao, J. Zhao, and J. Zhu. AgiBot World Colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems. arXiv preprint arXiv:2503.06669, 2025.

[32] A. Mandlekar, S.
Nasiriany, B. Wen, I. Akinola, Y. Narang, L. Fan, Y. Zhu, and D. Fox. MimicGen: A data generation system for scalable robot learning using human demonstrations. In 7th Annual Conference on Robot Learning, 2023.

[33] S. Nasiriany, A. Maddukuri, L. Zhang, A. Parikh, A. Lo, A. Joshi, A. Mandlekar, and Y. Zhu. RoboCasa: Large-scale simulation of everyday tasks for generalist robots. arXiv preprint arXiv:2406.02523, 2024.

[34] H. Walke, K. Black, A. Lee, M. J. Kim, M. Du, C. Zheng, T. Zhao, P. Hansen-Estruch, Q. Vuong, A. He, V. Myers, K. Fang, C. Finn, and S. Levine. BridgeData V2: A dataset for robot learning at scale, 2024. URL https://arxiv.org/abs/2308.12952.

[35] A. Team, J. Aldaco, T. Armstrong, R. Baruch, J. Bingham, S. Chan, K. Draper, D. Dwibedi, C. Finn, P. Florence, S. Goodrich,
W. Gramlich, T. Hage, A. Herzog, J. Hoech, T. Nguyen, I. Storz, B. Tabanpour, L. Takayama, J. Tompson, A. Wahid, T. Wahrburg, S. Xu, S. Yaroshenko, K. Zakka, and T. Z. Zhao. ALOHA 2: An enhanced low-cost hardware for bimanual teleoperation, 2024.

[36] A. Iyer, Z. Peng, Y. Dai, I. Guzey, S. Haldar, S. Chintala, and L. Pinto. Open Teach: A versatile teleoperation system for robotic manipulation. arXiv preprint arXiv:2403.07870, 2024.

[37] Z. Fu, T. Z. Zhao, and C. Finn. Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv:2401.02117, 2024.

[38] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023.

[39] T. Lin, Y. Zhang, Q. Li, H. Qi, B. Yi, S. Levine, and J. Malik. Learning visuotactile skills with two multifingered hands. arXiv preprint arXiv:2404.16823, 2024.

[40] A. Fang, A. M. Jose, A. Jain, L. Schmidt, A. Toshev, and V. Shankar. Data filtering networks, 2023. URL https://arxiv.org/abs/2309.17425.

[41] N. Sachdeva, B. Coleman, W.-C. Kang, J. Ni, L. Hong, E. H. Chi, J. Caverlee, J. McAuley, and D. Z. Cheng. How to train data-efficient LLMs, 2024. URL https://arxiv.org/abs/2402.09668.

[42] X. Li, H. Zou, and P. Liu. LIMR: Less is more for RL scaling, 2025. URL https://arxiv.org/abs/2502.11886.

[43] S. Belkhale, Y. Cui, and D. Sadigh. Data quality in imitation learning, 2023. URL https://arxiv.org/abs/2306.02437.

[44] K. Gandhi, S. Karamcheti, M. Liao, and D. Sadigh. Eliciting compatible demonstrations for multi-human imitation learning. In Conference on Robot Learning, 2022. URL https://api.semanticscholar.org/CorpusID:252918784.

[45] NVIDIA, N. Agarwal, A. Ali, M. Bala, Y. Balaji, E. Barker, T. Cai, P. Chattopadhyay, Y. Chen, Y. Cui, Y. Ding, D. Dworakowski, J. Fan, M. Fenzi, F. Ferroni, S. Fidler, D. Fox, S. Ge, Y. Ge, J. Gu, S. Gururani, E. He, J. Huang, J. Huffman, P.
Jannaty, J. Jin, S. W. Kim, G. Klár, G. Lam, S. Lan, L. Leal-Taixé, A. Li, Z. Li, C.-H. Lin, T.-Y. Lin, H. Ling, M.-Y. Liu, X. Liu, A. Luo, Q. Ma, H. Mao, K. Mo, A. Mousavian, S. Nah, S. Niverty, D. Page, D. Paschalidou, Z. Patel, L. Pavao, M. Ramezanali, F. Reda, X. Ren, V. R. N. Sabavat, E. Schmerling, S. Shi, B. Stefaniak, S. Tang, L. Tchapmi, P. Tredak, W.-C. Tseng, J. Varghese, H. Wang, H. Wang, H. Wang, T.-C. Wang, F. Wei, X. Wei, J. Z. Wu, J. Xu, W. Yang, L. Yen-Chen, X. Zeng, Y. Zeng, J. Zhang, Q. Zhang, Y. Zhang, Q. Zhao, and A. Zolkowski. Cosmos world foundation model platform for physical AI, 2025. URL https://arxiv.org/abs/2501.03575.

[46] X. Li, K. Hsu, J. Gu, K. Pertsch, O. Mees, H. R. Walke, C. Fu, I. Lunawat, I. Sieh, S. Kirmani, S. Levine, J. Wu, C. Finn, H. Su, Q. Vuong, and T. Xiao. Evaluating real-world robot manipulation policies in simulation. arXiv preprint arXiv:2405.05941, 2024.

[47] H. Liu, Y. Zhang, V
. Betala, E. Zhang, J. Liu, C. Ding, and Y. Zhu. Multi-task interactive robot fleet learning with visual world models, 2024. URL https://arxiv.org/abs/2410.22689.

[48] K. Pertsch, K. Stachowicz, B. Ichter, D. Driess, S. Nair, Q. Vuong, O. Mees, C. Finn, and S. Levine. FAST: Efficient action tokenization for vision-language-action models. arXiv preprint arXiv:2501.09747, 2025.

[49] S. Li, Y. Gao, D. Sadigh, and S. Song. Unified video action model. arXiv preprint arXiv:2503.00200, 2025.

[50] R. Shah, R. Martín-Martín, and Y. Zhu. MUTEX: Learning unified policies from multimodal task specifications. In 7th Annual Conference on Robot Learning, 2023. URL https://openreview.net/forum?id=PwqiqaaEzJ.

[51] K. Pertsch. RLDS dataset modification, 2024. URL https://github.com/kpertsch/rlds_dataset_mod. GitHub repository.

A Method Details

Figure 5: The performance of the Suboptimal-Only method and the Deduplication-Only method when deleting different percentages of data on the Robomimic and OXE Magic datasets.

A.1 Determining the Unified Threshold

To find a unified threshold for both suboptimal-transition removal and similarity-based state-action deduplication, we run SCIZOR with only one sub-method at a time on the RoboMimic and OXE Magic datasets with deletion ratios of 10%, 20%, and 30%.
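Mapping a target deletion ratio back to a score threshold is essentially a quantile lookup over the per-sample scores; a minimal sketch (a hypothetical helper of ours, assuming higher scores are worse and that samples strictly above the threshold are deleted):

```python
import numpy as np

def threshold_for_ratio(scores: np.ndarray, deletion_ratio: float) -> float:
    """Return the threshold such that roughly `deletion_ratio` of the
    samples have a score above it and would therefore be deleted."""
    return float(np.quantile(scores, 1.0 - deletion_ratio))
```

In practice the same threshold is then checked against the other datasets, as in the analysis above, rather than recomputed per dataset.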
Figure 6: Deletion ratio as a function of the chosen threshold for suboptimal-transition removal and state-action deduplication on the Robomimic and OXE Magic datasets.

Figure 5 shows that Suboptimal-Only achieves its highest success rate at 30% deletion on RoboMimic and 10% deletion on OXE, while Deduplication-Only performs best with 0% deletion on RoboMimic and 20% deletion on OXE. Next, we plot the deletion ratio as a function of the threshold in Figure 6. We observe that a suboptimal threshold of 0.58 yields deletion rates of 29.5% on RoboMimic and 11.9% on OXE, closely matching their respective optimal ratios. A deduplication threshold of 0.99 results in only 0.3% deletion on RoboMimic (insufficient to harm performance) and 4.5% deletion on OXE, which still provides a notable improvement, close to that of deleting 20%. Finally, we apply the unified threshold to all the datasets, yielding the deletion ratios listed in Table 3.

Table 3: Deletion ratio of all the datasets when the unified threshold is applied.

                      RoboMimic   Sirius-Fleet   OXE Magic   OXE RT-X   OXE RT-1
Suboptimal-Only       29.3%       4.7%           11.9%       15.0%      9.2%
Deduplication-Only    0.3%        3.2%           4.5%        0.8%       0.5%
SCIZOR (Total)        29.6%       7.9%           15.8%       15.8%      9.7%

A.2 Task Progress Prediction

Model Architecture. Given a sub-trajectory S_{i,i+T}, the model takes the image observations at timesteps i and i+T as inputs. These observations are independently encoded using a frozen DINO-V2 model to obtain visual features, which provide robust and generalizable visual representations thanks to its self-supervised pretraining on diverse natural images. We compute the difference between the
two feature vectors to obtain a delta feature vector, which emphasizes task-relevant changes and accelerates convergence by discarding redundant static information; this delta feature is concatenated with a CLS token. The resulting sequence is then processed through a series of multi-layer self-attention transformer blocks, and the output CLS token is fed into a classification head to produce the predicted progress bin.

A.3 Training the Task Progress Predictor

We divide each trajectory into five equal time bins. For each training example, we randomly select one bin and sample a time interval ∆t uniformly within its bounds. The model takes as input the frame at time t and the frame at t + ∆t, and is trained to predict the index of the chosen time bin.

A.4 Fixed Time Duration

All experiments use a constant interval of 2 s for progress prediction and state-action feature extraction. Since datasets differ in control frequency, this 2 s window corresponds to a dataset-dependent number of transitions.

B Experimental Details

B.1 Dataset Details

RoboMimic [1] is a robotic imitation learning dataset and benchmark. It provides trajectories collected by proficient-human (PH) or mixed-proficiency-human (MH) demonstrators. The PH dataset consists of 200 trajectories from a single experienced demonstrator, while the MH dataset includes 300 trajectories from six demonstrators: two "better", two "okay", and two "worse". For our experiment, we use the MH dataset for the "Can" and "Square" tasks.

Sirius-Fleet [47] uses a visual world model to predict sub-optimal behaviors during policy rollout and requests human intervention when needed. The Sirius-Fleet dataset is collected over three rounds by allowing the policy to roll out and incorporating human-corrected data for retraining. We utilize the real-world Sirius-Fleet dataset, which adopts the Mutex settings [50] and includes 1,500 trajectories. Our real-world evaluation spans four task sets comprising eight tasks.
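The sampling scheme of A.3 can be sketched as follows (a hypothetical helper of ours, with time measured in frames and frame loading omitted):

```python
import numpy as np

def sample_training_pair(traj_len: float, n_bins: int = 5, rng=np.random):
    """Sample one training example: pick one of `n_bins` equal-width time
    bins uniformly at random (this bin index is the classification label),
    draw the interval dt uniformly within that bin's bounds, then pick a
    valid start time t. Returns (t, t + dt, bin_index)."""
    edges = np.linspace(0.0, traj_len, n_bins + 1)  # equal time bins
    b = rng.randint(n_bins)                         # bin index = label
    dt = rng.uniform(edges[b], edges[b + 1])        # interval inside bin b
    t = rng.uniform(0.0, traj_len - dt)             # keep t + dt in range
    return t, t + dt, b
```

Note that this uses the equal-width trajectory bins of A.3; at inference time the classifier instead operates over the fixed second-interval bins of Section 3.2.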
Open-X-Embodiment (OXE) [29] is a large-scale collection of over one million real-world robotic trajectories. The dataset is multi-task and cross-embodiment, covering various action and observation spaces. We use the RLDS Dataset Modification toolkit [51] to unify the action space to 7 DoFs. We employ three variations of the OXE dataset, each selecting different subsets of the original data and applying different weightings: the "Magic Soup" mixture (OXE Magic) used in the Octo paper [13], the "RT-X" mixture (OXE RT-X) used in the Re-Mix paper [10], and the "RT-1" dataset (OXE RT-1) from the RT-1 paper [21].

B.2 Training and Evaluation Details

Table 4: Hyperparameter configurations and architectural details for the RoboMimic, Sirius-Fleet, and OXE datasets used in our experiments.

                RoboMimic    Sirius-Fleet Real    OXE Magic     OXE RT-X      OXE RT-1
Architecture    BC           BC-Transformer-GMM   Octo          Octo          Octo
Learning Rate   1e-4         1e-4                 3e-4          3e-4          3e-4
Weight Decay    0.1          0.1                  0.1           0.1           0.1
Batch Size      16           16                   2048          2048          256
Params          23M          35M                  93M           93M           93M
Steps           300K         1M                   300K          300K          200K
Action Chunk    1            10                   4             4             4
Obs History     1            10                   2             2             2
GPU             1 L40S 48GB  1 L40S 48GB          32 H100 80GB  32 H100 80GB  8 L40 48GB

We evaluate SCIZOR across various architectures and datasets to demonstrate its applicability to different imitation
https://arxiv.org/abs/2505.22626v1
learning algorithms. We intentionally leave all model architectures and hyper- parameters unchanged from the public reference implementations of each dataset, demonstrating that S CIZOR isplug-and-play . For the RoboMimic andSirius-Fleet experiments, we train each model using three random seeds. For the larger OXE experiments, we train the Octo model using two seeds. On the RoboMimic dataset, we train a basic Behavior Cloning (BC) model with MLP layers [1] for 600 epochs. We evaluate every 20 epochs, select the top three checkpoints per random seed, and report the mean success rate over 80 trials per task for each checkpoint, then average across seeds. For the Sirius-Fleet real-robot experiments, we use a BC-Transformer model with a GMM head [1, 4, 47], and train for 2000 epochs. Evaluation is performed at the 2000 epoch, and we report the average success rate of the top three checkpoints for each seed. We run 10 trials per task per seed for this setting. We train the Octo-Small model on all three variations of the OXE dataset. For OXE Magic and OXE RT-X, training is performed for 300K steps with a batch size of 2048 using two seeds. For OXE RT-1, we train for 200K steps with a batch size of 256 using three seeds. Evaluation is conducted in the SIMPLER simulation environment [46] on the “Pick Coke Can” and “Move Near” tasks. We only evaluate on the Visual-Matching setting of SIMPLER, which means there won’t be lighting or texture variation. For each seed, we identify the highest success rate among the last three saved checkpoints, and then average these best performances across seeds. We evaluate 300 trials per checkpoint on “Pick Coke Can” and 240 trials per checkpoint on “Move Near”. Table 4 provides the detailed hyperparameters and model architectures used in our experiments. B.3 Evaluation Task Details for each Dataset Table 5 presents the detailed descriptions and visualizations of the tasks used in our experimental settings. 
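The two checkpoint-aggregation protocols above can be sketched as follows. This is a minimal sketch with hypothetical function names (not from any released code); `results` maps each seed to the per-checkpoint success rates measured over the stated number of trials:

```python
def robomimic_score(results, top_k=3):
    """RoboMimic / Sirius-Fleet protocol: mean success of the top-k
    checkpoints within each seed, then averaged across seeds."""
    per_seed = []
    for rates in results.values():
        best = sorted(rates, reverse=True)[:top_k]
        per_seed.append(sum(best) / len(best))
    return sum(per_seed) / len(per_seed)

def simpler_score(results, last_k=3):
    """SIMPLER protocol: best success among the last-k saved
    checkpoints per seed, then averaged across seeds."""
    per_seed = [max(rates[-last_k:]) for rates in results.values()]
    return sum(per_seed) / len(per_seed)
```

Note the asymmetry: the RoboMimic protocol averages several good checkpoints (robust to evaluation noise), while the SIMPLER protocol takes a per-seed best over the final checkpoints.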
These tasks span both simulated and real-world environments, covering a diverse range of manipulation challenges.

Table 5: Task name and description for each setting (visualizations omitted here).

Robomimic Can: The robot is required to place a coke can from a large bin into a smaller target bin.
Robomimic Square: The robot is required to pick a square nut and place it on a rod. Substantially more difficult than Pick Place Can due to the precision required.
SIMPLER Pick Coke Can: The robot is instructed to grasp the empty coke can on the table and lift it up.
SIMPLER Move Near: The robot is instructed to move one object next to another object, while the third object serves as a distractor.
Mutex Mug to Basket: The robot is instructed to pick up the blue mug and then place it in the basket.
Mutex Bowl to Basket: The robot is instructed to pick up the red bowl and then place it in the basket.
Mutex Mug to Oven Tray: The robot is instructed to pick up the blue mug and then place it on the oven tray.
Mutex Bread to Plate: The robot is instructed to pick up the bread and then place it on the white plate.
Mutex Mug to Plate: The robot is instructed to pick up the pink mug and then place it on the white plate.
Mutex Bowl to Plate: The robot is instructed to pick up the bowl with hot dogs and then place it on the white plate.
Mutex Book to Caddy: The robot is instructed to pick up the book and then place it in the back compartment of the caddy.
Mutex Cup to Caddy: The robot is instructed to pick up the red cup and then place it in the front compartment of the caddy.

B.4 Extra Results and Visualization

Tables 6, 7, and 8 present the detailed results of different methods on each task of the reported datasets.

Table 6: Success rates on Robomimic across different tasks and methods

                           Can      Square
Suboptimal-Removal Only    84.0%    37.8%
Deduplication Only         74.3%    22.4%
Random Deletion            78.0%    32.2%
DemInf                     88.9%    41.4%
SCIZOR (Ours)              84.0%    40.8%

Table 7: Success rates on Sirius-Fleet across different tasks and methods

                           Book→Caddy   Cup→Caddy    Bowl→Plate   Mug→Plate
No Deletion                40.0%        65.0%        65.0%        35.0%
Suboptimal-Removal Only    65.0%        75.0%        70.0%        60.0%
Deduplication Only         45.0%        75.0%        80.0%        70.0%
Random Deletion            50.0%        65.0%        55.0%        30.0%
DemInf                     66.7%        53.3%        60.0%        50.0%
SCIZOR (Ours)              66.7%        73.3%        93.3%        83.3%

                           Mug→Basket   Bowl→Basket  Mug→Tray     Bread→Plate
No Deletion                35.0%        60.0%        35.0%        50.0%
Suboptimal-Removal Only    35.0%        90.0%        50.0%        60.0%
Deduplication Only         65.0%        70.0%        60.0%        60.0%
Random Deletion            25.0%        70.0%        30.0%        35.0%
DemInf                     50.0%        76.7%        53.3%        63.3%
SCIZOR (Ours)              66.7%        96.7%        66.7%        90.0%

Table 8: Success rates on OXE across different tasks, mixtures and methods

                             Pick Can   Move Near
OXE Magic Mixture
  No Deletion                27.0%      13.1%
  Random Deletion            29.1%      12.1%
  Suboptimal-Removal Only    33.7%      16.9%
  Deduplication Only         31.8%      12.3%
  SCIZOR (Ours)              39.5%      16.7%
OXE RT-X Mixture
  Re-Mix                     40.5%      15.0%
  SCIZOR (Ours)              43.3%      19.2%

We also visualize the suboptimal
scenarios identified by SCIZOR across both the Robomimic and Sirius-Fleet datasets in Figures 7 and 8. These visualizations highlight common suboptimal modes detected by our method, offering insight into SCIZOR's capabilities.

B.5 Findings during Evaluation

During evaluation of SCIZOR on the Sirius-Fleet dataset, we observed that several failure modes present in the policy trained on the full dataset disappeared when using SCIZOR to curate the dataset. For example, in the book-to-caddy task, the baseline policy often allowed the book to collide with the caddy, whereas our policy reliably avoids any contact. Furthermore, our policy is noticeably more reactive: when it does encounter a failure, the baseline tends to pause or oscillate in place, but our policy quickly returns to its original trajectory and retries until the task is completed.

Stuck at Collision: The book held by the robot collided with the caddy, leading to a halt.
Manipulation Failure: The bowl held by the robot dropped accidentally.
Manipulation Failure: The robot gripper failed to grasp the blue mug.
Slow Motion: The robot
gripper moved towards the bowl at a slow pace.
Pause: The robot arm stopped behind the cereal box for a long time.

Figure 7: Visualizations for suboptimal scenarios detected in the Robomimic dataset

Move Back and Forth: The robot arm moved aimlessly and did not contribute to the task progress.
Manipulation Failure: The robot gripper missed the can when trying to pick it up.
Manipulation Failure: The robot gripper knocked over the can when trying to pick it up.
Slow Motion: The robot gripper moved towards the square rod slowly.
Stuck at Collision: The square rod held by the robot collided with the column when trying to insert.

Figure 8: Visualizations for suboptimal scenarios detected in the Sirius-Fleet dataset
Spatial Knowledge Graph-Guided Multimodal Synthesis

Yida Xue1, Zhen Bi1,2, Jinnan Yang2, Jungang Lou2, Huajun Chen1, Ningyu Zhang1∗
1Zhejiang University 2Huzhou University
{xueyida, bizhen_zju, huajunsir, zhangningyu}@zju.edu.cn

Abstract

Recent advances in multimodal large language models (MLLMs) have significantly enhanced their capabilities; however, their spatial perception abilities remain a notable limitation. To address this challenge, multimodal data synthesis offers a promising solution. Yet, ensuring that synthesized data adhere to spatial common sense is a non-trivial task. In this work, we introduce SKG2Data, a novel multimodal synthesis approach guided by spatial knowledge graphs, grounded in the concept of knowledge-to-data generation. SKG2Data automatically constructs a Spatial Knowledge Graph (SKG) to emulate human-like perception of spatial directions and distances, which is subsequently utilized to guide multimodal data synthesis. Extensive experiments demonstrate that data synthesized from diverse types of spatial knowledge, including direction and distance, not only enhance the spatial perception and reasoning abilities of MLLMs but also exhibit strong generalization capabilities. We hope that the idea of knowledge-based data synthesis can advance the development of spatial intelligence.

1 Introduction

Despite significant advancements in Multimodal Large Language Models (MLLMs) (Zhao et al., 2023b; Yin et al., 2023) for visual processing tasks, a critical limitation persists in their ability to comprehend spatial relationships (Prasad et al., 2023; Liu et al., 2023b; Du et al., 2024; Lei et al., 2024; Tong et al., 2024; Shiri et al., 2024; Zheng et al., 2024). Tong et al. (2024) observe that MLLMs often operate with "eyes wide shut", revealing fundamental deficiencies in visual-spatial comprehension even in state-of-the-art models. This limitation underscores a significant gap between MLLMs and human capabilities, as humans inherently excel in tasks requiring spatial intelligence.
∗Corresponding Author.

arXiv:2505.22633v1 [cs.CL] 28 May 2025

Figure 1: The Spatial Knowledge Graph (SKG) serves as a structured representation of object attributes and spatial relationships. Guided by the SKG, SKG2Data generates images and multimodal data, grounded in the concept of knowledge-to-data generation.

To address this challenge, multimodal data synthesis offers a promising solution, requiring the generation of large-scale, spatially coherent multimodal data that accurately adheres to real-world constraints. A straightforward approach involves developing computational models (or world models) capable of generating continuous synthetic data (Du et al., 2024; Ray et al., 2024; Li et al., 2025). However, this method is inefficient due to redundant data across scenarios and demands frequent model updates when environments change, leading to significant maintenance costs. Notably, human spatial intelligence stems from the brain's acquisition of spatial knowledge, which enables robust generalization through the application of cognitive frameworks (Hodges et al., 1999; Herweg and Kahana, 2018; Peer et al., 2021). This insight naturally leads to a new paradigm for multimodal data synthesis: modeling spatial knowledge to guide the targeted generation of informative data.

Inspired by this insight, as illustrated in Figure 1, we introduce SKG2Data, an innovative framework specifically designed to generate multimodal synthetic data for spatial understanding. The core idea is to leverage a specialized knowledge graph (KG) as a spatial knowledge representation to guide the generation of multimodal data. Specifically, we construct a Spatial KG (SKG) that simulates human-like understanding of spatial knowledge by characterizing scenes and modeling spatial relationships. SKG2Data utilizes the SKG, which encodes object attributes and their spatial relationships, to generate corresponding images and textual data.
We automatically construct a multimodal instruction dataset for MLLMs based on the SKG, comprising 974 nodes and 4,731 triplets that encode spatial relationships. To ensure a fair comparison of results and prevent data contamination, we also use SKG2Data to synthesize an evaluation dataset, referred to as SKG2Data-Holdout, which includes 120 images and 566 single-choice questions. We fine-tune the LLaVA-1.6-Vicuna-7B (Liu et al., 2024a) and Llama-3.2-Vision-11B (Meta AI, 2024) models using the synthesized data. The fine-tuned models show significant performance improvements on SKG2Data-Holdout and other spatial understanding benchmarks, while maintaining their general abilities. Additionally, we carry out comprehensive experiments to examine the effect of different spatial relation types, dataset sizes, and object densities on model performance.

2 Preliminaries

Spatial Knowledge Graph. In recent years, KGs have been widely applied across various domains, such as FACTKG (Kim et al., 2023) for fact verification. Unlike traditional KGs, we introduce the Spatial KG (SKG), a specialized KG designed to encode spatial knowledge and guide multimodal data synthesis for spatial tasks. The SKG is defined as a KG where nodes represent objects and edges represent spatial relationships through triplets. Formally, the SKG can be represented as:

$G = (E, T)$  (1)

where $E$ denotes the set of nodes (objects) and $T$ denotes the set of triplets describing spatial relations. These relationships may include directional relationships (e.g., "to the left of" and "to the right of"), distance relationships (e.g., "close to" and "far away"), or a combination of both. An SKG represents a collection of objects and their spatial relationships within a specific scene.

https://arxiv.org/abs/2505.22633v1

In contrast to Scene Graphs (Zhu et al., 2017, 2022), which provide a general representation of scenes, the triplets in an SKG are specifically designed to emphasize spatial relations, with nodes $E$ containing detailed descriptions of objects. This structured representation enables the generation of complex and detailed multimodal synthesis data that emphasize spatial relationships.

Multimodal Data Synthesis. The task of multimodal data synthesis involves generating high-quality input-output pairs related to images using a model $f$. For a given image $i$ (either pre-existing or synthesized), the task uses $i$, its description $d$, or a combination of both as inputs to $f$. The model $f$ then synthesizes a set of inputs $Q$ and their corresponding outputs $A$.

We focus on generating multimodal data, with an emphasis on capturing spatial relationships between objects. Our methodology involves the simultaneous creation of images and their corresponding question-answer pairs through a three-component framework: a multimodal generative model $f_1$, a text-to-image generative model $f_2$, and an SKG $G$. Inspired by LLM-grounded Diffusion (LMD) (Lian et al., 2024), we use $f_1$ to generate bounding boxes $B$ and description $d$ under the guidance of $G$. These bounding boxes are then integrated into the layout-diffusion model $f_2$ to synthesize images. After image synthesis, $G$ and $B$ further assist $f_1$ in generating multiple question-answer pairs $(Q, A)$ relevant to the synthesized image $i$.
$(B, d) = f_1(G)$
$i = f_2(B, d, G)$
$(Q, A) = f_1(B, G, i)$  (2)

For implementation, we use GPT-4o and the GLIGEN (Li et al., 2023) version of LMD as $f_1$ and $f_2$, respectively.

3 Spatial Knowledge Graph-Guided Data Synthesis

In this section, we introduce SKG2Data, a novel approach that takes advantage of an SKG incorporating spatial positional relationships to guide the synthesis of multimodal data. As shown in Figure 2, SKG2Data consists of two core modules: Spatial KG Generation and Multimodal Data Synthesis.

Figure 2: A comprehensive overview of our framework. Our framework consists of two core modules: Spatial KG Generation and Multimodal Data Synthesis. The Spatial KG Generation module generates an intermediate representation, the Spatial KG, which guides the synthesis of multimodal data. The Multimodal Data Synthesis module is tasked with generating Image Data and the corresponding Textual Data.

3.1 Spatial Knowledge Graph Generation

Scenes and Objects Generation. The real world encompasses a wide variety of scenes, each containing distinct sets of objects. To simulate the distribution of scenes and objects in real-world images, we leverage GPT-4o to generate a series of scenes and the objects likely to appear within them. Starting with a few scenes as few-shot examples, we use GPT-4o to generate an additional 160 safe, diverse, and realistic scenes, significantly enhancing the diversity of our dataset. For each scene, GPT-4o is further employed to produce a list of plausible objects that could exist in the given context. To improve object generation, particularly for uncommon scenes, we incorporate Wikipedia documents as external knowledge to augment GPT-4o's understanding. This approach ensures that the synthesized objects exhibit distributions closely aligned with real-world images, effectively preventing the generation of incongruous or improbable objects within the specified scenes. Formally, we first employ $f_1$ to generate $m$ general scenes, denoted as $\{s_1, s_2, \ldots, s_m\}$.
For each scene $s_i$, we then generate $n$ objects that are likely to appear in that scene, expressed as:

$\{o_{i1}, o_{i2}, \ldots, o_{in}\} = f_1(s_i)$  (3)

Spatial Knowledge Graph Construction. Due to the diverse rules and knowledge governing spatial relationships between objects in the real world, we employ GPT-4o to generate positional relationships that align with real-world spatial cognition. Additionally, we introduce the SKG as an intermediary representation to facilitate the generation of high-quality multimodal data for spatial understanding tasks. The SKG is constructed using GPT-4o's
capabilities to synthesize entities and their spatial relationship triplets.

In the initial phase, $f_1$ is used to select a subset $\{o_{ij_1}, o_{ij_2}, \ldots, o_{ij_k}\}$ of $k$ candidate objects from the original set $\{o_{i1}, o_{i2}, \ldots, o_{in}\}$. These selected objects are utilized to construct a specific SKG. The subset $C_k$ is formally represented as:

$C_k = \{o_{ij_1}, o_{ij_2}, \ldots, o_{ij_k}\}, \quad C_k \subseteq \{o_{i1}, o_{i2}, \ldots, o_{in}\}, \quad |C_k| = k$  (4)

Next, each object in $C_k$ undergoes an attribute enrichment process via $f_1$, which adds quantities and attributes such as color, orientation, and material. This results in a set of $l$ entities, denoted as $E_l$, with rich descriptive attributes.

Figure 3: Distribution of the top-15 objects and spatial relationships. There are a total of 974 objects and 95 relationships.

In $E_l$, each entity represents a single object with a quantity of one and clearly defined attributes. If objects within the same category have a quantity greater than one, they are divided into multiple entities. For example, a selected object "Balloon" with a higher quantity could be expanded into two distinct entities which reflect the addition of specific attributes by GPT-4o: "A blue balloon" and "A yellow balloon". This approach ensures precise control over the number of entities, maintaining consistency with the objects in the subsequently generated images.

$E_l = \{e_{ij_1}, e_{ij_2}, \ldots, e_{ij_l}\} = f_1(C_k), \quad |E_l| = l \geq k$  (5)

The entities in $E_l$ serve as the nodes of the SKG $G$. The subsequent task involves synthesizing spatial relationships between pairs of entities in $E_l$. This is achieved using $f_1$, which generates a set of relationship triplets $T_l$:

$T_l = \{(e_{ij_a}, r, e_{ij_b}) \mid e_{ij_a}, e_{ij_b} \in E_l,\ a \neq b,\ r = f_1(e_{ij_a}, e_{ij_b})\}$  (6)

Each triplet $(e_{ij_a}, r, e_{ij_b})$ represents a unique pair of entities $e_{ij_a}$ and $e_{ij_b}$, along with their spatial relationship $r$, which could involve directional relationships, distance relationships, or a combination of both. The collection of these triplets $T_l$ comprehensively captures all pairwise spatial interactions within $E_l$. By combining the nodes $E_l$ and the triplet relationships $T_l$, we construct the SKG $G$.
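The entity-expansion step (Eq. 5) and pairwise triplet construction (Eq. 6) can be sketched as follows. This is a minimal sketch with hypothetical helper names; in the paper both steps are performed by GPT-4o ($f_1$) rather than by hand-written rules, so `relate` here stands in for that model call:

```python
from itertools import combinations

def expand_entities(objects_with_counts, attribute_pool):
    """Eq. (5): split each object with quantity > 1 into one entity per
    instance, attaching a distinguishing attribute (here, a color)."""
    entities = []
    for obj, count in objects_with_counts.items():
        if count == 1:
            entities.append(obj)
        else:
            for i in range(count):
                entities.append(f"A {attribute_pool[i % len(attribute_pool)]} {obj.lower()}")
    return entities

def build_skg(entities, relate):
    """Eq. (6): one spatial-relation triplet per unordered entity pair,
    where `relate` is a stand-in for the GPT-4o relation generator f1."""
    triplets = [(a, relate(a, b), b) for a, b in combinations(entities, 2)]
    return entities, triplets  # G = (E, T)
```

For three entities this yields C(3, 2) = 3 triplets, matching the claim that $T_l$ covers all pairwise spatial interactions within $E_l$.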
3.2 Multimodal Data Synthesis

Image Data Generation. End-to-end text-to-image generative models, such as Stable Diffusion (Rombach et al., 2021) and DALL-E (Betker et al., 2023), often encounter challenges in accurately controlling object positions and quantities. In contrast, layout-based diffusion models enable precise control over object placement and quantities, ensuring that the generated images closely conform to the SKG. We employ GPT-4o to generate bounding boxes and captions under the guidance of the SKG, and seamlessly incorporate these outputs into a GLIGEN-based model. Recognizing that diffusion models cannot guarantee perfection in every generated image, we generate multiple images for each SKG by varying the random seed.

Specifically, we input the intermediate representations $E_l$ and $T_l$ derived from $G$ directly into $f_1$ as guidance. Additionally, we leverage few-shot examples to instruct $f_1$ on how to generate appropriate bounding boxes $B_l$:

$B_l = \{b_{ij_1}, b_{ij_2}, \ldots, b_{ij_l}\} = f_1(E_l, T_l)$  (7)

In order to synthesize data on whether an object exists in the image for subsequent tasks, we randomly select an object from the other objects in the current scene as a non-existent object $o_{neg}$, ensuring that this object is excluded from the synthesized image. The elements $E_l$, $T_l$, and $s_i$, along with the optional non-existent object $o_{neg}$, are input to $f_1$ to produce the caption $d$:

$o_{neg} \in \{o \mid o \notin C_k\} \cup \{\varnothing\}, \quad d = f_1(E_l, T_l, s_i, o_{neg})$  (8)

Finally, we input $B_l$ and $d$ into $f_2$ to generate the final image $i$:

$i = f_2(d, B_l)$  (9)

Image Data Filtering. During the image generation stage, a significant challenge is the potential for discrepancies, such as hallucinations, which can occur during the transformation of the SKG into bounding boxes and captions. Additionally, the image generation model may introduce its own inaccuracies. These issues can result in inconsistencies between the generated image and the original SKG. To address this challenge, we conduct an image validation process using GPT-4o as an image filtering agent.
This process is aimed at evaluating and ensuring that the generated image aligns with the SKG. Formally, we utilize $f_1$ to assess the alignment of an image $i$ with the expected entities $E_l$ and relationship triplets $T_l$. If $f_1$ returns True, the image $i$ is preserved; otherwise, it is discarded. This process is represented by the following equation:

$f_1(i, E_l, T_l) = \begin{cases} \text{True}, & \text{if } i \text{ aligns with both } E_l \text{ and } T_l, \\ \text{False}, & \text{otherwise.} \end{cases}$  (10)

After filtering, the dataset contains a total of 974 objects and 95 spatial relationships. The distribution of the top-15 objects and spatial relationships in the filtered images is illustrated in Figure 3.

Textual Data Generation. The training data for textual data is primarily composed of open-ended question-answer pairs, while the evaluation data predominantly consists of single-choice questions. To enhance GPT-4o's understanding of object attributes and spatial relationships in images, we leverage the SKG to guide the generation of question-answer pairs. This approach not
only improves the accuracy of the data generation process but also reduces the occurrence of hallucinations. Furthermore, it enables the creation of more diverse and complex questions tailored for spatial understanding tasks. These questions are categorized into two types based on their primary focus: Entity-Based Data and Relation-Based Data.

Entity-Based Data focuses on the existence, attributes, and quantities of objects in a given image. The construction of Entity-Based Data is primarily guided by the entity set $E_l$, along with the explicitly defined non-existent objects $o_{neg}$. This type of data serves as general visual question-answering data, maintaining the general capabilities of MLLMs without requiring additional external datasets. In contrast, Relation-Based Data emphasizes the spatial relationships between objects. The generation of Relation-Based Data is based on the spatial relationship triplets $T_l$ and the generated bounding boxes $B_l$. Relation-Based Data is specifically designed to improve the spatial reasoning capabilities of MLLMs. The generation process for each question $x$ and its corresponding answer $y$ is formalized as follows:

$x = f_1(i, B_l, E_l, T_l), \quad y = f_1(i, B_l, E_l, T_l, x)$  (11)

Textual Data Filtering. For each generated textual data point, we employ GPT-4o as a text filtering agent to verify its accuracy. To avoid excessive reliance on the information provided in the SKG, we withhold the SKG during the textual data verification phase. Instead, GPT-4o relies solely on the image to verify the correctness of the provided reference answers. The evaluation is performed using $f_1$, which determines the correctness of the answer $y$ for the question $x$ based on the image $i$. If $f_1$ returns True, the textual data $(x, y)$ is preserved:

$f_1(i, x, y) = \begin{cases} \text{True}, & \text{if } y \text{ is correct}, \\ \text{False}, & \text{otherwise.} \end{cases}$  (12)

Examples of the final training data generated are illustrated in Figure 7 (Appendix A).

4 Experiments

4.1 Experimental Settings

Baselines.
We conduct experiments on three open-source MLLMs: LLaVA-1.5-7B (Liu et al., 2023a), LLaVA-1.6-7B (Liu et al., 2024a), and Llama-3.2-Vision-11B (Meta AI, 2024). Both LLaVA-1.5 and LLaVA-1.6 use Vicuna-7B (Chiang et al., 2023) as their LLM backbone.

Benchmarks. For a comprehensive evaluation, we examine all models across two distinct task categories. The first category encompasses benchmarks that are either directly or partially related to spatial understanding. This includes the publicly available benchmarks MMVP and COCO-Spatial (Kamath et al., 2023). For COCO-Spatial, we utilize a subset named Two-obj that focuses on the positional relationships of two objects. We then convert the original positive and negative sample pairs into single-choice questions, which are more conducive to MLLM testing. We reserve a portion of the synthesized images to generate test questions, called SKG2Data-Holdout, which are also used for evaluation. The second category involves general capability testing benchmarks for MLLMs, including MMStar and HallusionBench.

Table 1: Comparison with SoTA methods on five benchmarks.

Dataset            LLaVA-1.5   LLaVA-1.6   LLaVA-1.6 (w/ SKG2Data)   Llama-3.2-Vision   Llama-3.2-Vision (w/ SKG2Data)
SKG2Data-Holdout   62.2        68.6        70.1 (+1.5)               73.3               74.7 (+1.4)
COCO-Spatial       52.4        75.4        79.3 (+3.9)               45.9               59.8 (+13.9)
MMVP               24.7        32.0        36.7 (+4.7)               29.3               30.7 (+1.4)
MMStar             33.1        37.6        36.7 (-0.9)               49.8               48.1 (-1.7)
HallusionBench     27.6        27.6        27.2 (-0.4)               40.3               45.1 (+4.8)
Average            40.0        48.2        50.0 (+1.8)               47.7               51.7 (+4.0)

Implementation Details. We fine-tune both LLaVA-1.6 and Llama-3.2-Vision using the training data. These models consist of a vision encoder, a projector, and an LLM backbone. During training, we tune the LLM backbone with LoRA modules (Hu et al., 2021), keeping the projector and vision encoder parameters fixed. The detailed settings are shown in Table 2 (Appendix A).
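The conversion of COCO-Spatial positive/negative caption pairs into single-choice questions, mentioned in the Benchmarks paragraph above, can be sketched as follows (a minimal sketch under our own naming; the actual conversion script is not published in this excerpt):

```python
import random

def to_single_choice(positive, negative, question, rng=random):
    """Turn a positive/negative caption pair into a two-option
    single-choice question with a shuffled answer key."""
    options = [positive, negative]
    rng.shuffle(options)  # avoid a positional bias toward option A
    answer = "A" if options[0] == positive else "B"
    return {
        "question": question,
        "options": {"A": options[0], "B": options[1]},
        "answer": answer,
    }
```

Shuffling the option order matters here: MLLMs are known to exhibit option-position biases, so a fixed "correct answer is always A" layout would inflate scores.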
4.2 Main Results

Table 1 presents a comprehensive comparison of multiple models evaluated across five benchmarks, highlighting the strengths of SKG2Data.

Spatial Understanding Results. Fine-tuned LLaVA-1.6 and fine-tuned Llama-3.2-Vision demonstrate marked improvements across several challenging benchmarks for spatial tasks, highlighting the efficacy of our methods. On the SKG2Data-Holdout benchmark, fine-tuned LLaVA-1.6 achieves a score of 70.1 (+1.5)↑, while fine-tuned Llama-3.2-Vision reaches 74.7 (+1.4)↑, indicating robust performance enhancements. For the COCO-Spatial benchmark, fine-tuned LLaVA-1.6 outperforms
with a score of 79.3 (+3.9)↑, and fine-tuned Llama-3.2-Vision impressively achieves 59.8 (+13.9)↑, suggesting significant spatial understanding advancements. In the MMVP benchmark, fine-tuned LLaVA-1.6 shows notable improvement with a score of 36.7 (+4.7)↑, while fine-tuned Llama-3.2-Vision also makes gains, reaching 30.7 (+1.4)↑. In summary, our experimental results indicate that synthesized training data can improve the spatial understanding ability of MLLMs.

General Visual Understanding Results. We present the results of our models across several general visual understanding benchmarks: MMStar, MMVet, and HallusionBench. In the MMStar benchmark, fine-tuned LLaVA-1.6 slightly decreases to 36.7, reflecting a minor decline of 0.9↓, while fine-tuned Llama-3.2-Vision shows a slight decrease to 48.1, marking a reduction of 1.7↓. Interestingly, in the HallusionBench benchmark, fine-tuned LLaVA-1.6 experiences a negligible decrease to 27.2, showing a marginal drop of 0.4↓, while fine-tuned Llama-3.2-Vision demonstrates a remarkable improvement, advancing to 45.1 (+4.8)↑, indicating enhanced performance on the hallucination task. Overall, our results demonstrate that training LLaVA-1.6 and Llama-3.2-Vision with synthesized data does not significantly degrade their general visual understanding abilities.

Figure 4: The result of removing specific positional relationship data with the same amount of data.

Average Results. The fine-tuned LLaVA-1.6 model achieves an average score of 50.0 (+1.8), while the fine-tuned Llama-3.2-Vision model reaches 51.7 (+4.0). These improvements confirm that SKG2Data effectively enhances spatial reasoning while maintaining performance in general visual understanding tasks.

4.3 Ablation and Analysis

To evaluate the impact of the synthesized data generated by SKG2Data on model performance, we conducted a comprehensive ablation study using LLaVA-1.6.
This study focuses on three key aspects: (1) the influence of different relationship types, (2) the effect of varying data quantities, and (3) the impact of the number of objects.

For the first aspect, GPT-4 is utilized to categorize the positional relationships in the training data into two types: distance relationships and directional relationships. The models are then trained separately on each type. To minimize the influence of data quantity on the experimental results, we sample 2,000 data points from the training dataset after eliminating one type of positional relationship. The detailed experimental results are depicted in Figure 4.

For the second aspect, we randomly sample 2K and 5K instances from the 15K training data. For the third aspect, we regulate the data volume to 5K to examine the influence of the number of objects on model performance within the SKG. Setting the limit to 3 objects, we categorize the counts into two groups: (1) counts that are greater than or equal to 3 and (2) those that are less than 3. The results of the second and third aspects are shown in Figure 5.

Figure 5: The influence of the number of objects and the quantity of data on experimental results.

The experimental results shown in Figure 4 guide us towards Finding 1 and Finding 2. Additionally, Finding 3 is derived from the experimental results observed in Figure 5.

Finding 1: Directional knowledge plays a crucial role in enhancing the MLLM's spatial perception capabilities. We observe that training data containing directional relationships leads to more significant performance improvements compared to data with distance relationships across three spatially correlated datasets. This suggests that MLLMs may be more sensitive to directional knowledge and prioritize spatial perception and understanding through directional relationships. A potential explanation is that directional knowledge, compared to distance knowledge, exhibits greater variability, providing stronger spatial signals that are more effective in enhancing model performance.
Finding 2: Data synthesized from two types of spatial knowledge, including direction and distance, exhibits generalization ability. It can be observed that using both types of spatial relationship data enhances performance across three datasets. Significantly, the COCO-Spatial dataset primarily targets directional understanding, yet employing data that solely comprises distance relationships for training can also enhance performance on this dataset. This improvement could possibly be attributed to the varied spatial correlations present in the data synthesized through the spatial knowledge in the SKG, and to the generalization of an array of positional relationships. This phenomenon underscores the superiority of knowledge-based data synthesis over previous methods.

Finding 3: Increasing the number of objects
improves the performance of MLLMs in visual detail tasks. Our experiments demonstrate that synthesizing data with varying quantities of objects improves spatial understanding, with greater gains observed as the number of objects increases on the MMVP benchmark. Notably, training with more objects significantly enhances fine-grained recognition capabilities, as reflected in improved performance on the detail-oriented MMVP benchmark. We attribute these improvements to the inherent complexity of multi-object scenes, which expose the model to richer spatial relationships and constraints during learning. By leveraging the structured spatial knowledge embedded in the SKG, our proposed SKG2Data systematically generates multimodal training data with diverse and spatially plausible object arrangements, enabling more effective learning of visual and spatial reasoning.

Discussion on Knowledge-to-Data Synthesis. It can be seen that with more data, the performance of the trained model can be further improved. At the same time, it can also be found that high-quality data is important: a small amount of high-quality training data can significantly improve the performance of models in specific fields, which is in line with the opinion of "less is more" (Zhou et al., 2023; Ye et al., 2025). As shown in Figure 6, the trained model exhibits improved accuracy on spatially related questions. The core idea of our work is to synthesize data through automated modeling of knowledge, ensuring the rationality of the data itself. The data generated through spatial knowledge is consistent with human spatial cognition, which ensures the relevance and reliability of the data. In addition, to synthesize additional data, we can either randomly sample more data from the SKG or expand the scenes to further enrich the SKG. Compared with methods that synthesize multimodal data using code, simulators, etc., our approach is more efficient and has better generalization ability.

Figure 6: Case analysis: on spatially related datasets, the trained model enhances spatial understanding abilities.
5 Related Work

Multi-Modal Large Language Models. The remarkable success of large language models (LLMs) (Brown et al., 2020; Achiam et al., 2023; Touvron et al., 2023) has driven the development of MLLMs (Ye et al., 2023; Liu et al., 2024a; Chen et al., 2023; Meta AI, 2024; Gemini Team, 2024; Bai et al., 2023; OpenAI, 2023, 2024). By incorporating visual backbones, these MLLMs demonstrate strong performance across a wide range of vision-language tasks, significantly improving the processing and understanding of both text and visual information. At the same time, a variety of benchmarks (Yu et al., 2023; Guan et al., 2024; Chen et al., 2024) have been developed to evaluate the capabilities of large multimodal models. Additionally, with increasing attention on spatial understanding in MLLMs, specialized benchmarks (Kamath et al., 2023; Du et al., 2024; Tong et al., 2024) have been introduced. These benchmarks are specifically designed to tackle the challenges of evaluating spatial comprehension in MLLMs.

Synthetic Data Generation. Synthetic data has garnered significant attention as a novel solution to the challenges of obtaining large, diverse, and high-quality datasets (Chang et al., 2024; Liu et al., 2024b; Long et al., 2024; Kim et al., 2024). Many existing studies primarily rely on pre-existing images (Wu et al., 2024), occasionally incorporating additional image information such as captions, bounding boxes, or OCR input (Liu et al., 2023a; Zhao et al., 2023a; Zhang et al., 2023). However, another line of research goes beyond generating textual instruction data and focuses on creating high-quality images through various synthesis techniques rather than relying solely on existing image datasets. For example, REACHQA (He et al., 2024) and Multimodal Self-Instruct (Zhang et al., 2024) use code to precisely synthesize chart images. Additionally, EmbSpatial utilizes simulators to generate images, while SynCLR (Tian et al., 2024) and VisMin (Awal et al., 2024) employ diffusion models to produce new images.
Furthermore, because LLMs tend to exhibit biases and follow long-tailed distributions due to their lack of specialization in specific domains (Navigli et al., 2023; Fei et al., 2023), a growing body of research (Feng et al., 2023; Xu et al., 2024; Wang et al., 2024; Yang et al., 2025; Ovadia et al., 2025) employs knowledge enhancement techniques, such as knowledge graphs (KGs), to generate higher-quality data.

6 Conclusion

In this work, we propose SKG2Data, a novel multimodal synthetic data generation method that leverages a Spatial Knowledge Graph to synthesize images and question-answer pairs, addressing the lack of high-quality data for spatial understanding tasks. Using SKG2Data, we create a multimodal instruction dataset and a benchmark, SKG2Data-Holdout, to enhance and evaluate the spatial understanding capabilities of MLLMs. Extensive experiments demonstrate the effectiveness of SKG2Data and analyze the impact of spatial relationship types, dataset scales, and object densities.
Limitations

Noise in synthetic data. Despite the implementation of automatic filtering, the synthesized data may still contain noise. This noise arises from potential mismatches between images and text in diffusion models, as well as the inherent possibility of hallucinations in LLMs.

Difficulty synthesizing data with a large number of objects. Given the limitations of current diffusion models, generated images can sometimes show problems such as distortion and inconsistencies in object type, attributes, quantity, or spatial arrangement. The challenge of creating realistic images grows substantially as the number of objects in the scene increases. In some real-world situations where scenes are packed with many objects, our method may have difficulty accurately depicting these complex scenes.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Rabiul Awal, Saba Ahmadi, Le Zhang, and Aishwarya Agrawal. 2024. VisMin: Visual minimal-change understanding. CoRR, abs/2407.16772.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-VL: A frontier large vision-language model with versatile abilities. CoRR, abs/2308.12966.

James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. 2023. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Hsin-Yu Chang, Pei-Yu Chen, Tun-Hsiang Chou, Chang-Sheng Kao, Hsuan-Yun Yu, Yen-Ting Lin, and Yun-Nung Chen. 2024. A survey of data synthesis approaches. CoRR, abs/2407.03672.
Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023. MiniGPT-v2: Large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478.

Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. 2024. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality.

Mengfei Du, Binhao Wu, Zejun Li, Xuanjing Huang, and Zhongyu Wei. 2024. EmbSpatial-Bench: Benchmarking spatial understanding for embodied tasks with large vision-language models. CoRR, abs/2406.05756.

Yu Fei, Yifan Hou, Zeming Chen, and Antoine Bosselut. 2023. Mitigating label biases for in-context learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14014–14031. Association for Computational Linguistics.

Shangbin Feng, Vidhisha Balachandran, Yuyang Bai, and Yulia Tsvetkov. 2023. FactKB: Generalizable factuality evaluation using language models enhanced with factual knowledge. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 933–952. Association for Computational Linguistics.

Gemini Team. 2024. Gemini: A family of highly capable multimodal models. Preprint, arXiv:2312.11805.

Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2024. HallusionBench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 14375–14385. IEEE.

Wei He, Zhiheng Xi, Wanxu Zhao, Xiaoran Fan, Yiwen Ding, Zifei Shan, Tao Gui, Qi Zhang, and Xuanjing Huang. 2024. Distill visual chart reasoning ability from LLMs to MLLMs. CoRR, abs/2410.18798.

Nora A. Herweg and Michael J. Kahana. 2018. Spatial representations in the human brain. Frontiers in Human Neuroscience, 12:297.

John R. Hodges, Josef Spatt, and Karalyn Patterson. 1999. "What" and "how": Evidence for the dissociation of object knowledge and mechanical problem-solving skills in the human brain. Proceedings of the National Academy of Sciences, 96(16):9444–9448.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. CoRR, abs/2106.09685.

Amita Kamath, Jack Hessel, and Kai-Wei Chang. 2023. What's "up" with vision-language models? Investigating their struggle with spatial reasoning. arXiv preprint arXiv:2310.19785.

Jiho Kim, Sungjin Park, Yeonsu Kwon, Yohan Jo, James Thorne, and Edward Choi. 2023. FactKG: Fact verification via reasoning on knowledge graphs. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 16190–16206. Association for Computational Linguistics.

Seungone Kim, Juyoung Suk, Xiang Yue, Vijay Viswanathan, Seongyun Lee, Yizhong Wang, Kiril Gashteovski, Carolin Lawrence, Sean Welleck, and Graham Neubig. 2024. Evaluating language models as synthetic data generators. CoRR, abs/2412.03679.

Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, and Yang Liu. 2024. Scaffolding coordinates to promote vision-language coordination in large multi-modal models. arXiv preprint arXiv:2402.12058.

Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vulić, and Furu Wei. 2025. Imagine while reasoning in space: Multimodal visualization-of-thought. Preprint, arXiv:2501.07542.

Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. 2023. GLIGEN: Open-set grounded text-to-image generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 22511–22521. IEEE.

Long Lian, Boyi Li, Adam Yala, and Trevor Darrell. 2024. LLM-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models. Trans. Mach. Learn. Res., 2024.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning.
Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024a. LLaVA-NeXT: Improved reasoning, OCR, and world knowledge.

Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, and Andrew M. Dai. 2024b. Best practices and lessons learned on synthetic data for language models. CoRR, abs/2404.07503.

Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. 2023b. MMBench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281.

Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, and Haobo Wang. 2024. On LLMs-driven synthetic data generation, curation, and evaluation: A survey. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 11065–11082. Association for Computational Linguistics.

Meta AI. 2024. Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/.

Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory, and discussion. ACM J. Data Inf. Qual., 15(2):10:1–10:21.

OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.

OpenAI. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.

Oded Ovadia, Meni Brief, Rachel Lemberg, and Eitam Sheetrit. 2025. Knowledge-Instruct: Effective continual pre-training from limited data using instructions. Preprint, arXiv:2504.05571.

Michael Peer, Iva K. Brunec, Nora S. Newcombe, and Russell A. Epstein. 2021. Structuring knowledge with cognitive maps and cognitive graphs. Trends in Cognitive Sciences, 25(1):37–54.

Archiki Prasad, Elias Stengel-Eskin, and Mohit Bansal. 2023. Rephrase, augment, reason: Visual grounding of questions for vision-language models. arXiv preprint arXiv:2310.05861.
Arijit Ray, Jiafei Duan, Reuben Tan, Dina Bashkirova, Rose Hendrix, Kiana Ehsani, Aniruddha Kembhavi, Bryan A. Plummer, Ranjay Krishna, Kuo-Hao Zeng, and Kate Saenko. 2024. SAT: Spatial aptitude training for multimodal language models. CoRR, abs/2412.07755.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. High-resolution image synthesis with latent diffusion models. CoRR, abs/2112.10752.

Fatemeh Shiri, Xiao-Yu Guo, Mona Far, Xin Yu, Reza Haf, and Yuan-Fang Li. 2024. An empirical analysis on spatial reasoning capabilities of large multimodal models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 21440–21455. Association for Computational Linguistics.

Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, and Phillip Isola. 2024. Learning vision from models rivals learning vision from data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 15887–15898. IEEE.

Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. 2024. Eyes wide shut? Exploring the visual shortcomings of multimodal LLMs. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 9568–9578. IEEE.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971.

Zeyuan Wang, Qiang Zhang, Keyan Ding, Ming Qin, Xiang Zhuang, Xiaotong Li, and Huajun Chen. 2024. InstructProtein: Aligning human and protein language via knowledge instruction. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 1114–1136. Association for Computational Linguistics.

Ian Wu, Sravan Jayanthi, Vijay Viswanathan, Simon Rosenberg, Sina Pakazad, Tongshuang Wu, and Graham Neubig. 2024. Synthetic multimodal question generation. In Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024, pages 12960–12993.
Association for Computational Linguistics.

Ran Xu, Hejie Cui, Yue Yu, Xuan Kan, Wenqi Shi, Yuchen Zhuang, May Dongmei Wang, Wei Jin, Joyce C. Ho, and Carl Yang. 2024. Knowledge-infused prompting: Assessing and advancing clinical text data generation with large language models. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 15496–15523. Association for Computational Linguistics.

Zitong Yang, Neil Band, Shuangping Li, Emmanuel J. Candès, and Tatsunori Hashimoto. 2025. Synthetic continued pretraining. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qian Qi, Ji Zhang, and Fei Huang. 2023. mPLUG-Owl: Modularization empowers large language models with multimodality. CoRR, abs/2304.14178.

Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. 2025. LIMO: Less is more for reasoning. Preprint, arXiv:2502.03387.

Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2023. A survey on multimodal large language models. arXiv preprint arXiv:2306.13549.

Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2023. MM-Vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490.

Wenqi Zhang, Zhenglin Cheng, Yuanyu He, Mengna Wang, Yongliang Shen, Zeqi Tan, Guiyang Hou, Mingqian He, Yanna Ma, Weiming Lu, and Yueting Zhuang. 2024. Multimodal self-instruct: Synthetic abstract image and visual reasoning instruction using language model. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 19228–19252. Association for Computational Linguistics.
Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, and Tong Sun. 2023. LLaVAR: Enhanced visual instruction tuning for text-rich image understanding. CoRR, abs/2306.17107.

Bo Zhao, Boya Wu, and Tiejun Huang. 2023a. SVIT: Scaling up visual instruction tuning. CoRR, abs/2307.04087.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou,
Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023b. A survey of large language models. arXiv preprint arXiv:2303.18223.

Kening Zheng, Junkai Chen, Yibo Yan, Xin Zou, and Xuming Hu. 2024. Reefknot: A comprehensive benchmark for relation hallucination evaluation, analysis and mitigation in multimodal large language models. CoRR, abs/2408.09429.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. LIMA: Less is more for alignment. In NeurIPS 2023.

Guangming Zhu, Liang Zhang, Youliang Jiang, Yixuan Dang, Haoran Hou, Peiyi Shen, Mingtao Feng, Xia Zhao, Qiguang Miao, Syed Afaq Ali Shah, and Mohammed Bennamoun. 2022. Scene graph generation: A comprehensive survey. CoRR, abs/2201.00443.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. Int. J. Comput. Vis., 123(1):32–73.

A Appendix

Table 2: Detailed experimental settings.

Hyperparameter        Setting
Fine-tuning method    LoRA
Global Batch Size     64
Device†               NVIDIA A800
Learning Rate (LR)    1e-5
LoRA r                16
LoRA α                32
LoRA Dropout          0.05
Epoch                 2
Mixed precision       Enabled

Figure 7: Training data cases.

Prompt for "Scenes Generation":

You are an AI assistant specializing in imagining the scenes depicted in photos. Your task is to envision a variety of scenes that may appear in different photos. Your response needs to meet the following requirements: 1. Ensure that every scene is safe, accessible to most people, and common in real and daily life scenes. 2. The scenes in the photos should be diverse and non-repetitive. Each generated scene typically depicts a facility (usually requiring interaction with multiple objects such as a table or furniture) or a room, or a place or area.
Please directly output the generated scenes in the following list format: <In context learning> Now, please generate 160 different scenes. <image> Prompt for "Objects Generation": <Wikipedia documents> You are an AI assistant specializing in identifying objects likely to appear in a given scene. Your task is to list as many objects as possible that may be found in that scene. Your response needs to meet the following requirements: 1. Ensure that each object is based on common situations in reality and daily life. Ensure that objects are relatively simple and easy to draw, and once you see their images, you can determine what objects they are. Ensure that the object is safe and harmless. 2. The objects provided should not be duplicated or too similar. Try to describe an object with one word. If there are words with multiple meanings, it is necessary to ensure that they are distinguishable, such as "animal mouse" and "computer mouse". 3. Each scene needs to list as many objects as possible. If there may be people on site, please add ’person’. Here I will give you some examples for reference and use the following template in the format of a list for the response: <In context learning> <image> Prompt for "Spatial KG Generation (Nodes)": You are an AI assistant specializing in describing objects in a given scene. Your
task is to first select a subset of candidate objects, and then describe the attributes of the selected objects. Your response should meet the following requirements: 1. Select 2 to 4 suitable objects from the given objects to form a possible combination. Note that for some candidate objects that are suitable for repeated appearance, they can be selected repeatedly. Please prioritize selecting more common objects. Do not choose objects that are difficult to represent or recognize in the image. 2. Provide further attribute descriptions for the selected objects, including color, orientation, etc., and describe countable objects as singular. Be careful not to describe objects that do not exist in the real world or are unsuitable for the current scene. The description of each object should be concise, without being overly complex and abstract, and avoid including details that cannot be visualized in the image. For some objects that are difficult to add descriptions, additional descriptions can be omitted. 3. The selected combination of objects can be repeated, but the described attributes need to be different. 4. If there are words with multiple meanings, it is necessary to ensure that they are distinguishable and you need to determine which one is more likely based on the scene and other objects, such as ’animal mouse’ and ’computer mouse’. 5. You need to provide 25 to 50 possible combinations based on the number of objects given. Here I will give you some demonstrations for reference and use the following list template for the response: <In context learning> <image> Prompt for "Spatial KG Generation (Triplets)": You are an AI assistant specializing in describing the positional relationship between objects in a scene. Your task is to describe the positional relationship of the given objects. Your response needs to meet the following requirements: 1. Be careful not to describe positional relationships that do not exist in the real world. 
When describing the positional relationship between two objects, do not involve other objects. At most one positional relationship triplet is generated between every two objects. 2. There may be two types of positional relationships between two objects: directional relationships and distance relationships. Directional relationships include above, below, left, right, in front of, behind, etc. Distance relationships include close, far, near, etc. Separate two positional relationships with commas when generating them simultaneously. For situations with a small number of objects, prioritize generating directional relationships. Here I will give you some demonstrations for reference and use the following template in the format of a list for the response: <In context learning> <image>

Table 3: The prompt examples that we use in the first module.

Prompt for "Image Data Generation":

You are an intelligent bounding box generator. I will provide you with the scene, objects and their spatial relationships in a photo. Your task is to generate the bounding boxes for the given objects, along with a background prompt describing the scene and a prompt describing the photo. The images are of size 512x512. The top-left corner has coordinate [0, 0]. The bottom-right corner has coordinate [512, 512]. The bounding boxes should
not overlap or go beyond the image boundaries. Each bounding box should be in the format of (object name, [top-left x coordinate, top-left y coordinate, box width, box height]) and should not include more than one object. Do not put objects that are already provided in the bounding boxes into the background prompt. Do not include non-existing or excluded objects in the background prompt. Use "A realistic scene" as the background prompt if no background is given in the prompt. There may be multiple spatial relationships between objects, and special attention should be paid to using bounding boxes to represent these spatial relationships. Please refer to the example below for the desired format. <In context learning> <image>

Prompt for "Image Data Filtering and Textual Data Generation":

You are an AI assistant specializing in evaluating the quality of a given image and generating question and answer pairs related to the image. Your task is to first determine whether the given image should be saved based on its quality and whether it matches the descriptions, and then continue generating question and answer pairs if it is determined to be saved. Descriptions of the image will provide information about the objects present in the image, their quantity, their attributes, their positional relationships, their bounding boxes (object name, [top-left x coordinate, top-left y coordinate, box width, box height]), and objects that are absent from the image. You need to follow the steps below to complete the task: Step 1. You need to ensure the clarity, quality, and authenticity of the image. You need to verify if there are any unrealistic or unreasonable parts in this image, such as clearly unrealistic looking humans or animals, and if there are any indistinguishable objects. Step 2. You need to ensure that the image contains the specified objects and does not contain any excluded objects.
Specified objects are given in the form of dict, and excluded objects are presented in a list format. Please note that the image should accurately reflect the attributes of the specified object, such as Orientation and Direction, State and Condition, Color and Appearance, Structural and Physical Characteristics, Viewpoint and Perspective. Step 3. You need to ensure that the objects in the image have a positional relationship with the matching descriptions. The positional relationships are given in the form of a list, and the objects correspond to the dict. Step 4. If the image matches step 1 and basically matches steps 2 and 3, please answer ’Yes’ and proceed to the next steps to generate question answer pairs. Otherwise, please answer ’No’ and do not proceed to the next steps. Step 5. You need to generate as many question answer pairs as possible based on the correct parts of these descriptions and information that can be confidently determined in the given image, while ensuring quality and diversity. For Step 5, the following requirements must be met: 1. Please note not to mention any information related to the image descriptions in your question-answer pairs, as the image description will not be provided in future tasks using these
question-answer pairs. ... 4. Use the template in the format of a JSON list to generate question and answer pairs without any other information. Below are reference responses for saving and not saving this image: <In context learning> <image> Prompt for "Textual Data Filtering": You are an AI assistant specializing in reviewing question-answer pairs about a given image. Your task is to find the correct question-answer pairs based on the given image. Your response needs to meet the following requirements: 1. You need to determine whether the answers to the questions are correct based on the given images, and save the correct question-answer pairs. 2. Use a JSON list format template with the same format as the input to generate correct question-answer pairs without generating any additional information. Here is the question and answer about the given image: <In context learning> <image> Table 4: The prompt examples that we use in the second module.
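The layout constraints stated in the Table 3 image-generation prompt (a 512x512 canvas with origin at the top-left, boxes fully inside the image, and no overlap between boxes) can also be checked mechanically before an image is rendered. The sketch below is our own illustration of such a check, not the authors' code; all function and variable names are hypothetical.

```python
# Validate layouts in the prompt's bounding-box format:
# (object name, [top-left x, top-left y, box width, box height]) on a 512x512 canvas.

CANVAS = 512

def in_bounds(box):
    """A box must have positive size and lie fully inside the canvas."""
    x, y, w, h = box
    return 0 <= x and 0 <= y and w > 0 and h > 0 and x + w <= CANVAS and y + h <= CANVAS

def overlaps(a, b):
    """Axis-aligned rectangle intersection test; shared edges do not count."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def valid_layout(layout):
    """layout: list of (name, [x, y, w, h]) tuples, as in the prompt."""
    boxes = [b for _, b in layout]
    if not all(in_bounds(b) for b in boxes):
        return False
    return not any(overlaps(boxes[i], boxes[j])
                   for i in range(len(boxes)) for j in range(i + 1, len(boxes)))

layout = [("cup", [100, 200, 50, 50]), ("table", [300, 200, 150, 100])]
```

A rejected layout could then be regenerated rather than passed to the diffusion model, complementing the LLM-based filtering described in Table 4.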
arXiv:2505.22635v1 [cs.CL] 28 May 2025

Learning Composable Chains-of-Thought

Fangcong Yin♠, Zeyu Leo Liu♠, Liu Leqi♠, Xi Ye♢, Greg Durrett♠
♠The University of Texas at Austin, ♢Princeton University
fangcongyin@utexas.edu

Abstract

A common approach for teaching large language models (LLMs) to reason is to train on chain-of-thought (CoT) traces of in-distribution reasoning problems, but such annotated data is costly to obtain for every problem of interest. We want reasoning models to generalize beyond their training distribution, and ideally to generalize compositionally: combine atomic reasoning skills to solve harder, unseen reasoning tasks. We take a step towards compositional generalization of reasoning skills when addressing a target compositional task that has no labeled CoT data. We find that simply training models on CoT data of atomic tasks leads to limited generalization, but minimally modifying CoT formats of constituent atomic tasks to be composable can lead to improvements. We can train "atomic CoT" models on the atomic tasks with Composable CoT data and combine them with multitask learning or model merging for better zero-shot performance on the target compositional task. Such a combined model can be further bootstrapped on a small amount of compositional data using rejection sampling fine-tuning (RFT). Results on string operations and natural language skill compositions show that training LLMs on Composable CoT outperforms multitask learning and continued fine-tuning baselines within a given training data budget.1

1 Introduction

Large language models (LLMs) have succeeded at addressing many problems largely by virtue of the massive amounts of data they are trained on. Many problems that defied earlier approaches have become easy because they are now in-distribution for models that have seen similar data during pre-training or instruction-tuning.
However, these models still fail at challenging reasoning tasks, and it is impossible to scale training data to cover all possible tasks of interest. Ideally, we want models that can generalize to new settings, and particularly, can apply basic "skills" learned during training in novel combinations to solve problems at inference time. How to empower LLMs with this capability, also called compositional generalization [1, 2, 3, 4], remains an open question. For instance, large reasoning models [5, 6], built on pre-trained LLMs, are typically trained on a large amount of data annotated with chain-of-thought (CoT) traces, but demonstrating how to generalize from easy problems to harder ones would break the dependence on the scale of training data [7, 8], and would facilitate more efficient and robust reasoning with LLMs.

We explore the setting of compositional reasoning where pre-trained LLMs are fine-tuned on CoT data of simple reasoning tasks (atomic tasks) and then evaluated on the combinations of them (compositional tasks) with limited compositional supervision. We find that models trained with atomic CoT data of the standard CoT format demonstrate limited generalization: they typically memorize and reproduce the atomic CoT patterns rather than successfully composing them. We propose a simple modification of the CoT format of the atomic task training data, which we call Composable CoT, to improve the compositional reasoning capability by enabling models to compose atomic reasoning skills at inference time.

1 Code and data are available at: https://github.com/fc2869/composable_cot.

Preprint. Under review.

Figure 1: (a) Composable Chain-of-thought (left): A compositional task involves two separate atomic capabilities. We use a data augmentation scheme to teach LLMs CoT formats that can be combined at inference time to address compositional tasks. (b) Pipeline for learning Composable CoT (right): Models trained on composable CoT data of atomic skills can be combined with multitask learning or model merging for zero-shot compositional generalization, and can be further improved by rejection sampling fine-tuning on limited compositional supervision.
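The worked example in Figure 1 wraps the atomic CoT segments in <prefix>…</prefix> and <suffix>…</suffix> tags. A composed output in that format can be taken apart mechanically; the small parser below is our own illustrative sketch of such a check (the tag names come from the figure, but the helper itself and its answer-extraction pattern are assumptions, not the authors' code).

```python
import re

def parse_composable_cot(text):
    """Split a Figure 1-style output into (prefix CoT, suffix CoT, final answer)."""
    prefix = re.search(r"<prefix>(.*?)</prefix>", text, re.S)
    suffix = re.search(r"<suffix>(.*?)</suffix>", text, re.S)
    # Assumed convention from the figure: the suffix ends "So the answer is X."
    answer = re.search(r"So the answer is\s*([^.<]+)\.", suffix.group(1)) if suffix else None
    return (
        prefix.group(1).strip() if prefix else None,
        suffix.group(1).strip() if suffix else None,
        answer.group(1).strip() if answer else None,
    )

# Output for the Figure 1 example ("Good night": last letter t -> u, ASCII 117, x5 = 585).
out = ("<prefix> The last letter is t, and the letter following it in the alphabet is u. "
       "So the answer is u. </prefix> <suffix> The ASCII value of the letter u is 117, and "
       "multiplying the ASCII value by 5 gives us 585. So the answer is 585. </suffix>")
p, s, a = parse_composable_cot(out)
```

Such a parser makes it easy to verify that a model actually emitted both atomic segments in order, which is the behavior the Composable CoT format is designed to elicit.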
We first experiment with zero-shot combination of Composable CoT models as illustrated in Figure 1a. We experiment with two different approaches: first, merging models trained on individual atomic CoT tasks, and second, multitask learning across our atomic CoT datasets. Such combined models achieve zero-shot compositional generalization, even beating models trained on compositional data in some settings. Moreover, we show that for the compositional task, zero-shot Composable CoT models can generate CoT traces that rely less on spurious correlations and reasoning shortcuts.

We then demonstrate that our zero-shot models can be improved further by rejection sampling fine-tuning on a limited amount of compositional supervision, as shown in Figure 1b. Using only downstream answer supervision, our models can bootstrap better compositional CoT behavior. On various tasks involving string operations and natural language skill composition, we show that our approach outperforms multi-task learning and continued fine-tuning baselines within a given budget of training data.

The main contributions of this work are: (1) a novel data augmentation scheme for training CoT models on simple reasoning tasks that enables later composition of atomic reasoning skills; (2) a method for improving compositional reasoning with LLMs by combining CoT models trained with such augmentation and applying rejection sampling fine-tuning for better compositional reasoning performance.

2 Preliminaries

LLM reasoning with chain-of-thought. Given a prompt
q that states a reasoning problem, an LLM M is prompted to solve it by drawing samples from the conditional distribution ỹ ∼ p_M(y | q). Let a denote the ground-truth answer to q. We consider two common ways of solving q: (1) Direct answer: ỹ contains only the predicted answer ã; (2) Reasoning with chain-of-thought: ỹ includes a chain-of-thought trace t, followed by a predicted answer ã.

While CoT reasoning can be elicited through zero-shot prompting [9, 10], recent works show that fine-tuning pre-trained LLMs on CoT traces leads to strong reasoning models [11, 6]. We define a dataset for a reasoning task T as a set of (prompt, answer) pairs: D_T = {(q, a)}. A dataset with CoT traces is then D^CoT_T = {(q, t, a)}. To fine-tune M, parametrized as θ, on D^CoT_T with supervised fine-tuning, we minimize the following supervised learning loss, where |D^CoT_T| = N:

L_{D^CoT_T}(θ) = −(1/N) Σ_{(q,t,a)∈D^CoT_T} [ log p_θ(t | q) + log p_θ(a | q, t) ]

Atomic and compositional tasks. Consider a set of k tasks that represent basic reasoning skills T_1, T_2, ..., T_k, which we call atomic tasks. We define compositional tasks T_(i,j), where i, j ∈ {1, ..., k} and i ≠ j, as those tasks that can be expressed as a composition of two atomic tasks; that is, there exist T_i and T_j such that T_(i,j) = g(T_i, T_j), where g is some function that combines the two atomic tasks. We discuss more details of g in Appendix A. We call a collection of reasoning skills basic if no skill can be reduced to a combination of the others, i.e., no task can be efficiently solved by a rule-based transformation of the solutions of some combination of other tasks.

Compositional reasoning from atomic CoT. For a pair of atomic tasks T_i and T_j sampled from the k tasks, we assume access to atomic CoT data D^CoT_{T_i} and D^CoT_{T_j} with dataset sizes N_i and N_j. We define models fine-tuned on atomic CoT data as atomic CoT models. For their composition T_(i,j), we assume we only have access to a training dataset D_{T_(i,j)} of size N_(i,j).
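In practice, this objective is implemented with standard label masking; the sketch below uses toy token ids and the -100 ignore-index convention of common LM training frameworks, which is an assumption of ours rather than anything specific to this paper.

```python
IGNORE_INDEX = -100  # label value excluded from the loss in common LM frameworks

def build_sft_example(q_ids, t_ids, a_ids):
    """Build (input_ids, labels) for one (q, t, a) triple: the model sees the
    full sequence, but only trace and answer tokens are supervised, matching
    log p(t|q) + log p(a|q,t)."""
    input_ids = q_ids + t_ids + a_ids
    labels = [IGNORE_INDEX] * len(q_ids) + t_ids + a_ids
    return input_ids, labels

ids, labels = build_sft_example([11, 12, 13], [21, 22], [31])
assert labels == [IGNORE_INDEX, IGNORE_INDEX, IGNORE_INDEX, 21, 22, 31]
```

Averaging the resulting per-example negative log-likelihoods over the N examples gives the loss above.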
We make two assumptions about this data, which follow from practical considerations about how our compositional CoTs would work in practice. First, we assume that this data contains only the direct answer and not a labeled example of a CoT. This reflects that high-quality annotated CoT supervision may be harder to obtain in practice than correct answers. Second, we assume that N_(i,j) is small. We may be able to collect a small amount of data for each new compositional task of interest, but these compositional tasks are too numerous to undertake large-scale data collection on. We are interested in the following question: can we achieve good compositional performance by fine-tuning with the atomic CoT data and a limited amount of compositional direct-answer data?

3 Learning Composable Chains-of-Thought

3.1 Constructing Composable CoT Training Data

Figure 2: Construction of Composable CoT data with k chain-of-thought tags. We insert k−1 proxy prefixes at the end of the prompt, before the generation of t_k.

Consider an atomic CoT
dataset D^CoT_T = {(q, t, a)} for T ∈ {T_i, T_j}, which we call standard CoT data. Given a pair of atomic tasks, we assume the CoT traces in each atomic task's data follow a certain distribution distinct to that dataset. A pre-trained LLM M_0 fine-tuned on the standard CoT data is only optimized to generate CoT traces that replicate patterns in those two distributions. It is unclear whether a fine-tuned model can produce compositional CoTs for a prompt drawn from the compositional task distribution, as this will generally be out-of-distribution from the perspective of each of the two datasets. Without additional supervision signals, such fine-tuned models typically replicate only one of the learned atomic reasoning patterns in the generated CoT; we show empirical evidence for this in Section 5.3. In order to generate two atomic CoTs in one sequence t_i t_j, the model must allocate substantial probability to p(t_j | q, t_i), despite such sequences not being in the training distribution. Our goal is to make this as in-distribution as possible, even for a model that does not train on explicit compositional examples.

Construction. We define a set of n chain-of-thought tags P = {p_1, ..., p_n}. For each training example (q, t, a) in the atomic CoT dataset D^CoT_T, we randomly determine whether to augment it with one of the n tags. If an example is assigned tag p_k, we treat it as the k-th step in a notional compositional reasoning process. To achieve this, we append k−1 proxy prefix CoTs (t′_1, ..., t′_{k−1}) to the end of the prompt: D^suf_T = (q t′_1 ... t′_{k−1}, t_k), where t_k = <suffix> t a </suffix>. The procedure is illustrated in Figure 2. We discuss different design choices for forming the proxy prefix CoTs in Appendix B. Here we present a simple yet effective approach in which each proxy prefix CoT is a sequence of randomly sampled letters of random length. This design aims at teaching models to generate a robust continuation following an arbitrary prefix CoT.
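The augmentation can be sketched as follows for the two-tag case (a prefix tag and a suffix tag). The exact tag strings, answer phrasing, and proxy-prefix length bounds here are our own illustrative assumptions, not the authors' exact templates.

```python
import random
import string

def random_proxy_cot(min_len=8, max_len=24):
    """A proxy prefix CoT: randomly sampled letters of a random length (the
    simple variant described above); length bounds are illustrative choices."""
    n = random.randint(min_len, max_len)
    return "".join(random.choices(string.ascii_lowercase, k=n))

def make_prefix_example(q, t, a):
    """D^pre_T: the prefix CoT is conditioned only on the prompt q."""
    return q, f"<prefix> {t} So the answer is {a}. </prefix>"

def make_suffix_example(q, t, a):
    """D^suf_T: append one proxy prefix CoT to the prompt (the two-tag case,
    i.e., k - 1 = 1 proxy), then supervise the tagged suffix CoT."""
    proxy = f"<prefix> {random_proxy_cot()} </prefix>"
    return f"{q} {proxy}", f"<suffix> {t} So the answer is {a}. </suffix>"
```

At training time, each atomic example is routed to one of the two constructors, producing the mixed prefix/suffix dataset described next.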
While the method applies to n > 2, we focus on the case n = 2 for simplicity, i.e., we only have a prefix CoT tag and a suffix CoT tag. Concretely, this gives us an implementation consisting of a prefix CoT t_pre, which is conditioned only on the prompt q, and a suffix CoT t_suf, which is conditioned on the prompt and a proxy prefix CoT. The prefix CoT dataset is constructed as D^pre_T = (q, t_pre), where t_pre = <prefix> t a </prefix>.

Training objective. Given our construction, we fine-tune M_0 on the augmented dataset D^aug_T = D^pre_T + D^suf_T with a multitask learning objective that minimizes the sum of the negative log-likelihoods of generating the N_pre prefix CoTs and the N_suf suffix CoTs, where N_pre + N_suf = N:

L_{D^aug_T}(θ) = L_{D^pre_T}(θ) + L_{D^suf_T}(θ) = −(1/N_pre) Σ^{N_pre} log p_θ(t_pre | q) − (1/N_suf) Σ^{N_suf} log p_θ(t_suf | q, t′_pre)

3.2 Combining Atomic CoT Models

ComposableCoT-MTL. We apply multitask learning (MTL) to fine-tune M_0 on the combined dataset D^aug_{T_i} + D^aug_{T_j} and obtain a single MTL model M_comb that can generate prefix and suffix CoTs for both atomic tasks.

ComposableCoT-Merge. Model merging is another way to
combine multiple models into a single multi-task model [12, 13, 14]. Starting from M_0, we fine-tune two models M_i and M_j (parametrized by θ_i and θ_j) on D^aug_{T_i} and D^aug_{T_j} respectively to optimize for atomic task performance. We then use Task Arithmetic [12] to merge the two models into a single model M_comb, parametrized by θ_comb, as a linear combination of the differences between the two fine-tuned parameter sets and the base model parameters:

θ_comb = α(θ_i − θ_0) + β(θ_j − θ_0) + θ_0

where α and β are tunable scaling factors.

Inference. Both variants of M_comb can be used for zero-shot evaluation on the compositional task T_(i,j). At inference time, we sample a response from M_comb, append <suffix> to the end of the generated response when it stops generating, and continue generation until the model stops again.

3.3 Improving Composition with Rejection Sampling Fine-tuning

M_comb can be further improved with self-taught reasoning [15] via rejection sampling fine-tuning (RFT) [16, 17] on the limited compositional data. Recall that for the compositional task, we only have direct answer labels instead of CoT traces. M_comb can serve as a starting point for RFT, where we fine-tune M_comb on its own correct CoT responses on the compositional task. Algorithm 1 shows the procedure. Concretely, we sample responses from M_comb for each example in the compositional training data. Using the direct answer labels to verify the sampled responses, we collect a supervised fine-tuning dataset D_RFT on which to continue fine-tuning M_comb. This process can be repeated for multiple iterations. Note that some tasks do not have a single correct answer to a given question (e.g., open-ended generation), and it would be hard to verify the correctness of sampled outputs based only on direct answer labels.
For those tasks, we follow [15, 18] and perform rationalization: we first append the direct answer label to the end of the prompt and sample post-hoc explanations for the given answer from the model. Because M_comb is optimized to generate an answer following a CoT, we extract the generated answer following each generated explanation and filter out explanations whose following answer does not match the provided gold answer. Finally, we use the accepted explanations as surrogates for CoTs to form the RFT data.

Algorithm 1: Bootstrapping Atomic CoT Models Trained on Composable CoT
Input: the combined model M_comb; dataset D_{T_(i,j)} = {(q_v, a_v)}, v = 1, ..., N_(i,j); the number of iterations c.
Output: the final model M_c.
1:  M_0 ← M_comb                                              (initialization)
2:  for w in 1 ... c do
3:    if use rationalization then
4:      (t̃_v, ã_v) ← M_{w−1}(q_v a_v) for all v ∈ {1, ..., N_(i,j)}   (perform rationalization)
5:    else
6:      (t̃_v, ã_v) ← M_{w−1}(q_v) for all v ∈ {1, ..., N_(i,j)}
7:    end if
8:    D_RFT ← {(q_v, t̃_v, a_v) s.t. v ∈ {1, ..., N_(i,j)} and ã_v = a_v}   (keep CoTs with correct answers)
9:    M_w ← SFT(M_comb, D_RFT)                                (fine-tune the combined model on the accepted CoT data)
10: end for

4 Experimental Setup

We evaluate on two sets of tasks: a set of string operation tasks and tasks derived from the Skill-Mix [19] dataset. Each setting involves atomic tasks and compositional tasks. We ensure that all atomic tasks are learnable through supervised fine-tuning with a small amount of training data (N_i, N_j ≤ 500); the single-task learning performance can be found in Appendix E.

String operation tasks. We consider
the following atomic tasks that involve string operations. (1) Last letter in alphabet: determine the next letter in the alphabet following the last letter in a sequence of letters. (2) Letter concatenation: adapted from [9, 20, 4], this task prompts the LLM to concatenate the first, second, second-to-last, or last letter of each word in a given sequence of words. (3) ASCII multiplication: perform multiplicative operations on the ASCII value of a given letter.

We consider the following compositions of the atomic tasks.
1. Last letter + multiplication: given a sequence of letters, find the next letter in the alphabet following the last letter, determine its ASCII value, and then perform multiplication with a given constant.
2. Concatenation + last letter: given a sequence of words, concatenate the first, second, or second-to-last letter of each word and then find the next letter in the alphabet following the last letter of the concatenated sequence.
3. Concatenation + multiplication: given a sequence of words, concatenate the first, second, or second-to-last letter of each word, find the ASCII value of the last letter of the concatenated sequence, and then perform multiplication.

Data and CoT traces for these tasks are automatically generated using fixed templates. The data generation procedure and examples can be found in Appendix C.

Skill-Mix. Given the definition and an example of a language skill (e.g., hyperbole), the model needs to write a sentence demonstrating the skill for a given topic. Because each language skill in the Skill-Mix dataset has only a few examples, we consider an atomic task to be handling skills from one category, and we evaluate on two categories that are largely mutually exclusive: literary devices (Skill-Mix-Literary) and rhetorical devices (Skill-Mix-Rhetorical). Atomic CoT traces for Skill-Mix are distilled from GPT-4o [21], following [22]. Examples and details can be found in Appendix D.
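As an illustration, the ground-truth answers for the three string-operation compositions can be computed with a few lines of Python. This is our own reconstruction from the task descriptions; the paper's actual templates (Appendix C) may differ in details such as wrap-around behavior at "z", which is an assumption here.

```python
def next_letter(c):
    """Atomic task 1: next letter in the alphabet (wrapping z -> a is assumed)."""
    return chr(ord("a") + (ord(c.lower()) - ord("a") + 1) % 26)

def concat_letters(words, pos=0):
    """Atomic task 2: concatenate the letter at position `pos` of each word."""
    return "".join(w[pos] for w in words)

def ascii_mult(c, k):
    """Atomic task 3: multiply the ASCII value of a letter by a constant."""
    return ord(c) * k

# Composition 1 (last letter + multiplication): "What a nice day", constant 2
assert ascii_mult(next_letter("What a nice day"[-1]), 2) == 244   # y -> z, 122 * 2

# Composition 2 (concatenation + last letter): first letters of "good night"
assert next_letter(concat_letters(["good", "night"])[-1]) == "o"  # "gn" -> n -> o

# Composition 3 (concatenation + multiplication)
assert ascii_mult(concat_letters(["good", "night"])[-1], 3) == 330  # ord("n") = 110
```

Because each composition is a pure function of its atomic parts, exact-match labels for arbitrary prompts can be generated from the same templates.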
The composition tasks we consider combine literary and rhetorical skills: generate a sentence that demonstrates two provided skills, each sampled from one of the two categories.

Evaluation Metrics. All string operation tasks are evaluated using exact match accuracy; a regex-based answer extractor is used to extract the answer from the generated response. For Skill-Mix tasks, we use the metrics from [19] to measure the quality of the generated sentence based on a rubric (namely, Full Marks and Skill Fraction), and use GPT-4o-mini for automatic grading. Details can be found in Appendix D.2.

Figure 3: Summary of settings for the methods evaluated. CoT format: Standard CoT (past work) vs. Composable CoT (this work; §3.1). Combination method: model merging (one model trained on each dataset, then merged) vs. multi-task learning (MTL; one model trained on both datasets). Further training on compositional data: CFT (continued fine-tuning on answer-only compositional data) vs. RFT (rejection sampling of CoTs, fine-tuning on those leading to correct answers). Names in the results tables reference configurations described in this figure; e.g., ComposableCoT-Merge uses Composable CoTs with model merging, and in the zero-shot setting does not use further tuning.

Zero-shot/Few-shot Baselines. Figure 3 summarizes the high-order variables of the configurations we evaluate. For zero-shot compositional generalization, we include the following baselines: (1) Few-shot direct answer prompting: we prompt M_0 with 5-shot demonstrations drawn from the compositional data; (2) Few-shot CoT prompting: we prompt M_0 with 5-shot CoT demonstrations drawn from the atomic data; (3) Model merging of atomic CoT models (StandardCoT-Merge): we fine-tune two models M_i and M_j based on M_0 with D^CoT_{T_i} and D^CoT_{T_j} respectively and merge them into M_comb with Task Arithmetic; (4) Multitask learning of atomic CoTs (StandardCoT-MTL): we fine-tune M_0 into a single multitask model M_SCoT-MTL on D^CoT_{T_i} + D^CoT_{T_j}.

Baselines with Compositional Supervision. Using the same compositional training dataset with only answer labels, D_{T_(i,j)}, we compare bootstrapped Composable CoT with the following baselines: (1) Continued fine-tuning (CFT) of the multitask model of atomic CoTs (CFT on StandardCoT-MTL): we continue fine-tuning the multitask model M_SCoT-MTL on D_{T_(i,j)}; (2) Continued fine-tuning of the merged model of atomic CoTs (CFT on StandardCoT-Merge): we continue fine-tuning the merged model of the two atomic CoT models, M_comb, on D_{T_(i,j)}; (3) Multitask learning of atomic CoTs and compositional answers (StandardCoT + Comp Answer): we fine-tune a single multitask model based on M_0 on the combined dataset D^CoT_{T_i} + D^CoT_{T_j} + D_{T_(i,j)}. We also include supervised learning baselines (SFT) where M_0 is fine-tuned on the same compositional answer data D_{T_(i,j)}. The differences among the methods we evaluate for each setting are summarized in Table 10.

Models and Training. We use Llama 2 7B-base [23] and Qwen2.5 7B-base [24] for experiments. We use LoRA [25] for the supervised fine-tuning experiments. For rejection sampling, we sample 10 responses for each prompt and use temperature τ = 0.9 for inference; for the other methods, we use greedy decoding. For Skill-Mix tasks, we perform rationalization for RFT because it is an open-ended generation task (see Section 3.3). Configurations and hyperparameters are in Appendix F.
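The Task Arithmetic merge used by the -Merge variants can be sketched as follows; plain Python lists of floats stand in for the parameter tensors of an actual checkpoint.

```python
def task_arithmetic_merge(theta_0, theta_i, theta_j, alpha, beta):
    """Task Arithmetic: theta_comb = alpha*(theta_i - theta_0)
    + beta*(theta_j - theta_0) + theta_0, applied per named parameter."""
    merged = {}
    for name, base in theta_0.items():
        merged[name] = [
            alpha * (wi - w0) + beta * (wj - w0) + w0
            for w0, wi, wj in zip(base, theta_i[name], theta_j[name])
        ]
    return merged

theta_0 = {"w": [1.0, 1.0]}   # base model M_0
theta_i = {"w": [2.0, 1.0]}   # fine-tuned on task i
theta_j = {"w": [1.0, 3.0]}   # fine-tuned on task j
merged = task_arithmetic_merge(theta_0, theta_i, theta_j, alpha=0.5, beta=0.5)
assert merged == {"w": [1.5, 2.0]}
```

The scaling factors α and β are the tunable hyperparameters mentioned in Section 3.2; with LoRA fine-tuning, the same arithmetic can be applied to the adapter weight deltas.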
5 Results

5.1 Zero-shot Generalization

We evaluate the compositional generalization of the proposed method without compositional supervision, including ComposableCoT-Merge and ComposableCoT-MTL. For all methods we compare with, we control the amount of training data to be the same as N_i and N_j. For reference, we also include the supervised fine-tuning baseline obtained by fine-tuning M_0 with the N_(i,j) compositional answer data. Details of the training data for each task can be found in Appendix G.

Learning Composable CoT achieves better zero-shot generalization. Table 1 shows that ComposableCoT variants outperform all baselines across a range of settings for both models. Moreover, despite having seen no compositional training data, our method achieves comparable or even better performance than supervised fine-tuning baselines with compositional supervision (e.g., last letter + multiplication). We also find that combining atomic CoT models trained on Composable CoT is better than combining models trained on standard CoT: ComposableCoT-Merge is better than StandardCoT-Merge in most settings, and ComposableCoT-MTL is likewise better than StandardCoT-MTL. These results indicate that the Composable CoT format leads to better "composability" at inference time. We note one error case of ComposableCoT-Merge on concatenation + last letter on Qwen 2.5-7B; we discuss it in detail in Section 5.3.

Table 1: Zero-shot compositional generalization of ComposableCoT with different combination approaches vs. baselines. Without any compositional supervision, using model merging or multitask learning to combine atomic CoT models trained on Composable CoT data outperforms baselines across settings and models, and is sometimes comparable to SFT with compositional supervision.

Method | Last Letter + Mult (EM) | Concat + Last Letter (EM) | Concat + Mult (EM) | Skill-Mix Lit.+Rhet. (Full Marks) | Skill-Mix Lit.+Rhet. (Skill Fraction)
Llama 2-7B
SFT on Base Model (with compositional supervision) | 3.1 | 5.0 | 9.0 | 35.5 | 60.1
Few-shot Answer | 1.0 | 0.0 | 0.0 | 4.1 | 16.4
Few-shot CoT | 2.0 | 3.0 | 1.0 | 7.3 | 23.1
StandardCoT-Merge | 2.0 | 12.5 | 2.3 | 11.0 | 31.6
ComposableCoT-Merge (Ours) | 16.0 | 19.1 | 3.0 | 19.6 | 37.1
StandardCoT-MTL | 5.0 | 0.0 | 0.0 | 17.6 | 38.7
ComposableCoT-MTL (Ours) | 18.7 | 6.5 | 3.1 | 22.9 | 49.9
Qwen 2.5-7B
SFT on Base Model (with compositional supervision) | 4.6 | 31.9 | 2.0 | 35.5 | 60.3
Few-shot Answer | 2.4 | 0.0 | 2.7 | 34.7 | 56.0
Few-shot CoT | 2.0 | 0.0 | 21.3 | 31.8 | 41.6
StandardCoT-Merge | 70.4 | 54.8 | 77.0 | 29.8 | 48.0
ComposableCoT-Merge (Ours) | 95.4 | 19.2 | 75.4 | 39.6 | 62.1
StandardCoT-MTL | 3.6 | 60.9 | 72.1 | 42.0 | 58.2
ComposableCoT-MTL (Ours) | 96.3 | 63.3 | 74.3 | 49.0 | 66.7

The optimal method to combine atomic CoT models differs across settings. Regardless of using StandardCoT or ComposableCoT, combining atomic CoT models with model merging is generally better than multitask learning for concatenation + multiplication, and multitask learning is better for Skill-Mix literary + rhetorical. We hypothesize that such differences are caused by the level of conflict in the model parameter space between different pairs of atomic tasks.

5.2 Compositional Performance with Limited Supervision

We evaluate the performance of Composable CoT models after they are further improved with one iteration of RFT using the limited compositional supervision. We compare against multitask learning and continued fine-tuning baselines given the same compositional answer dataset D_{T_(i,j)} of size N_(i,j) ≤ 500. For reference, we include the baseline of fine-tuning M_0 on the same compositional answer data. Details of the data condition can be found in Appendix G.
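The bootstrapping loop of Algorithm 1, as evaluated here with one iteration, can be sketched as follows; `sample` and `sft` are stand-ins for the actual generation and fine-tuning routines, not the authors' implementation.

```python
def rft_bootstrap(m_comb, data, iters, sample, sft, num_samples=10):
    """Rejection sampling fine-tuning with direct-answer verification.

    `sample(model, q)` returns a (CoT trace, extracted answer) pair;
    `sft(base, triples)` fine-tunes a model on (prompt, trace, answer) triples.
    """
    model = m_comb
    for _ in range(iters):
        accepted = []
        for q, gold in data:
            for _ in range(num_samples):
                trace, ans = sample(model, q)
                if ans == gold:                # keep CoTs whose answer matches the label
                    accepted.append((q, trace, gold))
        model = sft(m_comb, accepted)          # Algorithm 1 restarts from M_comb each round
    return model
```

Note that sampling at round w uses the previous round's model, while fine-tuning always starts again from M_comb, mirroring line 9 of Algorithm 1.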
Table 2 shows that, within the same budget of compositional training data, using RFT on top of ComposableCoT-MTL and ComposableCoT-Merge achieves the best compositional task performance, outperforming multitask learning and continued fine-tuning baselines across settings. We further investigate whether the performance is mainly driven by RFT or by learning the Composable CoT format. We compare RFT upon StandardCoT-Merge with RFT upon ComposableCoT-Merge for Llama 2-7B, and StandardCoT-MTL with ComposableCoT-MTL for Qwen 2.5-7B.² Table 2 shows that RFT is a better way than MTL or SFT to improve the compositional task performance of StandardCoT models with compositional data. One explanation is that SFT or MTL based on the atomic CoT models using compositional answer data can lead to a distribution shift in the output space (from generating a CoT sequence to generating a direct answer); mitigating such a distribution shift requires more advanced techniques, usually called CoT internalization [26] or latent CoT [27], which are beyond the focus of this work. RFT does not have this problem. Moreover, RFT upon ComposableCoT models is generally better than RFT upon StandardCoT models.³

² To choose the ablation baselines to compare with, we take the StandardCoT model that wins on more categories and consider a model going from zero performance to nonzero a "double win". On Llama 2-7B, StandardCoT-Merge achieves nonzero accuracy on two tasks on which StandardCoT-MTL achieves zero accuracy, making it a better starting point for further fine-tuning. On Qwen 2.5-7B, both StandardCoT models have nonzero performance, and we choose StandardCoT-MTL as its performance is generally higher.

Table 2: Compositional task performance of rejection sampling fine-tuning (RFT) upon merged Composable atomic CoT models and other baselines. Mult stands for ASCII multiplication and concat for letter concatenation. SFT stands for supervised fine-tuning with the compositional answer data; CFT stands for continued fine-tuning; MTL stands for the multitask learning method. Results on last letter + mult are omitted for Qwen 2.5-7B because the zero-shot performance already saturates. RFT on ComposableCoT variants achieves the best compositional task performance using the same amount of compositional answer data.

Category | Method | Last Letter + Mult (EM) | Concat + Last Letter (EM) | Concat + Mult (EM) | Skill-Mix (Full Marks) | Skill-Mix (Skill Fraction)
Llama 2-7B
SFT | SFT on Base Model | 3.1 | 5.0 | 9.0 | 35.5 | 60.1
SFT | CFT on StandardCoT-Merge | 2.0 | 16.0 | 14.0 | 44.1 | 65.1
SFT | CFT on StandardCoT-MTL | 3.0 | 26.0 | 11.0 | 38.0 | 62.1
MTL | StandardCoT + Comp Answer | 5.0 | 46.0 | 13.3 | 22.9 | 45.5
RFT | StandardCoT-Merge | 0.0 | 23.0 | 29.7 | 26.1 | 52.0
RFT | ComposableCoT-Merge (Ours) | 72.0 | 46.0 | 40.0 | 45.3 | 66.6
Qwen 2.5-7B
SFT | SFT on Base Model | - | 31.9 | 2.0 | 35.5 | 60.3
SFT | CFT on StandardCoT-Merge | - | 41.1 | 9.3 | 51.0 | 71.4
SFT | CFT on StandardCoT-MTL | - | 60.3 | 12.7 | 34.7 | 56.3
MTL | StandardCoT + Comp Answer | - | 65.1 | 7.1 | 41.2 | 55.3
RFT | StandardCoT-MTL | - | 82.1 | 89.0 | 44.9 | 63.4
RFT | ComposableCoT-MTL (Ours) | - | 86.9 | 88.4 | 57.6 | 71.5

5.3 Intrinsic Evaluation of Generated CoTs

To understand differences between models trained with different CoT formats, we conduct intrinsic evaluations of the CoTs generated by ComposableCoT and StandardCoT models for zero-shot composition. For the string operation tasks, we extract template-based patterns of each atomic CoT from the generated outputs of models evaluated on the compositional task.
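Such template-based pattern extraction can be sketched for the last letter + multiplication composition as follows; the regexes are our own illustration, not the authors' exact templates.

```python
import re

# Illustrative regexes for the two atomic CoT templates (assumed phrasings).
LAST_LETTER_PAT = re.compile(r"the last letter is \w.*the alphabet is \w", re.IGNORECASE)
ASCII_MULT_PAT = re.compile(r"ascii value of the letter \w is \d+.*multiplying", re.IGNORECASE)

def cot_patterns_used(response):
    """Return whether the T1 pattern, T2 pattern, and both appear in a response;
    averaging these flags over a test set gives % T1 CoT, % T2 CoT, % Both CoT."""
    t1 = LAST_LETTER_PAT.search(response) is not None
    t2 = ASCII_MULT_PAT.search(response) is not None
    return t1, t2, t1 and t2

resp = ("<prefix> The last letter is y, and the letter following it in "
        "the alphabet is z. </prefix> <suffix> The ASCII value of the letter z is 122, "
        "and multiplying the ASCII value by 2 gives us 244. </suffix>")
assert cot_patterns_used(resp) == (True, True, True)
```

Because the atomic CoT traces are generated from fixed templates, this kind of matching gives a reliable signal of which atomic skill a generated trace is imitating.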
For Skill-Mix, we consider the CoT pattern of an atomic task to be used if the generated response explicitly mentions the skill corresponding to that atomic skill category. Table 3 shows results for models trained from Qwen 2.5-7B. Using the same combination method (model merging or MTL), combining ComposableCoT leads to a consistently higher presence of both atomic CoT patterns in the generated responses compared to StandardCoT. Atomic models trained with the Composable CoT format therefore leverage the combination of learned skills, in some form, more frequently than StandardCoT models. We note the exception of ComposableCoT-Merge on Concat + Mult, which seldom leverages both atomic CoTs in its outputs, explaining its low compositional performance, as mentioned in Section 5.1. We thus hypothesize that in some cases, model merging can still be an unstable combination method. Examples of error cases in the generated CoTs can be found in Appendix H.

³ Note that RFT requires a reasonably good model to start with: RFT fails for StandardCoT-Merge on Llama 2-7B because it is unable to sample enough correct responses for training from this StandardCoT-Merge model.

Table 3: Intrinsic evaluation of the generated CoTs from atomic CoT models evaluated on the compositional task in the zero-shot setting. "% T1 CoT" denotes the percentage of generated responses that use the CoT format of the first atomic task of the composition, and likewise for the second. † denotes that the ComposableCoT method has a significantly higher "% Both CoT" than the StandardCoT counterpart at the 0.01 level using a paired bootstrap test. Combined Composable CoT models generate responses including both atomic CoT patterns more frequently than combined standard atomic CoT models.

Task | Method | Performance | % T1 CoT | % T2 CoT | % Both CoT
Last Letter + Mult | StandardCoT-Merge | 70.4 | 85.3 | 95.1 | 85.3
Last Letter + Mult | ComposableCoT-Merge | 95.4 | 100.0 | 100.0 | 100.0†
Last Letter + Mult | StandardCoT-MTL | 3.6 | 0.0 | 100.0 | 0.0
Last Letter + Mult | ComposableCoT-MTL | 96.3 | 98.9 | 100.0 | 98.9†
Concat + Last Letter | StandardCoT-Merge | 77.0 | 90.3 | 98.7 | 90.0
Concat + Last Letter | ComposableCoT-Merge | 75.4 | 91.6 | 100.0 | 91.6
Concat + Last Letter | StandardCoT-MTL | 72.1 | 99.7 | 32.1 | 32.1
Concat + Last Letter | ComposableCoT-MTL | 74.3 | 100.0 | 83.1 | 81.3†
Concat + Mult | StandardCoT-Merge | 54.8 | 100.0 | 99.4 | 99.4
Concat + Mult | ComposableCoT-Merge | 19.2 | 44.6 | 60.5 | 17.7
Concat + Mult | StandardCoT-MTL | 60.9 | 100.0 | 66.7 | 66.7
Concat + Mult | ComposableCoT-MTL | 63.3 | 100.0 | 85.9 | 85.0†
Skill-Mix Literary + Rhetorical | StandardCoT-Merge | 29.8 | 60.0 | 59.2 | 35.9
Skill-Mix Literary + Rhetorical | ComposableCoT-Merge | 39.6 | 64.1 | 66.9 | 43.3†
Skill-Mix Literary + Rhetorical | StandardCoT-MTL | 42.0 | 65.3 | 58.0 | 37.6
Skill-Mix Literary + Rhetorical | ComposableCoT-MTL | 49.0 | 64.5 | 65.7 | 42.0†

6 Related Work

As an important cognitive capability of humans [1, 2], compositional generalization has been considered a core capability for human-level reasoning models [28, 29]. As defined by [28], compositional generalization has three components: systematicity, productivity, and primitive application. This work mainly focuses on systematicity, the capability of applying known components in unseen combinations. Systematic generalization has been considered difficult for LLMs because of the limitations of the transformer architecture and autoregressive generation [30, 4], and has been explored extensively in the context of parsing [31, 32, 33, 34].
Recent theoretical analyses show that the compositional reasoning capability of LLMs can be improved by generating CoT [35, 36], but empirical results show that non-trivial effort must be put into prompt engineering [37, 38] or data selection [39, 20, 40, 41] to observe such improvements with CoT [42], particularly in domains where compositional solutions to problems are crucial [43, 44]. Prior work has explored more principled approaches, but these usually rely on heuristics to determine data quality [8, 45] or involve computationally intensive methods [7, 3].

We are inspired by a line of work on efficient methods for combining models with different capabilities. Past work on model merging [13, 46, 47] has shown that trained models can be merged to retain the union of their skills, including reasoning [48, 49, 50], but only limited prior work has shown any kind of skill composition [51]. Our work is the first to use model merging for compositional generalization with CoT. We believe that other approaches for combining models could be used, such as learning methods that make modular updates [52, 53].

7 Conclusion

We propose Composable Chain-of-Thought, a data augmentation scheme that converts CoT data of atomic reasoning skills into a format facilitating inference-time compositional generalization. Training atomic CoT models with Composable CoT and combining them with model merging or multitask learning leads to better zero-shot compositional reasoning performance than building models with the standard CoT format. Such a combined model can be further improved with a limited amount of compositional data via rejection sampling fine-tuning. Learning to reason with
composable CoT is a promising approach to improving compositional reasoning in LLMs, and could be extended to build more efficient and robust large reasoning models.

Limitations: Our experiments focus on pairwise compositional tasks in which two atomic reasoning skills are composed, and do not cover compositions of more than two atomic tasks. There is a lack of high-quality datasets that support n-way compositional tasks, and collecting an appropriate evaluation suite for this purpose is a non-trivial effort. We note that our framework of Composable CoT generalizes to compositional tasks consisting of more than two atomic skills, as discussed in Section 3.1. In addition, we focus only on small-scale datasets where both the atomic skills and the compositional skills can be learned with a small amount of training data. Our focus here is on conducting controlled experiments, but we believe our methods and their principles can be scaled up to more complex settings.

Acknowledgments

Thanks to Katrin Erk, Kyle Mahowald, and other members of the TAUR lab for helpful discussion and suggestions. This work was partially supported by the Sloan Foundation, a grant from Open Philanthropy, NSF CAREER Award IIS-2145280, the NSF AI Institute for Foundations of Machine Learning (IFML), and the NSF under Cooperative Agreement 2421782 and the Simons Foundation grant MPS-AI-00010515 awarded to the NSF-Simons AI Institute for Cosmic Origins (CosmicAI), https://www.cosmicai.org/. This research has been supported by computing support on the Vista GPU Cluster through the Center for Generative AI (CGAI) and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin.

References

[1] Steven Piantadosi and Richard Aslin. Compositional reasoning in early childhood. PLoS ONE, 11, September 2016.

[2] Denise M. Werchan, Anne G. E. Collins, Michael Joshua Frank, and Dima Amso. 8-month-old infants spontaneously learn and generalize hierarchical rules.
Psychological Science, 26:805–815, 2015. [3] Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. Meta-learning to compositionally generalize. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3322–3335, Online, August 2021. Association for Computational Linguistics. [4] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on compositionality. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [5] DeepSeek-AI et al. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv preprint arXiv:2501.12948, 2025.
[6] Qwen Team. QwQ-32B: Embracing the Power of Reinforcement Learning, March 2025. [7] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [8] Peter Hase, Mohit Bansal, Peter Clark, and Sarah Wiegreffe. The unreasonable effectiveness of easy training data for hard tasks. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7002–7024, Bangkok, Thailand, August 2024. Association for Computational Linguistics. [9] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [10] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Alice
H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [11] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025. [12] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations, 2023. [13] Michael S Matena and Colin Raffel. Merging Models with Fisher-Weighted Averaging. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [14] Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. TIES-merging: Resolving interference when merging models. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [15] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. STaR: Bootstrapping Reasoning With Reasoning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [16] Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, KaShun SHUM, and Tong Zhang. RAFT: Reward ranked finetuning for generative foundation model alignment. Transactions on Machine Learning Research, 2023. [17] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. 2024. [18] Xi Ye and Greg Durrett. The unreliability of explanations in few-shot prompting for textual reasoning. In Proceedings of NeurIPS, 2022. [19] Dingli Yu, Simran Kaur, Arushi Gupta, Jonah Brown-Cohen, Anirudh Goyal, and Sanjeev Arora.
SKILL-MIX: a flexible and expandable family of evaluations for AI models. In The Twelfth International Conference on Learning Representations, 2024. [20] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, 2023. [21] OpenAI et al. GPT-4o System Card. arXiv preprint arXiv:2410.21276, 2024. [22] Haoyu Zhao, Simran Kaur, Dingli Yu, Anirudh Goyal, and Sanjeev Arora. Can models learn skill composition from examples? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[23] Hugo Touvron et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. [24] An Yang et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2025. [25] Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. [26]
Yuntian Deng, Yejin Choi, and Stuart Shieber. From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step. arXiv preprint arXiv:2405.14838, 2024. [27] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769, 2024. [28] Jerry A. Fodor and Zenon W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3–71, 1988. [29] Brenden M. Lake and Marco Baroni. Human-like systematic generalization through a meta-learning neural network. Nature, 623:115–121, 2023. [30] Jonathan Thomm, Giacomo Camposampiero, Aleksandar Terzic, Michael Hersche, Bernhard Schölkopf, and Abbas Rahimi. Limits of transformer language models on learning to compose algorithms. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [31] Najoung Kim and Tal Linzen. COGS: A compositional generalization challenge based on semantic interpretation. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9087–9105, Online, November 2020. Association for Computational Linguistics. [32] Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2873–2882. PMLR, 10–15 Jul 2018. [33] Ekin Akyürek, Afra Feyza Akyürek, and Jacob Andreas. Learning to recombine and resample data for compositional generalization. In International Conference on Learning Representations, 2021. [34] Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou.
Compositional semantic parsing with large language models. In The Eleventh International Conference on Learning Representations, 2023. [35] Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to solve inherently serial problems. In The Twelfth International Conference on Learning Representations, 2024. [36] Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, and Samet Oymak. Dissecting chain-of-thought: Compositionality through in-context filtering and learning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [37] Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, and Jianshu Chen. Skills-in-context: Unlocking compositionality in large language models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13838–13890, Miami, Florida, USA, November 2024. Association for Computational Linguistics. [38] Peizhong Gao, Ao Xie, Shaoguang Mao, Wenshan Wu, Yan Xia, Haipeng Mi, and Furu Wei. Meta reasoning for large language models. arXiv preprint arXiv:2406.11698, 2024. [39] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. In The Eleventh International Conference on Learning Representations, 2023. [40] Itay Levy, Ben Bogin, and Jonathan Berant. Diverse demonstrations improve in-context compositional generalization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, July 2023. [41] Xi Ye,
Srinivasan Iyer, Asli Celikyilmaz, Veselin Stoyanov, Greg Durrett, and Ramakanth Pasunuru. Complementary explanations for effective in-context learning. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 2023. Association for Computational Linguistics. [42] Zayne Rea Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning. In The Thirteenth International Conference on Learning Representations, 2025. [43] Wenxuan Ding, Shangbin Feng, Yuhan Liu, Zhaoxuan Tan, Vidhisha Balachandran, Tianxing He, and Yulia Tsvetkov. Knowledge crosswords: Geometric knowledge reasoning with large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 2609–2636, Bangkok, Thailand, August 2024. Association for Computational Linguistics. [44] Xi Ye, Fangcong Yin, Yinghui He, Joie Zhang, Howard Yen, Tianyu Gao, Greg Durrett, and Danqi Chen. LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation. arXiv preprint arXiv:2501.05414, 2025. [45] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48, New York, NY, USA, 2009. Association for Computing Machinery. [46] Derek Tam, Mohit Bansal, and Colin Raffel. Merging by matching models in task parameter subspaces. Transactions on Machine Learning Research, 2024. [47] Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations, 2023.
[48] Han Wu, Yuxuan Yao, Shuqi Liu, Zehua Liu, Xiaojin Fu, Xiongwei Han, Xing Li, Hui-Ling Zhen, Tao Zhong, and Mingxuan Yuan. Unlocking efficient long-to-short LLM reasoning with model merging. arXiv preprint arXiv:2503.20641, 2025. [49] Kimi Team et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025. [50] Xinyin Ma, Guangnian Wan, Runpeng Yu, Gongfan Fang, and Xinchao Wang.
CoT-Valve: Length-Compressible Chain-of-Thought Tuning. arXiv preprint arXiv:2502.09601, 2025. [51] Derek Tam, Yash Kant, Brian Lester, Igor Gilitschenski, and Colin Raffel. Realistic evaluation of model merging for compositional generalization. arXiv preprint arXiv:2409.18314, 2024. [52] Fangcong Yin, Xi Ye, and Greg Durrett. LoFiT: Localized Fine-tuning on LLM Representations. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [53] Qi Sun, Edoardo Cetin, and Yujin Tang. Transformer-squared: Self-adaptive LLMs. In The Thirteenth International Conference on Learning Representations, 2025. [54] Aaron Gokaslan and Vanya Cohen. OpenWebText Corpus. 2019. [55] Ryan* Marten, Trung* Vu, Charlie Cheng-Jie Ji, Kartik Sharma, Shreyas Pimpalgaonkar, Alex Dimakis, and Maheswaran Sathiamoorthy. Curator: A Tool for Synthetic Data Creation. January 2025. [56] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, and Zheyan Luo. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Yixin Cao, Yang Feng, and Deyi Xiong, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 400–410, Bangkok, Thailand, August 2024. Association for Computational Linguistics.

Table 4: Performance of atomic CoT models fine-tuned on different variants of proxy prefix on Llama 2-7B. Using random letters as the proxy prefix achieves the best out-of-domain performance when evaluated with an unseen prefix at inference time.

Type of Proxy Prefix     | Exact Match Accuracy
                         | In-Domain Prefix | Out-of-Domain Prefix
Random Letters           | 83.0             | 90.0
Random from the Prompt   | 86.4             | 82.5
Random Text              | 90.6             | 70.0

A A Note on Composing Tasks

There exist various possible ways to combine atomic tasks into a compositional task with the combination function g.
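As a purely illustrative sketch (this is our own minimal rendering, not the paper's data-generation code), two atomic string tasks in the style of Appendix C can be combined either by piping one task's output into the other, or by concatenating both tasks' outputs on the same input:

```python
# Minimal sketch of combining two atomic string tasks. The task
# implementations below are illustrative; wrap-around at 'z' is an
# assumption consistent with Example C.4 (…bkiz -> 'a' -> 97 * 5 = 485).

def last_letter_in_alphabet(seq: str) -> str:
    """Atomic task: the next letter in the alphabet after seq's last letter."""
    last = seq[-1]
    return "a" if last == "z" else chr(ord(last) + 1)

def ascii_multiplication(letter: str, a: int = 2) -> int:
    """Atomic task: multiply the ASCII value of `letter` by a."""
    return a * ord(letter)

def composite(seq: str, a: int = 5) -> int:
    """Composite combination: one task's output feeds the other,
    g(Ti, Tj) = Tj after Ti (cf. Example C.4)."""
    return ascii_multiplication(last_letter_in_alphabet(seq), a)

def concatenation(seq: str) -> str:
    """Concatenation combination: both atomic outputs on the same input
    are joined, g(Ti, Tj) = Ti joined with Tj."""
    return f"{last_letter_in_alphabet(seq)} {ascii_multiplication(seq[-1])}"

print(composite("abcs", a=5))  # last letter 's' -> 't' -> 116 * 5 = 580
```

The composite form is the one required by the compositional string tasks in Appendix C; the concatenation form applies when both outputs are reported side by side, as in some Skill-Mix compositions.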
We simplify g into two types: (1) composite: the output of one atomic task is used as part of the input of another task, g(T_i, T_j) = T_i ◦ T_j or g(T_i, T_j) = T_j ◦ T_i; (2) concatenation: the outputs of the two atomic tasks on the same input are concatenated, g(T_i, T_j) = T_i ⊕ T_j or g(T_i, T_j) = T_j ⊕ T_i. Among tasks evaluated in Section 4, the string operation tasks need to be solved by a composite function, while the Skill-Mix task can be solved by either a composite function or a concatenation function.

B Design Choices for Constructing Composable CoT Data

When designing the proxy prefix CoT, we consider the following constraints: (1) we do not assume any prior knowledge about what would possibly be put in the prefix CoT at inference time; (2) we do not assume strong relevance between the proxy prefix CoT and the suffix, i.e., not all the information in the proxy prefix CoT is useful for predicting the suffix CoT and the final answer. Based on these considerations, we experiment with the following variants:

• Random letters: We sample random letters from the alphabet to form a sequence of random length to simulate an arbitrary prefix CoT.

• Random from the prompt: We sample random letters and words from the prompt q to form a sequence of random length to simulate a prefix CoT with a distribution similar to the input distribution.

• Random text: We sample random sentences from OpenWebText [54] to simulate a prefix CoT drawn from the pre-training data distribution.

We evaluate these variants by
fine-tuning models on Composable CoT datasets that only have suffix CoT examples. Note that this is different from the multitask learning setting discussed in Section 3.1, where the Composable CoT dataset consists of both the prefix CoT examples and the suffix CoT examples. This experiment mainly aims at stress-testing the model's capability of learning a single atomic task with a given proxy prefix CoT variant. We use the same hyperparameter configurations for all proxy prefix variants for a given task. We evaluate the fine-tuned models on the in-domain task in two settings: (1) In-domain prefix: we append the same type of prefix as used for training to the end of the prompt of the in-domain test example and evaluate the model on it; (2) Out-of-domain prefix: we randomly sample a prefix from the other two variants, append it to the end of the prompt of the in-domain test example, and evaluate the model on it. We run experiments on the three string operation tasks and report the average performance. Table 4 shows that while using random letters as the proxy prefix leads to the worst in-domain performance, it generalizes the best to out-of-domain prefixes, which is a more desirable behavior.

C Details of String Operation Tasks

Last letter in alphabet We synthetically generate data for Last letter in alphabet. We randomly sample letters from the English alphabet to form a sequence of random length. Then we extract the last letter from the sequence and derive the next letter following it in the alphabet. An example can be found in Example C.2. We automatically generate a chain-of-thought for each generated problem, using a fixed template shown in Example C.2.

ASCII multiplication Similarly, we randomly sample letters from the English alphabet to form a sequence of random length. Then, we randomly sample another letter s and randomly sample an integer a ∈ {1, ..., 9}.
We find the ASCII value of s as f(s) and compute the product a · f(s) as the gold answer. An example can be found in Example C.3. We automatically generate a chain-of-thought for each generated problem, using a fixed template shown in Example C.3.

Letter concatenation We follow [9] to generate the dataset by randomly sampling from the most popular first and last names in the United States and the United Kingdom from https://namecensus.com and randomly concatenating them into a sequence of names. While the original task in [9] only requires concatenating the last letter of each name, we raise the difficulty level by randomly asking for concatenations of the first, second, second-to-last, or last letter. An example can be found in Example C.1. The CoT template is also shown in Example C.1.

Compositional tasks We synthetically construct the compositional tasks of the string operation tasks in procedures similar to those used to generate the atomic data. An example of last letter + ASCII multiplication can be found in Example C.4, concatenation + last letter in Example C.5, and concatenation + multiplication in Example C.6. We made a design
decision to exclude one variant of concatenation + last letter that concatenates the last letter of each word and finds the next letter following the last letter in the concatenated sequence; this variant can be solved by the reasoning shortcut of only applying Last letter in alphabet rather than a composition of both.

C.1 Atomic Task Example: Letter Concatenation

[Instruction] Take the second-to-the-last letter of each word in the sequence and concatenate them in lower case: Tequan Monjur Khia Jodi-leigh answer [Chain-of-Thought + Answer String] The second-to-the-last letter of the 1st word is a. The second-to-the-last letter of the 2nd word is u. The second-to-the-last letter of the 3rd word is i. The second-to-the-last letter of the 4th word is g. So the answer is auig. [Answer String] auig

C.2 Atomic Task Example: Last letter in alphabet

[Instruction] Find the Last letter in alphabet following the last letter in the sequence: wqsisibnnicdlpwqbnoicdcxcxrfoilpcbnixucbssssejxuzods answer: [Chain-of-Thought + Answer String] The last letter is s, and the letter following it in alphabet is t. So the answer is t. [Answer String] t

C.3 Atomic Task Example: ASCII Multiplication

[Instruction] Find the ASCII value of the letter after ‘<letter>’ and multiply the ASCII value by 2: byaxaxcpoteznwnwseselyjlretxtxcbfvmfezbycplymfotjbfvjlhotzjbjcpycbtzhorepyjckofj <letter> d answer: [Chain-of-Thought + Answer String] The ASCII value of the letter d is 100, and multiplying the ASCII value by 2 gives us 200. So the answer is 200.
[Answer String] 200

C.4 Compositional Task Example: Last Letter + ASCII Multiplication

[Instruction] Find the ASCII value of the Last letter in alphabet following the last letter in the sequence and multiply the ASCII value by 5: knnxqsxvshqugxfuquljumsbihgxvqihnxuufuknxvumuupkpkshljqsbkiz answer: [Answer String] 485

C.5 Compositional Task Example: Concatenation + Last Letter

[Instruction] Take the second-to-the-last letter of each word in the sequence, concatenate them in lower case, and find the Last letter in alphabet following the last letter in the sequence of the concatenated letters: Tyjai Ahijah Denzil Amine answer: [Answer String] o

C.6 Compositional Task Example: Concatenation + Multiplication

[Instruction] Take the second-to-the-last letter of each word in the sequence, concatenate them in lower case, then find the ASCII value of the last letter in the sequence of the concatenated letters, and multiply the ASCII value by 3: Zarriah Amylee Li Javarie answer: [Answer String] 315

D Details of Skill-Mix Tasks

D.1 Modifications of Skill-Mix

We adapt the Skill-Mix dataset from [19]. For each example, the model is given a natural language skill, its definition, an example of the skill, and a topic to focus on, and the model needs to write a grammatical sentence demonstrating the skill on the topic. Because we mainly focus on pairwise composition, we only use the k = 2 and k = 1 composition sets of the Skill-Mix data. We apply the following modifications to the dataset to fit our setting of compositional reasoning.

1. Filtering the categories of skills: Of the five categories in the original dataset, we keep examples with skills from the rhetorical and literary categories. This is because the rhetorical and literary skills have the least overlap while
https://arxiv.org/abs/2505.22635v1
other categories overlap more (e.g., the logical and rhetorical skills overlap substantially).

2. Removing the requirements of post-hoc explanation and refinement from the prompt: The original dataset evaluates models by prompting them to first write a sentence, provide an explanation for the written sentence, and then do another round of refinement based on feedback from the grader (an LLM-as-a-judge). To fit our setting of chain-of-thought reasoning and direct answering, we remove these irrelevant elements from the prompt.

3. Using a public test set: The original evaluation of [19] hides the test set from the public, and models can only be evaluated through API calls to the hidden test set. To ensure reproducibility of our results, we use a public test set collected by [22]. As an open-ended generation task, Skill-Mix does not have a single ground-truth sentence. [22] collect a silver-labeled dataset of sentences and their post-hoc explanations generated by GPT-4o [21] and graded as correct by the grader. We further prompt GPT-4o to transform the (sentence, explanation) pairs into (CoT, sentence) pairs to resolve potential coreference issues. Examples of the prompt, CoT traces, and sentences can be found in Examples D.1, D.2, and D.3.

D.1 Atomic Task Example: Skill-Mix-Literary

[Instruction] Greetings! I am interested in natural language processing and I was wondering if you could help me generate an example of text that illustrates multiple skills in semantics or syntax. The example should be a minimal natural piece of text with up to a few lines in the context of Vikings that illustrates all of the following skills: oxymoron. Please keep the text as short as possible, and make sure the concepts can be found fully from the text. For reference, here are the definitions and examples for the concepts:

**oxymoron**: A figure of speech in which apparently contradictory terms appear in conjunction.
For example, “The hall was filled with a deafening silence.” Here, the phrase “deafening silence” is an oxymoron.

Please start the minimal natural piece of text with ‘Answer:’. Thanks very much!

[Chain-of-Thought + Answer String] Explanation: The goal is to construct a sentence that showcases the use of an oxymoron while remaining relevant to the Viking theme. To achieve this, we focus on creating a phrase that contains seemingly contradictory terms. We begin with the context of Vikings enduring harsh weather conditions. The word “fiery” conveys intense heat, while “frost” indicates severe cold. When these terms are combined into “fiery frost,” they form an oxymoron, as they depict both extreme temperature sensations simultaneously. This juxtaposition effectively creates a semantic paradox, leading to vivid imagery and emphasizing the harsh conditions faced by Vikings.

Answer: “In the bitter cold, the Viking felt a fiery frost.”

[Answer String] Answer: “In the bitter cold, the Viking felt a fiery frost.”

D.2 Atomic Task Example: Skill-Mix-Rhetorical

[Instruction] Greetings! I am interested in natural language processing and I was wondering if you could help me generate an example of text that illustrates multiple skills in semantics or syntax. The example should be a minimal
natural piece of text with up to a few lines in the context of Hiking that illustrates all of the following skills: begging the question or assuming the conclusion. Please keep the text as short as possible, and make sure the concepts can be found fully from the text. For reference, here are the definitions and examples for the concepts:

**begging the question or assuming the conclusion**: A circular argument that paraphrases the question. For example, “Of course the earth is round; people have known this for hundreds of years.”

Please start the minimal natural piece of text with ‘Answer:’. Thanks very much!

[Chain-of-Thought + Answer String] Explanation: To illustrate the concept of begging the question or assuming the conclusion within the context of hiking, we start by identifying a statement that relies on its own conclusion as a premise. First, recognize that the claim “hiking is beneficial” needs supporting evidence to be considered valid. However, if we use the phrase “because it’s good for your health” as justification, we notice that it merely reiterates the initial claim using slightly different wording. This is because declaring something “beneficial” inherently implies a positive impact, such as being “good for your health.” Thus, the reasoning becomes circular, as it depends on the same assumption it seeks to prove.

Answer: “Hiking is beneficial because it’s good for your health.”

[Answer String] Answer: “Hiking is beneficial because it’s good for your health.”

D.3 Compositional Task Example: Skill-Mix Literary + Rhetorical

[Instruction] Greetings! I am interested in natural language processing and I was wondering if you could help me generate an example of text that illustrates multiple skills in semantics or syntax. The example should be a minimal natural piece of text with up to a few lines in the context of Vikings that illustrates all of the following skills: anaphora resolution, begging the question or assuming the conclusion.
Please keep the text as short as possible, and make sure the concepts can be found fully from the text. For reference, here are the definitions and examples for the concepts:

**anaphora resolution**: Resolving the antecedent of a pronoun or noun phrase. For example, “The car is falling apart, but it still works.” Here, “it” is the anaphor and “car” is the antecedent.

**begging the question or assuming the conclusion**: A circular argument that paraphrases the question. For example, “Of course the earth is round; people have known this for hundreds of years.”

Please start the minimal natural piece of text with ‘Answer:’. Thanks very much!

[Answer String] Answer: The Viking chief, undefeated thanks to his ship, asserted, “It remains unconquered because it’s the ‘Indomitable’.”

Table 5: Single-task learning performance by evaluating the atomic CoT models on the in-domain atomic tasks.

CoT Format      | Last Letter EM | ASCII Mult EM | Concat EM | Skill-Mix Literary (Full Marks / Skill Fraction) | Skill-Mix Rhetorical (Full Marks / Skill Fraction)
Llama 2-7B
  StandardCoT   | 100.0 | 85.7 | 83.0 | 63.5 / 63.5 | 53.3 / 53.3
  ComposableCoT |  95.0 | 86.0 | 77.0 | 71.4 / 71.4 | 72.4 / 72.4
Qwen 2.5-7B
  StandardCoT   |  90.0 | 99.0 | 77.4 | 77.4 / 77.6 | 70.5 / 70.5
  ComposableCoT |  99.4 | 99.7 | 77.3 | 77.4 / 77.6 | 76.7 / 81.9

D.2 Evaluation Metrics

We use GPT-4o-mini as the LLM-as-a-judge to grade the generated sentence using the exact same grading rubric provided by [19]; the grader judges the quality of the sentence based on whether: (1) all skills are used; (2) the sentence makes sense; (3) the sentence attaches to the given topic; (4) the sentence is short. We use the evaluation metrics for each generated sentence from [19], including the following:

1. Full Marks: 1 if the generated sentence satisfies all four criteria above and 0 otherwise.
2. Skill Fraction: the fraction of skills demonstrated if all of the other three criteria are satisfied; 0 otherwise.

We aggregate these metrics by averaging over all generated responses. In general, Full Marks evaluates the model’s capability of writing a perfect sentence for the task, while Skill Fraction evaluates how good the model is at handling skills given that it handles the other stylistic requirements well. We use Curator [55] for an efficient implementation of the evaluation pipeline.

E Single-Task Learning Performance

We report the single-task learning performance of the atomic CoT models by evaluating them on the in-domain atomic tasks. We would like the atomic tasks to be easy to learn, to reflect the practical setting where we train models on basic, easy-to-learn skills and generalize to harder, unseen tasks. The training data conditions and hyperparameters for training can be found in Appendix F. Table 5 shows that all atomic tasks we evaluate are learnable with a small amount of training data (Ni, Nj ≤ 500). In addition, we observe that training on ComposableCoT or StandardCoT does not lead to consistent differences in atomic CoT performance; the exception is Skill-Mix-Rhetorical for Llama 2-7B, where fine-tuning on ComposableCoT outperforms fine-tuning on StandardCoT by a large margin.
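The two Skill-Mix metrics in Appendix D.2 reduce to a few lines of code. This is a minimal sketch with variable names of our own choosing, where each grade is the tuple of the four rubric judgments (fraction of skills used, makes sense, on topic, short):

```python
def full_marks(skill_frac, makes_sense, on_topic, is_short):
    """1 if all four rubric criteria hold, 0 otherwise."""
    return 1.0 if (skill_frac == 1.0 and makes_sense and on_topic and is_short) else 0.0

def skill_fraction(skill_frac, makes_sense, on_topic, is_short):
    """Fraction of skills demonstrated if the three stylistic criteria hold, else 0."""
    return skill_frac if (makes_sense and on_topic and is_short) else 0.0

def aggregate(grades, metric):
    """Average a metric over all generated responses."""
    return sum(metric(*g) for g in grades) / len(grades)

grades = [(1.0, True, True, True),   # perfect sentence
          (0.5, True, True, True),   # stylistically fine, half the skills shown
          (1.0, False, True, True)]  # all skills shown, but nonsensical
print(aggregate(grades, full_marks))      # 1/3: only the first response gets full marks
print(aggregate(grades, skill_fraction))  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```

The third response illustrates why Skill Fraction is gated on the stylistic criteria: a nonsensical sentence scores 0 even if every skill appears.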
F Training Configurations

F.1 General Configurations

We conduct all fine-tuning experiments with LoRA [25] using the following set of hyperparameters: a rank of 8, α = 16, and a dropout rate of 0.2 to prevent overfitting. We apply LoRA adapters to all linear modules, including the attention matrices Q, K, V and the MLP matrices of all layers. We train in bfloat16 precision and use the efficient LoRA implementation from LlamaFactory [56]. We use a training batch size of 4 and train for 5 epochs for all experiments that share the same number of training data; for methods that may use a smaller amount of training data (e.g., RFT methods usually obtain fewer data examples than the number of compositional training data provided, depending on how many correct responses we can sample from the model), we adjust the batch size to match the number of steps.

Table 6: Optimal learning rate for each method in the experiments with compositional supervision (columns: Last Letter + Mult | Concat + Last Letter | Concat + Mult | Skill-Mix Literary + Rhetorical).

Llama 2-7B
  SFT: SFT on Base Model            | 1e-3 | 1e-3 | 5e-4 | 5e-4
  SFT: CFT on StandardCoT-Merge     | 1e-3 | 5e-4 | 1e-4 | 1e-4
  SFT: CFT on StandardCoT-MTL       | 1e-4 | 1e-4 | 1e-4 | 1e-3
  MTL: StandardCoT + Comp Answer    | 1e-3 | 5e-4 | 1e-3 | 5e-4
  RFT: StandardCoT-Merge            |  -   | 1e-3 | 1e-3 | 5e-4
  RFT: ComposableCoT-Merge (Ours)   | 1e-4 | 1e-4 | 1e-3 | 1e-3
Qwen 2.5-7B
  SFT: SFT on Base Model            |  -   | 1e-3 | 1e-3 | 5e-4
  SFT: CFT on StandardCoT-Merge     |  -   | 5e-4 | 5e-4 | 1e-4
  SFT: CFT on StandardCoT-MTL       |  -   | 1e-3 | 1e-3 | 1e-3
  MTL: StandardCoT + Comp Answer    |  -   | 5e-4 | 5e-4 | 1e-3
  RFT: StandardCoT-MTL              |  -   | 1e-3 | 1e-4 | 5e-4
  RFT: ComposableCoT-MTL (Ours)     |  -   | 1e-3 | 1e-3 | 5e-4

F.2 Configuration for Rejection Sampling Fine-tuning

In addition to the sampling parameters (see Section 4), we consider the following configuration of RFT for sampling correct responses: if the model generates multiple correct responses for a given question, we randomly select only one of them to be added to the RFT dataset D_RFT. This ensures the diversity of examples in D_RFT, so that the dataset is not filled with samples from a small set of questions at which the model excels.

F.3 Hyperparameters: Learning Rate

We find in preliminary experiments that the learning rate is the most important hyperparameter for the fine-tuning experiments of our interest. We perform hyperparameter sweeps for the learning rate over the space {5e-3, 1e-3, 5e-4, 1e-4, 5e-5} on a validation set for each experiment. The optimal learning rate for each method in the experiments with compositional supervision is shown in Table 6.

F.4 Hyperparameters: Model Merging

For methods that use model merging as the combination, we use Task Arithmetic [47] to combine the atomic CoT models. We perform a hyperparameter sweep for the scalars α and β over the space α ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} with β = 1 − α on a validation set for each task.

G Data Statistics

G.1 General Data Conditions for Experiments

Table 7 summarizes the number of training data and test data used in the evaluations in Sections 5.1 and 5.2.
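The Task Arithmetic combination described in Appendix F.4 amounts to adding scaled task vectors to the pretrained weights: θ_merged = θ_pre + α·(θ_i − θ_pre) + β·(θ_j − θ_pre), with β = 1 − α. A minimal sketch, where plain dicts of scalars stand in for model state dicts:

```python
def task_arithmetic_merge(base, model_i, model_j, alpha):
    """Merge two fine-tuned models into one via scaled task vectors (beta = 1 - alpha)."""
    beta = 1.0 - alpha
    return {
        name: base[name]
        + alpha * (model_i[name] - base[name])  # task vector of atomic model i
        + beta * (model_j[name] - base[name])   # task vector of atomic model j
        for name in base
    }

base = {"w": 1.0}
merged = task_arithmetic_merge(base, {"w": 3.0}, {"w": 5.0}, alpha=0.5)
print(merged["w"])  # 1.0 + 0.5 * 2.0 + 0.5 * 4.0 = 4.0
```

In practice the same arithmetic is applied per parameter tensor; the α sweep in Appendix F.4 simply re-runs this merge for each candidate value.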
Note that for letter concatenation + multiplication we have two sizes of the compositional training data for Llama 2-7B and Qwen 2.5-7B: this is because all methods on Llama 2-7B perform poorly on zero-shot evaluation for this task, and we need a slightly larger amount of compositional training data so that different methods can start to show distinguishable compositional task performance. Regardless, we still consider 500 to be a reasonably small amount of training data, satisfying our ideal data conditions defined earlier.

Table 7: Data conditions for each task used for our evaluation (# Train / # Test).

Atomic Tasks
  Last Letter                      | 100 / 700
  ASCII Mult                       | 100 / 700
  Concat                           | 500 / 700
  Skill-Mix Literary               | 100 / 126
  Skill-Mix Rhetorical             | 100 / 105
Compositional Tasks
  Last Letter + Mult               | 100 / 700
  Concat + Last Letter             | 100 / 504
  Concat + Mult (Llama 2-7B)       | 500 / 700
  Concat + Mult (Qwen 2.5-7B)      | 100 / 700
  Skill-Mix Literary + Rhetorical  | 100 / 245

Table 8: The detailed breakdown of the number of training data used by each method in the zero-shot setting. Ni and Nj denote the number of training data from the atomic tasks Ti and Tj seen by the method during training (Ni / Nj).

Last Letter + Mult; Skill-Mix Literary + Rhetorical
  StandardCoT-Merge    | 0 / 0
  ComposableCoT-Merge  | 100 / 100
  StandardCoT-MTL      | 100 / 100
  ComposableCoT-MTL    | 100 / 100
Concat + Last Letter; Concat + Mult
  StandardCoT-Merge    | 500 / 100
  ComposableCoT-Merge  | 500 / 100
  StandardCoT-MTL      | 500 / 100
  ComposableCoT-MTL    | 500 / 100

G.2 Training Data Used by Each Method

We show a detailed breakdown in Table 8 of the
number of training data used by each zero-shot method for both models, and in Table 9 for Qwen 2.5-7B for each method with compositional answer data in the experiments in Section 5.2. Note that the statistics for Llama 2-7B in the setting with compositional supervision are mostly the same, except N(i,j) = 500 for concat + last letter and concat + mult.

Table 9: The detailed breakdown of the number of training data used by each method with compositional supervision for Qwen 2.5-7B. Ni and Nj denote the number of training data from the atomic tasks Ti and Tj seen by the method during training. N(i,j) denotes the number of compositional answer data seen during training (Ni / Nj / N(i,j)).

Last Letter + Mult; Skill-Mix Literary + Rhetorical
  SFT on Base Model                | 0 / 0 / 100
  CFT on StandardCoT-Merge         | 100 / 100 / 100
  CFT on StandardCoT-MTL           | 100 / 100 / 100
  MTL on StandardCoT + Comp Answer | 100 / 100 / 100
  RFT on StandardCoT-Merge         | 100 / 100 / 100
  RFT on ComposableCoT-Merge       | 100 / 100 / 100
  RFT on StandardCoT-MTL           | 100 / 100 / 100
  RFT on ComposableCoT-MTL         | 100 / 100 / 100
Concat + Last Letter; Concat + Mult
  SFT on Base Model                | 0 / 0 / 100
  CFT on StandardCoT-Merge         | 500 / 100 / 100
  CFT on StandardCoT-MTL           | 500 / 100 / 100
  MTL on StandardCoT + Comp Answer | 500 / 100 / 100
  RFT on StandardCoT-Merge         | 500 / 100 / 100
  RFT on ComposableCoT-Merge       | 500 / 100 / 100
  RFT on StandardCoT-MTL           | 500 / 100 / 100
  RFT on ComposableCoT-MTL         | 500 / 100 / 100

Table 10: Summary of methods evaluated in the zero-shot compositional evaluation and the composition with limited compositional answer data. “Merge” stands for model merging; “MTL” stands for multitask learning; “CFT” stands for continued fine-tuning; “RFT” stands for rejection sampling fine-tuning. “-” means the property is not applicable to the method (e.g., MTL on StandardCoT + Comp Answer mixes StandardCoT data with compositional answer data and trains a single MTL model from the pretrained model, so there is no atomic CoT model trained or combined). Columns: # Atomic CoT Models Trained | Atomic CoT Format | Combination Method | Model Trained on Compositional Data | How Compositional Data Used.

Zero-shot Evaluation
  StandardCoT-Merge                 | 2 | Standard   | Merge | - | -
  ComposableCoT-Merge (Ours)        | 2 | Composable | Merge | - | -
  StandardCoT-MTL                   | 1 | Standard   | MTL   | - | -
  ComposableCoT-MTL (Ours)          | 1 | Composable | MTL   | - | -
Evaluation with Limited Compositional Answer Data
  CFT on StandardCoT-Merge          | 2 | Standard   | Merge | StandardCoT-Merge   | CFT
  CFT on StandardCoT-MTL            | 1 | Standard   | MTL   | StandardCoT-MTL     | CFT
  MTL on StandardCoT + Comp Answer  | - | Standard   | -     | Pretrained Model    | Mix with atomic CoT data and MTL
  RFT on StandardCoT-Merge          | 2 | Standard   | Merge | StandardCoT-Merge   | RFT
  RFT on ComposableCoT-Merge (Ours) | 2 | Composable | Merge | ComposableCoT-Merge | RFT
  RFT on StandardCoT-MTL            | 1 | Standard   | MTL   | StandardCoT-MTL     | RFT
  RFT on ComposableCoT-MTL (Ours)   | 1 | Composable | MTL   | ComposableCoT-MTL   | RFT

H Error Analyses

In addition to failing to perform the individual atomic tasks correctly, we show three types of common errors made by ComposableCoT variants in the zero-shot compositional evaluation setting.

1. Example H.1 shows a case where the generated CoT only replicates the CoT of one atomic task and repeats the same CoT in the prefix and suffix.
2. Example H.2 shows a case where the combined model fails to continue generation after generating the prefix CoT. This is a common error for Composable models combined with model merging.
3. Example H.3 shows a case where the combined model uses in the prefix the wrong atomic CoT that should have been used in the suffix.

H.1 Error Case: Replicating One Atomic CoT Pattern

[Instruction] Take the first letter of each word in the sequence, concatenate them in lower case, and find the next letter in alphabet following the last letter in the sequence of the concatenated letters: Jessika Berk Hamilton Stephanie answer:

[Answer String] t

[Model Output] <prefix> The last letter is e, and the letter following it in alphabet is f. So the answer is f.</prefix> <suffix> The last letter is e, and the letter following it in alphabet is f. So the answer is f.</suffix>

[Predicted Answer] f

H.2 Error Case: Failing to Continue Generation after Prefix CoT

[Instruction] Take the last letter of each word in the sequence, concatenate them in lower case, then find the ASCII value of the last letter in the sequence of the concatenated letters, and multiply the ASCII value by 6: Ayat Amaris Heavenly Mariela answer:

[Answer String] 582

[Model Output] <prefix> The last letter of the 1st word is t. The last letter of the 2nd word is s. The last letter of the 3rd word is y. The last letter of the 4th word is a.
So the answer is tasy, and the ASCII value of the last letter in the sequence of the concatenated letters is 121, so the answer is 726.</prefix> <suffix>

[Predicted Answer] 726

H.3 Error Case: Using the Wrong Order of Atomic CoTs

[Instruction] Take the second letter of each word in the sequence, concatenate them in lower case, and find the next letter in alphabet following the last letter in the sequence of the concatenated letters: D’shaun Takia Hiroki Arjun answer:

[Answer String] s

[Model Output] <prefix> the last letter is n, and the letter following it in alphabet is o. so the answer is o.</prefix> <suffix> the second letter of d is d, the second letter of t is a, the second letter of h is i, and the second letter of a is r. so the answer is dair.</suffix>

[Predicted Answer] dair

I Computation Resources

We conducted all our experiments primarily on a server with NVIDIA RTX A6000 48GB GPUs and a server with an NVIDIA GH200 120GB GPU. Each individual experiment can be run on a single 48GB A6000 within 1 hour.

J Licensing

We use the following publicly available datasets from prior works with open licenses.

Skill-Mix [19] uses the Apache-2.0 license and the data is available at: https://huggingface.co/spaces/dingliyu/skillmix

Letter concatenation uses the Apache-2.0 license and the data is available at: https://huggingface.co/datasets/ChilleD/LastLetterConcat
FastTD3: Simple, Fast, and Capable Reinforcement Learning for Humanoid Control

Younggyo Seo1, Carmelo Sferrazza1, Haoran Geng1, Michal Nauman1,2, Zhao-Heng Yin1, Pieter Abbeel1
1University of California, Berkeley  2University of Warsaw
https://younggyo.me/fast_td3

Abstract

Reinforcement learning (RL) has driven significant progress in robotics, but its complexity and long training times remain major bottlenecks. In this report, we introduce FastTD3, a simple, fast, and capable RL algorithm that significantly speeds up training for humanoid robots in popular suites such as HumanoidBench, IsaacLab, and MuJoCo Playground. Our recipe is remarkably simple: we train an off-policy TD3 agent with several modifications – parallel simulation, large-batch updates, a distributional critic, and carefully tuned hyperparameters. FastTD3 solves a range of HumanoidBench tasks in under 3 hours on a single A100 GPU, while remaining stable during training. We also provide a lightweight and easy-to-use implementation of FastTD3 to accelerate RL research in robotics.

Figure 1: Sim-to-real reinforcement learning with FastTD3. We successfully transfer the FastTD3 policy trained in MuJoCo Playground (Zakka et al., 2025) to Booster T1. This represents, to the best of our knowledge, the first successful deployment of a policy trained with off-policy RL on real humanoid hardware. See our project page for videos.

1 Introduction

Reinforcement learning (RL) has been a key driver behind recent successes in robotics, enabling the successful transfer of robust simulation policies to real-world environments (Hwangbo et al., 2019; Kaufmann et al., 2023). However, progress is often bottlenecked by slow training times in complex tasks. For example, in the recently proposed benchmark HumanoidBench (Sferrazza et al., 2024), even state-of-the-art RL algorithms failed to solve many tasks after 48 hours of training.
This slow training remains a major bottleneck for practitioners aiming to unlock new behaviors in humanoid robots using RL. In particular, the iterative nature of reward design in robotics – where multiple rounds of reward shaping and policy retraining are often necessary – demands RL algorithms that are not only capable but also significantly faster. These algorithms must support rapid iteration and reliably solve tasks when given well-designed rewards.

Correspondence to: mail@younggyo.me
arXiv:2505.22642v1 [cs.RO] 28 May 2025

Figure 2: Tasks. We consider a range of challenging humanoid and dexterous control tasks from HumanoidBench (left), MuJoCo Playground (middle), and IsaacLab (right).

Figure 3: Summary of results. [Learning curves of average normalized return vs. wall time for PPO, SAC, SimbaV2, TD-MPC2, DreamerV3, and FastTD3 on HumanoidBench (39 tasks), MuJoCo Playground (4 tasks), and IsaacLab (6 tasks).] FastTD3 is a simple, fast, and capable RL algorithm that significantly speeds up training for humanoid robots on tasks from popular suites such as HumanoidBench (Sferrazza et al., 2024), IsaacLab (Mittal et al., 2023), and MuJoCo Playground (Zakka et al., 2025). To accelerate RL research in robotics, we provide an easy-to-use open-source implementation of FastTD3, enabling users to easily reproduce these results or build upon our work.

For that purpose, practitioners have mostly used Proximal Policy Optimization (PPO; Schulman et al. 2017) for training deployable policies in simulation, as PPO learns behaviors very
fast with massively parallel simulation (Heess et al., 2017; Hwangbo et al., 2019). However, PPO is an on-policy algorithm and not sample-efficient, making it difficult to fine-tune during real-world deployment or to initialize training with demonstrations (Hester et al., 2018). Meanwhile, recent off-policy RL research has made significant progress in improving sample-efficiency (D’Oro et al., 2023; Nauman et al., 2024b; Hansen et al., 2024). However, this line of work often suffers from increased algorithmic complexity and long wall-clock training times, making it difficult to use widely for learning deployable policies in robotics. On the other hand, Parallel Q-Learning (PQL; Li et al. 2023b) has shown that off-policy RL can be both fast and sample-efficient by scaling up through massively parallel simulation, large batch sizes, and a distributional critic (Bellemare et al., 2017). However, the core contribution of PQL – its use of asynchronous parallel processes that cut wall-clock time – unfortunately comes with high implementation complexity, which has hindered its widespread adoption. In this work, we do not aim to claim novelty over PQL; rather, we focus on developing a simple yet highly optimized algorithm without asynchronous processes, highlighting its effectiveness on popular humanoid control suites, and providing an easy-to-use implementation to accelerate future RL research on robotics.

FastTD3 We introduce FastTD3, a simple, fast, and capable RL algorithm that significantly speeds up training for humanoid robots on tasks from popular suites such as HumanoidBench (Sferrazza et al., 2024), IsaacLab (Mittal et al., 2023), and MuJoCo Playground (Zakka et al., 2025).
Our recipe is remarkably simple: by training an off-policy TD3 agent (Fujimoto et al., 2018) with parallel simulation, large-batch updates, a distributional critic (Bellemare et al., 2017), and carefully tuned hyperparameters, FastTD3 solves a range of HumanoidBench tasks in under 3 hours on a single GPU. Compared to PPO (Schulman et al., 2017), FastTD3 trains humanoid locomotion policies faster in IsaacLab and MuJoCo Playground, particularly on rough terrain with domain randomization.

Figure 4: Results on a selected set of tasks. [Learning curves of episode return vs. wall time, comparing FastTD3 with DreamerV3, TDMPC2, SAC, SimbaV2, and PPO.] Learning curves on selected individual tasks from HumanoidBench (first two rows), IsaacLab (third row), and MuJoCo Playground (fourth row). The solid line and shaded regions represent the mean and standard deviation across three runs. The dashed lines indicate success thresholds in HumanoidBench tasks.

Open-source implementation We provide an open-source implementation of FastTD3 based on PyTorch (Paszke et al., 2019) – a lightweight codebase that enables users to easily build new ideas on top of FastTD3.
Our implementation is easy-to-install and versatile – users can easily train FastTD3 agents on HumanoidBench, IsaacLab, and MuJoCo
Playground, and the codebase also supports several user-friendly features, such as preconfigured hyperparameters, rendering support, logging, and loading checkpoints. We note that our work is orthogonal to the latest RL research, such as SR-SAC (D’Oro et al., 2023), BBF (Schwarzer et al., 2023), BRO (Nauman et al., 2024b), Simba (Lee et al., 2024), NaP (Lyle et al., 2024), TDMPC2 (Hansen et al., 2024), TDMPBC (Zhuang et al., 2025), SimbaV2 (Lee et al., 2025), MAD-TD (Voelcker et al., 2025), and MR.Q (Fujimoto et al., 2025); we expect various improvements from these works to also be useful when incorporated into FastTD3. Our key contributions can be summarized as follows:

• We introduce FastTD3, a simple, fast, and capable RL algorithm that solves a variety of locomotion and manipulation tasks that prior RL algorithms take tens of hours to complete or fail to solve. We demonstrate that this performance can be achieved using a remarkably simple recipe: training a TD3 agent (Fujimoto et al., 2018) with large-batch updates, parallel simulation, distributional RL, and well-tuned hyperparameters.
• We provide experimental results that show the effectiveness of the various design choices.
• We release an easy-to-use open-source implementation of FastTD3 to accelerate RL research on robotics. This implementation supports popular suites such as HumanoidBench (Sferrazza et al., 2024), IsaacLab (Mittal et al., 2023), and MuJoCo Playground (Zakka et al., 2025).

2 FastTD3: Simple, Fast, Capable RL for Humanoid Control

FastTD3 is a high-performance variant of the Twin Delayed Deep Deterministic Policy Gradient (TD3; Fujimoto et al. 2018) algorithm, optimized for complex robotics tasks. These optimizations are based on the observations of Li et al. (2023b), who found that parallel simulation, large batch sizes, and a distributional critic are important for achieving strong performance with off-policy RL algorithms.
Figure 5: Effect of design choices (1 / 2). [Learning curves on h1hand_hurdle and h1hand_slide for (a) the number of parallel environments (1, 4, 32, 64, 128), (b) batch size (2K, 8K, 32K), (c) distributional RL (C51) vs. none, and (d) Clipped Double Q-learning vs. none.] We investigate the effect of (a) parallel environments, (b) batch size, (c) distributional RL, and (d) Clipped Double Q-learning. The solid line and shaded regions represent the mean and standard deviation across three runs.

2.1 Design Choices

In this section, we describe the key design choices made in the development of FastTD3 and their impact on performance. For details of TD3, we refer the reader to Fujimoto et al. (2018).

Parallel environments Similar to observations in Li et al. (2023a), we find that using massively parallel environments significantly accelerates TD3 training. We hypothesize that combining deterministic policy gradient algorithms (Silver et al., 2014) with
parallel simulation is particularly effective, because the randomness from parallel environments increases diversity in the data distribution. This enables TD3 to leverage its strength – efficient exploitation of value functions – while mitigating its weakness in exploration.

Large-batch training We find that using an unusually large batch size of 32,768 for training the FastTD3 agent is highly effective. We hypothesize that, with massively parallel environments, large-batch updates provide a more stable learning signal for the critic by ensuring high data diversity in each gradient update. Otherwise, unless a high update-to-data ratio is used, a large portion of the data will never be seen by the agent. While increasing the batch size incurs a higher per-update wall-clock time, it often reduces overall training time due to improved training efficiency.

Distributional RL We also find that using a distributional critic (Bellemare et al., 2017) is helpful in most cases, similar to the observation of Li et al. (2023b). However, we note that this comes at the cost of additional hyperparameters – vmin and vmax. Although we empirically find that they are not particularly difficult to tune, one may consider incorporating the reward normalization for the distributional critic proposed in SimbaV2 (Lee et al., 2025) into FastTD3.

Clipped Double Q-learning (CDQ) While Nauman et al. (2024a) report that using the average of Q-values rather than the minimum employed in CDQ leads to better performance when combined with layer normalization, our findings indicate a different trend in the absence of layer normalization. Specifically, without layer normalization, CDQ remains a critical design choice, and using the minimum generally performs better across a range of tasks. This suggests that CDQ continues to be an important hyperparameter that must be tuned per task to achieve optimal reinforcement learning performance.
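The difference between the two CDQ variants discussed above is a single reduction in the TD target. A minimal sketch with scalar Q-values for illustration (the actual agent evaluates twin critic networks on next-state actions):

```python
def td_target(reward, gamma, q1_next, q2_next, use_min=True):
    """TD target with Clipped Double Q-learning (min) or the averaged variant."""
    q_next = min(q1_next, q2_next) if use_min else 0.5 * (q1_next + q2_next)
    return reward + gamma * q_next

print(td_target(1.0, 0.99, 2.0, 3.0))                 # min variant: 1.0 + 0.99 * 2.0
print(td_target(1.0, 0.99, 2.0, 3.0, use_min=False))  # averaged variant: 1.0 + 0.99 * 2.5
```

Taking the minimum yields a more pessimistic target than averaging, which is the underestimation bias that CDQ deliberately introduces.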
Architecture We use an MLP with a descending hidden-layer configuration of 1024, 512, and 256 units for the critic, and 512, 256, and 128 units for the actor. We find that using smaller models tends to degrade both time-efficiency and sample-efficiency in our experiments. We also experimented with residual paths and layer normalization (Ba et al., 2016), similar to BRO (Nauman et al., 2024b) and Simba (Lee et al., 2024), but they tend to slow down training without significant gains in our experiments. We hypothesize that this is because the data diversity afforded by parallel simulation and large-batch training reduces the effective off-policyness of updates, thereby mitigating the instability often associated with the deadly triad of bootstrapping, function approximation, and off-policy learning (Sutton & Barto, 2018). As a result, training remains stable even without additional architectural stabilizers such as residual connections or layer normalization.

1We provide tuned hyperparameters in our open-source implementation.

Exploration noise schedules In contrast to PQL (Li et al., 2023b), which found mixed noise effective – using a different Gaussian noise scale for each environment, sampled from [σmin, σmax] – we find no significant gains from the mixed-noise scheme. Nonetheless, we use the mixed-noise schedule, as it allows for flexible noise scheduling with only a few lines of additional code. But we find that using
a large σmax = 0.4 is helpful for FastTD3, as shown in Li et al. (2023b).

Update-to-data ratio In contrast to prior work showing that increasing the update-to-data (UTD) ratio – that is, the number of gradient updates per environment step – typically requires additional techniques (D'Oro et al., 2023; Schwarzer et al., 2023) or architectural changes (Nauman et al., 2024b; Lee et al., 2024, 2025), we find that FastTD3 does not require such modifications. Using a standard 3-layer MLP without normalization, FastTD3 scales favorably with higher UTDs in terms of sample efficiency. In particular, sample efficiency tends to improve with higher UTDs, but at the cost of increased wall-clock training time. We hypothesize that this is because FastTD3 operates at extremely low effective UTDs – typically 2, 4, or 8 updates per batch of 128 to 4096 (parallel) environment steps – reducing the risk of early overfitting often associated with high UTDs.

Replay buffer size Instead of defining a global replay buffer size, we set the size as N × num_envs (see Section 2.2 for more details on replay buffer design). In practice, we find that using a larger N improves performance, though it comes at the cost of increased GPU memory usage, as we store the entire buffer on the GPU.

2.2 Implementation Details

Parallel environments For IsaacLab and MuJoCo Playground, we use their native support for parallel simulation. However, for HumanoidBench, which does not support GPU-based parallelization, we use SubprocVecEnv from the Stable Baselines3 library (Raffin et al., 2021). We find that HumanoidBench's default configuration launches a GPU-based renderer for each simulation, which makes it difficult to run more than 100 environments. We have submitted a pull request to HumanoidBench that adds support for disabling the default GPU-based renderer, which has been merged into the main branch.
Environment wrappers To build an easy-to-use codebase that supports suites with different configurations, we built or used wrappers for each suite.

•For MuJoCo Playground, we use the native RSLRLBraxWrapper, which converts JAX tensors to Torch tensors and follows the API of RSL-RL (Rudin et al., 2022). Because this wrapper does not support saving final observations before resetting each environment, we implemented this in a separate fork2. We will work to get these changes merged into the main repository.
•As IsaacLab natively supports PyTorch, we implemented a simple wrapper that conforms to the RSL-RL API. Currently, our implementation does not support rendering during IsaacLab training, as IsaacLab does not allow multiple simulations to run concurrently.
•For HumanoidBench, we developed a wrapper that follows the RSL-RL API and converts NumPy arrays to PyTorch tensors.

2See https://github.com/younggyoseo/mujoco_playground.

Asymmetric actor-critic For IsaacLab and MuJoCo Playground, which often provide privileged states for the critic network, we implement support for the asymmetric actor-critic (Pinto et al., 2017).

AMP and torch.compile While JAX-based RL implementations have recently become popular for their speed, we build our implementation upon PyTorch (Paszke et al., 2019) for its simplicity and flexibility. We find that mixed-precision training with AMP and bfloat16 accelerates training
by up to 40% in our setup using a single A100 GPU, with no observed instability. We also support torch.compile by building our codebase on LeanRL (Huang et al., 2022)3, which provides up to a 35% speedup in our experiments. When using both AMP and torch.compile, we observe a combined training speedup of up to 70%.

Figure 6: Effect of design choices (2 / 2). We investigate the effect of (a) model size, (b) noise scales, (c) update-to-data ratio, and (d) replay buffer size. The solid line and shaded regions represent the mean and standard deviation across three runs.

Replay buffer Instead of defining a global replay buffer size, we set the size as N × num_envs. We find that this better decouples the effect of replay buffer size from the number of parallel environments. For instance, if the global buffer size is fixed at one million, the task episode length is 1000, and the user increases the number of environments from 1000 to 2000, the replay buffer will only be able to save half of each trajectory from each environment (it starts discarding early samples after 500 timesteps), which may negatively affect performance. On the other hand, if we specify N = 1000 for each environment, the buffer can save the whole trajectory regardless of the number of parallel environments.
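The N × num_envs layout amounts to one ring buffer per environment; a minimal illustration (our sketch, not the released implementation, which stores flat tensors on the GPU):

```python
class PerEnvReplayBuffer:
    """Ring buffer keeping the last N transitions of each parallel env.

    Total capacity is N * num_envs, so adding more environments never
    shrinks the history retained per environment, unlike a fixed
    global-capacity buffer.
    """
    def __init__(self, n_per_env, num_envs):
        self.n = n_per_env
        self.buffers = [[] for _ in range(num_envs)]
        self.ptrs = [0] * num_envs  # next slot to overwrite, per env

    def add(self, env_idx, transition):
        buf = self.buffers[env_idx]
        if len(buf) < self.n:
            buf.append(transition)
        else:  # full: overwrite the oldest transition in this env's ring
            buf[self.ptrs[env_idx]] = transition
            self.ptrs[env_idx] = (self.ptrs[env_idx] + 1) % self.n

    def __len__(self):
        return sum(len(b) for b in self.buffers)

buf = PerEnvReplayBuffer(n_per_env=1000, num_envs=2)
for t in range(1500):
    buf.add(0, t)
print(len(buf))  # 1000: env 0's ring keeps only its most recent N steps
```

Each environment's ring is independent, so a burst of data from one environment can never evict another environment's trajectory.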
Because we focus on non-vision domains, we store the entire buffer on the GPU to avoid the overhead of data transfer between CPU and GPU.

3 Experiments

Setups For the DreamerV3 (Hafner et al., 2023), SAC (Haarnoja et al., 2018), and TD-MPC2 (Hansen et al., 2024) baselines, we use learning curves from three runs available in the HumanoidBench repository. Given that each run is trained for 48 hours, we use interpolated wall-clock timestamps to plot the curves. For SimbaV2 (Lee et al., 2025), we use the official codebase to conduct experiments on tasks involving dexterous hands4. We report single-seed results for SimbaV2 but plan to include additional runs in the future. All FastTD3 results are aggregated over three runs. Experiments are conducted on a cloud instance with a single NVIDIA A100 GPU and 16 CPU cores.

Results We provide aggregate results over all tasks for each suite in Figure 3, individual results on a selected set of tasks in Figure 4, and individual results on the full set of tasks in Appendix A. We also provide extensive experimental results investigating the effect of the various design choices described in Section 2.1 in Figure 5 and Figure 6.

3https://github.com/pytorch-labs/LeanRL
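The interpolated wall-clock timestamps mentioned in the setups above amount to resampling unevenly logged (time, return) pairs onto a common time grid with piecewise-linear interpolation; a stdlib-only sketch with illustrative values:

```python
def interp(x, xp, fp):
    """Piecewise-linear interpolation of the curve (xp, fp) at points x.

    Values outside [xp[0], xp[-1]] are clamped to the endpoint values,
    mirroring numpy.interp's default behavior.
    """
    out = []
    for xi in x:
        if xi <= xp[0]:
            out.append(fp[0])
        elif xi >= xp[-1]:
            out.append(fp[-1])
        else:
            # last knot at or below xi
            j = max(i for i in range(len(xp)) if xp[i] <= xi)
            t = (xi - xp[j]) / (xp[j + 1] - xp[j])
            out.append(fp[j] + t * (fp[j + 1] - fp[j]))
    return out

# Returns logged at uneven wall-clock times, resampled to a shared grid
times = [0.0, 3600.0, 10800.0]   # seconds since training start
returns = [0.0, 400.0, 1000.0]
grid = [0.0, 1800.0, 7200.0]
print(interp(grid, times, returns))  # [0.0, 200.0, 700.0]
```

Resampling every run onto one grid is what makes it possible to average curves (and compute standard deviations) across runs whose loggers fired at different real times.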
4SimbaV2 uses an action repeat of 2 for HumanoidBench in their paper, which is unusual for joint position control. We thus run our SimbaV2 experiments with an action repeat of 1.

Figure 7: Different RL algorithms may need different reward functions. (a) FastTD3 trained with the PPO-tuned reward function learns an undeployable gait, as the G1 robot exhibits abrupt arm movements. (b) This is notable because the same (PPO-tuned) reward function produces a natural walking gait when used with PPO. (c) By tuning the reward function specifically for FastTD3 – adding stronger penalty terms – we were able to train a smoother gait with FastTD3. (d) On the other hand, PPO trained with the FastTD3-tuned reward is slow to train and produces a slowly walking gait because of the stronger penalty terms. For clarity, we use the PPO-tuned reward in all experiments except this analysis and the sim-to-real experiments.

Different RL algorithms may need different reward functions While training humanoid locomotion policies in MuJoCo Playground, we observed that PPO and FastTD3 produced notably different gaits, despite being trained with the same reward function (see Figure 7a and Figure 7b). We hypothesize that this is because existing reward functions are typically tuned for PPO, and different algorithms may require different reward structures to produce desirable behaviors. To address this, we tuned the reward function specifically for FastTD3 – with stronger penalty terms. This process was efficient thanks to FastTD3's short training time. As shown in Figure 7c, the tuned reward enabled FastTD3 to learn a stable and visually appealing gait compared to the one in Figure 7a.
On the other hand, we observe that training PPO with the FastTD3-tuned reward also results in an undeployable gait that walks too slowly (see Figure 7d). This observation suggests that the standard metric – episode return – may not capture the practical usefulness of learned policies.

FastSAC Experiments To investigate whether our recipe generalizes to other model-free RL algorithms, we develop FastSAC, which incorporates our FastTD3 recipe into SAC (Haarnoja et al., 2018). We find that FastSAC trains significantly faster than vanilla SAC (see Figure 8). However, we also observe that FastSAC tends to be unstable during training, which we hypothesize is due to the difficulty of maximizing action entropy in high-dimensional action spaces. Given that SimbaV2 (Lee et al., 2025) is notably faster than vanilla SAC in our main experiments, incorporating such recent advancements in off-policy RL into FastTD3 or FastSAC may be a promising future direction.
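To build intuition for why entropy maximization becomes harder in high-dimensional action spaces, note that the entropy of a diagonal Gaussian policy is a per-dimension sum, so the scale of SAC's entropy objective grows linearly with action dimensionality. A small illustration (ours, not from the FastSAC code):

```python
import math

def diag_gaussian_entropy(sigmas):
    """Differential entropy of a diagonal Gaussian policy N(mu, diag(sigma^2)).

    Each action dimension contributes one 0.5 * log(2*pi*e*sigma^2) term,
    so the total entropy (and hence the entropy bonus SAC must balance
    against reward) scales with the number of actuated joints.
    """
    return sum(0.5 * math.log(2 * math.pi * math.e * s * s) for s in sigmas)

print(diag_gaussian_entropy([0.1] * 5))    # small 5-D action space
print(diag_gaussian_entropy([0.1] * 61))   # humanoid-scale action space
```

With dozens of actuated joints, small per-dimension changes in policy noise move the total entropy by large amounts, which makes the temperature tuning in SAC considerably more delicate than in low-dimensional control.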
Figure 8: FastSAC. We develop FastSAC, a variant of SAC that incorporates our FastTD3 recipe. We find that FastSAC is significantly faster than vanilla SAC, though still slower than FastTD3.

Sim-to-real RL with FastTD3 For the sim-to-real experiments in Figure 1, we use Booster Gym5, which supports 12-DOF control for a Booster T1 humanoid robot with fixed arms, waist, and heading. For convenience, we ported Booster Gym's robot configuration and reward functions into MuJoCo Playground, which originally supports only 23-DOF T1 control with all joints enabled. We find that training FastTD3 in MuJoCo Playground significantly simplifies and accelerates the iteration cycle compared to training with Booster Gym, which is based on IsaacGym (Makoviychuk et al., 2021).

4 Discussion

We have presented FastTD3, a simple, fast, and capable RL algorithm that efficiently solves a variety of locomotion and manipulation tasks from HumanoidBench (Sferrazza et al., 2024), IsaacLab (Mittal et al., 2023), and MuJoCo Playground (Zakka et al., 2025). We have demonstrated that a simple algorithm, combined with well-tuned hyperparameters and without new architectures or training techniques, can serve as a surprisingly strong baseline for complex robotics tasks. Along with this report, we provide an open-source implementation of FastTD3 – a lightweight and easy-to-use codebase with user-friendly features and pre-configured hyperparameters.

We would like to emphasize that the goal of this work is not to claim novelty or superiority over prior algorithms. Our approach builds directly on insights already established in the research community. Parallel Q-Learning (PQL; Li et al. 2023b) demonstrated how off-policy RL can be effectively scaled using massive parallel simulation, and Parallel Q-Network (PQN; Gallici et al. 2024) made a similar observation for discrete control.
Similarly, Raffin (2025) and Shukla (2025) showed that SAC can also be scaled successfully through parallel simulation and carefully tuned hyperparameters. The aim of this work is to distill those insights into a simple algorithm, provide extensive experimental analysis of various design choices, and release an easy-to-use implementation.

We are excited about several future directions. Importantly, our work is orthogonal to many recent advances in RL, and these improvements can be readily incorporated into FastTD3 to further advance the state of the art. We expect this integration process to be both straightforward and effective. We also look forward to applications of FastTD3 in real-world RL setups. As an off-policy RL algorithm, FastTD3 is well-suited for demo-driven RL setups for humanoid control (Chernyadev et al., 2024; Seo & Abbeel, 2024), as well as for fine-tuning simulation-trained policies through real-world interactions. Finally, the faster iteration cycles of FastTD3 could be useful in iterative inverse RL setups that leverage language models as reward generators (Ma et al., 2023), offering a promising approach to the longstanding challenge of reward design in humanoid control. We hope that our work and implementation help accelerate future RL research in robotics.

5https://github.com/BoosterRobotics/booster_gym

Acknowledgements

This work is supported in part by the Multidisciplinary University Research Initiative (MURI) award by the Army Research Office (ARO) grant No. W911NF-23-1-0277, the ONR Science of Autonomy Program N000142212121, and ONR
MURI N00014-22-1-2773. Pieter Abbeel holds concurrent appointments as a Professor at UC Berkeley and as an Amazon Scholar. This paper describes work performed at UC Berkeley and is not associated with Amazon. We thank NVIDIA for providing compute resources through the NVIDIA Academic DGX Grant.

References

Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Bellemare, Marc G, Dabney, Will, and Munos, Rémi. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, 2017.

Chernyadev, Nikita, Backshall, Nicholas, Ma, Xiao, Lu, Yunfan, Seo, Younggyo, and James, Stephen. BiGym: A demo-driven mobile bi-manual manipulation benchmark. In Conference on Robot Learning, 2024.

D'Oro, Pierluca, Schwarzer, Max, Nikishin, Evgenii, Bacon, Pierre-Luc, Bellemare, Marc G, and Courville, Aaron. Sample-efficient reinforcement learning by breaking the replay ratio barrier. In International Conference on Learning Representations, 2023.

Fujimoto, Scott, Hoof, Herke, and Meger, David. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, 2018.

Fujimoto, Scott, D'Oro, Pierluca, Zhang, Amy, Tian, Yuandong, and Rabbat, Michael. Towards general-purpose model-free reinforcement learning. In International Conference on Learning Representations, 2025.

Gallici, Matteo, Fellows, Mattie, Ellis, Benjamin, Pou, Bartomeu, Masmitja, Ivan, Foerster, Jakob Nicolaus, and Martin, Mario. Simplifying deep temporal difference learning. arXiv preprint arXiv:2407.04811, 2024.

Haarnoja, Tuomas, Zhou, Aurick, Hartikainen, Kristian, Tucker, George, Ha, Sehoon, Tan, Jie, Kumar, Vikash, Zhu, Henry, Gupta, Abhishek, Abbeel, Pieter, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.

Hafner, Danijar, Pasukonis, Jurgis, Ba, Jimmy, and Lillicrap, Timothy. Mastering diverse domains through world models.
arXiv preprint arXiv:2301.04104, 2023.

Hansen, Nicklas, Su, Hao, and Wang, Xiaolong. TD-MPC2: Scalable, robust world models for continuous control. In International Conference on Learning Representations, 2024.

Heess, Nicolas, Tb, Dhruva, Sriram, Srinivasan, Lemmon, Jay, Merel, Josh, Wayne, Greg, Tassa, Yuval, Erez, Tom, Wang, Ziyu, Eslami, SM, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.

Hester, Todd, Vecerik, Matej, Pietquin, Olivier, Lanctot, Marc, Schaul, Tom, Piot, Bilal, Horgan, Dan, Quan, John, Sendonaris, Andrew, Osband, Ian, et al. Deep Q-learning from demonstrations. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

Huang, Shengyi, Dossa, Rousslan Fernand Julien, Ye, Chang, Braga, Jeff, Chakraborty, Dipam, Mehta, Kinal, and Araújo, João G.M. CleanRL: High-quality single-file implementations of deep reinforcement learning algorithms. Journal of Machine Learning Research, 23(274):1–18, 2022. URL http://jmlr.org/papers/v23/21-1342.html.

Hwangbo, Jemin, Lee, Joonho, Dosovitskiy, Alexey, Bellicoso, Dario, Tsounis, Vassilios, Koltun, Vladlen, and Hutter, Marco. Learning agile and dynamic motor skills for legged robots. Science Robotics, 2019.

Kaufmann, Elia, Bauersfeld, Leonard, Loquercio, Antonio, Müller, Matthias, Koltun, Vladlen, and Scaramuzza, Davide. Champion-level drone racing using deep reinforcement learning. Nature, 620(7976):982–987, 2023.

Lee, Hojoon, Hwang, Dongyoon, Kim, Donghu, Kim, Hyunseung, Tai, Jun Jet, Subramanian, Kaushik, Wurman, Peter R, Choo, Jaegul, Stone, Peter, and Seno, Takuma. Simba: Simplicity bias for scaling up parameters in deep reinforcement learning. In International Conference on Learning Representations, 2024.

Lee, Hojoon, Lee, Youngdo, Seno, Takuma, Kim, Donghu, Stone, Peter, and Choo, Jaegul. Hyperspherical normalization for scalable
deep reinforcement learning. In International Conference on Machine Learning, 2025.

Li, Tianhong, Chang, Huiwen, Mishra, Shlok Kumar, Zhang, Han, Katabi, Dina, and Krishnan, Dilip. MAGE: Masked generative encoder to unify representation learning and image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023a.

Li, Zechu, Chen, Tao, Hong, Zhang-Wei, Ajay, Anurag, and Agrawal, Pulkit. Parallel Q-learning: Scaling off-policy reinforcement learning under massively parallel simulation. In International Conference on Machine Learning, pp. 19440–19459. PMLR, 2023b.

Lyle, Clare, Zheng, Zeyu, Khetarpal, Khimya, Martens, James, van Hasselt, Hado P, Pascanu, Razvan, and Dabney, Will. Normalization and effective learning rates in reinforcement learning. Advances in Neural Information Processing Systems, 37:106440–106473, 2024.

Ma, Yecheng Jason, Liang, William, Wang, Guanzhi, Huang, De-An, Bastani, Osbert, Jayaraman, Dinesh, Zhu, Yuke, Fan, Linxi, and Anandkumar, Anima. Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931, 2023.

Makoviychuk, Viktor, Wawrzyniak, Lukasz, Guo, Yunrong, Lu, Michelle, Storey, Kier, Macklin, Miles, Hoeller, David, Rudin, Nikita, Allshire, Arthur, Handa, Ankur, and State, Gavriel. Isaac Gym: High performance GPU-based physics simulation for robot learning, 2021.

Mittal, Mayank, Yu, Calvin, Yu, Qinxi, Liu, Jingzhou, Rudin, Nikita, Hoeller, David, Yuan, Jia Lin, Singh, Ritvik, Guo, Yunrong, Mazhar, Hammad, Mandlekar, Ajay, Babich, Buck, State, Gavriel, Hutter, Marco, and Garg, Animesh. Orbit: A unified simulation framework for interactive robot learning environments. IEEE Robotics and Automation Letters, 8(6):3740–3747, 2023. doi: 10.1109/LRA.2023.3270034.

Nauman, Michal, Bortkiewicz, Michał, Miłoś, Piotr, Trzcinski, Tomasz, Ostaszewski, Mateusz, and Cygan, Marek.
Overestimation, overfitting, and plasticity in actor-critic: the bitter lesson of reinforcement learning. In Proceedings of the 41st International Conference on Machine Learning, PMLR 235:37342–37364, 2024a. URL https://arxiv.org/pdf/2403.00514.

Nauman, Michal, Ostaszewski, Mateusz, Jankowski, Krzysztof, Miłoś, Piotr, and Cygan, Marek. Bigger, regularized, optimistic: scaling for compute and sample-efficient continuous control. In Advances in Neural Information Processing Systems, 2024b.

Paszke, Adam, Gross, Sam, Massa, Francisco, Lerer, Adam, Bradbury, James, Chanan, Gregory, Killeen, Trevor, Lin, Zeming, Gimelshein, Natalia, Antiga, Luca, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.

Pinto, Lerrel, Andrychowicz, Marcin, Welinder, Peter, Zaremba, Wojciech, and Abbeel, Pieter. Asymmetric actor critic for image-based robot learning. arXiv preprint arXiv:1710.06542, 2017.

Raffin, Antonin. Getting SAC to work on a massive parallel simulator: An RL journey with off-policy algorithms. araffin.github.io, Feb 2025. URL https://araffin.github.io/post/sac-massive-sim/.

Raffin, Antonin, Hill, Ashley, Gleave, Adam, Kanervisto, Anssi, Ernestus, Maximilian, and Dormann, Noah. Stable-Baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html.

Rudin, Nikita, Hoeller, David, Reist, Philipp, and Hutter, Marco. Learning to walk in minutes using massively parallel deep reinforcement learning. In Proceedings of the 5th Conference on Robot Learning, volume 164 of Proceedings of Machine Learning Research, pp. 91–100. PMLR, 2022. URL https://proceedings.mlr.press/v164/rudin22a.html.

Schulman, John, Wolski, Filip, Dhariwal, Prafulla, Radford, Alec, and Klimov, Oleg. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Schwarzer, Max, Ceron, Johan Samir Obando, Courville, Aaron, Bellemare, Marc G, Agarwal, Rishabh, and Castro, Pablo Samuel. Bigger,
better, faster: Human-level Atari with human-level efficiency. In International Conference on Machine Learning, 2023.

Seo, Younggyo and Abbeel, Pieter. Reinforcement learning with action sequence for data-efficient robot learning. arXiv preprint arXiv:2411.12155, 2024.

Sferrazza, Carmelo, Huang, Dun-Ming, Lin, Xingyu, Lee, Youngwoon, and Abbeel, Pieter. HumanoidBench: Simulated humanoid benchmark for whole-body locomotion and manipulation. In Robotics: Science and Systems, 2024.

Shukla, Arth. Speeding up SAC with massively parallel simulation. arthshukla.substack.com, Mar 2025. URL https://arthshukla.substack.com/p/speeding-up-sac-with-massively-parallel.

Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.

Sutton, Richard S and Barto, Andrew G. Reinforcement Learning: An Introduction. MIT Press, 2018.

Voelcker, Claas A, Hussing, Marcel, Eaton, Eric, Farahmand, Amir-massoud, and Gilitschenski, Igor. MAD-TD: Model-augmented data stabilizes high update ratio RL. In International Conference on Learning Representations, 2025.

Zakka, Kevin, Tabanpour, Baruch, Liao, Qiayuan, Haiderbhai, Mustafa, Holt, Samuel, Luo, Jing Yuan, Allshire, Arthur, Frey, Erik, Sreenath, Koushil, Kahrs, Lueder A, et al. MuJoCo Playground. arXiv preprint arXiv:2502.08844, 2025.

Zhuang, Zifeng, Shi, Diyuan, Suo, Runze, He, Xiao, Zhang, Hongyin, Wang, Ting, Lyu, Shangke, and Wang, Donglin. TDMPBC: Self-imitative reinforcement learning for humanoid robot control. arXiv preprint arXiv:2502.17322, 2025.
A Additional Results

A.1 HumanoidBench

Figure 9: HumanoidBench results.
We provide learning curves on 39 tasks from HumanoidBench (Sferrazza et al., 2024). The solid line and shaded regions represent the mean and standard deviation across three runs.

A.2 IsaacLab
Figure 10: IsaacLab results. We provide learning curves on six tasks from IsaacLab (Mittal et al., 2023). The solid line and shaded regions represent the mean and standard deviation across three runs.

A.3 MuJoCo Playground

Figure 11: MuJoCo Playground results. We provide learning curves on four tasks from MuJoCo Playground (Zakka et al., 2025). The solid line and shaded regions represent the mean and standard deviation across three runs.
Pre-training for Recommendation Unlearning

Guoxuan Chen, The University of Hong Kong, Hong Kong. guoxchen@foxmail.com
Lianghao Xia, The University of Hong Kong, Hong Kong. aka_xia@foxmail.com
Chao Huang*, The University of Hong Kong, Hong Kong. chaohuang75@gmail.com

Abstract

Modern recommender systems powered by Graph Neural Networks (GNNs) excel at modeling complex user-item interactions, yet increasingly face scenarios requiring selective forgetting of training data. Beyond user requests to remove specific interactions due to privacy concerns or preference changes, regulatory frameworks mandate recommender systems' ability to eliminate the influence of certain user data from models. This recommendation unlearning challenge presents unique difficulties, as removing connections within interaction graphs creates ripple effects throughout the model, potentially impacting recommendations for numerous users. Traditional approaches suffer from significant drawbacks: fragmentation methods damage graph structure and diminish performance, while influence-function techniques make assumptions that may not hold in complex GNNs, particularly with self-supervised or random architectures. To address these limitations, we propose a novel model-agnostic pre-training paradigm, UnlearnRec, that prepares systems for efficient unlearning operations. Our Influence Encoder takes unlearning requests together with existing model parameters and directly produces updated parameters of the unlearned model with little fine-tuning, avoiding complete retraining while preserving model performance characteristics. Extensive evaluation on public benchmarks demonstrates that our method delivers exceptional unlearning effectiveness while providing more than 10x speedup compared to retraining approaches. We release our method implementation at: https://github.com/HKUDS/UnlearnRec.

CCS Concepts

• Information systems → Recommender systems; Collaborative filtering; • Security and privacy → Privacy protections.
Keywords

Machine Unlearning, Recommender Systems, Pre-training

ACM Reference Format:
Guoxuan Chen, Lianghao Xia, and Chao Huang. 2025. Pre-training for Recommendation Unlearning. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '25), July 13–18, 2025, Padua, Italy. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3726302.3730060

*Chao Huang is the Corresponding Author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. SIGIR '25, Padua, Italy. © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-1592-1/2025/07. https://doi.org/10.1145/3726302.3730060

1 Introduction

Recommender systems (RS) [11, 19, 37] have emerged as critical infrastructure in modern digital landscapes, driving user engagement and satisfaction across various platforms including e-commerce, streaming services, and social media. These systems utilize vast amounts of user data to generate personalized suggestions, leveraging learning algorithms to predict user preferences and behaviors. However, growing concerns about data privacy and the legal recognition of "the right to be forgotten" [16] through regulations like the General Data Protection Regulation [23] and the Personal Information Protection and Electronic Documents Act [1] have fundamentally challenged these
systems. Specifically, there is a growing need for recommendation unlearning, which involves removing specific user interactions and preferences from trained recommender models in compliance with user requests or regulatory requirements.

Graph Neural Networks (GNNs) have advanced Collaborative Filtering, establishing themselves as the leading paradigm for modern recommendation systems [5, 33, 34, 39]. These models excel at capturing complex user-item interaction graphs, yet implementing recommendation unlearning presents significant challenges due to their interconnected architecture. When users delete interactions, the process extends beyond removing data entries: the system must recalibrate recommendation probabilities for related items while ensuring deleted information becomes completely unrecoverable from both storage and inference capabilities. Crucially, this unlearning process must preserve overall prediction accuracy while making these adjustments. The challenge intensifies in GNN-based recommender systems, where embeddings form an interdependent graph, collectively refined through training iterations, creating an information web that requires careful disentanglement.

Addressing recommendation unlearning challenges demands algorithms that can update models efficiently without complete retraining, a requirement that would otherwise be computationally prohibitive in real-world applications. Current approaches fall into two primary categories: i) Partition-based Methods (P-based methods) [2, 4, 7, 10] and ii) Influence Function-based Methods (IF-based methods) [30–32, 38]. P-based techniques strategically divide the interaction graph into multiple shards; when unlearning requests arrive, only affected shards undergo retraining before being reintegrated for prediction. However, this partitioning inevitably disrupts the graph's natural topology, degrading recommendation precision.
Additionally, requests spanning numerous shards significantly increase processing time, limiting practical application. IF-based methods, while computationally faster, employ mathematical approximations to estimate unlearning impacts. These approaches face three critical limitations: their underlying mathematical assumptions often fail in complex real-world scenarios; they struggle with Self-Supervised Learning (SSL)-based GNNs, where random network structures and supervision signals generated at runtime undermine gradient-estimation accuracy; and they typically impose substantially higher memory requirements. These limitations highlight the urgent need for more robust and efficient unlearning solutions that maintain recommendation quality while respecting user privacy.

To address these challenges, we introduce a novel unlearning paradigm called UnlearnRec, a learnable, model-agnostic framework built around a pre-trained Influence Encoder (IE). In our UnlearnRec, the IE can be pre-trained in advance and then either deployed directly or quickly fine-tuned when unlearning requests emerge. The core of our solution is the Influence Encoder, a learnable module specifically designed to predict how unlearning requests will impact GNN embeddings. We develop this encoder through a two-stage process of comprehensive pre-training followed by targeted fine-tuning when needed. The key contributions of our research are as follows:
• We introduce the first pretraining-based learnable paradigm for recommendation unlearning that efficiently processes requests and shifts embedding distributions while preserving accuracy.
• Our UnlearnRec framework is model-agnostic, enabling effective unlearning in state-of-the-art recommender systems.
• Comprehensive experiments demonstrate UnlearnRec's superior performance across different dimensions.
2 GNN-based Recommendation Unlearning

2.1 Interaction Graph for Recommendation

GNN-based methods have been shown to be the most effective solutions for CF [6, 34]. In the CF paradigm, a user set $\mathcal{U}$ ($|\mathcal{U}| = I$),
an item set $\mathcal{V}$ ($|\mathcal{V}| = J$), and a user-item interaction matrix $A \in \mathbb{R}^{I \times J}$ are utilized to represent the historical interaction records, based on which the encoders derive embeddings and make predictions. For each entry $a_{i,j}$ in the interaction matrix $A$, $a_{i,j} = 1$ if user $u_i \in \mathcal{U}$ has interacted with item $v_j \in \mathcal{V}$, and $a_{i,j} = 0$ otherwise. The interaction edge set $\mathcal{E}$ corresponds one-to-one with the adjacency matrix $A$: if $a_{i,j} = 1$ in $A$, there exists an undirected edge $(u_i, v_j) \in \mathcal{E}$. GNNs have demonstrated their superiority in modeling this sort of interaction graph, denoted $\mathcal{G} = (\mathcal{U}, \mathcal{V}, \mathcal{E})$, in which $\mathcal{U}$, $\mathcal{V}$ are the two sets of graph nodes and $\mathcal{E}$ is the edge set.

2.2 SSL-based GNN Recommender

Based on the interaction graph data, a GNN first initializes an embedding vector for each node (user/item in $\mathcal{U}$, $\mathcal{V}$). It then performs multiple rounds of forward propagation and fusion along the edges of the graph, resulting in the final last-layer representation for each node. The final embeddings incorporate not only the information of the nodes themselves but also that of their multi-order neighbors, thereby encapsulating the structural information of the graph.

$e_{i,l} = \sum_{(v_j, u_i) \in \mathcal{E}} \frac{1}{\sqrt{\delta_i \delta_j}} e_{j,l-1}, \qquad e_{j,l} = \sum_{(u_i, v_j) \in \mathcal{E}} \frac{1}{\sqrt{\delta_i \delta_j}} e_{i,l-1}$  (1)

$E_l = D^{-\frac{1}{2}} \cdot \bar{A} \cdot D^{-\frac{1}{2}} \cdot E_{l-1}, \qquad u_i \in \mathcal{U}, \ v_j \in \mathcal{V}$  (2)

Eq. 1 shows an example GNN layer, the most widely applied backbone architecture [13] for most GNN models [5, 20, 29, 35]. $e_{i,l}, e_{i,l-1} \in \mathbb{R}^d$ denote the $l$-layer and $(l-1)$-layer embedding vectors for user $u_i$, and analogously for item $v_j$; thus $e_{i,0}, e_{j,0}$ represent the initial embeddings of $u_i$ and $v_j$. $\delta_i, \delta_j$ denote the degrees of nodes $u_i, v_j$, used for Laplacian normalization. Eq. 2 gives the corresponding matrix form, where $\bar{A} \in \mathbb{R}^{(I+J) \times (I+J)}$ is the symmetric adjacency matrix for graph $\mathcal{G}$, derived from the interaction matrix $A$ [28], $D$ is the diagonal degree matrix of $\bar{A}$, and $E_l \in \mathbb{R}^{(I+J) \times d}$ denotes the node embedding matrix of layer $l$, each row of which is an embedding for a node, i.e., $e_{i,l}$ or $e_{j,l}$.
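To make the propagation rule concrete, here is a minimal NumPy sketch of Eqs. 1–3: symmetric normalization, layer-wise propagation, and the summed readout with dot-product scoring. It uses a dense adjacency for readability; the function names (`symmetric_normalize`, `propagate`, `readout_and_score`) and the toy graph are illustrative, not from the paper.

```python
import numpy as np

def symmetric_normalize(A_bar):
    """D^{-1/2} . A_bar . D^{-1/2}; isolated nodes get zero rows."""
    deg = A_bar.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    return d_inv_sqrt[:, None] * A_bar * d_inv_sqrt[None, :]

def propagate(E0, A_bar, L=3):
    """Eq. 2: E_l = D^{-1/2} A_bar D^{-1/2} E_{l-1}; returns [E_0, ..., E_{L-1}]."""
    A_norm = symmetric_normalize(A_bar)
    layers = [E0]
    for _ in range(L - 1):
        layers.append(A_norm @ layers[-1])
    return layers

def readout_and_score(layers, num_users):
    """Eq. 3: E_bar = sum_l E_l; scores y_hat = user block @ item block^T."""
    E_bar = sum(layers)
    return E_bar[:num_users] @ E_bar[num_users:].T

# Toy graph: 2 users, 2 items, interactions (u0,v0), (u0,v1), (u1,v1).
I, J, d = 2, 2, 4
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A_bar = np.block([[np.zeros((I, I)), A],
                  [A.T, np.zeros((J, J))]])
rng = np.random.default_rng(0)
E0 = rng.normal(size=(I + J, d))
scores = readout_and_score(propagate(E0, A_bar, L=3), I)
print(scores.shape)  # (2, 2): one predicted score per user-item pair
```

The symmetric normalization mirrors the $1/\sqrt{\delta_i \delta_j}$ weighting in Eq. 1; stacking the $(I+J) \times (I+J)$ block matrix is one common way to realize $\bar{A}$ from $A$.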
After $L$ layers of iteration, the GNN aggregates the multi-order embeddings to output the final embeddings $\bar{e}_i, \bar{e}_j \in \mathbb{R}^d$, corresponding to row vectors in $\bar{E}$, and the user-item relation prediction score $\hat{y}_{i,j}$:

$\hat{y}_{i,j} = \bar{e}_i^{\top} \bar{e}_j, \qquad \bar{e}_i, \bar{e}_j \in \bar{E}, \qquad \bar{E} = \sum_{l=0}^{L-1} E_l$  (3)

The BPR loss [24] is widely employed over observed interaction pairs $(u_i, v_{j^+}) \in \mathcal{E}$ and sampled negative pairs $(u_i, v_{j^-})$ to optimize the GNN:

$\mathcal{L}_{bpr} = \sum_{(u_i, v_{j^+}, v_{j^-})} -\log \mathrm{sigm}(\hat{y}_{i,j^+} - \hat{y}_{i,j^-})$  (4)

where $\mathrm{sigm}(\cdot)$ denotes the sigmoid function. By minimizing Eq. 4, GNN recommenders can produce excellent performance, and nowadays SSL-based enhancement modules are widely added to GNNs to further achieve state-of-the-art results:

$E_{v1} = \mathrm{View}_1(\{E_l \mid l \in [0, L-1]\}), \qquad E_{v2} = \mathrm{View}_2(\{E_l \mid l \in [0, L-1]\})$  (5)

$\mathcal{L}_{contrast} = \mathrm{Cross\text{-}view\text{-}similarity}(E_{v1}, E_{v2})$  (6)

Eqs. 5–6 give an example SSL objective: the model creates two views of the embeddings (Eq. 5) and then optimizes a contrastive loss between them (Eq. 6). Many other SSL designs exist as well [20, 29, 35]. Of utmost importance is that SSL-based GNNs usually generate random or data-dependent network structures and supervision signals inside the $\mathrm{View}_{1|2}$ functions at runtime, making it hard to precisely estimate the gradients for the trainable embeddings by manual computation without actual training (see Sec. 2.3). To recap, we use $\mathcal{M}odel(E_0, \bar{A})$ to represent GNN recommender systems of all kinds.

2.3 SSL-based Recommendation Unlearning

In real-life scenarios, there is a constant demand from users to undo their interaction records in order to delete accidental inputs or mistakes, change preferences, or simply protect their privacy. This corresponds to deleting edges in the graph $\mathcal{G}$. We
use $\mathcal{E}_\Delta$ to denote the set of edges to be unlearned and $\mathcal{E}_r$ the remaining edges, i.e., $\mathcal{E}_\Delta = \mathcal{E} \setminus \mathcal{E}_r$. In practice, pairs in $\mathcal{E}$ not only form the adjacency matrix for the GNN's forward propagation but also serve as positive supervisory signals in the loss function (Eq. 4). Dropping a subset of edges therefore not only changes the graph structure but also turns the original positive labels into negative ones, inevitably shifting the embedding distribution. To avoid retraining, unlearning methods are proposed to fulfill the following requirements:
• They must shift embeddings so that unlearned edges are predicted as negatives, while maintaining a distribution similar to retraining.
• The unlearned models must still achieve good prediction performance.
• The unlearning process must be more efficient than retraining.

In recommender systems we mainly focus on edge unlearning; when some nodes need to be unlearned, we simply unlearn all edges associated with them and delete the nodes directly.

For SSL-based GNNs, it is difficult to accurately estimate, via a manually designed end-to-end function (i.e., influence functions as in [30, 31]), the difference between the old well-trained embeddings before unlearning and the embeddings newly obtained after unlearning through retraining, because the complicated and randomized SSL network structures and signals produce unforeseeable embedding distributions at runtime through iterative training. The distribution shift estimated by IF methods is therefore inaccurate.

3 Methodology

Our goal is to build a model-agnostic pre-training paradigm that produces a well-pretrained encoder $IE(\cdot)$, which takes the unlearning requests and the original embeddings and outputs the unlearned embeddings directly (or for further fine-tuning; see Fig. 1).
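The edge bookkeeping above can be sketched in a few lines of NumPy: given the full edge set $\mathcal{E}$ and a batch of unlearning requests, build the symmetric adjacency of the unlearned edges and of the residual graph. The function names (`build_sym_adj`, `split_for_unlearning`) and the toy edge list are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_sym_adj(edges, num_users, num_items):
    """Symmetric (I+J)x(I+J) adjacency from user-item edge pairs (Sec. 2.1)."""
    n = num_users + num_items
    A_bar = np.zeros((n, n))
    for u, v in edges:
        A_bar[u, num_users + v] = 1.0
        A_bar[num_users + v, u] = 1.0
    return A_bar

def split_for_unlearning(edges, requests, num_users, num_items):
    """Given E and unlearning requests E_delta, return the adjacency of
    E_delta and of the residual edge set E_r = E \\ E_delta."""
    e_delta = set(requests) & set(edges)
    e_r = set(edges) - e_delta
    return (build_sym_adj(e_delta, num_users, num_items),
            build_sym_adj(e_r, num_users, num_items))

# Toy example: three interactions; unlearn the single pair (u0, v1).
edges = [(0, 0), (0, 1), (1, 1)]
A_delta, A_r = split_for_unlearning(edges, [(0, 1)], num_users=2, num_items=2)
print(int(A_delta.sum()), int(A_r.sum()))  # 2 4  (one vs. two undirected edges)
```

Each undirected edge contributes two nonzero entries, so the sums count edges twice; `A_delta` is exactly the matrix that Sec. 3.1 will name the Influence Dependency Matrix.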
3.1 Influence Dependency Matrix

Here we empirically derive a matrix that serves both to construct the IE and to formalize the unlearning requests. From the BPR loss (Eq. 4), we can derive the approximate gradient over a positive interaction pair $(u_i, v_{j^+})$ and a negative pair $(u_i, v_{j^-})$:

$\nabla_{\bar{e}_i} = \frac{\partial \mathcal{L}_{bpr}}{\partial \bar{e}_i} = \left[\mathrm{sigm}(\hat{y}_{i,j^+} - \hat{y}_{i,j^-}) - 1\right] \cdot (\bar{e}_{j^+} - \bar{e}_{j^-})$  (7)

$= c \cdot (\bar{e}_{j^+} - \bar{e}_{j^-}), \qquad \mathrm{sigm}(\cdot) < 1, \ c < 0$  (8)

These gradients are backpropagated to every node along the graph structure during training. However, once the interaction $(u_i, v_{j^+})$ needs to be unlearned, $\bar{e}_{j^+}$ should be turned to the negative side, so a compensatory gradient $\nabla^{(c)}_{\bar{e}_i}$ must be considered:

$\nabla'_{\bar{e}_i} = \nabla_{\bar{e}_i} - \nabla^{(c)}_{\bar{e}_i}, \qquad \nabla^{(c)}_{\bar{e}_i} = c' \bar{e}_{j^+}, \qquad c' < c < 0$  (9)

Similarly, when a number of interaction pairs $\{(u_i, v_{j^+_1}), \ldots, (u_i, v_{j^+_n})\}$ need to be unlearned, the compensated gradient is roughly:

$\nabla'_{\bar{e}_i} = \nabla_{\bar{e}_i} - \nabla^{(c)}_{\bar{e}_i} = \nabla_{\bar{e}_i} - \left( c'_{i,j^+_1} \bar{e}_{j^+_1} + c'_{i,j^+_2} \bar{e}_{j^+_2} + \cdots + c'_{i,j^+_n} \bar{e}_{j^+_n} \right)$  (10)

The above is merely a first-order estimation. Inspired by Eq. 8, however, we can observe that, since $c < 0$, the training update $\bar{e}^{(t+1)}_i = \bar{e}^{(t)}_i - \eta c \cdot (\bar{e}^{(t)}_{j^+} - \bar{e}^{(t)}_{j^-})$ (learning rate $\eta > 0$) is a process of increasing embedding similarity between observed pairs $(u_i, v_{j^+})$ and decreasing it between negative pairs $(u_i, v_{j^-})$, i.e., pulling $\bar{e}_i$ closer to $\bar{e}_{j^+}$ and away from $\bar{e}_{j^-}$, which aligns with the design philosophy of GNNs. Moreover, Eq. 9 shows that unlearning $(u_i, v_{j^+})$ means pushing the already-pulled-closer $\bar{e}_{j^+}$ away again:

$\bar{e}^{(u)}_i \approx \bar{e}^{(t)}_i - \eta \nabla'_{\bar{e}^{(t)}_i} = \bar{e}^{(t)}_i - \eta c (\bar{e}^{(t)}_{j^+} - \bar{e}^{(t)}_{j^-}) + \eta c' \bar{e}^{(t)}_{j^+}$  (11)

$= \bar{e}^{(t)}_i - \eta (c - c') \bar{e}^{(t)}_{j^+} + \eta c \bar{e}^{(t)}_{j^-} = \bar{e}^{(t)}_i - \tilde{c}_1 \bar{e}^{(t)}_{j^+} - \tilde{c}_2 \bar{e}^{(t)}_{j^-}$  (12)

Eq. 12 is based on Eq. 9, where $\bar{e}^{(u)}_i$ denotes the estimated unlearned embedding, and $\bar{e}^{(t)}_i, \bar{e}^{(t)}_{j^+}, \bar{e}^{(t)}_{j^-}$ are the trained
embeddings before unlearning, with $\tilde{c}_1, \tilde{c}_2 > 0$. Since the original negative pair $(u_i, v_{j^-})$ has already been pushed apart, it is unnecessary to do so again; the only part valuable for unlearning is $\bar{e}^{(u)}_i \approx \bar{e}^{(t)}_i - \tilde{c}_1 \bar{e}^{(t)}_{j^+}$, because only $(u_i, v_{j^+})$ changes from positive to negative. Extending Eq. 12 with Eq. 10, we can similarly obtain:

$\bar{e}^{(u)}_i \approx \bar{e}^{(t)}_i - \tilde{c}_{i,j^+_1} \bar{e}^{(t)}_{j^+_1} - \tilde{c}_{i,j^+_2} \bar{e}^{(t)}_{j^+_2} - \cdots - \tilde{c}_{i,j^+_n} \bar{e}^{(t)}_{j^+_n}$  (13)

$\bar{E}^{(u)} \approx \bar{E}^{(t)} - \left( \tilde{C} \odot \bar{A}_{\Delta} \right) \cdot \bar{E}^{(t)}$  (14)

From Eqs. 13–14, we find that the matrix $\bar{A}_\Delta$ is exactly the symmetric adjacency matrix constructed from the unlearning edge set $\mathcal{E}_\Delta$. $\tilde{C}$ is the coefficient matrix and $\odot$ denotes element-wise multiplication. Most importantly, $\bar{A}_\Delta$ reflects the interdependence among all unlearning requests, which collaboratively influence the direction of embedding distribution drift during unlearning. We therefore define $\bar{A}_\Delta$ as the Influence Dependency Matrix (IDM); the graph it defines is correspondingly called the Influence Dependency Graph (IDG), and the graph built from $\mathcal{E}_r$ is referred to as the Residual Graph (RG). $\bar{A}$ takes effect during learning, while $\bar{A}_\Delta$ takes effect during unlearning, serving as a low-pass filter.

3.2 Learnable Influence Estimation

3.2.1 Trainable Influence Encoder. Inspired by the IDM, we empirically propose a trainable encoder $\tilde{E}_0 = IE(\bar{A}_\Delta, E_0)$ that takes as inputs the unlearning requests $\mathcal{E}_\Delta$ (used to directly construct the IDM $\bar{A}_\Delta$) and the original 0-layer embeddings of the trained model $M$ to be unlearned, and outputs the revised 0-layer embeddings $\tilde{E}_0$.

$\bar{H} = \sum_{l=0}^{L_u - 1} H_l, \qquad H_l = D_{\Delta}^{-\frac{1}{2}} \cdot \bar{A}_{\Delta} \cdot D_{\Delta}^{-\frac{1}{2}} \cdot H_{l-1}$  (15)

where $D_\Delta$ is the diagonal degree matrix of $\bar{A}_\Delta$ and $H_l \in \mathbb{R}^{(I+J) \times d}$ denotes the trainable Influence Estimation Matrix (IEM), which is iteratively propagated along the graph IDG for $L_u$ iterations, resulting in the readout IEM $\bar{H}$, in which each row represents a trainable influence-estimation vector for a node (user/item) affected by unlearning, e.g., $h_i, h_j \in \bar{H}$ for user $i$ and item $j$.
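The IEM propagation and readout of Eq. 15 can be sketched as follows, a minimal NumPy version in which the trainable $H_0$ is modeled as a plain array rather than a learnable parameter; `iem_readout` is an illustrative name, not from the paper.

```python
import numpy as np

def iem_readout(H0, A_delta, L_u=2):
    """Eq. 15: H_l = D_delta^{-1/2} A_delta D_delta^{-1/2} H_{l-1};
    returns the readout H_bar = sum_{l=0}^{L_u-1} H_l."""
    deg = A_delta.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    A_norm = d_inv_sqrt[:, None] * A_delta * d_inv_sqrt[None, :]
    H, H_bar = H0, H0.copy()
    for _ in range(L_u - 1):
        H = A_norm @ H        # one hop of influence propagation over the IDG
        H_bar = H_bar + H     # accumulate the multi-order readout
    return H_bar

# Single unlearned edge between node 0 (a user) and node 1 (an item):
A_delta = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
H0 = np.eye(2)  # stand-in for the trainable IEM initialization
H_bar = iem_readout(H0, A_delta, L_u=2)
```

With one undirected edge and $L_u = 2$, the readout is simply $H_0$ plus its one-hop propagation, so each node's row mixes in its unlearning neighbor's influence vector.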
Multiple iterations of propagation and aggregation inject more multi-hop influence dependencies and structural information of the IDG into the IEM. Similarly, we introduce:

$E_{w,l} = D_{\Delta}^{-\frac{1}{2}} \bar{A}_{\Delta} D_{\Delta}^{-\frac{1}{2}} \cdot E_{w,l-1}, \qquad E_{w,0} = \bar{E} \odot W_{\eta}, \qquad l \in [0, L_e)$  (16)

where $\bar{E} \in \mathbb{R}^{(I+J) \times d}$ is the readout embedding matrix of model $M$ before unlearning, here held fixed and non-trainable, and $W_\eta \in \mathbb{R}^{(I+J) \times 1}$ is a trainable weight initialized with small values around 0. Eq. 16 is inspired by Eq. 14 but injects higher-order information of the interdependent influences. We let $\bar{E}_w := E_{w,L_e-1}$, taking the last layer as readout. Combining Eq. 15 and Eq. 16, we have the final estimation:

$\tilde{E}_0 = \Delta\bar{E}_0 + E_0, \qquad \Delta\bar{E}_0 = \mathrm{MLP}(\Delta E_0), \qquad \Delta E_0 = -\bar{E}_w + \bar{H}$  (17)

$\mathrm{MLP}(X): \ X_l = \delta(W_{l-1} X_{l-1} + b_{l-1}), \qquad l \in [0, L_m)$  (18)

where $\tilde{E}_0$ denotes the revised 0-layer embeddings for $M$ after unlearning and $\mathrm{MLP}(\cdot)$ denotes a multilayer perceptron whose parameters are fixed during pre-training and updated during fine-tuning. The purpose of the entire pipeline from Eq. 15 to Eq. 17 is to quickly calculate the embedding distribution shift $\Delta\bar{E}_0$ caused by unlearning when requests $\mathcal{E}_\Delta$ arrive. The only trainable parameters during pre-training are $H_0$ and $W_\eta$, which are initialized around 0 and should be well pre-trained before unlearning requests come.

Figure 1: Overall framework of the proposed UnlearnRec paradigm.

3.2.2 Process Unlearning. We can directly construct $\bar{A}_\Delta$ upon arrival of unlearning requests $\mathcal{E}_\Delta$ and then calculate $\tilde{E}_0$ through $IE(\bar{A}_\Delta, E_0)$. The unlearned model $M_u$ will be $\mathcal{M}odel(E_0 := \tilde{E}_0, \bar{A} := \bar{A}_r)$, where $\bar{A}_r$ is the symmetric
adjacency matrix built from $\mathcal{E}_r$. The parameters in $IE(\cdot)$ should be well pre-trained in advance.

3.3 Multi-task Loss Functions for Pre-training

We now introduce the loss functions of UnlearnRec's pre-training paradigm, in which the only trainable parameters during pre-training are $H_0$ and $W_\eta$; $\mathrm{MLP}(\cdot)$ is updated during fine-tuning.

3.3.1 Model Loss Function. In Eq. 19, the trained model $M$ to be unlearned is initialized with $\tilde{E}_0$ as the 0-layer embeddings and computes the loss $\mathcal{L}_M$, which encompasses the various loss functions designed for $M$, such as SSL-based losses and the BPR loss (where unlearned edges should be treated as negatives). $\tilde{E}$ is the revised final embedding matrix obtained from the forward pass and readout of $M$ based on $\tilde{E}_0$ and $\bar{A}_r$, the symmetric adjacency matrix built from $\mathcal{E}_r$.

$\mathcal{L}_M = \mathcal{L}oss_M\big(\mathcal{M}odel(\tilde{E}_0, \bar{A}_r)\big), \qquad \tilde{E} = \mathrm{Fwd}_M(\tilde{E}_0, \bar{A}_r)$  (19)

3.3.2 Unlearning Loss. Eq. 20 is referred to as the unlearning loss, which enforces a decrease in the predicted scores of the interaction pairs $(i_\Delta, j_\Delta)$ to be unlearned, as well as in the corresponding embedding similarities between $i_\Delta$ and $j_\Delta$.

$\mathcal{L}_u = \sum_{(u_{i_\Delta}, v_{j_\Delta})} -\log \mathrm{sigm}(-\tilde{e}_{i_\Delta}^{\top} \tilde{e}_{j_\Delta}), \qquad (i_\Delta, j_\Delta) \in \mathcal{E}_\Delta$  (20)

3.3.3 Preserving Loss. In Eq. 21, $\Psi(\bar{E})$ generates a vector describing the embedding distribution of the remaining positive pairs $(u_i, v_{j^+})$ in $\mathcal{E}_r$ based on $M$'s original final embeddings $\bar{E}$. Analogously, $\Psi(\tilde{E})$ describes the distribution of the revised final embeddings $\tilde{E}$.

$\Psi(\bar{E}) = \left[ \cdots, \ \log \frac{\exp(\bar{e}_i^{\top} \bar{e}_{j^+} / \tau)}{\sum_{(*, v_{k^+}) \in \mathcal{E}_r} \exp(\bar{e}_i^{\top} \bar{e}_{k^+} / \tau)}, \ \cdots \right]_{(u_i, v_{j^+}) \in \mathcal{E}_r}$  (21)

We can align the two vectors because the unlearning set $\mathcal{E}_\Delta$ typically represents only a small proportion of the entire $\mathcal{E}$; as a result, the embedding distribution of the remaining, unaffected portion retains a significant amount of useful information. This is beneficial for preserving the predictive performance of $M$ after unlearning.

$\mathcal{L}_p = \mathrm{Align}\big(\Psi(\tilde{E}), \Psi(\bar{E})\big)$  (22)

Eq. 22 is named the preserving loss, and we use the $L_2$ distance as $\mathrm{Align}(\cdot)$.

3.3.4 Contrast Loss.
The distinct aspect of recommendation unlearning, compared to general GNN unlearning, lies in the fact that when a user unlearns certain interactions, this implies, to some extent, a decreased probability of recommending similar interactions.

$H'_l = D_{\Delta'}^{-\frac{1}{2}} \cdot \bar{A}'_{\Delta} \cdot D_{\Delta'}^{-\frac{1}{2}} \cdot H_{l-1}, \qquad \bar{A}'_{\Delta} = \mathrm{Dropout}(\bar{A}_{\Delta})$  (23)

$\mathrm{Dropout}(\cdot)$ randomly removes a small portion (e.g., $\rho$%) of the unlearning edges from $\bar{A}_\Delta$, and the result is used for forward propagation. We then align the $H'_l$ generated via $\mathrm{Dropout}(\cdot)$ with the original $H_l$:

$\mathcal{L}_c = \sum_{i \in \mathcal{U} \cup \mathcal{V}} -\log \frac{\exp(\cos(h_i, h'_i)/\tau)}{\sum_{i' \in \mathcal{U} \cup \mathcal{V}} \exp(\cos(h_{i'}, h'_{i'})/\tau)}$  (24)

where $h_i, h'_i$ denote row vectors of $H_l$ and $H'_l$, respectively. To some extent, this leverages the $H'_l$ generated on $\bar{A}'_\Delta$ to predict the complete $H_l$, thereby implicitly using the partial IDM $\bar{A}'_\Delta$ to complete and predict the full IDM $\bar{A}_\Delta$, which enables the incorporation of more contextual influence and correlations. Summarizing all of the above, we obtain the final loss function for pre-training:

$\mathcal{L} = \mathcal{L}_M + \lambda_u \mathcal{L}_u + \lambda_p \mathcal{L}_p + \lambda_c \mathcal{L}_c = \mathcal{L}oss(M, \bar{A}_{\Delta})$  (25)

where $\mathcal{L}_M$ contains all the SSL random network structures of model $M$, which drives the post-unlearning embeddings to conform to the specific distribution characteristics of the model. $\mathcal{L}_p$ is responsible for maintaining predictive performance while $\mathcal{L}_u$ drives unlearning; the two represent a trade-off. $\mathcal{L}_c$ controls the influence generalization. The final loss $\mathcal{L}$ can be computed as long as the IDM $\bar{A}_\Delta$ and the trained model $M$ to be unlearned are given.

3.4 Pre-training Paradigm for Unlearning

We train the IE during pre-training, and adjust both the IE and the RS model based on the
unlearning requests during fine-tuning.

3.4.1 Pre-training. In each simulation round, we randomly select a subset $\mathcal{E}^{(s)}_\Delta$ from $\mathcal{E}$ (e.g., $\rho$%) as the simulated unlearning set to construct the IDM $\bar{A}^{(s)}_\Delta$. We then optimize the loss function $\mathcal{L}$ to train the parameters. At this stage, only $H_0$ in Eq. 15 and $W_\eta$ in Eq. 16 are trainable, while the other parameters, such as $\bar{E}$, remain fixed. Please refer to lines 1 to 8 of Algorithm 1 for more details.

3.4.2 Fine-tuning. The well-pretrained $IE(\cdot)$ can be directly utilized to perform unlearning. However, to achieve better performance and unlearning efficacy, if necessary we can still fine-tune $IE(\cdot)$ and the unlearned model $M_u$ when actual unlearning requests arrive. During fine-tuning, we usually only need to optimize $\mathcal{L}_M + \lambda_u \mathcal{L}_u$ to fine-tune the $\mathrm{MLP}(\cdot)$ in $IE(\cdot)$ (Eq. 18) and $E_0$ in $M_u$ (Eq. 1), which enables us to accomplish the fine-tuning process very efficiently. Please refer to lines 9 and 10 of Algorithm 1 for more information.

Algorithm 1: Pre-training/Fine-tuning Paradigm of UnlearnRec
Input: Trained model $M$ and graph data $\mathcal{G} = (\mathcal{U}, \mathcal{V}, \mathcal{E})$
Output: Well-pretrained $IE(\cdot)$ and unlearned model $M_u$
1: Initialize $H_0$ in Eq. 15 and $W_\eta$ in Eq. 16 around 0; initialize $W_l, b_l$ in Eq. 18 with the identity matrix and 0, respectively
2: Lock all other parameters, leaving only $H_0, W_\eta$ trainable
3: for run $i_{pre} = 1$ to $N_{pretrain}$ do
4:   Sample a subset $\mathcal{E}^{(s)}_\Delta$ from $\mathcal{E}$ to construct the IDM $\bar{A}^{(s)}_\Delta$
5:   for epoch $j_{tr} = 1$ to $N_{train}$ do
6:     Minimize $\mathcal{L}oss(M, \bar{A}^{(s)}_\Delta)$ (Eq. 25) to update $H_0, W_\eta$
7:   end for
8: end for
9: When actual unlearning requests $\mathcal{E}_\Delta$ come, set $\mathcal{E}_r = \mathcal{E} \setminus \mathcal{E}_\Delta$ and build $\bar{A}_r$
10: Fix all parameters except those in $\mathrm{MLP}(\cdot)$ and $E_0$; minimize $\mathcal{L}_M + \lambda_u \mathcal{L}_u$ based on $(\tilde{E}_0, \bar{A}_r)$ to fine-tune $\mathrm{MLP}(\cdot)$ and $E_0$
11: Return the well-pretrained $IE(\cdot)$ and $M_u = \mathcal{M}odel(\tilde{E}_0, \bar{A}_r)$

4 Evaluation

We conduct extensive experiments to validate our UnlearnRec paradigm and address the following research questions (RQs):
• RQ1: Can the proposed UnlearnRec paradigm effectively preserve the predictive performance of the model after unlearning, compared to baselines and the retrained model?
• RQ2: Can our UnlearnRec paradigm effectively unlearn the requested interactions in comparison to the baseline approaches?
• RQ3: How do the components of the proposed UnlearnRec paradigm affect the effectiveness of unlearning?
• RQ4: How does our UnlearnRec method alter the distribution curve of the embeddings of the model that needs to be unlearned?
• RQ5: How efficient is our UnlearnRec approach in comparison to existing techniques in terms of time and memory?
• RQ6: Can the UnlearnRec method effectively reduce the recommendation probability of items similar to the unlearned items?

4.1 Experimental Settings

4.1.1 Evaluation Datasets. We validate the unlearning effectiveness of our UnlearnRec using three commonly used real-world datasets: MovieLens-1M, Gowalla, and Yelp2018. MovieLens-1M is a widely used public user-behavior collection containing one million movie ratings; the Gowalla dataset comprises user check-in records at geographic locations between January and June 2010, collected from the Gowalla platform; and the Yelp2018 dataset, derived from the Yelp platform, includes user ratings on venues spanning January to June 2018. For training the model $M$ to be unlearned, we allocate 20% of the total interaction edges as the test set, while the remaining edges are utilized for training.

Table 1: Statistical details of experimental datasets.
Dataset | # Users | # Items | # Interactions | Interaction Density
Gowalla | 25557