Problem               Planning Time (s)   Steps   Success Rate
Transport 3           NA                  NA        0%
Transport Collab. 3   NA                  NA        0%
Overcooked 2          102.267             26       80%
Overcooked 3          194.794             28       40%
Rover 1               21.360              39       59%
Rover 2               87.305              38       11%
Rover 3               120.732             49        2%
Rover 4               NA                  NA        0%
Satellite 1           0.036               12      100%
Satellite 2           1.600               21       78%
Satellite 3           6.915               25       44%
Satellite 4           NA                  NA        0%

Table 3: Plan Difficulty of Transport, Overcooked, Rover, and Satellite Problems. Hetero. indicates heterogeneous agents, meaning agents have different capabilities. NA means no successful run occurs within the maximum episode length of 100 steps.

Deployment

After training, the next step is to deploy and evaluate the RL models. HDDLGym supports a variety of standard quantitative evaluation metrics, including the execution time of the RL-assisted planner, the number of planning steps needed to achieve the specified goals, and the success rate, similar to those reported in Table 3. For qualitative assessment, HDDLGym provides a visualization tool, with detailed usage instructions in the codebase, that allows users to examine the action hierarchy of each agent step by step. This visualization can be integrated with the domain renderer, for example in the Overcooked demonstration videos discussed in Section 6.2, to facilitate a more comprehensive evaluation of agent performance during deployment. Together, these quantitative and qualitative tools support in-depth analysis, helping users refine their models and compare performance across different training settings and domain configurations.

7 Discussion and Future Work

We introduced HDDLGym, which transforms HDDL-defined hierarchical problems into Gym environments, enabling the use of RL policies in hierarchical planning systems. By prioritizing scalability in observation and action spaces, HDDLGym makes trade-offs that enhance complexity handling at the cost of a slight accuracy loss in RL models. This flexibility is crucial for tackling intricate tasks.
Additionally, HDDLGym supports multi-agent environments, enriching the framework for studying collaborative dynamics in hierarchical planning and offering engaging RL research scenarios.

HDDLGym currently operates under certain limitations that we aim to address in future developments. First, it can only handle discrete state and action spaces, which restricts its application to scenarios that require continuous or hybrid spaces. Furthermore, HDDLGym assumes a deterministic transition function, meaning that action effects are predictable and do not account for probabilistic outcomes. This limits its applicability to environments where uncertainty and stochastic outcomes are common.

[Figure 4: Training Dynamics Analysis. Figures A, B, and C show the actor and critic training losses for the Transport domain with 1, 2, and 3 agents, highlighting longer convergence times as the number of agents increases (roughly 1200, 2500, and 3200 training cycles, respectively). Figures D and E display the PPO policy's training progression for the 1-agent Transport problem, including cumulative reward, success rate, planning time, and average steps to success.]

Lastly, the use of one-hot encoding for observations as input to an
RL policy restricts its applicability to problems involving similar objects. In many cases, when only dynamic predicates are used for observations, the RL policy is confined to a fixed static condition. Changes like varying agent numbers, adding/removing objects, or altering static conditions require a different RL policy, limiting scalability and adaptability across scenarios within the same domain. Overcoming these challenges will be crucial for expanding HDDLGym's applicability to complex, real-world settings.

In the future, HDDL domains can be learned autonomously through advances in: (i) offline learning of HDDL domains from observations (Grand, Pellier, and Fiorino 2022), (ii) offline learning of HTNs from observations (Li et al. 2014; Zhuo, Muñoz-Avila, and Yang 2014; Li et al. 2022), (iii) online learning of PDDL-like domains (Ng and Petrick 2019; Lamanna et al. 2021; Verma, Marpally, and Srivastava 2021, 2022), and (iv) online learning of HDDL domains from human input with Large Language Models (Fine-Morris et al. 2024; Favier et al. 2025).

As discussed, HDDLGym has limitations that could be addressed to better support complex multi-agent hierarchical problems. An improvement is enabling HDDLGym to handle multiple pairs of HDDL domain and problem files for different agents within a single Gym environment. Inspired by how multi-agent features are added to PDDL and HTNs through MA-PDDL (Kovacs 2012) and MA-HTN (Cardoso and Bordini 2017), respectively, this approach would allow each heterogeneous agent to operate with its own unique pair of HDDL domain and problem files. This capability would enhance HDDLGym's ability to manage complex multi-agent dynamics beyond simple collaboration, supporting scenarios with competition, agent privacy, and distributed context information.

8 Conclusion

In this work, we introduce HDDLGym, a tool for applying RL to hierarchical planning by converting HDDL-defined problems into Gym environments.
Its design balances scalability and functionality, enabling multi-agent interactions and complex task structures. We hope HDDLGym opens new possibilities for studying RL in hierarchical planning, especially in multi-agent contexts.

Acknowledgments

We gratefully acknowledge the financial support of the Office of Naval Research (ONR), under grant N000142312883, and sincerely thank Pulkit Verma for his valuable insights and feedback on the project.

References

Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540.

Cardoso, R. C.; and Bordini, R. H. 2017. A Multi-Agent Extension of a Hierarchical Task Network Planning Formalism. Advances in Distributed Computing and Artificial Intelligence Journal, 6(2): 5–17.

Carroll, M.; Shah, R.; Ho, M. K.; Griffiths, T.; Seshia, S.; Abbeel, P.; and Dragan, A. 2019. On the Utility of Learning About Humans for Human-AI Coordination. In Advances in Neural Information Processing Systems (NeurIPS).

Duchoň, F.; Babinec, A.; Kajan, M.; Beňo, P.; Florek, M.; Fico, T.; and Jurišica, L. 2014. Path Planning with Modified A Star Algorithm for a Mobile Robot. Procedia Engineering, 96: 59–69.

Erol, K.; Hendler, J. A.; and Nau, D. S. 1994. UMCP: A Sound and Complete Procedure for Hierarchical Task-network Planning. In Proceedings of the 2nd International Conference
on Artificial Intelligence Planning Systems (AIPS).

Favier, A.; Verma, P.; La, N.; and Shah, J. A. 2025. Leveraging LLMs for Collaborative Human-AI Decision Making. In Proceedings of the AAAI 2025 Spring Symposium on Current and Future Varieties of Human-AI Collaboration.

Fine-Morris, M.; Hsiao, V.; Smith, L. N.; Hiatt, L. M.; and Roberts, M. 2024. Leveraging LLMs for Generating Document-Informed Hierarchical Planning Models: A Proposal. In AAAI 2025 Workshop on Planning in the Era of LLMs (LM4Plan).

Goel, S.; Wei, Y.; Lymperopoulos, P.; Churá, K.; Scheutz, M.; and Sinapov, J. 2024. NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents Designed for Open Worlds. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Grand, M.; Pellier, D.; and Fiorino, H. 2022. An Accurate HDDL Domain Learning Algorithm from Partial and Noisy Observations. In Proceedings of the IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI).

Höller, D.; Behnke, G.; Bercher, P.; Biundo, S.; Fiorino, H.; Pellier, D.; and Alford, R. 2020. HDDL: An Extension to PDDL for Expressing Hierarchical Planning Problems. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).

IPC 2023 HTN Tracks. 2023. International Planning Competition 2023 HTN Tracks. Available at https://ipc2023-htn.github.io/.

Kovacs, D. L. 2012. A Multi-Agent Extension of PDDL3.1. In Proceedings of the ICAPS 2012 Workshop on the International Planning Competition (WS-IPC).

Lamanna, L.; Saetti, A.; Serafini, L.; Gerevini, A.; and Traverso, P. 2021. Online Learning of Action Models for PDDL Planning. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI).

Li, N.; Cushing, W.; Kambhampati, S.; and Yoon, S. 2014. Learning Probabilistic Hierarchical Task Networks as Probabilistic Context-Free Grammars to Capture User Preferences.
ACM Transactions on Intelligent Systems and Technology, 5(2).

Li, R.; Roberts, M.; Fine-Morris, M.; and Nau, D. 2022. Teaching an HTN Learner. In Proceedings of the 5th ICAPS Workshop on Hierarchical Planning (HPlan).

Liu, M.; Sivakumar, K.; Omidshafiei, S.; Amato, C.; and How, J. P. 2017. Learning for Multi-Robot Cooperation in Partially Observable Stochastic Environments with Macro-Actions. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

McDermott, D.; Ghallab, M.; Howe, A.; Knoblock, C.; Ram, A.; Veloso, M.; Weld, D. S.; and Wilkins, D. 1998. PDDL – The Planning Domain Definition Language. Technical Report CVC TR-98-003/DCS TR-1165, Yale Center for Computational Vision and Control.

Ng, J. H. A.; and Petrick, R. P. A. 2019. Incremental Learning of Planning Actions in Model-Based Reinforcement Learning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).

Sanner, S. 2010. Relational Dynamic Influence Diagram Language (RDDL): Language Description. Unpublished manuscript, Australian National University, 32: 27.

Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347.

Silver, T.; and Chitnis, R. 2020. PDDLGym: Gym Environments from PDDL Problems. In ICAPS 2020 Workshop on Bridging the Gap Between AI Planning
and Reinforcement Learning (PRL).

Taitler, A.; Gimelfarb, M.; Jeong, J.; Gopalakrishnan, S.; Mladenov, M.; Liu, X.; and Sanner, S. 2023. pyRDDLGym: From RDDL to Gym Environments. In ICAPS 2023 Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL).

Verma, P.; Marpally, S. R.; and Srivastava, S. 2021. Asking the Right Questions: Learning Interpretable Action Models Through Query Answering. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI).

Verma, P.; Marpally, S. R.; and Srivastava, S. 2022. Discovering User-Interpretable Capabilities of Black-Box Planning Agents. In Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning.

Wu, S. A.; Wang, R. E.; Evans, J. A.; Tenenbaum, J. B.; Parkes, D. C.; and Kleiman-Weiner, M. 2021. Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration. Topics in Cognitive Science, 13(2): 414–432.

Xiao, Y.; Hoffman, J.; and Amato, C. 2020. Macro-Action-Based Deep Multi-Agent Reinforcement Learning. In Proceedings of the 3rd Conference on Robot Learning (CoRL).

Zhuo, H. H.; Muñoz-Avila, H.; and Yang, Q. 2014. Learning Hierarchical Task Network Domains from Partially Observed Plan Traces. Artificial Intelligence, 212: 134–157.
On the performance of machine-learning assisted Monte Carlo in sampling from simple statistical physics models

Luca Maria Del Bono,1,2 Federico Ricci-Tersenghi,1,2,3 and Francesco Zamponi1
1 Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 5, Rome 00185, Italy
2 CNR-Nanotec, Rome unit, Piazzale Aldo Moro 5, Rome 00185, Italy
3 INFN, sezione di Roma1, Piazzale Aldo Moro 5, Rome 00185, Italy

Recent years have seen a rise in the application of machine learning techniques to aid the simulation of hard-to-sample systems that cannot be studied using traditional methods. Despite the introduction of many different architectures and procedures, a wide theoretical understanding is still lacking, with the risk of suboptimal implementations. As a first step to address this gap, we provide here a complete analytic study of the widely-used Sequential Tempering procedure applied to a shallow MADE architecture for the Curie-Weiss model. The contribution of this work is twofold: firstly, we give a description of the optimal weights and of the training under Gradient Descent optimization. Secondly, we compare what happens in Sequential Tempering with and without the addition of local Metropolis Monte Carlo steps. We are thus able to give theoretical predictions on the best procedure to apply in this case. This work establishes a clear theoretical basis for the integration of machine learning techniques into Monte Carlo sampling and optimization.

I. INTRODUCTION

Obtaining configurations of hard-to-sample systems, such as spin glasses, amorphous solids and proteins, is a challenging task with both theoretical and practical applications [1–3]. The goal is to sample independent and identically distributed configurations from the Gibbs-Boltzmann (GB) distribution

P_GB(σ) = e^{−βH(σ)} / Z(β),   (1)

where σ = {σ_1, ..., σ_N} is a set of random variables describing the system, Z(β) is the partition function, H(σ) is the Hamiltonian, and β = 1/T is the inverse temperature.
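To make Eq. (1) concrete, here is a minimal, hedged sketch (our own illustration, not taken from the paper's code): for a tiny system we can enumerate all 2^N configurations and compute the GB distribution exactly. The Hamiltonian used is the Curie-Weiss one, H(σ) = −(N/2) m(σ)², introduced below in Sec. II A.

```python
import itertools
import math

def hamiltonian(sigma):
    """Curie-Weiss Hamiltonian H(sigma) = -(N/2) m(sigma)^2."""
    N = len(sigma)
    m = sum(sigma) / N              # intensive magnetization m(sigma)
    return -0.5 * N * m * m

def gibbs_boltzmann(N, beta):
    """Return {sigma: P_GB(sigma)} over all 2^N configurations."""
    configs = list(itertools.product([-1, 1], repeat=N))
    weights = [math.exp(-beta * hamiltonian(s)) for s in configs]
    Z = sum(weights)                # partition function Z(beta)
    return {s: w / Z for s, w in zip(configs, weights)}
```

Brute-force enumeration is of course only feasible for very small N; the sampling methods discussed below exist precisely because Z(β) is intractable for large systems.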
Among the most widely used techniques are Parallel Tempering (PT) [4–6] and Population Annealing (PA) [7–11], which have established themselves as state-of-the-art approaches over the past few decades, remaining largely unchanged from their original formulations. In recent years, a new line of research aims to use machine-learning-assisted techniques to aid sampling. Stemming from the increasing capabilities of Generative Artificial Neural Networks, this line of research aims at using different kinds of architectures and frameworks, such as autoregressive models [12–15], normalizing flows [16, 17], diffusion models [18–20] and Boltzmann machines [21], to aid the sampling of statistical mechanics models. Methods inspired by theoretical physics techniques such as the Renormalization Group [22, 23] have been introduced, and non-generative-based techniques can be used as well [24]. Moreover, similar techniques have been applied to the related task of finding the lowest energy configuration of statistical mechanics systems, a task with deep practical implications [25, 26]. At the same time, the Neural Network (NN) can also be used to obtain variational approximations to the GB distribution, thus improving over mean-field techniques, both in classical [14, 27] and quantum [28] systems.

One of the most widely used NN-based techniques is the Sequential Tempering (ST) procedure [13, 29]. In ST, a set of M configurations is progressively cooled to lower temperatures. At each step,
starting from M equilibrated configurations, a new series of configurations is generated using a neural network at a slightly lower temperature. The new configurations are equilibrated at the lower temperature, and the neural network is then retrained using the new equilibrated configurations, as the starting point for the next step. The original implementation of ST did not include the local steps and relied only on global moves (akin to an importance sampling) to equilibrate the newly generated configurations. While this procedure is exact if the thermalization is carried out for long enough times, in practice the time taken to achieve equilibrium can be very long. The work of Gabrié et al. [30] has highlighted the importance of alternating global, NN-assisted moves and standard local Monte Carlo moves. In [30], the authors study the convergence properties of a NN-assisted Monte Carlo procedure to a target distribution ρ*. They provide numerical examples and analytical computations highlighting the faster convergence to ρ* when local moves are alternated with the NN-based global ones. However, at variance with the standard ST procedure, in Ref. [30] no annealing in temperature is performed. The additional empirical evidence presented in subsequent studies [20, 31] underscores the need for physical intuition and a clear theoretical framework, both of which remain elusive in the context of the ST procedure.

arXiv:2505.22598v1 [cond-mat.dis-nn] 28 May 2025

In this work, we support these results by a complete theoretical analysis of the training and application of a shallow MADE (Masked Autoencoder for Distribution Estimation) neural network to assist sampling in the Curie-Weiss model.
Our main results are the following:

• we give a full analytical description of the MADE architecture and of its training dynamics for the Curie-Weiss model, both for finite system sizes and in the thermodynamic limit;

• we show that a phenomenon akin to critical slowing down happens in learning at the critical temperature of the model;

• we characterize the effectiveness of the Sequential Tempering procedure as compared to a standard local Metropolis Monte Carlo, in terms of first passage times in magnetization space.

In particular, our work extends some of the results in Ref. [13], in which the shallow MADE architecture was only studied in the N → ∞ limit. Although an exact architecture for sampling according to the GB distribution can be constructed [32], we focus here on a shallow MADE because it can be treated analytically also in the training regime.

This paper is organized as follows. In Sec. II we give the background for our work. In particular, we introduce the Curie-Weiss model (II A) and the algorithms we study, local Metropolis Monte Carlo (II B 1) and NN-assisted Monte Carlo (II B 2) with a shallow MADE architecture (II C). In Sec. III we present a theoretical analysis of the training and of the NN-assisted Monte Carlo. In particular, we study the optimal model and the training dynamics (III A); moreover, we compare the performance of the algorithms in terms of first-passage times in magnetization space (III B). In Sec. IV we apply these methods to compare different sampling procedures. We first consider Sequential Tempering with a fully trained machine, with and without local MC steps (IV A); then, we study how the scenario changes when one takes into account a finite training time (IV B); additionally, we compare Sequential Tempering
with vanilla Metropolis Monte Carlo (IV C). Finally, in Sec. V we draw our conclusions and highlight some possible future developments.

II. BACKGROUND

A. The Curie-Weiss model

In this paper, we consider the simplest model of ferromagnetic phase transitions, the Curie-Weiss (CW) model. In the CW model, the state of the system is given by a set of N Ising variables σ, with σ_i = ±1, i = 1, ..., N. The Hamiltonian reads:

H(σ) = −(N/2) m(σ)²,   (2)

where m(σ) = (1/N) Σ_i σ_i is the (intensive) magnetization of the system. It is well known that this model undergoes a phase transition in the thermodynamic limit at a critical temperature T_c = 1, passing from a disordered paramagnetic phase at T > 1 to an ordered ferromagnetic phase at T < 1. In the ferromagnetic phase, the model develops a non-trivial spontaneous equilibrium magnetization, which is given by the non-zero solution m* of the equation

m* = tanh(β m*).   (3)

The model can be simulated using one of many different algorithms. We focus on classical local Metropolis Monte Carlo, a standard Swiss-knife algorithm for the simulation of statistical physics systems, and on a NN-assisted Monte Carlo procedure, Sequential Tempering. In particular, we want to study which algorithm is faster in equilibrating the system at T < 1 starting from T = T_c = 1, as measured using as a proxy the time needed to first reach the equilibrium magnetization that solves Eq. (3). In the next section, we describe in detail the algorithms that we considered in our analysis.

B. Algorithms

1. Standard local Metropolis Monte Carlo

In the standard local Metropolis Monte Carlo (LMMC) [33] algorithm, one performs a series of local, single-spin-flip moves. A single Monte Carlo 'step' consists of the following operations, starting from a configuration σ(t) = σ at time t:

1. propose a new configuration by flipping the spin of a randomly chosen site, σ_i → −σ_i;
2. calculate the energy difference, ΔE = −2(1/N − σ_i m(σ)), between the new configuration and the current configuration;
3.
accept the move, i.e. set σ(t+1) = σ′, where σ′ is obtained from σ by flipping σ_i, with probability:

Acc[σ → σ′] = min(1, e^{−βΔE}).

Otherwise, reject the move and set σ(t+1) = σ.

N such steps are commonly referred to as a Monte Carlo Sweep (MCS). It is easy to see that the computational complexity of a MCS is O(N). Although LMMC frequently serves as a building block for more advanced and powerful algorithms, such as Parallel Tempering [4] and Population Annealing [7, 10, 34], it may encounter limitations when employed in isolation. For instance, it can remain trapped for long times in (local) minima of the free energy landscape, thus failing to sample effectively the whole space of configurations.¹

2. The Sequential Tempering procedure

The main disadvantage of the LMMC algorithm described in the previous section is that it can only perform local moves. A recently proposed solution is to use a generative NN to propose global moves. The idea is to start from a configuration σ and use a generative NN to generate a whole new configuration σ′ of the system, which is then accepted with an acceptance ratio Acc chosen in
order to satisfy detailed balance:

Acc[σ → σ′] = min(1, [P_GB(σ′) P_NN(σ)] / [P_GB(σ) P_NN(σ′)]),   (4)

where P_NN(σ) is the probability that the NN generates the configuration σ. Note that in this case the whole configuration is updated in a single operation, which then corresponds roughly to a Monte Carlo Sweep. While this strategy is essentially equivalent to an importance sampling of P_GB(σ) using P_NN(σ) as a generator, the formulation in terms of a stochastic process with an acceptance ratio allows one to combine the global moves with local ones, as we will do in the following. The success of this strategy relies on the ability of the NN to generate configurations close to the ones at equilibrium, so that the full configuration space is well sampled and the acceptance rate is not too low. It has been shown [15, 32] that it is possible to design networks that are powerful enough to sample according to the GB distribution. The question then becomes whether one is able to train such networks in practice. While variational procedures that do not require sampling have been used [27], we focus here on methods that use previously generated equilibrium configurations to train the model, because variational methods have been shown to be prone to mode collapse in complex problems [13]. In these approaches, the NN is first trained at inverse temperature β using a set of M equilibrium configurations. The training can be carried out, for instance, by maximizing the model likelihood of the available M equilibrium configurations. Then, the NN is used to generate a new series of configurations, which are then used as proposal moves for a global Monte Carlo. The training set is updated using the acceptance rate in Eq. (4) and the model is trained again. Unfortunately, obtaining the M configurations used to train the model can be complicated, especially if one is interested in the low-temperature, hard-to-sample regime of a model.
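The global move of Eq. (4) can be sketched in a few lines. This is our own hedged illustration, not the paper's code: the names log_p_gb, sample_nn, and log_p_nn are placeholders standing for, respectively, the log of the (possibly unnormalized) Gibbs-Boltzmann weight (log Z cancels in the ratio), a sampler that draws a fresh configuration from the generative model, and the model's log-probability.

```python
import math
import random

def global_move(sigma, log_p_gb, sample_nn, log_p_nn, rng=random):
    """One Metropolis-Hastings step with an independent NN proposal, Eq. (4)."""
    sigma_new = sample_nn()
    # log of the ratio P_GB(sigma') P_NN(sigma) / [P_GB(sigma) P_NN(sigma')]
    log_acc = (log_p_gb(sigma_new) - log_p_gb(sigma)
               + log_p_nn(sigma) - log_p_nn(sigma_new))
    if rng.random() < math.exp(min(0.0, log_acc)):
        return sigma_new  # accept: the whole configuration is replaced at once
    return sigma          # reject: keep the current configuration
```

Because the proposal is independent of the current state, always accepting would reduce this to importance sampling; keeping the accept/reject step is what allows interleaving local LMMC moves, as discussed next.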
And of course, if we are already able to obtain equilibrium configurations, there is no interest in developing new sampling methods. A solution to both issues is to use the Sequential Tempering (ST) procedure, an annealing procedure that uses the self-consistently trained NN in order to generate configurations at lower and lower temperatures.

In ST, the NN is first trained using M configurations at a high temperature, at which it is easy to sample configurations at equilibrium (for example via LMMC). Then, the NN is used at inverse temperature β′ = β + Δβ > β to propose a new set of M configurations at β′ by performing θ_global global moves. This new set of configurations is then used to train a new NN (or retrain the previous one). The whole procedure can then be repeated until the desired temperature is reached. The general scheme of ST is summarized in Algorithm 1. Additionally, θ_local LMMC steps can be alternated with the global moves proposed by the NN (steps 9–11 of Alg. 1), but not all implementations include them [13, 29]. Understanding the importance of performing local moves is one of the goals of this paper.

The ST scheme is quite general. The specific implementation then requires the choice of
a generative NN. In this paper, we consider a shallow MADE, as described in the next section.

¹ For instance, in the CW model below T_c, LMMC can remain stuck in the state with positive (negative) magnetization. Escaping the state and reaching the state with negative (positive) magnetization requires a time growing exponentially with the size of the system N.

Algorithm 1 Sequential Tempering
1: Input: Initial inverse temperature β_start, final inverse temperature β_end, temperature step Δβ, number of configurations M, number of global steps per temperature θ_global, number of local steps per global step θ_local.
2: Initialize: a set of M equilibrium configurations at β_start (sampled e.g. using standard Metropolis MC)
3: while β < β_end do
4:   Train a neural network (NN) using the set of M configurations
5:   Lower the temperature: β ← β + Δβ
6:   for m in 1, ..., M do
7:     Choose the m-th configuration from the set as the initial state
8:     for t in 1, ..., θ_global do
9:       Propose a new configuration using the NN
10:      Accept or reject the configuration with probability (4) at the new temperature T = 1/β
11:      for t in 1, ..., θ_local do
12:        Perform a LMMC step
13:      end for
14:    end for
15:  end for
16: end while

C. The MADE architecture

We consider as our architecture of choice the shallow MADE (Masked Autoencoder for Distribution Estimation [35]) with shared weights, also considered in Ref. [13]. The MADE is an autoregressive model, in which the probability of a configuration is represented as a sequence of conditional probabilities:

P_NN(σ) = P(σ_1) P(σ_2|σ_1) P(σ_3|σ_1, σ_2) ··· P(σ_N|σ_1, ..., σ_{N−1}) = Π_{i=1}^{N} P(σ_i|σ_<i),   (5)

and the P(σ_i|σ_<i) are written in terms of a set of parameters that defines the model. This formalization allows not only to compute the probability of a given configuration σ, but also to generate a new one from scratch in polynomial time using ancestral sampling.
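Ancestral sampling for this shallow MADE can be sketched as follows. This is our own hedged sketch, not the authors' code; it uses the conditional form detailed in Eq. (6) below, P(σ_i = +1 | σ_<i) = exp(J_i M_<i) / [2 cosh(J_i M_<i)], with M_<i the sum of the already-generated spins and P(σ_1) = 1/2.

```python
import math
import random

def sample_made(J, rng=random):
    """J = [J_2, ..., J_N]; returns one configuration in {-1, +1}^N."""
    sigma = [1 if rng.random() < 0.5 else -1]  # sigma_1 is uniform: P(sigma_1) = 1/2
    M = sigma[0]                               # running sum M_<i of previous spins
    for Ji in J:
        # P(sigma_i = +1 | sigma_<i); note p_plus + p_minus = 1 by construction
        p_plus = math.exp(Ji * M) / (2.0 * math.cosh(Ji * M))
        sigma.append(1 if rng.random() < p_plus else -1)
        M += sigma[-1]                         # update M_<i for the next spin
    return sigma
```

Each configuration costs O(N) to generate, and its exact log-probability is available along the way, which is exactly what the acceptance ratio of Eq. (4) requires.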
In ancestral sampling, first the spin σ_1 is generated using P(σ_1), then σ_2 is generated using σ_1 according to P(σ_2|σ_1), then σ_3 is generated using σ_1 and σ_2 according to P(σ_3|σ_1, σ_2), and so on. Specifically, the shallow MADE we are considering parametrizes the probability as P(σ_1) = 1/2 and, for i > 1,

P(σ_i|σ_<i) = exp(Σ_{j=1}^{i−1} J_i σ_i σ_j) / [2 cosh(Σ_{j=1}^{i−1} J_i σ_j)] = exp(J_i σ_i M_<i) / [2 cosh(J_i M_<i)],   (6)

where M_<i(σ_<i) = Σ_{j=1}^{i−1} σ_j. This architecture corresponds to a NN with a fully connected layer with shared weights, followed by a softmax activation function, and is fully specified by a set of N−1 weights J = (J_2, ..., J_N). As explained in Sec. II B 2, the best set of parameters of the model can be found by maximizing the likelihood of the training data. For this shallow MADE and for the Curie-Weiss model this optimization can be written down explicitly, as we do in the next section.

III. METHODS

A. Analysis of the MADE architecture

1. Optimal values of the weights

In order to analyze the MADE architecture described in Sec. II C, let us start by introducing the cross entropy S_c between P_GB and P_MADE for a set of weights J,

S_c(J) = −Σ_{σ} P_GB(σ) log P_MADE(σ) = −Σ_{σ} P_GB(σ) log[(1/2) Π_{i=2}^{N} P(σ_i|σ_<i)]
       = N log 2 − Σ_{i=2}^{N} Σ_{σ} P_GB(σ) {J_i σ_i M_<i − log[cosh(J_i M_<i)]}.   (7)

The sum over σ runs over all 2^N possible configurations of the system. Minimizing the cross entropy with respect to the ℓ-th
coupling J_ℓ yields:

Σ_{σ} P_GB(σ) M_<ℓ σ_ℓ = Σ_{σ} P_GB(σ) M_<ℓ tanh(J_ℓ M_<ℓ).   (8)

In the CW model, the sum over the 2^N spin configurations can be reduced to a polynomial sum by rewriting the Gibbs-Boltzmann distribution P_GB in Eq. (1) with the CW Hamiltonian in Eq. (2) via a Hubbard-Stratonovich transformation as:

P_GB(σ) = (1/Ẑ(β)) ∫ dh e^{−Nh²/(2β)} e^{h Σ_i σ_i},   (9)

where Ẑ(β) = Z(β) √(2πβ/N) is the new normalizing constant. Inserting this expression in Eq. (8) one finds:

∫ dh e^{−Nh²/(2β)} sinh(h) cosh^{N−ℓ}(h) Σ_1(h) = ∫ dh e^{−Nh²/(2β)} cosh^{N−ℓ+1}(h) Σ_2(h, J_ℓ),   (10)

where

Σ_1(h) = Σ_{M=−(ℓ−1)}^{ℓ−1} binom(ℓ−1, (ℓ−1−M)/2) e^{hM} M   and   Σ_2(h, J_ℓ) = Σ_{M=−(ℓ−1)}^{ℓ−1} binom(ℓ−1, (ℓ−1−M)/2) e^{hM} M tanh(J_ℓ M),   (11)

where the magnetization M increases in steps of 2 in the sums. Note that with some manipulations, Eq. (10) can be simplified to

2^{ℓ−1} (ℓ−1) ∫ dh e^{−Nh²/(2β)} sinh²(h) cosh^{N−2}(h) = ∫ dh e^{−Nh²/(2β)} cosh^{N−ℓ+1}(h) Σ_2(h, J_ℓ).   (12)

The latter equations involve a single integral and a sum over a number of terms linear in N, and can thus be solved numerically to find the optimal values of the weights J_ℓ. An example of the behavior of J_ℓ as a function of β is shown in Fig. 1a.

2. Thermodynamic limit

In the N → ∞ limit the integrals over h can be evaluated using the Laplace method, and Eq. (10) reduces to

Σ_2(h*, J_ℓ) = Σ_1(h*) tanh(h*),   (13)

where h* is the solution of the saddle-point equation

h* = β tanh(h*).   (14)

[FIG. 1. Behavior of the optimal couplings J*_ℓ/ℓ as a function of β for ℓ ≤ 10. (a) Finite N (N = 20), obtained by solving Eq. (10). (b) Infinite N, obtained by solving Eq. (13).]

[FIG. 2. Comparison of the approximated couplings J^app_ℓ, found by solving Eq. (15), with the exact ones, for several system sizes N (N = 20, 50, 100, 200, 350, 500, 700). (a) J^app_ℓ (dashed) compared with the exact J*_ℓ (full). (b) Absolute error J*_ℓ − J^app_ℓ. (c) Relative error (J*_ℓ − J^app_ℓ)/J*_ℓ.]
Notice that we can consider just the positive solution for h*, since taking into account the negative one simply yields additional factors of two on both sides of the equations. These results match the equations derived in Ref. [13] for the N → ∞ limit. An example of the behavior of the weights in the infinite-N limit is shown in Fig. 1b.

For β ≤ β_c = 1, Eq. (14) only admits the h* = 0 solution. As a consequence, J_ℓ = 0 ∀ℓ, i.e. for N → ∞ all the weights vanish. We can then try a small-J_ℓ approximation of Eq. (10) at finite N. At first order, this yields the equation:

J^app_ℓ = [∫_{−∞}^{∞} e^{−Nh²/(2β)} cosh^{N−2}(h) sinh²(h) dh] / [∫_{−∞}^{∞} e^{−Nh²/(2β)} cosh^{N−2}(h) ((ℓ−2) sinh²(h) + cosh²(h)) dh].   (15)

A comparison between the approximated weights and the exact ones is shown in Fig. 2. In the following section, we study how the weights of the model are learned during training.

3. Training dynamics

We consider a gradient descent (GD) training dynamics, taking as a loss the cross entropy defined in Eq. (7). Then, the update rule of the parameters can be written
as:

J_ℓ^{(t+1)} = J_ℓ^{(t)} − η_ℓ ∇_ℓ S_c(J^{(t)}),   (16)

[FIG. 3. Comparison between the gradients obtained by linearizing around the optimal solution J*_ℓ (full lines) and the gradients computed using pytorch backpropagation on a large dataset (data points), as a function of the distance from the optimal couplings, ΔJ_ℓ = J_ℓ − J*_ℓ, for ℓ = 2, 8, 12, 15, 20. Details: N = 20 spins, β = 1; the dataset is made of 5·10⁶ configurations obtained by starting at infinite temperature and then performing 30 MCS at β = 1.]

where η_ℓ is the learning rate for the ℓ-th weight and the gradient is:

∇_ℓ S_c(J) = (2^{N−ℓ}/Z(β)) √(2N/(πβ)) ∫_{−∞}^{∞} dh e^{−Nh²/(2β)} cosh^{N−ℓ}(h) Σ_{M=−(ℓ−1)}^{ℓ−1} binom(ℓ−1, (ℓ−1−M)/2) e^{hM} M [cosh(h) tanh(J_ℓ M) − sinh(h)],   (17)

where the sum over M runs in steps of 2, as in Eq. (11). Note that because the cross entropy is a sum of terms, each involving a single weight, the gradient ∇_ℓ S_c(J) depends only on J_ℓ, and the gradient descent dynamics of different weights are decoupled. In the continuous time limit (gradient flow), Eq. (16) reads:

J̇_ℓ = −η_ℓ ∇_ℓ S_c(J).   (18)

The gradient can be linearized around the solution J*_ℓ of Eq. (10), yielding:

J̇_ℓ = −η_ℓ ΔJ_ℓ H_ℓ(J*_ℓ),   (19)

where ΔJ_ℓ = J_ℓ − J*_ℓ is the difference with respect to the optimal couplings and H_ℓ is the second derivative of the cross entropy, given by:

H_ℓ(J_ℓ) = (2^{N−ℓ}/Z(β)) √(2N/(πβ)) ∫_{−∞}^{∞} dh e^{−Nh²/(2β)} cosh^{N−ℓ+1}(h) Σ_{M=−(ℓ−1)}^{ℓ−1} binom(ℓ−1, (ℓ−1−M)/2) e^{hM} M² sech²(J_ℓ M).   (20)

A comparison between the linearized gradient and the true gradient is shown in Fig. 3, highlighting the very good agreement between the two. Unfortunately, this approach is not easily tractable analytically. Instead, for β ≤ β_c = 1, we can consider the small-J*_ℓ approximation and linearize around zero. By linearizing Eq.
(18) (in the small-$J^*_\ell$ approximation), one finds:
$$\dot J_\ell=-\eta_\ell\left[\langle M_{<\ell}\,\sigma_\ell\rangle-H_\ell(0)J_\ell\right]=-\eta_\ell\left[(\ell-1)c_N-H_\ell(0)J_\ell\right],\qquad(21)$$
where $H_\ell(0)$ has now the simple form:
$$H_\ell(0)=(\ell-1)\left[1+(\ell-2)c_N\right],\qquad(22)$$
and $c_N=\langle s_is_j\rangle$, $i\ne j$, is the two-spin correlation function between any two different spins,
$$c_N=\langle s_is_j\rangle=\frac{\int_{-\infty}^{\infty}e^{-\frac{Nh^2}{2\beta}}\sinh^2(h)\cosh^{N-2}(h)\,dh}{\int_{-\infty}^{\infty}e^{-\frac{Nh^2}{2\beta}}\cosh^{N}(h)\,dh},\qquad(23)$$

[FIG. 4. Comparison between the training of the weights obtained by the approximation in Eq. (24) (full lines) and the training performed numerically using pytorch over a large dataset. Details: $N=200$ spins, $\beta=1$, the dataset is made of $5\cdot10^6$ equilibrium configurations, learning rate $\eta_\ell=1/[N(\ell-1)]$.]

which decays as $1/N$ for $T>1$ and as $1/\sqrt N$ at $T=1$. The latter approximation allows the training dynamics to be solved explicitly:
$$J_\ell(t)=c_N\,\tau_\ell\,(\ell-1)\left[1-e^{-\frac{\eta_\ell t}{\tau_\ell}}\right],\qquad(24)$$
where the characteristic time $\tau_\ell$ is simply the inverse of the Hessian, $\tau_\ell=1/H_\ell(0)$, and $\eta_\ell$ is the learning rate. Notice that the prefactor $J^{\rm app}_\ell=c_N\tau_\ell(\ell-1)=c_N/(1+(\ell-2)c_N)$ corresponds exactly to the small-$J_\ell$ approximation derived in Eq. (15). While this argument can probably be made more rigorous, e.g. using the Polyak–Łojasiewicz inequality, we instead verify the correctness of this assumption numerically in Fig. 4, finding an excellent agreement between the exact and approximate solutions. Equation (24) allows us to predict several important trends:
• the relative error vanishes exponentially in time, as $\frac{|\Delta J_\ell|}{J_\ell}=e^{-\frac{\eta_\ell t}{\tau_\ell}}$;
• if $\eta_\ell=1/[N(\ell-1)]$ then $\frac{|\Delta J_\ell|}{J_\ell}=e^{-\frac{1+(\ell-2)c_N}{N}t}=A(t,N)\,e^{-\frac{\ell}{\lambda(t,N)}}$, where $\log A(t,N)=\frac{t}{N}(2c_N-1)$ and $\lambda(t,N)=\frac{N}{t\,c_N}$; hence, the relative error also vanishes exponentially in $\ell$ at fixed time;
• again, if $\eta_\ell=1/[N(\ell-1)]$, then the effective time scale is
$\hat\tau_\ell=\tau_\ell/\eta_\ell=\frac{N}{1+(\ell-2)c_N}$.
These predictions are verified in Fig. 5. Notice that Eq. (24) requires specifying the learning rate $\eta_\ell$ for weight $\ell$. From a discretization of the gradient flow, noticing that in GD one performs a single discrete step at each time so that the minimum increment in $t$ is one, it follows that the learning rate must be taken as
$$\eta_\ell\sim\tau_\ell=1/H_\ell(0).\qquad(25)$$
This result matches the known one for convex optimization problems [36]. Indeed, this choice allows one to learn all the weights in a time $O(1)$. However, this requires the knowledge of the Hessian, which is not usually known, and the choice in Eq. (25) is therefore unrealistic. In a more realistic implementation (at least for simple GD optimization), which we therefore consider in the following, one would use a single learning rate $\eta$ for all the weights. Then, the prescription for smooth convergence in Eq. (25) becomes
$$\eta\le\tau_N\sim\frac{1}{c_N N^2},\qquad(26)$$
because the learning rate must be smaller than the smallest timescale in order for all the weights to be able to converge. At the critical temperature $T=T_c$, since $c_N\sim\frac{1}{\sqrt N}$ (see, for instance, Ref. [37, Eq. 23]), we need $\eta\sim N^{-\frac32}$,

[FIG. 5. Comparison between theory and numerical results for different quantities. The data used come from the same training of Fig. 4. (a) Relative error $|\Delta J_\ell|/J_\ell$ plotted as a function of $\ell$, together with an exponential fit to the form $Ae^{-\ell/\lambda}$ (dashed black lines). (b,c) The values of the fitted parameters $A$ and $\lambda$ (data points) are compared with those derived from the theory (dashed black lines). (d) The effective timescale $\hat\tau_\ell=\tau_\ell/\eta_\ell$, obtained by fitting the relative error as $\frac{|\Delta J_\ell|}{J_\ell}=e^{-t/\hat\tau_\ell}$, is compared to the prediction from the theory.]

which is the learning rate we will consider in the following.
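The quantities entering the training solution, the correlation $c_N$ of Eq. (23), the Hessian at zero of Eq. (22), and the trajectory $J_\ell(t)$ of Eq. (24), are all cheap to evaluate numerically. A minimal sketch (function names are ours, not taken from the paper's code; the integrand is computed in log space to avoid overflow of $\cosh^N$):

```python
import numpy as np
from scipy.integrate import quad

def c_N(N, beta):
    """Two-spin correlation <s_i s_j>, Eq. (23), using
    sinh^2(h) cosh^{N-2}(h) = tanh^2(h) cosh^N(h)."""
    logw = lambda h: N * np.log(np.cosh(h)) - N * h**2 / (2 * beta)
    num, _ = quad(lambda h: np.exp(logw(h)) * np.tanh(h)**2, -20, 20)
    den, _ = quad(lambda h: np.exp(logw(h)), -20, 20)
    return num / den

def J_of_t(ell, t, N, beta):
    """Approximate training trajectory J_ell(t), Eq. (24), with
    tau_ell = 1/H_ell(0) from Eq. (22) and eta_ell = 1/[N(ell-1)]."""
    c = c_N(N, beta)
    H0 = (ell - 1) * (1 + (ell - 2) * c)        # Eq. (22)
    tau, eta = 1.0 / H0, 1.0 / (N * (ell - 1))
    return c * tau * (ell - 1) * (1 - np.exp(-eta * t / tau))

# the trajectory saturates at the small-J plateau of Eq. (15):
# J_app = c_N / (1 + (ell - 2) c_N)
c = c_N(200, 1.0)
print(J_of_t(5, 1e9, 200, 1.0), c / (1 + 3 * c))
```

At $\beta=\beta_c=1$ the printed long-time value and the plateau coincide, reproducing the prefactor identity stated below Eq. (24).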
Notice that, since the slowest timescale (corresponding to $\ell=2$) is of order 1, the effective timescale to learn all the weights goes as $N^{\frac32}$. On the other hand, for $T>T_c$ (that is, when we are no longer in the critical regime), $c_N\sim\frac1N$ and therefore the maximum learning rate that can be chosen goes as $\eta\sim\frac1N$, so that the timescale to learn all the weights is $O(N)$. Therefore, at criticality, proper training requires an additional factor $\sqrt N$ in training time. This factor is exactly the same that appears due to critical slowing down (with exactly the same dynamical critical exponent) when performing Glauber dynamics or Metropolis-Hastings Monte Carlo [38]. So, in this setting, the hardness of sampling at criticality is instead transferred to the training. It would be interesting to verify whether this scenario is generically present for different models and architectures. This analysis is left for future work.

B. Analysis of Sequential Tempering and first passage times

We are interested in the time taken by the system to generate a configuration of a given (absolute) magnetization. Therefore, we can consider the dynamics of the model in the space of
(intensive) magnetizations, which can be described in terms of a simple one-dimensional Markov chain. Then, we consider the time required by the chain to first reach a magnetization equal to or greater (in modulus) than a target magnetization. At fixed $\beta$, we take as the target magnetization the equilibrium magnetization, i.e. the solution $m^*$ of Eq. (3). Then, the average first-transition times $\tau_{m\to m^*}$ for going from a magnetization $m$ to a magnetization $m^*$ can be obtained using the set of self-consistent equations [39]:
$$\tau_{m\to m^*}=1+\sum_{\hat m\ne m^*}P(m\to\hat m)\,\tau_{\hat m\to m^*},\qquad m\ne m^*.\qquad(27)$$

[FIG. 6. Comparison between the first passage times to reach magnetization $m$ starting from zero magnetization, $\tau_{M+L}$. Those computed analytically are plotted versus those obtained numerically through the procedure that alternates a global MADE move and a local MCS. Data are for $N=200$ and $\beta=1.1$. The averages are performed over $5\cdot10^4$ runs. Both times were multiplied by $2N$ to take into account the computational complexity of each move.]

If we call $\tau$ the vector of the average first passage times (excluding the first passage time $m^*\to m^*$) and $Q$ the matrix of the transition probabilities with the row and column corresponding to $m^*$ removed, we can rewrite the system in matrix form as:
$$(I-Q)\,\tau=\mathbf 1,\qquad(28)$$
where $I$ is the identity matrix and $\mathbf 1$ is the vector of all ones. In practice, since we are interested in reaching magnetizations $|m|\ge|m^*|$, Eq. (27) reduces to
$$\tau_{m\to m^*}=1+\sum_{\hat m<m^*}P(m\to\hat m)\,\tau_{\hat m\to m^*},\qquad m\ne m^*,\qquad(29)$$
and therefore the size of the matrix that needs to be effectively inverted is smaller. The matrix is then further reduced by looking at the space of absolute magnetizations. Notice that, if the matrix $I-Q$ is tridiagonal (as is the case for local single-spin-flip algorithms), Eq. (28) can be solved in linear time, either by inverting the matrix [40, 41] or using Thomas' algorithm [42, 43].
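Equation (28) turns the first-passage computation into a single linear solve. A minimal sketch (the helper name is ours; it assumes a generic transition matrix over discretized magnetization states):

```python
import numpy as np

def mean_first_passage_times(P, targets):
    """Solve (I - Q) tau = 1, Eq. (28): Q is P with the rows and columns
    of the target states removed; tau vanishes on the targets themselves."""
    n = P.shape[0]
    keep = np.setdiff1d(np.arange(n), targets)
    Q = P[np.ix_(keep, keep)]            # transitions among non-target states
    tau = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(n)
    out[keep] = tau
    return out

# sanity check: lazy symmetric walk on {0, 1, 2}, target state 2
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(mean_first_passage_times(P, targets=[2]))  # -> [6. 4. 0.]
```

For local single-spin-flip dynamics $I-Q$ is tridiagonal, and the dense `np.linalg.solve` call can be swapped for Thomas' algorithm to get a linear-time solve.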
We point out that, as an alternative to the described procedure, one could evaluate the equilibration time also by looking at the second-largest eigenvalue of the probability transition matrix. The advantage of looking at first passage times, however, is that if $m$ is not too large, one can work with an effective matrix that is smaller than the original one, thus making the computation easier. We consider three Monte Carlo schemes, in which:
• only MADE global steps are performed;
• only LMMC steps are performed;
• a MADE step is followed by one or more LMMC sweeps.
An example of a comparison between the first passage times obtained by the procedure described above and those obtained by performing a simulation in the case in which a MADE step is followed by one LMMC sweep is shown in Fig. 6. The transition matrices obtained in the three cases are described in the following sections.

1. Local Metropolis Monte Carlo

We recall that for the single-spin-flip Metropolis MC one selects one spin at random and flips it with probability:
$$\mathrm{Acc}[\sigma\to\sigma']=\min\left(1,e^{-\beta\Delta E}\right).\qquad(30)$$
Let us consider a configuration with magnetization $m$ and suppose one randomly selects a spin $\sigma$. The change in energy if the spin is flipped is:
$$\Delta E=-2\left(\frac1N-\sigma m\right).\qquad(31)$$
If $m=0$, $\Delta E<0$ and the move is always accepted. When $|m|>0$ and one selects a spin at random, the probability of it having a sign opposite to $m$ is $\frac{1-|m|}{2}$ (and that of having the same sign is $\frac{1+|m|}{2}$). Using this and Eqs. (30) and (31), we can write the transition probability of the Markov chain, keeping in mind that, since the dynamics is local, jumps in magnetization only occur between configurations separated by $\Delta m=2/N$:
$$P(m\to m-\tfrac2N)=\frac{1+m}{2}\min\left[1,\exp\left(-2\beta\left(m-\tfrac1N\right)\right)\right],$$
$$P(m\to m+\tfrac2N)=\frac{1-m}{2}\min\left[1,\exp\left(2\beta\left(m+\tfrac1N\right)\right)\right],$$
$$P(m\to m)=1-\frac{1+m}{2}\min\left[1,\exp\left(-2\beta\left(m-\tfrac1N\right)\right)\right]-\frac{1-m}{2}\min\left[1,\exp\left(2\beta\left(m+\tfrac1N\right)\right)\right].$$
Analogously, the transition matrix in the space of absolute magnetizations $|m|$ is:
$$P(0\to\tfrac2N)=1,\quad\text{for }|m|=0,$$
$$P(|m|\to|m|+\tfrac2N)=\frac{1-|m|}{2},$$
$$P(|m|\to|m|-\tfrac2N)=\frac{1+|m|}{2}\,e^{-2\beta(|m|-\frac1N)},$$
$$P(|m|\to|m|)=1-\left[\frac{1-|m|}{2}+\frac{1+|m|}{2}\,e^{-2\beta(|m|-\frac1N)}\right],\quad\text{for }|m|>0,$$
$$P(|m|\to|m'|)=0,\quad\big||m|-|m'|\big|\ne0,\,2/N.\qquad(32)$$
Interestingly, taking the $N\to\infty$ limit and requiring the probabilities of increasing and decreasing the magnetization to be equal yields:
$$\frac{1+|m|}{2}\,e^{-2\beta|m|}=\frac{1-|m|}{2}\;\Leftrightarrow\;|m|=\tanh(\beta|m|),\qquad(33)$$
which is the correct equation for the equilibrium magnetization in the Curie-Weiss model, Eq. (3).

2. MADE

The transition matrix in the case of the MADE can be written as:
$$P(|m|\to|m'|)=\begin{cases}\min\left[1,\dfrac{\Omega(|m'|)}{\Omega(|m|)}\,\dfrac{p_{\rm MADE}(|m|)}{p_{\rm MADE}(|m'|)}\,e^{\frac{\beta N}{2}(|m'|^2-|m|^2)}\right]p_{\rm MADE}(|m'|),&|m|\ne|m'|,\\[2mm]1-\sum_{|m''|\ne|m|}P(|m|\to|m''|),&|m|=|m'|,\end{cases}\qquad(34)$$
where $\Omega(|m|)$ is the degeneracy of state $|m|$ and $p_{\rm MADE}(|m|)$ is the probability that the MADE generates a configuration of magnetization $|m|$, which can be computed in a time $O(N)$ given the weights. The transition matrix for the signed magnetizations is similar and can be computed analogously.

3. MADE and local Metropolis

Finally, if we consider a global step followed by $k$ MCS, the transition matrix $P_{M+L}$ will be simply given by the product
$$P_{M+L}=P_M\,P_L^{kN},\qquad(35)$$
where $P_L$ and $P_M$ are the transition matrices defined in the previous sections for the LMMC and MADE, respectively.
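The $|m|$-chain of Eq. (32) is straightforward to assemble. A minimal sketch for even $N$ (the function name is ours):

```python
import numpy as np

def lmmc_abs_mag_matrix(N, beta):
    """Single-spin-flip Metropolis transition matrix in |m| space, Eq. (32).
    States are |m| = 0, 2/N, 4/N, ..., 1 (N assumed even)."""
    mags = np.arange(0, N + 1, 2) / N
    K = len(mags)
    P = np.zeros((K, K))
    P[0, 1] = 1.0                          # from m = 0 every flip increases |m|
    for i in range(1, K):
        m = mags[i]
        up = (1 - m) / 2                   # flip a minority spin: |m| grows
        down = (1 + m) / 2 * np.exp(-2 * beta * (m - 1 / N))
        if i + 1 < K:                      # at |m| = 1, up = 0 anyway
            P[i, i + 1] = up
        P[i, i - 1] = down
        P[i, i] = 1 - up - down
    return mags, P

mags, P_L = lmmc_abs_mag_matrix(N=200, beta=1.1)
# the composite chain of Eq. (35), given a MADE matrix P_M, would be
# P_M @ np.linalg.matrix_power(P_L, k * N)
```

Feeding the resulting matrices into the first-passage solve of Eq. (28) reproduces the kind of analytic times compared against simulation in Fig. 6.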
In the following, we will always take $k=1$ for simplicity. In Sec. IV, we compare the first passage times for the different methods to determine which procedure is the fastest to reach the target magnetization.

IV. RESULTS

In order to assess the relevance of adding LMMC to global NN-assisted moves, we consider the following setting, which corresponds to a single temperature jump in the ST procedure. We suppose that a NN has been trained (either perfectly or not) at the critical temperature $T_c$ of the model, where the spontaneous magnetization is still zero. Then, we want to use the NN to perform NN-assisted MC at a temperature $T<T_c$ below the critical temperature, at which $m^*\ne0$. We compare the dynamics with global MADE moves only (indicated by M for MADE) with the dynamics with global MADE moves and LMMC (indicated by M+L for MADE+LMMC), and compare the time it takes for the two dynamics to reach equilibrium, i.e. to reach $m^*$.

A. Perfectly trained MADE

We first consider a perfectly trained MADE, i.e., a MADE with weights given by the solution to Eq. (10) at the critical temperature
$\beta_c=1$ (we recall that these weights are non-zero at finite $N$). We then evaluate the times required to reach the absolute equilibrium magnetization $|m^*|$ given by Eq. (3) at inverse temperature $\beta\ge\beta_c$ (i.e. below the critical temperature) starting from zero magnetization, and we consider the ratio
$$R=\frac{2\tau_{M+L}}{\tau_M},\qquad(36)$$
where $\tau_M$ and $\tau_{M+L}$ are the average first passage times for the two procedures and the factor two takes into account the fact that performing both a global move and a MCS takes approximately twice the number of operations as performing just a global move. Hence, $R>1$ indicates that the MADE by itself is more efficient than MADE+LMMC, while $R<1$ indicates that adding local moves is beneficial. We first show in Fig. 7 the results for different $\beta$. The curves start at $R=2$ for $\Delta\beta=\beta-\beta_c\simeq0$, signaling that the perfectly trained NN is good enough to generate the target magnetization even on its own, without the need for local MC steps. Because the NN is good enough to generate configurations with the desired magnetization, adding LMMC on top of the global moves only adds to the computational time. Upon increasing $\Delta\beta$, as the machine is used at temperatures that are further away from the one at which it was trained, $R$ drops and it becomes increasingly necessary to add LMMC. The transition between these two regimes, however, moves to lower $\Delta\beta$ as $N$ increases. In particular, as shown in Fig. 7b, the curves collapse when plotted as a function of $\Delta\beta\sqrt N$. Therefore, as long as the temperature step is chosen as $\Delta\beta=b/\sqrt N$ with $b$ small enough, a typical annealing schedule in practical applications, adding MC moves is actually not helpful. However, already for $b\gtrsim1.5$ (which is not an uncommon choice), the LMMC clearly improves the performance of the MADE. Moreover, this analysis does not take into account the (non-negligible) computational cost of training the MADE. Considering also the training time, the scenario changes, as we show in the following section.
[FIG. 7. Ratio $R$ of first passage times as a function of $\Delta\beta=\beta-\beta_c$ (a) and of $\Delta\beta\sqrt N$ (b).]

[FIG. 8. (a) Ratios of the average first passage time $R$ (not taking into account training) and (b) $\hat R$ (taking into account training) at $\Delta\beta=b/\sqrt N$ with $b=0.5$. The learning rate is fixed to $\eta=N^{-\frac32}$.]

B. Partially trained MADE

We now turn to the case in which the MADE is not already pre-trained and all the weights are initialized to zero. This setting introduces a tradeoff in the training time: on the one hand, longer training is computationally more expensive; on the other hand, untrained weights make the performance worse (e.g. at initialization all weights are zero and the MADE simply extracts one configuration uniformly at random from the $2^N$ possible ones). While it is unclear, a priori, which is the optimal training time $t$, we can track the evolution of the weights as a function of the number of training steps using Eq. (24) and
compute the average first passage times as in the previous section. Following the discussion in Sec. III A 3, we fix the learning rate $\eta=N^{-\frac32}$. We plot $R$ at fixed $b=0.5$ as a function of the training time in Fig. 8a. From Fig. 7b, for $b=0.5$ we expect $R_\infty\sim1.6$ at infinite training time, which is confirmed by Fig. 8a. We see that, at finite training time, the performance of the MADE alone deteriorates with respect to MADE+LMMC, i.e. $R<R_\infty$. This phenomenon is even more evident in Fig. 8b, where we considered a modification of Eq. (36) that takes into account the training time $T_t$ (considering that each epoch takes $\sim N$ steps), i.e. the ratio
$$\hat R=\frac{2\tau_{M+L}+T_t}{\tau_M+T_t}.\qquad(37)$$
Performing LMMC helps, because it provides a large benefit when the MADE is not trained enough; and even when the MADE is good enough to be used alone, the addition of the training time adds an overhead that dominates over the additional cost of performing LMMC, so that the ratio remains always close to one or below. We thus conclude that, if one has equilibrated at the critical temperature and wants to equilibrate within one of the two states that appear just below it, MADE+LMMC is generically more efficient than MADE alone.

C. Comparison with local Metropolis Monte Carlo

Having assessed that MADE+LMMC is more efficient than MADE alone, one might wonder whether the MADE is needed at all. To answer this question, we compare in the same setting the MADE+LMMC dynamics with that with LMMC only. The first passage time ratio
$$R'=\frac{2\tau_{M+L}+T_t}{\tau_L}\qquad(38)$$
is shown in Fig. 9a, and it turns out that simply using LMMC performs better, i.e. we always observe $R'>1$. This is due to the fact that we are only considering the absolute value of the magnetization: the addition of the MADE helps to explore the energy landscape better by allowing sudden jumps between the two states that are present at $T<T_c$. This can be seen in Fig.
9b, in which we considered the ratio
$$\tilde R'=\frac{2\tilde\tau_{M+L}+T_t}{\tilde\tau_L},\qquad(39)$$

[FIG. 9. (a) Ratio $R'$ of the average first passage time between the MADE+LMMC (including training time) and the LMMC alone, for the absolute value of magnetization, as a function of training time for fixed $\Delta\beta=b/\sqrt N$ with $b=2$. (b) Same plot for the ratio $\tilde R'$ for the signed value of the magnetization. The learning rate is fixed to $\eta=N^{-\frac32}$.]

where $\tilde\tau_L$ and $\tilde\tau_{M+L}$ are the average first passage times of LMMC and MADE+LMMC for reaching the signed (positive) equilibrium magnetization starting from zero magnetization. In this case, we observe that $\tilde R'<1$, indicating a better efficiency of MADE+LMMC, when $N$ is sufficiently large and the number of training steps is not too high (otherwise, the computational cost outweighs the gains from using MADE). The LMMC alone has worse performance, because it can end up in the negative state and be stuck there for a long time. The addition of the MADE avoids this problem. Note that the barrier for LMMC to jump from the negative to the positive state scales as $\exp\left(AN\Delta\beta^2\right)$
$=\exp\left(Ab^2\right)$, hence it remains finite with the chosen scaling of $\Delta\beta=b/\sqrt N$. When instead $\beta-\beta_c=O(1)$, the LMMC needs a time scaling exponentially in $N$, and the gain from using the MADE becomes even more visible.

V. CONCLUSIONS

In this work, we were able to fully describe a generative autoregressive architecture, the shallow MADE, for the study of the Curie-Weiss model at finite size $N$. We first characterized the problem in terms of the optimal couplings that can be found in order to approximate the Curie-Weiss distribution. We were then able to describe how these couplings are learned during the training process. Interestingly, we found that the system undergoes a critical slowing down in the learning, characterized by the same behavior as typical local dynamics. Further work is needed to test whether this is a general result or a peculiarity of the model. We were then able to use these results to benchmark the model performance with and without additional local Monte Carlo steps in the Sequential Tempering procedure. We found that using the perfectly trained architecture renders additional local Monte Carlo steps unnecessary, as long as a suitable annealing schedule is chosen (i.e. with small enough $b=\Delta\beta\sqrt N$). However, since the cost of training the architecture increases with the model size, one has to resort to using an imperfectly trained machine, which in turn benefits from additional local Monte Carlo steps. However, we verified that the NN-assisted procedure is actually able to outperform the one with local moves only, because it allows jumps between the distinct states that form below the critical temperature, thus showing the improvement coming from the usage of the architecture. In summary, the NN's role is mostly to allow for efficient sampling of distinct states, while the local moves are mostly needed to efficiently sample within a state.
Based on these results, we predict that, in practical applications, the use of generative autoregressive NNs is helpful to better simulate the systems of interest whenever more than one state is present. However, since in practice one has to compromise between the accuracy and the time of training, the addition of Monte Carlo steps will actually improve the results. Further numerical tests of these ideas are left for future work.

CODE AVAILABILITY

The code used in this paper is available at the GitHub repository https://github.com/Laplaxe/MonteCarloST_CW .

ACKNOWLEDGMENTS

We thank Marylou Gabrié and Guilhem Semerjian for useful discussions. The research has received financial support from the “National Centre for HPC, Big Data and Quantum Computing - HPC”, Project CN_00000013, CUP B83C22002940006, NRP Mission 4 Component 2 Investment 1.5, Funded by the European Union - NextGenerationEU. Author LMDB acknowledges funding from the Bando Ricerca Scientifica 2024 - Avvio alla Ricerca (D.R. No. 1179/2024) of Sapienza Università di Roma, project B83C24005280001 – MaLeDiSSi. We acknowledge support from the computational infrastructure DARIAH.IT, PON Project code PIR01_00022, National Research Council of Italy.

[1] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin Glass Theory and Beyond. World Scientific, 1987.
[2] Daniel L. Stein and Charles M. Newman. Applications to other fields. In Spin Glasses and Complexity. Princeton University Press, 01 2013.
[3] Patrick Charbonneau, Enzo Marinari, Giorgio Parisi, Federico
Ricci-Tersenghi, Gabriele Sicuro, Francesco Zamponi, and Marc Mezard. Spin glass theory and far beyond: replica symmetry breaking after 40 years. World Scientific, 2023.
[4] Koji Hukushima and Koji Nemoto. Exchange monte carlo method and application to spin glass simulations. Journal of the Physical Society of Japan, 65(6):1604–1608, 1996.
[5] Jérôme Houdayer. A cluster monte carlo algorithm for 2-dimensional spin glasses. The European Physical Journal B - Condensed Matter and Complex Systems, 22:479–484, 2001.
[6] Zheng Zhu, Andrew J Ochoa, and Helmut G Katzgraber. Efficient cluster algorithm for spin glasses in any space dimension. Physical Review Letters, 115(7):077201, 2015.
[7] Koji Hukushima and Yukito Iba. Population annealing and its application to a spin glass. In AIP Conference Proceedings, volume 690, pages 200–206. American Institute of Physics, 2003.
[8] Jonathan Machta. Population annealing with weighted averages: A monte carlo method for rough free-energy landscapes. Physical Review E, 82(2):026704, 2010.
[9] Wenlong Wang, Jonathan Machta, and Helmut G Katzgraber. Comparing monte carlo methods for finding ground states of ising spin glasses: Population annealing, simulated annealing, and parallel tempering. Physical Review E, 92(1):013303, 2015.
[10] Lev Yu Barash, Martin Weigel, Michal Borovský, Wolfhard Janke, and Lev N Shchur. GPU accelerated population annealing algorithm. Computer Physics Communications, 220:341–350, 2017.
[11] Martin Weigel, Lev Barash, Lev Shchur, and Wolfhard Janke. Understanding population annealing monte carlo simulations. Physical Review E, 103(5):053301, 2021.
[12] Piotr Białas, Piotr Korcyl, and Tomasz Stebel. Hierarchical autoregressive neural networks for statistical systems. Computer Physics Communications, 281:108502, 2022.
[13] Simone Ciarella, Jeanne Trinquier, Martin Weigt, and Francesco Zamponi. Machine-learning-assisted monte carlo fails at sampling computationally hard problems.
Machine Learning: Science and Technology, 4(1):010501, 2023.
[14] Indaco Biazzo, Dian Wu, and Giuseppe Carleo. Sparse autoregressive neural networks for classical spin systems. Machine Learning: Science and Technology, 2024.
[15] Luca Maria Del Bono, Federico Ricci-Tersenghi, and Francesco Zamponi. Nearest-neighbours neural network architecture for efficient sampling of statistical physics models. arXiv preprint arXiv:2407.19483, 2024.
[16] Christoph Schönle and Marylou Gabrié. Optimizing markov chain monte carlo convergence with normalizing flows and gibbs sampling. In NeurIPS 2023 AI for Science Workshop, 2023.
[17] Kim A Nicoli, Christopher J Anders, Tobias Hartung, Karl Jansen, Pan Kessel, and Shinichi Nakajima. Detecting and mitigating mode-collapse for flow-based sampling of lattice field theories. Physical Review D, 108(11):114501, 2023.
[18] Giulio Biroli and Marc Mézard. Generative diffusion in very large dimensions. Journal of Statistical Mechanics: Theory and Experiment, 2023(9):093402, 2023.
[19] Stefano Bae, Enzo Marinari, and Federico Ricci-Tersenghi. A very effective and simple diffusion reconstruction for the diluted ising model. arXiv preprint arXiv:2407.07266, 2024.
[20] Nicholas T Hunt-Smith, Wally Melnitchouk, Felix Ringer, Nobuo Sato, Anthony W Thomas, and Martin J White. Accelerating markov chain monte carlo sampling with diffusion models. Computer Physics Communications, 296:109059, 2024.
[21] Aurélien Decelle, Beatriz Seoane, Lorenzo Rosset, Cyril Furtlehner, Nicolas Bereux, Giovanni Catania, and Elisabeth Agoritsas. The restricted boltzmann machine: from the statistical physics of disordered systems to a practical and interpretative generative machine learning. Bulletin of the American Physical Society, 2024.
[22] Tanguy Marchand, Misaki Ozawa, Giulio Biroli, and Stéphane Mallat. Wavelet conditional renormalization group. arXiv preprint arXiv:2207.04941, 2022.
[23] Kanta Masuki and Yuto Ashida.
Generative | https://arxiv.org/abs/2505.22598v1 |
diffusion model with inverse renormalization group flows. arXiv preprint arXiv:2501.09064, 2025.
[24] Leonardo Galliano, Riccardo Rende, and Daniele Coslovich. Policy-guided monte carlo on general state spaces: Application to glass-forming mixtures. The Journal of Chemical Physics, 161(6), 2024.
[25] Daria Pugacheva, Andrei Ermakov, Igor Lyskov, Ilya Makarov, and Yuriy Zotov. Enhancing gnns performance on combinatorial optimization by recurrent feature update. arXiv preprint arXiv:2407.16468, 2024.
[26] Zi-Song Shen, Feng Pan, Yao Wang, Yi-Ding Men, Wen-Biao Xu, Man-Hong Yung, and Pan Zhang. Free-energy machine for combinatorial optimization. Nature Computational Science, pages 1–11, 2025.
[27] Dian Wu, Lei Wang, and Pan Zhang. Solving statistical mechanics using variational autoregressive networks. Physical Review Letters, 122(8):080602, 2019.
[28] Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science, 355(6325):602–606, 2017.
[29] B McNaughton, MV Milošević, A Perali, and S Pilati. Boosting monte carlo simulations of spin glasses using autoregressive neural networks. Physical Review E, 101(5):053312, 2020.
[30] Marylou Gabrié, Grant M Rotskoff, and Eric Vanden-Eijnden. Adaptive monte carlo augmented with normalizing flows. Proceedings of the National Academy of Sciences, 119(10):e2109420119, 2022.
[31] Shams Mehdi, Zachary Smith, Lukas Herron, Ziyue Zou, and Pratyush Tiwary. Enhanced sampling with machine learning. Annual Review of Physical Chemistry, 75, 2024.
[32] Indaco Biazzo. The autoregressive neural network architecture of the boltzmann distribution of pairwise interacting spins systems. Communications Physics, 6(1):296, 2023.
[33] Mark EJ Newman and Gerard T Barkema. Monte Carlo Methods in Statistical Physics. Clarendon Press, 1999.
[34] Fernando Martínez-García and Diego Porras. Problem hardness of diluted ising models: Population annealing vs simulated annealing.
arXiv preprint arXiv:2501.07638, 2025.
[35] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pages 881–889. PMLR, 2015.
[36] Pankaj Mehta, Marin Bukov, Ching-Hao Wang, Alexandre GR Day, Clint Richardson, Charles K Fisher, and David J Schwab. A high-bias, low-variance introduction to machine learning for physicists. Physics Reports, 810:1–124, 2019.
[37] Aydin Deger and Christian Flindt. Lee-yang theory of the curie-weiss model and its rare fluctuations. Physical Review Research, 2(3):033009, 2020.
[38] Joris Bierkens and Gareth Roberts. A piecewise deterministic scaling limit of lifted metropolis–hastings in the curie–weiss model. The Annals of Applied Probability, 27(2):846–882, 2017.
[39] Jeffrey J Hunter. The computation of the mean first passage times for markov chains. Linear Algebra and its Applications, 549:100–122, 2018.
[40] Riaz A Usmani. Inversion of a tridiagonal jacobi matrix. Linear Algebra and its Applications, 212(213):413–414, 1994.
[41] CM Da Fonseca. On the eigenvalues of some tridiagonal matrices. Journal of Computational and Applied Mathematics, 200(1):283–286, 2007.
[42] Llewellyn Hilleth Thomas. Elliptic problems in linear difference equations over a network. Watson Sci. Comput. Lab. Rept., Columbia University, New York, 1:71, 1949.
[43] Min Tian, Qi Liu, Jingshan Pan, Ying Gou, and Zanjun Zhang. swpts: an efficient parallel thomas split algorithm for tridiagonal systems on sunway manycore processors. The Journal of Supercomputing, 80(4):4682–4706, 2024.
arXiv:2505.22601v1 [cs.LG] 28 May 2025

Machine Unlearning under Overparameterization

Jacob L. Block∗ Aryan Mokhtari∗ Sanjay Shakkottai∗

Abstract

Machine unlearning algorithms aim to remove the influence of specific training samples, ideally recovering the model that would have resulted from training on the remaining data alone. We study unlearning in the overparameterized setting, where many models interpolate the data, and defining the unlearning solution as any loss minimizer over the retained set—as in prior work in the underparameterized setting—is inadequate, since the original model may already interpolate the retained data and satisfy this condition. In this regime, loss gradients vanish, rendering prior methods based on gradient perturbations ineffective, motivating both new unlearning definitions and algorithms. For this setting, we define the unlearning solution as the minimum-complexity interpolator over the retained data and propose a new algorithmic framework that only requires access to model gradients on the retained set at the original solution. We minimize a regularized objective over perturbations constrained to be orthogonal to these model gradients, a first-order relaxation of the interpolation condition. For different model classes, we provide exact and approximate unlearning guarantees, and we demonstrate that an implementation of our framework outperforms existing baselines across various unlearning experiments.

∗Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA. {jblock@utexas.edu, mokhtari@austin.utexas.edu, sanjay.shakkottai@utexas.edu}

1 Introduction

As modern models are trained on vast datasets, the ability to remove the influence of specific data samples from a trained model is essential—both to comply with privacy regulations such as the GDPR and CCPA [GDP16; CCP18], and to correct mislabeled or biased data that may compromise model integrity [GRBTKDYZA24].
Machine unlearning [CY15] refers to algorithms that address these challenges by modifying a model trained on a dataset $\mathcal D$ to forget a subset of samples, termed the forget set $\mathcal D_f$, and produce a model that behaves as if it had been trained only on the remaining data, denoted the retain set $\mathcal D_r=\mathcal D\setminus\mathcal D_f$. The ideal, yet costly, “gold standard” solution to unlearning is to retrain the model from scratch on the retain set $\mathcal D_r$, which perfectly achieves the unlearning objective but is often infeasible due to high computational cost and the potential for limited access to the original training data. Hence, the goal of an unlearning algorithm is to efficiently approximate this outcome using the knowledge of the original training procedure, the samples to be forgotten, and potentially restricted side-information related to the retained data, aiming to recover a model that could result from training on $\mathcal D_r$ alone. In the underparameterized regime, where the model class cannot fit all training data, the training loss admits a unique minimizer. Thus, the natural definition of the exact unlearning solution is the unique minimizer of the loss on $\mathcal D_r$. When the loss is further strongly convex, prior work developed efficient unlearning approximations using influence functions, which estimate the effect of removing a sample via a single gradient ascent step over the loss on $\mathcal D_f$, preconditioned by the inverse loss Hessian on $\mathcal D$ [BNLGG22; SAKS21; GGHV20]. In contrast, this paper focuses on the overparameterized regime, where the
model class contains many interpolating solutions. Crucially, the training loss no longer admits a unique minimizer, and defining the unlearning solution by loss optimality alone no longer suffices: the original model θ∗ minimizes the loss over both D and Dr, and θ∗ clearly encodes information about Df, the data to be removed. Moreover, interpolation causes the loss gradients to vanish, rendering loss-gradient-based methods such as influence functions ineffective (Theorem 2.1). This fundamental shift necessitates both a new definition of unlearning and new algorithmic tools tailored to the overparameterized setting. We begin by formalizing unlearning in the overparameterized setting. Specifically, we define the exact unlearning solution as the model which minimizes a model complexity measure R, subject to minimizing the loss over Dr; see (2). For natural choices of R, such as the parameter norm, this definition ensures that the unlearned model reveals no information about the forgotten data and maintains strong generalization performance using only the retain set. Given this definition of unlearning, we propose a new algorithmic framework to compute the solution. We focus on settings where the loss is minimized by any interpolating model, so the loss minimization constraint reduces to requiring interpolation of Dr. To solve the resulting problem of minimizing R subject to interpolation, we relax the constraint via a first-order Taylor expansion around θ∗ and reparameterize as θ∗ + ∆, where ∆ is the drift. Since θ∗ already interpolates Dr, the linearized constraint requires ∆ to be orthogonal to the model gradients at θ∗ on Dr. This simplifies the problem, requiring only gradient access, and avoids the complex interpolation constraint. To mitigate error from this relaxation, we add a regularizer ˆR(∆) to control the size and direction of the drift. The final objective minimizes R(θ∗ + ∆) + ˆR(∆) under the relaxed orthogonal gradient constraint, yielding updated parameters θ∗ + ∆.
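This orthogonal-gradient construction is easy to prototype. The sketch below (NumPy, with toy sizes and random data standing in for real model gradients; all values are assumptions for illustration) projects an arbitrary drift onto the orthogonal complement of the retained-sample gradients, so the linearized constraint ∇f(θ∗, xi)⊤∆ = 0 holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_r = 20, 5                       # parameter dim, number of retained samples (toy sizes)
G = rng.standard_normal((n_r, d))    # rows: model gradients at theta* on the retain set
delta = rng.standard_normal(d)       # an arbitrary candidate drift

# Orthonormal basis for the span of the retained gradients.
Q, _ = np.linalg.qr(G.T)

# Remove the component of the drift lying in that span.
delta_perp = delta - Q @ (Q.T @ delta)

# The projected drift satisfies the linearized interpolation constraint.
assert np.allclose(G @ delta_perp, 0.0)
```

Any minimization over drifts restricted to this subspace, as in the framework above, therefore preserves the first-order interpolation condition on the retain set.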
Theoretical Contributions. For linear models and linear networks, we prove there exists a regularizer ˆR such that minimizing R(θ∗ + ∆) + ˆR(∆) over our constraint relaxation gives the exact unlearning solution when R is the ℓ2-norm of either the effective linear predictor or the full parameter vector. For two-layer perceptrons with nonlinear activations, where R measures network width, we prove that the right choice of ˆR yields a solution to our relaxed problem which interpolates Dr and matches the best known upper bound on the number of neurons required to fit any dataset of a given size.

Algorithmic Contributions. We devise an iterative algorithm MinNorm-OG that accesses a subset of Dr, aligning with data access assumptions in prior work [KTHT23; PDLKSN25; MFSLK24], where OG refers to orthogonal gradient. MinNorm-OG alternates between two steps: solving for the minimizer of R(θ + ∆) + ˆR(∆) over ∆ satisfying the orthogonal gradient constraint, and descending on the loss over Dr (Algorithm 1). We take both R and ˆR as scaled squared ℓ2-norms, which apply broadly to parameterized models and yield a closed-form solution to the relaxed problem. We show strong performance of our method across three experimental settings: Data Poisoning, Multi-Class Label Erasure, and Representation Collapse, using natural and interpretable unlearning metrics to compare our method against existing baselines. Notably, the Multi-Class Label Erasure and Representation Collapse image-domain experiments introduce novel unlearning settings for effective evaluation.

Related work. Unlearning
theory traces back to influence functions [Ham74], a classic statistical tool for estimating the effect of down-weighting a sample on a learned function [BNLGG22]. Extensions have explored approximate unlearning via differential privacy [SAKS21; GGHV20]. Previous works have considered different unlearning paradigms. [SAKS21] analyzed the deletion capacity an unlearning method can tolerate while maintaining adequate generalization performance. [GKKMSZ23; BCCJTZLP21] proposed joint learning-unlearning schemes that store information about data subsets during training for later unlearning. Several works proposed iterative unlearning methods for large-scale models, combining loss ascent, descent, and noise injection [NRS21; GNG21; CS23; JBVRCCH25; ZLBM24]. All these methods rely on loss gradient perturbations, which we show yield vacuous updates under overparameterization (Theorem 2.1). In practice, they also struggle to unlearn effectively [PDLKSN25], as loss ascent encourages misfitting Df rather than forgetting it. Our framework builds on components from other contexts. We enforce parameter perturbations to be orthogonal to the gradient of the model's predictions on Dr to preserve loss optimality—an idea also used in continual learning to retain past performance [FAML20]. Recent unlearning methods use similar projections which mix loss ascent and descent, but their reliance on these objectives inherits prior limitations [CZYZ24; HRGV24].

Notation. Vectors and matrices are in bold, with vectors lowercase and matrices uppercase. For sets A, B, A ⊔ B denotes disjoint union. 2^A is the power set. For a proposition a, 1{a} is 1 if true and 0 otherwise; δ{a} is +∞ if true and 0 otherwise. For x ∈ R^d and A ⊆ R^d, P_A(x) is the Euclidean projection onto A. For Z ∈ R^{m×n}, vec(Z) ∈ R^{mn} is the columnwise vectorization. im(Z), ker(Z), and row(Z) denote the image, kernel, and rowspace. ∥Z∥F is the Frobenius norm, ∥Z∥∗ is the nuclear norm, and ∥Z∥2 is the spectral norm.
For Y ∈ R^{m×n}, ⟨Z, Y⟩ is the Frobenius inner product and Z ⊙ Y is the element-wise product. tr{·} is the trace. For x ∈ R^{dx} and y ∈ R^{dy}, [x; y] ∈ R^{dx+dy} stacks x and y. ∥x∥p is the ℓp-norm. [n] = {1, . . . , n}. For x ∈ R, (x)+ = max{x, 0} is the ReLU. Let 0 and 1 denote the vectors with each entry equal to 0 and 1, respectively. Further, for x ∈ R^d and c ∈ R, let 1_{x̸=c} denote the vector which is 1 in each entry of x which is not equal to c and 0 otherwise.

2 Unlearning in Overparameterized Settings

We introduce notation for our unlearning setting, highlighting the unique challenges of the overparameterized regime. We explain why loss optimality alone no longer suffices to define the ground-truth unlearning solution, and demonstrate why loss-gradient-based methods, originally designed for the underparameterized case, prove ineffective. To formalize the unlearning problem, we now define the problem setting and notation, covering both the underparameterized and overparameterized regimes. We define the full training dataset D = {(xi, yi)}_{i=1}^n, with sample inputs xi ∈ R^m and outputs yi ∈ R^l drawn from the data domain Z = R^m × R^l. Initially, training is performed on the full dataset D over the model class {f(θ, ·) | θ ∈ R^d} parameterized by θ ∈ R^d, where f : R^{d+m} → R^l takes a parameter vector θ ∈ R^d and an input x ∈ R^m and maps them to a prediction f(θ, x) in the output space R^l. We define the training procedure, also denoted the learning algorithm, as A : 2^Z → R^d, which takes in a dataset and returns the parameter vector θ∗ corresponding to the trained model. We make the minimal assumption that A is faithful to a known loss function J, meaning A(D) = θ∗ is only guaranteed to be a minimizer of J over D,
where J is defined as the average of the sample-wise loss L:

A(D) = θ∗ ∈ argmin_θ J(θ; D) = argmin_θ (1/n) Σ_{(x,y)∈D} L(θ; x, y).   (1)

For our theoretical discussion, we consider sample-wise loss functions L(θ; x, y) which are minimized when f(θ, x) = y, meaning that sample interpolation implies loss minimization. For example, this is the case for ℓp-norm regression or classification with 0-1 loss. With this training setup, we begin the unlearning process given a request for the model to forget a subset of the training data Df ⊆ D. We then apply an unlearning algorithm M(A, Ir, A(D), Df), which is given the learning algorithm A, side information Ir (e.g., a subset of the samples, or the Hessian of the training loss over the retained data), the initial solution A(D), and the forget set Df, and which attempts to recover the desired unlearning solution, denoted by θ∗_r, where the subscript r indicates that θ∗_r is the parameter vector that would result from training only on the retain set Dr = D \ Df. To formally define θ∗_r, we must distinguish between the underparameterized and overparameterized regimes, as the former's definition requires refinement to remain meaningful in the latter. In the underparameterized setting, the loss function over both the full dataset, J(θ; D), as well as the retain set, J(θ; Dr), admits a unique minimizer. To ensure that the unlearning solution remains consistent with the training loss, the only valid choice is to define θ∗_r as the unique minimizer of J(θ; Dr). However, in the overparameterized setting this uniqueness property fails to hold, as both J(θ; D) and J(θ; Dr) may admit multiple minimizers. In order to sidestep the non-uniqueness issue, one may be tempted to define any minimizer of J(θ; Dr) as a valid unlearning solution, as presumably any minimizer of J(θ; Dr) could be found from just training on Dr alone. However, following this rationale allows seemingly valid unlearning solutions to leak information relating to Df.
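The faithfulness assumption in (1) pins down only that A returns some loss minimizer, and under overparameterization there are infinitely many. A toy least-squares instance in NumPy (the sizes, seed, and step size below are assumptions for illustration) makes this concrete: gradient descent is faithful to the squared loss, and from zero initialization it happens to select one particular interpolator, the minimum-ℓ2-norm one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 12                                   # n samples, m > n parameters (toy sizes)
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)

# A learner faithful to J(theta; D) = (1/n) sum (theta^T x - y)^2: plain gradient descent.
theta = np.zeros(m)
for _ in range(50_000):
    theta -= 0.02 * (2 / n) * X.T @ (X @ theta - y)

# The learner reaches a loss minimizer, here an interpolator of D ...
assert np.allclose(X @ theta, y, atol=1e-6)

# ... but it is only one of infinitely many: from zero initialization,
# gradient descent selects the minimum-l2-norm interpolator.
assert np.allclose(theta, np.linalg.pinv(X) @ y, atol=1e-5)

# In particular, theta* also minimizes the loss on any retained subset of D,
# which is why "any minimizer of J(theta; Dr)" is too weak a definition.
assert np.allclose(X[:3] @ theta, y[:3], atol=1e-6)
```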
Specifically, the original solution θ∗ that interpolates all of D is itself a valid minimizer of the retain set loss J(θ; Dr), but θ∗ can reflect training dynamics influenced by Df, revealing information that cannot be inferred from Dr alone (see Appendix B for a concrete illustration).

2.1 Defining Unlearning Beyond Loss Optimality

As discussed above, the overparameterized setting requires a more fine-grained definition of the desired unlearning solution—one that goes beyond loss optimality. We define the unlearning solution in the overparameterized case to be the specific loss minimizer which minimizes an additional objective function R(θ), expressed as the output of a training algorithm AR:

AR(Dr) = θ∗_r ∈ argmin_θ R(θ), subject to θ ∈ argmin_{θ′} J(θ′; Dr).   (2)

This bilevel optimization problem searches for the model which minimizes the complexity measure R among all models which minimize the retain set loss. Indeed, when R admits a unique solution, this formulation overcomes the prior issues of non-uniqueness and the risk of revealing information from the forget set. While different choices of R can address these issues, we ultimately want R to promote desirable model properties. In our theoretical results, we focus on R as a regularization function that penalizes model complexity. This way, the solution θ∗_r to (2) corresponds to the simplest model that interpolates Dr—a particularly useful property in the overparameterized regime, where the simplest interpolating model is often
associated with optimal generalization performance [HMRT22]. Then, given the training algorithm AR, side information about the retain set Ir, a minimizer A(D) of the original training loss, and the forget set Df, an unlearning algorithm M(AR, Ir, A(D), Df) attempts to recover AR(Dr), the least complex loss minimizer over Dr as measured by R.

2.2 Loss Gradient Methods Deployed Under Overparameterization

Given the characterization in (2) of the ground-truth unlearning solution under overparameterization, we show that existing unlearning methods based on loss gradient perturbations fail to achieve meaningful unlearning updates. Prior theoretical works proposed gradient-ascent-style updates based on influence functions, a principled technique from robust statistics [BNLGG22; SAKS21; GGHV20], while existing empirical unlearning methods perform combinations of loss ascent over Df, loss descent over Dr, and parameter noising [NRS21; GNG21; CS23; KTHT23]. We characterize these methods as loss-gradient unlearning and show that they perform ineffective updates when deployed under overparameterization.

Definition 2.1. Let θ∗ = A(D). We say an unlearning algorithm M performs loss-gradient unlearning if, for positive semi-definite Pr, Pf ∈ R^{d×d} and a zero-mean random variable ξ ∈ R^d,

M(A, Ir, A(D), Df) = θ∗ − Pr ∇θJ(θ∗; Dr) + Pf ∇θJ(θ∗; Df) + ξ.   (3)

Although versions of loss-gradient unlearning have been theoretically motivated in the underparameterized setting [SAKS21; GGHV20], we show they fail to unlearn in the overparameterized setting.

Theorem 2.1. Let f(θ∗, ·) interpolate D, so f(θ∗, x) = y for all (x, y) ∈ D, and let MLG be any loss-gradient unlearning method. If the sample loss L(θ; x, y) is minimized when f(θ, x) = y, then for all Df ⊆ D, MLG simply noises θ∗ by some zero-mean random variable ξ:

MLG(A, Ir, A(D), Df) = θ∗ + ξ.

The recovered parameters θ∗ already minimize J(θ∗; Dr), so the loss gradients vanish and MLG merely adds noise to θ∗.
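Theorem 2.1 can be verified numerically on a toy overparameterized least-squares instance (the sizes and random data below are assumptions): at an interpolating θ∗, the loss gradients on both Dr and Df vanish, so any update of the form (3) collapses to θ∗ + ξ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 15                                  # overparameterized: more parameters than samples
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)

theta_star = np.linalg.pinv(X) @ y            # interpolates the full dataset D
assert np.allclose(X @ theta_star, y)

Xr, yr = X[:4], y[:4]                         # retain set
Xf, yf = X[4:], y[4:]                         # forget set

def loss_grad(theta, A, b):
    """Gradient of the average squared loss J(theta) = (1/n) * ||A theta - b||^2."""
    return (2 / len(b)) * A.T @ (A @ theta - b)

# Both loss gradients vanish at theta*, so an update of the form (3)
# reduces to theta* plus zero-mean noise, exactly as Theorem 2.1 states.
assert np.allclose(loss_grad(theta_star, Xr, yr), 0.0)
assert np.allclose(loss_grad(theta_star, Xf, yf), 0.0)
```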
This shows the core issue with loss gradient updates in overparameterized unlearning: the loss gradient is uninformative, as both θ∗ and θ∗_r minimize the loss on Dr.

3 Our Proposed Framework

We present a new framework to efficiently address the desired unlearning goal in overparameterized settings without full retraining. A key assumption underlying our method is the richness of the function class, allowing for perfect fitting of the retain set. This means there exist several mappings f(θ, ·) where f(θ, xi) = yi for every (xi, yi) in the retain set. This lets us replace the loss minimization in (2) with the hard constraint f(θ, xi) = yi, leading to the following formulation:

θ∗_r ∈ argmin_θ R(θ) s.t. f(θ, xi) = yi ∀(xi, yi) ∈ Dr.   (4)

This problem could be solved independently, but doing so would be equivalent to retraining on the retain set. The main goal of our proposed framework is to solve the above problem efficiently by starting from the model θ∗, which fits each sample, and leveraging the feasibility of this model for the above optimization problem. To do so, we simplify the problem and replace the constraints in (4) with their linear approximation around θ∗. While the constraints f(θ, xi) = yi in (4) can be highly nonconvex and difficult to satisfy in general, we demonstrate that using the proposed first-order approximation

f(θ∗, xi) + ∇f(θ∗, xi)⊤(θ − θ∗) = yi ⇒ ∇f(θ∗, xi)⊤(θ − θ∗) = 0,   (5)

renders it tractable, as it leads to a set of linear constraints with respect to θ. Note that in the above simplification we used the fact that θ∗ perfectly fits the retain set, so f(θ∗, xi) = yi. Now if we apply this
constraint relaxation, the resulting optimization problem would be:

min_∆ R(θ∗ + ∆) s.t. ∇f(θ∗, xi)⊤∆ = 0 ∀(xi, yi) ∈ Dr,   (6)

where for notational convenience we define the drift variable as ∆ = θ − θ∗. While this relaxation is sensible, it presents a clear limitation: approximating a general function with its linearization is only locally accurate and thus valid when the drift term ∆ remains sufficiently small in some norm. To keep the surrogate solution close to that of the original problem in (4), we add a regularization term ˆR(∆) to the loss to control the drift. The resulting objective function is ˜R(θ∗ + ∆) := R(θ∗ + ∆) + ˆR(∆). Consequently, the optimization problem we propose to solve instead of (4) is given by

˜∆ ∈ argmin_∆ ˜R(θ∗ + ∆) s.t. ∇f(θ∗, xi)⊤∆ = 0 ∀(xi, yi) ∈ Dr.   (7)

Indeed, after finding ˜∆, the suggested unlearned model would be θ∗ + ˜∆. Although (7) employs relaxed constraints, we will show that for various mapping functions f there exists a function ˆR such that the solution to (7) either (i) solves the original unlearning problem (4) exactly, or (ii) yields a model that both interpolates Dr, remaining feasible for (4), and satisfies a tight upper bound on the complexity measure R. A key advantage of the formulation in (7), beyond simplifying the constraints, is its minimal information requirement: it relies only on the gradient of f evaluated at the original trained model, i.e., the side information Ir = {∇θf(θ∗, x)}_{(x,y)∈Dr}. This is significantly less restrictive than prior work, which requires access to the inverse Hessian of the loss over Dr [BNLGG22; SAKS21; GGHV20], and makes our method substantially simpler than full retraining.

4 Theoretical Guarantees

This section provides theoretical guarantees for using our proposed relaxation (7) to solve the exact unlearning problem (4). For clarity, we denote the Euclidean projection onto a set S by PS(·), and we define the penalty function δ{a}, which is +∞ if condition a is satisfied and 0 otherwise.
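Before the formal results, the relaxation (7) can be checked end to end in the simplest case. For a linear model f(θ, x) = θ⊤x, the gradients ∇f(θ∗, xi) are the retained inputs themselves, so the feasible set of (7) is the orthogonal complement of row(Xr), and for ˜R(θ) = ∥θ∥2 the minimizer θ∗ + ˜∆ is the projection of θ∗ onto row(Xr). The NumPy sketch below (toy sizes and random data are assumptions) confirms this matches the minimum-norm interpolator of Dr, which is the content of Theorem 4.1 in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 12                                # overparameterized linear regression
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)

theta_star = np.linalg.pinv(X) @ y          # original model: interpolates all of D
Xr, yr = X[:4], y[:4]                       # retain the first 4 samples

# Relaxed problem (7): min ||theta* + delta||  s.t.  Xr @ delta = 0.
# The feasible set is the orthogonal complement of row(Xr), so the minimizer
# theta* + delta is the Euclidean projection of theta* onto row(Xr).
Q, _ = np.linalg.qr(Xr.T)                   # orthonormal basis of row(Xr)
theta_unlearned = Q @ (Q.T @ theta_star)

# Exact unlearning check: equals the min-l2-norm interpolator of Dr.
theta_retrain = np.linalg.pinv(Xr) @ yr
assert np.allclose(theta_unlearned, theta_retrain)
assert np.allclose(Xr @ theta_unlearned, yr)
```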
4.1 Linear Model

We first consider training a linear model f(θ, x) = θ⊤x on data D = {(xi, yi)}_{i=1}^n, where xi ∈ R^m and yi ∈ R. Given initial parameters θ∗ with θ∗⊤xi = yi for all (xi, yi) ∈ D, we can easily solve the exact unlearning problem (4) for R(θ) = ∥θ∥2.

Theorem 4.1. Let ˜∆ solve (7) for f(θ, x) = θ⊤x and ˜R(θ) = ∥θ∥2. Then the recovered solution ˜θ = θ∗ + ˜∆ solves the exact unlearning problem (4) for R(θ) = ∥θ∥2.

This result holds because, in the linear case, the surrogate and original constraints match exactly, and no approximation error is introduced. Thus, no additional regularizer (i.e., ˆR(·) = 0) is needed.

4.2 L-Layer Linear Network

In this section, we extend our analysis to a more complex model: an L-layer linear network. Let the prediction function be f(θ, x) = c⊤ A_{L−1} · · · A1 x, where the parameter vector is partitioned θ = [c; vec(A1); . . . ; vec(A_{L−1})], with Aℓ ∈ R^{hℓ×hℓ−1} for ℓ = 1, . . . , L−1 and c ∈ R^{h_{L−1}}. The input dimension is m = h0, and we assume n < m to reflect the overparameterized regime. For clarity, define the effective linear predictor w(θ) = A1⊤ · · · A_{L−1}⊤ c, so that f(θ, x) = w(θ)⊤x. For this model class, we study two natural choices of regularizers in (4): (i) R as the norm of the prediction function as a linear map, and (ii) R as the norm of all model parameters.

4.2.1 Minimizing Predictor Norm

We first analyze the case where R measures the ℓ2-norm of
the effective linear predictor: R(θ) = ∥A1⊤ · · · A_{L−1}⊤ c∥2 = ∥w(θ)∥2. Given θ∗ = [c∗; vec(A∗1); . . . ; vec(A∗_{L−1})] such that w(θ∗)⊤xi = yi for all (xi, yi) ∈ D, we aim to solve (4) for this choice of R. In this case the mapping f is non-linear with respect to θ. As a result, the first-order approximation of the constraints is not tight, so solving the surrogate problem in (7) does not necessarily give a solution to the problem in (4). However, we show that adding a suitable regularizer ˆR to control model drift ensures the relaxed and original problems have the same solution. We first present an intermediate result showing the existence of a feasible perturbation ˜∆ that satisfies the relaxed linearized constraints and, when added to θ∗, yields an optimal solution to (4).

Lemma 1. Denote the retain set input subspace by Sr = span{x | (x, y) ∈ Dr} and partition the perturbation as ˜∆ = [˜∆c; vec(˜∆A1); . . . ; vec(˜∆A_{L−1})] in the same manner as θ. Set

˜∆A1 = −∥A∗2⊤ · · · A∗_{L−1}⊤ c∗∥2^{−2} (A∗2⊤ · · · A∗_{L−1}⊤ c∗) P_{Sr⊥}(w(θ∗))⊤   (8)

and all other components of ˜∆ to zero. Then ˜∆ is orthogonal to the gradient of the mapping f(θ, x) evaluated at θ = θ∗ for each input x in the retain set, and hence feasible for the relaxed problem (7). Moreover, θ∗ + ˜∆ solves the exact unlearning problem (4) for R(θ) = ∥w(θ)∥2.

The above result shows that the perturbation direction defined in (8) leads to an optimal solution for (4) once added to θ∗, while satisfying the relaxed linear constraints of the surrogate problem. That said, it does not imply that solving (6), which only differs in the constraints from (4), would recover ˜∆. In fact, we can show that without adding a proper regularization term ˆR to the loss, ˜∆ would not be a solution of the relaxed problem (see Appendix C.4.1). We next characterize the appropriate regularization ˆR(∆) needed to ensure that ˜∆ is the optimal solution to the surrogate problem in (7).

Theorem 4.2.
The solution to the relaxed unlearning problem (7) with the following choice of ˜R solves the exact unlearning problem (4) for R(θ) = ∥w(θ)∥2:

˜R(θ; θ∗) = ∥w(θ)∥2 + δ{c ̸= c∗} + Σ_{ℓ=2}^{L−1} δ{Aℓ ̸= A∗ℓ}.   (9)

4.2.2 Minimizing Parameter Norm

Next, we analyze the case where the unlearning solution is the loss minimizer with the smallest parameter norm, so R(θ) = ∥θ∥2. In this case, we can construct an exact unlearning solution from the exact unlearning solution of the previously analyzed case where R(θ) = ∥w(θ)∥2.

Theorem 4.3. Let ˆθ∗_r solve (4) for R(θ) = ∥w(θ)∥2, so w(ˆθ∗_r) is the minimum-ℓ2-norm linear predictor over Dr. Define ρ = ∥w(ˆθ∗_r)∥2 and let vℓ ∈ R^{hℓ} for ℓ ∈ [L−1] each satisfy ∥vℓ∥2 = 1. Set

˜A1 = ρ^{(1−L)/L} v1 w(ˆθ∗_r)⊤,  ˜Aℓ = ρ^{1/L} vℓ vℓ−1⊤ for ℓ = 2, . . . , L−1,  ˜c = ρ^{1/L} v_{L−1}.

Then ˜θ = [˜c; vec(˜A1); . . . ; vec(˜A_{L−1})] solves the exact unlearning problem (4) for R(θ) = ∥θ∥2.

Thus, the solution to the minimum-norm predictor problem gives the solution to the minimum parameter-norm problem, so we can apply the previous results to find a solution to (4) with R(θ) = ∥w(θ)∥2 using the constraint relaxation and then update the parameters as prescribed by Theorem 4.3.

4.3 2-Layer Perceptron

We lastly consider a 2-layer perceptron with a non-linear activation. Specifically, we define f(θ, x) = c⊤ϕ(Ax), where we use the partition θ = [c; vec(A)] for c ∈ R^h, A ∈ R^{h×m}. Here, h is
the total number of neurons and ϕ : R → R is some activation function. We abuse notation and write ϕ(Ax) to denote the element-wise application of ϕ to Ax. We analyze the case where R measures the number of active neurons, i.e., the width of the network. Formally, we denote ai⊤ as the i-th row of A, and we set R(θ) = Σ_{i=1}^h 1{|ci| ∥ai∥2 > 0}. With this choice of R, the unlearning solution promotes recovering a sparse network which fits Dr, where Dr has nr = |Dr| samples. Given that c∗⊤ϕ(A∗xi) = yi for all (xi, yi) ∈ D, we seek the minimum-neuron interpolating solution to Dr:

θ∗_r ∈ argmin_θ R(θ) s.t. c⊤ϕ(Axi) = yi ∀(xi, yi) ∈ Dr.   (10)

While we aim to solve (10) for any retain set Dr, the exact minimal-width solution remains unknown. Prior work shows that nr + 1 neurons suffice for general activations [RSSZ07], while for ReLU activations specifically, some nr-sample datasets need at least nr − 2 neurons [YSJ19]. Here, we apply our framework to recover feasible unlearned networks of width at most nr, improving the best known worst-case bound. We begin by linearizing the constraints of problem (10) around θ∗, as directly solving this problem may be intractable due to the non-linear activation ϕ, especially since we assume access only to the model gradients over Dr, not the samples in Dr themselves. We define the drift as ∆ = [∆c; vec(∆A)], yielding the specific instance of the linearized problem (6) for this model class:

min_∆ R(θ∗ + ∆) s.t. ∆c⊤ ϕ(A∗xi) + tr{∆A⊤ ((ϕ′(A∗xi) ⊙ c∗) xi⊤)} = 0 ∀(xi, yi) ∈ Dr,   (11)

where ⊙ denotes the element-wise product. Due to the layered structure and non-linear activation ϕ, solving (11) may not ensure feasibility for (10), as the relaxed constraints are loose. We first show that a feasible perturbation ˜∆, modifying only the last layer c∗, exists and yields a network satisfying (10) with at most nr active neurons.

Lemma 2. Assume the finite-width network f(θ∗, x) = c∗⊤ϕ(A∗x) interpolates Dr, where nr = |Dr| is the number of retain set samples. Let dim(span{ϕ(A∗x)}_{(x,y)∈Dr}) = s ≤ nr.
Then, there exists a feasible perturbation ˜∆ satisfying the linear constraints in (11) such that f(θ∗ + ˜∆, ·) interpolates Dr, R(θ∗ + ˜∆) ≤ s, and ˜∆A = 0.

While Lemma 2 provides a feasible point for (11), it is not the solution, as the relaxed problem linearizes the interpolation constraint without limiting drift size, potentially losing interpolation over Dr. The following theorem shows that choosing ˆR to restrict perturbations in A∗ ensures that solving (7) yields a network feasible for (10) with at most nr active neurons.

Theorem 4.4. For R(θ) which measures the number of active neurons of the network f(θ, ·), define

˜R(θ; θ∗) = R(θ) + δ{A ̸= A∗}   (12)

as the surrogate objective. Then the solution to the relaxed unlearning problem (7) with this choice of ˜R results in a network which interpolates Dr, achieving feasibility for the exact unlearning problem (10), and admits at most s = dim(span{ϕ(A∗x)}_{(x,y)∈Dr}) ≤ nr active neurons, where nr = |Dr|.

Theorem 4.4 shows that for general activation functions, linearizing the constraints of (10) and minimizing the sum of the complexity measure R and the appropriate regularizer ˆR for the drift term recovers a network that interpolates Dr with at most s active neurons, where s is the dimension of the span of the learned representations {ϕ(A∗x)}_{(x,y)∈Dr}. Since s can never exceed nr = |Dr|, our method guarantees
a worst-case interpolation width of at most nr, thereby improving the general bound of nr + 1 implied by [RSSZ07] for minimum-width interpolation. The drift regularizer ˆR only allows perturbations to c∗, so the solution to (7) reduces width via sparsity in the updated last layer c∗ + ˜∆c, while leaving the first layer A∗ unchanged. Although c∗ + ˜∆c relies on a small set of features, the feature map ϕ(A∗x) still reflects representations learned from all of D. We show, however, that the sparsity of c∗ + ˜∆c can be propagated into A∗, producing a network with a new, sparser feature map that is less expressive and no longer consistent with having been trained on the full dataset D, yet still satisfies all unlearning guarantees in Theorem 4.4.

Proposition 1. Let θ = [c; vec(A)] be any parameter vector, and define ˆA = (1_{c̸=0} 1⊤) ⊙ A. Then the updated parameters ˆθ = [c; vec(ˆA)] satisfy: (i) f(θ, x) = f(ˆθ, x) for all x ∈ R^m, (ii) R(θ) = R(ˆθ), and (iii) ˆA has at most R(ˆθ) nonzero rows.

Algorithm 1 MinNorm-OG
1: Input: θ∗, loss J(θ), D′r ⊆ Dr, step size ηt, regularization constant λt ≥ 0, subsample batch size npert
2: Initialize θ ← θ∗
3: for t = 1, . . . , nepochs do
4:   for each batch B from D′r do
5:     if λt < ∞ then
6:       Compute function gradients gi = ∇θ f(θ, xi) for xi ∈ B, i = 1, . . . , npert
7:       Solve ˜∆ ← argmin_∆ ∥θ + ∆∥2^2 + λt ∥∆∥2^2 s.t. ∆ ⊥ gi for all i ≤ npert
8:       Update θ ← θ + ˜∆
9:     Loss descent: θ ← θ − ηt ∇θ J(θ; B)
10: return θ

Thus, for any parameters θ, we can apply a simple update to recover new parameters ˆθ which behave like an R(θ)-neuron network in terms of both the function outputs and the parameters themselves. We apply this result to the solution of the relaxed unlearning problem (7) in the following corollary.

Corollary 1. Let ˜θ = [˜c; vec(A∗)] solve (7) for ˜R defined in (12), and define the updated first layer as ˆA = (1_{˜c̸=0} 1⊤) ⊙ A∗. Then the network parameterized by ˆθ = [˜c; vec(ˆA)] similarly interpolates Dr, has the same number of active neurons R(˜θ) = R(ˆθ), and ˆA has at most R(ˆθ) non-zero rows.
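For R(θ) = ∥θ∥2^2 and ˆR(∆) = λt∥∆∥2^2, step 7 of Algorithm 1 has a closed form: writing P⊥ for the projection onto the orthogonal complement of span{gi}, the minimizer is ˜∆ = −P⊥(θ)/(1 + λt). The NumPy sketch below instantiates the alternating loop for a linear model on toy data (the data, sizes, and hyperparameters are illustrative assumptions, not the paper's experimental setup).

```python
import numpy as np

def minnorm_og_step(theta, G, lam):
    """Step 7 closed form: argmin ||theta + d||^2 + lam * ||d||^2 s.t. G d = 0."""
    Q, _ = np.linalg.qr(G.T)                 # orthonormal basis of span{g_i}
    p_perp = theta - Q @ (Q.T @ theta)       # component of theta orthogonal to the gradients
    return theta - p_perp / (1.0 + lam)      # theta + delta_tilde

rng = np.random.default_rng(0)
n, m = 6, 12
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)
theta = np.linalg.pinv(X) @ y                # original interpolating model
Xr, yr = X[:4], y[:4]                        # retained samples; their inputs are the gradients

eta, lam = 0.05, 0.0
for _ in range(200):                         # alternate steps 7-8 with the step-9 descent
    theta = minnorm_og_step(theta, Xr, lam)  # proximal update under the constraint
    theta -= eta * (2 / len(yr)) * Xr.T @ (Xr @ theta - yr)   # loss descent on D_r

# For the linear model this converges to the min-norm interpolator of D_r.
assert np.allclose(theta, np.linalg.pinv(Xr) @ yr, atol=1e-6)
```

For a linear model the very first projection already lands on the minimum-norm interpolator of Dr, consistent with Theorem 4.1; for nonlinear models it is the loss-descent step that restores feasibility after the linearized update.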
Thus, solving the relaxed problem (7) and updating A∗ via Proposition 1 yields a network that reveals no trace of having been trained on the larger dataset D = Dr ⊔ Df, even at the representation level.

5 From Theory to Practice

We translate our framework into a practical unlearning algorithm, MinNorm-OG (Algorithm 1). At epoch t, we alternate between solving a version of the relaxed unlearning problem (7) and descending the loss on Dr to maintain feasibility for the exact unlearning problem (4), leveraging access to samples in Dr. Steps 6-8 of Algorithm 1 solve (7) for R(θ) = ∥θ∥2^2 and ˆR(∆) = λt∥∆∥2^2, where λt ≥ 0 is a scaling parameter, and step 9 performs the loss descent step. To handle batched data and large models, we enforce the orthogonality constraint in (7) over a subsample of size npert of each batch. For this R and ˆR, the solution to (7) perturbs θ toward its projection onto the span of model gradients over this subsample (see Appendix D), which can be interpreted as a proximal update under the orthogonal gradient constraint. The main overhead relative to gradient descent comes from solving for ˜∆ via a QR decomposition with complexity O(d·npert^2), which is negligible compared to the O(d·nB)
cost of gradient descent when npert < √nB, where nB = |B| is the batch size. Moreover, we often set λt = ∞ for many epochs in practice, skipping this cost entirely.

5.1 Experiments

We test our algorithm against the following existing methods. GD [NRS21] runs gradient descent on J(θ; Dr), while Noisy GD (NGD) [CS23] adds gradient noise to the GD steps. GA [GNG21] runs gradient ascent on J(θ; Df). NegGrad+ (NGP) [KTHT23] minimizes a weighted combination of the GD and GA objectives. SCRUB [KTHT23] optimizes three objectives: minimizing J(θ; Dr), minimizing the KL divergence of model outputs on Dr relative to the original model, and maximizing the KL divergence on Df. Negative Preference Optimization (NPO) [ZLBM24] runs a form of gradient ascent over J(θ; Df) inspired by preference optimization.

Table 1: Data Poisoning experiment results, measured as the sup-norm distance between the retain set trend y = sin(x) and the outputs of the unlearning algorithms (smaller is better). We report medians over 20 trials, along with the range of the central 10 values.

Epochs | GA | GD | NGD | NGP | MinNorm-OG | Ridge
10 | 3.56 (2.34, 6.52) | 3.38 (2.62, 7.48) | 3.63 (2.71, 7.56) | 3.70 (2.28, 7.37) | 1.89 (1.10, 6.02) | 3.38 (2.62, 7.48)
100 | 27.7 (20.6, 36.2) | 1.85 (1.51, 2.76) | 2.54 (1.56, 6.09) | 1.81 (1.41, 2.93) | 1.07 (0.62, 1.32) | 1.67 (1.37, 3.31)
1000 | 1700 (1200, 2600) | 1.58 (1.04, 2.43) | 1.35 (0.93, 3.47) | 2.29 (1.54, 5.07) | 0.84 (0.64, 1.24) | 1.29 (0.87, 2.12)

Figure 1: Example unlearned model fits when given 100 unlearning epochs for the Data Poisoning experiment, where the forget points distort the retain set trend y = sin(x). Panels: (a) MinNorm-OG (ours), (b) GD, (c) NGD, (d) Ridge, (e) NGP, (f) GA; each panel plots the original model, the unlearned model, sin(x), the retain points, and the forget points over x ∈ [−15, 15].
NPO and SCRUB only apply to models which output a class distribution. To highlight the performance of our algorithm, we also compare to ridge regression, which approximates our unlearning objective (4) for R(θ) = ∥θ∥2 by minimizing J(θ; Dr) + λt∥θ∥2^2. The minimizer of this regularized objective converges to the minimum-ℓ2-norm loss minimizer as λt → 0 [HMRT22]. While recent work has proposed various unlearning benchmarks, especially for LLMs [CS23; SLHMZHLZSZ25; RWJCBVCHG25b; RWJCBVCHG25a], they often rely on opaque metrics that emphasize suppressing forget-set generation. In contrast, we present the following experiments with interpretable quantitative metrics. See Appendix E for full details.

Data Poisoning. We train a shallow network on retain samples (xr, yr) ∈ Dr with yr = sin(xr) and forget samples (xf, yf) ∈ Df with yf = 1.5, over the input domain X = [−15, 15] ⊆ R. We evaluate the output θ of each unlearning method by measuring the deviation from the retain set trend, given by sup_{x∈X} |f(θ, x) − sin(x)|. Results are reported in Table 1 as the median over 20 trials along with the range of the central 10 trials, with visualizations in Figure 1. With just 10 epochs, the results vary widely, but they
become more consistent as the number of unlearning epochs increases. We observe in general that the methods which mainly descend the loss on Dr (GD, NGD, Ridge) struggle to escape from the initial solution which fits the poisoned samples, while the methods which include ascent (NGP, GA) diverge from the sine curve in regions unrelated to the forget points.

Multi-Class Label Erasure. We use MNIST and CIFAR-10 [LCB10; Kri09], creating red, green,

Figure 2: Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on MNIST (left) and CIFAR-10 (right). Models predict both color and content, but the retain set contains only gray images. The x-axis shows accuracy on gray test images (higher is better), and the y-axis shows the mean squared error between the predicted probability of gray on all inputs and the target of 1 (lower is better). The ground truth unlearned model (GT) performs well on gray inputs but always predicts gray with probability 1. MinNorm-OG (ours) strictly dominates the other methods.

Table 2: Unlearning performance across constraints on the number of epochs and percentage of accessible retain set samples for the Representation Collapse experiment. Models are trained on colored images where color perfectly predicts the label in the retain set but not in the full dataset D. Evaluation is measured as accuracy on duplicate training images labeled by color only (higher is better). We report medians over 5 trials, along with the range of the central 3 values.
Retain % Epochs GD GA NGD NGP NPO Scrub MinNorm-OG Ridge 15 0.60 (0.52, 0.70) 0.50 (0.50, 0.50) 0.50 (0.50, 0.50) 0.90 (0.77, 0.97) 0.50 (0.50, 0.50) 0.80 (0.74, 0.85) 1.00 (1.00, 1.00) 0.73 (0.53, 0.73) 8 0.72 (0.53, 0.74) 0.50 (0.50, 0.50) 0.50 (0.50, 0.50) 1.00 (0.99, 1.00) 0.50 (0.50, 0.50) 0.96 (0.79, 0.97) 1.00 (1.00, 1.00) 0.73 (0.66, 0.73) 10 0.76 (0.73, 0.79) 0.50 (0.50, 0.50) 0.50 (0.50, 0.50) 1.00 (1.00, 1.00) 0.50 (0.50, 0.50) 1.00 (1.00, 1.00) 1.00 (1.00, 1.00) 0.75 (0.73, 0.82) 105 0.73 (0.52, 0.73) 0.50 (0.50, 0.58) 0.50 (0.50, 0.50) 0.91 (0.82, 0.92) 0.52 (0.50, 0.57) 0.76 (0.73, 0.83) 1.00 (0.85, 1.00) 0.73 (0.52, 0.73) 8 0.72 (0.65, 0.74) 0.50 (0.50, 0.50) 0.50 (0.50, 0.50) 1.00 (1.00, 1.00) 0.50 (0.50, 0.50) 1.00 (0.99, 1.00) 1.00 (1.00, 1.00) 0.77 (0.70, 0.81) 10 0.73 (0.69, 0.80) 0.50 (0.50, 0.50) 0.50 (0.50, 0.50) 1.00 (1.00, 1.00) 0.50 (0.50, 0.50) 1.00 (1.00, 1.00) 1.00 (1.00, 1.00) 0.92 (0.81, 0.92) and gray copies of each image. The model is trained to predict both content (digit or object) and color. The retain set Drcontains all content classes only in gray, while the forget set Dfcontains all colors. The ground truth unlearned model predicts gray content well and always predicts gray color with probability 1, regardless of input. We evaluate retain quality by accuracy | https://arxiv.org/abs/2505.22601v1 |
on gray-colored test samples, and forget quality by the mean squared error between the predicted gray probability and the ideal value of 1 across all colored inputs. Figure 2 shows the Pareto frontier for each method. Each point is a median over 5 trials for one hyperparameter setting, with shaded uncertainty equal to half the interquartile range. The optimal point (1, 0) indicates perfect retain accuracy and zero gray-prediction error. The ground truth unlearned model is labeled GT. Our method MinNorm-OG performs best on both tasks, though all methods struggle to preserve accuracy on CIFAR-10, a harder task than MNIST. We observe that descent-based methods (GD, NGD, Ridge) often remain near the initial model (upper-right region), which is already near-optimal on D_r and provides weak gradients for unlearning.

Representation Collapse. We use a subset of MNIST where the digits 0 and 1 are each assigned a unique color. The retain set D_r contains the digits colored uniquely, while the forget set D_f contains digits with mismatched colors. The ground truth unlearned model predicts from color alone, as color perfectly determines the label in D_r and is easier to learn than digit shape. In contrast, models trained on the full dataset D = D_r ⊔ D_f must rely on shape, since color is no longer predictive. For evaluation, we relabel training images by color and assess unlearning via color-label accuracy, testing whether the unlearning methods can collapse the original model into a pure color classifier. Results exhibit a bimodal distribution across trials, as each method must transition from an initial model that classifies digits perfectly to one that achieves the same retain accuracy using only color. When this transition fails, the model often reverts to digit-based predictions, leading to high variance. To reflect this behavior robustly, Table 2 reports the median color accuracy over 5 trials, along with the range of the central 3 values. We note that MinNorm-OG consistently performs best.
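The two evaluation axes of Figure 2 are simple to compute from model outputs; the sketch below is an illustrative numpy rendering (function names and array shapes are our own, not the paper's code):

```python
import numpy as np

def retain_quality(pred_labels, true_labels):
    """Accuracy on gray test images (x-axis of Figure 2; higher is better)."""
    return float(np.mean(np.asarray(pred_labels) == np.asarray(true_labels)))

def forget_quality(pred_gray_probs):
    """MSE between predicted P(gray) and the target of 1 over all inputs
    (y-axis of Figure 2; lower is better)."""
    p = np.asarray(pred_gray_probs, dtype=float)
    return float(np.mean((p - 1.0) ** 2))
```

The ground truth unlearned model, which always predicts gray with probability 1, attains a forget quality of exactly 0.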
6 Conclusion We proposed a new unlearning framework under overparameterization by seeking the simplest solution consistent with the retain set. We proved guarantees on solving the exact unlearning problem through a tractable relaxed formulation. A practical implementation of our framework outperformed baselines, as the simplest solution aligns with unlearning goals and removes artifacts unrelated to the retain set. While our theoretical guarantees open the door for unlearning analysis beyond the underparameterized setting, we focused on model classes like linear networks and two-layer perceptrons. We naturally aim to analyze unlearning in more complex settings like deep networks in future work, as well as experiment within broader domains at larger scale. Acknowledgments This work was supported in part by NSF Grants 2019844, 2107037, and 2112471, ONR Grant N00014-19-1-2566, the Machine Learning Lab (MLL) at UT Austin, the NSF AI Institute for Foundations of Machine Learning (IFML), and the Wireless Networking and Communications Group (WNCG) Industrial Affiliates Program. We are grateful for computing support on the Vista GPU Cluster through the Center for Generative AI (CGAI) and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. References [BNLGG22] J. Bae, N. Ng, A. Lo, M. Ghassemi, and | https://arxiv.org/abs/2505.22601v1 |
R. B. Grosse. "If influence functions are the answer, then what is the question?" Advances in Neural Information Processing Systems 35 (2022), pp. 17953–17967 (pages 2, 3, 5, 6). [BCCJTZLP21] L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot. "Machine Unlearning". In: 2021 IEEE Symposium on Security and Privacy (SP). 2021, pp. 141–159. doi: 10.1109/SP40001.2021.00019 (page 3). [CY15] Y. Cao and J. Yang. "Towards making systems forget with machine unlearning". In: 2015 IEEE Symposium on Security and Privacy. IEEE. 2015, pp. 463–480 (page 2). [CCP18] CCPA. California Consumer Privacy Act of 2018 (CCPA). 2018 (page 2). [CZYZ24] H. Chen, T. Zhu, X. Yu, and W. Zhou. "Machine unlearning via null space calibration". In: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence. 2024, pp. 358–366 (page 3). [CS23] R. Chourasia and N. Shah. "Forget unlearning: Towards true data-deletion in machine learning". In: International Conference on Machine Learning. PMLR. 2023, pp. 6028–6073 (pages 3, 5, 10, 11). [FAML20] M. Farajtabar, N. Azizan, A. Mott, and A. Li. "Orthogonal gradient descent for continual learning". In: International Conference on Artificial Intelligence and Statistics. PMLR. 2020, pp. 3762–3773 (pages 3, 28). [GRBTKDYZA24] I. O. Gallegos, R. A. Rossi, J. Barrow, M. M. Tanjim, S. Kim, F. Dernoncourt, T. Yu, R. Zhang, and N. K. Ahmed. "Bias and fairness in large language models: A survey". Computational Linguistics 50.3 (2024), pp. 1097–1179 (page 2). [GDP16] GDPR. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation). 2016 (page 2). [GKKMSZ23] B. Ghazi, P. Kamath, R. Kumar, P. Manurangsi, A. Sekhari, and C. Zhang. "Ticketed learning–unlearning schemes".
In: The Thirty Sixth Annual Conference on Learning Theory. PMLR. 2023, pp. 5110–5139 (page 3). [GNG21] L. Graves, V. Nagisetty, and V. Ganesh. "Amnesiac machine learning". Proceedings of the AAAI Conference on Artificial Intelligence 35.13 (2021), pp. 11516–11524 (pages 3, 5, 10). [GGHV20] C. Guo, T. Goldstein, A. Hannun, and L. Van Der Maaten. "Certified data removal from machine learning models". In: Proceedings of the 37th International Conference on Machine Learning. 2020, pp. 3832–3842 (pages 2, 3, 5, 6). [Ham74] F. R. Hampel. "The influence curve and its role in robust estimation". Journal of the American Statistical Association 69.346 (1974), pp. 383–393 (page 3). [HMRT22] T. Hastie, A. Montanari, S. Rosset, and R. J. Tibshirani. "Surprises in high-dimensional ridgeless least squares interpolation". Annals of Statistics 50.2 (2022), p. 949 (pages 5, 11). [HZRS16] K. He, X. Zhang, S. Ren, and J. Sun. "Deep residual learning for image recognition". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 770–778 (pages 32, 41). [HRGV24] T. Hoang, S. Rana, S. Gupta, and S. Venkatesh. "Learn to unlearn for deep neural networks: Minimizing unlearning interference with gradient projection". In: Proceedings of the IEEE/CVF Winter Conference
on Applications of Computer Vision. 2024, pp. 4819–4828 (page 3). [JBVRCCH25] X. Jin, Z. Bu, B. Vinzamuri, A. Ramakrishna, K.-W. Chang, V. Cevher, and M. Hong. "Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate". In: Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Ed. by L. Chiruzzo, A. Ritter, and L. Wang. Albuquerque, New Mexico: Association for Computational Linguistics, Apr. 2025, pp. 11278–11294. isbn: 979-8-89176-189-6 (page 3). [Kri09] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Tech. rep. University of Toronto, 2009. url: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (pages 11, 32, 41). [KTHT23] M. Kurmanji, P. Triantafillou, J. Hayes, and E. Triantafillou. "Towards unbounded machine unlearning". Advances in Neural Information Processing Systems 36 (2023), pp. 1957–1987 (pages 3, 5, 10). [LCB10] Y. LeCun, C. Cortes, and C. J. Burges. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/. 2010 (pages 11, 32, 41). [LH19] I. Loshchilov and F. Hutter. "Decoupled Weight Decay Regularization". In: International Conference on Learning Representations. 2019 (page 26). [MFSLK24] P. Maini, Z. Feng, A. Schwarzschild, Z. C. Lipton, and J. Z. Kolter. "TOFU: A Task of Fictitious Unlearning for LLMs". In: First Conference on Language Modeling. 2024 (page 3). [NRS21] S. Neel, A. Roth, and S. Sharifi-Malvajerdi. "Descent-to-delete: Gradient-based methods for machine unlearning". In: Algorithmic Learning Theory. PMLR. 2021, pp. 931–962 (pages 3, 5, 10). [PDLKSN25] M. Pawelczyk, J. Z. Di, Y. Lu, G. Kamath, A. Sekhari, and S. Neel. "Machine Unlearning Fails to Remove Data Poisoning Attacks". In: The Thirteenth International Conference on Learning Representations. 2025 (page 3). [RWJCBVCHG25a] A.
Ramakrishna, Y. Wan, X. Jin, K.-W. Chang, Z. Bu, B. Vinzamuri, V. Cevher, M. Hong, and R. Gupta. "Lume: LLM unlearning with multitask evaluations". arXiv preprint arXiv:2502.15097 (2025) (page 11). [RWJCBVCHG25b] A. Ramakrishna, Y. Wan, X. Jin, K.-W. Chang, Z. Bu, B. Vinzamuri, V. Cevher, M. Hong, and R. Gupta. "SemEval-2025 Task 4: Unlearning sensitive content from large language models". arXiv preprint arXiv:2504.02883 (2025) (page 11). [RSSZ07] S. Rosset, G. Swirszcz, N. Srebro, and J. Zhu. "l1 regularization in infinite dimensional feature spaces". In: Learning Theory: 20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA, June 13–15, 2007. Proceedings 20. Springer. 2007, pp. 544–558 (pages 8, 9). [SAKS21] A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh. "Remember what you want to forget: Algorithms for machine unlearning". Advances in Neural Information Processing Systems 34 (2021), pp. 18075–18086 (pages 2, 3, 5, 6). [SLHMZHLZSZ25] W. Shi, J. Lee, Y. Huang, S. Malladi, J. Zhao, A. Holtzman, D. Liu, L. Zettlemoyer, N. A. Smith, and C. Zhang. "MUSE: Machine Unlearning Six-Way Evaluation for Language Models". In: The Thirteenth International Conference on Learning Representations. 2025 (page 11). [YSJ19] C. Yun, S. Sra, and A. Jadbabaie. "Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity". Advances in Neural Information Processing Systems 32 (2019) (page
8). [ZLBM24] R. Zhang, L. Lin, Y. Bai, and S. Mei. "Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning". In: First Conference on Language Modeling. 2024 (pages 3, 11, 27).

Appendix

A Minimum Norm Solutions to Linear Regression

Here we prove various properties of minimum norm solutions to linear regression problems which we later use for our unlearning results. Following the notation in Section 2, we consider the full n-sample dataset D = {(x_i, y_i)}_{i=1}^n with sample inputs x_i ∈ R^m and outputs y_i ∈ R. We consider training a linear model f(θ, x) = θ⊤x parameterized by θ ∈ R^m. We work within the overparameterized setting, so we assume m > n. Define the span of the input vectors S = span{x | (x, y) ∈ D}, and assume dim(S) = n so the regression problem is realizable. Consider solving the following problem for finding the linear regression solution with minimum ℓ_2 norm:

θ* = argmin_θ ∥θ∥_2  s.t.  f(θ, x) = y ∀(x, y) ∈ D

Let X ∈ R^{n×m} be the wide matrix whose ith row is x_i⊤, and let y ∈ R^n be the vector whose ith element is y_i. Then we can write an equivalent problem in matrix form:

θ* = argmin_θ (1/2)∥θ∥_2^2  s.t.  y = Xθ    (13)

We can then characterize the solution to the above problem relative to the constraint set.

Lemma 3. θ* is the unique vector in row(X) which is feasible for (13).

Proof. The objective (13) is a convex objective with linear constraints which is bounded from below by 0 and has a non-empty feasible set. Thus, the KKT conditions are necessary and sufficient for optimality. We now derive the solution λ* ∈ R^n to the dual problem.

min_θ (1/2)∥θ∥_2^2  s.t.  y = Xθ
  = min_θ max_{λ∈R^n} (1/2)∥θ∥_2^2 + λ⊤(y − Xθ)
  = max_λ min_θ (1/2)∥θ∥_2^2 + λ⊤(y − Xθ)
  = max_λ (1/2)∥X⊤λ∥_2^2 + λ⊤(y − XX⊤λ)  s.t.  θ = X⊤λ
  = max_λ −(1/2)∥X⊤λ∥_2^2 + λ⊤y  s.t.  θ = X⊤λ
  ⟹ XX⊤λ* = y and θ* = X⊤λ*    (14)

Thus the primal solution θ* must be of the form X⊤λ* ∈ row(X). To show uniqueness, consider θ*_1, θ*_2 ∈ row(X) that are both feasible for (13). Then,

y = Xθ*_1 = Xθ*_2 ⟹ X(θ*_1 − θ*_2) = 0 ⟹ θ*_1 − θ*_2 ∈ ker(X).
But since row(X) is a subspace, θ*_1, θ*_2 ∈ row(X) implies θ*_1 − θ*_2 ∈ row(X). Further, row(X) = ker(X)⊥. Thus,

θ*_1 − θ*_2 ∈ ker(X) ∩ ker(X)⊥ = {0} ⟹ θ*_1 = θ*_2.

Using the same analysis, we can characterize the entire feasible set in terms of θ*.

Lemma 4. The feasible set of (13) is {θ | y = Xθ} = θ* + ker(X).

Proof. Let θ′ satisfy y = Xθ′. Then X(θ′ − θ*) = 0, so θ′ − θ* ∈ ker(X). To show the converse, take any z ∈ ker(X). Then X(θ* + z) = Xθ* + Xz = Xθ* = y.

Using this characterization of θ* and the feasible set, we can cleanly understand how to achieve minimum norm solutions over just a subset of the constraints given a feasible point. This is central to our unlearning setup in later sections.

Lemma 5. Consider any subset D_r ⊆ D, and define θ*_r as the linear regression solution over just D_r with minimum norm:

θ*_r = argmin_θ ∥θ∥_2  s.t.  f(θ, x) = y ∀(x, y) ∈ D_r    (15)

Let S_r = span{x | (x, y) ∈ D_r}. Then θ*_r = P_{S_r}(θ*).

Proof. θ* already satisfies the feasibility constraint over the whole dataset D, so it must be feasible for (15). Applying Lemmas 3 and 4 to the minimum norm problem (15) over just D_r, we must have that θ*_r ∈ S_r and θ* = θ*_r + z for some z ∈ S_r⊥. Then,

P_{S_r}(θ*) = P_{S_r}(θ*_r + z) = P_{S_r}(θ*_r) = θ*_r.

B Loss Minimization
Does not Protect Against Data Leakage

The following example concretely demonstrates how certain minimizers of the retain set loss do not align with the intended goals of unlearning. Recall the unlearning problem for linear regression discussed in Section 4.1. In this case, we use the linear model f(θ, x) = θ⊤x parameterized by θ ∈ R^m. Further suppose the original dataset D = {(x_i, y_i)}_{i=1}^n has n samples with x_i ∈ R^m, y_i ∈ R. Denote the subspace S = span{x | (x, y) ∈ D}, and assume dim(S) = n so the problem is realizable. We work in the overparameterized setting where m > n, and the objective function is defined as the mean squared error

J(θ; D) = (1/n) Σ_{(x,y)∈D} (y − θ⊤x)^2

Consider when the learning algorithm A runs gradient descent on the loss, initialized at 0. Due to the overparameterization, J has an infinite number of minimizers which each achieve 0 loss. However, A is biased towards a specific minimizer, which is the unique minimizer of the loss on the span of the input samples, denoted as the subspace S.

Proposition 2. Let A_k(D) be a learning algorithm which runs k steps of gradient descent on J(θ; D) initialized at 0, and define S = span{x | (x, y) ∈ D}. If lim_{k→∞} A_k(D) converges to some θ*, then

{θ*} = S ∩ argmin_θ J(θ; D)

Proof. We write the loss function J(θ; D) in vector form J(θ; D) = (1/n)∥y − Xθ∥_2^2, where the ith entry of y ∈ R^n is y_i and the ith row of X ∈ R^{n×m} is x_i⊤. Note that the gradient of the loss for any value of θ is contained in the subspace S, as ∇_θ J(θ; D) = (2/n) X⊤(Xθ − y) and im(X⊤) = S. Further, the initial iterate of A_k is 0 ∈ S. Since subspaces are closed under addition, every iterate of gradient descent on J(θ; D) starting from 0 must be contained in S. Thus, if A_k(D) converges, it must converge to a zero of the gradient of the loss, and this point must also be in S. Since the loss is convex, this point must be a loss minimizer.

In this case, the original training solution θ*, which results from simply performing gradient descent, interpolates all of D and lies in S, the span of the input samples in D.
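Both facts above, that gradient descent from zero never leaves row(X), and that projecting the resulting min-norm solution onto the retain span recovers the retain-set min-norm solution (Lemma 5), can be checked numerically. The following is an illustrative numpy sketch with randomly generated data (dimensions, seed, and step size are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_r = 5, 20, 3                       # overparameterized: m > n
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)

# Gradient descent on J(theta; D) initialized at 0 (Proposition 2).
theta = np.zeros(m)
step = 1.0 / ((2 / n) * np.linalg.norm(X, ord=2) ** 2)  # 1 / Lipschitz constant
for _ in range(20000):
    theta -= step * (2 / n) * X.T @ (X @ theta - y)      # gradient lies in S

P_S = X.T @ np.linalg.pinv(X @ X.T) @ X    # projector onto S = row(X)
assert np.allclose(P_S @ theta, theta)                        # iterate stayed in S
assert np.allclose(theta, np.linalg.pinv(X) @ y, atol=1e-6)   # min-norm interpolator

# Lemma 5: projecting theta* onto the span of the retain inputs recovers the
# minimum norm solution over just the retain subset D_r.
Xr, yr = X[:n_r], y[:n_r]
P_Sr = Xr.T @ np.linalg.pinv(Xr @ Xr.T) @ Xr
assert np.allclose(P_Sr @ theta, np.linalg.pinv(Xr) @ yr, atol=1e-6)
```

The projector identities rely on X and Xr having full row rank, which holds almost surely for Gaussian data in this regime.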
Then, given an unlearning request to forget any subset D_f from D, θ* itself is a minimizer of the loss on the resulting retain set D_r = D \ D_f. However, since θ* ∈ S reveals information about all the input samples in D, it necessarily leaks information about the samples in D_f. Thus, even though θ* is a valid minimizer of J(θ; D_r), it is not an acceptable unlearning solution.

C Proofs

C.1 Proof of Theorem 2.1

We assume f(θ*, ·) interpolates all of D, so f(θ*, x) = y for all (x, y) ∈ D, and that the sample-wise loss L(θ, x, y) is minimized when f(θ, x) = y. Thus, θ* must minimize each of the sample-wise losses L(θ, x, y) for all (x, y) ∈ D. Therefore, ∇_θ L(θ*, x, y) = 0 for all (x, y) ∈ D. Since J(θ*; D_r) = (1/|D_r|) Σ_{(x,y)∈D_r} L(θ, x, y) and J(θ*; D_f) = (1/|D_f|) Σ_{(x,y)∈D_f} L(θ, x, y), we must have that ∇_θ J(θ*; D_r) = ∇_θ J(θ*; D_f) = 0. Then, if M_LG is any loss-gradient unlearning method, the update rule must be of the form

M(A, I_r, A(D), D_f) = θ* − P_r ∇_θ J(θ*; D_r) + P_f ∇_θ J(θ*; D_f) + ξ,

where P_r and P_f are positive semi-definite matrices and ξ is a zero-mean random variable. Applying the fact that ∇_θ J(θ*; D_r) = ∇_θ J(θ*; D_f) = 0 to the update of M_LG gives the desired result:

M_LG(A, I_r, A(D), D_f) = θ* + ξ

C.2 Proof of Theorem 4.1

Recall we have a feasible vector θ* such that θ*⊤x = y for all (x,
y) ∈ D, and we want to recover θ*_r, the minimum ℓ_2-norm solution over just a subset D_r ⊆ D:

θ*_r = argmin_θ ∥θ∥_2  s.t.  θ⊤x = y ∀(x, y) ∈ D_r    (16)

Consider solving the relaxed unlearning problem (7) for ˜R(θ) = ∥θ∥_2:

˜∆ = argmin_∆ ∥θ* + ∆∥_2  s.t.  ∆ ⊥ x ∀(x, y) ∈ D_r

Define S_r = span{x | (x, y) ∈ D_r} and write the equivalent problem:

˜∆ = argmin_{∆∈S_r⊥} (1/2)∥θ* + ∆∥_2^2

By first-order optimality, θ* + ˜∆ ∈ S_r, so we must have that

˜∆ = −P_{S_r⊥}(θ*)

Thus the updated unlearned vector is θ* + ˜∆ = θ* − P_{S_r⊥}(θ*) = P_{S_r}(θ*). Then, P_{S_r}(θ*) = θ*_r by Lemma 5.

C.3 Proof of Lemma 1

Recall that in this case we are interested in minimizing R(θ) = ∥w(θ)∥_2, where w(θ) = A_1⊤···A_{L−1}⊤c returns the effective linear predictor parameterized by θ. We first show that ˜∆ is feasible for the relaxed problem (7). Firstly, ˜∆ is zero in all entries except those corresponding to the perturbation of A_1, so we only need to ensure that ˜∆_{A_1} is orthogonal to ∇_{A_1} f(θ*, x) for each (x, y) ∈ D_r. Recall we denote the retain set input space as S_r = span{x | (x, y) ∈ D_r}, and ˜∆_{A_1} is defined as

˜∆_{A_1} = −∥A_2*⊤···A_{L−1}*⊤c*∥_2^{−2} A_2*⊤···A_{L−1}*⊤c* P_{S_r⊥}(w(θ*))⊤.

Further, the gradients are computed as

∇_{A_1} f(θ*, x) = A_2*⊤···A_{L−1}*⊤c* x⊤

Then for any (x, y) ∈ D_r,

⟨˜∆_{A_1}, ∇_{A_1} f(θ*, x)⟩ = tr{˜∆_{A_1}⊤ ∇_{A_1} f(θ*, x)}
  = tr{∇_{A_1} f(θ*, x) ˜∆_{A_1}⊤}
  = −∥A_2*⊤···A_{L−1}*⊤c*∥_2^{−2} tr{A_2*⊤···A_{L−1}*⊤c* x⊤ P_{S_r⊥}(w(θ*)) c*⊤A_{L−1}*···A_2*}
  = 0,

where the last step follows from the fact that the inner term x⊤P_{S_r⊥}(w(θ*)) = 0, since x ∈ D_r implies x ∈ S_r by definition. We now show that θ* + ˜∆ achieves the optimal unlearning solution. By construction of ˜∆, the only entries of θ* that are perturbed are those which correspond to A_1.
Thus, we compute the effective linear predictor after the perturbation:

w(θ* + ˜∆) = w(θ*) + ˜∆_{A_1}⊤ A_2*⊤···A_{L−1}*⊤c*
  = w(θ*) − ∥A_2*⊤···A_{L−1}*⊤c*∥_2^{−2} P_{S_r⊥}(w(θ*)) c*⊤A_{L−1}*···A_2* A_2*⊤···A_{L−1}*⊤c*
  = w(θ*) − ∥A_2*⊤···A_{L−1}*⊤c*∥_2^{−2} P_{S_r⊥}(w(θ*)) (A_2*⊤···A_{L−1}*⊤c*)⊤(A_2*⊤···A_{L−1}*⊤c*)
  = w(θ*) − P_{S_r⊥}(w(θ*))
  = P_{S_r}(w(θ*))

Since the linear predictor w(θ*) already interpolated D, P_{S_r}(w(θ*)) must be the minimum norm linear predictor over D_r by Lemma 5. Thus, the effective predictor of the perturbed parameters w(θ* + ˜∆) solves the exact unlearning problem (4) when R(θ) = ∥w(θ)∥_2, so θ* + ˜∆ achieves the optimal unlearning solution.

C.4 Proof of Theorem 4.2

Recall for this theorem we analyze R(θ) = ∥w(θ)∥_2. Let ˜∆ be the perturbation which satisfies the conditions in Lemma 1. Then ˜∆ is feasible for the relaxed problem (7), and further θ* + ˜∆ solves the exact unlearning problem (4). Now, let ∆* minimize the relaxed problem (7) for the ˜R defined in (9). Then, because ˜R ensures that all elements of ∆* which do not correspond to A_1 are zero, we must have that for any (x, y) ∈ D_r:

w(θ* + ∆*)⊤x = c*⊤A_{L−1}*···A_2*(A_1* + ∆*_{A_1})x
  = y + c*⊤A_{L−1}*···A_2*∆*_{A_1}x
  = y + ⟨∆*, ∇_θ f(θ*, x)⟩
  = y,

where the last equality follows from the feasibility of ∆* for (7). Thus, θ* + ∆* interpolates D_r, so θ* + ∆* is feasible for the exact unlearning problem (4). We now show this point is also optimal for (4). Since θ* + ˜∆ solves the exact unlearning problem (4) and θ* + ∆* is another feasible point, we must have that R(θ* + ˜∆) ≤ R(θ* + ∆*). Further, both ˜∆ and ∆* are feasible for (7) and ∆* is defined as the solution to (7), so we must have that ˜R(θ* + ∆*) ≤ ˜R(θ* + ˜∆). But since both ˜∆ and ∆* are
non-zero only in the entries corresponding to A_1, applying R and ˜R yields the same value:

R(θ* + ˜∆) = ˜R(θ* + ˜∆) and R(θ* + ∆*) = ˜R(θ* + ∆*)

Thus, R(θ* + ∆*) = R(θ* + ˜∆), so θ* + ∆* achieves the optimal objective value of (4). Since we established feasibility and optimality, θ* + ∆* must solve (4).

C.4.1 Necessity of the Additional Regularizer ˆR for Theorem 4.2

In this section, we show that minimizing just R over the relaxed constraints, i.e. solving (6), for R which measures the linear network predictor norm does not yield the exact unlearning solution. Because there is no control over the size and direction of the perturbation ∆, we can construct a simple example where ∆ satisfies just the linearization of the data interpolation constraints but the updated network θ* + ∆ no longer interpolates D_r.

Consider a dataset of two samples D = {(e_1, 1), (e_2, 1)}, where e_i ∈ R^m is the ith standard basis vector, for any m ≥ 3. Consider the original 2-layer interpolating network trained on D defined by parameters θ* = (c*; vec(A*)), where c* = e_1 + e_2 ∈ R^m and A* is the m×m identity matrix A* = I_m, so f(θ*, x) = c*⊤A*x = (e_1 + e_2)⊤x. We set D_r = {(e_1, 1)} and D_f = {(e_2, 1)}, and define the perturbation variable ∆ = (∆_c; vec(∆_A)). Translating the constraints of (6) to this specific problem instance, we have that

∆_c⊤e_1 + tr{∆_A⊤(e_1 + e_2)e_1⊤} = 0

We then select the values ∆_c = −e_3 and ∆_A = e_3e_1⊤ − e_2e_2⊤ − e_3e_3⊤. It is easy to see that these choices satisfy the above constraint. Further, they achieve exact minimization of (6). We show below that the resulting network's predictor is (A* + ∆_A)⊤(c* + ∆_c) = 0.

R(θ* + ∆) = ∥(A* + ∆_A)⊤(c* + ∆_c)∥_2
  = ∥(I + e_3e_1⊤ − e_2e_2⊤ − e_3e_3⊤)⊤(e_1 + e_2 − e_3)∥_2
  = ∥(I + e_1e_3⊤ − e_2e_2⊤ − e_3e_3⊤)(e_1 + e_2 − e_3)∥_2
  = ∥e_1 + e_2 − e_3 − e_2 − e_1 + e_3∥_2
  = ∥0∥_2 = 0

Thus, the updated network which solves (6) predicts the constant function 0 for all inputs x, as f(θ* + ∆, x) = ((A* + ∆_A)⊤(c* + ∆_c))⊤x = 0⊤x = 0.
This clearly does not interpolate D_r, and this example as a whole demonstrates that failing to control the size and direction of the drift term ∆ beyond just the linearized constraints does not lead to the exact unlearning solution.

C.5 Proof of Theorem 4.3

Denote the minimum ℓ_2-norm solution w(ˆθ*_r) to y = Xw as just w*_r for brevity. Using w*_r, we construct a solution to the exact unlearning problem (4) for R(θ) = ∥θ∥_2, which we restate below:

argmin_θ ∥θ∥_2  s.t.  w(θ)⊤x = y ∀(x, y) ∈ D_r

Expanding θ = (c; vec(A_1); ...; vec(A_{L−1})) into the sub-parameters, squaring the objective, and organizing (x, y) ∈ D_r into input data matrix X_r ∈ R^{|D_r|×d} and output vector y_r ∈ R^{|D_r|} gives an equivalent problem:

argmin_{c, A_1, ..., A_{L−1}} ∥c∥_2^2 + Σ_{ℓ=1}^{L−1} ∥A_ℓ∥_F^2  s.t.  y_r = X_r A_1⊤···A_{L−1}⊤c    (17)

Let c*, A_1*, ..., A_{L−1}* be a solution to (17). Then A_1*⊤···A_{L−1}*⊤c* interpolates D_r, so we must have that A_1*⊤···A_{L−1}*⊤c* = w*_r + z, where w*_r ∈ row(X_r) and z ∈ ker(X_r), by Lemma 4. Let P_{w*_r} = ∥w*_r∥_2^{−2} w*_r w*_r⊤ be the projection matrix onto span(w*_r). Then replacing A_1* with A_1*P_{w*_r} maintains feasibility, since P_{w*_r}⊤A_1*⊤···A_{L−1}*⊤c* = P_{w*_r}(w*_r + z) = w*_r, which is feasible by definition. Further, A_1*P_{w*_r} achieves a smaller objective function value, since

∥A_1*P_{w*_r}∥_F^2 = tr{A_1*P_{w*_r}P_{w*_r}A_1*⊤} = tr{P_{w*_r}A_1*⊤A_1*} ≤ ∥P_{w*_r}∥_2 ∥A_1*⊤A_1*∥_* = ∥A_1*∥_F^2.

The second equality follows from the cyclic property of trace and the fact that P_{w*_r} is both symmetric and idempotent, and the inequality is a generalized Hölder's inequality for matrices. Thus, replacing A_1* with
the rank-1 matrix A_1*P_{w*_r} must preserve optimality of any solution that contains A_1*. Write A_1*P_{w*_r} = λ_1v_1w*_r⊤ for some λ_1 ∈ R and v_1 ∈ R^{h_1} with ∥v_1∥_2 = 1. We can apply an analogous argument with the matrix P_{v_1}, which projects its input onto span(v_1), to show that any solution that contains A_2* must remain optimal with A_2* replaced by the rank-1 matrix A_2*P_{v_1}. Continuing this argument for each A_ℓ*, ℓ = 3, ..., L−1, as well as for c*, shows that we can search for a solution over a much smaller space. Specifically, for some λ_ℓ ∈ R and unit vectors v_ℓ ∈ R^{h_ℓ}, we can decompose c* and each A_ℓ* as

A_1* = λ_1v_1w*_r⊤
A_ℓ* = λ_ℓ v_ℓ v_{ℓ−1}⊤ for ℓ = 2, ..., L−1
c* = λ_L v_{L−1}

Then, (17) reduces to

min_{λ_i, v_ℓ} ∥λ_L v_{L−1}∥_2^2 + ∥λ_1v_1w*_r⊤∥_F^2 + Σ_{ℓ=2}^{L−1} ∥λ_ℓ v_ℓ v_{ℓ−1}⊤∥_F^2
  s.t.  (λ_1w*_r v_1⊤)(λ_2v_1v_2⊤)···(λ_{L−1}v_{L−2}v_{L−1}⊤)(λ_L v_{L−1}) = w*_r and ∥v_ℓ∥_2 = 1

= min_{λ_i} ∥w*_r∥_2^2 λ_1^2 + Σ_{ℓ=2}^{L} λ_ℓ^2  s.t.  λ_1λ_2···λ_L = 1    (18)

We perform a change of variables setting γ_i = λ_i^2 and enforcing γ_i > 0:

min_{γ_i>0} ∥w*_r∥_2^2 γ_1 + Σ_{ℓ=2}^{L} γ_ℓ  s.t.  γ_1γ_2···γ_L = 1    (19)

Define γ = (γ_1, ..., γ_L), objective function g(γ) = ∥w*_r∥_2^2 γ_1 + Σ_{ℓ=2}^{L} γ_ℓ, and constraint h(γ) = γ_1γ_2···γ_L − 1 = 0. By the AM-GM inequality, we have that for any feasible γ,

g(γ) ≥ L(∥w*_r∥_2^2 γ_1···γ_L)^{1/L} = L∥w*_r∥_2^{2/L},

where the last equality follows from the constraint h(γ) = 0. Define the feasible point γ* such that

γ* = (∥w*_r∥_2^{2(1−L)/L}, ∥w*_r∥_2^{2/L}, ..., ∥w*_r∥_2^{2/L}).

Then g(γ*) = L∥w*_r∥_2^{2/L} achieves the lower bound, so it must solve (19). Thus, the optimal values λ_1*, ..., λ_L* for (18) result from taking square roots of the γ_ℓ*. Then, the following values for the network parameters must be optimal for (17):

A_1* = ∥w*_r∥_2^{(1−L)/L} v_1w*_r⊤
A_ℓ* = ∥w*_r∥_2^{1/L} v_ℓ v_{ℓ−1}⊤ for ℓ = 2, ..., L−1
c* = ∥w*_r∥_2^{1/L} v_{L−1}.

C.6 Proof of Theorem 4.4

We prove the theorem using the following lemma; see the end of this section for its proof.

Lemma 6. For c ∈ R^h and a subspace G ⊆ R^h such that dim(G) = s, there exists ∆_c ∈ G⊥ such that ∥c + ∆_c∥_0 ≤ s, where the ℓ_0-"norm" ∥·∥_0 counts the number of non-zero elements.
Because ˆR does not allow any perturbation of A*, any solution to (12) must only perturb θ* in the entries corresponding to c*. Let s = dim(span{ϕ(A*x)}_{(x,y)∈D_r}). Note that by definition we have s ≤ |D_r|. Apply the lemma to c* and span{ϕ(A*x)}_{(x,y)∈D_r}, so that there exists ˜∆_c ∈ (span{ϕ(A*x)}_{(x,y)∈D_r})⊥ such that ∥c* + ˜∆_c∥_0 ≤ s. Define ˜∆ = (˜∆_c; 0). Then the network defined by θ* + ˜∆ has at most s active neurons, since any zero element of c* + ˜∆_c cannot contribute an active neuron. Further, {ϕ(A*x)}_{(x,y)∈D_r} = {∇_c f(θ*, x)}_{(x,y)∈D_r}, so the perturbation ˜∆ is feasible for the relaxed problem (7). But f is linear in c, so this perturbation must preserve the function value on D_r, since the constraints of the relaxed problem are tight when perturbing only c*. Thus, the resulting network defined by θ* + ˜∆ both interpolates D_r and has at most s = dim(span{ϕ(A*x)}_{(x,y)∈D_r}) active neurons. Note that this construction of ˜∆ satisfies the conditions of Lemma 2, so we do not include a separate proof for Lemma 2, since it is contained within the larger proof of the
theorem.

Proof of Lemma 6: Let the columns of some P ∈ R^{h×(h−s)} form a basis for G⊥, so that im(P) = G⊥. Consider the reduced column echelon form of P, denoted rcef(P) = ˜P. By definition, im(˜P) = im(P) = G⊥, so rank(˜P) = h − s, and thus each of the h − s columns of ˜P has a leading one. Let ˜p_i be the ith column of ˜P and let j_i denote the index of the leading one in ˜p_i for all i ∈ [h−s]. Let (˜p_i)_k denote the kth element of ˜p_i. By definition of the reduced column echelon form, we have that (˜p_i)_k = 0 for all k < j_i. Define

∆_c = Σ_{i=1}^{h−s} γ_i ˜p_i

for coefficients γ_i ∈ R defined as

γ_i = −(c* + Σ_{k=1}^{i−1} γ_k ˜p_k)_{j_i}

Since each ˜p_i is only non-zero in the indices j_i to h, we must have that (c* + ∆_c)_{j_i} = 0 for all i ∈ [h−s], so ∥c* + ∆_c∥_0 ≤ s.

C.7 Proof of Proposition 1

Consider any parameter vector θ = (c; vec(A)). Then for any input x, we can write f(θ, x) = Σ_{i=1}^h c_i ϕ(a_i⊤x), where c_i is the ith element of c and a_i⊤ is the ith row of A. Consider the updated parameters ˆθ = (c; vec(ˆA)) for ˆA = (1_{c≠0}1⊤) ⊙ A. Then

f(θ, x) = Σ_{i=1}^h c_i ϕ(a_i⊤x) = Σ_{i=1}^h c_i ϕ(1{c_i ≠ 0} a_i⊤x) = f(ˆθ, x),

where the second equality follows from the fact that we can set a_i to zero whenever c_i = 0, since that neuron does not contribute to the function output whenever c_i = 0. Further, changing a_i for any i where c_i = 0 does not change the number of active neurons, since if the ith neuron has c_i = 0, then this neuron can never be active no matter the value of a_i:

R(θ) = Σ_{i=1}^h 1{|c_i|∥a_i∥_2 > 0} = Σ_{i: c_i≠0} 1{a_i ≠ 0} = Σ_{i: c_i≠0} 1{ˆa_i ≠ 0} = R(ˆθ),

where ˆa_i⊤ is the ith row of ˆA. Lastly, since ˆa_i always equals 0 when c_i = 0, we must have that ˆA has at most R(ˆθ) nonzero rows.

D MinNorm-OG Algorithm

We derive the closed-form solution of (7) for the specific choice ˜R(θ + ∆) = ∥θ + ∆∥_2^2 + λ∥∆∥_2^2. Define the span of the model gradients over D_r as the subspace G_r = span{∇_θ f(θ, x)}_{(x,y)∈D_r} and consider any λ ≥ 0. We then solve the following problem:

˜∆ = argmin_∆ ∥θ + ∆∥_2^2 + λ∥∆∥_2^2  s.t.  ∆ ∈ G_r⊥.
(20)

This is a strongly convex problem over a linear constraint, so its solution ˜∆ is the unique point which satisfies the following first-order optimality condition: (1 + λ)˜∆ + θ ∈ G_r. Note that this is satisfied by the scaled projection ˜∆ = −(1/(1+λ)) P_{G_r⊥}(θ), which must then be the unique solution to (20).

E Experiments

We first standardize the notation for each algorithm. Throughout our experiments, we sweep over hyperparameters and report the best results for each algorithm, and we sweep related hyperparameters for each algorithm through the same set of values. For example, every algorithm has a learning rate which is selected by searching over the same set of values. We first define the hyperparameter names we use along with the algorithms they apply to.

Table 3: Hyperparameter definitions and their associated methods.

Symbol | Methods | Description
T | All | Number of epochs
η | All | Learning rate
λ_GA | NGP, Scrub | Loss ascent coefficient
λ_reg | NPO, Scrub, MinNorm-OG, Ridge | Regularization coefficient
σ | NGD | Gradient noise standard deviation
T_GD | Scrub, MinNorm-OG | Number of final descent epochs on retain set
γ_reg | MinNorm-OG, Ridge | Regularization coefficient decay rate
T_Proj | MinNorm-OG | Projection period
n_pert | MinNorm-OG | Subsample size to compute gradient space

E.1 Implementations

We now define the exact implementation of each method. Consider a batch of retain samples B_r and forget samples B_f, along with loss function J. For each method, we use the AdamW [LH19] optimizer with learning rate η on different effective loss functions. We express the loss functions below.

E.1.1 GD

J_GD(θ; B_r) = J(θ; B_r)

E.1.2 GA

J_GA(θ; B_f) = −J(θ; B_f)

E.1.3 NGD

J_NGD(θ; B_r) = J(θ; B_r) + θ⊤ξ, where ξ ∼ N(0, σ^2 I) is a zero-mean Gaussian random vector.

E.1.4 NGP

J_NGP(θ; B_r, B_f) = J(θ; B_r) − λ_GA J(θ; B_f)

E.1.5 NPO

Recall that θ* denotes the initial trained model parameters. Then, the NPO loss is

J_NPO(θ_0; B_f, λ_reg) = (1/|B_f|) Σ_{(x_f,y_f)∈B_f} (2/λ_reg) log(1 + (π_{θ_0}(y_f|x_f)/π_{θ*}(y_f|x_f))^{λ_reg}),

where π_θ(y_f|x_f) denotes the model's predicted probability of class y_f for input x_f under parameter vector θ. Note that this is equivalent to setting the parameter β in the original NPO paper [ZLBM24] to λ_reg.

E.1.6 Scrub

The Scrub loss decomposes into different terms depending on the epoch. Let π_θ(y|x) denote the model's predicted distribution over classes y for input x under parameter vector θ, and define KL(·∥·) as the Kullback-Leibler divergence. Recall θ* denotes the initial trained model parameters, and denote the current epoch t ∈ {0, ..., T−1}. Then the Scrub loss J_Scrub(θ; B_r, B_f, λ_reg, λ_GA, t) is defined as:

J_Scrub(θ; B_r, B_f, λ_reg, λ_GA, t) =
  J(θ; B_r) + (λ_reg/|B_r|) Σ_{(x_r,y_r)∈B_r} KL(π_{θ*}(y|x_r) ∥ π_θ(y|x_r))  if t even or t ≥ T − T_GD
  −(λ_GA/|B_f|) Σ_{(x_f,y_f)∈B_f} KL(π_{θ*}(y|x_f) ∥ π_θ(y|x_f))  otherwise

E.1.7 MinNorm-OG

For each batch B_r, we always perform a loss descent step:

J_MinNorm-OG(θ; B_r) = J(θ; B_r)

Following the AdamW update for this loss, we then (depending on the epoch) perform the model update corresponding to solving the relaxed unlearning problem (7) for ˜R(θ + ∆) = ∥θ + ∆∥_2^2 + λ∥∆∥_2^2, where λ is a saved parameter of the algorithm. We use the parameters T_Proj and T_GD to determine which epochs to perform the unlearning update.
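The unlearning update itself is the closed-form projection derived in Appendix D. Below is a minimal numpy sketch of that single step (the function name and the toy identity-matrix gradients are illustrative; AdamW and automatic differentiation are omitted):

```python
import numpy as np

def minnorm_og_update(theta, grads, lam):
    """One unlearning step: theta + delta with
    delta = -(1/(1+lam)) * P_{G'^perp}(theta),
    where G' is the span of the sampled per-example model gradients."""
    G = np.atleast_2d(grads)                  # rows: gradients on the subsample B'_r
    P_G = G.T @ np.linalg.pinv(G @ G.T) @ G   # projector onto G'
    P_perp_theta = theta - P_G @ theta        # P_{G'^perp}(theta)
    return theta - P_perp_theta / (1.0 + lam)

# With lam = 1, the component of theta orthogonal to the gradient span is
# halved, while the component inside the span is untouched.
theta = np.ones(10)
grads = np.eye(10)[:3]
out = minnorm_og_update(theta, grads, lam=1.0)
```

Because the update only moves θ within G'⊥, the model's first-order predictions along the sampled gradient directions are preserved exactly.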
For the last T_GD epochs, we only perform the descent step and skip the unlearning update, similar to Scrub. In the first T − T_GD epochs, we perform the unlearning update every T_Proj epochs. We initialize λ = 1/λ_reg − 1, and each time we perform the unlearning update, we grow the value of λ through the update λ ← (λ + 1)/γ_reg − 1 using the decay factor γ_reg ∈ [0, 1]. For our algorithm we only use values of λ_reg such that λ_reg ≤ 1. The update for λ leads to solutions to the relaxed unlearning problem which result in more conservative perturbations. To interpret these values, first recall that we solve the relaxed unlearning problem over a subsample of each batch B′_r ⊆ B_r where |B′_r| = n_pert. For convenience, define the gradient subspace G′_r = span{∇_θ f(θ, x)}_{(x,y) ∈ B′_r}. As we showed in Appendix D, for any value of λ, the optimal perturbation is then ∆̃ = −(1/(1 + λ)) P_{G′_r^⊥}(θ). Thus, the initial value λ = 1/λ_reg − 1 leads to the perturbation ∆̃ = −λ_reg P_{G′_r^⊥}(θ). Further, the coefficient update λ = (λ′ + 1)/γ_reg − 1 leads to a more conservative unlearning perturbation ∆̃ = −γ_reg (1/(1 + λ′)) P_{G′_r^⊥}(θ), as it is down-weighted by γ_reg. Thus, λ_reg is the initial strength of the perturbation, normalized to the range [0, 1], and γ_reg represents a multiplicative decay of this strength through each update to λ. We formally write the unlearning update at epoch t as follows, where θ_0 is the
current parameter vector, θ_new is the updated vector, and mod denotes the modulo operation.

if t mod T_Proj ≠ 0 or t ≥ T − T_GD:
    θ_new = θ_0
else:
    ∆̃ = argmin_{∆ ∈ G′_r^⊥} ∥θ_0 + ∆∥₂² + λ∥∆∥₂²
    θ_new = θ_0 + ∆̃
    λ ← (λ + 1)/γ_reg − 1

Gradients for Classification. We make a special note of how we compute the gradient subspace G′_r for classification tasks. At the parameter value θ_0, the model prediction is f(θ_0, x) = argmax_y z_{θ_0}(y | x), where z_θ(y | x) denotes the model's unnormalized logits over the classes y for input x for parameter vector θ. This is not a continuous function of θ, so we cannot compute its gradient directly. However, following prior works [FAML20], we use the gradient ∇_θ (z_{θ_0}(y | x))_j, where j = f(θ_0, x) is the model's predicted class for input x. In other words, we take the gradient of the unnormalized logits at the index of the maximum value, where we do not treat the index as a function of θ.

E.1.8 Ridge

We again store a regularization weighting λ which we initialize to λ = λ_reg. We define the ridge loss as J_Ridge(θ; B_r) = J(θ; B_r) + λ∥θ∥₂². After updating the parameter vector using this loss on each batch, we update λ as λ ← γ_reg λ. Recall γ_reg is always set within the range [0, 1], so the update to λ approximates the limit as λ goes to 0 as we iterate through the epochs. This attempts to recover the minimum-norm, or ridgeless, training loss minimizer.

E.2 Data Poisoning

We train a 3-layer multilayer perceptron with a hidden dimension of 300 using the sigmoid linear unit (SiLU) activation function. For each seed, we randomly sample 50 retain set points (x_r, y_r) ∈ D_r with y_r = sin(x_r) and 5 forget set points (x_f, y_f) ∈ D_f with y_f = 1.5, over the input domain X = [−15, 15] ⊆ R. We initially train the poisoned model on all the samples using the AdamW optimizer with a learning rate of 10⁻³ over 100,000 epochs.

Table 4: Data Poisoning experiment results, measured as the sup-norm distance between the retain set trend y = sin(x) and the outputs of the unlearning algorithms (smaller is better).
We report medians over 20 trials, along with the range of the central 10 values.

| Epochs | GA | GD | NGD | NGP | MinNorm-OG | Ridge |
|---|---|---|---|---|---|---|
| 10 | 3.56 (2.34, 6.52) | 3.38 (2.62, 7.48) | 3.63 (2.71, 7.56) | 3.70 (2.28, 7.37) | 1.89 (1.10, 6.02) | 3.38 (2.62, 7.48) |
| 100 | 27.7 (20.6, 36.2) | 1.85 (1.51, 2.76) | 2.54 (1.56, 6.09) | 1.81 (1.41, 2.93) | 1.07 (0.62, 1.32) | 1.67 (1.37, 3.31) |
| 1000 | 1700 (1200, 2600) | 1.58 (1.04, 2.43) | 1.35 (0.93, 3.47) | 2.29 (1.54, 5.07) | 0.84 (0.64, 1.24) | 1.29 (0.87, 2.12) |

Table 5: Hyperparameter settings for each entry in Table 4. Blank entries indicate that the hyperparameter is not applicable to the corresponding method.

| Epochs | Method | η | λ_GA | λ_reg | σ | T_GD | γ_reg | T_Proj | n_pert |
|---|---|---|---|---|---|---|---|---|---|
| 10 | GA | 1e-4 | | | | | | | |
| 10 | GD | 1e-4 | | | | | | | |
| 10 | NGD | 1e-2 | | | 0.5 | | | | |
| 10 | NGP | 1e-4 | 1.0 | | | | | | |
| 10 | MinNorm-OG | 1e-3 | | 0.3 | | 0 | 0.3 | 1 | 50 |
| 10 | Ridge | 1e-4 | | 1.0 | | | 0.3 | | |
| 100 | GA | 1e-4 | | | | | | | |
| 100 | GD | 1e-2 | | | | | | | |
| 100 | NGD | 1e-2 | | | 1.0 | | | | |
| 100 | NGP | 1e-2 | 1e-3 | | | | | | |
| 100 | MinNorm-OG | 1e-3 | | 0.1 | | 50 | 0.9 | 1 | 50 |
| 100 | Ridge | 1e-2 | | 3.0 | | | 0.6 | | |
| 1000 | GA | 1e-4 | | | | | | | |
| 1000 | GD | 1e-2 | | | | | | | |
| 1000 | NGD | 1e-2 | | | 0.1 | | | | |
| 1000 | NGP | 1e-2 | 1e-3 | | | | | | |
| 1000 | MinNorm-OG | 1e-2 | | 0.3 | | 0 | 0.3 | 200 | 50 |
| 1000 | Ridge | 1e-2 | | 3.0 | | | 1.0 | | |

Given these poisoned models, we apply each of
the unlearning algorithms over a sweep of hyperparameters and evaluate the output θ of each unlearning method by measuring the deviation from the retain set trend, given by sup_{x ∈ X} |f(θ, x) − sin(x)|. We fix the number of epochs for each algorithm and allow full data access, so each method has access to all of D_r during unlearning. We repeat the entire process over 20 trials. For the number of unlearning epochs T ∈ {10, 100, 1000}, we report the best performance of each algorithm in Table 4, along with the associated hyperparameters in Table 5. We select the parameters for each method by finding the best performing parameters from the possible values in Table 6 using the first 5 trials. We then evaluate over the full 20 trials to obtain our results. We also include visualizations of the recovered models from each unlearning method in Figures 3, 4, and 5. All experiments were run on either a single NVIDIA A40 GPU or a single NVIDIA GH200 GPU.

Table 6: Hyperparameter values tested in the experiments corresponding to Table 4. We denote the total number of epochs T.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁴, 10⁻³, 10⁻²} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {0, T/10, T/2} |
| γ_reg | {0.3, 0.6, 0.9, 1.0} |
| T_Proj | {1, T/10, T/5} |
| n_pert | {50} |

Figure 3 (panels (a) MinNorm-OG (ours), (b) GD, (c) NGD, (d) Ridge, (e) NGP, (f) GA; each panel plots the original model, the unlearned model, sin(x), and the retain and forget points over x ∈ [−15, 15]): Example unlearned model fits when given 10 unlearning epochs for the Data Poisoning experiment, where the forget points distort the retain set trend y = sin(x).
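The sup-norm deviation above can be approximated on a finite grid; a minimal sketch with stand-in models, where the grid resolution is our choice rather than the paper's:

```python
import math

def sup_deviation(f, lo=-15.0, hi=15.0, n_grid=3001):
    """Approximate sup_{x in [lo, hi]} |f(x) - sin(x)| on a uniform grid."""
    step = (hi - lo) / (n_grid - 1)
    return max(abs(f(lo + i * step) - math.sin(lo + i * step))
               for i in range(n_grid))

print(sup_deviation(math.sin))                 # 0.0: a perfect fit has no deviation
print(round(sup_deviation(lambda x: 0.0), 2))  # 1.0: predicting 0 misses sin's peaks
```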
Figure 4 (same panel layout as Figure 3): Example unlearned model fits when given 100 unlearning epochs for the Data Poisoning experiment, where the forget points distort the retain set trend y = sin(x).

Figure 5 (same panel layout as Figure 3): Example unlearned model fits
when given 1000 unlearning epochs for the Data Poisoning experiment, where the forget points distort the retain set trend y = sin(x).

E.3 Multi-Class Label Erasure

We use the MNIST and CIFAR-10 [LCB10; Kri09] datasets, creating red, green, and gray copies of each image in the training sets. We construct the retain set as the entire gray copy, and the forget set as a random subset of the red and green copies. Specifically, we construct the forget set as a random sample of 5 percent of the red samples combined with a random sample of the same size of the green samples. We then train a model to predict both the image content class (digit for MNIST, object for CIFAR) as well as the color on the combined data to serve as the initial model for unlearning. For MNIST, we use a CNN with two convolutional layers, one fully connected layer, and then separate fully connected prediction heads for the color and content class. We train for 100 epochs using an initial learning rate of 10⁻³ and a batch size of 3000, along with the AdamW optimizer. For CIFAR-10, we use a modified ResNet-18 [HZRS16] architecture, also with separate prediction heads for the two class types. In this case, we train for 120 epochs using stochastic gradient descent (SGD) with momentum and weight decay. We set the learning rate to 0.02, momentum to 0.9, and weight decay to 5×10⁻⁴, and we use a batch size of 256. We also apply a learning rate scheduler which applies a multiplicative decay of 0.1 every 50 epochs. For each dataset, the ground truth models are trained on the gray images alone using the same training parameters. We then apply each of the unlearning algorithms under different constraints on the number of unlearning epochs and the amount of available retain data. We define p_retain ∈ [0, 1] as the proportion of D_r available during unlearning. For each of the 5 trials, we train a new initial model and sample a p_retain proportion of D_r to serve as the available retain data.
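A pure-Python sketch of this forget-set construction; the function name, id ranges, and seed are illustrative, and whether the 5% sample is rounded up or down is not specified in the text (we use ceil here):

```python
import math
import random

def build_forget_set(red_ids, green_ids, frac=0.05, seed=0):
    """Sample frac of the red copies, plus an equally sized random
    sample of the green copies, to form the forget set."""
    rng = random.Random(seed)
    k = math.ceil(frac * len(red_ids))  # 5% of the red samples
    return rng.sample(red_ids, k) + rng.sample(green_ids, k)

# Toy usage: 1000 red image ids and 1000 green image ids.
forget = build_forget_set(list(range(1000)), list(range(1000, 2000)))
print(len(forget))  # 100 ids: 50 red + 50 green
```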
During each unlearning epoch, the algorithms iterate over batches from the forget set. For every forget set batch, a corresponding batch of the same size is sampled from the available retained data. The epoch ends once all forget set batches have been processed, regardless of whether there are unused retain set samples remaining. Any unused retain batches are not discarded; they will be sampled in subsequent epochs. Once all available retain set batches have been used at least once, the sampling process begins again from the start of the available retain set samples. The ground truth unlearned model is only trained on gray samples, so it achieves strong accuracy on gray-colored inputs and always predicts the input image to be gray, no matter the input image color. We thus evaluate retain quality by accuracy on gray-colored test samples, and forget quality by the mean squared error between the predicted gray probability and the ideal value of 1 across all colored inputs. For each method, we sweep hyperparameters and plot
the Pareto frontier for each method, where the optimal point is at (1, 0), which indicates perfect retain accuracy and zero gray prediction error. Each point in the frontier for a given method represents the median results over 5 trials of a single hyperparameter combination, with the shaded uncertainty shown as half the interquartile range in each direction. We label the performance of the ground truth unlearned model as GT. We plot the Pareto frontiers and report the hyperparameters used to obtain the optimal curves in the figures and tables below. We do not necessarily sweep over every combination of the reported settings for every algorithm, as we selected some hyperparameter choices to fill out different areas of the frontier when needed. For example, we often had to set larger learning rates for Scrub to trace a full curve from the upper right to the bottom left. Without doing so, the Scrub results did not reach the bottom half of the plot, as the unlearned models remained too close to the initial trained model. Similarly, for some of the CIFAR-10 experiments, our algorithm MinNorm-OG needed smaller learning rates and smaller values of λ_reg than usual to sweep through the full range up to the top-right corner, as this area represents models which remain close to the original trained model. We observe that MinNorm-OG performs the best across all settings. We see that the CIFAR-10 experiments are much more challenging than those on MNIST, as the retain set accuracy degrades much more sharply on CIFAR-10 for all unlearning methods. Further, for a small number of allowed unlearning epochs, the performance advantage of MinNorm-OG over the other methods can be substantial. All training and parameter searches were performed on a cluster of NVIDIA GH200 GPUs. For example, sweeping through 150 parameter combinations for Scrub using 8 GPUs at once takes around 15 minutes for 5 unlearning epochs on MNIST.
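The frontier itself is just the set of non-dominated hyperparameter runs; a minimal sketch with made-up run values:

```python
def pareto_frontier(points):
    """Non-dominated (retain_quality, forget_error) points, where higher
    retain quality (x) and lower forget error (y) are both better."""
    frontier = []
    for x, y in points:
        dominated = any(
            (x2 >= x and y2 <= y) and (x2, y2) != (x, y)
            for x2, y2 in points
        )
        if not dominated:
            frontier.append((x, y))
    return sorted(frontier)

runs = [(0.95, 0.40), (0.90, 0.10), (0.80, 0.05), (0.85, 0.30), (0.70, 0.50)]
print(pareto_frontier(runs))  # [(0.8, 0.05), (0.9, 0.1), (0.95, 0.4)]
```

The run (0.85, 0.30) is dropped because (0.90, 0.10) beats it on both axes; the surviving points trade retain quality against forget error.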
Table 7: Hyperparameter values tested for the results in Figure 6 running the Multi-Label Class Erasure experiment on MNIST with p_retain = .05 and T = 5.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁴, 3×10⁻⁴, 5×10⁻⁴, 10⁻³, 5×10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0, 2.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {0, 1, 2, 3, 4} |
| γ_reg | {0.3, 0.6, 0.9} |
| T_Proj | {1, 2} |
| n_pert | {20, 40} |

Figure 6 (Pareto plot of Forget Quality (lower is better) against Retain Quality (higher is better) for GD, GA, NGD, NGP, NPO, Scrub, Ridge, MinNorm-OG, and GT): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on MNIST with p_retain = .05 and T = 5. This is an enlarged version of the left subfigure in Figure 2.

Table 8: Hyperparameter values tested for the results in Figure 7 running the Multi-Label Class Erasure experiment on MNIST with p_retain = .01 and T = 5.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁴, 3×10⁻⁴, 5×10⁻⁴, 10⁻³, 5×10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.5, 1.0} |
| T_GD | {0, 1, 2, 3, 4} |
| γ_reg | {0.3, 0.6, 0.9} |
| T_Proj | {1, 2} |
| n_pert | {20, 40} |

Figure 7 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on MNIST with p_retain = .01 and T = 5.

Table 9: Hyperparameter values tested for
the results in Figure 8 running the Multi-Label Class Erasure experiment on MNIST with p_retain = .05 and T = 2.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁴, 3×10⁻⁴, 5×10⁻⁴, 10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1, 0.5} |
| T_GD | {0, 1} |
| γ_reg | {0.3, 0.6, 0.9} |
| T_Proj | {1, 2} |
| n_pert | {20, 40} |

Figure 8 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on MNIST with p_retain = .05 and T = 2.

Table 10: Hyperparameter values tested for the results in Figure 9 running the Multi-Label Class Erasure experiment on MNIST with p_retain = .01 and T = 8.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁴, 5×10⁻⁴, 10⁻³, 5×10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1} |
| T_GD | {0, 1, 2, 3, 4, 5, 6, 7} |
| γ_reg | {0.3, 0.6, 0.9} |
| T_Proj | {1, 2} |
| n_pert | {20, 40} |

Figure 9 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on MNIST with p_retain = .01 and T = 8.

Table 11: Hyperparameter values tested for the results in Figure 10 running the Multi-Label Class Erasure experiment on CIFAR-10 with p_retain = .001 and T = 5.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁷, 10⁻⁶, 5×10⁻⁵, 10⁻⁴, 5×10⁻⁴, 10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.0, 0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1} |
| T_GD | {0, 1, 2, 4} |
| γ_reg | {0.3, 0.6, 0.9, 1.0} |
| T_Proj | {1, 2, 3, 4} |
| n_pert | {20} |

Figure 10 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on CIFAR-10 with p_retain = .001 and T = 5. This is an enlarged version of the right subfigure in Figure 2.
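Exhaustive sweeps like those in the tables above can be enumerated with a Cartesian product; the grid below is a made-up illustration, not a sweep actually used in the paper:

```python
from itertools import product

# Hypothetical sweep grid in the style of Tables 6-14.
grid = {
    "eta":     [1e-4, 1e-3, 1e-2],
    "lam_reg": [0.1, 0.3, 0.5],
    "T_GD":    [0, 1, 2],
}

def sweep(grid):
    """Enumerate every hyperparameter combination in the grid."""
    keys = list(grid)
    for values in product(*grid.values()):
        yield dict(zip(keys, values))

configs = list(sweep(grid))
print(len(configs))  # 27 combinations = 3 * 3 * 3
```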
Table 12: Hyperparameter values tested for the results in Figure 11 running the Multi-Label Class Erasure experiment on CIFAR-10 with p_retain = .001 and T = 10.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁷, 10⁻⁶, 5×10⁻⁵, 10⁻⁴, 5×10⁻⁴, 10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.0, 0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1} |
| T_GD | {1, 2, 3, 4} |
| γ_reg | {0.3, 0.6, 0.9, 1.0} |
| T_Proj | {1, 2, 3, 4} |
| n_pert | {20} |

Figure 11 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on CIFAR-10 with p_retain = .001 and T = 10.

Table 13: Hyperparameter values tested for the results in Figure 12 running the Multi-Label Class Erasure experiment on CIFAR-10 with p_retain = .01 and T = 5.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁷, 10⁻⁶, 5×10⁻⁵, 10⁻⁴, 5×10⁻⁴, 10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.0, 0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1} |
| T_GD | {1, 2, 3, 4} |
| γ_reg | {0.3, 0.6, 0.9, 1.0} |
| T_Proj | {1, 2, 3, 4} |
| n_pert | {20} |

Figure 12 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on CIFAR-10 with p_retain = .01 and T = 5.

Table 14: Hyperparameter values tested for the results in Figure 13 running the Multi-Label Class Erasure experiment on CIFAR-10 with p_retain = .01 and T = 10.

| Hyperparameter | Sweep Values |
|---|---|
| η | {10⁻⁷, 10⁻⁶, 5×10⁻⁵, 10⁻⁴, 5×10⁻⁴, 10⁻³} |
| λ_GA | {10⁻³, 10⁻², 10⁻¹, 1.0} |
| λ_reg | {0.0, 0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1, 0.5} |
| T_GD | {1, 2, 3, 4} |
| γ_reg | {0.3, 0.6, 0.9, 1.0} |
| T_Proj | {1, 2, 3, 4} |
| n_pert | {20} |
Figure 13 (same layout as Figure 6): Pareto frontiers for each method across hyperparameter settings in the Multi-Class Label Erasure task on CIFAR-10 with p_retain = .01 and T = 10.

E.4 Representation Collapse

We use a subset of MNIST where the retain set contains the images with digit 0 colored green and the images with digit 1 colored red. We then construct the forget set by randomly sampling 10% of the 0 and 1 digits and coloring them oppositely to the retain set coloring, so the forget set 0's are colored red and the forget set 1's are colored green. We train the initial model over 250 epochs with a learning rate of 10⁻³ and the AdamW optimizer. For MNIST, we use the same convolutional neural network architecture as in the Multi-Class Label Erasure experiment, except with a single prediction head, and we use a batch size of 3000. For CIFAR-10, we similarly use a modified ResNet-18 architecture along with a batch size of 2048. We also train ground truth unlearned models using the same settings, except we only train for 100 epochs instead of 250. The ground truth unlearned model predicts from color alone, as color perfectly determines the label in D_r and is easier to learn than digit shape. In contrast, models trained on the full dataset D = D_r ⊔ D_f must rely on shape, since color is no longer predictive. For evaluation, we relabel training images by color and assess unlearning via color-label accuracy, testing whether the unlearning methods can collapse the original model into just a color classifier.

Table 15: Unlearning performance across constraints on the number of epochs and percentage of accessible retain set samples for the Representation Collapse experiment. Evaluation is measured as accuracy on duplicate training images labeled by color only (higher is better). We report medians over 5 trials, along with the range of the central 3 values.
| Retain % | Epochs | GD | GA | NGD | NGP | NPO | Scrub | MinNorm-OG | Ridge |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | 0.60 (0.52, 0.70) | 0.50 (0.50, 0.50) | 0.50 (0.50, 0.50) | 0.90 (0.77, 0.97) | 0.50 (0.50, 0.50) | 0.80 (0.74, 0.85) | 1.00 (1.00, 1.00) | 0.73 (0.53, 0.73) |
| 1 | 8 | 0.72 (0.53, 0.74) | 0.50 (0.50, 0.50) | 0.50 (0.50, 0.50) | 1.00 (0.99, 1.00) | 0.50 (0.50, 0.50) | 0.96 (0.79, 0.97) | 1.00 (1.00, 1.00) | 0.73 (0.66, 0.73) |
| 1 | 10 | 0.76 (0.73, 0.79) | 0.50 (0.50, 0.50) | 0.50 (0.50, 0.50) | 1.00 (1.00, 1.00) | 0.50 (0.50, 0.50) | 1.00 (1.00, 1.00) | 1.00 (1.00, 1.00) | 0.75 (0.73, 0.82) |
| 10 | 5 | 0.73 (0.52, 0.73) | 0.50 (0.50, 0.58) | 0.50 (0.50, 0.50) | 0.91 (0.82, 0.92) | 0.52 (0.50, 0.57) | 0.76 (0.73, 0.83) | 1.00 (0.85, 1.00) | 0.73 (0.52, 0.73) |
| 10 | 8 | 0.72 (0.65, 0.74) | 0.50 (0.50, 0.50) | 0.50 (0.50, 0.50) | 1.00 (1.00, 1.00) | 0.50 (0.50, 0.50) | 1.00 (0.99, 1.00) | 1.00 (1.00, 1.00) | 0.77 (0.70, 0.81) |
| 10 | 10 | 0.73 (0.69, 0.80) | 0.50 (0.50, 0.50) | 0.50 (0.50, 0.50) | 1.00 (1.00, 1.00) | 0.50 (0.50, 0.50) | 1.00 (1.00, 1.00) | 1.00 (1.00, 1.00) | 0.92 (0.81, 0.92) |

We apply each unlearning algorithm for a set number of unlearning epochs T as well as a fixed proportion of the retain set which is
accessible, which we denote p_retain ∈ [0, 1]. Just as in the Multi-Class Label Erasure experiment, during each unlearning epoch the algorithms iterate over batches from the forget set and sample a corresponding batch of the same size from the available retained data. The epoch ends once all forget set batches have been processed, regardless of whether there are unused retain set samples remaining. Any unused retain batches are not discarded; they will be sampled in subsequent epochs. Once all available retain set batches have been used at least once, the sampling process begins again from the start of the available retain set samples. We search over hyperparameters and report the best results for each algorithm in each setting in Table 15. We write Retain % to denote 100 × p_retain. We observed that the results can exhibit a bimodal distribution across trials, as each method must transition from an initial model that classifies digits perfectly to one that achieves the same retain accuracy using only color. When this transition fails, the model often reverts to digit-based predictions, leading to high variance in the results. To reflect this behavior robustly, Table 15 reports median color accuracy over 5 trials, along with the range of the central 3 values. We note that MinNorm-OG consistently performs best. For each setting of the number of epochs and the Retain %, we show the hyperparameters we considered in Tables 16, 17, 18, 19, 20, and 21 before reporting the best performance out of each combination for each algorithm. All training was performed on a cluster of NVIDIA GH200 GPUs. For example, sweeping through all hyperparameter combinations listed in Table 16 for each algorithm completed in about 10 minutes using 8 nodes.

Table 16: Hyperparameter values considered for the Representation Collapse Experiment with T = 5 and p_retain = 0.01.
| Hyperparameter | Values |
|---|---|
| η | {10⁻², 8×10⁻³, 3×10⁻³} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.5, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {1, 2} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

Table 17: Hyperparameter values considered for the Representation Collapse Experiment with T = 5 and p_retain = 0.1.

| Hyperparameter | Values |
|---|---|
| η | {9×10⁻³, 7×10⁻³, 3×10⁻³} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.6, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {1, 2} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

Table 18: Hyperparameter values considered for the Representation Collapse Experiment with T = 8 and p_retain = 0.01.

| Hyperparameter | Values |
|---|---|
| η | {8×10⁻³, 3×10⁻³, 8×10⁻⁴} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.6, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {4, 6} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

Table 19: Hyperparameter values considered for the Representation Collapse Experiment with T = 8 and p_retain = 0.1.

| Hyperparameter | Values |
|---|---|
| η | {8×10⁻³, 3×10⁻³, 8×10⁻⁴} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.6, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {4, 6} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

Table 20: Hyperparameter values considered for the Representation Collapse Experiment with T = 10 and p_retain = 0.01.

| Hyperparameter | Values |
|---|---|
| η | {8×10⁻³, 3×10⁻³, 8×10⁻⁴} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.6, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {4, 7} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

Table 21: Hyperparameter values considered for the Representation Collapse Experiment with T = 10 and p_retain = .1.

| Hyperparameter | Values |
|---|---|
| η | {8×10⁻³, 3×10⁻³, 8×10⁻⁴} |
| λ_GA | {10⁻³, 10⁻², 0.1, 1.0} |
| λ_reg | {0.1, 0.3, 0.6, 1.0, 3.0} |
| σ | {0.1, 0.5, 1.0} |
| T_GD | {4, 7} |
| γ_reg | {0.3, 0.5, 0.9, 1.0} |
| T_Proj | {1, 2} |
| n_pert | {50} |

E.5 Asset Information

We use the MNIST [LCB10] and CIFAR-10 [Kri09] datasets in our experiments. CIFAR-10 is publicly available but does not specify an explicit license. MNIST is
One Rank at a Time: Cascading Error Dynamics in Sequential Learning

Mahtab Alizadeh Vandchali* Fangshuo (Jasper) Liao† Anastasios Kyrillidis
Department of Computer Science, Rice University
ma202@rice.edu, fl15@rice.edu, anastasios@rice.edu

Abstract

Sequential learning, where complex tasks are broken down into simpler, hierarchical components, has emerged as a paradigm in AI. This paper views sequential learning through the lens of low-rank linear regression, focusing specifically on how errors propagate when learning rank-1 subspaces sequentially. We present an analysis framework that decomposes the learning process into a series of rank-1 estimation problems, where each subsequent estimation depends on the accuracy of previous steps. Our contribution is a characterization of the error propagation in this sequential process, establishing bounds on how errors, e.g., due to limited computational budgets and finite precision, affect the overall model accuracy. We prove that these errors compound in predictable ways, with implications for both algorithmic design and stability guarantees.

*Equal contribution. †Equal contribution.

Preprint. Under review. arXiv:2505.22602v1 [cs.LG] 28 May 2025

Paper Meta-Analysis Card of "One Rank at a Time: Cascading Error Dynamics in Sequential Learning"

Authors: Mahtab Alizadeh Vandchali, Fangshuo (Jasper) Liao, Anastasios Kyrillidis
Institution: Rice CS

Research genesis: Current sequential learning approaches lack theoretical understanding of how numerical errors compound through hierarchical decomposition. While methods like LoRA demonstrate empirical success, the question of error propagation in sequential rank-1 subspace learning remains uncharacterized.

Thought process: Here, we first focus on the linear case as a foundational and more tractable setting to develop theoretical understanding.
We recognized that sequential learning can be mathematically formulated as iterative rank-1 matrix deflation, where each step depends on the accuracy of previous estimations. This led us to decompose the problem into studying how approximation errors from individual rank-1 subroutines propagate through the sequential process.

Methodology: The core innovation lies in characterizing error propagation through recursive bounds that depend on the spectral properties of the data matrix. The analysis decomposes the overall error into ground-truth approximation, propagation, and optimization components, establishing that errors compound multiplicatively with factors determined by singular value gaps (T⋆_k) and matrix condition numbers.

What remains open: Extension to non-linear transformations and complex neural architectures represents the primary theoretical challenge. The optimal allocation of resources across sequential components lacks complete characterization.

Limitations: The theoretical framework is constrained to linear low-rank regression settings, limiting direct application to modern deep learning architectures. Experimental validation focuses on relatively simple scenarios (feedforward networks, basic classification), and the analysis assumes specific spectral properties that may not hold in general practice.

Practical considerations: Implementation requires careful management of iteration budgets, with theoretical results suggesting front-loading computational effort on early components. The approach offers adaptive rank determination capabilities but demands more total training iterations compared to simultaneous optimization.

Theoretical implications: The analysis reveals that error propagation follows predictable mathematical patterns, challenging the view that sequential approaches are inherently less stable than simultaneous methods and providing a foundation for principled algorithm design in hierarchical learning systems.
Date: 05/22/2025
Correspondence: anastasios@rice.edu

1 Introduction

Sequential learning [1, 2, 3, 4, 5, 6] is a concept found in cognitive science that posits that
learning could be structured as a series of stages or levels [7, 8, 9, 10, 11, 12]. Sequential learning is especially relevant when the notion of "skills" is orthogonal or correlated to each other, or even layered hierarchically [13, 14]. For instance, in a multitask learning environment [15, 16, 17, 18], basic skills might serve for common tasks, while specialized skills might be required for specific tasks. Fully understanding sequential learning is an open question, even for simple models: researchers study not just how AI systems learn, but why they fail to learn when they do, and under what conditions they can learn better [19, 20, 21, 22, 23, 24, 25, 26]. Just to provide a non-exhaustive list of recent efforts: [27] presents a sequential learning strategy on videos and text for sentiment analysis, where learning features sequentially, from simpler to more complex, led to better performance. [28] combines deep learning with symbolic AI to tackle sequential learning tasks. [29] deals with recommendation systems, such as those used by Netflix or Amazon, and proposes a method for these systems to learn from sequences of user interactions with multiple types of data (e.g., text, images, videos) for better recommendations.

Our setting. We focus on low-rank subspaces as feature representations [30]. Low-rank models are compelling due to their interpretable solutions that capture influential factors in the data [31, 32, 33]. Assuming low-rank linear regression [34, 35, 36, 37] and given input X = [x_1, ..., x_n] ∈ R^{d×n}, the goal is to approximate the relationship between a dependent variable Y = [y_1, ..., y_n] ∈ R^{m×n} and independent unknown variables B ∈ R^{m×r} and A ∈ R^{r×d} that result in a lower-rank (r ≪ min(m, d)) matrix W = BA ∈ R^{m×d} such that:

Y ≈ BAX = WX.

Here, (x_i, y_i) represents a data sample of a dataset D := {(x_i, y_i)}_{i=1}^n.
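To make the dimensions concrete, a pure-Python sketch (toy sizes m = 3, r = 1, d = 4, n = 2 are our choice) checking that passing X through the r-dimensional funnel, B(AX), equals applying the composed low-rank matrix W = BA directly:

```python
def matmul(P, Q):
    """Naive matrix product of lists-of-rows P (p x q) and Q (q x s)."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

# Toy sizes: m = 3 outputs, d = 4 features, rank r = 1 funnel, n = 2 samples.
B = [[1.0], [2.0], [3.0]]                             # m x r
A = [[1.0, 0.0, -1.0, 2.0]]                           # r x d
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]  # d x n

W = matmul(B, A)                    # m x d, rank <= r
Y_funnel = matmul(B, matmul(A, X))  # reduce to r dims, then lift to m dims
Y_direct = matmul(W, X)
print(Y_funnel == Y_direct)  # True
```

The funnel route costs O(r(d + m)n) per pass instead of O(mdn), which is the usual computational argument for the factorized form.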
Conceptually, the matrix W projects the original d features onto an m-dimensional space, given that x_i is first passed through an r-dimensional "funnel", thus reducing the dimensionality and complexity of the model. In view of this, we utilize linear low-rank regression as a framework to study sequential learning processes, and how errors propagate through sequential rank-1 subspace learning. Algorithmically, the approach we consider relates to deflation in PCA [38, 39, 40, 41, 42, 43]. That is, given label Y_k, a rank-1 estimate of W is obtained via

a_k, b_k = argmin_{a ∈ R^d, b ∈ R^m} (1/2) ∥Y_k − b a⊤ X∥²_F.   (1)

Our approach starts off with Y_1 = Y to obtain a_1, b_1. The matrix Y_1 is further processed to exist on a "subspace" where the contributions of (a_1, b_1) are removed: Y_2 := Y_1 − b_1 a_1⊤ X. This process is repeated by sequentially applying rank-1 updates on the deflated matrix, which leads to an approximation of the second pair (a_2, b_2), and so on. Overall:

Y_1 = Y;  (a_k, b_k) = rank-1(Y_k, X, t);  Y_{k+1} := Y_k − b_k a_k⊤ X,   (2)

where rank-1(Y_k, X, t) returns an approximation of a rank-1 estimate in (1) that minimizes the mean-squared error (1/2) ∥Y_k − b a⊤ X∥²_F, using t iterations. We then estimate the subsequent subspaces by running the same rank-1 algorithm repetitively.

Motivation. While subspace tracking and estimation has a long history (see [44, 45, 46, 47] and references to these works), low-rank subspaces have recently gained attention due to emerging applications in AI. Parameter-efficient fine-tuning (PEFT) methods, such as LoRA [48], have demonstrated that representing weight updates as low-rank matrices can effectively adapt
large language models, while maintaining performance. But even beyond AI, recommendation systems [32] require real-time updates, and sequential low-rank modifications offer a computationally efficient way to incorporate new user-item interactions. This validates the hypothesis that complex transformations can be approximated through a series of low-rank updates. However, to our knowledge, the theoretical understanding of how errors accumulate in such sequential approximations remains limited.

Contributions. This work presents a mathematical formulation of linear low-rank regression that emphasizes its decomposition into rank-1 problems. We focus on the scenario where the sub-routine rank-1(Y_k, X, t) incurs numerical errors: even solving (2) for a single pair, our estimate is only an approximation to the true pair. This view offers a pathway to examine how errors from each rank-1 estimation affect subsequent estimations. Since each step in the deflation process depends on the accuracy of the previous steps, any error in estimating a component propagates to the next step, affecting the overall accuracy of the model. The following contributions are made:

• Hierarchical Learning Analysis. We provide a theoretical analysis of hierarchical learning, illustrating how each rank-1 component builds upon the previous components.

• Error Propagation Study. We provide an examination of how errors propagate through the deflation process in linear low-rank regression, highlighting implications for stability and accuracy.

• Generalization Ability. We analyze, in the setting of noiseless and noisy labels, how sequentially discovering rank-1 components can learn a model that enjoys a provable generalization guarantee.

• Experimental Validation. We validate our theory on both linear low-rank matrix regression problems and simple PEFT settings.
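The inexact sequential scheme in (2) can be sketched in pure Python for the simplest case X = I, where the rank-1 subroutine becomes t steps of power iteration on the deflated matrix; the toy 2×2 matrix and iteration budget are illustrative:

```python
import math

def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def transpose(M):
    return [list(c) for c in zip(*M)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rank1(Y, t=100):
    """Inexact rank-1 subroutine via t steps of power iteration.
    For X = I, problem (1) is the best rank-1 approximation of Y."""
    a = normalize([1.0] * len(Y[0]))            # right vector
    for _ in range(t):
        b = matvec(Y, a)                        # b = Y a
        a = normalize(matvec(transpose(Y), b))  # a is proportional to Y^T b
    b = matvec(Y, a)                            # unnormalized left vector
    return a, b

def sequential_lowrank(Y, r, t=100):
    """Sketch of the deflation loop in (2) with X = I: peel off one
    rank-1 component at a time, Y_{k+1} = Y_k - b_k a_k^T."""
    comps = []
    for _ in range(r):
        a, b = rank1(Y, t)
        comps.append((a, b))
        Y = [[Y[i][j] - b[i] * a[j] for j in range(len(a))]
             for i in range(len(Y))]
    return comps

Y = [[3.0, 0.0], [0.0, 1.0]]                    # singular values 3 and 1
(a1, b1), (a2, b2) = sequential_lowrank(Y, r=2)
print(round(abs(b1[0] * a1[0]), 3))  # 3.0: first component captures sigma_1
```

Shrinking t makes the subroutine less accurate, and the residual error is folded into the deflated matrix that the next component is estimated from, which is exactly the cascading effect the paper analyzes.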
2 Background

We use $\|a\|_2$ to denote the $\ell_2$-norm of vector $a$; $\|A\|_2$ denotes the spectral norm and $\|A\|_F$ the Frobenius norm of matrix $A$; $\text{sv}_L(A)$ and $\text{sv}_R(A)$ denote the normalized top left and right singular vectors of $A$.

Problem setup. Let $X \in \mathbb{R}^{d \times n}$ be the input matrix of $n$ data points, each with $d$ features. For simplicity, we assume that $X$ is sampled entrywise from the normal distribution with zero mean and unit variance, followed by a row-wise normalization, unless otherwise stated. Let $Y \in \mathbb{R}^{m \times n}$ be the output matrix based on the noiseless generative model $Y = W^\star X$, which simulates the process of "inserting" data samples (columns of $X$) through a low-rank linear channel $W^\star \in \mathbb{R}^{m \times d}$ to obtain the corresponding columns of $Y$. The goal is then to estimate the best low-rank parameter $W$ given data $(Y, X)$ as a low-rank linear regression problem:
$$\min_{W \in \mathbb{R}^{m \times d}} f(W) := \tfrac{1}{2}\|Y - WX\|_F^2 \quad \text{s.t.} \quad \text{rank}(W) \le r. \quad (3)$$

Solutions. This problem has a long history with various approaches, including convex methods [35, 49, 50], non-convex projected-gradient descent [51, 52, 53, 54, 55, 56], as well as matrix factorization ones [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]. In the latter, the problem becomes:
$$\min_{A \in \mathbb{R}^{r \times d},\, B \in \mathbb{R}^{m \times r}} f(A, B) := \tfrac{1}{2}\|Y - BAX\|_F^2, \quad (4)$$
which is related to modern task adaptation in neural network training, like LoRA [48, 4]. The key difference in our analysis is that we study the sequential nature of learning; in contrast, in the above scenarios one often utilizes factorized gradient descent, a low-rank solver that updates all $r$ rank-1 components simultaneously, as follows:
$$A_{t+1} = A_t - \eta_A \nabla_A f(A_t, B_t), \quad B_{t+1} = B_t - \eta_B \nabla_B f(A_t, B_t),$$
with $\eta_A, \eta_B$ learning rates. We acknowledge that solving (3)-(4) directly with these methods when $r$ is known is more efficient and could be preferable in terms of accuracy, yet it does not fall into the sequential scenario we focus on.

Learning rank-1 subspaces sequentially. Our aim is to study routines like the ones described in (1) and (2). That is, we are interested in the sequential, rank-1-updated linear regression setting, and our focus is on the theoretical understanding of how errors in the calculations in (2) affect the overall performance. To do so, we need to understand the behavior of both the exact sequential low-rank recovery and the inexact sequential low-rank recovery. We describe some simple algorithms to motivate our work.

Algorithm 1: Exact Sequential Low-Rank
Require: input data $X \in \mathbb{R}^{d \times n}$, output data $Y \in \mathbb{R}^{m \times n}$, target rank $r$
Ensure: rank-1 components $\{(a_k^\star, b_k^\star)\}_{k=1}^r$
1: $Y_1^\star \leftarrow Y$
2: for $k = 1$ to $r$ do
3:   $(a_k^\star, b_k^\star) \leftarrow \arg\min_{a \in \mathbb{R}^d,\, b \in \mathbb{R}^m} \tfrac{1}{2}\|Y_k^\star - b a^\top X\|_F^2$
4:   $Y_{k+1}^\star \leftarrow Y_k^\star - b_k^\star a_k^{\star\top} X$
5: end for
6: return $\{(a_k^\star, b_k^\star)\}_{k=1}^r$

Algorithm 2: Inexact Sequential Low-Rank
Require: input data $X \in \mathbb{R}^{d \times n}$, output data $Y \in \mathbb{R}^{m \times n}$, target rank $r$, sub-routine steps $t$
Ensure: approximate rank-1 components $\{(a_k, b_k)\}_{k=1}^r$
1: $Y_1 \leftarrow Y$
2: for $k = 1$ to $r$ do
3:   $(a_k, b_k) \leftarrow \text{rank-1}(Y_k, X, t)$
4:   $Y_{k+1} \leftarrow Y_k - b_k a_k^\top X$
5: end for
6: return $\{(a_k, b_k)\}_{k=1}^r$

Algorithm 1 aims to find exact low-rank subspaces by iteratively computing pairs of vectors $(a_k^\star, b_k^\star)$. It starts with the original output data $Y$ as $Y_1^\star$ (Line 1). In each iteration $k$, an optimization problem is solved to find the best pair $(a_k^\star, b_k^\star)$ that minimizes the Frobenius norm of the difference between the current matrix $Y_k^\star$ and the rank-1 estimate $b a^\top X$ (see Line 3)³. By applying the singular value decomposition (SVD) to $Y$, we decompose it as $Y = \sum_{k=1}^{p} \sigma_k^\star u_k^\star v_k^{\star\top}$, where $\sigma_k^\star$ are the singular values, and $u_k^\star$ and $v_k^\star$ are the left and right singular vectors, respectively. Note that we denote $p = \text{rank}(Y)$, but when executing Algorithm 1 and Algorithm 2 we may choose a target rank $r \ne p$.

Lemma 1.
According to the Eckart-Young-Mirsky theorem, under our defined settings and deflation method, for each $k$ we have that $Y_k^\star = \sum_{k'=k}^{p} \sigma_{k'}^\star u_{k'}^\star v_{k'}^{\star\top}$ and $b_k^\star a_k^{\star\top} X = \sigma_k^\star u_k^\star v_k^{\star\top}$.

The proof of Lemma 1 is provided in Appendix B.1. Namely, when $X$ has full rank with $n \ge d$, $b_k^\star$ and $a_k^\star$ can be uniquely identified up to scalar multiplication. After determining the pair $(a_k^\star, b_k^\star)$, the matrix $Y_{k+1}^\star$ is updated by subtracting the rank-1 component $b_k^\star a_k^{\star\top} X$ from $Y_k^\star$ (see Line 4). This iterative process continues for $r$ iterations, generating $r$ pairs of vectors, which collectively represent the exact low-rank subspaces.

Algorithm 2 differs from Algorithm 1 in Lines 3 and 4. In Algorithm 2, Line 3 executes a sub-routine for $t$ iterations, denoted by $\text{rank-1}(Y_k, X, t)$, to approximate the solution of (1) and return the estimates $(a_k, b_k)$. The parameter $t$ represents the number of iterations of this approximate computation. An example of the rank-1 sub-routine is the gradient descent algorithm, which executes:
$$a^{(t+1)} = a^{(t)} - \eta_a X \big(b^{(t)} a^{(t)\top} X - Y\big)^\top b^{(t)}, \quad b^{(t+1)} = b^{(t)} - \eta_b \big(b^{(t)} a^{(t)\top} X - Y\big) X^\top a^{(t)}. \quad (5)$$
Iterative algorithms such as (5) often produce numerical errors, leading to $b_k a_k^\top \ne b_k^\star a_k^{\star\top}$. This in turn affects the quality of the remaining information in $Y_{k+1}$ in Line 4, since the deflation step $Y_k - b_k a_k^\top X$ is based on an approximate deflated matrix $Y_k$, coming from iteration $k-1$, that is not equal to
$Y_k^\star$ in Algorithm 1, and also depends on the approximate current estimates $b_k a_k^\top X$, rather than $b_k^\star a_k^{\star\top} X$ as in the exact case. To study the influence of the numerical errors produced by (5), we introduce the following definition:

Definition 1 (Numerical Error). Let $(\bar{a}_k, \bar{b}_k)$ denote the exact rank-1 solution that approximates the processed label matrix $Y_k$ using data $X$:
$$\bar{a}_k, \bar{b}_k = \arg\min_{a \in \mathbb{R}^d,\, b \in \mathbb{R}^m} \tfrac{1}{2}\left\|Y_k - b a^\top X\right\|_F^2, \quad (6)$$
and recall that $a_k, b_k$ are the outputs of $\text{rank-1}(Y_k, X, t)$. We define the numerical error incurred at iteration $k$ by the rank-1 sub-routine as
$$\delta_k := \bar{b}_k \bar{a}_k^\top - b_k a_k^\top; \quad \|\delta_k\|_F \ge 0. \quad (7)$$
Notice that the definition of $\delta_k$ is based on $Y_k$, not $Y_k^\star$; recall that $Y_k$ is constructed recursively using $b_k a_k^\top$. When $b_k a_k^\top$ is solved inexactly, we cannot guarantee that $Y_k = Y_k^\star$. Consequently, it is almost always the case that $(\bar{a}_k, \bar{b}_k) \ne (a_k^\star, b_k^\star)$, implying $\|b_k^\star a_k^{\star\top} - \bar{b}_k \bar{a}_k^\top\|_F > 0$. However, since the rank-1 sub-routine only has access to $Y_k$, its output $(a_k, b_k)$ converges to $(\bar{a}_k, \bar{b}_k)$, and not $(a_k^\star, b_k^\star)$, as the number of iterations of the rank-1 sub-routine increases.

³We note that, although $\sum_{k=1}^{r} b_k^\star a_k^{\star\top} = W^\star$, $(b_k^\star, a_k^\star)$ does not necessarily align with the $k$-th left and right singular vectors of $W^\star$.

Related works. The task of sequential low-rank subspace identification has been studied in the context of Principal Component Analysis (PCA) [31, 73]. There are hierarchical game-theoretic approaches with multiple rank-1 players that provide a framework for understanding the decomposition of data into a hierarchy of skills or components [74, 75]. There, each rank-1 player can be seen as an agent learning a distinct, singular skill or feature from the dataset. The game-theoretic aspect ensures that each player (or component) optimizes a particular "deflated" objective [38, 39, 40, 41, 42, 43].

Incremental learning of eigenspaces. [76] identified that in deep matrix factorization, the low-rank components are discovered in a sequential manner.
[77] extends this observation to the case of symmetric matrix sensing, backed by a detailed theoretical analysis. This analysis is further generalized to the asymmetric case by [78]. Notably, this implicit sequential recovery of the low-rank components can be leveraged to efficiently compress the learned model [79]. Nevertheless, it should be noted that this sequential behavior appears only when the model is deep enough or under a proper initialization. In contrast, our work considers the simple model of low-rank linear regression and explicitly enforces sequential learning. The work most similar to ours is [80], but their algorithmic design is specific to the task of matrix completion.

Low-Rank Adapters (LoRA). The sequential learning of low-rank subspaces has connections to PEFT methods like LoRA [48]. A stronger connection appears when LoRA is applied in continual learning, where low-rank adapters are learned in sequence as new tasks arrive [81]. Later works impose additional orthogonality constraints between the subspaces learned by the adapters to prevent catastrophic forgetting [82]. While recent theoretical work has shown that LoRA can adapt any model $f$ to accurately represent a target model if the LoRA rank is sufficiently large [83], the dynamics
of how errors propagate when using lower ranks remains unexplored. Recent works consider a collection of LoRAs via merging, such as [84, 85, 86, 87, 88, 89].

3 Error propagation during training

Recall that Lemma 1 guarantees that the rank-1 components given by the exact Algorithm 1 recover the top-$r$ singular vectors/values of $Y^\star$. In this section, we study the recovery error under the inexact Algorithm 2. To effectively compare the outputs of Algorithm 1 with those of Algorithm 2, we express these outputs in terms of the singular values and singular vectors of the deflated matrices $Y_k^\star$ and $Y_k$. We apply reasoning similar to that of Lemma 1 to the term $b_k a_k^\top X$ based on (6). To do so, we define $\sigma_{ik}$, $u_{ik}$, and $v_{ik}$ as the $i$-th top singular value and singular vector pairs of the matrix $Y_k$. Note that $\sigma_{ik} \ne \sigma_i^\star$ and $(u_{ik}, v_{ik}) \ne (u_i^\star, v_i^\star)$ for all $i$. Then, for each $k$, the SVD of $Y_k$ gives us $Y_k = \sum_{i=1}^{p-k+1} \sigma_{ik} u_{ik} v_{ik}^\top$. Since $b_k a_k^\top X$ is also rank-1, the Eckart-Young-Mirsky theorem implies that it is the optimal rank-1 approximation of $Y_k$ based on (6). Thus $b_k a_k^\top X = \sigma_{1k} u_{1k} v_{1k}^\top$, where $\sigma_{1k}$, $u_{1k}$, and $v_{1k}$ correspond to the top singular value and singular vectors of $Y_k$. Recall that $u_k^\star$ and $v_k^\star$ are the top left and right singular vectors, respectively, of $Y_k^\star$. Since singular vectors are unique only up to a sign, both $\text{sv}_L(Y_k)$ and $-\text{sv}_L(Y_k)$ are valid left singular vectors, and similarly both $\text{sv}_R(Y_k)$ and $-\text{sv}_R(Y_k)$ are valid right singular vectors of $Y_k$. We therefore choose $u_{1k}$ and $v_{1k}$ to be the ones such that $0 \le v_k^{\star\top} v_{1k}$ and $0 \le u_k^{\star\top} u_{1k}$:
$$u_{1k} := \text{sv}_L(Y_k) \cdot \arg\min_{s \in \{\pm 1\}} \|s \cdot \text{sv}_L(Y_k) - u_k^\star\|_2; \quad v_{1k} := \text{sv}_R(Y_k) \cdot \arg\min_{s \in \{\pm 1\}} \|s \cdot \text{sv}_R(Y_k) - v_k^\star\|_2.$$
We provide a characterization of the error propagation in the deflation method of Algorithm 2 that is agnostic to the details of the sub-routine rank-1, i.e., when one only has knowledge of $\|\delta_k\|_F$. The proof can be found in Appendix A.

Theorem 1. Let $\{(a_k, b_k)\}_{k=1}^r$ be the output of Algorithm 2. Let $\delta_k$ be given as in Definition 1 with $\|\delta_k\|_F > 0$. Let $\sigma_1^\star, \ldots$
$, \sigma_{r^\star}^\star$ denote the singular values of $Y$. Define the minimum singular value gap as $T_k^\star := \min\{\min_{j>k} |\sigma_k^\star - \sigma_j^\star|,\ \sigma_k^\star\}$. Also, define an error bound $E(k)$ as:
$$E(k) := \sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \left(2 + \frac{6\sigma_j^\star}{T_k^\star}\right).$$
If $E(k) < \tfrac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$, then the output of Algorithm 2 satisfies:
$$\left\|Y - \sum_{k=1}^{r} b_k a_k^\top X\right\|_F \le \left(\sum_{k=r+1}^{p} \sigma_k^\star\right) + \sigma_{\max}(X) \sum_{k=1}^{r} \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \left(2 + \frac{6\sigma_j^\star}{T_k^\star}\right). \quad (8)$$

Theorem 1 characterizes how errors from approximately solving the rank-1 sub-routine propagate through the deflation procedure in sequential low-rank approximations. The theorem asserts that, as long as each error $\delta_k$ is sufficiently small, the compounded effect of errors across the sequence remains bounded, thereby preserving the accuracy of the final low-rank approximation.

Remark 1. The error bound in Theorem 1 reflects sensitivity to the eigenspectrum of the underlying data matrix. Notice that the upper bound in Theorem 1 involves a summation of summations over components that depend on the error of the sub-routine $\delta_{k'}$. In particular, both the number of summands and the multiplicative factor $\prod_{j=k'+1}^{k} (2 + 6\sigma_j^\star / T_k^\star)$ in each summand grow as $k$ increases. Notice that
for a slower decay of singular values (corresponding to a smaller eigengap), error propagation is amplified, making approximation steps more susceptible to the accumulation of the individual errors $\delta_k$, and vice versa. This dependency on the singular spectrum necessitates more precision in each step for data matrices with dense singular values, to avoid error escalation.

4 Generalization of sequential rank-1 updates

Thus far, we have focused on constructing components $\{(a_k, b_k)\}_{k=1}^r$ such that $\sum_{k=1}^{r} b_k a_k^\top X$ estimates $Y$. Given that $X$ and $Y$ are considered training data, the previous section characterized the training error of Algorithm 2. Here, we analyze the generalization ability of Algorithm 2, assuming that the data is generated based on some optimal parameter $W^\star$ with $\text{rank}(W^\star) = r^\star$:
$$Y = Y^\star + E; \quad Y^\star = W^\star X, \quad (9)$$
where $E \in \mathbb{R}^{m \times n}$ denotes the label noise generated from a certain distribution, and $Y^\star$ denotes the noiseless label. In the noiseless case where $E = 0$, we have that $p = \text{rank}(Y) = r^\star$, and Algorithm 1 can recover $\{(a_k^\star, b_k^\star)\}_{k=1}^r$ such that $W^\star = \sum_{k=1}^{r} b_k^\star a_k^{\star\top}$ when $r = r^\star$. However, $b_k^\star$ and $a_k^\star$ may not align with the $k$-th left and right singular vectors of $W^\star$ under the influence of $X$. In other words, each pair $(a_k^\star, b_k^\star)$ contains a component of $W^\star$ that extracts certain information from the input data $X$. When $E \ne 0$, it is possible that $p = \text{rank}(Y) > r^\star$.

Generalization under noiseless labels. As a warm-up, we consider the case where the noise $E = 0$. Intuitively, when $X$ is full rank, zero training loss would imply a perfect recovery of the optimal parameter $W^\star$. We state a more general result below, covering the case of non-zero training loss with component-wise generalization error.

Theorem 2. Let $\{(a_k, b_k)\}_{k=1}^r$ be the output of Algorithm 2. Let $\delta_k$ be given as in Definition 1 with $\|\delta_k\|_F > 0$. Let $\sigma_1^\star, \ldots, \sigma_{r^\star}^\star$ denote the singular values of $Y$. Define the minimum singular value gap as $T_k^\star := \min\{\min_{j>k} |\sigma_k^\star - \sigma_j^\star|,\ \sigma_k^\star\}$.
Also, define an error bound $E(k)$ as:
$$E(k) := \sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \left(2 + \frac{6\sigma_j^\star}{T_k^\star}\right).$$
If $E(k) < \tfrac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$ and $\sigma_{\min}(X) \ge 0$, then the output of Algorithm 2 satisfies:
$$\left\|b_k^\star a_k^{\star\top} - b_k a_k^\top\right\|_F \le \kappa(X) \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \left(2 + \frac{6\sigma_j^\star}{T_k^\star}\right); \quad \forall k \in [r]. \quad (10)$$
Moreover, the aggregation of the components $(a_k, b_k)$ approximates $W^\star$ as
$$\left\|W^\star - \sum_{k=1}^{r} b_k a_k^\top\right\|_F \le \sum_{k=r+1}^{r^\star} \frac{\sigma_k^\star}{\sigma_{\min}(X)} + \kappa(X) \sum_{k=1}^{r} \sum_{k'=1}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \left(2 + \frac{6\sigma_j^\star}{T_j^\star}\right). \quad (11)$$
Here, $\kappa(X) = \sigma_{\max}(X)/\sigma_{\min}(X)$ denotes the condition number of $X$.

The proof of Theorem 2 is provided in Appendix C. In particular, Theorem 2 states two results. First, (10) measures how the errors of individual components are influenced even by the numerical errors that appear when solving previous components. This bound illustrates the key factors contributing to the error at each iteration. As discussed previously, $(a_k^\star, b_k^\star)$ can be considered the components of $W^\star$ extracted based on the importance defined by the input data $X$. From this perspective, (10) shows how well these data-dependent components are approximated by Algorithm 2. Moreover, (11) measures how well the inexact method approximates $W^\star$, including errors due to inexact computations and the limitations of representing $W^\star$ with rank $r$. This bound sheds light on how the components $(a_k, b_k)$ collaboratively contribute
to the overall generalization ability.

Generalization under noisy labels. In the previous section, we studied the generalization ability of Algorithm 2 in a noiseless scenario with $Y = W^\star X$, where the algorithmic choice of $r = \text{rank}(Y)$ can be shown to be optimal. However, this argument may not hold when the labels are generated with non-zero additive noise. In this section, we consider the noise matrix $E$ to consist of i.i.d. entries $E_{ij} \sim \mathcal{N}(0, \varepsilon^2)$, where $\varepsilon$ controls the magnitude of the noise. In this case, with high probability we have $p = \text{rank}(Y) = m$. Let $\{(a_k, b_k)\}_{k=1}^r$ be the recovery result of Algorithm 2. Then we are interested in an upper bound on $\|W^\star - \sum_{k=1}^{r} b_k a_k^\top\|_F$. In particular, we have the following guarantee on the generalization error.

Theorem 3. Consider the scenario of finding the top-$r$ rank-1 subspaces that minimize the loss in (1). Let $T_k^\star := \min\{\min_{j>k} |\sigma_k^\star - \sigma_{jk}|,\ \sigma_k^\star\}$ and $T_{\min}^\star := \min_{k \in [1,r]} T_k^\star$. If the noise scale satisfies
$$\varepsilon \le O\left(\frac{T_{\min}^\star}{\sqrt{n} + \sqrt{\log 1/\gamma}}\right),$$
then with probability at least $1 - \gamma$, the output of Algorithm 2 satisfies:
$$\left\|W^\star - \sum_{k=1}^{r} b_k a_k^\top\right\|_F \le \kappa(X) \left(\sum_{k=r+1}^{r^\star} \sigma_r(W^\star) + \sum_{k=1}^{r} \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \left(2 + \frac{6\sigma_j^\star}{T_k^\star}\right)\right) + O\left(\frac{\varepsilon \sqrt{n \log 1/\gamma}}{\sigma_{\min}(X)} \left(r + \sqrt{\frac{\min\{r^\star, r\}}{T_{\min}^\star}}\right)\right). \quad (12)$$

The proof of Theorem 3 is given in Appendix C.3. In particular, Theorem 3 characterizes how Algorithm 2 recovers components that generalize even under label noise. The first term in the upper bound of (12) comes from the numerical errors of inexactly solving each rank-1 sub-routine. The second term demonstrates the influence of the additive noise $E$ on the generalization ability. To start, a larger noise scale $\varepsilon$ implies a worse generalization error. Moreover, a good choice of $r$ can greatly impact the generalization error as well: choosing $r < r^\star$ can result in a larger error in the first term, due to the incomplete estimation of the components of $W^\star$.
On the other hand, since the second term scales with $r$, choosing a larger $r$ can result in a larger error caused by the noise. This is the scenario where the noise is overfitted by increasing the complexity of the model. From this perspective, Theorem 3 characterizes the bias-variance trade-off in the sequential rank-1 recovery algorithm. Lastly, the requirement on the noise scale ensures that, after adding the noise, the ordering of the rank-1 components is unchanged.

5 Experimental results

5.1 Synthetic validation of the theoretical setting

We present experiments that validate our theory on error propagation in sequential rank-1 learning. Our experiments aim to demonstrate how the distribution of computational resources across rank-1 components affects the overall approximation quality, particularly focusing on how errors in early components propagate to later stages of the sequential learning process.

Figure 1: Impact of iteration allocation strategy under a fixed iteration budget. Left: $W^\star$ reconstruction error. Right: objective's training error.

Following the general set-up in (9), we consider three different iteration allocation strategies:
1. Equal: The same number of optimization iterations for each rank-1 component.
2. More First: More iterations allocated to the earlier components and fewer to later ones.
3. Less First: Fewer iterations
allocated to the earlier components and more to later ones.

Our analysis dictates that errors in early components propagate to later components, suggesting that allocating more iterations to earlier components leads to better overall performance. Figure 1 shows the reconstruction and training errors, respectively, for the three allocation strategies under a fixed computational budget. The results confirm our theoretical predictions. Allocating more iterations to earlier components leads to better final reconstruction and training errors compared to allocating fewer iterations initially. The equal-iteration strategy performs better than the "less first" approach, but worse than the "more first" strategy. This validates our theoretical finding that errors in early components propagate and compound through the sequential process. This cascading effect means that if early components are poorly approximated, their errors get magnified in subsequent components. Details and further results in the synthetic setting are deferred to Appendix E.

5.2 Experimental analysis using LoRA

We evaluate our sequential rank-1 approach for LoRA adaptation on three standard image classification datasets: MNIST, CIFAR10, and CIFAR100. We design the experiments such that each dataset presents a different level of challenge, to assess how our sequential LoRA adaptation performs under varying initial conditions. The purpose here is not to attain top-notch performance in these scenarios, nor to claim these as "real scenarios"; rather, it is to assess how sequential learning behaves in well- to (intentionally) badly-pretrained scenarios. This is also expected, given that the baseline model is a feedforward neural network.

Problem setting. We employ a simple feedforward network as our base architecture across all experiments. For each dataset, we first train the baseline model on a subset of classes (in particular, the first half of available classes).
We then apply our sequential rank-1 LoRA adaptation approach to handle the remaining classes via 3 sequential rank-1 trainings, i.e., $r = 3$. Our architecture consists of three fully-connected layers that map flattened input images to class logits. As usual, for MNIST, inputs are 784-dimensional (28×28 grayscale images), while for CIFAR10 and CIFAR100, inputs are 3072-dimensional (32×32×3 RGB images). Hidden layers have 512 units with ReLU activations, and the output layer dimension matches the number of classes in each dataset. We analyze three distinct scenarios:
1. MNIST (Strong Baseline): The baseline network achieves high accuracy (∼98%) on classes 0-4, providing a strong foundation for adaptation on the remaining classes 5-9.
2. CIFAR10 (Moderate Baseline): The baseline network reaches moderate accuracy (∼40%) on classes 0-4, representing a partially optimized model (a reminder that the model is not CNN-based but just a fully-connected network).
3. CIFAR100 (Weak Baseline): The baseline network attains lower accuracy (∼20%) on classes 0-49, exemplifying a relatively poor initial representation, where the LoRA models adapt over the remaining classes 50-99.

Figure 2: Test accuracy of sequential rank-1 LoRA when adapting to new classes across the three datasets. Left: MNIST. Center: CIFAR10. Right: CIFAR100. Note that, on purpose, the pretrained models are trained with good (MNIST), mediocre (CIFAR10), and bad (CIFAR100) accuracy.

Mathematical formulation. In standard LoRA, we parameterize the
weight change during fine-tuning as a low-rank decomposition $\Delta W = B A^\top \in \mathbb{R}^{m \times n}$, where $A \in \mathbb{R}^{n \times r}$ and $B \in \mathbb{R}^{m \times r}$ with $r \ll \min(m, n)$. In these experiments, and w.l.o.g., $r = 3$. In our approach, instead of optimizing all $r$ components simultaneously, we optimize one rank-1 component at a time, using the residual error from previous components to guide each subsequent step. In particular, let $W_0 \in \mathbb{R}^{m \times n}$ be a pre-trained weight matrix (in our case, we have a $W_0$ for every layer of the pretrained fully-connected network). Let $\mathcal{L}$ be the task-specific loss function, $f$ the network function, and $(x, y)$ the task data. Then,
$$\Delta W = \arg\min_{\Delta W} \mathcal{L}(f(x; W_0 + \Delta W), y). \quad (13)$$
We define our sequential rank-1 adaptation procedure as follows: for $k = 1, 2, \ldots, r$, find the rank-1 update that minimizes the task loss given the previously learned components:
$$a_k, b_k = \arg\min_{a \in \mathbb{R}^n,\, b \in \mathbb{R}^m} \mathcal{L}\left(f\left(x;\ W_0 + \sum_{j=1}^{k-1} b_j a_j^\top + b a^\top\right), y\right). \quad (14)$$
The final adapted model uses the weight matrix $W = W_0 + \sum_{k=1}^{r} b_k a_k^\top$ on the new data domain. We approximate the optimal rank-1 updates using (stochastic) gradient descent on (14). All experiments were conducted using Google Colab Pro+ with an NVIDIA A100 GPU (40GB memory).

Figure 3: Marker sizes indicate the relative efficiency of each configuration.

Adaptation performance across datasets. Figure 2 presents the test accuracy of sequential rank-1 LoRA components when adapting to new classes across the three datasets. The accuracy is measured solely on the new classes (classes "5-9" for MNIST and CIFAR10, classes "50-99" for CIFAR100), highlighting the adaptation capabilities rather than overall performance. The standard deviation for all the LoRA experiments is at most ∼1.5% around the mean over 5 different runs, and the key message of this section remains consistent across runs. Our results demonstrate that sequential LoRA adaptation effectively transfers knowledge across all three scenarios, though with varying degrees of success depending on the quality of the baseline model.
In all cases, we observe that sequential rank-1 LoRA training performs at least comparably to standard LoRA, where all $r = 3$ components are trained simultaneously. This performance, however, comes at a cost. Figure 3 displays the relationship between parameter efficiency (measured by test accuracy per training epoch) and total training epochs for different model architectures. Here, Rank-1 architectures correspond to just using $r = 1$ for different numbers of epochs; Rank-2 architectures correspond to $r = 2$, where the components are trained for different combinations of total epochs (e.g., some models have been trained with 1→1 epochs, while others have been trained with 10→10 epochs; more about this in the next paragraph), and so on. Across all datasets (see also Appendix F), we observe that sequential rank-1 approaches (denoted "Rank-1", "Rank-2", and "Rank-3") achieve comparable parameter efficiency (with a slight loss of accuracy) compared to standard LoRA. Sequential rank-1 models require more total training for comparable accuracy, thus creating a trade-off, but still maintain favorable parameter-to-performance ratios. Yet our approach introduces an interesting property: sequential rank-1 does not require knowing the rank of the adaptation a priori; one could check online whether the accuracy is sufficient and stop further training. Standard LoRA lacks this property: either the user needs to know a good value for $r$, or
one needs to consider different $r$ values from scratch before making the final decision.

Figure 4: "α→β→γ" denotes sequential training with α, β, and γ epochs for each component. Not all combinations are shown.

Sequential training paths. Figure 4 illustrates the effectiveness of different sequential training paths for the case of MNIST, where each path represents a sequence of component training durations. For example, path "1→3→5" indicates a rank-3 LoRA where the 1st component is trained for 1 epoch, the 2nd component for 3 epochs, and the 3rd component for 5 epochs. In all cases (the rest are provided in Appendix F, but convey a similar message), it is evident that a good first component implies, almost all the time, a better combined final model: front-loaded training schedules perform better, indicating that the first component captures most of the necessary adaptation, with diminishing returns for extensive training of later components. Appendix F contains more results that support the above observations.

6 Limitations

Our theoretical analysis and experimental results on sequential rank-1 learning have certain limitations. First, the theoretical analysis is constrained to linear low-rank regression, leaving open challenges in extending it to non-linear transformations and complex architectures. Moreover, while sequential rank-1 updates offer flexible rank determination, they may demand more training iterations than simultaneous rank optimization. Furthermore, our LoRA adaptation experiments are restricted to feedforward networks and basic classification tasks. Lastly, it remains an open question to characterize the optimal allocation of training epochs across components.

7 Conclusion

By examining error propagation in hierarchical learning frameworks, we demonstrate how the accuracy of sequential components is interconnected, with each subsequent step dependent on the precision of preceding estimations.
Experimental results on low-rank linear matrix regression and LoRA adaptation (main text and Appendix) validate our hypotheses. From a practical point of view, this work suggests that more computational resources (larger $T_k$) should be allocated to earlier components to reduce their approximation errors, as these have the largest impact on the final result. To this end, our work opens up the following future directions:
- Adaptive Procedures: Develop adaptive procedures that adjust the number of gradient steps $T_k$ based on the estimated approximation error; this connects with the learning-schedules literature.
- Component Reoptimization: Periodically refine earlier components after extracting new ones.
- Orthogonality Constraints: Enforce orthogonality between components to reduce interference.
- Hybrid Approaches: Combine sequential rank-1 updates with occasional full-rank steps.

References

[1] Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13:341–379, 2003.
[2] Shubham Pateria, Budhitama Subagdja, Ah-hwee Tan, and Chai Quek. Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5):1–35, 2021.
[3] Himanshu Sahni, Saurabh Kumar, Farhan Tejani, and Charles Isbell. Learning to compose skills. arXiv preprint arXiv:1711.11289, 2017.
[4] Oleksiy Ostapenko, Zhan Su, Edoardo Maria Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, and Alessandro Sordoni. Towards modular LLMs by building and reusing a library of LoRAs. arXiv preprint arXiv:2405.11157, 2024.
[5] Alessandro Sordoni, Eric Yuan, Marc-Alexandre Côté, Matheus Pereira, Adam Trischler, Ziang Xiao,
Arian Hosseini, Friederike Niedtner, and Nicolas Le Roux. Joint prompt optimization of stacked LLMs using variational inference. Advances in Neural Information Processing Systems, 36, 2024.
[6] Lucas Page-Caccia, Edoardo Maria Ponti, Zhan Su, Matheus Pereira, Nicolas Le Roux, and Alessandro Sordoni. Multi-head adapter routing for cross-task generalization. Advances in Neural Information Processing Systems, 36, 2024.
[7] Dudley Shapere. The structure of scientific revolutions. The Philosophical Review, 73(3):383–394, 1964.
[8] Ken Richardson. Models of Cognitive Development. Psychology Press, 2019.
[9] Hideyuki Okano, Tomoo Hirano, and Evan Balaban. Learning and memory. Proceedings of the National Academy of Sciences, 97(23):12403–12404, 2000.
[10] Susan Carey and Ellen M Markman. Cognitive development. In Cognitive Science, pages 201–254. Elsevier, 1999.
[11] K Anders Ericsson, Ralf T Krampe, and Clemens Tesch-Römer. The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3):363, 1993.
[12] National Research Council, Division of Behavioral, Board on Behavioral, Sensory Sciences, Committee on Developments in the Science of Learning with additional material from the Committee on Learning Research, and Educational Practice. How People Learn: Brain, Mind, Experience, and School: Expanded Edition, volume 1. National Academies Press, 2000.
[13] Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, pages 3762–3773. PMLR, 2020.
[14] Arslan Chaudhry, Naeemullah Khan, Puneet Dokania, and Philip Torr. Continual learning in low-rank orthogonal subspaces. Advances in Neural Information Processing Systems, 33:9900–9911, 2020.
[15] Rich Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[16] Sebastian Ruder. An overview of multi-task learning in deep neural networks.
arXiv preprint arXiv:1706.05098, 2017.
[17] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. Advances in Neural Information Processing Systems, 29, 2016.
[18] Michael Crawshaw. Multi-task learning with deep neural networks: A survey. arXiv preprint arXiv:2009.09796, 2020.
[19] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2017.
[20] Nils Bjorck, Carla P Gomes, Bart Selman, and Kilian Q Weinberger. Understanding batch normalization. Advances in Neural Information Processing Systems, 31, 2018.
[21] Pierre Baldi and Peter J Sadowski. Understanding dropout. Advances in Neural Information Processing Systems, 26, 2013.
[22] Ruo-Yu Sun. Optimization for deep learning: An overview. Journal of the Operations Research Society of China, 8(2):249–294, 2020.
[23] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. arXiv preprint arXiv:2004.08249, 2020.
[24] Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Animashree Anandkumar, Jiashi Feng, and Jose M Alvarez. Understanding the robustness in vision transformers. In International Conference on Machine Learning, pages 27378–27394. PMLR, 2022.
[25] Deng-Bao Wang, Lei Feng, and Min-Ling Zhang. Rethinking calibration of deep neural networks: Do not be afraid of overconfidence. Advances in Neural Information Processing Systems, 34:11809–11820, 2021.
[26] Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi
Ma. Rethinking bias-variance trade-off for generalization of neural networks. In International Conference on Machine Learning, pages 10767–10777. PMLR, 2020.
[27] Xianbing Zhao, Lizhen Qu, Tao Feng, Jianfei Cai, and Buzhou Tang. Learning in order! A sequential strategy to learn invariant features for multimodal sentiment analysis. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 9729–9738, 2024.
[28] Hayden McAlister, Anthony Robins, and Lech Szymanski. Sequential learning in the dense associative memory. arXiv preprint arXiv:2409.15729, 2024.
[29] Shuqing Bian, Xingyu Pan, Wayne Xin Zhao, Jinpeng Wang, Chuyuan Wang, and Ji-Rong Wen. Multi-modal mixture of experts representation learning for sequential recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 110–119, 2023.
[30] Amartya Sanyal, Varun Kanade, Philip HS Torr, and Puneet K Dokania. Robustness via deep low-rank representations. arXiv preprint arXiv:1804.07090, 2018.
[31] I.T. Jolliffe. Rotation of principal components: choice of normalization constraints. Journal of Applied Statistics, 22(1):29–35, 1995.
[32] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[33] René Vidal and Paolo Favaro. Low rank subspace clustering. Pattern Recognition Letters, 43:47–61, 2014.
[34] Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
[35] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[36] Angelika Rohde and Alexandre B Tsybakov. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887–930, 2011.
[37] Wooseok Ha and Rina Foygel Barber. Robust PCA with compressed data.
Advances in Neural Information Processing Systems, 28, 2015.
[38] Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933.
[39] Lester Mackey. Deflation methods for sparse PCA. Advances in Neural Information Processing Systems, 21, 2008.
[40] Fuzhen Zhang. The Schur Complement and Its Applications, volume 4. Springer Science & Business Media, 2006.
[41] Bharath K Sriperumbudur, David A Torres, and Gert RG Lanckriet. Sparse eigen methods by DC programming. In Proceedings of the 24th International Conference on Machine Learning, pages 831–838, 2007.
[42] Youcef Saad. Projection and deflation method for partial pole assignment in linear state feedback. IEEE Transactions on Automatic Control, 33(3):290–297, 1988.
[43] Y Danisman, MF Yilmaz, A Ozkaya, and I Comlekciler. A comparison of eigenvalue methods for principal component analysis. Appl. and Comput. Math, 13:316–331, 2014.
[44] Bin Yang. Projection approximation subspace tracking. IEEE Transactions on Signal Processing, 43(1):95–107, 1995.
[45] Namrata Vaswani, Thierry Bouwmans, Sajid Javed, and Praneeth Narayanamurthy. Robust subspace learning: Robust PCA, robust subspace tracking, and robust subspace recovery. IEEE Signal Processing Magazine, 35(4):32–55, 2018.
[46] Laura Balzano, Yuejie Chi, and Yue M Lu. Streaming PCA and subspace tracking: The missing data case. Proceedings of the IEEE, 106(8):1293–1310, 2018.
[47] Liangzu Peng, Paris Giampouras, and René Vidal. The ideal continual learner: An agent that never forgets. In International Conference on Machine Learning,
pages 27585–27610. PMLR, 2023.
[48] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2021.
[49] Kiryung Lee and Yoram Bresler. Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint. arXiv preprint arXiv:0903.4742, 2009.
[50] Zhang Liu and Lieven Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM Journal on Matrix Analysis and Applications, 31(3):1235–1256, 2009.
[51] Prateek Jain, Raghu Meka, and Inderjit S Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937–945, 2010.
[52] Kiryung Lee and Yoram Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Transactions on Information Theory, 56(9):4402–4416, 2010.
[53] A. Kyrillidis and V. Cevher. Matrix recipes for hard thresholding methods. Journal of Mathematical Imaging and Vision, 48(2):235–265, 2014.
[54] A. Kyrillidis and V. Cevher. Recipes on hard thresholding methods. In Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2011 4th IEEE International Workshop on, pages 353–356. IEEE, 2011.
[55] R. Khanna and A. Kyrillidis. IHT dies hard: Provable accelerated iterative hard thresholding. arXiv preprint arXiv:1712.09379, 2017.
[56] Peng Xu, Bryan He, Christopher De Sa, Ioannis Mitliagkas, and Chris Re. Accelerated stochastic power iteration. In Amos Storkey and Fernando Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 58–67. PMLR, 09–11 Apr 2018. URL https://proceedings.mlr.press/v84/xu18a.html.
[57] Samuel Burer and Renato DC Monteiro.
A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
[58] Prateek Jain and Inderjit S Dhillon. Provable inductive matrix completion. arXiv preprint arXiv:1306.0626, 2013.
[59] Yudong Chen and Martin J Wainwright. Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[60] Tuo Zhao, Zhaoran Wang, and Han Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
[61] Qinqing Zheng and John Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. In Advances in Neural Information Processing Systems, pages 109–117, 2015.
[62] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear matrix equations via Procrustes flow. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, pages 964–973. JMLR.org, 2016.
[63] Anastasios Kyrillidis, Amir Kalev, Dohyung Park, Srinadh Bhojanapalli, Constantine Caramanis, and Sujay Sanghavi. Provable compressed sensing quantum state tomography via non-convex methods. npj Quantum Information, 4(1):36, 2018.
[64] Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, and Sujay Sanghavi. Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach. arXiv preprint arXiv:1609.03240, 2016.
[65] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.
[66] Srinadh Bhojanapalli,
Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semi-definite optimization. In Conference on Learning Theory, pages 530–582, 2016.
[67] Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Global optimality of local search for low rank matrix recovery. In Advances in Neural Information Processing Systems, pages 3873–3881, 2016.
[68] Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, and Sujay Sanghavi. Finding low-rank solutions to matrix problems, efficiently and provably. arXiv preprint arXiv:1606.03168, 2016.
[69] Rong Ge, Chi Jin, and Yi Zheng. No spurious local minima in nonconvex low rank problems: A unified geometric analysis. arXiv preprint arXiv:1704.00708, 2017.
[70] Ya-Ping Hsieh, Yu-Chun Kao, Rabeeh Karimi Mahabadi, Yurtsever Alp, Anastasios Kyrillidis, and Volkan Cevher. A non-Euclidean gradient descent framework for non-convex matrix factorization. Technical report, Institute of Electrical and Electronics Engineers, 2017.
[71] A. Kyrillidis, A. Kalev, D. Park, S. Bhojanapalli, C. Caramanis, and S. Sanghavi. Provable quantum state tomography via non-convex methods. npj Quantum Information, 4(36), 2018.
[72] Junhyung Lyle Kim, George Kollias, Amir Kalev, Ken X Wei, and Anastasios Kyrillidis. Fast quantum state reconstruction via accelerated non-convex programming. In Photonics, volume 10, page 116. MDPI, 2023.
[73] Gene H Golub and Charles F Van Loan. Matrix Computations. JHU Press, 2013.
[74] Ian Gemp, Brian McWilliams, Claire Vernade, and Thore Graepel. EigenGame: PCA as a Nash equilibrium. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=NzTU59SYbNq.
[75] Ian Gemp, Brian McWilliams, Claire Vernade, and Thore Graepel. EigenGame unloaded: When playing games is better than optimizing. arXiv preprint arXiv:2102.04152, 2021.
[76] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization, 2019.
URL https://arxiv.org/abs/1905.13655.
[77] Jikai Jin, Zhiyuan Li, Kaifeng Lyu, Simon S. Du, and Jason D. Lee. Understanding incremental learning of gradient descent: A fine-grained analysis of matrix sensing, 2023. URL https://arxiv.org/abs/2301.11500.
[78] Mahdi Soltanolkotabi, Dominik Stöger, and Changzhi Xie. Implicit balancing and regularization: Generalization and convergence guarantees for overparameterized asymmetric matrix sensing. In Gergely Neu and Lorenzo Rosasco, editors, Proceedings of Thirty Sixth Conference on Learning Theory, volume 195 of Proceedings of Machine Learning Research, pages 5140–5142. PMLR, 12–15 Jul 2023. URL https://proceedings.mlr.press/v195/soltanolkotabi23a.html.
[79] Soo Min Kwon, Zekai Zhang, Dogyoon Song, Laura Balzano, and Qing Qu. Efficient compression of overparameterized deep models through low-dimensional learning dynamics, 2024. URL https://arxiv.org/abs/2311.05061.
[80] Zhi-Yong Wang, Xiao Peng Li, Hing Cheung So, and Abdelhak M. Zoubir. Adaptive rank-one matrix completion using sum of outer products. IEEE Transactions on Circuits and Systems for Video Technology, 33(9):4868–4880, 2023. doi: 10.1109/TCSVT.2023.3250651.
[81] Martin Wistuba, Prabhu Teja Sivaprasad, Lukas Balles, and Giovanni Zappella. Continual learning with low rank adaptation, 2023. URL https://arxiv.org/abs/2311.17601.
[82] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning, 2023. URL https://arxiv.org/abs/2310.14152.
[83] Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv
preprint arXiv:2303.10512, 2023.
[84] Ziyu Zhao, Tao Shen, Didi Zhu, Zexi Li, Jing Su, Xuwu Wang, Kun Kuang, and Fei Wu. Merging LoRAs like playing LEGO: Pushing the modularity of LoRA to extremes through rank-wise clustering. arXiv preprint arXiv:2409.16167, 2024.
[85] Nikolaos Dimitriadis, Pascal Frossard, and Francois Fleuret. Pareto low-rank adapters: Efficient multi-task learning with preferences. arXiv preprint arXiv:2407.08056, 2024.
[86] Taiqiang Wu, Jiahao Wang, Zhe Zhao, and Ngai Wong. Mixture-of-subspaces in low-rank adaptation. arXiv preprint arXiv:2406.11909, 2024.
[87] Oleksiy Ostapenko, Zhan Su, Edoardo Ponti, Laurent Charlin, Nicolas Le Roux, Lucas Caccia, and Alessandro Sordoni. Towards modular LLMs by building and reusing a library of LoRAs. In Forty-first International Conference on Machine Learning.
[88] Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, and Pulkit Agrawal. Training neural networks from scratch with parallel low-rank adapters. arXiv preprint arXiv:2402.16828, 2024.
[89] Wenhan Xia, Chengwei Qin, and Elad Hazan. Chain of LoRA: Efficient fine-tuning of language models via residual learning. arXiv preprint arXiv:2401.04151, 2024.
[90] Per-Åke Wedin. Perturbation bounds in connection with singular value decomposition. BIT, 12(1):99–111, March 1972. ISSN 0006-3835. doi: 10.1007/BF01932678. URL https://doi.org/10.1007/BF01932678.
[91] H. Weyl. Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71:441–479, 1912. URL http://eudml.org/doc/158545.

A Proof of Theorem 1

The proof focuses on bounding two crucial quantities: $\|b_k^\star a_k^{\star\top}X - \bar{b}_k\bar{a}_k^\top X\|_F$ and $\|Y_k - Y_k^\star\|_F$, where $\bar{b}_k\bar{a}_k^\top$ denotes the exact minimizer of the rank-1 sub-problem at step $k$, so that $b_k a_k^\top = \bar{b}_k\bar{a}_k^\top + \delta_k$ as in (7). The first quantity measures the difference between the true $k$-th rank-1 approximation of $Y$ and the leading rank-1 estimate of $Y_k$ derived from the $k$-th deflation step, which minimizes the loss specified in (1).
The second quantity evaluates the distance between the "ground-truth" deflation matrices, $Y_k^\star$, and the deflation matrices, $Y_k$, obtained in practice. Together, these bounds provide a comprehensive understanding of the approximation accuracy and the effectiveness of the deflation process.

The difference between $\sum_{k=1}^r b_k a_k^\top X$, the sum of rank-1 approximations returned by Algorithm 2, and $Y$, the training data label matrix, consists of three components:
• Ground-truth approximation error: $\big\|Y - \sum_{k=1}^r b_k^\star a_k^{\star\top} X\big\|_F$, which measures the deviation between the true label matrix $Y$ and the sum of the exact top $r$ rank-1 approximations of $Y$.
• Propagation error: $\big\|\sum_{k=1}^r b_k^\star a_k^{\star\top} X - \sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X\big\|_F$, which accumulates the differences between the exact top rank-1 estimates of each "ground-truth" deflated matrix, $Y_k^\star$, and of the corresponding empirical deflated matrix, $Y_k$.
• Optimization error: $\big\|\sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X - \sum_{k=1}^r b_k a_k^\top X\big\|_F$, which captures the cumulative numerical error from the rank-1 sub-routine not being solved exactly.

Combining these, we can bound the overall difference $\|Y - \sum_{k=1}^r b_k a_k^\top X\|_F$ as follows:
$$\Big\|Y - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \Big\|Y - \sum_{k=1}^r b_k^\star a_k^{\star\top} X\Big\|_F + \Big\|\sum_{k=1}^r b_k^\star a_k^{\star\top} X - \sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X\Big\|_F + \underbrace{\Big\|\sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F}_{\le\ \sum_{k=1}^r \|\delta_k X\|_F} \quad (15)$$
The last inequality follows from the definition in (7). The norm $\|\delta_k\|_F$ largely depends on the rank-1 sub-routine and can be controlled as long as $t$, the number of sub-routine iterations, is sufficiently large. The first component is at most $\sum_{k=r+1}^p \sigma_k^\star$, which is simply the difference $\sum_{k=1}^p \sigma_k^\star - \sum_{k=1}^r \sigma_k^\star$. By the triangle inequality:
$$\Big\|Y - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=r+1}^p \sigma_k^\star + \Big\|\sum_{k=1}^r b_k^\star a_k^{\star\top} X - \sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X\Big\|_F + \sum_{k=1}^r \|\delta_k X\|_F \le \sum_{k=r+1}^p \sigma_k^\star + \sum_{k=1}^r \big\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\big\|_F + \sum_{k=1}^r \|\delta_k X\|_F. \quad (16)$$
Therefore, our analysis will primarily focus on upper-bounding $\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F$. Intuitively, this depends on the difference between $Y_k$ and $Y_k^\star$. As a foundational step in our proof, we will first provide a characterization of $\|Y_k - Y_k^\star\|_F$.

Lemma 2. Let the $Y_k^\star$'s and $Y_k$'s be generated by Algorithm 1 and Algorithm 2, respectively. Then for all $k \in [r]$,
$$\|Y_{k+1} - Y_{k+1}^\star\|_F \le \|Y_k - Y_k^\star\|_F + \|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F + \|\delta_k X\|_F. \quad (17)$$
Lemma 2 upper-bounds $\|Y_{k+1} - Y_{k+1}^\star\|_F$ in terms of $\|Y_k - Y_k^\star\|_F$, $\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F$, and $\|\delta_k X\|_F$. To establish a recursive characterization of $\|Y_k - Y_k^\star\|_F$, we first need an upper bound for $\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F$.

Lemma 3. Let $T_k := \min\{\min_{j>k} |\sigma_k^\star - \sigma_{jk}|,\ \sigma_k^\star\}$. If $\|Y_k^\star - Y_k\|_2 < \min_{j>k} |\sigma_k^\star - \sigma_j^\star|$, then for all $k \in [r]$ we have:
$$\big\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\big\|_F \le \Big(\frac{3\sigma_k^\star}{T_k} + 1\Big)\|Y_k^\star - Y_k\|_F. \quad (18)$$
Plugging (18) into (17) gives a recurrence that depends purely on $\|\delta_k\|_F$ and the spectrum of $Y$:
$$\|Y_{k+1} - Y_{k+1}^\star\|_F \le \|Y_k - Y_k^\star\|_F + \Big(\frac{3\sigma_k^\star}{T_k} + 1\Big)\|Y_k^\star - Y_k\|_F + \|\delta_k X\|_F \le \Big(\frac{3\sigma_k^\star}{T_k} + 2\Big)\|Y_k^\star - Y_k\|_F + \|\delta_k X\|_F. \quad (19)$$
Unrolling this recurrence gives a closed-form upper bound for $\|Y_k - Y_k^\star\|_F$. Combining this upper bound with Lemma 3 and plugging the result into (16) gives an upper bound for $\|Y - \sum_{k=1}^r b_k a_k^\top X\|_F$.

B Missing proofs from Appendix A and a generalized theorem

B.1 Proof of Lemma 1

Proof. Let $\{(a_k^\star, b_k^\star)\}_{k=1}^r$ denote the outputs of Algorithm 1. According to the third line in Algorithm 1, starting from $k = 1$, we have:
$$(a_1^\star, b_1^\star) = \arg\min_{a \in \mathbb{R}^d,\, b \in \mathbb{R}^m} \tfrac{1}{2}\big\|Y_1^\star - b a^\top X\big\|_F^2.$$
Given that $b a^\top$ is a rank-1 matrix, the product $b a^\top X$ is also rank-1. Therefore, according to the Eckart–Young–Mirsky theorem, the best rank-1 approximation of $Y_1^\star = Y = \sum_{i=1}^p \sigma_i^\star u_i^\star v_i^{\star\top}$, based on its singular value decomposition, is $\sigma_1^\star u_1^\star v_1^{\star\top}$. Thus, $b_1^\star a_1^{\star\top} X = \sigma_1^\star u_1^\star v_1^{\star\top}$. Next, the deflation step in line 4 yields:
$$Y_2^\star := Y_1^\star - b_1^\star a_1^{\star\top} X = Y_1^\star - \sigma_1^\star u_1^\star v_1^{\star\top} = \sum_{i=2}^p \sigma_i^\star u_i^\star v_i^{\star\top}.$$
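The Eckart–Young–Mirsky step and the deflation identity above are easy to check numerically. The sketch below (illustrative only, not part of the paper's code) verifies that the leading singular triplet gives the best rank-1 approximation and that deflating it leaves a matrix whose top singular value is $\sigma_2$:

```python
import numpy as np

def best_rank1(Y):
    # Leading singular triplet via SVD: the exact solution of the
    # rank-1 sub-problem by Eckart-Young-Mirsky.
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    return S[0] * np.outer(U[:, 0], Vt[0])

rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 8))
R1 = best_rank1(Y)
S = np.linalg.svd(Y, compute_uv=False)

# Residual of the best rank-1 approximation in Frobenius norm is
# sqrt(sigma_2^2 + sigma_3^2 + ...).
assert np.isclose(np.linalg.norm(Y - R1), np.sqrt((S[1:] ** 2).sum()))

# Deflation: the deflated matrix Y - sigma_1 u_1 v_1^T has leading
# singular value sigma_2, so the recursion peels off one component per step.
S_defl = np.linalg.svd(Y - R1, compute_uv=False)
assert np.isclose(S_defl[0], S[1])
```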
Now, assuming by induction that $Y_k^\star = \sum_{i=k}^p \sigma_i^\star u_i^\star v_i^{\star\top}$ and $b_k^\star a_k^{\star\top} X = \sigma_k^\star u_k^\star v_k^{\star\top}$, we obtain for $k + 1$:
$$Y_{k+1}^\star = Y_k^\star - b_k^\star a_k^{\star\top} X = \sum_{i=k}^p \sigma_i^\star u_i^\star v_i^{\star\top} - \sigma_k^\star u_k^\star v_k^{\star\top} = \sum_{i=k+1}^p \sigma_i^\star u_i^\star v_i^{\star\top}.$$
Returning to line 3, we have:
$$(a_{k+1}^\star, b_{k+1}^\star) = \arg\min_{a \in \mathbb{R}^d,\, b \in \mathbb{R}^m} \tfrac{1}{2}\big\|Y_{k+1}^\star - b a^\top X\big\|_F^2.$$
By the same reasoning as for $k = 1$, $b_{k+1}^\star a_{k+1}^{\star\top} X$ equals the best rank-1 approximation of $Y_{k+1}^\star = \sum_{i=k+1}^p \sigma_i^\star u_i^\star v_i^{\star\top}$, which is $\sigma_{k+1}^\star u_{k+1}^\star v_{k+1}^{\star\top}$. □

B.2 Proof of Lemma 2

Proof. As defined in (7), we have $b_k a_k^\top = \bar{b}_k\bar{a}_k^\top + \delta_k$, where $\bar{b}_k\bar{a}_k^\top$ is the exact rank-1 minimizer at step $k$. Plugging this into (2), we get $Y_{k+1} = Y_k - (\bar{b}_k\bar{a}_k^\top + \delta_k) X$. Now let $\Delta_k = \|Y_k - Y_k^\star\|_F$. Then
$$\Delta_{k+1} = \|Y_{k+1} - Y_{k+1}^\star\|_F = \big\|Y_k - (\bar{b}_k\bar{a}_k^\top + \delta_k) X - Y_k^\star + b_k^\star a_k^{\star\top} X\big\|_F = \big\|Y_k - Y_k^\star - (\bar{b}_k\bar{a}_k^\top - b_k^\star a_k^{\star\top}) X - \delta_k X\big\|_F$$
$$\le \|Y_k - Y_k^\star\|_F + \big\|(\bar{b}_k\bar{a}_k^\top - b_k^\star a_k^{\star\top}) X\big\|_F + \|\delta_k X\|_F \le \Delta_k + \big\|\bar{b}_k\bar{a}_k^\top X - b_k^\star a_k^{\star\top} X\big\|_F + \|\delta_k X\|_F. \qquad\square$$

B.3 Proof of Lemma 3

Proof. We start by observing:
$$\big\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\big\|_F = \|\sigma_k^\star u_k^\star v_k^{\star\top} - \sigma_{1k} u_{1k} v_{1k}^\top\|_F = \big\|\sigma_k^\star (u_k^\star v_k^{\star\top} - u_{1k} v_{1k}^\top) + (\sigma_k^\star - \sigma_{1k}) u_{1k} v_{1k}^\top\big\|_F \le |\sigma_k^\star| \cdot \|u_k^\star v_k^{\star\top} - u_{1k} v_{1k}^\top\|_F + |\sigma_k^\star - \sigma_{1k}| \cdot \underbrace{\|u_{1k}\|_2}_{=1} \cdot \underbrace{\|v_{1k}\|_2}_{=1}.$$
Therefore,
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \sigma_k^\star \cdot \|u_k^\star v_k^{\star\top} - u_{1k} v_{1k}^\top\|_F + |\sigma_k^\star - \sigma_{1k}|. \quad (20)$$
Now, we express the term $\|u_k^\star v_k^{\star\top} - u_{1k} v_{1k}^\top\|_F$ as follows:
$$\|u_k^\star v_k^{\star\top} - u_{1k} v_{1k}^\top\|_F = \big\|u_k^\star (v_k^{\star\top} - v_{1k}^\top) + (u_k^\star - u_{1k}) v_{1k}^\top\big\|_F \le \underbrace{\|u_k^\star\|_2}_{=1} \cdot \|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2 \cdot \underbrace{\|v_{1k}\|_2}_{=1} = \|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2.$$
Substituting this result into inequality (20) gives:
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \sigma_k^\star \big(\|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2\big) + \underbrace{|\sigma_k^\star - \sigma_{1k}|}_{\le\, \|Y_k^\star - Y_k\|_F}. \quad (21)$$
The last inequality follows from Weyl's theorem. To complete the bound, we now find an upper bound for $\|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2$. To do so, we define two angles:
$$\alpha := \angle\{v_k^\star, v_{1k}\} \quad \text{and} \quad \beta := \angle\{u_k^\star, u_{1k}\}.$$
We know that
$$\cos\alpha = \frac{v_k^{\star\top} v_{1k}}{\|v_k^\star\|_2 \|v_{1k}\|_2} = v_k^{\star\top} v_{1k} \le 1,$$
since $\|v_k^\star\|_2 = 1$ and $\|v_{1k}\|_2 = 1$. Therefore
$$\sin^2\alpha = 1 - \cos^2\alpha = 1 - (v_k^{\star\top} v_{1k})^2 \;\Rightarrow\; (v_k^{\star\top} v_{1k})^2 = 1 - \sin^2\alpha.$$
We use the expansion of the square of $\|v_k^\star - v_{1k}\|_2$ to get:
$$\|v_k^\star - v_{1k}\|_2^2 = \underbrace{\|v_k^\star\|_2^2}_{=1} + \underbrace{\|v_{1k}\|_2^2}_{=1} - 2(v_k^{\star\top} v_{1k}) = 2 - 2(v_k^{\star\top} v_{1k}) \le 2 - 2(v_k^{\star\top} v_{1k})^2 = 2 - 2(1 - \sin^2\alpha) = 2\sin^2\alpha.$$
Thus
$$\|v_k^\star - v_{1k}\|_2^2 \le 2\sin^2\alpha. \quad (22)$$
Following the same procedure for $\beta$, we start with
$$\cos\beta = \frac{u_k^{\star\top} u_{1k}}{\|u_k^\star\|_2 \|u_{1k}\|_2} = u_k^{\star\top} u_{1k} \le 1,$$
since $\|u_k^\star\|_2 = 1$ and $\|u_{1k}\|_2 = 1$, so that $\sin^2\beta = 1 - (u_k^{\star\top} u_{1k})^2$, and the same expansion yields
$$\|u_k^\star - u_{1k}\|_2^2 \le 2\sin^2\beta. \quad (23)$$
Using the fact that $a + b \le \sqrt{2a^2 + 2b^2}$ together with (22) and (23), we get:
$$\|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2 \le \sqrt{2\|v_k^\star - v_{1k}\|_2^2 + 2\|u_k^\star - u_{1k}\|_2^2} \le 2\sqrt{\sin^2\alpha + \sin^2\beta}. \quad (24)$$
To proceed, we observe that under the assumption $\|Y_k^\star - Y_k\|_2 < \min_{j>k} |\sigma_k^\star - \sigma_j^\star|$, Weyl's inequality implies that
$$\min_{j>k} |\sigma_k^\star - \sigma_{jk}| \ge \min_{j>k} |\sigma_k^\star - \sigma_j^\star| - |\sigma_{jk} - \sigma_j^\star| > \|Y_k^\star - Y_k\|_2 - \|Y_k^\star - Y_k\|_2 = 0.$$
We know $\sigma_k^\star > 0$. Since we define $T_k := \min\{\min_{j>k} |\sigma_k^\star - \sigma_{jk}|,\ \sigma_k^\star\}$ and both terms are positive, $T_k$ is also positive. This satisfies the conditions of Wedin's theorem, allowing us to apply it.
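Weyl's inequality, used twice in the argument above, can be sanity-checked numerically. The snippet below (an illustrative check, not from the paper's codebase) perturbs a random matrix and verifies that no singular value moves by more than the spectral norm of the perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((10, 12))
Delta = 0.05 * rng.standard_normal((10, 12))

s = np.linalg.svd(M, compute_uv=False)
s_pert = np.linalg.svd(M + Delta, compute_uv=False)

# Weyl's inequality for singular values: |sigma_i(M + Delta) - sigma_i(M)|
# is at most the spectral norm ||Delta||_2, uniformly over i.
spec_norm = np.linalg.norm(Delta, 2)
assert np.all(np.abs(s - s_pert) <= spec_norm + 1e-12)
```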
Now since $\alpha := \angle\{v_k^\star, v_{1k}\}$ and $\beta := \angle\{u_k^\star, u_{1k}\}$, and because $v_k^\star$ and $u_k^\star$ are the first right and left singular vectors of $Y_k^\star$, while $v_{1k}$ and $u_{1k}$ are the first right and left singular vectors of $Y_k$, we can apply Wedin's theorem to get:
$$\sin^2\alpha + \sin^2\beta \le \frac{\|u_k^{\star\top}(Y_k^\star - Y_k)\|_F^2 + \|(Y_k^\star - Y_k) v_k^\star\|_F^2}{T_k^2} \le \frac{\|u_k^\star\|_F^2\,\|Y_k^\star - Y_k\|_F^2 + \|Y_k^\star - Y_k\|_F^2\,\|v_k^\star\|_F^2}{T_k^2} \le \frac{2\|Y_k^\star - Y_k\|_F^2}{T_k^2}.$$
Plugging this into (24), we have:
$$\|v_k^\star - v_{1k}\|_2 + \|u_k^\star - u_{1k}\|_2 \le \frac{2\sqrt{2}\,\|Y_k^\star - Y_k\|_F}{T_k} \le \frac{3\|Y_k^\star - Y_k\|_F}{T_k}.$$
Plugging this into (21) and using Weyl's inequality, we get:
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \sigma_k^\star \cdot \frac{3\|Y_k^\star - Y_k\|_F}{T_k} + \|Y_k^\star - Y_k\|_F = \Big(\frac{3\sigma_k^\star}{T_k} + 1\Big)\|Y_k^\star - Y_k\|_F. \quad (25) \qquad\square$$

B.4 Proof of a generalization of Theorem 1

In this section, we prove a more general form of Theorem 1, stated below.

Theorem 4. Let $\{(a_k, b_k)\}_{k=1}^r$ be the output of Algorithm 2. Let $\delta_k$ be given as in Definition 1 with $\|\delta_k\|_F > 0$. Let $Y = \sum_{k=1}^{r^\star} \sigma_k^\star u_k^\star v_k^{\star\top}$ be the SVD of $Y$, with $\sigma_1^\star \ge \cdots \ge \sigma_{r^\star}^\star$, and define $Y(r') = \sum_{k=1}^{r'} \sigma_k^\star u_k^\star v_k^{\star\top}$. Define the minimum singular value gap
$$T_k^\star := \min\Big\{\min_{j>k} |\sigma_k^\star - \sigma_j^\star|,\ \sigma_k^\star\Big\}.$$
Define the error bound $E(k)$ as:
$$E(k) := \sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \Big(\frac{6\sigma_j^\star}{T_j^\star} + 2\Big).$$
If the $\|\delta_k\|_F$'s are small enough that $E(k) < \frac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$, then for any $r \le r' \le r^\star$, the output of Algorithm 2 satisfies:
$$\Big\|Y(r') - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=r+1}^{r'} \sigma_k^\star + \sigma_{\max}(X) \sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big). \quad (26)$$
Notice that Theorem 4 naturally implies Theorem 1 by taking $r' = r^\star$. We give the proof of Theorem 4 below.

Proof. We decompose the approximation error as
$$\Big\|Y(r') - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \Big\|Y(r') - \sum_{k=1}^r b_k^\star a_k^{\star\top} X\Big\|_F + \Big\|\sum_{k=1}^r b_k^\star a_k^{\star\top} X - \sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X\Big\|_F + \Big\|\sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F. \quad (27)$$
By Lemma 1, we must have that
$$\Big\|Y(r') - \sum_{k=1}^r b_k^\star a_k^{\star\top} X\Big\|_F = \Big\|\sum_{k=1}^{r'} \sigma_k^\star u_k^\star v_k^{\star\top} - \sum_{k=1}^{r} \sigma_k^\star u_k^\star v_k^{\star\top}\Big\|_F \le \sum_{k=r+1}^{r'} \sigma_k^\star. \quad (28)$$
Moreover, by Definition 1, we have
$$\Big\|\sum_{k=1}^r \bar{b}_k\bar{a}_k^\top X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=1}^r \|\delta_k X\|_F \le \sigma_{\max}(X) \sum_{k=1}^r \|\delta_k\|_F. \quad (29)$$
Therefore, it suffices to study the second term in (27). Towards this end, we use the results in Lemma 2 and Lemma 3 to obtain
$$\|Y_{k+1} - Y_{k+1}^\star\|_F \le \|Y_k - Y_k^\star\|_F + \|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F + \|\delta_k X\|_F, \quad (30)$$
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \Big(\frac{3\sigma_k^\star}{T_k} + 1\Big)\|Y_k^\star - Y_k\|_F. \quad (31)$$
Combining (30) and (31), we have that
$$\|Y_{k+1} - Y_{k+1}^\star\|_F \le \Big(\frac{3\sigma_k^\star}{T_k} + 2\Big)\|Y_k^\star - Y_k\|_F + \|\delta_k X\|_F.$$
Let the sequence $\{Q_k\}_{k=1}^r$ be defined as
$$Q_{k+1} = a_k Q_k + b_k; \qquad Q_0 = 0; \qquad a_k = 2 + \frac{3\sigma_k^\star}{T_k}; \qquad b_k = \|\delta_k X\|_F.$$
Then by inequality (19) we must have $\|Y_k - Y_k^\star\|_F \le Q_k$ for all $k$. Invoking Lemma 5 gives:
$$\|Y_k - Y_k^\star\|_F \le \sum_{k'=0}^{k-1} \|\delta_{k'} X\|_F \prod_{j=k'+1}^{k-1} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big) \le \underbrace{\sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big)}_{:=\ \hat{E}(k)}. \quad (32)$$
We define the right-hand side to equal $\hat{E}(k)$. Enforcing $\hat{E}(k) \le \frac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$ gives
$$\|Y_k - Y_k^\star\|_2 \le \frac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star| \le \min_{j>k} |\sigma_k^\star - \sigma_j^\star|, \quad (33)$$
so the condition in Lemma 3 is met. Combining (31) and (32), and noticing that $\|\delta_k X\|_F \le \sigma_{\max}(X)\|\delta_k\|_F$, we have:
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \sigma_{\max}(X)\Big(\frac{3\sigma_k^\star}{T_k} + 1\Big) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big). \quad (34)$$
Notice that $1 + \frac{3\sigma_k^\star}{T_k} \le 2 + \frac{3\sigma_k^\star}{T_k}$.
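As an aside, the closed-form unrolling of this recurrence, invoked above via Lemma 5 (Appendix D), is straightforward to verify numerically; the following sketch is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps = 10
a = rng.uniform(1, 3, size=n_steps)   # multipliers a_k >= 0
b = rng.uniform(0, 1, size=n_steps)   # additive terms b_k >= 0, with b_0 = Q_1

# Direct recursion: Q_{k+1} = a_k Q_k + b_k, starting from Q_0 = 0.
Q = 0.0
for k in range(n_steps):
    Q = a[k] * Q + b[k]

# Closed form from Lemma 5: Q_k = sum_{k'} b_{k'} * prod_{j > k'} a_j.
closed = sum(b[kp] * np.prod(a[kp + 1:]) for kp in range(n_steps))
assert np.isclose(Q, closed)
```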
Therefore, (34) becomes:
$$\|b_k^\star a_k^{\star\top} X - \bar{b}_k\bar{a}_k^\top X\|_F \le \sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big). \quad (35)$$
Combining (28), (29), and (35) gives
$$\Big\|Y(r') - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=r+1}^{r'} \sigma_k^\star + \sigma_{\max}(X)\sum_{k=1}^r \|\delta_k\|_F + \sigma_{\max}(X)\sum_{k=1}^r \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big) = \sum_{k=r+1}^{r'} \sigma_k^\star + \sigma_{\max}(X)\sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{3\sigma_j^\star}{T_j}\Big).$$
Finally, due to (33) and Weyl's inequality, we must have $|\sigma_{jk} - \sigma_k^\star| \le \frac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$. Thus, $T_k \ge \frac{1}{2} T_k^\star$. This allows us to define
$$E(k) = \sigma_{\max}(X) \sum_{k'=0}^{k-1} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k-1} \Big(\frac{6\sigma_j^\star}{T_j^\star} + 2\Big)$$
and obtain that $E(k) \ge \hat{E}(k)$. Thus, enforcing $E(k) \le \frac{1}{2}\min_{j>k} |\sigma_k^\star - \sigma_j^\star|$ suffices. Moreover, we have
$$\Big\|Y(r') - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=r+1}^{r'} \sigma_k^\star + \sigma_{\max}(X)\sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big),$$
which completes the proof. □

C Proof of Theorem 2

Proof overview. A rough idea for showing this theorem builds upon our characterization of the training error. Let the $\hat{b}_k^\star$'s be the output of Algorithm 1 when using $Y^\star$ as the label. We consider the following orthonormal basis of $\mathbb{R}^m$ extended from the $\hat{b}_k^\star$'s:
$$\hat{b}_1, \ldots, \hat{b}_m; \qquad \hat{b}_k = \hat{b}_k^\star / \|\hat{b}_k^\star\|_2 \ \text{ if } k \le r^\star.$$
Let $\hat{B} \in \mathbb{R}^{m \times r^\star}$ consist of $\hat{b}_1, \ldots, \hat{b}_{r^\star}$, and let $\hat{B}_\perp \in \mathbb{R}^{m \times (m - r^\star)}$ consist of $\hat{b}_{r^\star+1}, \ldots, \hat{b}_m$. Then, we can write $Y$ as:
$$Y = W^\star X + \hat{B}\hat{B}^\top E + \hat{B}_\perp\hat{B}_\perp^\top E = \hat{B}\big(\Sigma \hat{A}^\top X + E_1\big) + \hat{B}_\perp E_2,$$
where $E_1 \in \mathbb{R}^{r^\star \times n}$ and $E_2 \in \mathbb{R}^{(m - r^\star) \times n}$ are noise matrices with i.i.d. Gaussian entries. Based on the above decomposition, $E_1$ can be seen as the unavoidable noise, which adds to the training error, while $E_2$ is the error that can be avoided if we solve for only the top $r^\star$ components. Of course, $\hat{B}(\Sigma \hat{A}^\top X + E_1)$ is not the truncated top-$r^\star$ SVD of $Y$, since $\Sigma \hat{A}^\top X + E_1$ does not have orthogonal rows. However, when $E_1$ is small, this term approximates the truncated SVD well enough. Based on this intuition, we have the following lemma:

Lemma 4. Let $Y^\star$ have the SVD $Y^\star = \sum_{k=1}^{r} \hat{\sigma}_k \hat{u}_k \hat{v}_k^\top$, and let $Y = Y^\star + E$ have the SVD $Y = \sum_{k=1}^m \sigma_k^\star u_k^\star v_k^{\star\top}$. Let $Y^\star(\hat{m})$ and $Y(\hat{m})$ be the truncated rank-$\hat{m}$ SVDs of $Y^\star$ and $Y$, respectively. Then with probability at least $1 - \delta$ we have that
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F \le O\left(\varepsilon \sqrt{n \log 1/\delta}\left(\hat{m} + \sqrt{\frac{\min\{r, \hat{m}\}}{T_{\min}^\star}}\right)\right).$$
The proof of Lemma 4 is given in Appendix C.2. With the help of Lemma 4, the proof of Theorem 3 involves choosing a reference label $Y(r)$ that involves only the relevant noise fitted by Algorithm 2. We then control the generalization error by estimating the difference between $W^\star X$ and $Y^\star(r)$, and the difference between $Y^\star(r)$ and $Y(r)$, using Lemma 4.

C.1 More details in the proof of Theorem 2

Utilizing Lemma 3, we get that
$$\|b_k^\star a_k^{\star\top} X - b_k a_k^\top X\|_F \le \Big(2 + \frac{6\sigma_k^\star}{T_k^\star}\Big)\|Y_k^\star - Y_k\|_F + \sigma_{\max}(X)\|\delta_k\|_F.$$
Plugging in (32), we have that
$$\|b_k^\star a_k^{\star\top} X - b_k a_k^\top X\|_F \le \sigma_{\max}(X) \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big).$$
Since $\sigma_{\min}(X) > 0$, we then have
$$\|b_k^\star a_k^{\star\top} X - b_k a_k^\top X\|_F \ge \sigma_{\min}(X)\,\|b_k^\star a_k^{\star\top} - b_k a_k^\top\|_F \;\Rightarrow\; \|b_k^\star a_k^{\star\top} - b_k a_k^\top\|_F \le \frac{1}{\sigma_{\min}(X)}\|b_k^\star a_k^{\star\top} X - b_k a_k^\top X\|_F.$$
This implies that
$$\|b_k^\star a_k^{\star\top} - b_k a_k^\top\|_F \le \kappa(X) \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big),$$
which proves the first statement. To prove the second statement, we directly use Theorem 1 to get
$$\Big\|W^\star - \sum_{k=1}^r b_k a_k^\top\Big\|_F \le \frac{1}{\sigma_{\min}(X)}\Big\|W^\star X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F = \frac{1}{\sigma_{\min}(X)}\Big\|Y - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sum_{k=r+1}^{r^\star} \frac{\sigma_k^\star}{\sigma_{\min}(X)} + \kappa(X) \sum_{k=1}^r \sum_{k'=1}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big). \qquad\square$$

C.2 Proof of Lemma 4

Proof.
By Lemma 6, we have that with probability $1 - \delta$,
$$\sigma_{\max}(E) \le O\left(\varepsilon\Big(\sqrt{n} + \sqrt{\log\tfrac{1}{\delta}}\Big)\right).$$
To start, by Weyl's inequality, we have that
$$|\hat{\sigma}_k - \sigma_k^\star| \le \sigma_{\max}(E) \le O\left(\varepsilon\Big(\sqrt{n} + \sqrt{\log\tfrac{1}{\delta}}\Big)\right).$$
Therefore, taking $\varepsilon \le O\big(T_{\min}^\star / (\sqrt{n} + \sqrt{\log\frac{1}{\delta}})\big)$ ensures that $\min\{\min_{j \ne k} |\sigma_j - \sigma_k^\star|,\ \sigma_k^\star\} \ge \frac{1}{2} T_{\min}^\star$. Thus, by Wedin's theorem, we have that
$$2 - \hat{u}_k^\top u_k^\star - \hat{v}_k^\top v_k^\star \le \frac{2}{T_{\min}^\star}\big(\|E^\top u_k^\star\|_2^2 + \|E v_k^\star\|_2^2\big).$$
We consider two cases. First, when $\hat{m} \le r$, we have
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F = \Big\|\sum_{k=1}^{\hat{m}} \big(\sigma_k^\star u_k^\star v_k^{\star\top} - \hat{\sigma}_k \hat{u}_k \hat{v}_k^\top\big)\Big\|_F \le \Big\|\sum_{k=1}^{\hat{m}} (\hat{\sigma}_k - \sigma_k^\star) u_k v_k^\top\Big\|_F + \big\|\hat{U}_{\hat{m}}\Sigma^\star_{\hat{m}}\hat{V}_{\hat{m}}^\top - U^\star_{\hat{m}}\Sigma^\star_{\hat{m}}V^{\star\top}_{\hat{m}}\big\|_F$$
$$\le \sum_{k=1}^{\hat{m}} |\hat{\sigma}_k - \sigma_k^\star| + \big\|(\hat{U}_{\hat{m}} - U^\star_{\hat{m}})\Sigma^\star_{\hat{m}}\big\|_F + \big\|(\hat{V}_{\hat{m}} - V^\star_{\hat{m}})\Sigma^\star_{\hat{m}}\big\|_F \le \sum_{k=1}^{\hat{m}} |\hat{\sigma}_k - \sigma_k^\star| + \sigma_1^\star\big(\|\hat{U}_{\hat{m}} - U^\star_{\hat{m}}\|_F + \|\hat{V}_{\hat{m}} - V^\star_{\hat{m}}\|_F\big)$$
$$\le \sum_{k=1}^{\hat{m}} |\hat{\sigma}_k - \sigma_k^\star| + 2\sigma_1^\star\big(\|\hat{U}_{\hat{m}} - U^\star_{\hat{m}}\|_F^2 + \|\hat{V}_{\hat{m}} - V^\star_{\hat{m}}\|_F^2\big)^{1/2}.$$
Notice that
$$\|\hat{U}_{\hat{m}} - U^\star_{\hat{m}}\|_F^2 = 2\hat{m} - 2\big\langle \hat{U}_{\hat{m}}, U^\star_{\hat{m}}\big\rangle = 2\hat{m} - 2\sum_{k=1}^{\hat{m}} \hat{u}_k^\top u_k^\star, \qquad \|\hat{V}_{\hat{m}} - V^\star_{\hat{m}}\|_F^2 = 2\hat{m} - 2\big\langle \hat{V}_{\hat{m}}, V^\star_{\hat{m}}\big\rangle = 2\hat{m} - 2\sum_{k=1}^{\hat{m}} \hat{v}_k^\top v_k^\star.$$
Therefore
$$\|\hat{U}_{\hat{m}} - U^\star_{\hat{m}}\|_F^2 + \|\hat{V}_{\hat{m}} - V^\star_{\hat{m}}\|_F^2 \le 4\hat{m} - 2\sum_{k=1}^{\hat{m}} \big(\hat{u}_k^\top u_k^\star + \hat{v}_k^\top v_k^\star\big) \le \frac{4}{T_{\min}^\star}\sum_{k=1}^{\hat{m}} \big(\|E^\top u_k^\star\|_2^2 + \|E v_k^\star\|_2^2\big) = \frac{4}{T_{\min}^\star}\big(\|E^\top U^\star_{\hat{m}}\|_F^2 + \|E V^\star_{\hat{m}}\|_F^2\big).$$
Since $E \in \mathbb{R}^{m \times n}$ contains i.i.d. Gaussian entries from $\mathcal{N}(0, \varepsilon^2)$, we must have that $U^{\star\top}_{\hat{m}} E \in \mathbb{R}^{\hat{m} \times n}$ and $E V^\star_{\hat{m}} \in \mathbb{R}^{m \times \hat{m}}$ contain i.i.d. Gaussian entries from $\mathcal{N}(0, \varepsilon^2)$. By Lemma 6, with probability at least $1 - \delta$ it holds that
$$\|U^{\star\top}_{\hat{m}} E\|_F^2 + \|E V^\star_{\hat{m}}\|_F^2 \le O\big(\varepsilon^2 (m + n)\,\hat{m}\log 1/\delta\big).$$
Thus, we have
$$\|\hat{U}_{\hat{m}} - U^\star_{\hat{m}}\|_F^2 + \|\hat{V}_{\hat{m}} - V^\star_{\hat{m}}\|_F^2 \le O\left(\frac{\varepsilon^2}{T_{\min}^\star}\,\hat{m}(m + n)\log 1/\delta\right).$$
Combining the results above, we have
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F \le O\Big(\varepsilon\,\hat{m}\big(\sqrt{n} + \sqrt{\log 1/\delta}\big)\Big) + O\left(\frac{\varepsilon}{\sqrt{T_{\min}^\star}}\sqrt{r(m + n)\log 1/\delta}\right).$$
Next, we consider the case $\hat{m} \ge r$. In this case, we have
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F \le \big\|Y^\star - Y(r)\big\|_F + \Big\|\sum_{k=r+1}^{\hat{m}} \sigma_k u_k v_k^\top\Big\|_F \le O\Big(\varepsilon r\big(\sqrt{n} + \sqrt{\log 1/\delta}\big)\Big) + O\left(\frac{\varepsilon}{\sqrt{T_{\min}^\star}}\sqrt{r(m + n)\log 1/\delta}\right) + \sum_{k=r+1}^{\hat{m}} \sigma_k.$$
Notice that by Weyl's inequality, for all $k \ge r$,
$$\sigma_k = |\sigma_k - 0| \le \sigma_{\max}(E) \le O\left(\varepsilon\Big(\sqrt{n} + \sqrt{\log\tfrac{1}{\delta}}\Big)\right).$$
This gives
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F \le O\Big(\varepsilon\,\hat{m}\big(\sqrt{n} + \sqrt{\log 1/\delta}\big)\Big) + O\left(\frac{\varepsilon}{\sqrt{T_{\min}^\star}}\sqrt{r(m + n)\log 1/\delta}\right).$$
Combining the two cases, and using $m \le n$, we have that
$$\big\|Y^\star(\hat{m}) - Y(\hat{m})\big\|_F \le O\left(\varepsilon\sqrt{n\log 1/\delta}\left(\hat{m} + \sqrt{\frac{\min\{r, \hat{m}\}}{T_{\min}^\star}}\right)\right). \qquad\square$$

C.3 Proof of Theorem 3

Given the SVDs of $Y$ and $Y^\star$ as $Y = \sum_{k=1}^p \sigma_k^\star u_k^\star v_k^{\star\top}$ and $Y^\star = \sum_{k=1}^{r^\star} \hat{\sigma}_k \hat{u}_k \hat{v}_k^\top$, we define
$$Y(r) = \sum_{k=1}^r \sigma_k^\star u_k^\star v_k^{\star\top}; \qquad Y^\star(r) = \sum_{k=1}^{\min\{r, r^\star\}} \hat{\sigma}_k \hat{u}_k \hat{v}_k^\top.$$
Then we can decompose the error $\|W^\star X - \sum_{k=1}^r b_k a_k^\top X\|_F$ as
$$\Big\|W^\star X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \|W^\star X - Y^\star(r)\|_F + \|Y^\star(r) - Y(r)\|_F + \Big\|Y(r) - \sum_{k=1}^r b_k a_k^\top X\Big\|_F.$$
We will analyze each of the three terms individually. To start, for the first term, we notice that $Y^\star(r)$ is precisely the truncated SVD of $W^\star X$ when $r < r^\star$. Therefore
$$\|W^\star X - Y^\star(r)\|_F \le \sum_{k=r+1}^{r^\star} \sigma_k(W^\star X) \le \sigma_{\max}(X)\sum_{k=r+1}^{r^\star} \sigma_k(W^\star).$$
For the second term, by Lemma 4, we have that with probability at least $1 - \gamma$,
$$\|Y^\star(r) - Y(r)\|_F \le O\left(\varepsilon\sqrt{n\log 1/\gamma}\left(r + \sqrt{\frac{\min\{r^\star, r\}}{T_{\min}^\star}}\right)\right).$$
Lastly, by Theorem 4, we have that
$$\Big\|Y(r) - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sigma_{\max}(X)\sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big).$$
Combining the above equations, we have that
$$\Big\|W^\star X - \sum_{k=1}^r b_k a_k^\top X\Big\|_F \le \sigma_{\max}(X)\sum_{k=r+1}^{r^\star} \sigma_k(W^\star) + \sigma_{\max}(X)\sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big) + O\left(\varepsilon\sqrt{n\log 1/\gamma}\left(r + \sqrt{\frac{\min\{r^\star, r\}}{T_{\min}^\star}}\right)\right).$$
This gives that
$$\Big\|W^\star - \sum_{k=1}^r b_k a_k^\top\Big\|_F \le \kappa(X)\left(\sum_{k=r+1}^{r^\star} \sigma_k(W^\star) + \sum_{k=1}^r \sum_{k'=0}^{k} \|\delta_{k'}\|_F \prod_{j=k'+1}^{k} \Big(2 + \frac{6\sigma_j^\star}{T_j^\star}\Big)\right) + O\left(\frac{\varepsilon\sqrt{n\log 1/\gamma}}{\sigma_{\min}(X)}\left(r + \sqrt{\frac{\min\{r^\star, r\}}{T_{\min}^\star}}\right)\right)
which completes the proof. □

D Supporting theorems and lemmas

Lemma 5. Consider a sequence of quantities $\{Q_k\}_{k=1}^\infty$ satisfying $Q_{k+1} = a_k Q_k + b_k$ with some $a_k, b_k \ge 0$ for all $k \in \mathbb{Z}_+$. Set $b_0 = Q_1$. Then we have that
$$Q_k = \sum_{k'=0}^{k-1} b_{k'} \prod_{j=k'+1}^{k-1} a_j.$$
Proof. We shall prove by induction. For the base case, let $k = 1$. In this case, we have
$$Q_1 = \sum_{k'=0}^{0} b_{k'} \prod_{j=k'+1}^{0} a_j = b_0 = Q_1.$$
For the inductive case, assume that the property holds for $k$. Then we have
$$Q_{k+1} = a_k Q_k + b_k = a_k \sum_{k'=0}^{k-1} b_{k'} \prod_{j=k'+1}^{k-1} a_j + b_k = \sum_{k'=0}^{k} b_{k'} \prod_{j=k'+1}^{k} a_j.$$
This proves the inductive step and finishes the proof. □

Lemma 6. Let $M \in \mathbb{R}^{m \times n}$ be a matrix containing i.i.d. Gaussian entries from $\mathcal{N}(0, 1)$. Then with probability at least $1 - \delta$, the following hold:
• $\sigma_{\max}(M) \le O\big(\sqrt{m} + \sqrt{n} + \sqrt{\log 1/\delta}\big)$;
• $\|M\|_F \le O\big(\sqrt{mn\log 1/\delta}\big)$.

Proof. By standard results on Gaussian random matrices and vectors, we have
• $\mathbb{P}\big(\sigma_{\max}(M) \le O(\sqrt{m} + \sqrt{n} + t_1)\big) \ge 1 - \exp(-t_1^2)$;
• $\mathbb{P}\big(\|M\|_F \le t_2\big) \ge 1 - 2\exp\big(-\tfrac{t_2^2}{2mn}\big)$.
Taking $t_1 = \sqrt{\log\tfrac{2}{\delta}}$ and $t_2 = \sqrt{2mn\log\tfrac{4}{\delta}}$ finishes the proof. □

Theorem 5 (Eckart–Young–Mirsky theorem). Let $A \in \mathbb{R}^{m \times n}$ be a matrix with singular value decomposition $A = U\Sigma V^\top$, where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0$. For any integer $k \le \min(m, n)$, let $A_k = U_k\Sigma_k V_k^\top$ be the best rank-$k$ approximation of $A$, where $U_k$ and $V_k$ consist of the first $k$ columns of $U$ and $V$, and $\Sigma_k$ is the diagonal matrix of the largest $k$ singular values. Then $A_k$ minimizes the approximation error in both the Frobenius norm and the spectral norm:
$$A_k = \arg\min_{B,\ \mathrm{rank}(B) = k} \|A - B\|_F.$$

Theorem 6 (Wedin theorem ([90])). Let $M, \tilde{M} \in \mathbb{R}^{m \times n}$ be two matrices with rank-$r$ SVDs:
$$M = [U_1\ U_2]\begin{bmatrix}\Sigma_1 & 0\\ 0 & \Sigma_2\end{bmatrix}\begin{bmatrix}V_1^\top\\ V_2^\top\end{bmatrix}, \qquad \tilde{M} = M + \Delta = [\tilde{U}_1\ \tilde{U}_2]\begin{bmatrix}\tilde{\Sigma}_1 & 0\\ 0 & \tilde{\Sigma}_2\end{bmatrix}\begin{bmatrix}\tilde{V}_1^\top\\ \tilde{V}_2^\top\end{bmatrix}.$$
If $\delta = \min\{\min_{1\le i\le r,\ r+1\le j\le n} |\sigma_i - \tilde{\sigma}_j|,\ \min_{1\le i\le r}\sigma_i\} > 0$, then:
$$\big\|\sin\theta(\tilde{U}_1, U_1)\big\|_F^2 + \big\|\sin\theta(\tilde{V}_1, V_1)\big\|_F^2 \le \frac{\|U_1^\top\Delta\|_F^2 + \|\Delta V_1\|_F^2}{\delta^2}.$$

Theorem 7 (Weyl's theorem for singular values ([91])). Let $M$ and $\Delta$ be $m \times n$ matrices. If $\tilde{M} = M + \Delta$, then the singular values $\sigma_i$ of $M$ and $\tilde{\sigma}_i$ of $\tilde{M}$ satisfy, for all $i = 1, 2, \ldots, \min(m, n)$:
$$|\tilde{\sigma}_i - \sigma_i| \le \|\Delta\|_2.$$

E Experimental analysis on linear matrix regression

We present experiments that validate our theory on error propagation in sequential rank-1 learning. Our experiments aim to demonstrate how the distribution of computational resources across rank-1 components affects the overall approximation quality, focusing in particular on how errors in early components propagate to later stages of the sequential learning process.

Problem setting. Per our theory, we consider the low-rank linear regression problem of finding $W^\star \in \mathbb{R}^{m \times n}$ with rank $\le r$ such that $Y = W^\star X + E$, where $E$ is the noise term. This corresponds to finding a low-rank approximation of $W^\star$. We investigate the following settings:
1. Singular value profiles: We vary the singular value distribution of $W^\star$ to analyze how the spectrum of the ground truth influences error propagation.
2. Noise variations: We introduce different types and levels of noise to assess the robustness of sequential rank-1 learning to perturbations.
3.Iteration allocation strategies: We evaluate three different iteration allocation strategies: (a)Equal: Same number of optimization iterations to each rank-1 component. (b)More First: More iterations allocated to the earlier components and fewer to later ones. (c)Less First: Fewer iterations allocated to the earlier components and more to later ones. To ensure statistical robustness, all experiments are repeated across 5 independent trials. We report the mean performance across these trials, and visualize variability using shaded bands that represent the standard deviation. We consider matrix dimensions W⋆∈R500×1000; we observed that experiments varying the dimensions of W⋆do not introduce any additional value to the main messages of this section. We generate W⋆with different singular value profiles, as in: •Uniform: σi= 10 for all i= 1, . . . , r⋆; •Exponential decay: σi= 100 · 1 100i−1 r⋆−1fori= 1, . . . , r⋆; •Power-law decay: σi=100 i2fori= 1, . . . , r⋆; where r⋆is the true rank of W⋆. Without loss of generality, we fix the rank r⋆to20as we did not observe unexpected behaviors in the performance of the algorithms for different rank values. We also consider several noise scenarios to evaluate robustness: i)Noiseless; ii)Gaussian where Ehas i.i.d. entries from N(0, κ)with κ∈ {0.01,0.05,0.1}, and iii)Sparse where Eis a sparse matrix (in our case 5% of entries are non-zeros) with non-zero entries from N(0, κ) 28 with κ∈ {1,10}. Per our theory, Xis sampled from a standard Gaussian distribution, N(0,1). Effect of singular | https://arxiv.org/abs/2505.22602v1 |
value profile. We study whether the singular value profile of $W^\star$ has an impact on error propagation through the term $T^\star_k = \min\{\min_{j>k} |\sigma^\star_k - \sigma^\star_j|,\ \sigma^\star_k\}$ in our error bound. Figure 5 shows the singular value decay patterns of both $W^\star$ and the resulting $Y$ under different spectral profiles. Figure 6 illustrates the training and reconstruction errors under these three profiles. To ensure a fair comparison across different spectral profiles, we normalize the singular values of $W^\star$ such that all generated matrices have the same Frobenius norm. This avoids artificially inflating or deflating error magnitudes due to differences in matrix scale rather than the structure of the singular value decay.

Figure 5: Comparison of singular value decay under different profiles. Left: singular values of $W^\star$. Right: singular values of $Y = W^\star X$.

Figure 6: Effect of singular value profile on sequential learning performance. Left: $W^\star$ reconstruction error. Right: objective's training error.

Observations: The power-law decay profile shows the best performance, followed by the exponential decay, with the uniform profile performing worst. This matches the theoretical insight that large singular value gaps reduce the compounding of downstream error. Notably, power-law decay starts steep at the head (its first few singular values are significantly larger), creating large gaps for early components. In contrast, exponential decay is smoother initially and decays more evenly. Uniform singular values exhibit no decay, leading to minimal or zero gaps throughout.

Impact of noise level. Our theoretical analysis extends to noisy settings through Theorem 3, which characterizes how additive noise impacts generalization performance. Figure 7 illustrates the effect of increasing noise levels $\kappa$ on both the training and reconstruction error under Gaussian and sparse noise. Observations: As expected, increasing the noise level $\kappa$ leads to higher reconstruction error in both Gaussian and sparse settings.
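The data generation behind these experiments can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names are ours, sizes are scaled down, and $\kappa$ is read as the noise variance (the paper writes $\mathcal{N}(0,\kappa)$ without specifying variance vs. standard deviation).

```python
import numpy as np

def make_w_star(m, n, r, profile, rng):
    """Ground truth W* = U diag(sigma) V^T under one of the three singular
    value profiles, normalized to a common Frobenius norm so that profiles
    are comparable (as done in the paper's fair-comparison setup)."""
    i = np.arange(1, r + 1)
    if profile == "uniform":
        sigma = np.full(r, 10.0)
    elif profile == "exponential":
        sigma = 100.0 * (1.0 / 100.0) ** ((i - 1) / (r - 1))
    else:  # power-law
        sigma = 100.0 / i ** 2
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    W = (U * sigma) @ V.T
    return W / np.linalg.norm(W)  # equal Frobenius norm across profiles

def observe(W, num_samples, noise, kappa, rng):
    """Observations Y = W X + E; Gaussian noise is dense, sparse noise
    keeps roughly 5% of the entries."""
    X = rng.standard_normal((W.shape[1], num_samples))
    E = np.sqrt(kappa) * rng.standard_normal((W.shape[0], num_samples))
    if noise == "sparse":
        E *= rng.random(E.shape) < 0.05
    return X, W @ X + E

rng = np.random.default_rng(0)
W = make_w_star(50, 100, 20, "power-law", rng)
X, Y = observe(W, 200, "gaussian", 0.05, rng)
```

The normalization step matters: without it, the uniform profile (constant $\sigma_i = 10$) and the power-law profile (head value 100) would differ in total energy, confounding the comparison of decay structure with a comparison of scale.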
Higher noise levels tend to corrupt the smaller singular values of $Y$, making it difficult to distinguish low-rank structure from noise. This can lead to overfitting in later components of the sequential learner, as the algorithm begins to capture noise rather than signal.

Figure 7: Impact of noise level. Left: Gaussian noise. Right: sparse noise.

Effect of iteration allocation strategies in noisy settings. To investigate mitigation strategies, we first evaluate how different iteration allocation strategies perform under noisy conditions. Figure 8 shows that the "more-first" strategy consistently outperforms the others across varying noise levels by concentrating effort where it matters most: early in the sequence.

Figure 8: Comparison of iteration allocation strategies under different noise levels ((a) $\kappa = 0.1$, (b) $\kappa = 0.5$, (c) $\kappa = 1$, (d) $\kappa = 1.5$). The "more-first" strategy achieves better reconstruction error across all $\kappa$ values.

Observation: Even in noisy settings, the more-first strategy consistently outperforms equal, which in turn outperforms less-first, across all noise levels $\kappa$. This highlights the importance of prioritizing early iterations to mitigate error amplification under noise.

Effect of singular value profiles in noisy settings. We further examine how spectral decay influences robustness under noise. Using the more-first allocation strategy, Figure 9 shows that power-law decay consistently achieves lower reconstruction error compared to the exponential and uniform profiles across all noise levels $\kappa$.

Observation: Spectral decay plays a critical role in robustness. Power-law
decay, with its large leading singular values and wider gaps, allows early components to capture most of the signal, mitigating downstream error propagation. In contrast, uniform profiles lack this protective structure, making them more vulnerable to noise.

Figure 9: Effect of singular value profiles under noise ((a) $\kappa = 0.05$, (b) $\kappa = 0.1$, (c) $\kappa = 0.5$, (d) $\kappa = 1$). Power-law decay consistently achieves lower reconstruction error, followed by exponential and then uniform profiles, highlighting the benefit of spectral decay even in noisy settings.

Implications for Practical Use: These results suggest that sequential learners can be made more robust in the presence of noise by combining two strategies: allocating more iterations to early components and leveraging spectral decay. By front-loading optimization effort where it is impactful (at the beginning of the sequence) and favoring matrices with decaying singular values (especially power-law decay), models maintain lower reconstruction error despite increasing noise levels.

Computational efficiency analysis. Beyond approximation quality, we also analyze the computational efficiency of different iteration allocation strategies. Specifically, we investigate how quickly each strategy reduces the reconstruction error to a desired threshold. Figure 10 illustrates, for a range of target error thresholds, the number of iterations required by each allocation strategy to reach that threshold.

Observations: The more-first strategy consistently reaches target reconstruction thresholds faster than the equal or less-first strategies. This aligns with our intuition that prioritizing the early components, those with the greatest influence on downstream error propagation, leads to quicker convergence. In contrast, less-first allocation delays learning the principal directions, requiring more total iterations to reach the same accuracy.
This suggests that our theoretical insights can lead to more computationally efficient algorithms for low-rank approximation.

Figure 10: Number of iterations required to reach reconstruction error thresholds for different allocation strategies ((a) threshold 1, (b) threshold 1.5, (c) threshold 2, (d) threshold 2.5). Each subplot corresponds to a fixed error threshold. The "more-first" strategy consistently reaches the thresholds faster, especially for tighter reconstruction targets. In subplot (a), the "less-first" strategy fails to reach the threshold even after 10,000 iterations.

F More results on the LoRA experiments

Adaptation performance across datasets. Figure 11 displays the relationship between parameter efficiency (measured by test accuracy per training epoch) and total training epochs for different model architectures. Here, Rank-1 architectures correspond to just using $r = 1$ for different numbers of epochs; Rank-2 architectures correspond to $r = 2$, where the components are trained for different combinations of total epochs (e.g., some models have been trained with 1→1 epochs, while others have been trained with 10→10 epochs; more about this in the next paragraph), and so on. The bubble sizes represent the relative efficiency of each configuration. Across all datasets, we observe that sequential rank-1 approaches (Rank-1, Rank-2, and Rank-3) consistently achieve higher parameter efficiency compared to standard LoRA. However, sequential rank-1 models require more total training to achieve comparable accuracy, creating a trade-off to be taken into consideration in practice, but they still maintain favorable parameter-to-performance ratios in some cases.

Sequential training paths. Figure 12 illustrates the effectiveness of different sequential training paths for all the cases, where each
path represents a sequence of component training durations. For example, path "1→3→5" indicates a rank-3 LoRA where the first component received 1 epoch of training, the second component 3 epochs, and the third component 5 epochs. In all cases, it is evident that a good first component implies (almost all the time) a better combined final model: front-loaded training schedules perform better, indicating that the first component captures most of the necessary adaptation, with diminishing returns for extensive training of later components. Figure 13 depicts a similar picture: for every rank-$r$ architecture, we show how well the model performs (the variance bars indicate how good or bad the model ends up, depending on the different number of epochs spent on each component of the low-rank sequential architecture).

Impact of baseline model quality. Our experiments across the three datasets reveal that the effectiveness of sequential LoRA adaptation is influenced by, but not dependent on, the quality of the baseline model. Even with the relatively poor CIFAR100 baseline, sequential LoRA successfully adapts to new classes, albeit with lower absolute performance compared to the better-initialized MNIST case. This observation has practical implications: sequential rank-1 adaptation offers a viable approach for model extension even when the initial model is suboptimally trained. The method provides a parameter-efficient way to incrementally improve model capabilities without full retraining, regardless of the quality of the starting point, often leading to better results than regular LoRA, at the expense of more computation. We again note the interesting property our approach introduces: sequential rank-1 does not require knowing the rank of the adaptation a priori; one can check online whether accuracy is sufficient and stop further training.
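This online stopping rule can be sketched in a toy linear version, where validation accuracy is replaced by residual error and each rank-1 component is fit in closed form via the SVD of the residual; the function name, tolerance, and sizes are illustrative, not the paper's LoRA setup.

```python
import numpy as np

def adapt_until_good_enough(target, tol, max_rank=16):
    """Sequential rank-1 adaptation with online stopping: keep adding
    components until the residual error is acceptable, so the rank of
    the adaptation never has to be fixed a priori."""
    delta = np.zeros_like(target)        # accumulated low-rank update
    for rank in range(1, max_rank + 1):
        R = target - delta               # residual the next component fits
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        delta += s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 fit of R
        if np.linalg.norm(target - delta) <= tol:
            return delta, rank           # stop once accuracy suffices
    return delta, max_rank

# Rank-3 target whose third singular value (0.5) is below the tolerance.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((30, 3)))
V, _ = np.linalg.qr(rng.standard_normal((40, 3)))
target = (U * np.array([5.0, 2.0, 0.5])) @ V.T
delta, used_rank = adapt_until_good_enough(target, tol=0.6)
print(used_rank)   # 2: after two components the residual norm is 0.5 <= 0.6
```

The loop discovers that rank 2 already suffices and never spends effort on a third component, mirroring the "check online and stop" behavior described above.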
Standard LoRA lacks this property: either the user needs to know a good value for $r$, or one needs to try different $r$ values from scratch before making the final decision.

Error propagation analysis. Our theoretical analysis predicted that errors in early components of sequential learning would propagate to later stages. The empirical results across all three datasets confirm this prediction. For all datasets, we observe that when the first component is poorly trained (1 epoch), the final performance of rank-3 models is substantially lower than when the first component receives adequate training (5-10 epochs), even when later components are well-trained. This error propagation effect is most pronounced for MNIST, where the accuracy difference between paths "1→10→10" and "10→1→1" can be as large as 5-7 percentage points. The effect is less dramatic but still observable for CIFAR100, where the overall lower performance baseline makes the relative impact of component quality more uniform. These findings validate our theoretical error bounds and highlight the importance of carefully allocating computational resources across sequential components, with particular attention to early components that form the foundation for subsequent adaptation steps.

Figure 11: Test accuracy of sequential rank-1 LoRA components when adapting to new classes across the three datasets. Top: MNIST. Center: CIFAR10. Bottom: CIFAR100. Note that, on purpose, the pretrained models are trained with good (MNIST), mediocre (CIFAR10) and
arXiv:2505.22608v1 [cs.SD] 28 May 2025

Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates

Haoning Xu1, Zhaoqing Li1, Youjun Chen1, Huimeng Wang1, Guinan Li1, Mengzhe Geng2, Chengxi Deng1, Xunying Liu1
1The Chinese University of Hong Kong, Hong Kong SAR, China
2National Research Council Canada, Canada
hnxu@se.cuhk.edu.hk, xyliu@se.cuhk.edu.hk

Abstract

This paper presents a novel approach for speech foundation model compression that tightly integrates model pruning and parameter update into a single stage. Highly compact layer-level tied self-pinching gates, each containing only a single learnable threshold, are jointly trained with uncompressed models and used in fine-grained neuron-level pruning. Experiments conducted on the LibriSpeech-100hr corpus suggest that our approach reduces the number of parameters of the wav2vec2.0-base and HuBERT-large models by 65% and 60% respectively, while incurring no statistically significant word error rate (WER) increase on the test-clean dataset. Compared to previously published methods on the same task, our approach not only achieves the lowest WER of 7.05% on the test-clean dataset under a comparable model compression ratio of 4.26x, but also operates with at least 25% less model compression time.

Index Terms: speech recognition, model pruning, speech foundation models

1. Introduction

In recent years, advancements in self-supervised learning (SSL) for speech technologies, particularly through foundation models like wav2vec2.0 [1], HuBERT [2] and WavLM [3], have greatly improved their utility in applications like automatic speech recognition (ASR). Despite these advancements, the widespread adoption of these models in resource-constrained and on-device environments is limited due to their substantial memory and computational demands.
To address this challenge, extensive research has explored diverse neural network compression methods for ASR tasks, including but not limited to: 1) low-bit quantization approaches that reduce memory footprint by replacing floating-point weights with low-precision values [4-8]; and 2) architecture compression methods that focus on reducing structural redundancy in models, such as low-rank matrix factorization [9-11], knowledge distillation [12-18] and model pruning [19-25]. Furthermore, larger model compression ratios can be achieved by combining low-bit quantization and architecture compression [10, 26-28].

However, previous studies on model pruning of SSL-based ASR systems face the following limitations: 1) Significant performance degradation is often observed as an increase in WER after compression. Coarse-grained "structured" pruning methods, which operate at the level of larger structures (e.g., channels) rather than individual parameters, may inadvertently remove critical parameters, resulting in performance degradation [22, 25, 29, 30]. It is also important to note that most studies fail to clearly define a criterion, such as statistical significance tests [31], to differentiate "acceptable" and "unacceptable" performance loss due to compression. 2) Inconsistency between model pruning and parameter update creates two separate and disjointed stages during compression, often leading to a large performance degradation. Approaches adopted in [21, 24, 32, 33] employ sequential paradigms that optimize pruning masks and parameters separately, decoupling pruning from parameter optimization in the ASR task. For example, the parameter pruning in [21] is driven by an independent layer-wise evaluation before post-pruning fine-tuning. 3) Substantial training
time inevitably arises from post-pruning refinement stages, including knowledge distillation [29], fine-tuning [21], and iterative pruning [24, 27, 32-34], which not only prolong development cycles but also complicate experimental workflows due to multi-stage requirements. 4) Excessive pruning-required parameter overhead, such as: i) the extra parameters in KD-based pruning [22, 29], ii) the use of candidate architecture-specific weights in [10, 22, 25, 29] and iii) the parametrization of entire masks as trainable matrices [35].

To this end, this paper proposes a novel compression approach for SSL speech foundation models that tightly integrates model pruning and parameter update into a single stage, referred to as the one-pass stage. In each layer, the sparsity-aware self-pinching gate generates a differentiable pruning probability for each parameter by comparing its magnitude with the learnable threshold. For wav2vec2.0-base and HuBERT-large, only 6×12 = 72 and 6×24 = 144 additional parameters are introduced as thresholds, respectively^1.

Experiments conducted on the LibriSpeech dataset suggest that the pruned wav2vec2.0-base and HuBERT-large models using our method 1) significantly outperform both i) uniform magnitude-based pruning, where pruning and parameter update are separated, and ii) neural architecture search (NAS) [36] based channel-wise pruning, which introduces additional parameters for coarse-grained pruning, particularly when the overall sparsity reaches 50% or greater, and 2) further achieve lossless^2 compression at sparsities of 65% and 60% on the test-clean subset, respectively. It should be noted that compressing wav2vec2.0-base and HuBERT-large only takes 11 and 13 GPU hours, respectively.
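As a minimal sketch of the gate described above (forward pass only; the straight-through backward pass, the CTC loss, and the joint training loop from Sec. 4 are omitted, and the values are illustrative):

```python
import numpy as np

def self_pinching_gate(W, t, tau):
    """Layer-level sparsity-aware self-pinching gate: a single learnable
    threshold t per layer turns each weight's squared magnitude into a
    differentiable pruning probability; at inference the soft mask is
    binarized, so weights with |W_ij| >= t survive."""
    soft = 1.0 / (1.0 + np.exp(-(W ** 2 - t ** 2) / tau))  # soft mask
    hard = (W ** 2 >= t ** 2).astype(W.dtype)              # binary mask
    return soft, hard

def layer_sparsity(hard):
    """Fraction of weights pruned in this layer."""
    return 1.0 - hard.sum() / hard.size

W = np.array([[0.50, 0.01], [-0.20, 0.001]])
soft, hard = self_pinching_gate(W, t=0.1, tau=0.01)
print(layer_sparsity(hard))   # 0.5: two of the four weights fall below |t|
```

Because the soft mask depends on both the weights and the threshold, gradients flow to $t$ during training and each layer settles on its own sparsity, which is what makes the mixed-sparsity allocation possible with only one extra parameter per layer.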
Compared with prior pruning methods for SSL-based ASR systems, the proposed method obtains the lowest WER of 7.05% on the test-clean subset, while also reducing fine-tuning time, with the model size limited to 23M parameters under a model compression ratio of 4.26x.

^1 Compared to the uncompressed wav2vec2.0-base and HuBERT-large models, the additional pruning-required parameters account for only 8e-7 and 5e-7 of the total model size, respectively.
^2 "Lossless" in this paper refers to no statistically significant WER increase against the uncompressed baseline.

The main contributions of this paper include:

1) Our method reduces the fragility to pruning by using the proposed layer-level sparsity-aware self-pinching gate. Compared to coarse-grained pruning approaches [22, 25, 29, 30], our fine-grained method assigns a specific pruning probability to each parameter, leading to better ASR performance.

2) Our method ensures consistency between model pruning and parameter update by integrating them into a single stage for SSL-based ASR systems. In contrast, the previous fine-grained approach [21] performed pruning and fine-tuning separately. This one-pass compression stage also enables different layers to be pruned at varying sparsity based on their sensitivities, thereby achieving optimal mixed-sparsity assignments.

3) Our method demonstrates efficiency in terms of compression time, as it eliminates the need for additional operations such as post-pruning fine-tuning [21] or iterative pruning [32] after the one-pass compression stage.

4) Our method guarantees the compactness of pruning-required parameters by introducing a single threshold as the only additional component for each layer. In contrast, the number of additional pruning-required parameters in previous methods depends on i) the design of the teacher-student model
[22, 29], ii) the number of candidates [10, 25] or iii) the layer size [35].

2. wav2vec2.0 and HuBERT Models

Speech SSL models such as wav2vec2.0 [1], HuBERT [2], and WavLM [3] share similar Transformer backbones. For example, HuBERT consists of a CNN encoder, a Transformer encoder, a projection layer and a code embedding layer. The Transformer encoder accounts for over 90% of the total number of parameters, and each of its encoder blocks contains an MHSA module and an FFN module. In this work, we fine-tuned wav2vec2.0-base and HuBERT-large with a pure CTC decoder and pruned parameters in the 6 linear layers of each Transformer encoder block.

3. Previous works

3.1. Uniform Magnitude-based Pruning

Uniform Magnitude-based Pruning (UMP) is inspired by the principle that parameters with smaller magnitudes (absolute values) have less influence on the output. As shown in Fig. 1 (a), in UMP the parameters of a specific layer are sorted by magnitude, and the same proportion of parameters with relatively small magnitude is pruned across all layers, leading to consistent sparsity throughout the network. However, UMP enforces isotropic sparsity across layers, ignoring layer-specific sensitivity profiles. While some layers tolerate aggressive pruning with little accuracy loss, others degrade significantly even with conservative sparsity.

3.2. NAS-based Channel-wise Pruning

A channel is defined as a row (index, :) or column (:, index) of the weight matrix of a linear layer. For example, in the $p$-th Transformer block, 1) the $m$-th channels in the weight matrices of the Multi-Head Self-Attention module, denoted as $\{\mathrm{Query}^p_{m,:}, \mathrm{Key}^p_{m,:}, \mathrm{Value}^p_{m,:}, \mathrm{Out}^p_{:,m}\}$, are simultaneously pruned, and 2) the $n$-th channels in the weight matrices of the Feed-Forward Network module, denoted as $\{\mathrm{FFN.1}^p_{n,:}, \mathrm{FFN.2}^p_{:,n}\}$, are pruned together.
The channels are sorted by the sum of their L2-magnitudes, calculated as $(\|Q^p_{m,:}\|_2 + \|K^p_{m,:}\|_2 + \|V^p_{m,:}\|_2 + \|O^p_{:,m}\|_2)$ and $(\|\mathrm{FFN.1}^p_{n,:}\|_2 + \|\mathrm{FFN.2}^p_{:,n}\|_2)$ for the $m$-th and $n$-th channels, respectively. Here $\|\cdot\|_2$ indicates the L2-norm.

In Gumbel-Softmax differentiable neural architecture search (DARTS) [36, 37], we train a supernet that simultaneously contains 7 candidate masks per layer, as shown in Fig. 1 (b), where the channels with relatively small sum of L2-magnitudes are pruned in proportions of {0%, 25%, 50%, 75%, 87.5%, 90%, 92.5%} of the total channels, respectively. Specifically, the masked weight matrix of the $l$-th layer, $\hat{W}^l$, is computed as follows:

$$\hat{W}^l = \sum_{i=1}^{7} \lambda^l_i \cdot M^l_i \odot W^l, \quad (1)$$

where $W^l$ is the weight matrix, $M^l_i$ denotes the $i$-th candidate mask and $\lambda^l_i$ is the parameter that measures its importance. The Gumbel-softmax is obtained by Gumbel sampling of the Softmax output, given by:

$$\lambda^l_i = \frac{\exp((\log \alpha^l_i + G^l_i)/T)}{\sum_{j=1}^{7} \exp((\log \alpha^l_j + G^l_j)/T)}, \quad (2)$$

where $\alpha^l_i$ is an architecture-dependent parameter determining each candidate's contribution during the search, $G^l_i = -\log(-\log(U^l_i))$ is the Gumbel variable, $T$ is a temperature hyperparameter and $U^l_i$ is a uniform random variable. Let $\|\cdot\|_0$ indicate the number of non-zero parameters; the training loss is designed as:

$$\mathcal{L} = \mathcal{L}_{ctc} + \eta \sum_l \sum_{i=1}^{7} \lambda^l_i \cdot \|M^l_i\|_0, \quad (3)$$

where $\mathcal{L}_{ctc}$ is the CTC loss of the pruned system, with the overall sparsity dynamically searched in real time, and $\eta$ is a constant coefficient that controls the overall sparsity.
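Eqs. (1)-(2) can be sketched as follows. This is a numpy stand-in under stated assumptions: the candidate masks are built here by a simple magnitude rule for illustration, and in the actual search the architecture parameters $\alpha$ are trained jointly with the supernet rather than fixed.

```python
import numpy as np

def gumbel_softmax_weights(log_alpha, T, rng):
    """Eq. (2): Gumbel-softmax over one layer's 7 candidate masks;
    log_alpha[i] is the log of the architecture parameter alpha_i."""
    g = -np.log(-np.log(rng.random(log_alpha.shape)))  # Gumbel noise G_i
    z = (log_alpha + g) / T
    z = z - z.max()                                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def masked_weight(W, masks, lam):
    """Eq. (1): soft mixture W_hat = sum_i lam_i * (M_i elementwise* W)."""
    return sum(l * (M * W) for l, M in zip(lam, masks))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
# 7 candidate masks keeping the largest-magnitude fraction of entries
# (here per-entry rather than per-channel, purely for illustration).
keep = [1.0, 0.75, 0.5, 0.25, 0.125, 0.10, 0.075]
order = np.argsort(np.abs(W), axis=None)      # smallest magnitudes first
masks = []
for k in keep:
    m = np.zeros(W.size)
    m[order[int(round((1 - k) * W.size)):]] = 1.0
    masks.append(m.reshape(W.shape))

lam = gumbel_softmax_weights(np.log(np.ones(7) / 7), T=0.5, rng=rng)
W_hat = masked_weight(W, masks, lam)
```

As the temperature $T$ is annealed, $\lambda^l$ approaches a one-hot vector and the soft mixture collapses onto a single candidate mask, which is how the search commits to one pruning proportion per layer.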
While NAS-based techniques ensure consistency between model pruning and parameter update, they introduce considerable computational and parameter overhead. Furthermore, their effectiveness is highly dependent on the design of the candidates.

4. Sparsity-aware Self-pinching Gates

The core concept of sparsity-aware self-pinching gates is to leverage the weights already being learned to construct the mask, using only one additional learnable threshold per layer. Our approach facilitates flexible allocation of layer-wise sparsity across different layers based on their sensitivities, ultimately enabling the model to reach a specific size limit. For the $l$-th layer, the entry $M^l_{i,j}$ of the element-wise soft mask matrix $M^l$ is expressed as:

$$M^l_{i,j} = \mathrm{Sigmoid}\!\left(\frac{(W^l_{i,j})^2 - (t^l)^2}{\tau}\right), \quad (4)$$

where $t^l$ is a learnable threshold shared across all parameters in the weight matrix $W^l$ of the $l$-th layer. Here, $i$ and $j$ denote the indices of parameters, and $\tau$ is a positive hyperparameter. Let $|\cdot|$ denote the number of parameters; the sparsity $s^l$ of the $l$-th layer is defined as:

$$s^l = 1 - \frac{\|M^l\|_0}{|M^l|}. \quad (5)$$

The Straight-Through Estimator (STE) [38] is used during the one-pass fine-tuning stage. 1) During forward propagation or inference, $M^l$ is rounded to a binary mask $\overline{M}^l$ and the masked weight matrix is computed as $\overline{W}^l = W^l \odot \overline{M}^l$. Specifically, the parameters in $W^l$ with magnitude exceeding the threshold $t^l$ are retained, while those below the threshold are pruned:

$$\overline{M}^l_{i,j} = \begin{cases} 1 & (W^l_{i,j})^2 \ge (t^l)^2 \\ 0 & (W^l_{i,j})^2 < (t^l)^2 \end{cases} \quad (6)$$

Figure 1: Comparison between Uniform Magnitude-based Pruning (UMP), NAS-based Channel-wise Pruning (NAS-CP) and Sparsity-aware Self-pinching Gates (ours).
For the $l$-th layer, (a) UMP directly prunes the same proportion of parameters by magnitude across all layers; (b) NAS-CP introduces architecture-dependent parameters proportional to the number of architecture candidates, which are pre-selected before the NAS search; (c) ours utilizes the weights that are already being learned to construct the mask with only one additional threshold.

2) During backward propagation, the gradient is accumulated using the actual (soft) values in $M^l$. The training loss is given by:

$$\mathcal{L} = \mathcal{L}_{ctc} + \eta \sum_l \|M^l\|_0, \quad (7)$$

where $\eta$ is a constant coefficient that controls the overall sparsity. $\eta$ is set to 0 if the desired sparsity is achieved; otherwise, it remains at its preset value. Our method, shown in Fig. 1 (c), simultaneously optimizes the threshold $t^l$ and the weight matrix $W^l$, thus mitigating the mismatch between model pruning and parameter update. This fine-grained neuron-level pruning is more powerful in mitigating the performance loss due to model compression than coarse-grained pruning methods, which are applied at the channel level and easier to implement.

5. Experiments

5.1. Experimental setup

Uncompressed baselines and data. For wav2vec2.0-base, wav2vec2-base-100h is downloaded from Huggingface^3 as our baseline. For HuBERT-large, we fine-tuned HuBERT-large-ll60k^4 for 20 epochs as our baseline, with other setups consistent with those in One-pass pruning and fine-tuning. All systems are trained on LibriSpeech's [39] 100-hour clean set.

One-pass pruning and fine-tuning. We utilized the AdamW optimizer with a learning rate of 2e-4 and a batch size of 32 for both the wav2vec2.0 and HuBERT systems. A linear warmup is implemented for the first 10% of the training steps, followed by a linear decay to zero. All pruned wav2vec2.0-base and
HuBERT-large systems are obtained by fine-tuning for 30 and 10 epochs from the wav2vec2.0-base and HuBERT-large baselines, respectively. The threshold $t$ in each layer is initialized to 1e-5. $T$ and $\tau$ are cosine-annealed from 0.5 to 0.01. All experiments are conducted on a single NVIDIA A40 (48 GB).

5.2. Main results

To facilitate comparison, we implemented Uniform Magnitude-based Pruning (UMP), NAS-based Channel-wise Pruning (NAS-CP)^5 and our method (Ours)^6, as shown in Fig. 2.

^3 Huggingface: facebook/wav2vec2-base-100h
^4 Huggingface: facebook/HuBERT-large-ll60k
^5 Empirically, in NAS-CP, η is set to 4e-5 and 3e-5 for wav2vec2.0-base and HuBERT-large, respectively, when the desired sparsity is less than 75%; otherwise, it is set to 2e-4 and 5e-5.
^6 Empirically, in Ours, η is set to 2e-5 and 1e-6 for wav2vec2.0-base and HuBERT-large, respectively, when the desired sparsity is less than 65%; otherwise, it is set to 3e-5 and 2e-6.

Table 1: WER(%) of pruned wav2vec2.0-base and HuBERT-large at different sparsities, using 30 and 10 fine-tuning epochs, respectively. Only selected key results are shown here. † means no statistically significant (MAPSSWE [31], α=0.05) WER increase over the corresponding baseline.

| Sys. | System / Sparsity | # Params (M) | # Additional components | dev clean | dev other | test clean | test other |
|---|---|---|---|---|---|---|---|
| Baselines |
| 1 | wav2vec2.0-base | 95.04 | - | 6.10 | 13.79 | 6.06 | 13.52 |
| 2 | HuBERT-large | 316.60 | - | 3.35 | 8.13 | 3.44 | 8.34 |
| Pruned wav2vec2.0-base systems using Sparsity-aware Self-pinching Gates |
| 3 | 50% | 51.93 | 72 | 5.97† | 16.09 | 6.00† | 16.06 |
| 4 | 60% | 43.44 | | 6.06† | 16.52 | 6.07† | 16.08 |
| 5 | 65% | 39.19 | | 6.03† | 16.97 | 6.12† | 16.89 |
| 6 | 70% | 34.94 | | 6.21† | 17.17 | 6.45 | 17.46 |
| 7 | 75% | 30.70 | | 6.41 | 17.82 | 6.71 | 18.08 |
| 8 | 80% | 26.45 | | 6.71 | 18.63 | 6.95 | 19.27 |
| 9 | 85% | 22.20 | | 7.49 | 20.47 | 7.46 | 20.74 |
| 10 | 90% | 17.96 | | 8.22 | 23.36 | 8.63 | 24.30 |
| Pruned HuBERT-large systems using Sparsity-aware Self-pinching Gates |
| 11 | 50% | 164.37 | 144 | 3.45† | 9.14 | 3.51† | 8.99 |
| 12 | 60% | 134.14 | | 3.54 | 9.44 | 3.54† | 9.33 |
| 13 | 65% | 119.03 | | 3.62 | 9.86 | 3.65 | 9.75 |
| 14 | 70% | 103.92 | | 3.76 | 10.27 | 3.76 | 10.15 |
| 15 | 75% | 88.81 | | 3.88 | 11.01 | 3.98 | 11.04 |
| 16 | 80% | 73.70 | | 4.19 | 12.51 | 4.36 | 12.39 |
| 17 | 85% | 58.59 | | 4.83 | 14.66 | 4.82 | 14.72 |
| 18 | 90% | 43.48 | | 6.11 | 18.91 | 6.37 | 19.29 |
| Pruned HuBERT-large systems using Mixed-sparsity |
| 19 | 10% | 285.25 | - | 3.35† | 8.29† | 3.49† | 8.48† |
| 20 | 20% | 255.03 | | 3.43† | 8.32 | 3.43† | 8.42† |
| 21 | 30% | 224.81 | | 3.31† | 8.81 | 3.45† | 8.47† |
| 22 | 50% | 164.37 | | 3.46† | 8.89 | 3.49† | 8.72 |
| Pruned HuBERT-large systems using NAS-based Channel-wise Pruning |
| 23 | 50% | 164.37 | 1008 | 7.67 | 20.99 | 7.79 | 21.34 |
| Pruned HuBERT-large systems using Uniform Magnitude-based Pruning |
| 24 | 50% | 164.37 | - | 3.49 | 8.85 | 3.64 | 8.83 |

5.2.1. Comparison with Uniform Magnitude-based Pruning

Pruned wav2vec2.0-base models using both UMP and our method achieve lossless compression at all sparsities below 50% on the clean subsets. Beyond 50%, pruned wav2vec2.0-base and HuBERT-large models using ours consistently exceed those using UMP (Fig. 2 (1)-(4)) on all subsets, respectively. On the dev-clean subset, pruned wav2vec2.0-base and HuBERT-large using ours achieve lossless compression at maximum sparsities of 70% (Fig. 2 (1) and Sys. 6 in Tab. 1) and 50% (Fig. 2 (2) and Sys. 11 in Tab. 1), respectively, versus UMP's 60% and 30% (Fig. 2 (1)-(2)). On the test-clean subset, ours achieves 65% (Fig. 2 (3) and Sys. 5 in Tab. 1) and 60% (Fig. 2 (4) and Sys.
12 in Tab. 1), compared to UMP's
50% and 40% (Fig. 2 (3)-(4)).

We conjecture that the inferior performance of UMP is partially caused by the inconsistency of decoupling pruning from parameter optimization. To verify this, we implement a decoupled version of our method, which prunes the parameters with relatively small magnitude from the uncompressed model to meet the layer-wise sparsity obtained from our one-pass method, shown in Fig. 2 under the label Mixed-sparsity. It outperforms UMP across most sparsity levels, suggesting that our one-pass method effectively accounts for the varying sensitivities of different layers. However, it performs worse than our one-pass method, highlighting that the inconsistency between pruning and parameter update negatively affects ASR performance. Notably, on the less-explored dev-other and test-other subsets, pruned HuBERT-large systems using Mixed-sparsity maintain lossless compression at maximum sparsities of 10% (Fig. 2 (2) and Sys. 19 in Tab. 1) and 30% (Fig. 2 (4) and Sys. 21 in Tab. 1), respectively, while UMP only achieves 10% on the test-other subset (Fig. 2 (4)).

Figure 2: The ASR performance of the pruned wav2vec2-base-100h on the (1) dev and (3) test subsets, as well as the pruned hubert-large on the (2) dev and (4) test subsets, at different sparsities using the different methods. Abbreviations are the same as those in Figure 1. Color-matched arrows point to the maximum sparsity preserving lossless compression for the method in the corresponding color.

Table 2: ASR performance comparison of our method versus previous compression methods on the test-clean subset. The values in parentheses represent the transformer sparsity. Comp. ratio: the compression ratio relative to the uncompressed model. *: from the SUPERB leaderboard [40]. Blank cells were not legible in the source layout (shared/multirow entries).

| Sys. | System | Train set | # Params (M) | Comp. ratio | Hours | Epochs | Iterations | WER% |
|---|---|---|---|---|---|---|---|---|
| Baselines |
| 1 | wav2vec2.0-base | 100h | 95.04 | - | - | - | - | 6.06 |
| 2 | HuBERT-base | | 94.68 | - | - | - | - | 6.42* |
| 3 | WavLM-base+ | | 94.70 | - | - | - | - | 5.59* |
| 4 | HuBERT-large | | 316.60 | - | - | - | - | 3.44 |
| Prior Compression Methods |
| 5 | DistilHuBERT [15] | 960h | 23.49 | 4.03 | 55 | 200 | - | 13.37* |
| 6 | FitHuBERT-100 [18] | 100h | 22.49 | 4.21 | >12 | 100 | - | 12.66* |
| 7 | FitW2V2-100 [18] | 100h | | 4.23 | - | - | | 14.77* |
| 8 | FitHuBERT-960 [18] | 960h | 22.49 | 4.21 | >93.6 | 80 | - | 12.09* |
| 9 | FitW2V2-960 [18] | 960h | 31.63 | 3.00 | - | - | | 11.44* |
| 10 | 12-Layer Half [17] | 960h | 26.87 | 3.52 | - | - | 200K | 10.96* |
| 11 | DPHuBERT [29] | 960h | 23.59 | 4.01 | 24 | - | 75K | 10.47* |
| 12 | DPWavLM [29] | | 23.59 | 4.01 | | - | | 10.19* |
| 13 | SKHuBERT [30] | 960h | 23.59 | 4.01 | 24 | - | 75K | 10.78* |
| 14 | SKWavLM [30] | | 23.51 | 4.03 | | - | | 10.03* |
| 15 | Wang et al. [22] | 960h | 26.57 | 3.56 | 36 | - | 200K | 10.29* |
| 16 | DKD LSTM HuBERT [41] | 960h | 18.80 | 5.04 | - | - | 200K | 10.64* |
| 17 | SparseWav2vec2 (85%) [21] | 100h | 22.27 | 4.27 | - | >40 | - | 8.08 |
| 18 | SparseWavLM+ (85%) [21] | | 22.98 | 4.12 | - | | - | 7.12 |
| 19 | SparseWav2vec2 (75%) [21] | | - | - | - | - | - | 7.11 |
| Ours |
| 20 | Pruned wav2vec2-base (85%) | 100h | 22.20 | 4.28 | 11 | 30 | 27K | 7.46 |
| 21 | Pruned wav2vec2-base (75%) | | 30.70 | 3.10 | | | | 6.71 |
| 22 | Pruned WavLM-base+ (85%) | 100h | 22.21 | 4.26 | 11 | 30 | 27K | 7.05 |
| 23 | Pruned HuBERT-large (90%) | 100h | 43.48 | 7.28 | 13 | 10 | 9K | 6.37 |

5.2.2. Comparison with NAS-based Channel-wise Pruning

Predictably, compared with NAS-CP, which prunes multiple channels at a time, our method applied to both wav2vec2.0-base and HuBERT-large achieves consistently lower WERs across all sparsities, as demonstrated in Fig.
2 (1)-(4). Regarding the additional components, NAS-CP requires 7 architecture-dependent parameters per layer (e.g., 1008 in total for HuBERT-large, Sys. 23 in Tab. 1) in our configurations (Sec. 3.2). In contrast, our method maintains only one threshold per layer (e.g., 144 in total for HuBERT-large, Sys. 11 in Tab. 1).

5.3. Comparison with other compression methods

As illustrated in Tab. 2, our method (Sys. 20) achieves a ≥2% absolute WER reduction over Sys. 5-16 under equivalent model-size limits. To compare training times, we consolidated the different types of training-time metrics (hours/epochs/iterations) reported in previous works into standardized comparisons, where our method demonstrates consistent time-efficiency advantages. Notably, the WERs of Sys. 20 and Sys. 21 using our method surpass those of Sys. 17 and Sys. 19, respectively, while requiring at least 25% fewer fine-tuning epochs, highlighting the effectiveness and efficiency of our approach. We also pruned WavLM-base+, downloaded from Huggingface^7, using the same setup as for wav2vec2.0-base, yielding the lowest WER (Sys. 22 in Tab. 2) of 7.05% on the test-clean subset under the model-size constraint of 23M parameters, compared to Sys. 5-19 in Tab. 2.

6. Conclusion

We introduced a one-pass compression method that simultaneously prunes and trains SSL speech foundation models using a single threshold per layer. Our method shows superior performance under the same model-size constraint while reducing fine-tuning time compared to previous works. We will apply our method to more datasets in future work.

^7 Huggingface: patrickvonplaten/wavlm-libri-clean-100h-base-plus

7. Acknowledgements

This research is supported by Hong Kong RGC GRF grant No. 14200220, 14200021, 14200324 and Innovation Technology Fund grant No. ITS/218/21.

8. References

[1] A. Baevski, Y. Zhou, A. Mohamed et al., "wav2vec 2.0: A framework for self-supervised learning of speech representations," in NeurIPS, 2020.
RICO: Improving Accuracy and Completeness in Image Recaptioning via Visual Reconstruction

Yuchi Wang1, Yishuo Cai2, Shuhuai Ren1, Sihan Yang3, Linli Yao1, Yuanxin Liu1, Yuanxing Zhang4, Pengfei Wan4, Xu Sun1
1National Key Laboratory for Multimedia Information Processing, Peking University
2Central South University 3Xi'an Jiaotong University 4Kuaishou Technology
wangyuchi@stu.pku.edu.cn xusun@pku.edu.cn

[Figure 1 contents: four captions of the same photograph of parked buses (the original caption generated by Qwen2-VL, a GPT-4o recaption, a RICO (ours) recaption, and a human recaption), annotated with wrong or ambiguous information, corrected versions, other added details, and details detected only by RICO, together with images reconstructed from each caption and their mismatch/correct areas marked.]

Figure 1: Analysis of image captions generated by Qwen2-VL and its recaptioned variants. Despite the advanced capabilities of Qwen2-VL, the generated captions still contain incorrect or ambiguous information (for example, misidentifying the number of buses), a mistake that remains uncorrected even by GPT-4o. Furthermore, both GPT-4o and human-generated recaptions often overlook fine-grained details, such as attributes and spatial relationships, which are accurately captured by our model. By reconstructing images from captions, it becomes evident that our model better preserves such details, resulting in reconstructions that more closely resemble the original image.

Abstract

Image recaptioning is widely used to generate training datasets with enhanced quality for various multimodal tasks. Existing recaptioning methods typically rely on powerful multimodal large language models (MLLMs) to enhance textual descriptions, but often suffer from inaccuracies due to hallucinations and incompleteness caused by missing fine-grained details. To address these limitations, we propose RICO, a novel framework that refines captions through visual reconstruction. Specifically, we leverage a text-to-image model to reconstruct a caption into a reference image, and prompt an MLLM to identify discrepancies between the original and reconstructed images to refine the caption. This process is performed iteratively, progressively promoting the generation of more faithful and comprehensive descriptions.
To mitigate the additional computational cost induced by the iterative process, we introduce RICO-Flash, which learns to generate captions like RICO using DPO. Extensive experiments demonstrate that our approach significantly improves caption accuracy and completeness, outperforming most baselines by approximately 10% on both CapsBench and CompreCap. Code released at https://github.com/wangyuchi369/RICO.

arXiv:2505.22613v1 [cs.CV] 28 May 2025

1 Introduction

The availability of hundreds of millions of image-text pairs collected from the internet has played a pivotal role in advancing modern multimodal learning (Chen et al., 2023; Liu et al., 2023; Bai et al., 2023b). However, the alt text associated with web images is frequently of low quality, offering uninformative descriptions or even text unrelated to the image content. Consequently, recaptioning methods have been widely employed to generate enhanced captions for downstream multimodal tasks, such as training multimodal large language models (MLLMs) (Chen et al., 2023), text-to-image models (Betker et al., 2023), and CLIP-like models (Fan et al., 2023; Lai et al., 2024).

Typically, recaptioning methods primarily depend on powerful MLLMs (Lai et al., 2024; Chen et al., 2023). While MLLMs significantly enhance captions over the alt text by leveraging their strong perceptual capabilities, the generated descriptions still face two key challenges: (1) Inaccuracy, where some descriptions are incorrect, often exacerbated by the notorious hallucination problem of MLLMs (Bai et al., 2025); and (2) Incompleteness, where important details are frequently omitted. These issues cannot be fully resolved even with the integration of additional models or human editing. For example, as illustrated in Fig. 1, the caption generated by Qwen2-VL (Wang et al., 2024b) contains ambiguous or incorrect information that cannot be fully corrected even with GPT-4o (OpenAI et al., 2024). Moreover, several visual details remain undetected by either GPT-4o or human annotators, whereas our method successfully captures them. This appears to stem from the natural tendency of both humans and models to focus on salient objects in an image, often neglecting attributes and subtle details. We further validate this observation through experiments in § 4.2.

From a semantic space perspective, the challenges above suggest that the semantic space constructed through recaptioning is often biased and lossy compared to that of the original image. As illustrated in Fig. 2, conventional captioners typically follow a one-way mapping from image to text, without enforcing explicit semantic alignment between the two modalities, resulting in the omission of critical semantic elements in the generated captions. We argue that an ideal cross-modal semantic alignment should involve a bi-directional mapping: when text is generated from an image, the reconstructed image from that text should remain consistent with the original. In cases of misalignment, the discrepancy between the original and reconstructed images can be used to adjust the semantic space of the caption. Based on this intuition, we propose RICO (Reconstruction-guided Image Caption Optimization), a novel recaptioning framework. As shown in Fig. 2, our method incorporates a visual reconstruction step that makes semantic discrepancies more observable in the visual domain compared to simply contrasting image and text, thereby facilitating the recovery of omitted details and producing descriptions that are both more semantically aligned and comprehensive.
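As a rough illustration of this reconstruct-then-revise idea, the alternating loop can be sketched as follows. The `captioner`, `reconstruct`, and `revise` callables are hypothetical stand-ins for the captioning model, the text-to-image model, and the MLLM reviser; the stubs below only illustrate the control flow, not the actual models.

```python
def rico_recaption(image, captioner, reconstruct, revise, n_iters=2):
    """Iteratively refine a caption by comparing the original image with
    an image reconstructed from the current caption."""
    caption = captioner(image)                       # initial caption c_0
    for _ in range(n_iters):
        ref_image = reconstruct(caption)             # v_i = T(c_{i-1})
        caption = revise(ref_image, image, caption)  # c_i = R(v_i, v_0, c_{i-1})
    return caption

# Toy stand-ins: the "reviser" appends one missing detail per round.
details = iter(["; license plate AE56 UTH", "; partly cloudy sky"])
cap = rico_recaption(
    image="bus_photo",
    captioner=lambda img: "two buses parked side by side",
    reconstruct=lambda c: f"render({c})",
    revise=lambda ref, orig, c: c + next(details),
    n_iters=2,
)
print(cap)  # two buses parked side by side; license plate AE56 UTH; partly cloudy sky
```

In the real pipeline the reviser sees both images and decides what to correct or add; the stub merely shows how the caption accumulates detail across iterations.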
Technically, we use powerful text-to-image models to reconstruct each caption into a reference image. Next, we input the original image, the generated reference image, and the candidate caption into a reviser (an MLLM), and prompt it to refine the caption based on the discrepancies between the original and reference images. Through experiments, we find that a single-step refinement is insufficient, so we design the refinement process to iterate multiple times to progressively improve the caption. Given the significant time and computational resources required for iterative refinement, we propose an end-to-end variant as a more efficient alternative to RICO. This model is constructed by learning the preference relationships naturally induced during the iterative refinement process using Direct Preference Optimization (DPO) (Rafailov et al., 2024). Specifically, we employ RICO to generate a batch of training data, which is then used to fine-tune a base model via DPO, resulting in the compact RICO-Flash model.

Through experiments, we demonstrate that our pipeline effectively constructs well-aligned image-text information spaces. From the captioning perspective, we evaluate both the RICO framework and the compact RICO-Flash model on several benchmarks. Results show that RICO significantly enhances caption quality in terms of both accuracy and
comprehensiveness. For instance, it consistently achieves improvements of over 10 points on CapsBench (Liu et al., 2024a). Moreover, RICO-Flash outperforms all recaptioning baselines. From the reverse perspective of text-to-image generation, we find that models trained on captions refined by RICO-Flash exhibit a stronger understanding of fine-grained prompts, particularly with regard to attributes and relationships. Further analysis also reveals that our method demonstrates strong robustness and generalization across diverse settings.

[Figure 2 contents: an example image of a cow on a meadow with the current caption "A cattle stands on a lush green meadow, looking directly at the camera, while tiny yellow flowers are scattered across the grass," its reconstructed image, and revision suggestions such as "[More specific] a black-and-white dairy cow," "[Add details] oriented toward the right side of the frame," "[Add details] background features dense foliage," and "[More specific] dense layer of flowers," connected by captioner and reviser steps.]

Figure 2: Illustration of the motivation for introducing the visual reconstruction mechanism. Conventional recaptioning methods typically map images directly to text without explicitly aligning the semantic spaces of the two modalities, often leading to information loss in the generated captions. In contrast, our approach incorporates visual reconstruction to make this loss more observable. By identifying discrepancies between the original and reconstructed images through the reviser, we refine the caption to produce a more semantically aligned and comprehensive description.

2 Related Works

2.1 Multimodal Large Language Models

Inspired by the success of large language models (LLMs) (Sun et al., 2025; Ouyang et al., 2022;
DeepSeek-AI et al., 2025; Bai et al., 2023a) in natural language processing, several works have extended them to multimodal settings by incorporating visual encoders (OpenAI, 2023; Liu et al., 2023; Team et al., 2024a; Bai et al., 2023b; Ren et al., 2024), contributing to multimodal large language models (MLLMs). Flamingo (Alayrac et al., 2022) is an early effort that inserts gated attention layers into a pretrained language model to enable vision-language understanding. Subsequent works explore various strategies for connecting vision encoders to language models. For example, BLIP-2 (Li et al., 2023a) introduces the Q-Former to bridge the modalities, LLaVA (Liu et al., 2023) employs a simple MLP projection layer, and Gemini (Team et al., 2024a) feeds image and text tokens jointly into a unified Transformer. In addition to architectural design, recent research has also focused on improving the quality of pretraining and fine-tuning data (Bai et al., 2023b; Wang et al., 2024c; Zhu et al., 2023). While modern MLLMs demonstrate impressive visual perception capabilities, they still suffer from hallucination issues (Bai et al., 2025), occasionally generating inaccurate or fabricated content, which undermines the faithfulness of the generated captions.

2.2 Image Recaptioning

Describing an image using text has been a fundamental task in multimodal learning (Li et al., 2022; Ghandi et al., 2023; Wang et al., 2024d; Yao et al., 2023). Among these efforts, image recaptioning aims to generate enhanced captions for original, noisy alt text associated with image-text pairs. It has become increasingly important for producing high-quality synthetic data to support various downstream applications. This trend was popularized by DALL-E 3 (Betker et al., 2023), which
introduced the idea of replacing low-quality or overly simplistic captions with synthetic alternatives. Since then, numerous approaches have leveraged image recaptioning to improve multimodal large language models (MLLMs) (Chen et al., 2023), text-to-image generation models (Betker et al., 2023), and CLIP-style vision-language models (Lai et al., 2024; Fan et al., 2023). Among these efforts, LaCLIP (Fan et al., 2023) utilizes LLMs to rewrite alt text, while VeCLIP (Lai et al., 2024) incorporates additional visual information. CapsFusion (Yu et al., 2024) trains a LLaMA-based model to fuse alt text and synthetic captions, and ShareGPT4V (Chen et al., 2023) directly generates new captions using GPT-4V (OpenAI, 2023). More sophisticated approaches include Altogether (Xu et al., 2024), which employs iterative human annotation, while Ye et al. propose automated fine-grained feedback mechanisms to improve captioning capabilities. Additionally, methods based on local perception have also been explored (Peng et al., 2025; Sun et al., 2025). However, despite their advancements, these methods fundamentally follow a paradigm of directly generating captions without explicitly enforcing semantic alignment between visual and textual modalities, inevitably resulting in considerable information loss.

3 Methodology

In this section, we introduce our RICO framework and RICO-Flash model. § 3.1 provides an overview of the pipeline of RICO. Subsequently, § 3.2 describes how we generate the reference reconstruction image. § 3.3 presents the method designed to refine the caption. Finally, § 3.4 illustrates the process of training a compact model, RICO-Flash, to learn the iterative process using DPO.

3.1 Overall Pipeline of RICO

As illustrated in Fig. 3, in our RICO framework, the initial caption $c_0$ for the original image $v_0$ is generated by the initial captioning model.
A reconstruction model $T$ and a refinement model $R$ are then alternately applied to iteratively improve the caption. In each iteration $i \geq 1$, the reconstruction procedure converts the previous candidate caption $c_{i-1}$ into a reconstructed image $v_i$, and the refinement model generates a refined caption based on the previous caption $c_{i-1}$, the original image $v_0$, and the reconstructed reference image $v_i$. Formally, the refinement step is defined as:

$$c_i = R(v_i, v_0, c_{i-1}) = R(T(c_{i-1}), v_0, c_{i-1}).$$

3.2 Reconstruct Candidate Caption into Reference Image

As discussed in § 1, the semantic information space of captions generated by typical captioning processes tends to be biased and lossy compared to the information contained in the original image. Specifically, we denote the semantic space of the original image as $\mathcal{V}$ and that of the generated caption as $\mathcal{C}$. A biased caption implies that for some information $i \in \mathcal{C}$, $f(i) \notin \mathcal{V}$, and a lossy caption implies that for some information $j \in \mathcal{V}$, $g(j) \notin \mathcal{C}$, where $f$ represents the mapping from textual to visual information, and $g$ denotes the reverse. A key insight of this work is that directly comparing the information spaces $\mathcal{V}$ and $\mathcal{C}$ is challenging due to the cross-modal nature of $f$ and $g$. To address this, we leverage a powerful text-to-image model to reconstruct the caption into an image. This enables a more direct comparison between the original image $\mathcal{V}$ and the reconstructed image $\hat{\mathcal{V}}$, as both reside in the visual modality.

[Figure 3 contents: the iterative RICO pipeline on a cow-on-meadow example, alternating between images $v_0$, $v_1$, $v_2$ (original and reconstructed) and captions $c_0$, $c_1$, $c_2$, with an initial captioning step, reconstruction $T$, refinement $R$, and contrast steps marking missing and extracted information such as "a dairy cow," "dense layer of flowers," "background features dense foliage," and "oriented toward the right side of the frame."]

Figure 3: Illustration of the iterative process of RICO. After the initial captioning step, a reconstruction procedure is applied to generate an image from the candidate caption. The caption is then refined by comparing the original image with the reconstructed image.

In particular, we use the FLUX.1-dev model (Labs, 2024) as our text-to-image generator, given its strong performance and open-source availability. A notable advantage of FLUX.1-dev is its use of a T5 text encoder (Raffel et al., 2023), which supports longer prompts, surpassing the 77-token limit imposed by CLIP-based models. This allows us to process more detailed captions and faithfully reconstruct their visual content. Formally, for a given generated caption $c_{i-1}$, we use the text-to-image model to produce a reference image $v_i$ via $v_i = T(c_{i-1})$, effectively translating the information space of the candidate caption into visual form and facilitating the identification of discrepancies from the original image.

3.3 Refine Caption with Reference Image Feedback

Having obtained the reconstructed reference image $v_i$, we proceed to refine the previous candidate caption $c_{i-1}$ based on the discrepancy between the reconstructed image $v_i$ and the original image $v_0$,
Given the complexity of this task, we utilize one of the most advanced mul- timodal large language models, GPT-4o (OpenAI et al., 2024), to perform the refinement process. We observed that directly feeding all relevant in- formation into the model yields suboptimal results, highlighting the importance of prompt engineering. To address this, we carefully design prompts with attention to several key aspects outlined below. The complete prompt is provided in § C.1. Task Description We explicitly inform the model of the task objective, with a particular emphasis on how the reference image is generated. Additionally, the model is instructed to focus on the discrepan- cies between the reference image and the original image as the basis for refining the caption. Aspects the Model Should Focus On It is not in- tuitive for the refinement model to determine what aspects of the discrepancy between the original im- age and the generated reference image it should focus on, and ranking the importance of different aspects is challenging. Therefore, we provide the model with some guidance. We define eight as- pects for the model to prioritize, including: ‘Visual Details, Composition & Layout, Human Attributes (if applicable), Perspective & Style, Text in the Im- age, Image Quality, World Knowledge, and Color Aesthetics. ’ Guidance for Improvement Method To guide the model in refining the candidate caption, we categorize improvements into two types: address- inginaccuracy | https://arxiv.org/abs/2505.22613v1 |
and incompleteness. For inaccuracy, the model is instructed to identify and correct errors based on discrepancies between the original and reconstructed images, and to revise any ambiguous descriptions in the previous caption that may have caused inaccurate reconstruction. For incompleteness, the model is encouraged to incorporate missing details and to elaborate on key attributes of the main objects, such as color, shape, and other fine-grained characteristics.

Force Model to Output Analysis Process: Inspired by the success of Chain of Thought (CoT) (Wei et al., 2023), we prompt the model to output not only the revised caption but also the corresponding analysis process. This technique serves two purposes: it allows us to examine the reasoning steps of the black-box multimodal large language model, and, as shown in our experiments in § 4.5, it improves the quality of the generated captions by encouraging the model to deliberate more deeply. For practical implementation, we instruct the model to enclose the analysis within special markers <analysis> ... </analysis> to facilitate automated post-processing.

3.4 RICO-Flash: Leverage DPO to Mitigate Computational Cost

Preliminaries of DPO: Direct Preference Optimization (DPO) (Rafailov et al., 2024) is a recently proposed algorithm for aligning language models with human preferences without relying on reinforcement learning. Unlike traditional Reinforcement Learning from Human Feedback (RLHF) methods, which involve separate reward modeling and policy optimization steps, DPO formulates preference learning as a binary classification problem between preferred and dispreferred responses. Formally, given a prompt $x$ and a pair of responses
Formally, given a prompt xand a pair of responses (y+, y−), where y+is preferred over y−, DPO op- timizes the likelihood ratio between the two re- sponses under a learned policy πθand a fixed refer- ence policy πref, using the following objective: LDPO=−E(x,y+,y−)∼D logσ βlogπθ(y+|x) πref(y+|x) −βlogπθ(y−|x) πref(y−|x) , Here, βis a temperature-like hyperparameter that controls the sharpness of the preference model- ing. The objective encourages the model to assign higher relative likelihoods to preferred responses compared to dispreferred ones, with respect to the reference policy. Given that our iterative refinement process in- curs substantial inference time and computational overhead, we explore the development of an end-to- end variant. Noting that the iterative procedure im- plicitly induces a preference relationship between captions, we adopt Direct Preference Optimiza- tion (DPO) to learn these preferences. Specifically, we collect a high-quality image dataset and apply RICO to generate refined captions. For each image v(i), we extract the initial caption c(i) 0and the fi- nal caption after Nrefinement steps, c(i) N, forming a preference tuple (v(i), c(i) 0, c(i) N). Based on our empirical observation that c(i) Nconsistently outper- forms c(i) 0in most cases, we treat this pairwise pref- erence as supervision for DPO training. We adopt Qwen2-VL (Wang et al., 2024b) as the base model and fine-tune it using the DPO objective, yielding an end-to-end variant we denote as RICO-Flash. 5 Table 1: Performance of RICO and RICO-Flash under different initial MLLM recaptioning models. For RICO-Flash, we use the corresponding MLLM as the base model. In CapsBench, Acc. denotes overall accuracy, and Rel.Pos. indicates relative position accuracy. In CompreCap, Obj.,Pix.,Attr., and Rel. | https://arxiv.org/abs/2505.22613v1 |
represent object coverage, pixel coverage, attribute score, and relation score, respectively. Over. in Amber refers to overall performance (see § B.2 for details). Green text indicates improvements. RICO demonstrates significant gains over the original captions, while RICO-Flash achieves performance close to that of RICO.

| Method | CapsBench Acc.↑ | Color↑ | Rel. Pos.↑ | Amber Over.↑ | CompreCap Obj.↑ | Pix.↑ | Attr.↑ | Rel.↑ |
|---|---|---|---|---|---|---|---|---|
| Qwen2-VL Init. | 42.0 | 48.1 | 32.4 | 59.7 | 69.82 | 60.02 | 2.66 | 2.81 |
| + RICO-Flash | 55.3 (+13.3) | 66.7 (+18.6) | 55.1 (+22.7) | 60.6 (+0.9) | 74.80 (+4.98) | 63.35 (+3.33) | 2.84 (+0.18) | 2.84 (+0.03) |
| + RICO (N=2) | 59.0 (+17.0) | 67.1 (+19.0) | 59.5 (+27.1) | 62.2 (+2.5) | 75.04 (+5.22) | 63.04 (+3.02) | 2.85 (+0.19) | 2.82 (+0.01) |
| LLaVA-1.5 Init. | 29.5 | 27.8 | 18.1 | 44.7 | 57.14 | 44.48 | 2.02 | 2.38 |
| + RICO-Flash | 46.2 (+16.7) | 49.6 (+21.8) | 38.7 (+20.6) | 53.1 (+8.4) | 66.68 (+9.54) | 56.52 (+12.04) | 2.53 (+0.51) | 2.43 (+0.05) |
| + RICO (N=2) | 53.1 (+23.6) | 61.1 (+33.3) | 48.1 (+30.0) | 59.7 (+15.0) | 76.38 (+19.24) | 61.49 (+17.01) | 2.82 (+0.80) | 2.82 (+0.44) |

Table 2: Recaptioning results by humans and models based on the initial caption. In our RICO method, a single iteration of refinement is performed.

| Model | CapsBench (Subset) Acc. | Color | Rel. Pos. | Shape |
|---|---|---|---|---|
| Original | 43.55 | 44.30 | 39.45 | 20.41 |
| + GPT-4o Edit | 49.50 | 53.02 | 44.04 | 24.49 |
| + Human Edit | 50.96 | 51.30 | 47.82 | 27.01 |
| + RICO Edit | 54.08 | 65.47 | 34.04 | 49.51 |

This model directly generates improved captions without requiring iterative alternation between a text-to-image model and a caption refinement module, thereby significantly reducing inference cost while maintaining competitive performance.

4 Experiments

4.1 Setup

4.1.1 Implementation Details

For the implementation details, the text-to-image generation is performed using the FLUX.1-dev model (Labs, 2024), while the caption refinement process is conducted with GPT-4o (24-08-06) (OpenAI et al., 2024).
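Before the remaining hyperparameters, the DPO objective from § 3.4 that trains RICO-Flash can be made concrete with a toy numeric sketch. This is not the authors' implementation: the scalar log-probabilities below are hypothetical values standing in for sequence log-likelihoods of the preferred caption ($c_N$) and the dispreferred caption ($c_0$) under the policy and reference models.

```python
import math

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (margin_pos - margin_neg)),
    where each margin is the policy-vs-reference log-likelihood ratio.
    Toy sketch of the Sec. 3.4 objective, not the authors' code."""
    margin = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# Preferred caption more likely under the policy than the reference, and the
# dispreferred one less likely: the loss drops below the untrained baseline.
improving = dpo_loss(logp_pos=-40.0, logp_neg=-60.0,
                     ref_logp_pos=-50.0, ref_logp_neg=-50.0)
neutral = dpo_loss(logp_pos=-50.0, logp_neg=-50.0,
                   ref_logp_pos=-50.0, ref_logp_neg=-50.0)
assert improving < neutral  # a positive reward margin lowers the loss
print(round(neutral, 4))    # -log sigmoid(0) = log 2 ≈ 0.6931
```

Minimizing this quantity over the $(v^{(i)}, c^{(i)}_0, c^{(i)}_N)$ tuples pushes the policy to prefer refined captions over initial ones relative to the frozen reference model.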
We set the number of interaction steps N = 2, based on empirical observations that this configuration achieves a good balance between performance and computational efficiency. For the DPO experiments, we initialize with the Qwen2-VL model and set the preference scaling parameter β = 0.1. The model is fine-tuned for 3 epochs with a learning rate of η = 1.0 × 10⁻⁵. More implementation details can be found in § C.

4.1.2 Evaluation Benchmarks

In the era of MLLMs, traditional captioning metrics (Papineni et al., 2002; Vedantam et al., 2015) often fail to capture fine-grained details and inadequately penalize hallucinations. To address these limitations, in addition to the recently proposed reference-based metric CAPTURE (Dong et al., 2024a), we adopt more advanced benchmarks to more faithfully evaluate the quality of our method. Specifically, we employ CapsBench (Liu et al., 2024a), which uses QA pairs to assess the accuracy and comprehensiveness of generated captions. We also utilize CompreCap (Lu et al., 2025), which leverages a Directed Scene Graph to evaluate the correctness of object mentions and their relationships. Furthermore, we adopt Amber (Wang et al., 2024a) to assess hallucinations in the generated descriptions. More details can be found in § B.2.

4.2 Effectiveness of RICO and RICO-Flash

We verify that our RICO pipeline effectively addresses both inaccuracy and incompleteness in recaptioning. Firstly, we use two popular open-source models, Qwen2-VL (Wang et al., 2024b) and LLaVA-1.5 (Liu
et al., 2024b), as the initial captioning models to produce baseline captions, which are then refined by RICO. As shown in Tab. 1, even with just two refinement iterations, the captions generated by RICO exhibit substantial improvements across all benchmarks and metrics. Notably, the improvement in the overall score on Amber indicates that RICO mitigates hallucination. Furthermore, on CapsBench, we emphasize two critical aspects, color and relative position, and show that the reconstruction step helps the model more accurately identify and correct fine-grained discrepancies. In addition, we can see that RICO-Flash achieves performance that closely matches RICO while still demonstrating substantial improvements over the initial captions, validating its effectiveness as a non-iterative alternative.

Table 3: Comparison with baseline methods across various evaluation metrics. Our method achieves the best performance on most metrics, while RICO-Flash demonstrates performance comparable to RICO. Bold text indicates the best results, and underlined text denotes the second-best.

Method | CapsBench: Acc. / Color / Shape / Rel. Pos. | CompreCap: Obj. / Pix. / Rel. / Attr. | Amber: Over. | Capture
LaCLIP (Fan et al.) | 22.65 / 21.65 / 9.09 / 11.11 | 48.02 / 42.59 / 1.73 / 2.29 | 43.8 | 39.56
CapsFusion (Yu et al.) | 35.04 / 38.14 / 12.12 / 25.46 | 61.67 / 52.63 / 2.32 / 2.59 | 44.5 | 56.03
Self-Loop (Dong et al.) | 29.63 / 29.55 / 9.09 / 17.13 | 65.77 / 51.54 / 2.30 / 2.53 | 49.5 | 56.61
VeCLIP (Lai et al.) | 25.19 / 27.84 / 11.11 / 13.43 | 49.60 / 42.25 / 2.50 / 1.77 | 41.0 | 38.13
ShareGPT4V (Chen et al.) | 50.46 / 62.13 / 38.78 / 49.34 | 67.47 / 62.00 / 2.83 / 2.81 | 56.2 | 59.80
RICO-Flash (Ours) | 55.32 / 66.67 / 50.29 / 55.09 | 74.80 / 63.35 / 2.84 / 2.84 | 60.6 | 65.52
RICO (Ours) | 59.02 / 67.14 / 53.68 / 59.51 | 75.04 / 63.04 / 2.85 / 2.82 | 62.2 | 65.98

Secondly, we assess recaptioning quality by comparing RICO against GPT-4o and human annotators. We randomly select 100 images from CapsBench, generate initial captions using Qwen2-VL, and perform one round of editing using GPT-4o, RICO, and human annotators. The results, shown in Tab. 2, demonstrate that RICO achieves strong recaptioning performance, even surpassing humans, who tend to overlook fine-grained details. Some experiment details can be found in § B.3.

Finally, we conduct a qualitative analysis of the refinement process and present examples showcasing the step-by-step improvement of captions through RICO in § A.1.

4.3 Comparison with Other Recaptioning Methods

We compare our approach with other recaptioning methods, and the results are presented in Tab. 3. RICO demonstrates strong performance across all evaluation metrics, particularly in fine-grained aspects such as color, entity shape, and relative position. This highlights the importance of reconstruction for achieving better alignment between textual descriptions and visual content. Details on how the baseline methods perform recaptioning are provided in § B.1.

4.4 Further Analysis

We conduct more experiments to help better understand our RICO pipeline.

Table 4: Evaluation of a text-to-image generation model trained with original captions versus captions refined by our RICO-Flash model. Rel. and Attr. represent relation and attribute respectively.

Model | DPG-Bench: Rel. / Attr. / Overall | VQAScore
FLUX w/ Init. Cap. | 89.95 / 80.08 / 78.50 | 0.84
FLUX w/ RICO-Flash | 90.55 / 82.83 / 80.34 | 0.85

4.4.1 Verify Alignment via Text-to-Image Generation

To verify that RICO effectively builds a well-aligned image-text semantic space, we evaluate it
on a classical downstream task: text-to-image generation. We collect an image dataset from Huggingface¹ and use RICO to perform recaptioning. Specifically, for each image v, we obtain both the initial caption c_0 and the refined caption c_N, forming two datasets: D_initial = {(v^(i), c_0^(i))} and D_refined = {(v^(i), c_N^(i))}. We then use these datasets to train two separate text-to-image generation models based on FLUX.1-dev. For evaluation, considering that the prompts in our dataset are typically long and thus incompatible with many existing benchmarks (Ghosh et al., 2023; Huang et al., 2025), we adopt the recently proposed DPG-Bench (Hu et al., 2024), which is designed to evaluate detailed prompts. Moreover, we also employ VQAScore (Lin et al., 2024), a reference-free metric that serves as a robust alternative to CLIPScore (Hessel et al., 2022; Imagen-Team-Google et al., 2024). As shown in Tab. 4, the model fine-tuned on the refined dataset consistently outperforms the baseline across all metrics. Notably, it achieves improvements in entity, relation, and attribute dimensions, demonstrating that our reconstruction-refinement pipeline enhances the alignment between image and caption in fine-grained semantic aspects. Detailed training configurations are provided in § B.4.

¹Mainly from https://huggingface.co/datasets/jackyhate/text-to-image-2M

Figure 4: Performance of the RICO pipeline under different numbers of refinement iterations. (Curves show overall, color, relative-position, and shape accuracy over iteration steps 0–5.)

4.4.2 Saturation with Increased Iteration Steps

In RICO, the caption is progressively refined as the number of iteration steps increases. As shown in Fig. 4, performance consistently improves with each additional iteration. However, the gains begin to plateau after approximately the second step, with only marginal improvements observed thereafter.
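Both dataset constructions in the paper, the D_initial / D_refined pairs above and the DPO preference tuples described earlier, reduce to a simple regrouping of RICO's outputs. The sketch below assumes `records` is a hypothetical list of (image, c_0, c_N) triples produced by the pipeline; the names are illustrative, not the authors' code.

```python
def build_datasets(records):
    """Regroup RICO outputs (image, c_0, c_N) into the paper's training sets.

    - d_initial / d_refined: (image, caption) pairs for fine-tuning the
      text-to-image model on initial vs. refined captions.
    - preferences: (image, preferred, dispreferred) tuples for DPO, treating
      the refined caption c_N as preferred over the initial caption c_0.
    """
    d_initial = [(v, c0) for v, c0, _ in records]
    d_refined = [(v, cn) for v, _, cn in records]
    preferences = [(v, cn, c0) for v, c0, cn in records]  # c_N preferred over c_0
    return d_initial, d_refined, preferences
```

Because c_N is assumed to outperform c_0 in most cases, the preference direction is fixed rather than annotated per pair; a stricter variant could filter out pairs where that assumption is doubtful.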
This suggests that the generated caption reaches a satisfactory quality level, at least given the capabilities of the reconstruction and refinement modules.

4.4.3 Generalization to Different Initial Captions

To evaluate the generalization capability of RICO, we examine whether it can consistently enhance captions through the reconstruction-refinement pipeline. As shown in the upper part of Tab. 5, we generate initial captions using different captioning models. The results indicate that RICO significantly improves captions from all initial models, demonstrating its robustness. Notably, although our refinement module is based on GPT-4o, captions generated by GPT-4o alone do not outperform the final outputs, suggesting that RICO does more than simply distill the captioning ability of GPT-4o. In the lower part of Tab. 5, we assess performance using various initial prompts. The results show that our pipeline yields substantial improvements across different prompts. While modifying the prompt within the same MLLM can lead to some gains, these are relatively minor compared to the improvements achieved by RICO. Importantly, our method is also orthogonal to prompt-based strategies and can be combined with more effective prompts for further enhancement.

Table 5: Generality of RICO across different initial recaptioning models and prompts. For CapsBench, we report overall accuracy, and for CompreCap, we use the unified metric for evaluation.

Model | CapsBench | CompreCap
GPT-4o | 49.6 / 57.7 (+8.1) | 58.6 / 60.4 (+1.8)
Gemini 1.5 Pro | 49.7 / 57.7 (+8.0) | 60.1 / 61.5 (+1.4)
BLIP-3 | 37.0 / 56.2 (+19.2) | 55.4 / 60.2 (+4.8)
CogVLM 2 | 45.1 / 57.5 (+12.4) | 56.0 / 60.3 (+4.3)
Qwen2-VL (Prompt 1) | 42.0 / 59.0 (+17.0) | 55.9 / 61.4 (+5.5)
Qwen2-VL (Prompt 2) | 46.0 / 57.6 (+11.6) | 57.2 / 60.6 (+3.4)
Qwen2-VL (Prompt 3) | 41.9 / 54.9 (+13.0) | 56.9 / 60.9 (+4.0)

Table 6: Ablation studies.

Method | CapsBench: Acc. / Color / Rel. Pos. / Shape
RICO | 59.02 / 67.14 / 59.51 / 53.68
RICO-Flash | 55.32 / 66.67 / 55.09 / 50.29
(a) wo/ tips | 54.33 / 62.23 / 50.95 / 42.11
(b) wo/ output analy. | 50.40 / 62.54 / 53.24 / 36.36
(c) finetune w/ pos. | 51.16 / 59.79 / 51.85 / 32.32
(d) infer with ICL | 45.26 / 49.83 / 42.13 / 26.26

4.5 Ablation Studies

We conduct ablation studies to validate our design choices, with results presented in Tab. 6. The findings are: (a) When the refinement model is not told which aspects to focus on, it struggles to identify key elements, resulting in a performance drop. (b) Omitting the requirement for the model to output an analysis process, which is intended to promote deliberate reasoning, also leads to degraded performance. Regarding the DPO method, we evaluate two alternative strategies: (c) directly fine-tuning the base model using positive samples, and (d) incorporating a positive sample into the prompt for in-context learning (Dong et al., 2024b). Both approaches yield inferior results compared to the DPO method, underscoring the effectiveness of DPO in our setting.

5 Conclusion

In this paper, we propose the RICO pipeline, which leverages visual reconstruction to improve the accuracy and completeness of image recaptioning. We also introduce an efficient variant, RICO-Flash, which learns the iterative refinement process of RICO by DPO. Experimental results show that our method achieves well-aligned semantic representations between images and their captions, and delivers strong recaptioning performance compared to prior baselines.
Further evaluations also confirm the generalizability of our approach. We hope RICO will inspire new techniques in image recaptioning and may contribute to advancements in broader multimodal research.

References

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, and 8 others. 2022. Flamingo: a visual language model for few-shot learning. Preprint, arXiv:2204.14198.

Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. Preprint, arXiv:1607.08822.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, and 29 others. 2023a. Qwen technical report. Preprint, arXiv:2309.16609.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023b. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. Preprint, arXiv:2308.12966.

Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, and Mike Zheng Shou. 2025. Hallucination of multimodal large language models: A
survey. Preprint, arXiv:2404.18930.

James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. 2023. Improving image generation with better captions.

Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. 2023. Sharegpt4v: Improving large multimodal models with better captions. Preprint, arXiv:2311.12793.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, arXiv:2501.12948.

Hongyuan Dong, Jiawen Li, Bohong Wu, Jiacong Wang, Yuan Zhang, and Haoyuan Guo. 2024a. Benchmarking and improving detail image caption. Preprint, arXiv:2405.19092.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, Baobao Chang, Xu Sun, Lei Li, and Zhifang Sui. 2024b. A survey on in-context learning. Preprint, arXiv:2301.00234.

Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, and Yonglong Tian. 2023. Improving clip training with language rewrites. Preprint, arXiv:2305.20088.

Taraneh Ghandi, Hamidreza Pourreza, and Hamidreza Mahyar. 2023. Deep learning approaches on image captioning: A review. ACM Computing Surveys, 56(3):1–39.

Dhruba Ghosh, Hanna Hajishirzi, and Ludwig Schmidt. 2023. Geneval: An object-focused framework for evaluating text-to-image alignment. Preprint, arXiv:2310.11513.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2022. Clipscore: A reference-free evaluation metric for image captioning. Preprint, arXiv:2104.08718.

Wenyi Hong, Weihan Wang, Ming Ding, Wenmeng Yu, Qingsong Lv, Yan Wang, Yean Cheng, Shiyu Huang, Junhui Ji, Zhao Xue, Lei Zhao, Zhuoyi Yang, Xiaotao Gu, Xiaohan Zhang, Guanyu Feng, Da Yin, Zihan Wang, Ji Qi, Xixuan Song, and 6 others. 2024. Cogvlm2: Visual language models for image and video understanding. Preprint, arXiv:2408.16500.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Preprint, arXiv:2106.09685.

Xiwei Hu, Rui Wang, Yixiao Fang, Bin Fu, Pei Cheng, and Gang Yu. 2024. Ella: Equip diffusion models with llm for enhanced semantic alignment. Preprint, arXiv:2403.05135.

Kaiyi Huang, Chengqi Duan, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. 2025. T2i-compbench++: An enhanced and comprehensive benchmark for compositional text-to-image generation. Preprint, arXiv:2307.06350.

Imagen-Team-Google, Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Lluis Castrejon, Kelvin Chan, Yichang Chen, Sander Dieleman, Yuqing Du, Zach Eaton-Rosen, Hongliang Fei, Nando de Freitas, Yilin Gao, Evgeny Gladchenko, Sergio Gómez Colmenarejo, Mandy Guo, and 243 others. 2024. Imagen 3. Preprint, arXiv:2408.07009.

Black Forest Labs. 2024. Flux. https://github.com/black-forest-labs/flux.

Zhengfeng
Lai, Haotian Zhang, Bowen Zhang, Wentao Wu, Haoping Bai, Aleksei Timofeev, Xianzhi Du, Zhe Gan, Jiulong Shan, Chen-Nee Chuah, Yinfei Yang, and Meng Cao. 2024. Veclip: Improving clip training via visual-enriched captions. Preprint, arXiv:2310.07699.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023a. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. Preprint, arXiv:2301.12597.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. Preprint, arXiv:2201.12086.

Zhuang Li, Yuyang Chai, Terry Yue Zhuo, Lizhen Qu, Gholamreza Haffari, Fei Li, Donghong Ji, and Quan Hung Tran. 2023b. Factual: A benchmark for faithful and consistent textual scene graph parsing. Preprint, arXiv:2305.17497.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. 2024. Evaluating text-to-visual generation with image-to-text generation. Preprint, arXiv:2404.01291.

Bingchen Liu, Ehsan Akhgari, Alexander Visheratin, Aleks Kamko, Linmiao Xu, Shivam Shrirao, Chase Lambert, Joao Souza, Suhail Doshi, and Daiqing Li. 2024a. Playground v3: Improving text-to-image alignment with deep-fusion large language models. Preprint, arXiv:2409.10695.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024b. Improved baselines with visual instruction tuning. Preprint, arXiv:2310.03744.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. Preprint, arXiv:2304.08485.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.
Preprint, arXiv:1711.05101.

Fan Lu, Wei Wu, Kecheng Zheng, Shuailei Ma, Biao Gong, Jiawei Liu, Wei Zhai, Yang Cao, Yujun Shen, and Zheng-Jun Zha. 2025. Benchmarking large vision-language models via directed scene graph for comprehensive image captioning. Preprint, arXiv:2412.08614.

OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024. Gpt-4o system card. Preprint, arXiv:2410.21276.

OpenAI. 2023. Gpt-4v(ision) system card.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Preprint, arXiv:2203.02155.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Ruotian Peng, Haiying He, Yake Wei, Yandong Wen, and Di Hu. 2025. Patch matters: Training-free fine-grained image caption enhancement via local perception. Preprint, arXiv:2504.06666.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text transformer. Preprint, arXiv:1910.10683.

Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. 2024. Timechat: A time-sensitive multimodal large language model for long video understanding. Preprint, arXiv:2312.02051.

Yanpeng Sun, Jing Hao, Ke Zhu, Jiang-Jiang Liu, Yuxiang Zhao, Xiaofan Li, Gang Zhang, Zechao Li, and Jingdong Wang. 2025. Descriptive caption enhancement with visual specialists for multimodal perception. Preprint, arXiv:2412.14233.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, and 1331 others. 2024a. Gemini: A family of highly capable multimodal models. Preprint, arXiv:2312.11805.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, Soroosh Mariooryad, Yifan Ding, Xinyang Geng, Fred Alcober, Roy Frostig, Mark Omernick, Lexi Walker, Cosmin Paduraru, Christina Sorokin, and 1118 others. 2024b. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Preprint, arXiv:2403.05530.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. Preprint, arXiv:2302.13971.

Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation.
Preprint, arXiv:1411.5726.

Junyang Wang, Yuhang Wang, Guohai Xu, Jing Zhang, Yukai Gu, Haitao Jia, Jiaqi Wang, Haiyang Xu, Ming Yan, Ji Zhang, and Jitao Sang. 2024a. Amber: An llm-free multi-dimensional benchmark for mllms hallucination evaluation. Preprint, arXiv:2311.07397.

Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024b. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. Preprint, arXiv:2409.12191.

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2024c. Cogvlm: Visual expert for pretrained language models. Preprint, arXiv:2311.03079.

Yuchi Wang, Shuhuai Ren, Rundong Gao, Linli Yao, Qingyan Guo, Kaikai An, Jianhong Bai, and Xu Sun. 2024d. Ladic: Are diffusion models really inferior to autoregressive counterparts for image-to-text generation? Preprint, arXiv:2404.10763.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Preprint, arXiv:2201.11903.

Hu Xu, Po-Yao Huang, Xiaoqing Ellen Tan, Ching-Feng Yeh, Jacob Kahn, Christine Jou, Gargi Ghosh, Omer Levy, Luke Zettlemoyer, Wen-tau Yih, Shang-Wen Li, Saining Xie, and Christoph Feichtenhofer. 2024. Altogether:
Image captioning via re-aligning alt-text. Preprint, arXiv:2410.17251.

Le Xue, Manli Shu, Anas Awadalla, Jun Wang, An Yan, Senthil Purushwalkam, Honglu Zhou, Viraj Prabhu, Yutong Dai, Michael S Ryoo, Shrikant Kendre, Jieyu Zhang, Can Qin, Shu Zhang, Chia-Chih Chen, Ning Yu, Juntao Tan, Tulika Manoj Awalgaonkar, Shelby Heinecke, and 8 others. 2024. xgen-mm (blip-3): A family of open large multimodal models. Preprint, arXiv:2408.08872.

Linli Yao, Weijing Chen, and Qin Jin. 2023. Capenrich: Enriching caption semantics for web images via cross-modal pre-trained knowledge. Preprint, arXiv:2211.09371.

Qinghao Ye, Xianhan Zeng, Fu Li, Chunyuan Li, and Haoqi Fan. 2025. Painting with words: Elevating detailed image captioning with benchmark and alignment learning. Preprint, arXiv:2503.07906.

Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Yue Cao, Xinlong Wang, and Jingjing Liu. 2024. Capsfusion: Rethinking image-text data at scale. Preprint, arXiv:2310.20550.

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. Preprint, arXiv:2304.10592.

A Additional Experimental Results

A.1 Qualitative Analysis of RICO

We present an example of the RICO refinement process in Fig. 5. We can see that as the refinement progresses, the caption is progressively revised to incorporate important missing details. Additionally, Fig. 6 provides a case accompanied by an in-depth analysis.
The analysis illustrates that our refinement model effectively identifies discrepancies and generates reasonable revision suggestions, resulting in more accurate and comprehensive captions.

A.2 Detailed Results on the Generalization of RICO

As discussed in § 4.4.3, our method consistently performs well across various initial captioning models and prompt configurations. Extended results for different prompt variants are presented in Tab. 9, with the corresponding prompt templates listed in Tab. 10. Detailed results using different initial captioning models are provided in Tab. 8. These findings further validate the robustness and effectiveness of RICO under diverse settings.

A.3 Detailed Results of the Text-to-Image Generation Experiment

We present the expanded results of Tab. 4 in Tab. 11. The text-to-image model trained with captions generated by our method consistently outperforms the model trained with initial captions across nearly all metrics, demonstrating improved alignment between image and text semantic spaces in RICO.

B Additional Information on Experimental Settings

B.1 Details of Baselines and Our Implementations

We compare our method with several recaptioning baselines. The details of each are provided below:

LaCLIP (Fan et al., 2023) LaCLIP identifies that in CLIP training, text inputs tend to be underutilized due to a lack of augmentation. To address this, the authors propose leveraging large language models (LLMs) to rewrite the given text. Specifically, ChatGPT is used to generate meta input-output pairs, which are then used as in-context examples to prompt LLaMA (Touvron et
al., 2023) for generating refined captions. In our implementation, we follow the same procedure to obtain enhanced captions. Specifically, we first employ Qwen2-VL-7B-Instruct to simulate the generation of alt text using the prompt: "Describe the image using a few essential keywords. Keep it concise, within 10 words." The meta input-output pairs generated by ChatGPT are then used as in-context examples to prompt Qwen2-VL-7B-Instruct, which generates the final refined captions.

Table 7: Instructions provided to human annotators in the caption editing experiment.

== INSTRUCTION TO ANNOTATORS ==
We are working on an image captioning task. The following caption was generated by an AI model. Please help refine this caption by correcting any errors or ambiguities based on the image, and feel free to add any important details that are missing from the original caption.

VeCLIP (Lai et al., 2024) While previous methods like LaCLIP focus solely on textual rewriting, VeCLIP emphasizes the incorporation of visual concepts into the caption. It first employs a multimodal LLM (LLaVA) to generate captions independently of the original alt text, and then fuses these captions with the original using another LLM, such as Vicuna (Chiang et al., 2023). In our implementation, we follow the official pipeline. We adopt the same approach as in LaCLIP to generate the initial alt texts. We then utilize LLaVA-1.5-7B-Chat to generate supplementary captions. Finally, Qwen2-VL-7B-Instruct is prompted to fuse these two captions.

CapsFusion (Yu et al., 2024) CapsFusion highlights the importance of combining web-based alt texts and synthetic captions. The authors construct a dataset of 1 million examples by prompting ChatGPT to fuse these two types of captions, which is then used to fine-tune LLaMA, resulting in the CapsFusion-LLaMA model.
Technically, we adopt the official implementation: we use the same ap- proach as LaCLIP to generate alt texts, utilize Qwen2-VL-7B-Instruct to produce synthetic cap- tions, and apply the official CapsFusion-LLaMA model weights for fusion. Self-Loop (Dong et al., 2024a) In the CAP- TURE (Dong et al., 2024a) paper, the authors intro- duce a new metric to evaluate image captioning and 12 A woman underneath a cherry blossom tree is setting up a picnic on a yellow checkered blanket. In the pond, a group of people enjoying the serenity of the sunset in a rowboat. In the distance, a building with Japanese -inspired architecture is perched on the lake. A woman with shoulder -length hair, dressed in a kimono , is setting up a picnic on a yellow checkered blanket beneath a cherry blossom tree. Various foods, such as cakes and tea, are scattered across the blanket . In the pond, a group of people enjoying the serenity of the sunset in a rowboat. In the distance, a building with Japanese - inspired two-story architecture is perched on the lake. surrounded by numerous cherry blossom trees. A woman with shoulder -length hair, dressed in a white kimono, is setting up a picnic on a yellow checkered blanket beneath a cherry blossom tree. Various foods, such as cakes and tea, are scattered across the | https://arxiv.org/abs/2505.22613v1 |
blanket, along with two boxes filled with food. In the pond, a group of people enjoying the serenity of the sunset in a rowboat. Some people stand on a small island in the lake on the left side of the frame . In the distance, a two-story Japanese tower is perched on the lake. surrounded by numerous cherry blossom trees. The overall scene is bathed in the golden light of the sunset. Original Image Image 1 Image 2 Image 3 Caption 1 Caption 2 Caption 3Figure 5: An example demonstrating the iterative refinement process performed by our model, where red text indicates added or corrected information. Table 8: Detailed performance of RICO across different initial captioning models. ModelCapsBench CompreCap Acc. Color Shape Rel. Pos. Obj. Pix. Rel. Attr. Qwen2-VL (Wang et al.) 42.02 48.11 27.27 32.41 69.82 60.02 2.66 2.81 Qwen2-VL + RICO 59.02 (+17.00) 67.14 (+19.03) 53.68 (+26.41) 59.51 (+27.10) 75.04 (+5.22) 63.04 (+3.02) 2.85 (+0.19) 2.82 (+0.01) CogVLM 2 (Hong et al.) 45.10 47.77 28.23 39.81 68.54 59.21 2.57 2.61 CogVLM 2 + RICO 57.51 (+12.41) 63.67 (+15.90) 35.46 (+7.23) 48.76 (+8.95) 75.37 (+6.83) 61.65 (+2.44) 2.78 (+0.21) 2.75 (+0.14) GPT-4o (OpenAI et al.) 49.63 54.64 28.28 48.15 70.93 60.09 2.67 2.77 GPT-4o + RICO 57.68 (+8.05) 63.24 (+8.60) 44.57 (+16.29) 59.47 (+11.32) 74.47 (+3.54) 62.11 (+2.02) 2.76 (+0.09) 2.81 (+0.04) Gemini 1.5 Pro (Team et al.) 49.71 51.20 23.23 36.57 71.77 60.28 2.89 2.71 Gemini 1.5 Pro + RICO 57.72 (+8.01) 65.70 (+14.50) 37.50 (+14.27) 50.48 (+13.91) 75.77 (+4.00) 61.97 (+1.69) 2.85 (+-0.04) 2.83 (+0.12) BLIP-3 (Xue et al.) 37.03 40.55 19.19 29.63 67.85 56.99 2.61 2.50 BLIP-3 + RICO 56.21 (+19.18) 66.20 (+25.65) 37.76 (+18.57) 55.61 (+25.98) 74.31 (+6.46) 61.47 (+4.48) 2.79 (+0.18) 2.75 (+0.25) LLaV A 1.5 (Liu et al.) 
29.51 27.84 9.09 18.06 57.14 44.48 2.02 2.38 LLaV A 1.5 + RICO 53.13 (+23.62) 61.07 (+33.23) 36.84 (+27.75) 48.10 (+30.04) 76.38 (+19.24) 61.49 (+17.01) 2.82 (+0.80) 2.82 (+0.44) Table 9: Detailed performance of RICO across different initial prompts. ModelCapsBench CompreCap Acc. Color Shape Rel. Pos. Obj. Pix. Rel. Attr. Prompt #1 42.02 48.11 27.27 32.41 69.82 60.02 2.66 2.81 + RICO 59.02 (+17.00) 67.14 (+19.03) 53.68 (+26.41) 59.51 (+27.10) 75.04 (+5.22) 63.04 (+3.02) 2.85 (+0.19) 2.82 (+0.01) Prompt #2 45.97 49.14 23.22 40.74 69.29 60.41 2.69 2.62 + RICO 57.64 (+11.67) 65.45 (+16.31) 39.08 (+15.86) 56.45 (+15.71) 74.83 (+5.54) 62.65 (+2.24) 2.80 (+0.11) 2.79 (+0.17) Prompt #3 41.85 43.30 23.23 36.11 68.46 58.89 2.72 2.59 + RICO 54.85 (+13.00) 66.54 (+23.24) 47.25 (+24.02) 52.85 (+16.74) 75.15 (+6.69) 62.40 (+3.51) 2.80 (+0.08) 2.82 (+0.23) 13 Table 10: Different prompts used to generate initial captions. ============ DIFFERENT PROMPTS TO GENERATE INITIAL CAPTIONS ============ Prompt #1: Describe this image in detail. Your answer should be concise and informative. Prompt #2: Describe the image with rich and detailed observations. You may pay attention to the dimensions of overall, main subject, background, movement of main subject, style, camera movement and so on. Prompt #3: Give this image a detailed caption. Table 11: Extended version of the evaluation of the text-to-image model. | https://arxiv.org/abs/2505.22613v1 |
| Model | DPG-Bench Entity | Relation | Attribute | Global | Overall | VQAScore |
|---|---|---|---|---|---|---|
| FLUX w/ Init. Cap. | 85.110 | 89.950 | 80.080 | 72.414 | 78.502 | 0.841 |
| FLUX w/ RICO-DPO | 86.850 | 90.551 | 82.831 | 75.172 | 80.336 | 0.852 |

design a self-looping caption improvement pipeline guided by this metric. In detail, the method detects objects in the image, generates local captions, filters out hallucinated objects, and merges the local descriptions with the overall caption. We use the official repository to run this baseline.

ShareGPT4V (Chen et al., 2023). ShareGPT4V underscores the critical role of captions in MLLM training. It uses carefully crafted prompts to guide GPT-4V in generating high-quality descriptions, and then trains a Share-Captioner model to replicate this behavior. In our experiments, we use Share-Captioner to generate captions as part of the baseline comparison.

B.2 Details of Evaluation Benchmarks

Traditional caption evaluation metrics (Anderson et al., 2016; Lin, 2004) are not well suited to captions generated by modern MLLMs. In our work, we adopt the following evaluation metrics:

CapsBench. Proposed in Playground v3 (Liu et al., 2024a), CapsBench is a benchmark designed to evaluate the comprehensiveness and accuracy of image captions. For each image, a set of "yes-no" question-answer pairs is generated across 17 semantic categories. During evaluation, an LLM is tasked with answering these questions based solely on the candidate caption. The possible answers are "yes", "no", and "n/a" (for unanswerable questions). The predicted answers are compared with the ground truth to compute the overall accuracy. This benchmark effectively assesses whether a model can capture accurate and comprehensive information from the image. In our implementation, we use GPT-4o (2024-08-06) as the judge model.

CompreCap. CompreCap (Lu et al., 2025) is a benchmark that evaluates the compositional understanding of detailed visual scenes through a directed scene graph framework.
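Once the judge model has answered each CapsBench question from the candidate caption alone, scoring reduces to a simple accuracy computation over the predicted and annotated answers. A minimal sketch of that final step (the judge call itself is omitted; the function and variable names are our own illustration, not identifiers from the RICO or CapsBench codebases):

```python
def capsbench_accuracy(judge_answers, ground_truth):
    """Fraction of QA pairs where the judge's answer ("yes"/"no"/"n/a"),
    produced from the candidate caption alone, matches the annotation."""
    assert len(judge_answers) == len(ground_truth)
    correct = sum(a == gt for a, gt in zip(judge_answers, ground_truth))
    return correct / len(ground_truth)

# Example: the judge matches 3 of 4 annotated answers; the "n/a" on an
# answerable question counts as a miss, since the caption omitted that detail.
answers = ["yes", "no", "n/a", "yes"]
truth = ["yes", "no", "yes", "yes"]
print(capsbench_accuracy(answers, truth))  # 0.75
```

Per-category scores (e.g., Color, Shape in Table 8) follow the same computation restricted to the questions of one semantic category.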
Each image is segmented into semantically meaningful regions, and objects within these regions are annotated with attributes and directional relations to form a directed scene graph. The benchmark then assesses generated captions on three levels: (1) object-level coverage, (2) accuracy of attribute descriptions, and (3) correctness of key relationships. This benchmark is particularly effective at evaluating a model's ability to capture relational and compositional details. We adopt the official implementation for our evaluation.

Amber. Amber (Wang et al., 2024a) is designed to evaluate hallucinations in MLLM-generated captions by comparing the set of objects mentioned in the caption with a pre-annotated object list for the image. It defines several metrics: CHAIR, which quantifies the frequency of hallucinated (i.e., nonexistent) objects, and Cover, which measures how well the caption covers the annotated objects. Following the original paper's claim that "an ideal response is considered to be one that minimizes hallucinatory content without significantly compromising the coverage of objects in the image," we adopt a unified metric, Cover − CHAIR, to reflect this trade-off. This provides a concise and interpretable measure of caption faithfulness.

Caption 0: The image captures a breathtaking winter scene with a person ice skating on a frozen lake. The main subject, a skater, is positioned in the center, with their arms raised in a dynamic pose, suggesting movement and balance. The ice is transparent, revealing the blue water beneath, creating a striking contrast. The background features majestic mountains under a clear sky with a few scattered clouds. The sunlight casts a warm glow, highlighting the skater and the ice's surface. The overall composition is balanced, with the skater and the ice slice drawing the viewer's eye towards the horizon. The image is taken from a low angle, emphasizing the vastness of the landscape and the skater's action.

Proposed Analysis: The main differences between the original and reconstructed images lie in the foreground details, the clothing of the skater, the lighting, and the overall composition. The original image features a large, prominent triangular slab of ice in the foreground that serves as a significant visual element, which is absent in the reconstructed image. The skater in the original image is silhouetted against the sun, creating a backlit effect with dramatic contrast and long shadows, while the reconstructed image lacks this silhouette effect and has more neutral lighting on the skater. Additionally, the skater in the original is dressed in dark clothing, which contributes to the silhouette, while in the reconstructed image, the skater is wearing a beige jacket and red shoes. The low-angle shot in the original emphasizes the expansive foreground and the interplay of light and shadow, which is not captured in the reconstructed version. By adding detailed descriptions of the ice slab, the skater's clothing, and the specific lighting and shadow effects, the caption can be improved to guide the reconstruction process more precisely.

Caption 1: The image captures a breathtaking winter scene with a person ice skating on a frozen lake. In the foreground, there is a prominent, large triangular slab of transparent ice with visible textures, creating a focal point. The main subject, a skater dressed entirely in dark clothing, is positioned in the center with arms raised dynamically, suggesting movement and balance. The skater's silhouette is backlit by the sun, which casts a long shadow and a starburst effect behind them, enhancing the dramatic ambiance. The ice is transparent with a deep blue tint, and scattered white spots of snow are visible. The background features majestic silhouettes of mountains under a clear sky with some scattered clouds. The photo is shot from a low angle, emphasizing the vastness of the landscape and focusing on the interplay between the ice and the skater's shadow.

Figure 6: An example demonstrating the output analysis produced by our model, where green text highlights important aspects identified in the analysis, and blue text indicates information incorporated into the updated caption.

CAPTURE. CAPTURE (Dong et al., 2024a) introduces a benchmark designed to evaluate detailed image captioning performance by extracting and comparing core visual elements in generated captions. Unlike traditional metrics that rely on n-gram overlaps, CAPTURE focuses on the alignment of semantic content by parsing captions into structured scene graphs comprising objects, attributes, and