al. 2021), and (Elhage et al. 2022).
b. Validating and applying interpretability methods: Developing rigorous criteria and benchmarks for evaluating the reliability of interpretability methods, and understanding whether these methods maintain their validity when applied to actively modify model behavior. Example work includes (K. Wang et al. 2022) and (Schwettmann et al. 2023).
c. Feature and circuit analysis: Creating scalable approaches for feature interpretation, circuit discovery, and feature steering (or top-down/control vectors; a toy sketch follows this section). Example work includes (Nanda et al. 2023) and (Conmy et al. 2023).
d. Eliciting Latent Knowledge (ELK): Developing methods to reveal hidden knowledge embedded within models, enabling researchers to identify what models implicitly “know” about the world and how this knowledge influences predictions. Example work includes (Christiano 2023).
e. Developmental interpretability: Investigating how AI models' internal representations and behaviors evolve throughout their training process to understand the developmental stages and mechanisms by which complex capabilities emerge. This research aims to uncover the progressive changes in model structure and function, facilitating better alignment and safety assurances. Example work includes (G. Wang et al. 2024) and (Hoogland et al. 2025).
f. Transparency: Research here aims to open up the black box of AI systems by uncovering how data, architecture, and training processes shape model outputs. Example research focuses on advanced documentation frameworks, auditing tools to surface biases or vulnerabilities, and reporting protocols to effectively explain outputs and communicate uncertainty. Example work includes (Felzmann et al. 2020), (Liao and Vaughan 2024), and (Bommasani et al. 2023).
g. Explainability: Methods to understand why a model generates specific outputs. Technical approaches include developing post-hoc or embedded explanation methods, measuring and improving explanation fidelity, and crafting user-focused interfaces that clarify causal or logical relationships. Example work includes (Madsen, Chandar, and Reddy 2024) and (Y. Wu et al. 2022).
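To make the feature-steering idea in (c) concrete, here is a minimal sketch of a difference-of-means control vector, assuming access to residual-stream activations from contrastive prompt sets; the matrices below are random stand-ins, and the dimensions and scaling factor are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical residual-stream activations (n_prompts x d_model) collected
# from prompts that do vs. do not exhibit the target trait.
acts_pos = np.random.randn(64, 512)
acts_neg = np.random.randn(64, 512)

# Difference-of-means direction, normalized to unit length.
control = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)
control /= np.linalg.norm(control)

def steer(hidden_state: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Shift an activation along the control direction at inference time."""
    return hidden_state + alpha * control
```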
6. Robustness: Ensuring that AI systems remain reliable and secure in the face of adversarial manipulation, misaligned inputs, and uncertain conditions, such as by protecting against prompt-based exploits, poisoning attacks, and adversarial perturbations, and introducing control mechanisms and uncertainty quantification methods to maintain resilient system behavior at scale.
a. Defending against jailbreaks and prompt injections: Improving state-of-the-art methods for discovering, evaluating, and defending against prompt injection and "jailbreaking" attacks. Research also focuses on structural defenses, such as detection, filtering, and paraphrasing of prompts, as well as addressing vulnerabilities stemming from a lack of robust privilege levels (e.g., system prompt vs. user instruction) in LLM inputs. Example work includes (Zou et al. 2023), (Bailey et al. 2024), and (Han et al. 2024).
b. Defending against poisoning and backdoors: Understanding how LLMs can be compromised through data poisoning at various training stages, examining the effect of model scale on vulnerability, testing out-of-context reasoning under poisoning, and exploring attacks via additional modalities and encodings. This area also includes detecting and removing backdoors (i.e., Trojan detection) to ensure that covertly embedded harmful behaviors are mitigated. Example work includes (Carlini et al. 2024), (Qi, Xie, et al. 2023), (Wan et al. 2023), and (Hubinger et al. 2024).
c. Adversarial robustness to perturbations: This area investigates how models can be made more resilient to carefully crafted adversarial perturbations designed to degrade performance or reveal vulnerabilities. Research involves identifying methods for bolstering model robustness under challenging conditions, including adversarial training and certified defenses. Example work includes (Nandi et al. 2023) and (Addepalli et al. 2022).
d. Control mechanisms for untrusted models: Designing and evaluating protocols to control outputs from untrusted models. This includes methods for monitoring backdoored outputs, integrating control measures with traditional insider risk management strategies, building safety cases for control tools, and employing white-box techniques (e.g., linear probes) for continuous oversight. Example work includes (Greenblatt et al. 2024).
e. Uncertainty quantification: Quantifying uncertainty in model predictions. Techniques include ensemble methods, conformal prediction, and Bayesian approaches to estimate and calibrate model confidence (a minimal sketch follows this section). Example work includes (Y. Hu et al. 2022) and (Shaker and Hüllermeier 2021).
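As a concrete instance of the conformal prediction technique named in sub-area (e) above, here is a minimal split conformal sketch; the calibration residuals are synthetic stand-ins, and the point predictor and coverage level are assumptions for illustration.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_new, alpha=0.1):
    """Turn a point prediction into an interval with finite-sample
    coverage >= 1 - alpha, assuming exchangeable calibration data."""
    n = len(residuals_cal)
    # Conformal quantile with the (n + 1)/n finite-sample correction.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(residuals_cal, q_level, method="higher")
    return y_pred_new - qhat, y_pred_new + qhat

# Calibration residuals |y - f(x)| on held-out data (synthetic here).
rng = np.random.default_rng(0)
residuals_cal = np.abs(rng.normal(0.0, 1.0, size=500))
lo, hi = split_conformal_interval(residuals_cal, y_pred_new=3.2)
print(f"90% prediction interval: [{lo:.2f}, {hi:.2f}]")
```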
7. Improving the science of AI evaluation: Ensuring that AI systems can be accurately assessed and understood. This includes theoretical work in capability and safety evaluation, as well as improving the reliability and fairness of evaluation processes.
a. Theoretical foundations for evaluation: Research includes creating rigorous frameworks for predicting capabilities (as opposed to relying solely on benchmarks), understanding generality and generalization in LLMs, and developing theory-grounded taxonomies of model capabilities. Example work includes (Burnell et al. 2023) and (Schaeffer et al. 2025).
b. Evaluation methodology and metrics: Research focuses on designing holistic, theory-grounded metrics (e.g., focused on more than just harmlessness), accounting for scaffolding in evaluations, and characterizing safety–performance trade-offs. Example work includes (Liang et al. 2023) and (Kiela et al. 2021).
c. Studying misalignment through simplified model organisms: Developing and studying simplified AI models—“model organisms”—to probe potential misalignment, gain insights into failure modes, and refine evaluation strategies without the complexity of full-scale systems. Example work includes (Hubinger et al. 2024).
d. Improving evaluation robustness: Methods here aim to stabilize evaluations against sensitivity to prompts, detect and address contaminated data, ensure that evaluations remain meaningful even if models are fine-tuned in a targeted manner for certain tasks, and mitigate bias in AI evaluations (including biases in crowdsourced human evaluations). Example work includes (Sclar et al. 2024), (Golchin and Surdeanu 2024), (M. Wu and Aji 2023), (Yong, Menghini, and Bach 2024), and (Z. Wu et al. 2024).
e. Detecting and addressing previously unmeasured or latent capabilities: Developing strategies to uncover latent harmful abilities within AI models and prevent models from exhibiting undesirable behaviors such as “sandbagging,” or deceptively underperforming during evaluations. Example work includes (Ward et al. 2023) and (van der Weij et al. 2025).
8. Domain-specific AI evaluation design: Developing specialized evaluation tools to assess AI models' capabilities and safety in critical areas such as automated AI research and development, cybersecurity, chemical/biological/radiological/nuclear (CBRN) scenarios, and manipulative behaviors like deception and persuasion.
a. Automated AI R&D evaluations: Designing evaluations to assess a model’s capacity to generate research ideas, propose improvements to algorithms, or autonomously advance AI capabilities. Example work includes (Wijk et al. 2024) and (Owen 2024).
b. Cyber evaluations: Designing evaluations to assess a model’s ability to understand, exploit, or defend against cybersecurity threats and vulnerabilities. Example work includes (AI Security Institute 2024).
c. CBRN (Chemical, Biological, Radiological, and Nuclear) evaluations: Designing evaluations to assess a model’s understanding of hazardous CBRN materials and scenarios, ensuring it cannot be easily leveraged to facilitate harmful acts involving these agents. Example work includes (AI Security Institute 2024) and (Laurent et al. 2024).
d. Evaluating deception, scheming, situational awareness, and persuasion: Designing evaluations to assess how well models can deceive, strategize, maintain situational awareness, or influence human decision-making. Example work includes (Scheurer, Balesni, and Hobbhahn 2024).
9. Agentic LLMs and single-agent risks: Developing a deeper understanding of agentic behavior in LLM-based systems. This work clarifies how LLM-agents learn over time, respond to underspecified goals, and engage with their environments.
a. Lifelong learning and goal-directedness in LLM agents: Investigating how agentic LLMs evolve through ongoing learning and potentially exhibit undesirable behaviors due to goal-directedness. Example work includes (Qi, Zeng, et al. 2023) and (Perez et al. 2023).
b. Robustness to underspecification: Enhancing methods to ensure LLM-agents remain aligned despite vague or shifting objectives. Example work includes (Mu et al. 2023) and (Turner, Hadfield-Menell, and Tadepalli 2020).
c. Oversight and monitoring of LLM-agents: Building automated oversight and monitoring tools to track LLM-agent actions (a sketch follows this section). Example work includes (Naihin et al. 2023) and (A. Chan et al. 2024).
d. Evaluating tool affordances for LLM-agents: Evaluating the safety of providing LLM-agents with tools and affordances, and determining whether robust safety assurances are possible for given affordances. Example work includes (Ruan et al. 2024).
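To illustrate the kind of automated oversight tooling described in sub-area (c), the sketch below wraps a tool handed to an LLM agent so that every invocation is policy-checked and logged before execution; `monitored`, `policy_allows`, and the log path are hypothetical names, not an existing API.

```python
import json
import time
from typing import Any, Callable

def monitored(tool: Callable[..., Any], name: str,
              log_path: str = "agent_actions.jsonl") -> Callable[..., Any]:
    """Wrap a tool so every agent call is vetted and logged before running."""
    def wrapper(*args, **kwargs):
        record = {"tool": name, "args": args, "kwargs": kwargs, "t": time.time()}
        if not policy_allows(record):          # hypothetical policy hook
            raise PermissionError(f"blocked call to {name}")
        with open(log_path, "a") as f:          # append-only action trail
            f.write(json.dumps(record, default=str) + "\n")
        return tool(*args, **kwargs)
    return wrapper

def policy_allows(record: dict) -> bool:
    # Stand-in policy: block shell commands containing a destructive pattern.
    return "rm -rf" not in json.dumps(record, default=str)
```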
10. Multi-agent interactions: Research focusing on ensuring safe multi-agent interactions, such as by detecting and preventing malicious collective behaviors, studying how transparency can affect agent interactions, and developing evaluations for agent behavior and interaction.
a. Safety and emergent functionality in multi-agent interactions: Understanding how individual agent dispositions and capabilities scale into complex multi-agent dynamics, evaluating emergent functionalities (e.g., coordinated strategies), enhancing the robustness of LLM agents to correlated failures stemming from shared foundation models, and applying insights from multi-agent RL research to LLM-based systems. Example work includes (Critch, Dennis, and Russell 2022), (Juneja et al. 2024), and (Yocum et al. 2023).
b. Detecting and preventing collusion and emergent collective behavior: Developing detection techniques (e.g., information-theoretic or interpretability-based) for collusion between AI agents, benchmarking and evaluating collusive tendencies, designing mitigation strategies such as oversight regimes, communication restrictions, and methods for steering agents, understanding conditions (e.g., agent similarity, communication channels, environment structure) that facilitate collusion, and understanding why and how general “super-agents” might develop from many narrow agents. Example work includes (Motwani et al. 2025), (Calvano et al. 2020), and (Ha and Tang 2022).
c. Multi-agent security: Assessing unique security risks that arise in multi-agent ecosystems, designing defenses (e.g., secure communication protocols, improved network architectures, information security), studying how multiple systems can circumvent safeguards, evaluating the robustness of cooperation to adversarial attacks (e.g., whether a small number of malicious agents can destabilize larger groups), evaluating how well agents can adversarially attack each other, and studying how training AI agents on data generated by one another affects shared vulnerabilities and correlated failure modes. Example work includes (D. Zhang et al. 2021), (D. Lee and Tiwari 2024), and (Jones, Dragan, and Steinhardt 2024).
d. Network effects and destabilizing dynamics in agent ecosystems: Understanding which network structures and interaction patterns lead to robust or fragile systems, monitoring and controlling the dynamics and co-adaptation of networks of advanced agents, and identifying important security concerns in existing and future multi-agent application areas (e.g., finance, energy grids) and applying lessons from those areas to manage destabilizing forces. Example work includes (Leonardos and Piliouras 2020), (Bloembergen et al. 2015), and (Sanders, Farmer, and Galla 2018).
e. Transparency, information asymmetries, and communication protocols: Studying how agent transparency (e.g., code access) or predictability of agents can influence cooperation or defection, scaling Bayesian persuasion and information design to complex multi-agent settings, developing secure information transmission methods between AI agents to promote cooperation, examining how agent similarity and evidential reasoning about others affect the ability and propensity to cooperate, and developing efficient algorithms for zero- or few-shot coordination in high-stakes scenarios. Example work includes (Conitzer and Oesterheld 2023), (Kamenica 2019), and (H. Hu et al. 2021).
f. Multi-agent metrics and evaluations: Distinguishing and measuring cooperative dispositions, understanding agents’ robustness against coercion or exploitation, quantifying traits like altruism or spite, assessing the impact of capability asymmetries between agents, examining how training processes and data sources influence cooperation, and developing dangerous capability evaluations for multi-agent systems (a toy harness follows section 11 below). Example work includes (Abdelnabi et al. 2023), (Agapiou et al. 2023), and (Mukobi et al. 2023).
11. Cooperative AI and mechanism design: Fostering beneficial multi-agent ecosystems through research on human-AI interaction, mechanism design, communication protocols, peer incentivization, and automated mechanism design.
a. Human-AI interaction and collaboration: Designing AI systems that can understand and predict human actions and preferences; creating interfaces and protocols for effective human-AI teamwork; understanding how interactive AI may change human decision making. Example work includes (Leitão et al. 2022) and (Alon-Barkat and Busuioc 2023).
b. Mechanism design and multi-agent communication: Focuses on foundational concepts like social choice theory, incentive alignment, and emergent communication protocols in multi-agent systems to ensure cooperation and fair outcomes. Example work includes (Guresti, Vanlioglu, and Ure 2023) and (Brandt et al. 2016).
c. Peer incentivisation and automated mechanism design: Focuses on practical and scalable applications of mechanism design, including methods for incentivizing cooperation among agents, designing secure and scalable inter-agent contracting and norm enforcement mechanisms, and structured opponent-shaping strategies in complex environments. Example work includes (J. Yang et al. 2020), (Eccles et al. 2019), and (Foerster et al. 2018).
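As a toy version of the cooperative-disposition metrics mentioned in sub-area 10.f, the following harness plays an iterated prisoner's dilemma and reports an agent's payoff alongside its cooperation rate; the strategies and payoff matrix are textbook stand-ins, not drawn from the surveyed work.

```python
# Payoffs for the iterated prisoner's dilemma: (my move, their move) -> my reward
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    return history[-1][1] if history else "C"   # copy the opponent's last move

def always_defect(history):
    return "D"

def play(agent_a, agent_b, rounds=200):
    """Return agent A's total payoff and its cooperation rate against B."""
    hist_a, hist_b, score_a, coop = [], [], 0, 0
    for _ in range(rounds):
        a, b = agent_a(hist_a), agent_b(hist_b)
        hist_a.append((a, b))
        hist_b.append((b, a))
        score_a += PAYOFF[(a, b)]
        coop += (a == "C")
    return score_a, coop / rounds

print(play(tit_for_tat, always_defect))  # low cooperation once exploited
```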
12. Fairness: Research focusing on developing equitable AI systems, including detecting and mitigating bias, ensuring fair representation across diverse groups, addressing fairness in dynamic or constrained data scenarios, and reconciling conflicting fairness definitions to align interventions with societal values.
a. Fairness under dynamic and constrained data scenarios: Ensuring that fairness interventions remain effective under continual learning, adaptive deployment, or evolving operational contexts. Example work includes (Zhao et al. 2022) and (Zhao et al. 2023).
b. Fair representation and participation in AI systems: Promoting fair representation and generalization across different subpopulations, and ensuring inclusive participation in the development and governance of AI systems. Example work includes (Birhane et al. 2022) and (Kirk et al. 2024).
c. Bias detection, quantification, and mitigation techniques: Developing systematic methods to detect, measure, and reduce bias in model outputs, ranging from pre-processing adjustments to post-hoc corrections (a measurement sketch follows this section). This may also include causal methods for fairness, such as causal modeling techniques to distinguish between genuine causal relationships and spurious correlations in observed disparities, enabling fairness interventions that address underlying structural causes. Example work includes (Plecko and Bareinboim 2022), (Binkytė-Sadauskienė et al. 2022), (Hort et al. 2023), and (Mehrabi et al. 2021).
d. Fairness in multilingual, cross-cultural, and multimodal contexts: Addressing fairness challenges that arise when models operate across different languages, cultures, and data modalities. Example work includes (Adewumi et al. 2024) and (Tao et al. 2024).
e. Intersectional fairness and complex group structures: Addressing compounded biases that arise when protected attributes overlap, such as race and gender, to ensure fairness approaches capture nuanced harms across intersectional groups. This research develops computational methods and evaluation frameworks to avoid oversimplifying population categories and to identify disparities affecting complex group structures. Example work includes (Gohar and Cheng 2023) and (W. Guo and Caliskan 2021).
f. Reconciling multiple fairness definitions and normative trade-offs: Comparing and combining conflicting formal definitions of fairness to address the normative trade-offs they entail and align fairness interventions with societal values. This research clarifies the theoretical and practical implications of fairness definitions, helping practitioners navigate complex policy and ethical considerations. Example work includes (Buijsman 2023) and (Bateni, Chan, and Eitel-Porter 2022).
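A minimal sketch of the group-fairness measurement named in sub-area (c): computing per-group selection and true-positive rates, from which demographic-parity and equalized-odds gaps follow; all arrays are synthetic stand-ins.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group selection rate P(Yhat=1 | G=g) and TPR P(Yhat=1 | Y=1, G=g)."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        selection = y_pred[m].mean()
        tpr = y_pred[m & (y_true == 1)].mean()
        rates[g] = (selection, tpr)
    return rates

# Synthetic example. The demographic-parity gap is the max difference in
# selection rates across groups; the equalized-odds (TPR) gap is the analogue.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)
rates = group_rates(y_true, y_pred, group)
dp_gap = max(r[0] for r in rates.values()) - min(r[0] for r in rates.values())
```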
13. Accountability: Research focusing on ensuring AI systems are transparent, reliable, and compliant, including developing auditing tools, attributing AI outputs to specific models, mitigating risks of power concentration in AI development and deployment, and automating regulatory compliance.
a. Auditing mechanisms: Developing automated post-hoc auditing tools, using privacy-enhancing technology to facilitate secure audit access to sensitive data, building auditability into systems by design, and establishing continuous accountability pipelines that monitor, log, and assess model behaviors over time to support transparent and verifiable assessments. Example work includes (Marone and Durme 2023), (Ilyas et al. 2022), and (Bluemke et al. 2023).
b. Methods for detecting and attributing LLM outputs: Developing techniques—such as watermarking or model fingerprinting—to identify and attribute content to its source model, providing a foundation for accountability, reducing misinformation risks, and clarifying responsibility (a toy detector follows this section). Example work includes (Kirchenbauer et al. 2024) and (J. Xu et al. 2024).
c. Regulatory compliance automation: Automating processes to ensure that models conform to legal standards, industry guidelines, and ethical principles helps organizations proactively meet accountability requirements while reducing manual oversight burdens. Example work includes (Cappelli and Di Marzo Serugendo 2025) and (Sojasi 2024).
d. Methods for mitigating power concentration in AI: Investigating mechanisms to prevent the centralization of AI capabilities and influence—such as decentralized governance, open-source contributions, and equitable resource allocation. Example work includes (Y. Liu et al. 2024) and (Montes and Goertzel 2019).
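In the spirit of the green-list watermarking scheme of (Kirchenbauer et al. 2024), here is a toy detector. A watermarking sampler biases generation toward a "green" subset of the vocabulary seeded by the preceding token, so detection reduces to a one-proportion z-test on how often tokens land in their green lists; the hashing scheme and token ids below are simplified stand-ins, not the published construction.

```python
import hashlib
import math

def is_green(prev_token: int, token: int, vocab_size: int, gamma: float = 0.5) -> bool:
    """Token is 'green' if a hash seeded by the previous token places it
    in the first gamma fraction of a pseudo-random vocabulary ordering."""
    h = int(hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest(), 16)
    return (h % vocab_size) < gamma * vocab_size

def watermark_z_score(tokens: list[int], vocab_size: int, gamma: float = 0.5) -> float:
    """Unwatermarked text hits green lists at rate ~gamma; a large z-score
    suggests the text was generated with the watermark bias."""
    n = len(tokens) - 1
    green = sum(is_green(p, t, vocab_size, gamma) for p, t in zip(tokens, tokens[1:]))
    return (green - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```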
14. Ethics: Work on AI ethics includes developing methods for integrating ethical considerations into training, evaluation, and decision-making processes, as well as techniques for mitigating harmful outputs and ensuring cultural and long-term ethical consistency.
a. Ethics-aware training and fine-tuning: Research on learning from imperfect ethical datasets, applying ethics-aware data curation methods, and incorporating collective ethical principles into model design. Example work includes (Hendrycks et al. 2023) and (Lourie, Bras, and Choi 2021).
b. Ethical decision-making frameworks: Developing formal risk-aware, algorithmic-harms-assessment, and domain-specific ethical decision-making frameworks tailored for large language models and related AI systems. Example work includes (Tennant, Hailes, and Musolesi 2023) and (Ji et al. 2024).
c. Mitigating harmful outputs: Approaches include refining models to reduce the production of dangerous, misleading, or otherwise harmful outputs, employing filtering, red-teaming, and reinforcement learning from human feedback. Example work includes (Dai et al. 2023) and (Ngo et al. 2021).
d. Cultural sensitivity and contextual awareness: Techniques aim to adapt models to diverse cultural contexts and subtle social norms, ensuring that outputs remain appropriate, respectful, and aligned with local values. Example work includes (Li et al. 2024) and (A. J. Chan et al. 2024).
e. Long-term ethical consistency: Research explores methods for maintaining stable, ethically coherent model behavior over extended periods, including approaches to prevent drift and to preserve core ethical principles despite shifting inputs. Example work includes (McAllister et al. 2018), (Christiano et al. 2023), and (Everitt et al. 2021).
15. Choosing and operationalizing values in AI: This area focuses on developing principled methods to identify, justify, and implement value systems within AI models, reconciling diverse ethical priorities, managing conflicts, and creating robust evaluations to ensure models embody chosen values.
a. Justify value choices for alignment: Research includes formulating principled criteria and philosophical foundations that guide why certain values should be encoded into AI systems. Example work includes (Gabriel 2020) and (Zhi-Xuan et al. 2024).
b. Manage conflicts between different values: Efforts here explore approaches like multi-objective optimization or deliberation frameworks to resolve cases where multiple values clash. Example work includes (Qu et al. 2021) and (Kirk et al. 2024).
c. Develop more robust evaluations for which values an LLM encodes: Researchers design metrics and tests to identify and measure the values present in a model’s behavior, outputs, and decision-making processes. Example work includes (Scherrer et al. 2023) and (Arora, Kaffee, and Augenstein 2023).
d. Pluralistic value alignment: Strategies focus on simultaneously accommodating multiple, possibly diverse value systems, enabling AI to adapt to different stakeholders or cultural contexts. Example work includes (Sorensen et al. 2024).
e. Foundational research on operationalizing values in LLMs: This includes theoretical and empirical studies on how to incorporate values directly into training procedures, fine-tuning protocols, and model architectures. Example work includes (Solaiman and Dennison 2021) and (C. Xu et al. 2023).
16. Privacy: This area focuses on identifying and mitigating privacy risks arising from new capabilities and deployment scenarios for LLMs, developing robust conceptual frameworks for privacy definitions, and leveraging AI tools to preserve and enhance privacy in various application domains.
a. Identifying emergent privacy risks in new paradigms: Examining novel attack vectors (e.g., inference-time risks) in new paradigms (e.g., retrieval-augmented generation, agent-based interactions, plugin ecosystems) to uncover how these integrations may lead to unexpected disclosures. Example work includes (Jegorova et al. 2022), (Mireshghallah et al. 2024), and (Zeng et al. 2024).
b. Research on inferring sensitive information from accumulation of innocuous data: Studying how seemingly harmless data points can be combined to reconstruct sensitive information, enabling adversaries to “weaponize” aggregate inferences against individuals (a toy illustration follows this section). Example work includes (Kröger 2019) and (Staab et al. 2024).
c. Privacy challenges in complex data scenarios: Exploring how complex data scenarios, such as cross-lingual and cross-modal transformations (e.g., images, audio, code snippets), can reveal protected content, and examining what can be extracted from data presented in alternative formats. This also includes work on context-specific privacy norms, ensuring that privacy measures adapt to different cultural, social, and situational factors rather than relying on one-size-fits-all policies. Example work includes (Nissenbaum 2010), (W. Fan et al. 2024), (Huang et al. 2024), and (Choi, Min, and Choo 2024).
d. Privacy modeling frameworks: Developing more precise models of privacy that align with user expectations—potentially informed by human-computer interaction (HCI) research—and grounding these definitions in implementable policies. This also includes formalizing methodologies to characterize and prioritize worst-case privacy outcomes, moving beyond ad hoc assessments toward systematic threat modeling frameworks. Example work includes (Mollaeefar, Bissoli, and Ranise 2024), (Villegas-Ch and García-Ortiz 2023), (Brown et al. 2022), and (Mireshghallah et al. 2024).
e. Data encryption tools for model inputs and outputs: Techniques for encrypting inputs, outputs, and intermediate representations at runtime to ensure confidentiality and prevent unauthorized access to sensitive queries or responses. Example work includes (Mishra, Li, and Deo 2023) and (S. Lin et al. 2025).
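A toy illustration of the aggregation risk described in sub-area (b): even without explicit identifiers, combinations of innocuous attributes (quasi-identifiers) are often unique. The records below are fabricated.

```python
from collections import Counter

# Fabricated records with no names, only "innocuous" attributes.
records = [
    {"zip": "02139", "age": 34, "sex": "F"},
    {"zip": "02139", "age": 34, "sex": "F"},
    {"zip": "02139", "age": 71, "sex": "M"},  # unique combination -> re-identifiable
]

# A record is re-identifiable when its quasi-identifier combination is unique.
keys = [(r["zip"], r["age"], r["sex"]) for r in records]
counts = Counter(keys)
unique_fraction = sum(1 for k in keys if counts[k] == 1) / len(keys)
print(f"{unique_fraction:.0%} of records are unique on (zip, age, sex)")
```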
17. Cybersecurity for AI models: Focuses on protecting model parameters, interfaces, training techniques, and outputs from unauthorized access, extraction, or misuse using cryptographic, architectural, and procedural safeguards. This includes ensuring secure weight storage, hardened access control, oracle protection measures, protecting algorithmic insights, preventing self-exfiltration, and robust data integrity.
a. Weight security and key management: Research focuses on cryptographic techniques for encrypting and securely storing model weights at rest and in transit, hardware-based protections (e.g., trusted execution environments) that ensure the model’s parameters cannot be extracted even with physical access, developing systems for isolating weights behind tightly controlled interfaces, and implementing formal verification of key generation, storage, and rotation protocols (a minimal illustration follows this section). Example work includes (Nevo et al. 2024b) and (Aarne, Fist, and Withers 2024).
b. Access control and interface hardening: Approaches include creating minimal, verifiable interfaces for weight access, deploying multiparty authorization and cryptographic attestation protocols to guard against model extraction, novel authentication and authorization schemes integrating Zero Trust principles at a granular level (beyond standard Identity & Access Management [IAM] tools), and implementing AI firewalls with strict input-output validation. Example work includes (Nevo et al. 2024b).
c. Model robustness and oracle protection: Techniques prevent model extraction through inference-only attacks, detect and filter adversarial inputs designed to reconstruct the model or degrade its integrity, and employ adversarial training and advanced input/output “reconstruction” methods to limit the risk that internal representations are inferred from model queries. Example work includes (Carlini and Wagner 2017) and (Carlini et al. 2019).
d. Preventing model self-exfiltration: Methods to ensure that models cannot covertly leak sensitive information about their internal parameters or training data, or copy their own weights to external devices and networks, such as output restrictions, sanitization techniques, or fine-grained monitoring of responses. Example work includes (Clymer, Wijk, and Barnes 2024), (Kinniment et al. 2024), and (Greenblatt et al. 2024).
e. Detecting modified models or poisoned data: Developing methods to detect models that have been maliciously modified or training data that has been poisoned. Example work includes (Sikka et al. 2023) and (di Gaspari, Hitaj, and Mancini 2024).
f. Quantifying cyber threats from advanced capabilities: Threat modeling and evaluation of cyber threats from advanced AI models, whether via autonomy or via providing human uplift for more “traditional” cyber capabilities. Example work includes (Gupta et al. 2023) and (AI Security Institute 2024).
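To ground the weight-security idea in sub-area (a), here is a minimal encryption-at-rest illustration using the `cryptography` package's Fernet recipe; the weight blob is a stand-in, and the hard parts named above (key custody in HSMs or TEEs, rotation, attestation) are deliberately elided.

```python
from cryptography.fernet import Fernet

# In practice the key would live in an HSM or trusted execution environment
# and be rotated on a schedule; generating it inline is purely illustrative.
key = Fernet.generate_key()
fernet = Fernet(key)

weights = b"\x00" * 1024                      # stand-in for serialized weights
ciphertext = fernet.encrypt(weights)          # what would sit on disk at rest
assert fernet.decrypt(ciphertext) == weights  # round-trips for serving
```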
18. Hardware and infrastructure security for AI: Ensuring the security of AI systems at the hardware and infrastructure level involves protecting model weights, securing deployment environments, maintaining supply chain integrity, and implementing robust monitoring and threat detection mechanisms. Methods include the use of confidential computing, rigorous access controls, specialized hardware protections, and continuous security oversight.
a. Confidential computing and environment isolation: Using trusted execution environments (such as secure enclaves) to ensure that model weights and computations remain confidential and tamper-proof during large-scale AI inference and training. This also involves reducing the attack surface through sandboxed, code-minimal deployments and specialized hardware/firmware stacks, and maintaining verifiable runtime integrity checks. Example work includes (Dhar et al. 2024) and (Vaswani et al. 2022).
b. Supply chain integrity and secure development: Ensuring end-to-end verification of hardware and software supply chains through source-verified firmware, SLSA compliance, and secure software development lifecycles tailored for ML-specific infrastructure. This also includes developing automated tooling to continuously verify the provenance and integrity of model components, dependencies, and third-party code used in training and inference pipelines. Example work includes (Hepworth et al. 2024) and (Mora-Cantallops et al. 2021).
c. Continuous monitoring, advanced threat detection, and incident response: Developing ML-driven anomaly detection and logging systems capable of flagging and responding to subtle infiltration attempts or insider threats in real time. This also includes red-teaming and automated penetration testing frameworks specialized for AI systems, including simulations of zero-day attacks and insider compromises. Example work includes (Cai et al. 2024).
d. Hardware-integrated monitoring and verification: Integrating monitoring capabilities directly into hardware, such as secure counters and tamper-evident seals, along with deploying specialized firmware that can detect and respond to attempts at parameter theft or physical attacks. This also includes verification tools, such as hardware-level logging and secured audit trails that remain verifiable under sophisticated tampering attempts, enabling rapid, evidence-based incident response. Example work includes (Petrie, Aarne, and Ammann 2024).
e. Specialized chips to compute encrypted data: Designing and deploying hardware accelerators optimized for computations on encrypted data, such as homomorphic encryption schemes, to facilitate efficient encrypted training and inference without exposing plaintext model parameters or sensitive input data outside the protected hardware boundary. Example work includes (Samardzic et al. 2022) and (Aikata et al. 2023).
f. Tamper-evidence and tamper-proofing: Implementing tamper-resistant enclosures, seals, and other tamper-evident mechanisms to ensure that any unauthorized physical access or modification attempts are detectable. Such measures help maintain the integrity of hardware components and prevent adversaries from compromising the system at a physical level. Example work includes (Thomaz et al. 2023) and (Vidaković and Vinko 2023).
g. Datacenter security: Relevant research focuses on designing and deploying resilient hardware- and software-based defenses to prevent model theft and sabotage. This includes methods like encrypted computation, secure enclaves, continuous anomaly detection, zero-trust architectures, and rigorous supply chain verification to protect against both external intrusions and insider threats. Example work includes (Nevo et al. 2024a).
19. Improving general understanding of deep learning: This area focuses on developing rigorous explanations for why deep neural networks learn effectively, uncovering the principles behind generalization, understanding optimization behavior, and analyzing how implicit biases and overparameterization influence performance and safety.
a. Theoretical foundations of deep learning: Constructing mathematical models to explain generalization in deep neural networks despite overparameterization, and studying the influence of network architecture on learning properties. Example work includes (Allen-Zhu, Li, and Liang 2019) and (Nagarajan 2021).
b. Optimization and loss landscape analysis: Studying the geometry of loss functions and how optimization algorithms navigate them, and examining phenomena such as flat versus sharp minima and the connection between these properties and robust generalization. Example work includes (Stutz, Hein, and Schiele 2021) and (C. Liu, Zhu, and Belkin 2022).
c. Implicit bias of optimization algorithms: Analyzing how algorithms like stochastic gradient descent and related methods influence learned models, and exploring how implicit regularization affects model performance and safety. Example work includes (Jingfeng Wu et al. 2021) and (Chou, Rauhut, and Ward 2024).
d. Double descent and overparameterization: Investigating the double descent risk curve and its implications for model capacity, and how overparameterization can lead to improved generalization. Example work includes (Luzi, Dar, and Baraniuk 2024) and (Dar, Luzi, and Baraniuk 2023).
20. Research on safety in non-LLM systems: Exploring safety challenges in non-LLM systems, such as robotics and embodied AI, vision and perception systems, and alternative paradigms for developing artificial intelligence (e.g., whole-brain emulation).
a. Safe reinforcement learning for non-LLM systems: Developing RL algorithms that prioritize safety during exploration and exploitation, with applications in non-LLM systems such as robotics and embodied AI. This includes incorporating safety constraints and risk-sensitive objectives into learning processes. Example work includes (Brunke et al. 2021) and (F. Yang et al. 2024).
b. Robotics and embodied AI safety: Designing robust control systems, fail-safe mechanisms, and dependable sensors for physical systems, such as autonomous vehicles, drones, and household robots, to ensure safe human-robot interaction and accident prevention. Example work includes (W. Zhang et al. 2024).
c. Adversarial robustness in vision and perception systems: Studying how malicious inputs can deceive image recognition or sensor-based models, and creating defenses—such as adversarial training, certifiable robustness methods, and detection schemes—to maintain reliable perception. Example work includes (Tu et al. 2022) and (Shao et al. 2022).
d. Whole-brain emulation: Exploring the theoretical challenges of accurately replicating a human brain’s functionality and ensuring that such emulations—if ever realized—adhere to rigorous safety and ethical standards, avoiding unintended cognitive hazards or harmful behavioral patterns. Example work includes (Duettman and Sandberg 2023) and (Mandelbaum 2022).

Appendix B: Survey Instructions

This appendix reproduces the information seen by respondents as they encountered the survey.

Introductory page

Welcome to the AI Assurance and Reliability Research Priorities Survey

Purpose
This survey aims to identify high-priority AI assurance and reliability research areas to guide funders—ranging from philanthropic organizations to government agencies—toward impactful funding decisions.

Your Role
As an expert in AI/ML or related fields, your input can help highlight which technical challenges, if tackled, would most reduce the risk of severe harm from AI systems (defined as >100 lives lost or >$10 billion in economic impact).

How the Survey Works
Select areas of interest (one area ≈ 10 minutes). First, choose one or more areas of interest that align with your expertise. You can choose how much time to invest by selecting more or fewer areas; more input helps, but is not required. Even the minimum response of choosing one area is extremely valuable, and we thank you for it.
Rating and feedback. Then, for the selected area(s), rate the importance and tractability of a handful of sub-areas, and provide brief qualitative feedback.

Survey Design Details
There are 20 high-level research areas to choose from, and 3-7 sub-areas per research area. Items are randomized to minimize order effects. The survey is anonymous by default, but you may provide your name if you’d like to be contacted for follow-up.

Outcome
Your insights will help inform future funding priorities and research agendas as we engage with a broad range of funders and research organizations. We plan to publish aggregated and anonymized results so that researchers, grantmakers, and other interested parties can draw on these insights when shaping their strategies.

Note on Methodology
This survey was developed by reviewing recent literature (e.g., Anwar et al., 2024) and consulting with a range of experts. We aimed to cover key technical challenges related to the reliability and security of advanced AI systems (primarily LLMs), but this selection is neither exhaustive nor perfect. While we strived for breadth and relevance, some omissions or biases may remain. We welcome your input to refine our approach for future advocacy for funding and research priorities.

Thank you for your time and expertise! Click “Next” to begin.

Category Selection Example
You selected the following area:
Ethics: Work on AI ethics includes developing methods for integrating ethical considerations into training, evaluation, and decision-making processes, as well as techniques for mitigating harmful outputs and ensuring cultural and long-term ethical consistency.
This area has the following sub-areas:
● Ethics-aware training and fine-tuning
● Ethical decision-making frameworks
● Mitigating harmful outputs
● Cultural sensitivity and contextual awareness
● Long-term ethical consistency
Please answer several questions about each sub-area on the following pages. Feel free to skip questions or sub-areas that you are unfamiliar with.

Sub-Area Evaluation Example

Ethics-aware training and fine-tuning: Research on learning from imperfect ethical datasets, applying ethics-aware data curation methods, and incorporating collective ethical principles into model design.

Importance: Resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of >100 lives or >$10 billion in economic impact) from AI.
◯ Strongly Disagree ◯ Disagree ◯ Neutral ◯ Agree ◯ Strongly Agree ◯ Don’t Know

Tractability: An additional targeted investment of approximately $10 million over the next two years would lead to significant, measurable advancements in addressing this sub-area’s underlying challenges.
◯ Strongly Disagree ◯ Disagree ◯ Neutral ◯ Agree ◯ Strongly Agree ◯ Don’t Know

What metrics or indicators would best measure progress in this area?
Do you have any qualitative feedback or suggestions for improving work in this area?

Category Feedback Example

Please answer one or more of the following questions about the high-level area: Ethics
High-value challenge: How would you phrase the single most impactful challenge in this area that could benefit from further funding? This can be a challenge mentioned previously in the survey, or a new one.
Missing sub-areas: Are there any other important sub-areas not covered previously?
Key obstacles: What major obstacles (technical, organizational, or regulatory) do you see as the main barriers to progress in this area?

Demographic Questions

What is your primary area of expertise?
● Computer Science/Engineering
● Machine Learning/Artificial Intelligence
● Social Sciences (e.g., Economics)
● Humanities/Philosophy/Ethics
● Public Policy/Governance
● Other, please specify

What best describes your current position and background?
● Currently enrolled PhD Student (≥1 year into program)
● Postdoctoral Researcher/Fellow
● Faculty/Professor
● Industry professional with ≥3 years relevant R&D experience
● Policymaker or other professional with equivalent experience
● Other, please specify

How many years have you spent conducting research or working in your primary area of expertise (including postgraduate education and professional roles)?
● <3 years
● 3-5 years
● 5-10 years
● >10 years

What type of organization are you primarily affiliated with?
● Academic Institution
● Industry
● Government
● Non-Profit Organization
● Independent Researcher/Consultant
● Other, please specify

Appendix C: Survey Results

This section includes the full table of survey results, followed by additional data on the sub-areas of highest and lowest consensus among respondents.
Full Survey Results

This table presents results for the questions of importance and tractability. In the survey, we defined these as agreement with the following statements on a 5-point Likert scale:
● Importance: Resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of >100 lives or >$10 billion in economic impact) from AI.
● Tractability: An additional targeted investment of approximately $10 million over the next two years would lead to significant, measurable advancements in addressing this sub-area’s underlying challenges.
For each sub-area with sufficient responses (n > 2 for both questions), we calculated mean scores for importance and tractability. We then computed a “promise score” for each sub-area by multiplying these unrounded means, reflecting a prioritization of areas that excel in both dimensions.
Note that table values for importance and tractability are means rounded to two decimal places. Promise scores were calculated from unrounded means and then also rounded, which may produce minor apparent discrepancies if you recalculate promise from the rounded table values. For each sub-area, we also provide the number of respondents (n) for both importance and tractability, as in some cases participants answered one question and not the other. For the full taxonomy with descriptions of each area, see Appendix A.
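A minimal sketch of the scoring pipeline just described, assuming the n > 2 exclusion rule, the promise score as the product of unrounded means, and (for the consensus analysis later in this appendix) combined variance as the average of the two per-question variances; the rating vectors are illustrative stand-ins, and since the report does not state whether sample or population variance was used, `statistics.variance` (sample) is an assumption here.

```python
from statistics import mean, variance

# Hypothetical Likert ratings (1-5): sub-area -> (importance, tractability)
ratings = {
    "Oversight and monitoring of LLM-agents": ([5, 4, 5, 5, 4, 5, 4, 5, 5],
                                               [4, 4, 5, 4, 4, 5, 4, 4, 4]),
    "Sparse example": ([4, 5], [3, 4]),  # n <= 2 -> excluded from analysis
}

rows = []
for name, (imp, trac) in ratings.items():
    if len(imp) <= 2 or len(trac) <= 2:
        continue  # insufficient responses for meaningful interpretation
    promise = mean(imp) * mean(trac)                  # product of unrounded means
    combined_var = (variance(imp) + variance(trac)) / 2
    rows.append((name, round(mean(imp), 2), round(mean(trac), 2),
                 round(promise, 2), round(combined_var, 2)))

rows.sort(key=lambda r: r[3], reverse=True)  # rank sub-areas by promise score
```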
| # | Research sub-area | Importance (n) | Tractability (n) | Promise |
|---|---|---|---|---|
| 1 | Emergence and task-specific scaling patterns | 5 (3) | 4.25 (4) | 21.25 |
| 2 | CBRN (Chemical, Biological, Radiological, and Nuclear) evaluations | 4.67 (3) | 4.33 (3) | 20.22 |
| 3 | Evaluating deception, scheming, situational awareness, and persuasion | 4.75 (4) | 4.25 (4) | 20.19 |
| 4 | Oversight and monitoring of LLM-agents | 4.67 (9) | 4.22 (9) | 19.7 |
| 5 | Cyber evaluations | 4.5 (4) | 4.25 (4) | 19.13 |
| 6 | Detecting and addressing previously unmeasured or latent capabilities | 4.38 (16) | 4.25 (16) | 18.59 |
| 7 | Multi-agent metrics and evaluations | 4.43 (7) | 4.14 (7) | 18.35 |
| 8 | Multi-agent security | 4.57 (7) | 4 (6) | 18.29 |
| 9 | Quantifying cyber threats from advanced capabilities | 4.25 (4) | 4.25 (4) | 18.06 |
| 10 | Manage conflicts between different values | 4.5 (4) | 4 (4) | 18 |
| 11 | Safety and emergent functionality in multi-agent interactions | 4.57 (7) | 3.86 (7) | 17.63 |
| 12 | Mechanistic understanding and limits of LLM reasoning | 5 (3) | 3.5 (4) | 17.5 |
| 13 | Validating and applying interpretability methods | 4.38 (8) | 4 (7) | 17.5 |
| 14 | Evaluation methodology and metrics | 4.14 (14) | 4.2 (15) | 17.4 |
| 15 | Control mechanisms for untrusted models | 4 (4) | 4.25 (4) | 17 |
| 16 | Transparency, information asymmetries, and communication protocols | 4 (5) | 4.25 (4) | 17 |
| 17 | Detecting and preventing collusion and emergent collective behavior | 4.43 (7) | 3.8 (5) | 16.83 |
| 18 | Network effects and destabilizing dynamics in agent ecosystems | 4.33 (6) | 3.83 (6) | 16.61 |
| 19 | Pluralistic value alignment | 4 (4) | 4 (4) | 16 |
| 20 | Evaluating tool affordances for LLM-agents | 4.11 (9) | 3.89 (9) | 15.99 |
| 21 | Develop more robust evaluations for which values an LLM encodes | 3.75 (4) | 4.25 (4) | 15.94 |
| 22 | Studying misalignment through simplified model organisms | 4.07 (14) | 3.86 (14) | 15.7 |
| 23 | Confidential computing and environment isolation | 4.57 (7) | 3.43 (7) | 15.67 |
| 24 | Theoretical foundations for evaluation | 4.07 (15) | 3.81 (16) | 15.5 |
| 25 | Improving evaluation robustness | 3.8 (15) | 4.07 (15) | 15.45 |
| 26 | Robustness to underspecification | 4.17 (6) | 3.67 (6) | 15.28 |
| 27 | Pretraining alterations to improve interpretability | 4 (5) | 3.8 (5) | 15.2 |
| 28 | Understanding how fine-tuning changes a pretrained model | 4 (5) | 3.75 (4) | 15 |
| 29 | Transparency | 4.13 (8) | 3.63 (8) | 14.95 |
| 30 | Foundational research on operationalizing values in LLMs | 4.25 (4) | 3.5 (4) | 14.88 |
| 31 | Weight security and key management | 4.5 (4) | 3.25 (4) | 14.63 |
| 32 | Debate | 4.1 (10) | 3.5 (10) | 14.35 |
| 33 | Continuous monitoring, advanced threat detection, and incident response | 3.86 (7) | 3.71 (7) | 14.33 |
| 34 | Datacenter security | 4.14 (7) | 3.43 (7) | 14.2 |
| 35 | Uncertainty quantification | 3.75 (4) | 3.75 (4) | 14.06 |
| 36 | Defending against poisoning and backdoors | 3.75 (4) | 3.75 (4) | 14.06 |
| 37 | Justify value choices for alignment | 4 (4) | 3.5 (4) | 14 |
| 38 | Scalable data auditing, filtering, and Pretraining with Human Feedback (PHF) | 4 (4) | 3.5 (4) | 14 |
| 39 | Eliciting Latent Knowledge (ELK) | 4.25 (8) | 3.25 (8) | 13.81 |
| 40 | Explainability | 4 (7) | 3.43 (7) | 13.71 |
| 41 | Supply chain integrity and secure development | 4.57 (7) | 3 (6) | 13.71 |
| 42 | Develop output-based adversarial training techniques for more robust alignment | 3.8 (5) | 3.6 (5) | 13.68 |
| 43 | Better elicitation mechanisms from humans | 3.6 (10) | 3.67 (9) | 13.2 |
| 44 | Access control and interface hardening | 4.75 (4) | 2.75 (4) | 13.06 |
| 45 | Building verifiable and robust AI architectures | 3.92 (12) | 3.33 (12) | 13.06 |
| 46 | Lifelong learning and goal-directedness in LLM agents | 3.67 (9) | 3.56 (9) | 13.04 |
| 47 | Iterated Distillation and Amplification (IDA) | 4 (9) | 3.22 (9) | 12.89 |
| 48 | Tamper-evidence and tamper-proofing | 3.57 (7) | 3.57 (7) | 12.76 |
| 49 | Hardware-integrated monitoring and verification | 3.57 (7) | 3.5 (6) | 12.5 |
| 50 | Developmental interpretability | 3.75 (8) | 3.29 (7) | 12.32 |
| 51 | Defending against jailbreaks and prompt injections | 3.75 (4) | 3.25 (4) | 12.19 |
| 52 | Detecting modified models or poisoned data | 4 (4) | 3 (4) | 12 |
| 53 | Decision theory and rational agency | 3.7 (10) | 3.2 (10) | 11.84 |
| 54 | Formal verification of AI systems | 3.82 (11) | 3.09 (11) | 11.8 |
| 55 | Interpretability foundations | 3.38 (8) | 3.43 (7) | 11.57 |
| 56 | Peer incentivisation and automated mechanism design | 3 (4) | 3.75 (4) | 11.25 |
| 57 | Human-AI interaction and collaboration | 3.2 (5) | 3.5 (4) | 11.2 |
| 58 | Preventing model self-exfiltration | 4 (4) | 2.75 (4) | 11 |
| 59 | Embedded agency | 3.55 (11) | 3.09 (11) | 10.96 |
| 60 | Feature and circuit analysis | 3.43 (7) | 3.17 (6) | 10.86 |
| 61 | Specialized chips to compute encrypted data | 3.57 (7) | 3 (7) | 10.71 |
| 62 | Limiting models’ ability to perform harmful tasks | 3.6 (5) | 2.8 (5) | 10.08 |
| 63 | Recursive Reward Modeling | 3.22 (9) | 3.13 (8) | 10.07 |
| 64 | Mechanism design and multi-agent communication | 2.8 (5) | 3.5 (4) | 9.8 |
| 65 | Adversarial robustness to perturbations | 3.25 (4) | 3 (4) | 9.75 |
| 66 | Retrieval-augmented pre-training | 2.6 (5) | 3.75 (4) | 9.75 |
| 67 | Scalable techniques for targeted modifications of LLM behavior | 3 (4) | 3 (5) | 9 |
| 68 | Theoretical foundations of deep learning | 2.67 (3) | 3.33 (3) | 8.89 |
| 69 | Reinforcement Learning from AI Feedback (RLAIF) | 3.1 (10) | 2.8 (10) | 8.68 |
| 70 | Limits of Transformers | 2.33 (3) | 3.67 (3) | 8.56 |
| 71 | Causal incentives | 2.9 (10) | 2.9 (10) | 8.41 |
| 72 | Control theory applications in AI safety | 3.09 (11) | 2.6 (10) | 8.04 |
| 73 | Model robustness and oracle protection | 3 (4) | 2.67 (3) | 8 |
| 74 | Double descent and overparameterization | 2 (3) | 3.33 (3) | 6.67 |
| 75 | Implicit bias of optimization algorithms | 2.67 (3) | 2.33 (3) | 6.22 |
| 76 | Optimization and loss landscape analysis | 2 (3) | 2.33 (3) | 4.67 |
Consensus Analysis

Understanding the level of agreement among experts adds critical context to the average scores provided above. Low variance in ratings indicates stronger consensus, while high variance reflects significant disagreement about an area’s importance or tractability.

Areas with Strongest Expert Consensus

Our analysis identified several research areas with remarkably low variance in expert assessments, indicating strong agreement about their value:

| # | Research sub-area | Importance Variance (n) | Tractability Variance (n) | Combined Variance |
|---|---|---|---|---|
| 1 | Oversight and monitoring of LLM-agents | 0.50 (9) | 0.44 (9) | 0.47 |
| 2 | Evaluation methodology and metrics | 0.44 (14) | 0.46 (15) | 0.45 |
| 3 | Validating and applying interpretability methods | 0.55 (8) | 0.33 (7) | 0.44 |
| 4 | Multi-agent security | 0.29 (7) | 0.40 (6) | 0.34 |
| 5 | Detecting and addressing previously unmeasured or latent capabilities | 0.38 (16) | 0.20 (16) | 0.29 |

Areas with Significant Expert Disagreement

Conversely, several research areas showed high variance in expert assessments, indicating significant disagreement about their value:

| # | Research sub-area | Importance Variance (n) | Tractability Variance (n) | Combined Variance |
|---|---|---|---|---|
| 1 | Building verifiable and robust AI architectures | 2.63 (12) | 2.97 (12) | 2.80 |
| 2 | Feature and circuit analysis | 2.95 (7) | 2.17 (6) | 2.56 |
| 3 | Datacenter security | 2.14 (7) | 2.29 (7) | 2.21 |
| 4 | Limiting models’ ability to perform harmful tasks | 1.30 (5) | 2.70 (5) | 2.00 |
| 5 | Tamper-evidence and tamper-proofing | 1.95 (7) | 1.95 (7) | 1.95 |
| 6 | Hardware-integrated monitoring and verification | 1.95 (7) | 1.90 (6) | 1.93 |
| 7 | Interpretability foundations | 2.55 (8) | 1.29 (7) | 1.92 |
| 8 | Causal incentives | 1.43 (10) | 2.32 (10) | 1.88 |
| 9 | Robustness to underspecification | 1.77 (6) | 1.87 (6) | 1.82 |
| 10 | Specialized chips to compute encrypted data | 2.29 (7) | 1.33 (7) | 1.81 |

Appendix D: Sub-areas excluded due to insufficient response

Sub-areas that received two or fewer ratings of either importance or tractability were excluded from analysis due to insufficient data for meaningful statistical interpretation.
Understanding in-context learning, reasoning, and scaling behavior (4 of 7 sub-areas excluded):
● Mechanistic understanding of In-Context Learning
● Influences on ICL behavior and performance
● Theoretical and representational aspects of scaling
● Impact of scaling and training on reasoning capabilities

Domain-specific AI evaluation design (1 of 4 sub-areas excluded):
● Automated AI R&D evaluations

Fairness (6 of 6 sub-areas excluded):
● Fairness under dynamic and constrained data scenarios
● Fair representation and participation in AI systems
● Bias detection, quantification, and mitigation techniques
● Fairness in multilingual, cross-cultural, and multimodal contexts
● Intersectional fairness and complex group structures
● Reconciling multiple fairness definitions and normative trade-offs

Accountability (4 of 4 sub-areas excluded):
● Auditing mechanisms
● Methods for detecting and attributing LLM outputs
● Regulatory compliance automation
● Methods for mitigating power concentration in AI
Ethics (5 of 5 sub-areas excluded):
● Ethics-aware training and fine-tuning
● Ethical decision-making frameworks
● Mitigating harmful outputs
● Cultural sensitivity and contextual awareness
● Long-term ethical consistency

Privacy (5 of 5 sub-areas excluded):
● Identifying emergent privacy risks in new paradigms
● Research on inferring sensitive information from accumulation of innocuous data
● Privacy challenges in complex data scenarios
● Privacy modeling frameworks
● Data encryption tools for model inputs and outputs

Research on safety in non-LLM systems (4 of 4 sub-areas excluded):
● Safe reinforcement learning for non-LLM systems
● Robotics and embodied AI safety
● Adversarial robustness in vision and perception systems
● Whole-brain emulation

Appendix E: Demographics

Current Position and Background
This describes the type of work that participants currently do, such as research, professorship, policymaking, and so on. About half of participants were in academic positions at the time of the survey, while over a quarter were working in industry research positions.

Primary Organizational Affiliation
This describes the type of organization that participants work in, such as academic institutions, non-profits, industry, and so on. Academic and non-profit affiliations dominated the respondent pool, with industry affiliations as another notable category.

Primary Area of Expertise
The primary areas of expertise among participants leaned strongly toward machine learning and computer science, which reflects the technical nature of the survey.

Years Spent in Primary Area of Expertise
The modal experience level of participants was 5-10 years spent in their primary area of expertise, followed by 3-5 years, 10+ years, and <3 years.

Bibliography

Aarne, Onni, Tim Fist, and Caleb Withers. 2024. “Secure, Governable Chips.” CNAS. January 8, 2024. https://www.cnas.org/publications/reports/secure-governable-chips.
Abdelnabi, Sahar, Amr Gomaa, Sarath Sivaprasad, Lea Schönherr, and Mario Fritz. 2023. “LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Game,” October. https://openreview.net/forum?id=cfL8zApofK.
Addepalli, Sravanti, Samyak Jain, Gaurang Sriramanan, and R. Venkatesh Babu. 2022. “Scaling Adversarial Training to Large Perturbation Bounds.” arXiv. https://doi.org/10.48550/arXiv.2210.09852.
Adewumi, Tosin, Lama Alkhaled, Namrata Gurung, Goya van Boven, and Irene Pagliai. 2024. “Fairness and Bias in Multimodal AI: A Survey.” arXiv. https://doi.org/10.48550/arXiv.2406.19097.
Agapiou, John P., Alexander Sasha Vezhnevets, Edgar A. Duéñez-Guzmán, Jayd Matyas, Yiran Mao, Peter Sunehag, Raphael Köster, et al. 2023. “Melting Pot 2.0.” arXiv. https://doi.org/10.48550/arXiv.2211.13746.
Agarwal, Chirag, Daniel D’souza, and Sara Hooker. 2022. “Estimating Example Difficulty Using Variance of Gradients.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10358–68. New Orleans, LA, USA: IEEE. https://doi.org/10.1109/CVPR52688.2022.01012.
AI Security Institute. 2024. “Advanced AI Evaluations at AISI: May Update | AISI Work.” AI Security Institute. May 20, 2024. https://www.aisi.gov.uk/work/advanced-ai-evaluations-may-update.
AIAT Report. 2024. “Risk & Reward: AI Assurance Technology Market Report.” May 2024. https://www.aiat.report/.
Aikata, Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, and Sujoy Sinha Roy. 2023. “REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption.” Cryptology ePrint Archive. https://eprint.iacr.org/2023/1190.
Allen-Zhu, Zeyuan, Yuanzhi Li, and Yingyu Liang. 2019. “Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2019/hash/62dad6e273d32235ae02b7d321578ee8-Abstract.html.
Alon-Barkat, Saar, and Madalina Busuioc. 2023. “Human-AI Interactions in Public Sector Decision-Making: ‘Automation Bias’ and ‘Selective Adherence’ to Algorithmic Advice.” Journal of Public Administration Research and Theory 33 (1): 153–69. https://doi.org/10.1093/jopart/muac007.
Altman, Sam. 2025. “Reflections.” Blog. Sam Altman. January 5, 2025. https://blog.samaltman.com/reflections.
Anwar, Usman, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, et al. 2024. “Foundational Challenges in Assuring Alignment and Safety of Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2404.09932.
Arora, Arnav, Lucie-Aimée Kaffee, and Isabelle Augenstein. 2023. “Probing Pre-Trained Language Models for Cross-Cultural Differences in Values.” arXiv. https://doi.org/10.48550/arXiv.2203.13722.
Bai, Yuntao, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, et al. 2022. “Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.” arXiv. https://doi.org/10.48550/arXiv.2204.05862.
Bailey, Luke, Euan Ong, Stuart Russell, and Scott Emmons. 2024. “Image Hijacks: Adversarial Images Can Control Generative Models at Runtime.” arXiv. https://doi.org/10.48550/arXiv.2309.00236.
Bateni, Arash, Matthew C. Chan, and Ray Eitel-Porter. 2022. “AI Fairness: From Principles to Practice.” arXiv. https://doi.org/10.48550/arXiv.2207.09833.
Belrose, Nora, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, and Stella Biderman. 2025. “LEACE: Perfect Linear Concept Erasure in Closed Form.” arXiv. https://doi.org/10.48550/arXiv.2306.03819.
Bengio, Yoshua, Michael K. Cohen, Nikolay Malkin, Matt MacDermott, Damiano Fornasiere, Pietro Greiner, and Younesse Kaddar. 2024. “Can a Bayesian Oracle Prevent Harm from an Agent?” arXiv. https://doi.org/10.48550/arXiv.2408.05284.
Bengio, Yoshua, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, et al. 2025. “International AI Safety Report.” arXiv. https://doi.org/10.48550/arXiv.2501.17805.
Bhatt, Aryan, Cody Rushing, Adam Kaufman, Tyler Tracy, Vasil Georgiev, David Matolcsi, Akbir Khan, and Buck Shlegeris. 2025. “Ctrl-Z: Controlling AI Agents via Resampling.” arXiv. https://doi.org/10.48550/arXiv.2504.10374.
Bilodeau, Blair, Natasha Jaques, Pang Wei Koh, and Been Kim. 2024. “Impossibility Theorems for Feature Attribution.” Proceedings of the National Academy of Sciences 121 (2): e2304406120. https://doi.org/10.1073/pnas.2304406120.
Binkytė-Sadauskienė, Rūta, Karima Makhlouf, Carlos Pinzón, Sami Zhioua, and Catuscia Palamidessi. 2022. “Causal Discovery for Fairness.” arXiv. https://doi.org/10.48550/arXiv.2206.06685.
Birhane, Abeba, William Isaac, Vinodkumar Prabhakaran, Mark Díaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. 2022. “Power to the People? Opportunities and Challenges for Participatory AI.” In Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. https://doi.org/10.1145/3551624.3555290.
Bloembergen, Daan, Karl Tuyls, Daniel Hennes, and Michael Kaisers. 2015. “Evolutionary Dynamics of Multi-Agent Learning: A Survey.” Journal of Artificial Intelligence Research 53 (August): 659–97. https://doi.org/10.1613/jair.4818.
Bluemke, Emma, Tantum Collins, Ben Garfinkel, and Andrew Trask. 2023. “Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases.” arXiv. https://doi.org/10.48550/arXiv.2303.08956.
Bommasani, Rishi, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, and Percy Liang. 2023. “The Foundation Model Transparency Index.” arXiv. https://doi.org/10.48550/arXiv.2310.12941. Brandt, Felix, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia. 2016. “Introduction to Computational Social Choice.” In Handbook of Computational Social Choice, edited by Ariel D. Procaccia, Felix Brandt, Jérôme Lang, Ulle Endriss, and Vincent Conitzer, 1–20. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781107446984.002. Brown, Hannah, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr. 2022. “What Does It Mean for a Language Model to Preserve Privacy?” arXiv. https://doi.org/10.48550/arXiv.2202.05520. Brown-Cohen, Jonah, Geoffrey Irving, and Georgios Piliouras. 2023. “Scalable AI Safety via Doubly-Efficient Debate.” arXiv. https://doi.org/10.48550/arXiv.2311.14125. Brunke, Lukas, Melissa Greeff, Adam W. Hall, Zhaocong Yuan,
Siqi Zhou, Jacopo Panerati, and Angela P. Schoellig. 2021. “Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning.” arXiv. https://doi.org/10.48550/arXiv.2108.06266. Buijsman, Stefan. 2023. “Navigating Fairness Measures and Trade-Offs.” arXiv. https://doi.org/10.48550/arXiv.2307.08484. Burnell, Ryan, Han Hao, Andrew R. A. Conway, and Jose Hernandez Orallo. 2023. “Revealing the Structure of Language Model Capabilities.” arXiv. https://doi.org/10.48550/arXiv.2306.10062. Caballero, Ethan, Kshitij Gupta, Irina Rish, and David Krueger. 2023. “Broken Neural Scaling Laws.” arXiv. https://doi.org/10.48550/arXiv.2210.14891. Cai, Xiangrui, Yang Wang, Sihan Xu, Hao Li, Ying Zhang, Zheli Liu, and Xiaojie Yuan. 2024. “LAN: Learning Adaptive Neighbors for Real-Time Insider Threat Detection.” arXiv. https://doi.org/10.48550/arXiv.2403.09209. Calvano, Emilio, Giacomo Calzolari, Vincenzo Denicolò, and Sergio Pastorello. 2020. “Artificial Intelligence, Algorithmic Pricing, and Collusion.” American Economic Review 110 (10): 3267–97. https://doi.org/10.1257/aer.20190623. Cappelli, Maria Assunta, and Giovanna Di Marzo Serugendo. 2025. “A Semi-Automated Software Model to Support AI Ethics Compliance Assessment of an AI System Guided by Ethical Principles of AI.” AI and Ethics 5 (2): 1357–80. https://doi.org/10.1007/s43681-024-00480-z. Carlini, Nicholas, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. 2019. “On Evaluating Adversarial Robustness.” arXiv. https://doi.org/10.48550/arXiv.1902.06705. Carlini, Nicholas, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. 2024. “Poisoning Web-Scale Training Datasets Is Practical.” arXiv. https://doi.org/10.48550/arXiv.2302.10149. Carlini, Nicholas, and David Wagner. 2017. “Towards Evaluating the Robustness of Neural Networks.” In 2017 IEEE Symposium on Security and Privacy (SP), 39–57. https://doi.org/10.1109/SP.2017.49. Carlsmith, Joe. 2025. “Can We Safely Automate Alignment Research?” Joe Carlsmith. April 30, 2025. https://joecarlsmith.com/2025/04/30/can-we-safely-automate-alignment-research. Casper, Stephen, Lennart Schulze, Oam Patel, and Dylan Hadfield-Menell. 2024. “Defending Against Unforeseen Failure Modes with Latent Adversarial Training.” arXiv. https://doi.org/10.48550/arXiv.2403.05030. Chan, Alan, Carson Ezell, Max Kaufmann, Kevin Wei, Lewis Hammond, Herbie Bradley, Emma Bluemke, et al. 2024. “Visibility into AI Agents.” arXiv. https://doi.org/10.48550/arXiv.2401.13138. Chan, Alex J., José Luis Redondo García, Fabrizio Silvestri, Colm O’Donnell, and Konstantina Palla. 2024. “Enhancing Content Moderation with Culturally-Aware Models.” arXiv. https://doi.org/10.48550/arXiv.2312.02401. Choi, Minseok, Kyunghyun Min, and Jaegul Choo. 2024. “Cross-Lingual Unlearning of Selective Knowledge in Multilingual Language Models.” In Findings of the Association for Computational Linguistics: EMNLP 2024, edited by Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, 10732–47. Miami, Florida, USA: Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-emnlp.630. Chou, Hung-Hsu, Holger Rauhut, and Rachel Ward. 2024. “Robust Implicit Regularization via Weight Normalization.” Information and Inference: A Journal of the IMA 13 (3): iaae022.
https://doi.org/10.1093/imaiai/iaae022. Christiano, Paul. 2023. “Eliciting Latent Knowledge.” Medium. January 24, 2023. https://ai-alignment.com/eliciting-latent-knowledge-f977478608fc. Christiano, Paul, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2023. “Deep Reinforcement Learning from Human Preferences.” arXiv. https://doi.org/10.48550/arXiv.1706.03741. Christiano, Paul, Buck Shlegeris, and Dario Amodei. 2018. “Supervising Strong Learners by Amplifying Weak Experts.” arXiv. https://doi.org/10.48550/arXiv.1810.08575. Clymer, Joshua, Garrett Baker, Rohan Subramani, and Sam Wang. 2023. “Generalization Analogies: A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains.” arXiv. https://doi.org/10.48550/arXiv.2311.07723. Clymer, Joshua, Hjalmar Wijk, and Beth Barnes. 2024. “The Rogue Replication Threat Model.” METR Blog, November. https://metr.org/blog/2024-11-12-rogue-replication-threat-model/. Conitzer, Vincent, and Caspar Oesterheld. 2023. “Foundations of Cooperative AI.” Proceedings of the
AAAI Conference on Artificial Intelligence 37 (13): 15359–67. https://doi.org/10.1609/aaai.v37i13.26791. Conmy, Arthur, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. 2023. “Towards Automated Circuit Discovery for Mechanistic Interpretability.” arXiv. https://doi.org/10.48550/arXiv.2304.14997. Critch, Andrew, Michael Dennis, and Stuart Russell. 2022. “Cooperative and Uncooperative Institution Designs: Surprises and Problems in Open-Source Game Theory.” arXiv. https://doi.org/10.48550/arXiv.2208.07006. Dai, Josef, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. 2023. “Safe RLHF: Safe Reinforcement Learning from Human Feedback.” arXiv. https://doi.org/10.48550/arXiv.2310.12773. Dalrymple, David “davidad,” Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, et al. 2024. “Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems.” arXiv. https://doi.org/10.48550/arXiv.2405.06624. Dar, Yehuda, Lorenzo Luzi, and Richard G. Baraniuk. 2023. “Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.2211.11074. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, et al. 2025. “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.” arXiv. https://doi.org/10.48550/arXiv.2501.12948. Delaney, Oscar, Oliver Guest, and Zoe Williams. 2024. “Mapping Technical Safety Research at AI Companies: A Literature Review and Incentives Analysis.” arXiv. https://doi.org/10.48550/arXiv.2409.07878. Demski, Abram, and Scott Garrabrant. 2020. “Embedded Agency.” arXiv. https://doi.org/10.48550/arXiv.1902.09469. Dhar, Aritra, Clément Thorens, Lara Magdalena Lazier, and Lukas Cavigelli. 2024. “Ascend-CC: Confidential Computing on Heterogeneous NPU for Emerging Generative AI Workloads.” arXiv. https://doi.org/10.48550/arXiv.2407.11888. Duettman, Allison, and Anders Sandberg. 2023. “2023 Whole Brain Emulation Workshop.” ResearchGate. November 2023. https://doi.org/10.13140/RG.2.2.30808.88326. Eccles, Tom, Edward Hughes, János Kramár, Steven Wheelwright, and Joel Z. Leibo. 2019. “Learning Reciprocity in Complex Sequential Social Dilemmas.” arXiv. https://doi.org/10.48550/arXiv.1903.08082. Elhage, Nelson, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, et al. 2022. “Toy Models of Superposition.” arXiv. https://doi.org/10.48550/arXiv.2209.10652. Epoch AI. 2025. “Our Funding.” Epoch AI. 2025. https://epoch.ai/our-funding. Everitt, Tom, Ryan Carey, Eric Langlois, Pedro A. Ortega, and Shane Legg. 2021. “Agent Incentives: A Causal Perspective.” arXiv. https://doi.org/10.48550/arXiv.2102.01685. Fan, David D., Jennifer Nguyen, Rohan Thakker, Nikhilesh Alatur, Ali-akbar Agha-mohammadi, and Evangelos A. Theodorou. 2020. “Bayesian Learning-Based Adaptive Control for Safety Critical Systems.” In 2020 IEEE International Conference on Robotics and Automation (ICRA), 4093–99. https://doi.org/10.1109/ICRA40945.2020.9196709. Fan, Wei, Haoran Li, Zheye Deng, Weiqi Wang, and Yangqiu Song. 2024. “GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory.” arXiv. https://doi.org/10.48550/arXiv.2406.11149. Farquhar, Sebastian, Ryan Carey, and Tom Everitt. 2022.
“Path-Specific Objectives for Safer Agent Incentives.” arXiv. https://doi.org/10.48550/arXiv.2204.10018. Felzmann, Heike, Eduard Fosch-Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. 2020. “Towards Transparency by Design for Artificial Intelligence.” Science and Engineering Ethics 26 (6): 3333–61. https://doi.org/10.1007/s11948-020-00276-4. Fernando, Chrisantha, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. “Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution.” arXiv. https://doi.org/10.48550/arXiv.2309.16797. Fist, Tim. 2025. “What Does America Think the Trump Administration Should Do About AI? | IFP.” April 29, 2025. https://ifp.org/ai-action-plan/. Foerster, Jakob N., Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. 2018. “Learning with Opponent-Learning Awareness.” arXiv. https://doi.org/10.48550/arXiv.1709.04326. Fridman, Lex. 2024. “Transcript for Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity.” Lex Fridman (blog). November 11, 2024. https://lexfridman.com/dario-amodei-transcript/.
Future of Life Institute. 2023. “Redwood Research Group, Inc.” Future of Life Institute (blog). July 6, 2023. https://futureoflife.org/grant/redwood-research-group-inc/. Gabriel, Iason. 2020. “Artificial Intelligence, Values and Alignment.” Minds and Machines 30 (3): 411–37. https://doi.org/10.1007/s11023-020-09539-2. Gandhi, Kanishk, Dorsa Sadigh, and Noah D. Goodman. 2023. “Strategic Reasoning with Language Models.” arXiv. https://doi.org/10.48550/arXiv.2305.19165. Ganguli, Deep, Danny Hernandez, Liane Lovitt, Nova DasSarma, Tom Henighan, Andy Jones, Nicholas Joseph, et al. 2022. “Predictability and Surprise in Large Generative Models.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 1747–64. https://doi.org/10.1145/3531146.3533229. Ganguly, Niloy, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, et al. 2023. “A Review of the Role of Causality in Developing Trustworthy AI Systems.” arXiv. https://doi.org/10.48550/arXiv.2302.06975. Gaspari, Fabio De, Dorjan Hitaj, and Luigi V. Mancini. 2024. “Have You Poisoned My Data? Defending Neural Networks against Data Poisoning.” arXiv. https://doi.org/10.48550/arXiv.2403.13523. Gohar, Usman, and Lu Cheng. 2023. “A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges.” In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 6619–27. https://doi.org/10.24963/ijcai.2023/742. Golchin, Shahriar, and Mihai Surdeanu. 2024. “Time Travel in LLMs: Tracing Data Contamination in Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2308.08493. Golechha, Satvik, Dylan Cope, and Nandi Schoots. 2024. “Training Neural Networks for Modularity Aids Interpretability.” arXiv. https://doi.org/10.48550/arXiv.2409.15747. Grace, Katja, Harlan Stewart, Julia Fabienne Sandkühler, Stephen Thomas, Ben Weinstein-Raun, and Jan Brauner. 2024. “Thousands of AI Authors on the Future of AI.” arXiv. https://doi.org/10.48550/arXiv.2401.02843. Greenblatt, Ryan, Buck Shlegeris, Kshitij Sachan, and Fabien Roger. 2024. “AI Control: Improving Safety Despite Intentional Subversion.” arXiv. https://doi.org/10.48550/arXiv.2312.06942. Grosse, Roger, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, et al. 2023. “Studying Large Language Model Generalization with Influence Functions.” arXiv. https://doi.org/10.48550/arXiv.2308.03296. Guo, Chengquan, Xun Liu, Chulin Xie, Andy Zhou, Yi Zeng, Zinan Lin, Dawn Song, and Bo Li. 2024. “RedCode: Risky Code Execution and Generation Benchmark for Code Agents.” arXiv. https://doi.org/10.48550/arXiv.2411.07781. Guo, Wei, and Aylin Caliskan. 2021. “Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases.” In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 122–33. https://doi.org/10.1145/3461702.3462536. Gupta, Maanak, Charankumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj. 2023. “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy.” IEEE Access 11: 80218–45. https://doi.org/10.1109/ACCESS.2023.3300381. Guresti, Bengisu, Abdullah Vanlioglu, and Nazim Kemal Ure. 2023. “IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas.” arXiv. https://doi.org/10.48550/arXiv.2302.14604.
Guu, Kelvin, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. “REALM: Retrieval-Augmented Language Model Pre-Training.” arXiv. https://doi.org/10.48550/arXiv.2002.08909. Ha, David, and Yujin Tang. 2022. “Collective Intelligence for Deep Learning: A Survey of Recent Developments.” arXiv. https://doi.org/10.48550/arXiv.2111.14377. Hahn, Michael, and Navin Goyal. 2023. “A Theory of Emergent In-Context Learning as Implicit Structure Induction.” arXiv. https://doi.org/10.48550/arXiv.2303.07971. Han, Seungju, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, and Nouha Dziri. 2024. “WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs.” arXiv. https://doi.org/10.48550/arXiv.2406.18495. Henderson, Peter, Eric Mitchell, Christopher D. Manning, Dan Jurafsky, and Chelsea Finn. 2023. “Self-Destructing Models: Increasing the Costs of Harmful
Dual Uses of Foundation Models.” arXiv. https://doi.org/10.48550/arXiv.2211.14946. Hendrycks, Dan, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2023. “Aligning AI With Shared Human Values.” arXiv. https://doi.org/10.48550/arXiv.2008.02275. Henzinger, Thomas A., Mathias Lechner, and Đorđe Žikelić. 2021. “Scalable Verification of Quantized Neural Networks.” Proceedings of the AAAI Conference on Artificial Intelligence 35 (5): 3787–95. https://doi.org/10.1609/aaai.v35i5.16496. Hepworth, Isaac, Kara Olive, Kingshuk Dasgupta, Michael Le, Mark Lodato, Mihai Maruseac, Sarah Meiklejohn, Shamik Chaudhuri, and Tehila Minkus. 2024. “Securing the AI Software Supply Chain.” April 2024. https://research.google/pubs/securing-the-ai-software-supply-chain/. Hoffmann, Jordan, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, et al. 2022. “Training Compute-Optimal Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2203.15556. Hoogland, Jesse, George Wang, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, and Daniel Murfet. 2025. “Loss Landscape Degeneracy Drives Stagewise Development in Transformers.” arXiv. https://doi.org/10.48550/arXiv.2402.02364. Hort, Max, Zhenpeng Chen, Jie M. Zhang, Mark Harman, and Federica Sarro. 2023. “Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey.” arXiv. https://doi.org/10.48550/arXiv.2207.07068. Hou, Yifan, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, and Mrinmaya Sachan. 2023. “Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models.” arXiv. https://doi.org/10.48550/arXiv.2310.14491. Hu, Hengyuan, Adam Lerer, Alex Peysakhovich, and Jakob Foerster. 2021. “‘Other-Play’ for Zero-Shot Coordination.” arXiv. https://doi.org/10.48550/arXiv.2003.02979. Hu, Yuge, Joseph Musielewicz, Zachary W Ulissi, and Andrew J Medford. 2022. “Robust and Scalable Uncertainty Estimation with Conformal Prediction for Machine-Learned Interatomic Potentials.” Machine Learning: Science and Technology 3 (4): 045028. https://doi.org/10.1088/2632-2153/aca7b1. Huang, Alyssa, Peihan Liu, Ryumei Nakada, Linjun Zhang, and Wanrong Zhang. 2024. “Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training.” arXiv. https://doi.org/10.48550/arXiv.2306.08173. Hubinger, Evan, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, et al. 2024. “Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training.” arXiv. https://doi.org/10.48550/arXiv.2401.05566. Ilyas, Andrew, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. 2022. “Datamodels: Predicting Predictions from Training Data.” arXiv. https://doi.org/10.48550/arXiv.2202.00622. Irving, Geoffrey, Paul Christiano, and Dario Amodei. 2018. “AI Safety via Debate.” arXiv. https://doi.org/10.48550/arXiv.1805.00899. Ismail, Aya Abdelsalam, Héctor Corrada Bravo, and Soheil Feizi. 2021. “Improving Deep Learning Interpretability by Saliency Guided Training.” arXiv. https://doi.org/10.48550/arXiv.2111.14338. Jain, Samyak, Robert Kirk, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Edward Grefenstette, Tim Rocktäschel, and David Scott Krueger. 2024. “Mechanistically Analyzing the Effects of Fine-Tuning on Procedurally Defined Tasks.” arXiv. https://doi.org/10.48550/arXiv.2311.12786.
Jang, Joel, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. 2023. “Knowledge Unlearning for Mitigating Privacy Risks in Language Models.” In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), edited by Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, 14389–408. Toronto, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-long.805. Järviniemi, Olli, and Evan Hubinger. 2024. “Uncovering Deceptive Tendencies in Language Models: A Simulated Company AI Assistant.” arXiv. https://doi.org/10.48550/arXiv.2405.01576. Jegorova, Marija, Chaitanya Kaul, Charlie Mayor, Alison Q. O’Neil, Alexander Weir, Roderick Murray-Smith, and Sotirios A. Tsaftaris. 2022. “Survey: Leakage and Privacy at Inference Time.” arXiv. https://doi.org/10.48550/arXiv.2107.01614. Ji, Jianchao, Yutong
Chen, Mingyu Jin, Wujiang Xu, Wenyue Hua, and Yongfeng Zhang. 2024. “MoralBench: Moral Evaluation of LLMs.” arXiv. https://doi.org/10.48550/arXiv.2406.04428. Jones, Erik, Anca Dragan, and Jacob Steinhardt. 2024. “Adversaries Can Misuse Combinations of Safe Models.” arXiv. https://doi.org/10.48550/arXiv.2406.14595. Jones, Erik, Meg Tong, Jesse Mu, Mohammed Mahfoud, Jan Leike, Roger Grosse, Jared Kaplan, William Fithian, Ethan Perez, and Mrinank Sharma. 2025. “Forecasting Rare Language Model Behaviors.” arXiv. https://doi.org/10.48550/arXiv.2502.16797. Juneja, Gurusha, Subhabrata Dutta, Soumen Chakrabarti, Sunny Manchanda, and Tanmoy Chakraborty. 2024. “Small Language Models Fine-Tuned to Coordinate Larger Language Models Improve Complex Reasoning.” arXiv. https://doi.org/10.48550/arXiv.2310.18338. Kamenica, Emir. 2019. “Bayesian Persuasion and Information Design.” Annual Review of Economics 11: 249–72. https://doi.org/10.1146/annurev-economics-080218-025739. Kantrowitz, Alex. 2025. “Google DeepMind CEO Demis Hassabis: The Path To AGI, LLM Creativity, And Google Smart Glasses.” January 23, 2025. https://www.bigtechnology.com/p/google-deepmind-ceo-demis-hassabis. Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” arXiv. https://doi.org/10.48550/arXiv.2001.08361. Kiela, Douwe, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, et al. 2021. “Dynabench: Rethinking Benchmarking in NLP.” arXiv. https://doi.org/10.48550/arXiv.2104.14337. Kinniment, Megan, Lucas Jun Koba Sato, Haoxing Du, Brian Goodrich, Max Hasin, Lawrence Chan, Luke Harold Miles, et al. 2024. “Evaluating Language-Model Agents on Realistic Autonomous Tasks.” arXiv. https://doi.org/10.48550/arXiv.2312.11671. Kirchenbauer, John, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. 2024. “A Watermark for Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2301.10226. Kirk, Hannah Rose, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, et al. 2024. “The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2404.16019. Klingefjord, Oliver, Ryan Lowe, and Joe Edelman. 2024. “What Are Human Values, and How Do We Align AI to Them?” arXiv. https://doi.org/10.48550/arXiv.2404.10636. Korbak, Tomasz, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez. 2023. “Pretraining Language Models with Human Preferences.” arXiv. https://doi.org/10.48550/arXiv.2302.08582. Korbak, Tomek, Mikita Balesni, Buck Shlegeris, and Geoffrey Irving. 2025. “How to Evaluate Control Measures for LLM Agents? A Trajectory from Today to Superintelligence.” arXiv. https://doi.org/10.48550/arXiv.2504.05259. Kröger, Jacob. 2019. “Unexpected Inferences from Sensor Data: A Hidden Privacy Threat in the Internet of Things.” In Internet of Things. Information Processing in an Increasingly Connected World, edited by Leon Strous and Vinton G. Cerf, 147–59. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-15651-0_13.
Kwa, Thomas, Ben West, Joel Becker, Amy Deng, Katharyn Garcia, Max Hasin, Sami Jawhar, et al. 2025. “Measuring AI Ability to Complete Long Tasks.” arXiv. https://doi.org/10.48550/arXiv.2503.14499. Laurent, Jon M., Joseph D. Janizek, Michael Ruzo, Michaela M. Hinks, Michael J. Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D. White, and Samuel G. Rodriques. 2024. “LAB-Bench: Measuring Capabilities of Language Models for Biology Research.” arXiv. https://doi.org/10.48550/arXiv.2407.10362. Lee, Donghyun, and Mo Tiwari. 2024. “Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems.” arXiv. https://doi.org/10.48550/arXiv.2410.07283. Lee, Harrison, Samrat Phatale, Hassan Mansoor, Kellie Ren Lu, Thomas Mesnard, Johan Ferret, Colton Bishop, Ethan Hall, Victor Carbune, and Abhinav Rastogi. 2023. “RLAIF: Scaling Reinforcement Learning
from Human Feedback with AI Feedback,” October. https://openreview.net/forum?id=AAxIs3D2ZZ. Leike, Jan, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. “Scalable Agent Alignment via Reward Modeling: A Research Direction.” arXiv. https://doi.org/10.48550/arXiv.1811.07871. Leitão, Diogo, Pedro Saleiro, Mário A. T. Figueiredo, and Pedro Bizarro. 2022. “Human-AI Collaboration in Decision-Making: Beyond Learning to Defer.” arXiv. https://doi.org/10.48550/arXiv.2206.13202. Leonardos, Stefanos, and Georgios Piliouras. 2020. “Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory.” arXiv. https://doi.org/10.48550/arXiv.2012.03083. Li, Cheng, Damien Teney, Linyi Yang, Qingsong Wen, Xing Xie, and Jindong Wang. 2024. “CulturePark: Boosting Cross-Cultural Understanding in Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2405.15145. Liang, Percy, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, et al. 2023. “Holistic Evaluation of Language Models.” arXiv. https://doi.org/10.48550/arXiv.2211.09110. Liao, Q. Vera, and Jennifer Wortman Vaughan. 2024. “AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap.” Harvard Data Science Review, no. Special Issue 5 (May). https://doi.org/10.1162/99608f92.8036d03b. Lightman, Hunter, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. “Let’s Verify Step by Step.” arXiv. https://doi.org/10.48550/arXiv.2305.20050. Lin, Bill Yuchen, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, and Yejin Choi. 2023. “The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning.” arXiv. https://doi.org/10.48550/arXiv.2312.01552. Lin, Licong, Yu Bai, and Song Mei. 2024. “Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining.” arXiv. https://doi.org/10.48550/arXiv.2310.08566. Lin, Sam, Wenyue Hua, Zhenting Wang, Mingyu Jin, Lizhou Fan, and Yongfeng Zhang. 2025. “EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving Communication with Cloud-Based LLMs.” arXiv. https://doi.org/10.48550/arXiv.2402.05868. Liu, Chaoyue, Libin Zhu, and Mikhail Belkin. 2022. “Loss Landscapes and Optimization in Over-Parameterized Non-Linear Systems and Neural Networks.” Applied and Computational Harmonic Analysis, Special Issue on Harmonic Analysis and Machine Learning, 59 (July): 85–116. https://doi.org/10.1016/j.acha.2021.12.009. Liu, Yue, Qinghua Lu, Liming Zhu, and Hye-Young Paik. 2024. “Decentralized Governance-Driven Architecture for Designing Foundation-Model-Based Systems: Exploring the Role of Blockchain in Responsible AI.” IEEE Software 41 (5): 34–42. https://doi.org/10.1109/MS.2024.3369551. Lourie, Nicholas, Ronan Le Bras, and Yejin Choi. 2021. “Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes.” arXiv. https://doi.org/10.48550/arXiv.2008.09094. Luzi, Lorenzo, Yehuda Dar, and Richard Baraniuk. 2024. “Double Descent and Other Interpolation Phenomena in GANs.” arXiv. https://doi.org/10.48550/arXiv.2106.04003. Madsen, Andreas, Sarath Chandar, and Siva Reddy. 2024. “Are Self-Explanations from Large Language Models Faithful?” In Findings of the Association for Computational Linguistics: ACL 2024, edited by Lun-Wei Ku, Andre Martins, and Vivek Srikumar, 295–337.
Bangkok, Thailand: Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-acl.19. Magister, Lucie Charlotte, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2023. “Teaching Small Language Models to Reason.” In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), edited by Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, 1773–81. Toronto, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-short.151. Mahinpei, Anita, Justin Clark, Isaac Lage, Finale Doshi-Velez, and Weiwei Pan. 2021. “Promises and Pitfalls of Black-Box Concept Learning Models.” arXiv. https://doi.org/10.48550/arXiv.2106.13314. Mandelbaum, Eric. 2022. “Everything and More: The Prospects of Whole Brain Emulation.” The Journal of Philosophy 119 (8): 444–59. https://doi.org/10.5840/jphil2022119830. Marone, Marc,
and Benjamin Van Durme. 2023. “Data Portraits: Recording Foundation Model Training Data.” arXiv. https://doi.org/10.48550/arXiv.2303.03919. Maslej, Nestor, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, et al. 2025. “Artificial Intelligence Index Report 2025.” arXiv. https://doi.org/10.48550/arXiv.2504.07139. McAllister, Rowan, Gregory Kahn, Jeff Clune, and Sergey Levine. 2018. “Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty.” arXiv. https://doi.org/10.48550/arXiv.1812.10687. McKenzie, Ian R., Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, et al. 2024. “Inverse Scaling: When Bigger Isn’t Better.” arXiv. https://doi.org/10.48550/arXiv.2306.09479. Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54 (6): 115:1–115:35. https://doi.org/10.1145/3457607. Merrill, William, and Ashish Sabharwal. 2023. “The Parallelism Tradeoff: Limitations of Log-Precision Transformers.” Transactions of the Association for Computational Linguistics 11: 531–45. https://doi.org/10.1162/tacl_a_00562. Merton, Robert K. 1968. “The Matthew Effect in Science.” Science 159 (3810): 56–63. https://doi.org/10.1126/science.159.3810.56. Mireshghallah, Niloofar, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin Choi. 2024. “Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory.” arXiv. https://doi.org/10.48550/arXiv.2310.17884. Mishra, Abhijit, Mingda Li, and Soham Deo. 2023. “SentinelLMs: Encrypted Input Adaptation and Fine-Tuning of Language Models for Private and Secure Inference.” arXiv. https://doi.org/10.48550/arXiv.2312.17342. Miyato, Takeru, Shin-Ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. “Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning.” IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (8): 1979–93. https://doi.org/10.1109/TPAMI.2018.2858821. Mollaeefar, Majid, Andrea Bissoli, and Silvio Ranise. 2024. “PILLAR: An AI-Powered Privacy Threat Modeling Tool.” arXiv. https://doi.org/10.48550/arXiv.2410.08755. Montes, Gabriel Axel, and Ben Goertzel. 2019. “Distributed, Decentralized, and Democratized Artificial Intelligence.” Technological Forecasting and Social Change 141 (April): 354–58. https://doi.org/10.1016/j.techfore.2018.11.010. Mora-Cantallops, Marçal, Salvador Sánchez-Alonso, Elena García-Barriocanal, and Miguel-Angel Sicilia. 2021. “Traceability for Trustworthy AI: A Review of Models and Tools.” Big Data and Cognitive Computing 5 (2): 20. https://doi.org/10.3390/bdcc5020020. Motwani, Sumeet Ramesh, Mikhail Baranchuk, Martin Strohmeier, Vijay Bolina, Philip H. S. Torr, Lewis Hammond, and Christian Schroeder de Witt. 2025. “Secret Collusion among Generative AI Agents: Multi-Agent Deception via Steganography.” arXiv. https://doi.org/10.48550/arXiv.2402.07510. Mu, Fangwen, Lin Shi, Song Wang, Zhuohao Yu, Binquan Zhang, Chenxue Wang, Shichao Liu, and Qing Wang. 2023. “ClarifyGPT: Empowering LLM-Based Code Generation with Intention Clarification.” arXiv. https://doi.org/10.48550/arXiv.2310.10996. Mukobi, Gabriel, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton. 2023. “Welfare Diplomacy: Benchmarking Language Model Cooperation.” arXiv.
https://doi.org/10.48550/arXiv.2310.08901. Nagarajan, Vaishnavh. 2021. “Explaining Generalization in Deep Learning: Progress and Fundamental Limits.” arXiv. https://doi.org/10.48550/arXiv.2110.08922. Naihin, Silen, David Atkinson, Marc Green, Merwane Hamadi, Craig Swift, Douglas Schonholtz, Adam Tauman Kalai, and David Bau. 2023. “Testing Language Model Agents Safely in the Wild.” arXiv. https://doi.org/10.48550/arXiv.2311.10538. Nanda, Neel, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. 2023. “Progress Measures for Grokking via Mechanistic Interpretability.” arXiv. https://doi.org/10.48550/arXiv.2301.05217. Nandi, Soumalya, Sravanti Addepalli, Harsh Rangwani, and R. Venkatesh Babu. 2023. “Certified Adversarial Robustness Within Multiple Perturbation Bounds.” arXiv. https://doi.org/10.48550/arXiv.2304.10446. Nevo, Sella, Dan Lahav, Ajay Karpur, Yogev Bar-On, Henry Alexander Bradley, and Jeff Alstott. 2024a. “A Playbook for Securing AI Model Weights.” https://www.rand.org/pubs/research_briefs/RBA2849-1.html. ———. 2024b. “Securing AI Model Weights: Preventing
Theft and Misuse of Frontier Models.” https://www.rand.org/pubs/research_reports/RRA2849-1.html. Ngo, Helen, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, and Nicholas Frosst. 2021. “Mitigating Harm in Language Models with Conditional-Likelihood Filtration.” arXiv. https://doi.org/10.48550/arXiv.2108.07790. Nissenbaum, Helen. 2010. “Privacy in Context: Technology, Policy, and the Integrity of Social Life.” Stanford University Press. https://crypto.stanford.edu/portia/papers/privacy_in_context.pdf. Oliveira, Nigini, Jasmine Li, Koosha Khalvati, Rodolfo Cortes Barragan, Katharina Reinecke, Andrew N. Meltzoff, and Rajesh P. N. Rao. 2023. “Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning.” arXiv. https://doi.org/10.48550/arXiv.2312.17479. Olsson, Catherine, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, et al. 2022. “In-Context Learning and Induction Heads.” arXiv. https://doi.org/10.48550/arXiv.2209.11895. Open Philanthropy. 2023. “Redwood Research — General Support (2023).” Open Philanthropy (blog). June 2023. https://www.openphilanthropy.org/grants/redwood-research-general-support-2023/. ———. 2024. “[Closed] Request for Proposals: Benchmarking LLM Agents on Consequential Real-World Tasks.” Open Philanthropy (blog). July 26, 2024. https://www.openphilanthropy.org/rfp-llm-benchmarks/. ———. 2025a. “Request for Proposals: Improving Capability Evaluations.” Open Philanthropy (blog). 2025. https://www.openphilanthropy.org/request-for-proposals-improving-capability-evaluations/. ———. 2025b. “TAIS RFP: Research Areas.” Open Philanthropy (blog). 2025. https://www.openphilanthropy.org/tais-rfp-research-areas/. Owen, David. 2024. “Automation of AI R&D: Researcher Perspectives,” August. https://epoch.ai/files/Interviewing_AI_researchers_on_automation_of_AI_R_D.pdf. Pasquini, Giancarlo, Colleen McClain, Brian Kennedy, Jeffrey Gottfried, and Monica Anderson. 2025. “How the U.S. Public and AI Experts View Artificial Intelligence.” Pew Research Center (blog). April 3, 2025. https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/. Perez, Ethan, Sam Ringer, Kamile Lukosiute, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, et al. 2023. “Discovering Language Model Behaviors with Model-Written Evaluations.” In Findings of the Association for Computational Linguistics: ACL 2023, edited by Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, 13387–434. Toronto, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.847. Petrie, James, Onni Aarne, and Nora Ammann. 2024. “Interim Report: Mechanisms for Flexible Hardware-Enabled Guarantees,” August. https://yoshuabengio.org/wp-content/uploads/2024/09/FlexHEG-Interim-Report_2024.pdf. Plecko, Drago, and Elias Bareinboim. 2022. “Causal Fairness Analysis.” arXiv. https://doi.org/10.48550/arXiv.2207.11385. Qi, Xiangyu, Tinghao Xie, Jiachen T. Wang, Tong Wu, Saeed Mahloujifar, and Prateek Mittal. 2023. “Towards A Proactive ML Approach for Detecting Backdoor Poison Samples.” arXiv. https://doi.org/10.48550/arXiv.2205.13616. Qi, Xiangyu, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. 2023.
“Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!” arXiv. https://doi.org/10.48550/arXiv.2310.03693. Qu, Qu, Zheng Ma, Anders Clausen, and Bo Nørregaard Jørgensen. 2021. “A Comprehensive Review of Machine Learning in Multi-Objective Optimization.” In 2021 IEEE 4th International Conference on Big Data and Artificial Intelligence (BDAI), 7–14. https://doi.org/10.1109/BDAI52447.2021.9515233. Reuel, Anka, Ben Bucknall, Stephen Casper, Tim Fist, Lisa Soder, Onni Aarne, Lewis Hammond, et al. 2025. “Open Problems in Technical AI Governance.” arXiv. https://doi.org/10.48550/arXiv.2407.14981. Ruan, Yangjun, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, and Tatsunori Hashimoto. 2024. “Identifying the Risks of LM Agents with an LM-Emulated Sandbox.” arXiv. https://doi.org/10.48550/arXiv.2309.15817. Ruan, Yangjun, Chris J. Maddison, and Tatsunori Hashimoto. 2024. “Observational Scaling Laws and the Predictability of Language Model Performance.” arXiv. https://doi.org/10.48550/arXiv.2405.10938. Samardzic, Nikola, Axel Feldmann, Aleksandar Krastev, Nathan Manohar, Nicholas Genise, Srinivas Devadas, Karim Eldefrawy, Chris Peikert, and Daniel Sanchez. 2022. “CraterLake: A Hardware Accelerator for Efficient Unbounded
Computation on Encrypted Data.” In Proceedings of the 49th Annual International Symposium on Computer Architecture, 173–87. ISCA ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3470496.3527393. Samborska, Veronika. 2025. “Scaling up: How Increasing Inputs Has Made Artificial Intelligence More Capable.” Our World in Data. January 19, 2025. https://ourworldindata.org/scaling-up-ai. Sanders, James B. T., J. Doyne Farmer, and Tobias Galla. 2018. “The Prevalence of Chaotic Dynamics in Games with Many Players.” Scientific Reports 8 (1): 4902. https://doi.org/10.1038/s41598-018-22013-5. Saparov, Abulhair, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, and He He. 2023. “Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples.” arXiv. https://doi.org/10.48550/arXiv.2305.15269. Schaeffer, Rylan, Hailey Schoelkopf, Brando Miranda, Gabriel Mukobi, Varun Madan, Adam Ibrahim, Herbie Bradley, Stella Biderman, and Sanmi Koyejo. 2025. “Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?” arXiv. https://doi.org/10.48550/arXiv.2406.04391. Scherrer, Nino, Claudia Shi, Amir Feder, and David M. Blei. 2023. “Evaluating the Moral Beliefs Encoded in LLMs.” arXiv. https://doi.org/10.48550/arXiv.2307.14324. Scheurer, Jérémy, Mikita Balesni, and Marius Hobbhahn. 2024. “Large Language Models Can Strategically Deceive Their Users When Put Under Pressure.” arXiv. https://doi.org/10.48550/arXiv.2311.07590. Schuett, Jonas, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, and Ben Garfinkel. 2023. “Towards Best Practices in AGI Safety and Governance: A Survey of Expert Opinion.” arXiv. https://doi.org/10.48550/arXiv.2305.07153. Schwettmann, Sarah, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, and Antonio Torralba. 2023. “FIND: A Function Description Benchmark for Evaluating Interpretability Methods.” arXiv. https://doi.org/10.48550/arXiv.2309.03886. Sclar, Melanie, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. 2024. “Quantifying Language Models’ Sensitivity to Spurious Features in Prompt Design or: How I Learned to Start Worrying about Prompt Formatting.” arXiv. https://doi.org/10.48550/arXiv.2310.11324. Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. 2022. “Toward Verified Artificial Intelligence.” Communications of the ACM 65 (7): 46–55. https://doi.org/10.1145/3503914. Sevilla, Jaime. 2024. “Can AI Scaling Continue Through 2030?” Epoch AI. August 20, 2024. https://epoch.ai/blog/can-ai-scaling-continue-through-2030. Sevilla, Jaime, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos. 2022. “Compute Trends Across Three Eras of Machine Learning.” In 2022 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN55064.2022.9891914. Shah, Rohin, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, et al. 2025. “An Approach to Technical AGI Safety and Security.” arXiv. https://doi.org/10.48550/arXiv.2504.01849. Shaker, Mohammad Hossein, and Eyke Hüllermeier. 2021. “Ensemble-Based Uncertainty Quantification: Bayesian versus Credal Inference.” arXiv. https://doi.org/10.48550/arXiv.2107.10384. Shao, Rulin, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. 2022. “On the Adversarial Robustness of Vision Transformers.” arXiv.
https://doi.org/10.48550/arXiv.2103.15670. Sikka, Karan, Indranil Sur, Anirban Roy, Ajay Divakaran, and Susmit Jha. 2023. “Detecting Trojaned DNNs Using Counterfactual Attributions.” In 2023 IEEE International Conference on Assured Autonomy (ICAA), 76–85. https://doi.org/10.1109/ICAA58325.2023.00019. Sojasi, Yousef Taheri. 2024. “Modeling Automated Legal and Ethical Compliance for Trustworthy AI.” Sorbonne Université. Solaiman, Irene, and Christy Dennison. 2021. “Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets.” arXiv. https://doi.org/10.48550/arXiv.2106.10328. Sorensen, Taylor, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, et al. 2024. “A Roadmap to Pluralistic Alignment.” arXiv. https://doi.org/10.48550/arXiv.2402.05070. Staab, Robin,
Mark Vero, Mislav Balunović, and Martin Vechev. 2024. “Beyond Memorization: Violating Privacy Via Inference with Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2310.07298. Strobl, Lena. 2023. “Average-Hard Attention Transformers Are Constant-Depth Uniform Threshold Circuits.” arXiv. https://doi.org/10.48550/arXiv.2308.03212. Stutz, David, Matthias Hein, and Bernt Schiele. 2021. “Relating Adversarially Robust Generalization to Flat Minima.” arXiv. https://doi.org/10.48550/arXiv.2104.04448. Survival & Flourishing Fund. 2024. “SFF Has Organized ~$117MM in Philanthropic Gifts and Grants.” 2024. https://survivalandflourishing.fund/recommendations. Tao, Yan, Olga Viberg, Ryan S Baker, and René F Kizilcec. 2024. “Cultural Bias and Cultural Alignment of Large Language Models.” PNAS Nexus 3 (9): pgae346. https://doi.org/10.1093/pnasnexus/pgae346. Tennant, Elizaveta, Stephen Hailes, and Mirco Musolesi. 2023. “Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning.” In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 317–25. https://doi.org/10.24963/ijcai.2023/36. “The Singapore Consensus on Global AI Safety Research Priorities.” 2025. May 8, 2025. https://aisafetypriorities.org. Thomaz, Guilherme A., Matheus B. Guerra, Matteo Sammarco, Marcin Detyniecki, and Miguel Elias M. Campista. 2023. “Tamper-Proof Access Control for IoT Clouds Using Enclaves.” Ad Hoc Networks 147 (August): 103191. https://doi.org/10.1016/j.adhoc.2023.103191. Tu, James, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, Eilyan Bitar, Ersin Yumer, and Raquel Urtasun. 2022. “Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving.” arXiv. https://doi.org/10.48550/arXiv.2101.06784. Turner, Alexander Matt, Dylan Hadfield-Menell, and Prasad Tadepalli. 2020. “Conservative Agency via Attainable Utility Preservation.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 385–91. AIES ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375851. UK AI Security Institute. 2024. “Pre-Deployment Evaluation of Anthropic’s Upgraded Claude 3.5 Sonnet.” AI Security Institute. November 19, 2024. https://www.aisi.gov.uk/work/pre-deployment-evaluation-of-anthropics-upgraded-claude-3-5-sonnet. U.S. Nuclear Regulatory Commission. 1986. “Safety Goals for the Operations of Nuclear Power Plants; Policy Statement.” Federal Register 51 (159). https://www.nrc.gov/reading-rm/doc-collections/commission/policy/51fr30028.pdf. Vaswani, Kapil, Stavros Volos, Cédric Fournet, Antonio Nino Diaz, Ken Gordon, Balaji Vembu, Sam Webster, et al. 2022. “Confidential Machine Learning within Graphcore IPUs.” arXiv. https://doi.org/10.48550/arXiv.2205.09005. Vidaković, Marin, and Davor Vinko. 2023. “Hardware-Based Methods for Electronic Device Protection against Invasive and Non-Invasive Attacks.” Electronics 12 (21): 4507. https://doi.org/10.3390/electronics12214507. Viering, Tom, and Marco Loog. 2023. “The Shape of Learning Curves: A Review.” IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (6): 7799–7819. https://doi.org/10.1109/TPAMI.2022.3220744. Villegas-Ch, William, and Joselin García-Ortiz. 2023. “Toward a Comprehensive Framework for Ensuring Security and Privacy in Artificial Intelligence.” Electronics 12 (18): 3786. https://doi.org/10.3390/electronics12183786.
Vyas, Nikhil, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. 2023. “Feature-Learning Networks Are Consistent Across Widths At Realistic Scales.” arXiv. https://doi.org/10.48550/arXiv.2305.18411. Wan, Alexander, Eric Wallace, Sheng Shen, and Dan Klein. 2023. “Poisoning Language Models During Instruction Tuning.” arXiv. https://doi.org/10.48550/arXiv.2305.00944. Wang, George, Jesse Hoogland, Stan van Wingerden, Zach Furman, and Daniel Murfet. 2024. “Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient.” arXiv. https://doi.org/10.48550/arXiv.2410.02984. Wang, Kevin, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2022. “Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small.” arXiv. https://doi.org/10.48550/arXiv.2211.00593. Wang, Yihan, Jatin Chauhan, Wei Wang, and Cho-Jui Hsieh. 2023. “Universality and Limitations of Prompt Tuning.” Advances in Neural Information Processing Systems 36 (December):75623–43. Ward, Francis Rhys, Francesco Belardinelli, Francesca
Toni, and Tom Everitt. 2023. “Honesty Is the Best Policy: Defining and Mitigating AI Deception.” arXiv. https://doi.org/10.48550/arXiv.2312.01350. Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. 2022. “Emergent Abilities of Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2206.07682. Weij, Teun van der, Felix Hofstätter, Ollie Jaffe, Samuel F. Brown, and Francis Rhys Ward. 2025. “AI Sandbagging: Language Models Can Strategically Underperform on Evaluations.” arXiv. https://doi.org/10.48550/arXiv.2406.07358. Wijk, Hjalmar, Tao Lin, Joel Becker, Sami Jawhar, Neev Parikh, Thomas Broadley, Lawrence Chan, et al. 2024. “RE-Bench: Evaluating Frontier AI R&D Capabilities of Language Model Agents against Human Experts.” arXiv. https://doi.org/10.48550/arXiv.2411.15114. Wu, Jeff, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021. “Recursively Summarizing Books with Human Feedback.” arXiv. https://doi.org/10.48550/arXiv.2109.10862. Wu, Jingfeng, Difan Zou, Vladimir Braverman, and Quanquan Gu. 2021. “Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate.” arXiv. https://doi.org/10.48550/arXiv.2011.02538. Wu, Minghao, and Alham Fikri Aji. 2023. “Style Over Substance: Evaluation Biases for Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2307.03025. Wu, Yuhuai, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. 2022. “Autoformalization with Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2205.12615. Wu, Zhaofeng, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. 2024. “Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks.” arXiv. https://doi.org/10.48550/arXiv.2307.02477. Xiao, Wenli, Tairan He, John Dolan, and Guanya Shi. 2024. “Safe Deep Policy Adaptation.” arXiv. https://doi.org/10.48550/arXiv.2310.08602. Xie, Sang Michael, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. “An Explanation of In-Context Learning as Implicit Bayesian Inference.” https://openreview.net/forum?id=RdJVFCHjUMI. Xu, Chunpu, Steffi Chern, Ethan Chern, Ge Zhang, Zekun Wang, Ruibo Liu, Jing Li, Jie Fu, and Pengfei Liu. 2023. “Align on the Fly: Adapting Chatbot Behavior to Established Norms.” arXiv. https://doi.org/10.48550/arXiv.2312.15907. Xu, Jiashu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, and Muhao Chen. 2024. “Instructional Fingerprinting of Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2401.12255. Yang, Fan, Wenxuan Zhou, Zuxin Liu, Ding Zhao, and David Held. 2024. “Reinforcement Learning in a Safety-Embedded MDP with Trajectory Optimization.” arXiv. https://doi.org/10.48550/arXiv.2310.06903. Yang, Jiachen, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, and Hongyuan Zha. 2020. “Learning to Incentivize Other Learning Agents.” arXiv. https://doi.org/10.48550/arXiv.2006.06051. Yocum, Julian, Phillip Christoffersen, Mehul Damani, Justin Svegliato, Dylan Hadfield-Menell, and Stuart Russell. 2023. “Mitigating Generative Agent Social Dilemmas.” November 7, 2023. https://social-dilemmas.github.io/. Yong, Zheng-Xin, Cristina Menghini, and Stephen H. Bach. 2024. “Low-Resource Languages Jailbreak GPT-4.” arXiv. https://doi.org/10.48550/arXiv.2310.02446.
Yuan, Tongxin, Zhiwei He, Lingzhong Dong, Yiming Wang, Ruijie Zhao, Tian Xia, Lizhen Xu, et al. 2024. “R-Judge: Benchmarking Safety Risk Awareness for LLM Agents.” arXiv. https://doi.org/10.48550/arXiv.2401.10019. Yuan, Youliang, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, and Zhaopeng Tu. 2024. “GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher.” arXiv. https://doi.org/10.48550/arXiv.2308.06463. Zeng, Shenglai, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, et al. 2024. “The Good and The Bad: Exploring Privacy Issues
in Retrieval-Augmented Generation (RAG).” arXiv. https://doi.org/10.48550/arXiv.2402.16893. Zhang, Baobao, and Allan Dafoe. 2019. “Artificial Intelligence: American Attitudes and Trends.” January 19, 2019. https://www.governance.ai/research-paper/artificial-intelligence-american-attitudes-and-trends. Zhang, Dan, Gang Feng, Yang Shi, and Dipti Srinivasan. 2021. “Physical Safety and Cyber Security Analysis of Multi-Agent Systems: A Survey of Recent Advances.” IEEE/CAA Journal of Automatica Sinica 8 (2): 319–33. https://doi.org/10.1109/JAS.2021.1003820. Zhang, Wenxiao, Xiangrui Kong, Thomas Braunl, and Jin B. Hong. 2024. “SafeEmbodAI: A Safety Framework for Mobile Robots in Embodied AI Systems.” arXiv. https://doi.org/10.48550/arXiv.2409.01630. Zhao, Chen, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, and Feng Chen. 2022. “Adaptive Fairness-Aware Online Meta-Learning for Changing Environments.” arXiv. https://doi.org/10.48550/arXiv.2205.11264. Zhao, Chen, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Christan Grant, and Feng Chen. 2023. “Towards Fair Disentangled Online Learning for Changing Environments.” arXiv. https://doi.org/10.48550/arXiv.2306.01007. Zhi-Xuan, Tan, Micah Carroll, Matija Franklin, and Hal Ashton. 2024. “Beyond Preferences in AI Alignment.” Philosophical Studies, November. https://doi.org/10.1007/s11098-024-02249-w. Zhou, Xin, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. “Making Harmful Behaviors Unlearnable for Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2311.02105. Ziv, Roy, and Shyamal Anadkat. 2024. “Getting Started with OpenAI Evals.” March 21, 2024. https://cookbook.openai.com/examples/evaluation/getting_started_with_openai_evals. Zou, Andy, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. 2023. “Universal and Transferable Adversarial Attacks on Aligned Language Models.” arXiv. https://doi.org/10.48550/arXiv.2307.15043.
Efficient Controllable Diffusion via Optimal Classifier Guidance

Owen Oertell*, Shikun Sun*, Yiding Chen*, Jin Peng Zhou, Zhiyong Wang, and Wen Sun (Cornell University; CUHK). *Equal contribution. Correspondence to ojo2@cornell.edu. Preprint. Under review.

Abstract: The controllable generation of diffusion models aims to steer the model to generate samples that optimize some given objective function. This is desirable for a variety of applications, including image generation, molecule generation, and DNA/sequence generation. Reinforcement learning (RL) based fine-tuning of the base model is a popular approach, but it can overfit the reward function while requiring significant resources. We frame controllable generation as the problem of finding a distribution that optimizes a KL-regularized objective function. We present SLCD (Supervised Learning based Controllable Diffusion), which iteratively generates online data and trains a small classifier to guide the generation of the diffusion model. Like standard classifier-guided diffusion, SLCD's key computational primitive is classification; it does not involve any complex concepts from RL or control. Via a reduction to no-regret online learning, we show that under KL divergence the output of SLCD provably converges to the optimal solution of the KL-regularized objective. Further, we empirically demonstrate that SLCD can generate high-quality samples with nearly the same inference time as the base model, in both image generation with continuous diffusion and biological sequence generation with discrete diffusion. Our code is available at https://github.com/Owen-Oertell/slcd.

[Figure 1: An overview of the main experimental results. Top: qualitative examples for continuous diffusion image tasks (image compression and aesthetic maximization); relaxing the KL constraint at test time (larger η) consistently increases the score. Bottom left: SLCD stays closer to the initial image distribution (lower FID score) for the same reward. Bottom right: SLCD is likewise effective at controlling discrete diffusion models.]

1 Introduction

Diffusion models are an expressive class of generative models able to model complex data distributions (Song et al., 2021a,b). Recent works have applied diffusion models to a variety of modalities: images, audio, and molecules (Saharia et al., 2022; Ho et al., 2022; Uehara et al., 2024a; Hoogeboom et al., 2022). However, modeling the data distribution is often not enough for downstream tasks: we frequently want to generate data that satisfies a specific property, be that a prompt, a particular chemical property, or a particular structure. Perhaps the simplest approach is classifier-guided diffusion, where a classifier trained on a pre-collected labeled dataset supplies a score that guides the diffusion process toward samples with high likelihood of being classified under a given label. However, this simple approach requires a labeled dataset and is not directly applicable when the goal is to optimize a complicated objective function (which we call the reward function hereafter). To optimize reward functions, reinforcement learning (RL) and stochastic optimal control based approaches have been studied (Black et al., 2024; Oertell et al., 2024; Domingo-Enrich et al., 2025; Clark et al., 2024; Fan et al., 2023; Uehara et al., 2024a,b). These methods formulate the diffusion generation process as a continuous-time Markov decision process and then apply RL optimizers such as proximal policy optimization (PPO) (Schulman et al., 2017) or optimal control methods to optimize the given reward function.
While these RL-based approaches can optimize reward, they complicate the solution concept of fine-tuning diffusion models: they often need to fine-tune the entire model, update the initial distribution, or adjust the noise scheduler, and popular RL optimizers such as PPO are notoriously unstable, slow, and expensive to train. Diffusion models are known for their simplicity of training (a simple reduction to least squares regression (Ho et al., 2020)); applying RL or control based solutions on top takes away the simplicity of the original diffusion model training. From a theoretical perspective, diffusion model convergence theory is built on a simple reduction to supervised learning (least squares regression) (Chen et al., 2022). Once fine-tuning a diffusion model is modeled as an RL problem, it is unlikely that one can prove any meaningful optimality guarantees (e.g., global convergence, sample complexity), since modern RL theory is often limited to very restricted settings such as small Markov decision processes (Agarwal et al., 2019). In this work, we ask the following question: can we fine-tune diffusion models to optimize a given objective function with simple computational oracles such as regression and classification, i.e., standard supervised-learning-style oracles? We provide an affirmative answer (Fig. 1). We view fine-tuning a diffusion model as a controllable generation problem in which we train a guidance model, typically a lightweight small classifier, to guide the pre-trained diffusion model at inference time. Specifically, we frame the optimization problem as KL-regularized reward maximization, where the goal is to maximize the given reward while staying close to the pre-trained model. Prior work such as SVDD (Uehara et al., 2024b) studied a similar setting and also trains a guidance model to guide the pre-trained diffusion model at inference time; however, SVDD's solution is sub-optimal, i.e., it is not guaranteed to find the optimal solution of the KL-regularized reward maximization objective. We propose a new approach, SLCD (Supervised Learning based Controllable Diffusion), which iteratively refines a classifier using feedback from the reward function on online data generated by the classifier itself guiding the pre-trained diffusion model. The key computational primitive of SLCD is supervised learning: in each iteration it performs multi-class classification on the currently collected data, and it collects online data via the standard classifier-guided denoising process. SLCD is entirely motivated by the concept of classifier guidance, but with a novel idea for how to learn the optimal classifier, where optimality means that the distribution generated by the classifier guiding the base model provably converges to the optimal target distribution. Theoretically, for continuous diffusion, we show via a reduction to no-regret learning (Shalev-Shwartz et al., 2012) that SLCD finds a near-optimal solution to the KL-regularized reward maximization objective; in other words, SLCD balances reward maximization against KL minimization.
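The excerpted pages do not include the paper's pseudocode, so the following is only a minimal, hypothetical PyTorch sketch of the iterate-sample-then-refit scheme just described. The toy score function, the tiny network, the binary (rather than multi-class) targets, the single-time classifier fit, and all names (base_score, Guide, slcd_outer_loop) are our illustrative assumptions, not the authors' implementation; their real code is in the repository linked in the abstract.

```python
import torch

def base_score(x, t):
    # Placeholder for the frozen pre-trained score network s(x, t).
    # Here: the score of a standard Gaussian, purely for illustration.
    return -x

class Guide(torch.nn.Module):
    # Small classifier whose logit approximates log p(y=1 | x_t); its input
    # gradient supplies the guidance term f(x, t) in classifier-guided sampling.
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
        )

    def forward(self, x, t):
        t_col = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_col], dim=1)).squeeze(-1)

def guided_sample(guide, n, dim, steps=50, dt=0.02):
    # Euler-Maruyama discretization of a reverse SDE whose drift adds the
    # classifier's input gradient to the base score (classifier-guided denoising).
    x = torch.randn(n, dim)
    for k in range(steps):
        t = k * dt
        with torch.enable_grad():
            xg = x.detach().requires_grad_(True)
            f = torch.autograd.grad(guide(xg, t).sum(), xg)[0]
        x = x + (base_score(x, t) + f) * dt + (2 * dt) ** 0.5 * torch.randn_like(x)
    return x.detach()

def slcd_outer_loop(reward, dim, eta=1.0, iters=10, batch=256):
    # (i) generate online data with the current guide; (ii) refit the guide on
    # reward feedback. Since r(x) <= 0, exp(eta * r(x)) lies in [0, 1] and can
    # serve as a soft binary target for p(y=1 | x).
    guide = Guide(dim)
    opt = torch.optim.Adam(guide.parameters(), lr=1e-3)
    for _ in range(iters):
        x = guided_sample(guide, batch, dim)
        with torch.no_grad():
            target = torch.exp(eta * reward(x)).clamp(0.0, 1.0)  # reward(x): shape (batch,)
        # Simplification: we fit the classifier only at the final time; the
        # actual method supervises it along the whole denoising trajectory.
        loss = torch.nn.functional.binary_cross_entropy_with_logits(guide(x, 0.0), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return guide
```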
Our analysis is motivated by the classic imitation learning (IL) algorithms DAgger (Ross et al., 2011) and AggreVaTe(d) (Ross & Bagnell, 2014; Sun et al., 2017), which frame IL as an iterative classification procedure whose main computational primitive is classification. Our theory shows that as long as we achieve no-regret on the sequence of classification losses constructed during training, the learned classifier can guide the pre-trained diffusion model to generate a near-optimal distribution. Experimentally, on two types of applications, image generation with continuous diffusion and biological sequence generation with discrete diffusion, we find that SLCD consistently outperforms the baselines on reward and inference speed while maintaining a lower divergence from the base model. Overall, SLCD serves as a simple solution to the problem of fine-tuning both continuous and discrete diffusion models to optimize a given KL-regularized reward objective.

2 Related Work

There has been significant interest in controllable generation for diffusion models, starting from Dhariwal & Nichol (2021), which introduced classifier guidance, and Ho & Salimans (2022), which introduced classifier-free guidance. These methods paved the way for further interest in controllable generation, in particular when there is an objective function to optimize. First demonstrated by Black et al. (2024) and Fan et al. (2023), RL fine-tuning of diffusion models has grown in popularity, with works such as Clark et al. (2024) and Prabhudesai et al. (2023) using direct backpropagation to optimize the reward function. However, these methods can lead to mode collapse of the generations and overfitting. Later works focus on the KL-constrained optimization problem, which regularizes the generation process toward the base model (Uehara et al., 2024b), but they suffer either from needing special memoryless noise schedulers (Domingo-Enrich et al., 2025) or from needing to control the initial distribution of the diffusion process (Uehara et al., 2024a) to avoid a bias problem. Our approach needs none of these modifications. Li et al. (2024) proposed a method that augments the decoding process and avoids training the underlying base model, but it increases compute time, and their practical approach is not guaranteed to learn the optimal distribution. More broadly, Mudgal et al. (2023) and Zhou et al. (2025) investigate token-level classifier guidance for KL-regularized controllable generation of large language models. Zhou et al. (2025) also demonstrated that to optimize a KL-regularized RL objective in the context of LLM text generation, one only needs to perform no-regret learning. While our approach is motivated by the Q# approach of Zhou et al. (2025), we tackle score-based guidance for diffusion models in a continuous latent space and continuous time, where the action space is an infinite set, making algorithm design and analysis substantially more difficult than the discrete-token, discrete-time setting of prior LLM work.

3 Preliminaries

3.1 Diffusion models

Given a data distribution $q_0$ on $\mathbb{R}^d$, the forward process of a diffusion model (Song et al., 2021b) adds noise to a sample $\bar{x}_0 \sim q_0$ iteratively, which can be modeled as the solution to a stochastic differential equation (SDE):

$$d\bar{x} = h(\bar{x}, \tau)\,d\tau + g(\tau)\,d\bar{w}, \qquad \bar{x}_0 \sim q_0,\ \ \tau \in [0, T], \tag{1}$$

where $\{\bar{w}_\tau\}_\tau$ is the standard Wiener process, $h(\cdot,\cdot): \mathbb{R}^d \times [0,T] \to \mathbb{R}^d$ is the drift coefficient, and $g(\cdot): [0,T] \to \mathbb{R}$ is the diffusion coefficient.
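For concreteness, here is a minimal numerical sketch (not from the paper) of the forward noising process under the Ornstein–Uhlenbeck choice $h(x,\tau) = -x$, $g(\tau) = \sqrt{2}$, using an Euler–Maruyama discretization; the function and parameter names are ours:

```python
import numpy as np

def forward_ou(x0, T: float = 5.0, n_steps: int = 500, rng=None):
    """Euler--Maruyama for d x = -x d tau + sqrt(2) d w, started at x0 ~ q_0."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        # drift term -x * dt plus Gaussian diffusion increment
        x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
    return x  # approximately N(0, I) for large T, matching q_T -> N(0, I)
```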
We use $q_\tau(\cdot)$ to denote the probability density function of $\bar{x}_\tau$ generated according to the forward SDE in Eq. (1). We assume $h$ and $g$ satisfy conditions such that $q_T$ converges to $\mathcal{N}(0, I)$ as $T \to \infty$. For example, if Eq. (1) is chosen to be an Ornstein–Uhlenbeck (OU) process, $q_T$ converges to $\mathcal{N}(0, I)$ exponentially fast. The forward process (1) can be reversed by:

$$dx = \left[-h(x, T-t) + g^2(T-t)\,\nabla \log q_{T-t}(x)\right] dt + g(T-t)\,dw, \qquad x_0 \sim q_T,\ \ t \in [0, T], \tag{2}$$

where $\{w_t\}_t$ is the Wiener process. It is known (Anderson, 1982) that the forward process (1) and the reverse process (2) have the same marginal distributions. To generate a sample, we can sample $x_0 \sim q_T$ and run the above SDE from $t = 0$ to $t = T$ to get $x_T$. In practice, one starts with $x_0 \sim \mathcal{N}(0, I)$ (an approximation of $q_T$) and uses a numerical SDE solver to approximately generate $x_T$, such as the generation processes of DDPM (Ho et al., 2020) or DDIM (Song et al., 2021a).

3.2 Controllable generation

In certain applications, generating samples from some target conditional distribution is preferable. This can be achieved by adding guidance to the score function. In general, the reverse SDE with guidance $f(\cdot,\cdot)$ is:

$$dx = \left[-h(x, T-t) + g^2(T-t)\left(\nabla \log q_{T-t}(x) + f(x, t)\right)\right] dt + g(T-t)\,dw. \tag{3}$$

For convenience, for all $0 \le s \le t \le T$, we use

$$P^{f}_{s \to t}(\cdot \mid p) \tag{4}$$

to denote the marginal distribution of $x_t$, the solution to (3) with initial condition $x_s \sim p$. In the remainder of this paper, we may abuse notation and use $P^{f}_{s \to t}(\cdot \mid x')$ to denote the deterministic initial condition $\Pr[x_s = x'] = 1$. In particular, we use $P^{\text{prior}}_{s \to t}(\cdot \mid p)$ to denote the special case $f \equiv 0$.

3.3 Reward guided generation

In this paper, we aim to generate samples that maximize some reward function $r(x) \in [-R_{\max}, 0]$ while not deviating too much from the base (prior) distribution $q_0$. We consider the setting where we have access to the score function of the prior $q_0$ (e.g., $q_0$ can be modeled by a pre-trained large diffusion model). Formally, our goal is to find a distribution $p$ that solves the following optimization problem:

$$\max_{p}\ \mathbb{E}_{x \sim p}[r(x)] - \frac{1}{\eta}\,\mathrm{KL}(p \,\|\, q_0) \tag{5}$$

for some $\eta > 0$, which controls the balance between optimizing reward and staying close to the prior. It is known (Ziebart et al., 2008) that $p^\star$, the optimal solution of (5), satisfies

$$p^\star(x) = \frac{1}{Z}\, q_0(x)\, \exp(\eta\, r(x)), \tag{6}$$

where $Z > 0$ is the normalization factor. Prior work treats this as a KL-regularized RL optimization problem. However, as mentioned above, to ensure that the optimal solution of the KL-regularized RL formulation equals $p^\star$, one needs to additionally optimize the initial state distribution $q_T$ (e.g., via another diffusion process) or design a special memoryless noise scheduler (Domingo-Enrich et al., 2025). We aim to show that we can learn $p^\star$ in a provably optimal manner without relating the setting to RL or stochastic control, thus eliminating the need to optimize the initial distribution, modify the noise scheduler, or use complicated RL solvers such as PPO (Schulman et al., 2017).

4 Algorithm

We introduce a binary label $y \in \{0, 1\}$ and define the classifier $p(y = 1 \mid x) := \exp(\eta\, r(x))$ (recall that we assume the reward is non-positive, so this is a valid probability). The introduction of the binary label and the classifier allows us to rewrite the target distribution as the posterior distribution given $y = 1$:

$$p(x \mid y = 1) \propto q_0(x)\, p(y = 1 \mid x) = q_0(x)\, \exp(\eta\, r(x)).$$
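For completeness, a one-line check (standard; see, e.g., Ziebart et al. (2008)) that Eq. (6) solves Eq. (5): the objective can be rewritten as a single KL divergence,

$$\mathbb{E}_{x\sim p}[r(x)] - \frac{1}{\eta}\,\mathrm{KL}(p\,\|\,q_0) = -\frac{1}{\eta}\,\mathbb{E}_{x\sim p}\!\left[\ln\frac{p(x)}{q_0(x)\,e^{\eta r(x)}}\right] = -\frac{1}{\eta}\,\mathrm{KL}\!\left(p \,\Big\|\, \tfrac{1}{Z}\, q_0\, e^{\eta r}\right) + \frac{\ln Z}{\eta},$$

which is maximized exactly when the KL term vanishes, i.e., at $p = p^\star = \frac{1}{Z}\, q_0\, e^{\eta r}$.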
Figure 2: Covariate shift (left) and data collection in our approach (right). The left panel illustrates covariate shift: in the naive offline approach, the classifier is trained on the green samples, but at inference time it is applied to the red samples, i.e., samples generated by using the classifier itself as guidance. The difference between the green samples (training distribution) and the red samples (test distribution) is the covariate shift. Our approach (right) mitigates this by iteratively augmenting the training set with samples drawn from guided diffusion. We roll in with classifier-guided diffusion to reach $x_t$. We then roll out with the prior's score function to reach $x_T$ and query a reward $r(x_T)$. The triple $(t, x_t, r(x_T))$ is used to refine the classifier.

Given this formulation, the naive approach is to model $p(x \mid y = 1)$ via the standard classifier-guided diffusion process. In other words, we generate a labeled dataset $\{(x, y)\}$ where $x$ is drawn from the prior, $x \sim q_0$, and the label is sampled from a Bernoulli distribution with mean $\exp(\eta\, r(x))$, i.e., $y \sim p(y \mid x)$. Given this data, we can add noise to $x$ and train a time-dependent classifier that maps the noised sample to its label $y$. Once the classifier is trained, we use its gradient to guide the generation process as in Eq. (3). While this naive approach is simple, it can fail due to covariate shift: the training distribution for the classifier (i.e., $q_t$, the distribution of $\bar{x}_t$) differs from the test distribution on which the classifier is evaluated during generation (i.e., the distribution of samples $x$ produced during the classifier-guided denoising process). In other words, while the classifier can be accurate under the training distribution, there is no guarantee that it is accurate under the test distribution, the distribution that ultimately matters when we perform classifier-guided generation. This is illustrated in the left panel of Fig. 2. In the worst case, the density ratio of the test distribution to the training distribution can be exponentially large, $\exp(R_{\max}\,\eta)$, which is too large to ignore when $\eta$ is large (i.e., when the KL regularization is weak).
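A minimal sketch of the naive offline labeling scheme just described; the `prior_sample` and `reward` callables are hypothetical stand-ins for the pre-trained sampler and the reward oracle:

```python
import numpy as np

def naive_labeled_dataset(prior_sample, reward, eta: float, n: int, rng=None):
    """Offline dataset {(x, y)} with x ~ q_0 and y ~ Bernoulli(exp(eta * r(x)))."""
    rng = rng or np.random.default_rng(0)
    data = []
    for _ in range(n):
        x = prior_sample()                                # sample from the prior q_0
        y = int(rng.random() < np.exp(eta * reward(x)))   # r(x) <= 0 => valid prob.
        data.append((x, y))
    return data  # train a time-dependent classifier on noised copies of each x
```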
We propose an iterative approach motivated by DAgger (data aggregation; Ross et al. (2011)) to close the gap between the training and test distributions. First, with the binary label $y$ and our definition $p(y = 1 \mid x_T) = \exp(\eta\, r(x_T))$ (note that $x_T$ represents the generated sample), we can show that the classifier $p(y = 1 \mid x_t)$ for any $t \in [0, T)$ takes the following form:

$$p(y = 1 \mid x_t) = \mathbb{E}_{x_T \sim P^{\text{prior}}_{t \to T}(\cdot \mid x_t)}\left[\exp(\eta\, r(x_T))\right]. \tag{7}$$

Intuitively, $p(y = 1 \mid x_t)$ models the expected probability of observing label $y = 1$ if we generate $x_T$ starting from $x_t$ using the reverse process of the pre-trained diffusion model. We defer the formal derivation to Appendix A, which relies on proving that the forward and backward processes of a diffusion model induce the same conditional distributions. We take advantage of this closed-form solution for the classifier and propose to model it via a distributional approach (Zhou et al., 2025). Specifically, define $r \sim R^{\text{prior}}(\cdot \mid x_t, t)$ as the distribution of the reward of $x_T \sim P^{\text{prior}}_{t \to T}(\cdot \mid x_t)$. The classifier $p(y = 1 \mid x_t)$ can then be rewritten using the reward distribution $R^{\text{prior}}(\cdot \mid x_t, t)$:

$$p(y = 1 \mid x_t) = \mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x_t, t)}\left[\exp(\eta\, r)\right]. \tag{8}$$

Our goal is to learn a reward distribution $\hat{R}$ that approximates $R^{\text{prior}}$ and to use $\hat{R}$ to approximate the classifier as $p(y = 1 \mid x_t) \approx \mathbb{E}_{r \sim \hat{R}(\cdot \mid x_t, t)}[\exp(\eta\, r)]$. This distributional approach allows us to exploit the closed form of the classifier in Eq. (7) (e.g., there is no need to learn the exponential function $\exp(\eta\, r)$ inside the classifier). Algorithm 1 describes an iterative learning approach for training such a distribution $\hat{R}(\cdot \mid x_t, t)$ via supervised learning (e.g., maximum likelihood estimation (MLE)). In iteration $n$, given the current reward distribution $\hat{R}_n$, we define the score of the classifier $f_n$ as:

$$\forall\, t, x_t:\quad f_n(x_t, t) := \nabla_{x_t} \ln\left(\mathbb{E}_{r \sim \hat{R}_n(\cdot \mid x_t, t)}\left[\exp(\eta\, r)\right]\right). \tag{9}$$

Algorithm 1: Controllable diffusion via iterative supervised learning (SLCD)
  Initialize $\hat{R}_1$.
  for $n = 1, \dots, N$ do
    Set $f_n$ as in Eq. (9).
    Collect an additional training dataset $D_n$ following Eq. (10).
    Train $\hat{R}_{n+1}$ on $\bigcup_{i=1}^{n} D_i$ according to Eq. (11).
  end for
  Return $f_{\hat{n}}$, the best of $\{f_1, \dots, f_N\}$ on validation.

We use $f_n$ to guide the prior and generate an additional training dataset $D_n := \{(t, x_t, r)\}$ of size $M$, where

$$t \sim \text{Uniform}([0, T]),\quad x_0 \sim \mathcal{N}(0, I),\quad \underbrace{x_t \sim P^{f_n}_{0 \to t}(\cdot \mid x_0)}_{\text{roll in with the score of the latest classifier } f_n \text{ as guidance}},\quad \underbrace{x_T \sim P^{\text{prior}}_{t \to T}(\cdot \mid x_t),\ \ r = r(x_T)}_{\text{roll out with the prior to collect reward}}. \tag{10}$$

Note that the roll-in process above simulates the inference procedure: $x_t$ is an intermediate sample we would generate if we used $f_n$ to guide the prior at inference time. The roll-out procedure collects reward signals for $x_t$, which in turn are used to refine the reward distribution estimator $\hat{R}(\cdot \mid x_t)$. This procedure is illustrated in the right panel of Fig. 2. We then aggregate $D_n$ with all previous data and re-train the distribution estimator $\hat{R}$ on the aggregated data via supervised learning, i.e., maximum likelihood estimation:

$$\hat{R}_{n+1} \in \operatorname*{argmax}_{R \in \mathcal{R}} \sum_{i=1}^{n} \sum_{(t, x_t, r) \in D_i} \ln R(r \mid x_t, t), \tag{11}$$

where $\mathcal{R}$ is the class of distributions. We iterate the above procedure until $\hat{R}_n(\cdot \mid x_t, t)$ is an accurate estimator of the true model $R^{\text{prior}}(\cdot \mid x_t, t)$ under the distribution induced by the generation process of guiding the prior with $f_n$ itself. As in DAgger's analysis, we show in our analysis section that a simple no-regret argument implies that we reach a stage where there is no longer a gap between the training and test distributions. At test time, once we have the score $f_{\hat{n}}$, we use it to guide the prior and generate samples via the SDE in Eq. (3). In practice, the sampling procedure of DDPM can serve as the numerical solver for Eq. (3). Another practical benefit of our approach is that the definitions of the distribution $R^{\text{prior}}$ and the learned distribution $\hat{R}$ are independent of the guidance strength parameter $\eta$. This means that once we have learned $\hat{R}$, we can adjust $\eta$ at inference time as in Eq. (9) without re-training $\hat{R}$.
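A minimal sketch of one data-collection round of Algorithm 1 (Eq. (10)); `rollin_guided` and `rollout_prior` are hypothetical stand-ins for the guided and unguided reverse-SDE solvers, and `reward` for the reward oracle:

```python
import numpy as np

def collect_round(f_n, rollin_guided, rollout_prior, reward,
                  M: int, T: float, dim: int, rng=None):
    """One round of Eq. (10): roll in with f_n, roll out with the prior."""
    rng = rng or np.random.default_rng(0)
    D_n = []
    for _ in range(M):
        t = rng.uniform(0.0, T)               # t ~ Uniform([0, T])
        x0 = rng.standard_normal(dim)         # x_0 ~ N(0, I)
        xt = rollin_guided(x0, 0.0, t, f_n)   # guided reverse SDE up to time t
        xT = rollout_prior(xt, t, T)          # prior-only reverse SDE to time T
        D_n.append((t, xt, reward(xT)))       # triple (t, x_t, r) used in Eq. (11)
    return D_n
```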
Remark 1 (Modeling the one-dimensional distribution as a classifier). We emphasize that from a computational perspective, our approach relies on a simple supervised learning oracle to estimate the one-dimensional conditional distribution $R(r \mid x_t, t)$. In our implementation, we use a histogram to model this one-dimensional distribution: we discretize the reward range $[-R_{\max}, 0]$ into a finite number of bins and use a standard multi-class classification oracle to learn an $\hat{R}$ that maps $x_t$ to a distribution over this finite set of labels. Thus, unlike prior work that casts controllable diffusion generation as an RL or stochastic control problem, our approach eliminates the need to reason about or implement RL; it relies entirely on standard classification and can be seamlessly integrated with any existing implementation of classifier-guided diffusion.
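A minimal sketch of the discretization in Remark 1, assuming $B$ equally sized bins over $[-R_{\max}, 0]$ (the function name and interface are ours):

```python
def reward_to_bin(r: float, r_max: float, n_bins: int) -> int:
    """Map a reward r in [-r_max, 0] to a class label in {0, ..., n_bins - 1}."""
    frac = (r + r_max) / r_max             # rescale [-r_max, 0] onto [0, 1]
    return min(int(frac * n_bins), n_bins - 1)  # clamp r = 0 into the last bin
```

Density estimation of $R(r \mid x_t, t)$ then reduces to $B$-way classification with a cross-entropy loss on these bin labels.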
Remark 2 (Comparison to SVDD (Uehara et al., 2024b)). The most closely related work is SVDD. There are two notable differences. First, SVDD estimates a sub-optimal classifier, namely $\mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x_t, t)}[r]$. The posterior distribution in their case is proportional to $q_0(x_T) \cdot \mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x_T, T)}[r]$, which is clearly not equal to the target distribution. Second, SVDD does not address distribution shift: it trains the classifier only on offline data collected from the prior alone.

We note that our method can also be adapted to discrete diffusion tasks, as shown in Section 6. We refer the reader to Appendix E for more details.

5 Analysis

In this section, we provide a performance guarantee for the sampler returned by Algorithm 1. We measure its quality by the KL divergence of the generated data distribution $P^{f_{\hat{n}}}_{0 \to T}(\cdot \mid \mathcal{N}(0, I))$ from the target distribution $p^\star$. At a high level, the error comes from two sources:

- Starting distribution mismatch: in the sampling process, we initialize the SDE Eq. (3) with samples from $\mathcal{N}(0, I)$, not the ground-truth $q_T(\cdot \mid y = 1)$. However, under proper conditions, $q_T(\cdot \mid y = 1)$ converges to $\mathcal{N}(0, I)$ as $T \to \infty$ (see Lemma 3 of Chen et al. (2025)). In particular, when Eq. (1) is chosen to be the OU process, $q_T(\cdot \mid y = 1)$ converges at an exponential rate: $\mathrm{KL}(\mathcal{N}(0, I) \,\|\, q_T(\cdot \mid y = 1)) = O(e^{-2T})$.
- Estimation error of the guidance: the estimated guidance $f_{\hat{n}}$ differs from the ground truth $\nabla_{x_t} \ln p(y = 1 \mid x_t)$, but the error is controlled by the regret of no-regret online learning.

We assume realizability:

Assumption 3 (realizability). $R^{\text{prior}} \in \mathcal{R}$.

Our analysis relies on a reduction to no-regret online learning. In particular, we assume the no-regret property on the following log-loss. Let $M$ denote the size of each online dataset $D_i$.

Assumption 4 (no-regret learning). The sequence of reward distributions $\{\hat{R}_i\}$ satisfies the following inequality:

$$\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{\hat{R}_i(r \mid x_t, t)} \;-\; \min_{R \in \mathcal{R}}\ \frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{R(r \mid x_t, t)} \;\le\; \gamma_N,$$

where the average regret $\gamma_N = o(N)/N$ shrinks to zero as $N \to \infty$.

No-regret online learning for the log-loss is standard in the literature (Cesa-Bianchi & Lugosi, 2006; Foster et al., 2021; Wang et al., 2024; Zhou et al., 2025). Our algorithm implements the specific no-regret algorithm Follow-the-Regularized-Leader (FTRL) (Shalev-Shwartz et al., 2012; Suggala & Netrapalli, 2020), in which we optimize $\hat{R}_i$ on the aggregated dataset. A Follow-the-Leader-type approach with random perturbation can even achieve the no-regret property for non-convex optimization (Suggala & Netrapalli, 2020). This data aggregation step and the reduction to no-regret online learning closely follow DAgger's analysis (Ross et al., 2011). Under certain conditions, the marginal distribution $q_T$ defined by the forward SDE Eq. (1) converges rapidly to a Gaussian distribution (see Lemma 3 of Chen et al. (2025)). For simplicity, we make the following assumption on the convergence:

Assumption 5 (convergence of the forward process). $\mathrm{KL}(\mathcal{N}(0, I) \,\|\, q_T(\cdot \mid y = 1)) \le \epsilon_T$. For OU processes, $\epsilon_T$ shrinks at the rate $\exp(-T)$.

We assume the reward distribution class satisfies regularity conditions such that the estimation error of the classifier controls the score difference:

Assumption 6. There exists $L > 0$ such that for all $R, R' \in \mathcal{R}$ and all $x, t$:

$$\left\| \nabla_x \ln\left(\mathbb{E}_{r \sim R(\cdot \mid x, t)}\, e^{\eta r}\right) - \nabla_x \ln\left(\mathbb{E}_{r \sim R'(\cdot \mid x, t)}\, e^{\eta r}\right) \right\|_2 \le L \left| \mathbb{E}_{r \sim R(\cdot \mid x, t)}\, e^{\eta r} - \mathbb{E}_{r \sim R'(\cdot \mid x, t)}\, e^{\eta r} \right|.$$

Standard diffusion models with classifier guidance train a time-dependent classifier and use its score function to control image generation (Song et al., 2021b; Dhariwal & Nichol, 2021). Such an assumption is crucial to guarantee the quality of class-conditional sample generation. We defer a more detailed discussion of Assumption 6 to Appendix B. In general, the assumption holds when the relevant functions satisfy certain smoothness conditions.

Theorem 7. Suppose Assumptions 3, 4, 5, and 6 hold. There exists $\hat{n} \in \{1, \dots, N\}$ such that $f_{\hat{n}}$ returned by Algorithm 1 satisfies:

$$\mathbb{E}\left[ \mathrm{KL}\left( P^{f_{\hat{n}}}_{0 \to T}(\cdot \mid \mathcal{N}(0, I)) \,\big\|\, p^\star \right) \right] \le \epsilon_T + \frac{1}{2}\, T\, \|g\|_\infty^2\, L^2\, \gamma_N,$$

where the expectation is with respect to the randomness of the whole training process and $g$ is the diffusion coefficient defined in Eq. (1). Since $P^{f_{\hat{n}}}_{0 \to T}(\cdot \mid \mathcal{N}(0, I))$ models the distribution of the generated samples when $f_{\hat{n}}$ guides the prior, the theorem shows that our sampling distribution is close to the target $p^\star$ under KL. Note that $\epsilon_T$ decays at the rate $\exp(-T)$ when the forward SDE is an OU process, and $\gamma_N$ decays at the rate $1/\sqrt{N}$ for a typical no-regret algorithm such as Follow-the-Leader (Shalev-Shwartz et al., 2012; Suggala & Netrapalli, 2020).

6 Experiments

We compare SLCD to a variety of training-free and value-guided sampling strategies across four tasks. For Best-of-N, we draw $N$ independent samples from the base diffusion model and keep the one with the highest reward. Diffusion Posterior Sampling (DPS) is a classifier-guidance variant originally proposed for continuous diffusion (Chung et al., 2023), here adapted to discrete diffusion via the state-of-the-art method of Nisonoff et al. (2025). Sequential Monte Carlo (SMC) methods (Del Moral & Doucet, 2014; Wu et al., 2023; Trippe et al., 2022) use importance sampling across whole batches to select the best samples. SVDD-MC (Li et al., 2024) evaluates the expected reward of $N$ candidates under an estimated value function, while SVDD-PM uses the true reward for each candidate with slight algorithmic modifications. We evaluate on (i) image compression (negative file size) and (ii) image aesthetics (LAION aesthetic score) using Stable Diffusion v1.5 (Rombach et al., 2022), as well as on (iii) 5' untranslated regions optimized for mean ribosome load (Sample et al., 2019; Sahoo et al., 2024) and (iv) DNA enhancer sequences optimized for predicted expression in HepG2 cells via the Enformer model (Avsec et al., 2021).
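As a point of reference, the simplest baseline above admits a two-line sketch (`base_sample` and `reward` are hypothetical stand-ins for the base-model sampler and the reward oracle):

```python
def best_of_n(base_sample, reward, n: int):
    """Draw n samples from the base diffusion model; keep the highest-reward one."""
    samples = [base_sample() for _ in range(n)]
    return max(samples, key=reward)
```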
Figure 3: Reward vs. number of SLCD iterations on the DNA enhancer task. The reward increases as the restart-state distribution becomes richer, and only a few iterations are needed to reach good performance.

In line with Li et al. (2024), we compare the top 10% and top 50% quantiles of a batch of generations in Table 1. We compare against these methods because, like SLCD, none of them requires training the base model. Overall, we see that SLCD consistently outperforms the baseline methods while requiring nearly the same inference time as the base model and avoiding the need for multiple MC samples at each diffusion step. These four tasks jointly cover two primary application domains of diffusion models, image generation and biological sequence generation, providing a comprehensive assessment of controllable diffusion methods.

6.1 Reward Comparison

We compare the reward of SLCD to the baseline methods in Table 1. SLCD consistently achieves higher reward than SVDD-MC, SVDD-PM, and the other baseline methods on all four tasks. The margin of improvement is most pronounced in settings where the classifier closely approximates the true reward, most notably the image compression task, where SLCD nearly attains the optimal reward. To examine SLCD's performance further, we plot the reward distributions of SLCD and the baseline methods in Fig. 5 and Fig. 1. We observe that SLCD produces a more tightly concentrated reward distribution with a higher median reward than the baseline methods, while still maintaining generation diversity, as shown in Fig. 1 by a lower FID score than the baselines.

6.2 Qualitative results

We present images generated by SLCD in Fig. 4. For the compression task, we observe three recurring patterns: some images shift the subject toward the edges of the frame, others reduce the subject's size, and some simplify the overall scene to reduce file size. For the aesthetic task, the outputs tend to take on a more illustrated appearance, often reflecting a variety of artistic styles.

Table 1: Top 10% and 50% quantiles of the generated samples for each algorithm (with 95% confidence intervals). Higher is better. SLCD consistently outperforms the baseline methods.

| Domain | Quantile | Pre-Train | Best-N | DPS | SMC | SVDD-MC | SVDD-PM | SLCD |
|---|---|---|---|---|---|---|---|---|
| Image: Compress | 50% | -101.4±0.22 | -71.2±0.46 | -60.1±0.44 | -59.7±0.4 | -54.3±0.33 | -51.1±0.38 | -13.60±0.79 |
| Image: Compress | 10% | -78.6±0.13 | -57.3±0.28 | -61.2±0.28 | -49.9±0.24 | -40.4±0.2 | -38.8±0.23 | -11.05±0.41 |
| Image: Aesthetic | 50% | 5.62±0.003 | 6.11±0.007 | 5.61±0.009 | 6.02±0.004 | 5.70±0.008 | 6.14±0.007 | 6.31±0.061 |
| Image: Aesthetic | 10% | 5.98±0.002 | 6.34±0.004 | 6.00±0.005 | 6.28±0.003 | 6.05±0.005 | 6.47±0.004 | 6.59±0.077 |
| Enhancers (DNA) | 50% | 0.121±0.033 | 1.807±0.214 | 3.782±0.299 | 4.28±0.02 | 5.074±0.096 | 5.353±0.231 | 7.403±0.125 |
| Enhancers (DNA) | 10% | 1.396±0.020 | 3.449±0.128 | 4.879±0.179 | 5.95±0.01 | 5.639±0.057 | 6.980±0.138 | 7.885±0.231 |
| 5'UTR (RNA) | 50% | 0.406±0.028 | 0.912±0.023 | 0.426±0.073 | 0.76±0.02 | 1.042±0.008 | 1.214±0.016 | 1.313±0.024 |
| 5'UTR (RNA) | 10% | 0.869±0.017 | 1.064±0.014 | 0.981±0.044 | 0.91±0.01 | 1.117±0.005 | 1.383±0.010 | 1.421±0.039 |

Figure 4: Images generated by SLCD with varying η values and their rewards (per-image reward values and annotations such as "moving toward edge", "smaller", "simpler and cartoonish", "painting-like, illustrated", and "sophisticated black-and-white" appear in the figure). The first column shows results from the base model, SD1.5, which corresponds to our method with η = 0. As η increases, the KL penalty is relaxed, allowing the generated images to be more strongly optimized for the reward function; consequently, they diverge further from the base model's original distribution.
As η increases, the KL constraint is relaxed, enabling a controlled trade-off between optimizing for the reward function and staying close to the base model's distribution. Notably, even under strong reward guidance (i.e., larger η), our method consistently maintains a high level of diversity in the generated outputs.

6.3 Fréchet Inception Distance Comparison

Both SLCD and the SVDD baselines allow one to control the output sample reward at test time, but via different control variables: SLCD modulates the KL-penalty coefficient η, while SVDD-MC and SVDD-PM vary the number of Monte Carlo rollouts evaluated at each diffusion step. Since these control parameters can affect sample quality in different ways, we report both the Fréchet Inception Distance (FID) (Heusel et al., 2017) and the reward. At the same reward, a higher FID indicates that the model generates images that stray farther from the base model's distribution, a sign of reward hacking. We evaluate these methods in Fig. 1, where the points on each curve form a Pareto frontier between reward and FID. SLCD achieves a better reward-FID trade-off than SVDD-MC and SVDD-PM.

Figure 5: Distribution of rewards for RNA sequences (5'UTR) across different methods. SLCD achieves higher median rewards and a better overall distribution compared to the baseline approaches.

Figure 6: Reward vs. inference time on the compression task. Numeric labels on the SLCD curve indicate η, while those on the SVDD curves denote the duplication number applied at each step.

6.4 Inference Time Comparison

An additional advantage of SLCD is its negligible inference overhead at test time, even when using higher η values to achieve greater rewards. In Fig. 6, we compare the wall-clock generation time per image on an NVIDIA A6000 GPU for SLCD against SVDD-MC and SVDD-PM. SLCD achieves higher rewards while requiring significantly fewer computational resources and substantially shorter inference times than either baseline. Specifically, SLCD takes only 6.06 seconds per image, nearly identical to the base model SD 1.5's 5.99 seconds. Importantly, unlike the SVDD methods, which incur increased computational cost to improve rewards, SLCD maintains constant inference time across all η values, achieving better performance with no additional computation.

6.5 Ablation Study

To elucidate the impact of each training cycle, we vary the number of SLCD iterations and plot the resulting reward in Fig. 3. As additional iterations enrich the state distribution and mitigate covariate shift for the classifier, the reward consistently rises, confirming that our iterative approach mitigates the covariate shift issue. In practice, we require only a small number of iterations to achieve high reward. Because the scaling parameter η can be chosen at test time once the distribution $\hat{R}$ is fully trained, SLCD enables test-time control over the KL penalty during inference: by modulating η, practitioners can smoothly trade off reward against sample quality without retraining $\hat{R}$, as demonstrated in Fig. 4.

7 Conclusion

In this work, we introduced SLCD, a novel and efficient method that recasts the KL-constrained optimization problem as a supervised learning task.
We provided theoretical guarantees showing that SLCD converges to the optimal KL-constrained solution and that data aggregation effectively mitigates covariate shift. Empirical evaluations confirm that SLCD surpasses existing approaches while preserving high fidelity to the base model's outputs and maintaining nearly the same inference time.

Acknowledgments and Disclosure of Funding

Wen Sun is supported by NSF IIS-2154711, NSF CAREER 2339395, DARPA LANCER: LeArning Network CybERagents, and the Sloan Research Fellowship.

References

Alekh Agarwal, Nan Jiang, Sham M Kakade, and Wen Sun. Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep., 2019.

Brian D. O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.

Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R. Ledsam, Agnieszka Grabska-Barwinska, Kyle R. Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, and David R. Kelley. Effective gene expression prediction from sequence by integrating long-range interactions. Nature Methods, 2021. doi: 10.1038/s41592-021-01252-x.

Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning, 2024.

Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

Hongrui Chen, Holden Lee, and Jianfeng Lu. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions. In International Conference on Machine Learning, pp. 4735–4763. PMLR, 2023.

Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. arXiv preprint arXiv:2209.11215, 2022.

Yiding Chen, Yiyi Zhang, Owen Oertell, and Wen Sun. Convergence of consistency model with multistep sampling under general data assumptions. arXiv preprint arXiv:2505.03194, 2025.

Hyungjin Chung, Jeongsol Kim, Michael Thompson McCann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. ICLR, 2023.

Kevin Clark, Paul Vicol, Kevin Swersky, and David J. Fleet. Directly fine-tuning diffusion models on differentiable rewards. ICLR, 2024.

Pierre Del Moral and Arnaud Doucet. Particle methods: An introduction with applications. In ESAIM: Proceedings, volume 44, pp. 1–46. EDP Sciences, 2014.

Rahul Dey and Fathi M. Salem. Gate-variants of gated recurrent unit (GRU) neural networks. arXiv, 2017.

Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis, 2021. URL https://arxiv.org/abs/2105.05233.

Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, and Ricky T. Q. Chen. Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. ICLR, 2025.

Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. arXiv preprint arXiv:2305.16381, 2023.

Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. arXiv preprint cs/0408007, 2004.
Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. CoRR, 2022. doi: 10.48550/ARXIV.2207.12598.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen Video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022.

Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. ICML, 2022.

Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, and Masatoshi Uehara. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding, 2024. URL https://arxiv.org/abs/2408.08252.

Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, et al. Controlled decoding from language models. arXiv preprint arXiv:2310.17022, 2023.

Hunter Nisonoff, Junhao Xiong, Stephan Allenspach, and Jennifer Listgarten. Unlocking guidance for discrete state-space diffusion and flow models. ICLR, 2025.

Owen Oertell, Jonathan D. Chang, Yiyi Zhang, Kianté Brantley, and Wen Sun. RL for consistency models: Faster reward guided text-to-image generation, 2024. URL https://arxiv.org/abs/2404.03673.

Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation, 2023.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684–10695, June 2022.

Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.

Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 627–635. JMLR Workshop and Conference Proceedings, 2011.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35:36479–36494, 2022.

Subham S. Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin T. Chiu, Alexander Rush, and Volodymyr Kuleshov. Simple and effective masked diffusion language models. NeurIPS, 2024.

Paul J. Sample, Ban Wang, David W. Reid, Vlad Presnyak, Iain J. McFadyen, David R. Morris, and Georg Seelig. Human 5' UTR design and variant effect prediction from a massively parallel translation assay. Nature Biotechnology, 2019. doi: 10.1038/s41587-019-0164-5.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Christoph Schuhmann. LAION aesthetics. https://laion.ai/blog/laion-aesthetics/, 2022.
Shai Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021a. URL https://openreview.net/forum?id=St1giarCHLP.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In Proceedings of International Conference on Learning Representations, 2021b.

Arun Sai Suggala and Praneeth Netrapalli. Online non-convex learning: Following the perturbed leader is optimal. In Algorithmic Learning Theory, pp. 845–861. PMLR, 2020.

Wen Sun, Arun Venkatraman, Geoffrey J Gordon, Byron Boots, and J Andrew Bagnell. Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction. In International Conference on Machine Learning, pp. 3309–3318. PMLR, 2017.

Brian L Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, and Tommi Jaakkola. Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem. arXiv preprint arXiv:2206.04119, 2022.

Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, and Sergey Levine. Fine-tuning of continuous-time diffusion models as entropy-regularized control, 2024a.

Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, and Tommaso Biancalani. Feedback efficient online fine-tuning of diffusion models. arXiv preprint arXiv:2402.16359, 2024b.

Kaiwen Wang, Nathan Kallus, and Wen Sun. The central role of the loss function in reinforcement learning. arXiv preprint arXiv:2409.12799, 2024.

Luhuan Wu, Brian Trippe, Christian Naesseth, David Blei, and John P Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36:31372–31403, 2023.

Jin Peng Zhou, Kaiwen Wang, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kilian Q Weinberger, Kianté Brantley, and Wen Sun. Q♯: Provably optimal distributional RL for LLM post-training. arXiv preprint arXiv:2502.20548, 2025.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.

We show the derivation for the classifier in Appendix A and discuss Assumption 6 in Appendix B. In Appendix C, we provide the proof of the main theorem. We then provide additional technical lemmas in Appendix D, more details on the training and evaluation of SLCD in Appendix E, and, finally, more samples from the image experiments in Appendix F.

A Derivation for the classifier

Our goal is to show Eq. (7): $p(y = 1 \mid x_t) = \mathbb{E}_{x_T \sim P^{\text{prior}}_{t \to T}(\cdot \mid x_t)}[\exp(\eta\, r(x_T))]$. Our first step is to derive the classifier in terms of $\{\bar{x}_\tau\}_\tau$, the forward process defined by Eq. (1). By the law of total probability:

$$p(y = 1 \mid \bar{x}_\tau) = \frac{p(y = 1, \bar{x}_\tau)}{q(\bar{x}_\tau)} = \frac{\int p(y = 1, \bar{x}_\tau, \bar{x}_0)\, d\bar{x}_0}{q(\bar{x}_\tau)} = \frac{\int p(y = 1 \mid \bar{x}_\tau, \bar{x}_0)\, q(\bar{x}_\tau, \bar{x}_0)\, d\bar{x}_0}{q(\bar{x}_\tau)} = \int p(y = 1 \mid \bar{x}_\tau, \bar{x}_0)\, q(\bar{x}_0 \mid \bar{x}_\tau)\, d\bar{x}_0.$$

By definition, the label $y$ and $\bar{x}_\tau$ are independent when conditioning on $\bar{x}_0$; thus $p(y = 1 \mid \bar{x}_\tau, \bar{x}_0) = p(y = 1 \mid \bar{x}_0)$ and

$$p(y = 1 \mid \bar{x}_\tau) = \mathbb{E}_{\bar{x}_0 \mid \bar{x}_\tau}\left[p(y = 1 \mid \bar{x}_0)\right] = \mathbb{E}_{\bar{x}_0 \mid \bar{x}_\tau}\left[\exp(\eta\, r(\bar{x}_0))\right]. \tag{12}$$

By Lemma 12, the forward (Eq. (1)) and reverse (Eq. (2)) processes have the same conditional distributions. Thus, in Eq. (12), we can substitute $\bar{x}_0$, $\bar{x}_\tau$, and $q(\bar{x}_0 \mid \bar{x}_\tau)$ with the corresponding components of the reverse process: $x_T$, $x_{T-\tau}$, and $P^{\text{prior}}_{T-\tau \to T}(x_T \mid x_{T-\tau})$. We then complete the proof by setting $\tau = T - t$.
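The identity just derived has a direct Monte Carlo reading, sketched below with hypothetical `rollout_prior` and `reward` helpers (the prior-only reverse-SDE solver and the reward oracle):

```python
import numpy as np

def classifier_value_mc(xt, t, T, eta, rollout_prior, reward,
                        n_samples: int = 256, rng=None) -> float:
    """Monte Carlo estimate of p(y=1 | x_t) = E[exp(eta * r(x_T))] (Eq. (7))."""
    rng = rng or np.random.default_rng(0)
    vals = [np.exp(eta * reward(rollout_prior(xt, t, T, rng)))
            for _ in range(n_samples)]
    return float(np.mean(vals))  # converges to p(y = 1 | x_t) as n_samples grows
```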
One crucial property: Eq. (12), the classifier defined in terms of the forward process, depends only on the conditional distribution $\bar{x}_0 \mid \bar{x}_\tau$, not on the joint distribution of $(\bar{x}_0, \bar{x}_\tau)$. This means Eq. (7), the classifier defined in terms of the reverse process, is accurate regardless of the marginal distribution of $x_t$. In fact, in the data collection procedure of Eq. (10), the roll-in step involves the classifier estimator during training, so the marginal distribution of $x_t$ can be very different from $q_{T-t}$, the marginal distribution of the forward process. However, as long as the roll-out step uses the ground-truth reverse process, we are using the correct conditional distribution, and the target classifier during training is thus unbiased.

B Discussion on Assumption 6

Assumption 6 assumes that estimating the classifier well yields a good gradient estimate with small pointwise error. In this section, we show two sets of sufficient conditions that weaken Assumption 6 to versions where the estimation errors are evaluated under the marginal distribution of the reverse process. The conditions depend on:

- $\hat{R}_{\hat{n}}$: the reward distribution chosen by Algorithm 1;
- $R^{\text{prior}}$: the ground-truth reward distribution;
- $\hat{p}$: the marginal distribution of the reverse process induced by the guidance $\hat{f}$ (returned by Algorithm 1).

For simplicity, let

$$\hat{v}(x, t) := \mathbb{E}_{r \sim \hat{R}_{\hat{n}}(\cdot \mid x, t)}\left[e^{\eta r}\right], \qquad v^\star(x, t) := \mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x, t)}\left[e^{\eta r}\right], \qquad V(x, t) := \hat{v}(x, t) - v^\star(x, t).$$

Throughout this section, we discuss natural conditions under which a small error in estimating $v^\star(x, t)$ leads to a small error in estimating $\nabla_x \ln v^\star(x, t)$. The conditions are related to the smoothness of the functions $V, v^\star, \hat{v}$ and the distribution $p$. First, we present the following lemma, showing that it suffices to control $\nabla f_1(x) - \nabla f_2(x)$:

Lemma 8. Let $f_1, f_2: \mathbb{R}^d \to \mathbb{R}$. Then

$$\|\nabla \ln f_1(x) - \nabla \ln f_2(x)\|_2 \le \frac{1}{\inf_x |f_1(x)|}\, \|\nabla f_1(x) - \nabla f_2(x)\|_2 + \frac{\sup_x \|\nabla f_2(x)\|_2}{\inf_x |f_1(x)|\, \inf_x |f_2(x)|}\, |f_2(x) - f_1(x)|.$$

B.1 Smoothness assumptions on both the classifier and the distribution

Our first set of assumptions is based on the smoothness of the functions and the smoothness of the distributions: there exist $M, L, L_p > 0$ such that

- for all $t$, $\mathbb{E}_{x \sim \hat{p}_t}\left[\left(\sum_{i=1}^{d} \frac{\partial^2 V(x, t)}{\partial x_i^2}\right)^2\right] \le M^2$;
- for all $t$, $\sup_x \|\nabla_x \hat{v}(x, t)\|_2,\ \sup_x \|\nabla_x v^\star(x, t)\|_2 \le L$;
- $\mathbb{E}_{x \sim p}\left[\|\nabla \log p(x)\|_2^2\right] \le L_p^2$.

We now present the following result, which relates the difference of gradients to the difference of function values:

Lemma 9. Let $F: \mathbb{R}^d \to \mathbb{R}$ and let $p(\cdot)$ be a PDF over $\mathbb{R}^d$. Suppose $\lim_{x_i \to \infty} F(x)\, \frac{\partial F(x)}{\partial x_i}\, p(x) = 0$ for all $i$ and $x_{-i}$. Then $\mathbb{E}_{x \sim p}\|\nabla F(x)\|_2^2$ is bounded by

$$\left( \sqrt{\mathbb{E}_{x \sim p}\left[\left(\sum_{i=1}^{d} \frac{\partial^2 F(x)}{\partial x_i^2}\right)^2\right]} + \sup_x \|\nabla F(x)\|_2\, \sqrt{\mathbb{E}_{x \sim p}\left[\|\nabla \log p(x)\|_2^2\right]} \right) \sqrt{\mathbb{E}_{x \sim p}\left[F^2(x)\right]}.$$

Given these, we can show a version of Assumption 6. By definition, for all $t$, $\inf_x |\hat{v}(x, t)|,\ \inf_x |v^\star(x, t)| \ge \exp(-\eta R_{\max})$. By Lemma 8 and Lemma 9, for all $t$:

$$\mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \ln \hat{v}(x, t) - \nabla_x \ln v^\star(x, t)\|_2^2\right] \le 2 e^{2\eta R_{\max}}\, \mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \hat{v}(x, t) - \nabla_x v^\star(x, t)\|_2^2\right] + 2 e^{4\eta R_{\max}} L\, \mathbb{E}_{x \sim \hat{p}_t}\left[|\hat{v}(x, t) - v^\star(x, t)|^2\right]$$
$$\le 2 e^{2\eta R_{\max}} (M + L \cdot L_p)\, \sqrt{\mathbb{E}_{x \sim \hat{p}_t}\left[|\hat{v}(x, t) - v^\star(x, t)|^2\right]} + 2 e^{4\eta R_{\max}} L\, \mathbb{E}_{x \sim \hat{p}_t}\left[|\hat{v}(x, t) - v^\star(x, t)|^2\right].$$

This changes the proof in Appendix C only slightly.

B.2 Smoothness assumption and gradient estimator

In this section, we introduce another set of conditions based on the gradient estimator introduced in Flaxman et al. (2004).
We require smoothness assumptions on the functions and an additional gradient estimator. For any function $f: \mathbb{R}^d \to \mathbb{R}$, we define the gradient estimator $\widehat{\nabla f}$ as

$$\widehat{\nabla f}(x) := \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[f(x + \delta u)\, u\right],$$

where $S^{d-1} := \{x \in \mathbb{R}^d : \|x\|_2 = 1\}$ and $\delta > 0$ is a free parameter. We use $U(S^{d-1})$ to denote the uniform distribution over $S^{d-1}$. We assume there exists $M > 0$ such that

- for all $t$, $\sup_x \|\nabla^2_x \hat{v}(x, t)\|_2,\ \sup_x \|\nabla^2_x v^\star(x, t)\|_2 \le M$.

We first present two properties of the gradient estimator.

Lemma 10. Let $f_1, f_2: \mathbb{R}^d \to \mathbb{R}$. Then

$$\|\widehat{\nabla f_1}(x) - \widehat{\nabla f_2}(x)\|_2 \le \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[|f_1(x + \delta u) - f_2(x + \delta u)|\right].$$

Lemma 11. Let $f: \mathbb{R}^d \to \mathbb{R}$ with $\sup_x \|\nabla^2 f(x)\|_2 \le M$. Then

$$\left\|\widehat{\nabla f}(x) - \nabla f(x)\right\|_2 \le \frac{d \delta M}{2}.$$

Given these, we can show that, for all $t$,

$$\mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \hat{v}(x, t) - \nabla_x v^\star(x, t)\|_2^2\right] \le 3\, \mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \hat{v} - \widehat{\nabla_x \hat{v}}\|_2^2\right] + 3\, \mathbb{E}_{x \sim \hat{p}_t}\left[\|\widehat{\nabla_x \hat{v}} - \widehat{\nabla_x v^\star}\|_2^2\right] + 3\, \mathbb{E}_{x \sim \hat{p}_t}\left[\|\widehat{\nabla_x v^\star} - \nabla_x v^\star\|_2^2\right]$$
$$\le \frac{3 d^2 \delta^2 M^2}{2} + \frac{3 d^2}{\delta^2}\, \mathbb{E}_{x \sim \hat{p}_t}\left[\left(\mathbb{E}_{u \sim U(S^{d-1})}\left[\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t)\right]\right)^2\right]$$
$$\le \frac{3 d^2 \delta^2 M^2}{2} + \frac{3 d^2}{\delta^2}\, \mathbb{E}_{x \sim \hat{p}_t}\left[\left|\mathbb{E}_{u \sim U(S^{d-1})}\left[\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t)\right]\right|\right]$$
$$\le \frac{3 d^2 \delta^2 M^2}{2} + \frac{3 d^2}{\delta^2}\, \mathbb{E}_{x \sim \hat{p}_t,\, u \sim U(S^{d-1})}\left[\left|\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t)\right|\right]$$
$$\le \frac{3 d^2 \delta^2 M^2}{2} + \frac{3 d^2}{\delta^2}\, \sqrt{\mathbb{E}_{x \sim \hat{p}_t,\, u \sim U(S^{d-1})}\left[\left(\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t)\right)^2\right]},$$

where for the third inequality we use the fact that $\hat{v}(\cdot,\cdot), v^\star(\cdot,\cdot) \in [0, 1]$. The remaining steps follow from Lemma 8, as in Section B.1:

$$\mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \ln \hat{v}(x, t) - \nabla_x \ln v^\star(x, t)\|_2^2\right] \le 2 e^{2\eta R_{\max}}\, \mathbb{E}_{x \sim \hat{p}_t}\left[\|\nabla_x \hat{v}(x, t) - \nabla_x v^\star(x, t)\|_2^2\right] + 2 e^{4\eta R_{\max}} L\, \mathbb{E}_{x \sim \hat{p}_t}\left[|\hat{v}(x, t) - v^\star(x, t)|^2\right]$$
$$\le 2 e^{2\eta R_{\max}} \left( \frac{3 d^2 \delta^2 M^2}{2} + \frac{3 d^2}{\delta^2}\, \sqrt{\mathbb{E}_{x \sim \hat{p}_t,\, u \sim U(S^{d-1})}\left[\left(\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t)\right)^2\right]} \right) + 2 e^{4\eta R_{\max}} L\, \mathbb{E}_{x \sim \hat{p}_t}\left[|\hat{v}(x, t) - v^\star(x, t)|^2\right].$$

Compared to Algorithm 1, this approach additionally requires optimizing $(\hat{v}(x + \delta u, t) - v^\star(x + \delta u, t))^2$, i.e., making sure $\hat{v}$ and $v^\star$ are close under a somewhat "wider" distribution. The value of $\delta$ is a free parameter that can be adjusted to improve the accuracy of the gradient estimator.

B.3 Proof of Lemma 8

$$\|\nabla \ln f_1(x) - \nabla \ln f_2(x)\|_2 = \left\| \frac{\nabla f_1(x)}{f_1(x)} - \frac{\nabla f_2(x)}{f_2(x)} \right\|_2 \le \left\| \frac{\nabla f_1(x)}{f_1(x)} - \frac{\nabla f_2(x)}{f_1(x)} \right\|_2 + \left\| \frac{\nabla f_2(x)}{f_1(x)} - \frac{\nabla f_2(x)}{f_2(x)} \right\|_2$$
$$= \frac{\|\nabla f_1(x) - \nabla f_2(x)\|_2}{|f_1(x)|} + \frac{1}{|f_1(x) f_2(x)|}\, \|\nabla f_2(x)\, (f_2(x) - f_1(x))\|_2 \le \frac{1}{\inf_x |f_1(x)|}\, \|\nabla f_1(x) - \nabla f_2(x)\|_2 + \frac{\sup_x \|\nabla f_2(x)\|_2}{\inf_x |f_1(x)|\, \inf_x |f_2(x)|}\, |f_2(x) - f_1(x)|.$$

B.4 Proof of Lemma 9

We write the target in componentwise form: $\|\nabla F(x)\|_2^2 = \sum_{i=1}^{d} \left(\frac{\partial F(x)}{\partial x_i}\right)^2$. Applying integration by parts to the $i$-th dimension,

$$\int \frac{\partial F(x)}{\partial x_i} \cdot \frac{\partial F(x)}{\partial x_i}\, p(x)\, dx = \int \left[ F(x)\, \frac{\partial F(x)}{\partial x_i}\, p(x) \right]_{x_i = -\infty}^{\infty} dx_{-i} - \int F(x) \left( \frac{\partial^2 F(x)}{\partial x_i^2}\, p(x) + \frac{\partial F(x)}{\partial x_i}\, \frac{\partial p(x)}{\partial x_i} \right) dx$$
$$= -\int F(x) \left( \frac{\partial^2 F(x)}{\partial x_i^2}\, p(x) + \frac{\partial F(x)}{\partial x_i}\, \frac{\partial \log p(x)}{\partial x_i}\, p(x) \right) dx.$$

Summing over all coordinates, we get:

$$\mathbb{E}_{x \sim p}\|\nabla F(x)\|_2^2 = -\mathbb{E}_{x \sim p}\left[ F(x) \sum_{i=1}^{d} \frac{\partial^2 F(x)}{\partial x_i^2} \right] - \mathbb{E}_{x \sim p}\left[ F(x)\, \langle \nabla F(x), \nabla \log p(x) \rangle \right].$$

By the Cauchy–Schwarz inequality:

$$\mathbb{E}_{x \sim p}\|\nabla F(x)\|_2^2 \le \left( \sqrt{\mathbb{E}_{x \sim p}\left[\left(\sum_{i=1}^{d} \frac{\partial^2 F(x)}{\partial x_i^2}\right)^2\right]} + \sqrt{\mathbb{E}_{x \sim p}\left[\left(\langle \nabla F(x), \nabla \log p(x) \rangle\right)^2\right]} \right) \sqrt{\mathbb{E}_{x \sim p}\left[F^2(x)\right]}$$
$$\le \left( \sqrt{\mathbb{E}_{x \sim p}\left[\left(\sum_{i=1}^{d} \frac{\partial^2 F(x)}{\partial x_i^2}\right)^2\right]} + \sup_x \|\nabla F(x)\|_2\, \sqrt{\mathbb{E}_{x \sim p}\left[\|\nabla \log p(x)\|_2^2\right]} \right) \sqrt{\mathbb{E}_{x \sim p}\left[F^2(x)\right]}.$$

B.5 Proof of Lemma 10

By Jensen's inequality,

$$\|\widehat{\nabla f_1}(x) - \widehat{\nabla f_2}(x)\|_2 = \frac{d}{\delta} \left\| \mathbb{E}_{u \sim U(S^{d-1})}\left[f_1(x + \delta u)\, u\right] - \mathbb{E}_{u \sim U(S^{d-1})}\left[f_2(x + \delta u)\, u\right] \right\|_2 = \frac{d}{\delta} \left\| \mathbb{E}_{u \sim U(S^{d-1})}\left[(f_1(x + \delta u) - f_2(x + \delta u))\, u\right] \right\|_2$$
$$\le \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ \|(f_1(x + \delta u) - f_2(x + \delta u))\, u\|_2 \right] = \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ |f_1(x + \delta u) - f_2(x + \delta u)|\, \|u\|_2 \right] = \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ |f_1(x + \delta u) - f_2(x + \delta u)| \right].$$

B.6 Proof of Lemma 11

By Taylor's theorem, for all $x, v$ there exists $\xi(x, v)$ such that

$$f(x + v) = f(x) + \langle \nabla f(x), v \rangle + \frac{1}{2}\, v^\top \nabla^2 f(\xi(x, v))\, v.$$

Then

$$\left\|\widehat{\nabla f}(x) - \nabla f(x)\right\|_2 = \left\| \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[f(x + \delta u)\, u\right] - \nabla f(x) \right\|_2 = \left\| \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ f(x)\, u + \delta\, \langle \nabla f(x), u \rangle\, u + \frac{\delta^2}{2} \left(u^\top \nabla^2 f(\xi(x, \delta u))\, u\right) u \right] - \nabla f(x) \right\|_2.$$

Because $\mathbb{E}_{u \sim U(S^{d-1})}[u] = 0$, we have $\mathbb{E}_{u \sim U(S^{d-1})}\left[f(x)\, u\right] = 0$. Because $\mathbb{E}_{u \sim U(S^{d-1})}[u u^\top] = \frac{1}{d} I$,

$$\mathbb{E}_{u \sim U(S^{d-1})}\left[\langle \nabla f(x), u \rangle\, u\right] = \mathbb{E}_{u \sim U(S^{d-1})}\left[u u^\top \nabla f(x)\right] = \mathbb{E}_{u \sim U(S^{d-1})}\left[u u^\top\right] \nabla f(x) = \frac{1}{d}\, \nabla f(x).$$
Thus, by Jensen's inequality, we have:

$$\left\|\widehat{\nabla f}(x) - \nabla f(x)\right\|_2 = \left\| \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[\frac{\delta^2}{2} \left(u^\top \nabla^2 f(\xi(x, \delta u))\, u\right) u\right] \right\|_2 \le \frac{d \delta}{2}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ \left\| \left(u^\top \nabla^2 f(\xi(x, \delta u))\, u\right) u \right\|_2 \right]$$
$$= \frac{d \delta}{2}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ \left|u^\top \nabla^2 f(\xi(x, \delta u))\, u\right| \|u\|_2 \right] = \frac{d \delta}{2}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ \left|u^\top \nabla^2 f(\xi(x, \delta u))\, u\right| \right] \le \frac{d \delta}{2}\, \mathbb{E}_{u \sim U(S^{d-1})}\left[ \|u\|_2 \left\|\nabla^2 f(\xi(x, \delta u))\, u\right\|_2 \right] \le \frac{d \delta M}{2}.$$
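A minimal numerical sketch of the sphere-sampling estimator $\widehat{\nabla f}$ defined at the start of Appendix B.2, approximating the expectation over $U(S^{d-1})$ with $K$ uniform draws (function and parameter names are ours):

```python
import numpy as np

def sphere_gradient_estimate(f, x, delta: float, K: int = 1024, rng=None):
    """Estimate (d/delta) * E_{u ~ U(S^{d-1})}[f(x + delta*u) * u] with K draws."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    u = rng.standard_normal((K, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)    # normalize: u ~ U(S^{d-1})
    fvals = np.array([f(x + delta * ui) for ui in u])  # f evaluated on the sphere
    return (d / delta) * (fvals[:, None] * u).mean(axis=0)
```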
C Proof of Main Theorem

Proof. By Assumption 4,

$$\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{\hat{R}_i(r \mid x_t, t)} \le \min_{R \in \mathcal{R}}\ \frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{R(r \mid x_t, t)} + \gamma_N \le \frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{R^{\text{prior}}(r \mid x_t, t)} + \gamma_N.$$

After rearranging, we get

$$\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{R^{\text{prior}}(r \mid x_t, t)}{\hat{R}_i(r \mid x_t, t)} \le \gamma_N.$$

According to Eq. (10), for each $(t, x_t)$, $r$ is sampled from $R^{\text{prior}}(\cdot \mid x_t, t)$. Taking expectation over $r \mid x_t, t$ for all $x_t, t$, we get:

$$\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t) \in D_i} \mathrm{KL}\left( R^{\text{prior}}(\cdot \mid x_t, t) \,\|\, \hat{R}_i(\cdot \mid x_t, t) \right) \le \gamma_N.$$

Taking expectation over $x_t$ and $t$, we get:

$$\frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{t \sim U([0,T]),\, x_t \sim P^{f_i}_{0 \to t}(\cdot \mid \mathcal{N}(0, I))} \left[ \mathrm{KL}\left( R^{\text{prior}}(\cdot \mid x_t, t) \,\|\, \hat{R}_i(\cdot \mid x_t, t) \right) \right] \le \gamma_N,$$

where $x_t$ is sampled from the SDE induced by $\hat{R}_i$. By Pinsker's inequality,

$$\frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{t \sim U([0,T]),\, x_t \sim P^{f_i}_{0 \to t}(\cdot \mid \mathcal{N}(0, I))} \left[ \mathrm{TV}\left( R^{\text{prior}}(\cdot \mid x_t, t),\, \hat{R}_i(\cdot \mid x_t, t) \right)^2 \right] \le \frac{1}{N} \sum_{i=1}^{N} \frac{1}{2}\, \mathbb{E}\left[ \mathrm{KL}\left( R^{\text{prior}}(\cdot \mid x_t, t) \,\|\, \hat{R}_i(\cdot \mid x_t, t) \right) \right] \le \frac{1}{2}\, \gamma_N.$$

This means there exists $\hat{n} \in \{1, \dots, N\}$ such that

$$\mathbb{E}_{t \sim U([0,T]),\, x_t \sim P^{f_{\hat{n}}}_{0 \to t}(\cdot \mid \mathcal{N}(0, I))} \left[ \mathrm{TV}\left( R^{\text{prior}}(\cdot \mid x_t, t),\, \hat{R}_{\hat{n}}(\cdot \mid x_t, t) \right)^2 \right] \le \frac{1}{2}\, \gamma_N.$$

Let

$$\hat{f}(x_t, t) := f_{\hat{n}}(x_t, t) = \nabla_{x_t} \ln \mathbb{E}_{r \sim \hat{R}_{\hat{n}}(\cdot \mid x_t, t)}\left[e^{\eta r}\right], \qquad f^\star(x_t, t) := \nabla_{x_t} \ln p(y = 1 \mid x_t) = \nabla_{x_t} \ln \mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x_t, t)}\left[e^{\eta r}\right],$$

and $\hat{p}_t := P^{\hat{f}}_{0 \to t}(\cdot \mid \mathcal{N}(0, I))$, $p_t := P^{f^\star}_{0 \to t}(\cdot \mid q_T)$; recall that $q_T$ is the marginal distribution of the forward process at time $T$. By Lemma 13,

$$\frac{\partial}{\partial t} \mathrm{KL}(\hat{p}_t \,\|\, p_t) = -g^2(T - t)\, \mathbb{E}_{x \sim \hat{p}_t}\left[ \left\| \nabla \log \frac{\hat{p}_t(x)}{p_t(x)} \right\|_2^2 \right] + \mathbb{E}_{x \sim \hat{p}_t}\left[ \left\langle g^2(T - t) \left(\hat{f}(x, t) - f^\star(x, t)\right),\, \nabla \log \frac{\hat{p}_t(x)}{p_t(x)} \right\rangle \right] \le \frac{1}{4}\, g^2(T - t)\, \mathbb{E}_{x \sim \hat{p}_t}\left[ \left\|\hat{f}(x, t) - f^\star(x, t)\right\|_2^2 \right].$$

Integrating over $t \in [0, T]$, we get:

$$\mathrm{KL}(\hat{p}_T \,\|\, p_T) \le \mathrm{KL}(\hat{p}_0 \,\|\, p_0) + \frac{1}{4} \int_0^T g^2(T - t)\, \mathbb{E}_{x \sim \hat{p}_t}\left[ \left\|\hat{f}(x, t) - f^\star(x, t)\right\|_2^2 \right] dt,$$

where $\hat{p}_0 = \mathcal{N}(0, I)$ and $p_0 = q_T$. By Assumption 5, $\mathrm{KL}(\hat{p}_0 \,\|\, p_0) \le \epsilon_T$. By Assumption 6,

$$\int_0^T g^2(T - t)\, \mathbb{E}_{x \sim \hat{p}_t}\left[ \left\|\hat{f}(x, t) - f^\star(x, t)\right\|_2^2 \right] dt \le T\, \|g\|_\infty^2\, \mathbb{E}_{t \sim U([0,T]),\, x \sim \hat{p}_t}\left[ \left\|\hat{f}(x, t) - f^\star(x, t)\right\|_2^2 \right]$$
$$\le T\, \|g\|_\infty^2\, L^2\, \mathbb{E}_{t \sim U([0,T]),\, x \sim \hat{p}_t}\left[ \left| \mathbb{E}_{r \sim \hat{R}_{\hat{n}}(\cdot \mid x_t, t)}\, e^{\eta r} - \mathbb{E}_{r \sim R^{\text{prior}}(\cdot \mid x_t, t)}\, e^{\eta r} \right|^2 \right] \le T\, \|g\|_\infty^2\, L^2\, \mathbb{E}_{t \sim U([0,T]),\, x \sim \hat{p}_t}\left[ \mathrm{TV}\left(\hat{R}_{\hat{n}}(\cdot \mid x_t, t),\, R^{\text{prior}}(\cdot \mid x_t, t)\right)^2 \right] \le \frac{1}{2}\, T\, \|g\|_\infty^2\, L^2\, \gamma_N.$$

To conclude:

$$\mathrm{KL}(\hat{p}_T \,\|\, p_T) \le \epsilon_T + \frac{1}{2}\, T\, \|g\|_\infty^2\, L^2\, \gamma_N.$$

D Technical Lemmas

The following lemma shows that the finite-dimensional distributions of the forward and reverse SDEs are the same.

Lemma 12 (Section 5 of Anderson (1982)). Let $\{\bar{x}_\tau\}_\tau$ be the process generated by the forward SDE in Eq. (1), and $\{x_t\}_t$ the process generated by the reverse SDE in Eq. (2). Then for all $s > t$, the conditional distributions $\bar{x}_t \mid \bar{x}_s$ and $x_{T-t} \mid x_{T-s}$ have the same density, i.e., for all $x, x' \in \mathbb{R}^d$,

$$\Pr\left[\bar{x}_t = x' \mid \bar{x}_s = x\right] = \Pr\left[x_{T-t} = x' \mid x_{T-s} = x\right].$$

This result is proved by considering the backward Kolmogorov equation of the forward process. The proof is presented in Section 5 of Anderson (1982); for completeness, we include it in Section D.1. The result implies that the reverse and forward SDEs have the same "joint distribution" (in the sense of finite-dimensional distributions). This can be proved in two steps:

1. use the Fokker–Planck equation to show that the forward and reverse SDEs have the same marginal distribution given proper initial conditions;
2. write the finite-dimensional distribution as the product of a sequence of conditional distributions and the initial marginal distribution by applying the Markov property iteratively.

The following lemma upper-bounds the KL divergence between the marginal distributions of two SDEs in terms of the difference of their drift terms:

Lemma 13 (Lemma 6 of Chen et al. (2023)). Consider the two Itô processes

$$dX_t = F_1(X_t, t)\, dt + g(t)\, dw_t, \quad X_0 = a; \qquad dY_t = F_2(Y_t, t)\, dt + g(t)\, dw_t, \quad Y_0 = a,$$

where $F_1, F_2, g$ are continuous functions and may depend on $a$. We assume the uniqueness and regularity conditions hold:

1. The two SDEs have unique solutions.
2. $X_t, Y_t$ admit densities $p^1_t, p^2_t \in C^2(\mathbb{R}^d)$ for $t > 0$.
Define the Fisher information between $p_t, q_t$ as

$$J(p_t \,\|\, q_t) := \int p_t(x) \left\| \nabla \log \frac{p_t(x)}{q_t(x)} \right\|^2 dx.$$

Then for any $t > 0$, the evolution of $\mathrm{KL}(p^1_t \,\|\, p^2_t)$ is given by

$$\frac{\partial}{\partial t} \mathrm{KL}(p^1_t \,\|\, p^2_t) = -g(t)^2\, J(p^1_t \,\|\, p^2_t) + \mathbb{E}_{x \sim p^1_t}\left[ \left\langle F_1(x, t) - F_2(x, t),\, \nabla \log \frac{p^1_t(x)}{p^2_t(x)} \right\rangle \right].$$

D.1 Proof of Lemma 12

Let $q(\bar{x}_s, s \mid \bar{x}_t, t)$ be the conditional distribution of Eq. (1), where $s \ge t$ is the time index, and let $q(\bar{x}_t, t)$ be the marginal distribution at time $t$. By the backward Kolmogorov equation:

$$\frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial t} = -\sum_i h_i(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t} - \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2}. \tag{13}$$

By the Fokker–Planck equation, the marginal distribution $q(\bar{x}_t, t)$ satisfies:

$$\frac{\partial q(\bar{x}_t, t)}{\partial t} = -\sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ q(\bar{x}_t, t)\, h_i(\bar{x}_t, t) \right] + \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_t, t)}{\partial (\bar{x}^i_t)^2}. \tag{14}$$

Because the joint distribution satisfies $q(\bar{x}_s, s, \bar{x}_t, t) = q(\bar{x}_s, s \mid \bar{x}_t, t)\, q(\bar{x}_t, t)$, we have

$$\frac{\partial q(\bar{x}_s, s, \bar{x}_t, t)}{\partial t} = q(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial t} + q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial q(\bar{x}_t, t)}{\partial t}.$$

Plugging in Eq. (13) and Eq. (14), we have:

$$\frac{\partial q(\bar{x}_s, s, \bar{x}_t, t)}{\partial t} = q(\bar{x}_t, t) \left( -\sum_i h_i(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t} - \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} \right) + q(\bar{x}_s, s \mid \bar{x}_t, t) \left( -\sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ q(\bar{x}_t, t)\, h_i(\bar{x}_t, t) \right] + \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} \right).$$

For the first and third terms:

$$-q(\bar{x}_t, t) \sum_i h_i(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t} - q(\bar{x}_s, s \mid \bar{x}_t, t) \sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ q(\bar{x}_t, t)\, h_i(\bar{x}_t, t) \right]$$
$$= -\sum_i h_i(\bar{x}_t, t) \left( q(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t} + q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial q(\bar{x}_t, t)}{\partial \bar{x}^i_t} \right) - q(\bar{x}_s, s, \bar{x}_t, t) \sum_i \frac{\partial h_i(\bar{x}_t, t)}{\partial \bar{x}^i_t}$$
$$= -\sum_i h_i(\bar{x}_t, t)\, \frac{\partial q(\bar{x}_s, s, \bar{x}_t, t)}{\partial \bar{x}^i_t} - q(\bar{x}_s, s, \bar{x}_t, t) \sum_i \frac{\partial h_i(\bar{x}_t, t)}{\partial \bar{x}^i_t} = -\sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ h_i(\bar{x}_t, t)\, q(\bar{x}_s, s, \bar{x}_t, t) \right].$$

For the second and fourth terms:

$$\frac{1}{2}\, g^2(t) \sum_i \left( -q(\bar{x}_t, t)\, \frac{\partial^2 q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} + q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial^2 q(\bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} \right)$$
$$= -\frac{1}{2}\, g^2(t) \sum_i \left( q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial^2 q(\bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} + 2\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t}\, \frac{\partial q(\bar{x}_t, t)}{\partial \bar{x}^i_t} + q(\bar{x}_t, t)\, \frac{\partial^2 q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} \right) + \frac{1}{2}\, g^2(t) \sum_i \left( 2\, q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial^2 q(\bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} + 2\, \frac{\partial q(\bar{x}_s, s \mid \bar{x}_t, t)}{\partial \bar{x}^i_t}\, \frac{\partial q(\bar{x}_t, t)}{\partial \bar{x}^i_t} \right)$$
$$= -\frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_s, s, \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2} + g^2(t) \sum_i \frac{\partial}{\partial \bar{x}^i_t} \left( q(\bar{x}_s, s \mid \bar{x}_t, t)\, \frac{\partial q(\bar{x}_t, t)}{\partial \bar{x}^i_t} \right).$$

To summarize, the joint distribution satisfies the following PDE:

$$\frac{\partial q(\bar{x}_s, s, \bar{x}_t, t)}{\partial t} = -\sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ q(\bar{x}_s, s, \bar{x}_t, t)\, \bar{h}_i(\bar{x}_t, t) \right] - \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_s, s, \bar{x}_t, t)}{\partial (\bar{x}^i_t)^2},$$

where $\bar{h}_i(\bar{x}_t, t)$ is defined as

$$\bar{h}_i(\bar{x}_t, t) := h_i(\bar{x}_t, t) - \frac{g^2(t)}{q(\bar{x}_t, t)}\, \frac{\partial}{\partial \bar{x}^i_t} q(\bar{x}_t, t).$$

Dividing both sides by $q(\bar{x}_s, s)$, we get the following PDE for the conditional distribution $q(\bar{x}_t, t \mid \bar{x}_s, s)$ (note that this conditions on the future):

$$\frac{\partial q(\bar{x}_t, t \mid \bar{x}_s, s)}{\partial t} = -\sum_i \frac{\partial}{\partial \bar{x}^i_t} \left[ q(\bar{x}_t, t \mid \bar{x}_s, s)\, \bar{h}_i(\bar{x}_t, t) \right] - \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 q(\bar{x}_t, t \mid \bar{x}_s, s)}{\partial (\bar{x}^i_t)^2}. \tag{15}$$

Recall that the reverse process is defined by

$$dx = \left[ -h(x, T - t) + g^2(T - t)\, \nabla \log q_{T-t}(x) \right] dt + g(T - t)\, dw, \qquad t \in [0, T].$$

Its Fokker–Planck equation is given by:

$$\frac{\partial p(x_t, t)}{\partial t} = -\sum_i \frac{\partial}{\partial x^i_t} \left[ -p(x_t, t)\, \bar{h}_i(x_t, T - t) \right] + \frac{1}{2}\, g^2(T - t) \sum_i \frac{\partial^2 p(x_t, t)}{\partial (x^i_t)^2}.$$

We substitute $t$ with $T - t$ and get:

$$-\frac{\partial p(x_{T-t}, T - t)}{\partial t} = \sum_i \frac{\partial}{\partial x^i_{T-t}} \left[ p(x_{T-t}, T - t)\, \bar{h}_i(x_{T-t}, t) \right] + \frac{1}{2}\, g^2(t) \sum_i \frac{\partial^2 p(x_{T-t}, T - t)}{\partial (x^i_{T-t})^2}.$$

This PDE is identical to Eq. (15). Choosing the proper initial condition finishes the proof.
E Additional Details of Training and Evaluation

We provide more information about the experimental setup and then cover details of the training and evaluation of SLCD. The experiments are split into two parts: image tasks (image compression and aesthetic evaluation) and biological sequence tasks (5' untranslated regions and DNA enhancer sequences). All experiments were conducted on a single NVIDIA A6000 GPU. Our classifier network is constructed to predict a histogram over the reward space and is trained with a cross-entropy loss. When training the classifier for the first iteration of DAgger (without any guidance), we label all states in the trajectory with the reward of the final state. For subsequent iterations, we only use states that are part of the roll-out (i.e., the states in the trajectory computed solely with the prior, after the roll-in section, which uses the latest classifier). To apply classifier guidance, we compute the empirical expectation under the classifier: for $B$ buckets $(r_1, \dots, r_B)$ and predictor $\hat{R}_n(\cdot \mid x_t, t)$, we compute

$$f_n(x_t, t) = \nabla_{x_t} \ln \left( \sum_{i=1}^{B} \hat{R}_n(r_i \mid x_t, t)\, \exp(\eta \cdot r_i) \right).$$

Because the gradient is invariant to the reward scale, we set the $r_i$ to be $B$ equally spaced points partitioning $[0, 1]$.
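A minimal PyTorch sketch of this guidance computation, assuming a hypothetical `classifier` module that maps $(x_t, t)$ to logits over the $B$ reward bins (names and interface are ours):

```python
import torch

def guidance_score(classifier, xt, t, bin_rewards, eta: float):
    """f_n(x_t, t) = grad_x ln sum_i R_hat(r_i | x_t, t) * exp(eta * r_i)."""
    xt = xt.detach().requires_grad_(True)
    probs = classifier(xt, t).softmax(dim=-1)          # histogram R_hat(. | x_t, t)
    v = (probs * torch.exp(eta * bin_rewards)).sum(dim=-1)
    return torch.autograd.grad(v.log().sum(), xt)[0]   # gradient w.r.t. x_t
```

Because η enters only at this evaluation step, the same trained `classifier` can be reused with different η values at test time, matching the η-retuning property discussed in Section 4.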
E.1 Experiment Task Details

E.1.1 Comparison Tasks

Image Compression. The reward is the negative file size of the generated image, a non-differentiable compression score. We use Stable Diffusion v1.5 (Rombach et al., 2022) as the base model for this continuous-time diffusion task.

Image Aesthetics. Here the objective is to maximize the LAION aesthetic score (Schuhmann, 2022), obtained from a CLIP encoder followed by an MLP trained on human 1-to-10 ratings. This benchmark is standard in image-generation studies (Black et al., 2024; Domingo-Enrich et al., 2025; Li et al., 2024; Uehara et al., 2024a,b). We again employ Stable Diffusion v1.5 as the base model.

5' Untranslated Regions (5'UTR) and DNA Enhancers. For sequence generation, we aim to maximize the mean ribosome load (MRL) of 5'UTRs measured via polysome profiling (Sample et al., 2019). Following Li et al. (2024), the base model is that of Sahoo et al. (2024) trained on Sample et al. (2019). Likewise, using the same base model, we optimize enhancer sequences according to expression levels predicted by the Enformer model (Avsec et al., 2021) in the HepG2 cell line.

E.1.2 Comparison Methods

Best-of-N. We draw N independent samples from the base diffusion model and retain the single sample with the highest reward.

DPS. Diffusion Posterior Sampling (DPS) is a training-free variant of classifier guidance originally proposed for continuous diffusion models (Chung et al., 2023) and subsequently adapted to discrete diffusion (Li et al., 2024). We use the state-of-the-art implementation of Nisonoff et al. (2025).

SMC. Sequential Monte Carlo (SMC) methods (Del Moral & Doucet, 2014; Wu et al., 2023; Trippe et al., 2022) are a class of methods that use importance sampling over a number of rollouts and select the best sample. Note, however, that SMC-based methods do this across the entire batch, not at a per-sample level.

SVDD-MC. SVDD-MC (Li et al., 2024) evaluates the expected reward of N candidates from the base model under an estimated value function and selects the candidate with the highest predicted return.

SVDD-PM. SVDD-PM (Li et al., 2024) is similar to SVDD-MC except that it uses the true reward for each candidate instead of relying on value-function estimates.

E.2 Image Task Details (Image Compression and Aesthetic)

For both image tasks, we use a lightweight classifier network, with model checkpoints sized at 4.93 MB (compression) and 10.60 MB (aesthetic). The prompts are generated under the configuration of SVDD (Li et al., 2024). In the image compression task, we generate 1,400 images per iteration and train for one epoch; the results are reported using the checkpoint from the 4th iteration. For the aesthetic evaluation task, we generate 10,500 images in the first iteration, followed by 1,400 images in each subsequent iteration, and train for 6 epochs; the results are reported using the checkpoint from the 8th iteration. For the compression task, we adopt the same architecture as Li et al. (2024). For the aesthetic task, we add five additional residual layers to account for its increased complexity. Notably, our classifier takes the latent representation as direct input, without reusing the VAE (as in SVDD for compression), CLIP (SVDD for aesthetics), or the U-Net (SVDD-PM for estimating $\hat{x}_0$). This results in a significantly more lightweight design, both in network size (approximately 10 MB compared to over 4 GB) and in runtime (Fig. 6).

E.2.1 Hyperparameters

For the image tasks, the following hyperparameters are used:

Table 2: Image task hyperparameters. Where two values are given, the first corresponds to the compression task and the second to the aesthetic task.

| Hyperparameter | Value |
|---|---|
| Seed | 43 |
| Learning rate | 1×10⁻⁴ |
| Optimizer betas | (0.9, 0.999) |
| Weight decay | 0.0 |
| Gradient accumulation steps | 1 |
| Batch size (classifier) | 8 |
| Batch size (inference) | 1 |
| Guidance scale | 150 / 75 |
| Train iterations (per round) | 1 / 6 |
| Eval interval (epochs) | 1 |

E.3 Sequence Task Details (5'UTR and DNA Enhancer)

Following Li et al. (2024), for the enhancer task we use an Enformer model (Avsec et al., 2021), and for the 5'UTR task we use a ConvGRU model (Dey & Salem, 2017). We base our classifier-guidance implementation on that of Li et al. (2024). Both tasks are discrete diffusion problems and use the masking setup of Sahoo et al. (2024), following Li et al. (2024). In order not to be bound to a specific η, we use the gradient of the empirical mean when computing the samples for the next iteration, with a guidance scale of 10; we can then change η at test time as shown in the paper. Due to the more lightweight nature of these tasks, we ran 8 iterations of training for the DNA enhancer task and, likewise, 7 iterations for the 5'UTR task. Unlike the image tasks, we do not reinitialize the classifier network for each iteration.
For the DNA enhancer and 5' UTR tasks, the following hyperparameters are used.

Table 3: DNA Enhancer and 5' UTR Task Hyperparameters

Hyperparameter                  Value
Seed                            43
Learning rate                   1e-4
Optimizer betas                 (0.9, 0.95)
Weight decay                    0.01
Gradient accumulation steps     4
Batch size (classifier)         5
Batch size (inference)          20
Guidance scale                  10
Train iterations (per round)    200
Initial train iterations        600
Classifier epochs               1
Eval interval (steps)           20

F More Image Samples

Fig. 7 provides further results for the image compression task using prompts that were not seen during training. The consistent performance across these novel inputs demonstrates the generalization capability of SLCD. Fig. 8 presents more images generated by SLCD on the image compression task with different values of the parameter η. Each image is shown with its corresponding reward, demonstrating how varying η affects the trade-off between compression efficiency and visual quality. Fig. 9 shows additional samples from the image aesthetic task, also generated with varying η values. These examples highlight SLCD's ability to optimize for aesthetic quality under different configurations.

Figure 7: Additional images generated by SLCD with varying η values and their corresponding rewards on the image compression task. The prompts used were not seen during training, demonstrating the generalization capability of our method.

Figure 8: More images generated by SLCD with varying η values and their rewards on the image compression task.

Figure 9: More images generated by SLCD with varying η values and their rewards on the image aesthetic task.
|
https://arxiv.org/abs/2505.21666v1
|
R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning

Yongchao Chen (MIT / Harvard, yongchaochen@fas.harvard.edu), Yueying Liu (University of Illinois Urbana-Champaign, yl136@illinois.edu), Junwei Zhou (University of Michigan, zhoujw@umich.edu), Yilun Hao (MIT, yilunhao@mit.edu), Jingquan Wang (University of Wisconsin–Madison, jwang2373@wisc.edu), Yang Zhang (MIT-IBM Watson AI Lab, Yang.Zhang2@ibm.com), Chuchu Fan (MIT, chuchu@mit.edu)

Abstract

Despite advances in reasoning and planning of R1-like models, Large Language Models (LLMs) still struggle with tasks requiring precise computation, symbolic manipulation, optimization, and algorithmic reasoning, in which textual reasoning lacks the rigor of code execution. A key challenge is enabling LLMs to decide when to use textual reasoning versus code generation. While OpenAI trains models to invoke a Code Interpreter as needed, public research lacks guidance on aligning pre-trained LLMs to effectively leverage code and generalize across diverse tasks. We present R1-Code-Interpreter, an extension of a text-only LLM trained via multi-turn supervised fine-tuning (SFT) and reinforcement learning (RL) to autonomously generate multiple code queries during step-by-step reasoning. We curate 144 reasoning and planning tasks (107 for training, 37 for testing), each with over 200 diverse questions. We fine-tune Qwen-2.5 models (3B/7B/14B) using various SFT and RL strategies, investigating different answer formats, reasoning vs. non-reasoning models, cold vs. warm starts, GRPO vs. PPO, and masked vs. unmasked code outputs. Unlike prior RL work on narrow domains, we find that Code Interpreter training is significantly harder due to high task diversity and expensive code execution, highlighting the critical role of the SFT stage. Our final model, R1-CI-14B, improves average accuracy on the 37 test tasks from 44.0% to 64.1%, outperforming GPT-4o (text-only: 58.6%) and approaching GPT-4o with Code Interpreter (70.9%), with emergent self-checking behavior via code generation. Datasets, codes, and models are available at https://github.com/yongchao98/R1-Code-Interpreter and https://huggingface.co/yongchao98.

1 Introduction

While reinforcement learning (RL)-based fine-tuning has significantly improved LLMs' reasoning and planning [Wang et al., 2024, Guo et al., 2025, Jaech et al., 2024], models still struggle with seemingly simple tasks [Chen et al., 2025] and incur high token costs during inference-time search [Chen et al., 2024a]. Notably, many benchmark tasks, such as Blocksworld [Valmeekam et al., 2024] and Game 24 [Zhou et al., 2023a], are easily solvable via code. Textual reasoning excels at semantics and commonsense, but falls short in precise computation, symbolic manipulation, and algorithmic processing [Valmeekam et al., 2022]. In contrast, symbolic code generation handles these rigorously and benefits from external tools (e.g., equation solvers).

Figure 1: Training Code Interpreter-augmented reasoning models with supervised learning and GRPO on 144 tasks in reasoning and planning domains. (a) Our best model, R1-CI-14B, surpasses GPT-4o (text-only) and approaches GPT-4o with Code Interpreter. (b) We propose a modified GRPO framework that integrates Code Interpreter for output generation.
Prompting LLMs to generate and execute code often outperforms pure textual reasoning [Madaan et al., 2022, Liang et al., 2022, Chen et al., 2022].
A key challenge is guiding LLMs to decide when to rely on textual reasoning versus programmatic solutions, given that most input questions lack explicit cues about which approach is best and the possible text/code solution space is large. OpenAI's GPT models address this by incorporating a Code Interpreter, allowing iterative code generation and reasoning over outputs [Achiam et al., 2023]. However, recent work [Chen et al., 2024b] shows that current Code Interpreter implementations struggle to effectively steer between text and code, underutilizing symbolic capabilities. Moreover, public research lacks a comprehensive understanding of how to fine-tune LLMs to integrate with a Code Interpreter for robust, generalizable performance. Whether training strategies used in models like DeepSeek R1 can enable such integration remains an open question.

To tackle these challenges, we present R1-Code-Interpreter, a framework for integrating Code Interpreter capabilities into open-source LLMs. We curate 144 reasoning and planning tasks and synthesize 6.5k multi-turn text/code trajectories for supervised fine-tuning (SFT), followed by Group Relative Policy Optimization (GRPO) [Shao et al., 2024]. The resulting model, R1-CI-14B, effectively combines symbolic code execution with textual reasoning, solving tasks through iterative "execute-and-explore" interactions before producing final answers. We further explore various training strategies and uncover several notable phenomena. Our main contributions and findings are:

1) Broad domains: We curate 144 reasoning and planning tasks (107 for training, 37 for testing), each with over 200 samples of varying difficulty. All tasks are standardized into a unified format to enable efficient rollout and automated correctness evaluation. They cover diverse reasoning skills, including mathematical, spatial, logical, ordinal, optimization, and search-based reasoning. Our analysis highlights key challenges in leveraging the Code Interpreter, especially due to varying task characteristics that affect whether code, text, or a mixture of both is most effective.

2) R1-Code-Interpreter: the first SFT + RL framework that teaches reasoning LLMs to leverage a Code Interpreter across diverse tasks. We synthesize 6.5k multi-turn trajectories with interleaved reasoning and code execution for SFT, followed by RL for further optimization. We fine-tune Qwen-2.5 [Qwen et al., 2025] models (3B/7B/14B), achieving average success rate improvements of 28.8% on the 107 train tasks and 27.2% on the 37 test tasks. As shown in Figure 1, our best model, R1-CI-14B, raises accuracy from 44.0% to 64.1%, outperforming GPT-4o (text-only: 58.6%) and approaching GPT-4o with Code Interpreter (70.9%). Notably, during training the model exhibits emergent self-checking behavior via code generation. The trained model also incurs lower inference cost and makes better decisions about when to use code versus text.

3) Extensive experiments comparing training strategies: We conduct systematic experiments to investigate training strategies, leading to several key insights: 1) Pre-trained LLMs naturally generate Python code; no additional output templating is needed. 2) Warm-starting with SFT significantly improves integration of the Code Interpreter compared to cold-start RL without the initial SFT stage, especially for generalization across tasks and domains. 3) Surprisingly, initializing with reasoning-focused models (e.g., R1-distilled) degrades performance and generalization, suggesting that excessive textual reasoning training can impair inherent coding ability. 4) GRPO is better than PPO in training the Code Interpreter across different model sizes and numbers of task types.
5) Masking code execution outputs during training improves stability in multi-turn RL involving interleaved reasoning and execution.

4) RL limitation analysis: By tuning the number of training tasks, we analyze the limitations of RL in training and optimizing the Code Interpreter.
Unlike prior work focused on narrow domains such as math or retrieval, we find that RL for a general-purpose Code Interpreter is substantially more challenging, as it must learn a policy effective across 144 diverse tasks with varying characteristics.

2 Task benchmark

Challenges in 144 Reasoning and Planning Tasks We compile 144 tasks from three major reasoning and planning benchmarks: 33 from SymBench [Chen et al., 2025], 27 from Big-Bench-Hard [Suzgun et al., 2022], and 84 from Reasoning-Gym (https://github.com/open-thought/reasoning-gym). After removing near-duplicates, each task retains over 200 diverse samples. All tasks are standardized into a unified format and evaluated using rule-based criteria (e.g., exact match or constraint checks) for efficient rollout and testing. The tasks cover diverse reasoning and planning challenges for LLM evaluation. Detailed task descriptions are in Appendix Sec. D, and their associated capability categories (logic, spatial, order, optimization, search, and math) are summarized in Table 4.

For tasks involving computation, logic, symbolic manipulation, optimization, spatial reasoning, or constrained planning, code-based symbolic computing often outperforms pure text reasoning. However, recent work [Chen et al., 2024b] highlights key challenges in steering LLMs to choose effectively between code and text: (1) the better mode (code vs. text) depends on task type, complexity, and model capabilities, and is often hard to predict; (2) LLM-generated code frequently degenerates into hard-coded, text-like scripts, limiting its symbolic utility [Yang et al., 2024].

3 R1-Code-Interpreter

Figure 2 shows a representative example of R1-Code-Interpreter in action. The model iteratively reasons, optionally generates code for execution, and refines its reasoning based on the results, continuing this process until the final answer is produced. The Code Interpreter is invoked only when deemed beneficial; otherwise, the model relies on pure textual reasoning. The rollout process follows an interleaved, multi-turn framework, where the LLM alternates between generating text and issuing external code queries. The system instruction directs the model to enclose code between the natural tokens '``` python' and '```' when execution is needed. Upon detecting a code block, the system extracts and executes it via the Code Interpreter, then appends the result, prefixed with the special token 'Code Execution Results:', to the ongoing generation. This loop continues until either (1) the maximum of 5 code calls is reached, or (2) the model emits a final answer enclosed between '<<<' and '>>>'. For the training process, we first synthesize 6.5k multi-turn text/code trajectories for SFT by prompting GPT-4o with pre-defined formats. We then apply GRPO for further optimization, masking code execution outputs during loss and gradient computation to account for the multi-turn structure.
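A minimal sketch of this rollout loop follows. It is illustrative rather than the released code: `generate` stands in for any LLM call that stops after emitting a code block or an answer, and the sandboxed runner, stop markers, 5-call cap, and the 60-second timeout used in the experiments follow the description above.

```python
import re
import subprocess

MAX_CODE_CALLS = 5      # at most 5 code calls per question
TIMEOUT_SECONDS = 60    # per-script execution timeout

CODE_BLOCK = re.compile(r"```python(.*?)```", re.DOTALL)
FINAL_ANSWER = re.compile(r"<<<(.*?)>>>", re.DOTALL)

def run_code(script: str) -> str:
    """Execute one code block in a subprocess and capture its output."""
    try:
        proc = subprocess.run(["python", "-c", script], capture_output=True,
                              text=True, timeout=TIMEOUT_SECONDS)
        return proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return f"TimeoutExpired: command timed out after {TIMEOUT_SECONDS} seconds."

def rollout(generate, prompt: str) -> str:
    """Alternate LLM generation and code execution until a final answer."""
    context = prompt
    for _ in range(MAX_CODE_CALLS + 1):
        turn = generate(context)                 # hypothetical LLM call
        context += turn
        answer = FINAL_ANSWER.search(turn)
        if answer:                               # model emitted <<< ... >>>
            return answer.group(1).strip()
        block = CODE_BLOCK.search(turn)
        if block is None:                        # neither code nor answer
            break
        result = run_code(block.group(1))
        context += "\nCode Execution Results: " + result + "\n"
    return ""                                    # no final answer produced
```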
Response format To train R1-Code-Interpreter, we begin by designing a simple head prompt that guides the initial LLM to follow our predefined structure. As shown in Table 1, the prompt organizes the output into three iterative parts: reasoning, optional Code Interpreter invocation, and the final answer. We avoid imposing content-specific constraints, such as enforcing reflective reasoning or code calls, to preserve the model's natural learning dynamics during RL. Unlike prior work that enforces section tags like '<think>', '<answer>', or '<search>' [Guo et al., 2025, Jin et al., 2025, Zhang et al., 2025], we rely solely on the final answer marker '<<< answer content >>>' for answer extraction. For code, we leverage the LLM's pretrained behavior of naturally starting code blocks with '``` python', which serves as implicit tagging. Our initial tests show this natural format performs better than forced tagging, as it aligns more closely with the model's original distribution.

Figure 2: Example response of R1-CI-14B in the Blocksworld task. (In the example, the model first writes a DFS-based search script, receives a timeout from the Code Interpreter, switches to an A* search that produces a move sequence, then generates verification code that confirms the solution before emitting the final answer between '<<<' and '>>>'.)

Table 1: Head prompt for R1-Code-Interpreter.

The User asks a question, and you solve it. You first generate the reasoning and thinking process and then provide the User with the final answer. During the thinking process, **you can generate python code** for efficient searching, optimization, and computing with the format of starting the python block with ``` python. **A code query must involve only a single script that uses 'print' function for the output.** Once the code script is complete, stop the generation. Then, the code interpreter platform will execute the code and return the execution output and error. Once you feel you are ready for the final answer, directly return the answer with the format <<< answer content >>> at the end of your response. Otherwise, you can continue your reasoning process and possibly generate more code query to solve the problem.
Non-reasoning Models as Base While some open-source models, especially DeepSeek-distilled variants [Guo et al., 2025], are strong in long-chain textual reasoning, initializing from them can hurt performance. Our experiments show that starting from a general model like Qwen-2.5 yields better results, as overly specialized reasoning models often lose coding ability. Further analysis is provided in Sec. 5.

Warm Starts Recent studies suggest that RL training without extensive SFT can match or even outperform traditional pipelines [Guo et al., 2025, Jin et al., 2025, Wei et al., 2025, Chu et al., 2025]. However, we find SFT to be essential for Code Interpreter training, significantly boosting model capability. See Sec. 5 for details.

3.1 Dataset synthesis of multi-turn SFT

We fine-tune R1-Code-Interpreter using SFT and GRPO on a subset of the 144 tasks. We randomly select 107 tasks for training: 26 from SymBench, 20 from Big-Bench-Hard, and 61 from Reasoning-Gym, ensuring no sample overlaps with the test set. The remaining 37 tasks are used for evaluation. To generate SFT supervision, we prompt GPT-4o to produce multiple reasoning/execution trajectories per task and retain only those yielding correct answers. To enhance diversity and adaptability, we use varied prompt formats: some allow free-form reasoning, such as the prompt in Table 1, while others enforce transitions between text and code. Each trajectory is limited to 5 code execution rounds, and each task includes up to 70 valid trajectories, resulting in a final dataset of 6.5k high-quality samples.

3.2 Multi-turn RL with Code Interpreter

As shown in Figure 1b, we formulate our RL objective with a Code Interpreter $\mathcal{C}$ as:

$$\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x; \mathcal{C})}\left[ r_\phi(x, y) \right] - \beta\, D_{\mathrm{KL}}\!\left[ \pi_\theta(y \mid x; \mathcal{C}) \,\|\, \pi_{\mathrm{ref}}(y \mid x; \mathcal{C}) \right], \tag{1}$$

where $\pi_\theta$ is the policy LLM, $\pi_{\mathrm{ref}}$ is the reference model, $r_\phi$ is the reward, and $D_{\mathrm{KL}}$ is the KL divergence [Shlens, 2014]. Unlike prior work [Guo et al., 2025] that samples from $\pi_\theta(\cdot \mid x)$, our policy $\pi_\theta(\cdot \mid x; \mathcal{C})$ integrates external code execution, enabling hybrid reasoning. This design enables more effective decision-making for reasoning with code execution. Our method builds on two established policy gradient algorithms, Proximal Policy Optimization (PPO) [Schulman et al., 2017, Ouyang et al., 2022] and Group Relative Policy Optimization (GRPO) [Shao et al., 2024, Guo et al., 2025], to optimize this code-enhanced reasoning process. In the remainder of this paper, we primarily focus on GRPO-based training, which we find outperforms PPO in our experiments. A comparative analysis is provided in Section 5.

Loss masking for code execution results In both PPO and GRPO, token-level loss is typically computed over the entire rollout. However, each rollout in R1-Code-Interpreter includes both LLM-generated tokens and code execution results. Optimizing the latter can lead to undesirable behavior, where the model tries to predict execution outputs itself. To prevent this, we mask code execution tokens and compute the policy gradient only over LLM-generated tokens. Based on our experiments, this approach stabilizes training and aligns with findings in retrieval-based RL [Jin et al., 2025].
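As an illustration, the mask can be represented as a per-token 0/1 tensor built from the spans that the Code Interpreter appended during rollout. This is a sketch under the assumption that those spans are recorded at rollout time; the concrete mechanics in a given RL framework may differ.

```python
import torch

def build_loss_mask(seq_len, execution_spans):
    """execution_spans: list of (start, end) token-index ranges that were
    produced by the Code Interpreter rather than by the LLM (assumed to be
    tracked during rollout). Returns a mask that is 1 for LLM-generated
    tokens and 0 for code-execution tokens."""
    mask = torch.ones(seq_len)
    for start, end in execution_spans:
        mask[start:end] = 0.0
    return mask

def masked_mean(per_token_loss, mask):
    """Average the policy-gradient (or KL) terms over unmasked tokens only,
    so the model is never trained to predict execution outputs."""
    return (per_token_loss * mask).sum() / mask.sum().clamp(min=1.0)
```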
GRPO + Code Interpreter To improve policy optimization stability and avoid value-function approximation, GRPO differs from PPO by using the average reward of multiple sampled outputs as a baseline instead of a learned value function. Specifically, for each input $x$, GRPO samples a group of responses $\{y_1, y_2, \ldots, y_G\}$ from the reference policy $\pi_{\mathrm{ref}}$ and optimizes the policy by maximizing:

$$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, y_{1:G} \sim \pi_{\mathrm{old}}(\cdot \mid x; \mathcal{C})}\left[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|y_i|} \sum_{t=1}^{|y_i|} \min\!\left( \rho_{i,t}\, \hat{A}_{i,t},\; \mathrm{clip}\!\left( \rho_{i,t},\, 1-\epsilon,\, 1+\epsilon \right) \hat{A}_{i,t} \right) \right] - \beta\, D_{\mathrm{KL}}\!\left[ \pi_\theta \,\|\, \pi_{\mathrm{ref}} \right], \quad \rho_{i,t} = \frac{\pi_\theta(y_{i,t} \mid x, y_{i,<t}; \mathcal{C})}{\pi_{\mathrm{ref}}(y_{i,t} \mid x, y_{i,<t}; \mathcal{C})}, \tag{2}$$

where $\epsilon$ and $\beta$ are hyperparameters, and $\hat{A}_{i,t}$ is the advantage, computed from the relative rewards of responses within each group. This eliminates the need for a separate value network. Note that $y_{i,t}$ is an LLM-generated token, not a code execution token. GRPO incorporates the KL divergence term $D_{\mathrm{KL}}$ between the policy and the reference model directly into the loss, rather than as a reward penalty, and applies the same token masking when computing it.

Reward function The reward function provides the core training signal for reinforcement learning. For R1-Code-Interpreter, we adopt a rule-based reward scheme based solely on final outcome correctness. In factual reasoning tasks, this involves exact matching; in planning tasks, it checks whether all constraints and goals are satisfied. Unlike the recent work by DeepSeek [Guo et al., 2025], we exclude format-based rewards, as our model already adheres well to structural formats. We also avoid training neural reward models for either outcome or process evaluation, due to their vulnerability to reward hacking and the high cost of retraining, following the approach of DeepSeek [Guo et al., 2025].
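To make the advantage concrete: with a rule-based outcome reward, each of the $G$ responses in a group receives a scalar reward, and its advantage is that reward relative to the group. The sketch below uses the common mean-and-standard-deviation normalization; the text above only states that advantages come from relative rewards within each group, so the exact normalization shown here is an assumption.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6):
    """rewards: shape (G,), one rule-based outcome reward per sampled
    response (e.g., 1.0 if the final answer is correct, else 0.0).
    Returns one scalar advantage per response; in Eq. (2) that scalar is
    broadcast to every unmasked LLM-generated token of the response."""
    baseline = rewards.mean()             # group baseline replaces a value net
    scale = rewards.std(unbiased=False)   # normalization choice is assumed
    return (rewards - baseline) / (scale + eps)
```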
4 Experiments

Experimental settings We conduct experiments using three default base models: Qwen2.5-14B-Instruct-1M, Qwen2.5-7B-Instruct-1M, and Qwen2.5-3B-Instruct [Qwen et al., 2025]. Additional ablations use DeepSeek-distilled reasoning models (3B/7B/14B) [Guo et al., 2025]. Code execution is performed on 48-core CPUs, with each question allowing up to 5 code calls and a 60-second timeout per script. For training, we sample 50 questions from each of the 107 train tasks, generating 6.5k SFT trajectories via GPT-4o rejection sampling with varied prompts, followed by GRPO fine-tuning. Evaluation uses a disjoint set of 100 questions per train task and all 37 test tasks to assess in-domain and out-of-domain generalization. For each training run, we select the final model based on the checkpoint with the lowest validation loss for SFT, and the highest training reward for GRPO and PPO at the converging stage. SFT is trained for 3 epochs to prevent overfitting. GRPO uses a learning rate of 1e-6 with 5 sampled responses per prompt and a KL penalty of 0.001. For PPO, the actor and critic learning rates are set to 1e-6 and 1e-5, respectively. Learning rates are tuned in early-stage experiments. Training and inference temperatures are set to 1.0 and 0.6, respectively. We use a batch size of 32 for SFT and 128 for GRPO and PPO. Both stages perform full-parameter fine-tuning on 8 H100 GPUs. The RL training is based on the VeRL framework [Sheng et al., 2025]. The complete training of R1-CI-14B takes approximately 1600 GPU hours (see Sec. 5 for further discussion). Unless stated otherwise, GRPO is the default RL method.

Baselines To evaluate the effectiveness of R1-Code-Interpreter (R1-CI), we compare it against several baselines: All Text + CoT, which prompts LLMs to reason using only text with chain-of-thought (CoT); All Code + CoT, which prompts LLMs to first reason with CoT and then produce code as the answer; CI wo Fine-tune, which prompts non-fine-tuned LLMs to use the Code Interpreter with the prompt shown in Table 1; and R1-CI wo GRPO, our R1-Code-Interpreter trained only with SFT, without GRPO. For broader comparison, we also benchmark R1-CI against GPT-4o + All Text + CoT and GPT-4o + OpenAI Code Interpreter. The prompts for All Text + CoT and All Code + CoT are displayed in Appendix Sec. C.

Evaluations Answers are evaluated using predefined rules, with GPT-4o assisting in format normalization when necessary. For methods that output code as the final answer, we extract and execute the code using predefined logic to obtain the final result or support further reasoning. To prevent infinite loops, each execution is limited to 60 seconds; timeouts result in failure or error handling in subsequent rounds. Task performance is measured using success rate.
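A hypothetical sketch of the rule-based evaluation path for factual tasks: extract the final answer from between the '<<<' and '>>>' markers and compare it to the gold answer after light normalization. The actual per-task criteria (and the GPT-4o-assisted format normalization) are richer than this.

```python
import re

FINAL_ANSWER = re.compile(r"<<<(.*?)>>>", re.DOTALL)

def extract_answer(response: str) -> str:
    """Take the last <<< ... >>> span as the model's final answer."""
    matches = FINAL_ANSWER.findall(response)
    return matches[-1].strip() if matches else ""

def exact_match_success(response: str, gold: str) -> float:
    """Binary success for factual tasks: exact match after collapsing
    whitespace and case. Planning tasks would instead run a task-specific
    constraint checker on the extracted plan."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    return 1.0 if normalize(extract_answer(response)) == normalize(gold) else 0.0
```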
Figure 3: Score distribution across 144 training and test tasks. (a) Score frequency distribution for the four compared methods. (b) Method performance across six evaluation capabilities in 144 tasks. The evaluated models are either 14B models or GPT-4o.

5 Analysis

5.1 Performance

Table 2 presents the experimental results of R1-Code-Interpreter (R1-CI) compared to baseline methods across the 107 training and 37 test tasks. The full experimental results are in Appendix Sec. E. R1-CI significantly enhances the model's reasoning and planning abilities, improving the average success rate by 28.8% on training tasks and 27.2% on test tasks across the 3B, 7B, and 14B model sizes. Notably, R1-CI-14B achieves a 64.1% success rate on test tasks, outperforming the much larger GPT-4o with CoT-only reasoning (58.6%) and approaching GPT-4o with Code Interpreter (70.9%). These consistent improvements across all model sizes highlight the effectiveness of R1-Code-Interpreter.

Table 2: Scores of compared methods on 144 tasks across three benchmarks: SymBench (SymB.), Big-Bench-Hard (BBH), and Reasoning-Gym (Rea.G.). The first three score columns report the 107 (26/20/61) train tasks; the next three report the 37 (7/7/23) test tasks. We abbreviate R1-Code-Interpreter as R1-CI.

Method (Acc %)           Train SymB.  Train BBH  Train Rea.G.  Test SymB.  Test BBH  Test Rea.G.  Avg. Train  Avg. Test
GPT-4o
  All Text + CoT            40.7        86.7        45.5          43.1        87.3      54.6         52.5        58.6
  Code Interpreter          63.7        85.0        63.3          63.9        85.9      68.4         67.4        70.9
Qwen2.5-14B-Instruct-1M
  All Text + CoT            24.0        77.7        35.8          22.3        80.4      39.6         40.8        44.1
  All Code + CoT            40.0        78.1        40.4          53.1        78.0      35.7         47.3        47.0
  CI wo Fine-tune           27.9        64.6        30.0          33.7        70.4      39.7         35.9        44.4
  R1-CI wo GRPO             69.0        87.2        48.3          56.0        77.3      56.1         60.6        60.1
  R1-CI                     71.1        87.8        52.4          59.6        79.9      60.6         63.6        64.1
Qwen2.5-7B-Instruct-1M
  All Text + CoT            19.5        66.0        25.7          14.6        69.1      26.3         31.7        32.2
  All Code + CoT            27.5        70.2        25.7          44.1        63.1      34.0         34.5        41.5
  CI wo Fine-tune           27.5        69.0        20.0          42.6        69.7      27.0         31.0        38.1
  R1-CI wo GRPO             65.6        83.7        45.6          55.3        70.1      53.6         57.6        57.0
  R1-CI                     67.7        85.2        49.1          55.9        74.1      57.9         60.4        60.6
Qwen2.5-3B-Instruct
  All Text + CoT            12.0        55.1        11.2          12.0        49.7      10.7         19.6        18.3
  All Code + CoT            18.3        60.4         5.0          33.9        52.3       8.9         18.6        21.8
  CI wo Fine-tune           13.8        56.9         5.4          25.4        51.9       8.8         17.1        20.1
  R1-CI wo GRPO             62.3        78.6        36.9          47.6        61.6      44.0         50.9        48.0
  R1-CI                     66.0        81.1        40.9          49.3        64.7      48.1         54.5        51.5

Figure 4: Evolution of training rewards and testing scores during GRPO training. (a) For all model sizes (3B, 7B, and 14B), the training rewards generally increase in the early steps, despite some fluctuations, before stabilizing at a plateau. (b) In the 14B setting, the testing scores across individual tasks exhibit diverse trends. However, the average test score increases during the early training steps and then plateaus, mirroring the behavior of training rewards in (a).

Score distribution and capability analysis Figure 3a shows the score distribution across 144 tasks for the four compared methods. We observe that SFT and GRPO training significantly reduce the number of tasks on which R1-CI models perform poorly. However, some tasks still yield low or even zero scores. This indicates that the inherent capabilities of the base LLM strongly affect overall performance, and training alone cannot overcome limitations on tasks beyond the model's inherent reasoning or knowledge abilities. Figure 3b shows the average score for each capability, computed over all tasks labeled with that capability in Table 4 (Appendix D). R1-CI-14B performs comparably to GPT-4o + Code Interpreter across most capabilities, except for optimization, where performance is lower, likely due to the increased complexity of coding and reasoning in those tasks.

Figure 5: Training-reward curves across training steps for progressively smaller task sets (50 questions per task). (a) All 107 training tasks. (b) Five tasks: Game24, reasoning_gym_graph_color, reasoning_gym_group_anagrams, Blocksworld, and BoxLift. (c) Single task: Game24.

GRPO training curves Figure 4 presents the GRPO training curves for three model sizes, showing the evolution of training rewards and testing scores over training steps. For all model sizes, training rewards generally increase in the early stages and then plateau, though with noticeable fluctuations. In Figure 4b, the testing scores exhibit varied trends across individual tasks due to differing task characteristics. Nevertheless, the average testing score follows a similar pattern, initially rising and then converging, mirroring the trend observed in training rewards. In contrast to prior works on RL-enhanced LLM reasoning [Guo et al., 2025, Jin et al., 2025, Zhang et al., 2025, Shao et al., 2024], which often show smoother and more apparent reward improvements, our GRPO training for a general-purpose Code Interpreter presents greater challenges. Most of these earlier studies focus on narrow domains such as math or retrieval, or rely on extremely large models (e.g., 600B in DeepSeek-R1) to handle task diversity. We hypothesize that the difficulty in our setting stems from the diversity of the 107 training tasks.
As shown in Figure 5, training on fewer tasks leads to more stable and pronounced reward improvements. In particular, training on a single task like Game24, a task that requires fine-grained switching between text and code reasoning [Chen et al., 2024b], yields reward curves similar to those in domain-specific settings. This suggests that task diversity is a key factor behind the instability in GRPO training for general-purpose models. Nonetheless, GRPO still improves average success rates by over 3% on both training and test tasks, comparable to the performance gap between the 7B and 14B models (Table 2).

GRPO training cost GRPO training for the Code Interpreter is computationally expensive. For instance, training R1-CI-14B takes around 1600 GPU hours. The cost arises mainly from two factors: (1) GRPO requires multiple (e.g., 5) sampled rollouts per answer turn to enable reward-based comparison, which is further intensified in our multi-turn generation setting; (2) the Code Interpreter introduces additional overhead due to costly code execution, especially for scripts involving search, iteration, or optimization. Although we cap the execution time at 60 seconds per script, it remains a major time sink. These factors highlight the need for more efficient RL training approaches.

Figure 6: (a1–a2) show the response length trajectories of Qwen-3B during GRPO fine-tuning after SFT, with (a1) trained on all 107 tasks and (a2) on the single task Game24. (b1–b2) present the code usage ratio (b1) and the average number of LLM inference turns per question (b2).

5.2 Response characteristics

Response length study Previous work [Guo et al., 2025, Jin et al., 2025] observed that LLM responses tend to grow longer during RL training, as the model learns to explore solutions through long-chain reasoning. Figures 6a1–a2 show the average response length over training steps. In contrast to prior findings, we observe no significant length increase, even when training on a single task. Possible reasons include: (1) the SFT stage already instills long-chain reasoning, as the data is synthesized by GPT-4o; (2) the multi-turn interaction spreads reasoning across turns, reducing per-turn response length; (3) code-augmented reasoning reduces reliance on long CoT chains, as it does not require iterative textual search.

Code usage ratio Figure 6b1 shows the average code usage ratio per question. After training, the model learns to better balance text-based and code-based solutions, instead of relying purely on code execution. Figure 6b2 shows the average number of inference turns per question, indicating that the model becomes more efficient after SFT and GRPO by using fewer turns.

Emergent behavior of self-checking During GRPO training, we observe emergent behavior where the model integrates textual reasoning and code execution to improve solution verification. For instance, in the final two reasoning turns of Figure 2, the model generates code to check whether the proposed solution satisfies the constraints. Across test samples, the model learns to verify answers using either textual reasoning or code execution, demonstrating an emergent self-checking behavior that enhances reasoning and planning.
Table 3: Ablation studies on using DeepSeek-distilled reasoning models as the base model, tested on the 37 test tasks.

Model      7B, SFT   7B, All Text   7B, All Code   14B, All Text   14B, All Code
DeepSeek    53.1        27.9            28.7           40.1            43.4
Qwen-2.5    57.0        32.2            41.5           44.1            47.0

Figure 7: GRPO vs. PPO and warm- vs. cold-start. (a–b) GRPO consistently beats PPO when training on (a) all 107 tasks and (b) the single task Game24 in the 3B model. (c–d) With GRPO, a warm start (preceded by SFT) outperforms a cold start for both Qwen-7B and Qwen-3B.

5.3 Ablation study

Reasoning vs. non-reasoning models as base: Table 3 compares training performance using the general-purpose Qwen-2.5 models versus same-sized long-chain reasoning models from DeepSeek. Whether after SFT or using the raw models, Qwen consistently outperforms DeepSeek, particularly in code generation for solving tasks.

GRPO vs. PPO: Figure 7(a–b) compares GRPO and PPO using the same dataset and initial model. GRPO consistently outperforms PPO in both full-task and single-task training settings.

Warm starts vs. cold starts: Figure 7(c–d) compares GRPO training with and without the initial SFT stage. Unlike prior findings [Guo et al., 2025, Jin et al., 2025] suggesting SFT is unnecessary or only marginally helpful, we observe that SFT is crucial for enabling the model to reason effectively with the Code Interpreter, even with extensive GRPO training.

Masked vs. unmasked code outputs: We also test removing the masking of code execution outputs during GRPO training; performance degrades without masking, as shown in Figure 8.

6 Related Work

Code generation and symbolic computing in LLM tasks LLMs are widely used in agent tasks such as software/web interaction [Zhou et al., 2023b, Hao et al., 2024, Xu et al., 2024], robot planning [Chen et al., 2024c, Ahn et al., 2022], and logical inference [Suzgun et al., 2022]. Many benchmark tasks can in fact be solved directly through code [Suzgun and Kalai, 2024, Gao et al., 2023], and recent work extends coding to reasoning and semantic analysis [Li et al., 2023, Weir et al., 2024]. Most prior approaches use either text [Yao et al., 2024, Ahn et al., 2022] or code [Liang et al., 2022, Bairi et al., 2024, Zhou et al., 2023c] exclusively as output. Recent work [Chen et al., 2024b] emphasizes the need to dynamically switch between modalities, proposing CodeSteer [Chen et al., 2025] as a guidance model. However, directly training LLMs to use a Code Interpreter remains largely unexplored, especially for CoT reasoning LLMs.

LLM self-reflection and CoT reasoning models LLM self-evaluation can enhance task performance across domains [Yang et al., 2022, Welleck et al., 2022, Madaan et al., 2023]. Models like OpenAI o1 [Jaech et al., 2024] and DeepSeek R1 [Guo et al., 2025] showcase agentic behavior via Chain-of-Thought (CoT) reasoning and self-reflection. Extensions with retrieval [Jin et al., 2025, Li et al., 2025a] and tool use [Qian et al., 2025] further improve reasoning, but lack symbolic computing and code generation, limiting performance on complex symbolic tasks and incurring high token and time costs [Chen et al., 2024a]. Integrating a Code Interpreter into reasoning LLMs is essential and challenging [Li et al., 2025b].
LLM fine-tuning with multi-turn SFT and RL SFT [Chen et al., 2024d] and RL [Ouyang et al., 2022] are widely used for LLM fine-tuning. To handle multi-turn agent tasks, these methods are extended with goal-conditioned rewards [Zhou et al., 2024, Zhai et al., 2024, Zhang et al., 2024]. Self-generated data, combined with search and rejection sampling [Zhou et al., 2023a, Guan et al., 2025], has become key for improving reasoning. Recent works, such as DeepSeek R1 [Guo et al., 2025], apply rule-based outcome rewards to enhance reasoning and planning in math [Shao et al., 2024], code [Wei et al., 2025], QA [Jin et al., 2025], and vision-language tasks [Wang et al., 2025, Zhang et al., 2025], using GRPO [Shao et al., 2024] and PPO [Ouyang et al., 2022].

7 Conclusion

We present a framework that integrates a Code Interpreter into LLM reasoning and planning via supervised and reinforcement learning. Our fine-tuned model, R1-CI-14B, surpasses GPT-4o without a Code Interpreter and approaches its performance with one. Adapting GRPO for training enhances model capability, but its impact is constrained by task diversity, underscoring the need for a strong base model and SFT stage. To our knowledge, this is the first open-source, general-purpose Code Interpreter trained with such methods. We further explore training strategies and observe emergent checking behaviors. Future work should address task diversity bottlenecks and reduce training costs.

References

Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, and James Zou. Mixture-of-agents enhances large language model capabilities. arXiv preprint arXiv:2406.04692, 2024.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.

Yongchao Chen, Yilun Hao, Yueying Liu, Yang Zhang, and Chuchu Fan. Codesteer: Symbolic-augmented language models via code/text guidance, 2025. URL https://arxiv.org/abs/2502.04350.

Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. Do not think that much for 2+3=? on the overthinking of o1-like llms. arXiv preprint arXiv:2412.21187, 2024a.

Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change. Advances in Neural Information Processing Systems, 36, 2024.

Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Language agent tree search unifies reasoning acting and planning in language models. arXiv preprint arXiv:2310.04406, 2023a.

Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large language models still can't plan (a benchmark for llms on planning and reasoning about change). In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.

Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners. arXiv preprint arXiv:2210.07128, 2022.
Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Yongchao Chen, Harsh Jhamtani, Srinagesh Sharma, Chuchu Fan, and Chi Wang. Steering large language models between code execution and textual reasoning. arXiv preprint arXiv:2410.03524, 2024b.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.

Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, and Faramarz Fekri. Can llms reason in the wild with programs? arXiv preprint arXiv:2406.13764, 2024.

Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516, 2025.

Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. R1-vl: Learning to reason with multimodal large language models via step-wise group relative policy optimization. arXiv preprint arXiv:2503.12937, 2025.

Yuxiang Wei, Olivier Duchenne, Jade Copet, Quentin Carbonneaux, Lingming Zhang, Daniel Fried, Gabriel Synnaeve, Rishabh Singh, and Sida I Wang. Swe-rl: Advancing llm reasoning via reinforcement learning on open software evolution. arXiv preprint arXiv:2502.18449, 2025.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.

Jonathon Shlens. Notes on kullback-leibler divergence and likelihood, 2014. URL https://arxiv.org/abs/1404.2000.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/1707.06347.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. In Proceedings of the Twentieth European Conference on Computer Systems, EuroSys '25, pages 1279–1297. ACM, March 2025. doi: 10.1145/3689031.3696075. URL http://dx.doi.org/10.1145/3689031.3696075.

Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. Webarena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023b.

Yilun Hao, Yongchao Chen, Yang Zhang, and Chuchu Fan. Large language models can plan your travels rigorously with formal verification tools. arXiv preprint arXiv:2404.11891, 2024.

Tianqi Xu, Linyao Chen, Dai-Jie Wu, Yanjun Chen, Zecheng Zhang, Xiang Yao, Zhiqiang Xie, Yongchao Chen, Shilong Liu, Bochen Qian, et al. Crab: Cross-environment agent benchmark for multimodal language model agents. arXiv preprint arXiv:2407.01511, 2024.

Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. Scalable multi-robot collaboration with large language models: Centralized or decentralized systems? In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 4311–4317. IEEE, 2024c.

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.

Mirac Suzgun and Adam Tauman Kalai. Meta-prompting: Enhancing language models with task-agnostic scaffolding. arXiv preprint arXiv:2401.12954, 2024.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023.

Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474, 2023.

Nathaniel Weir, Muhammad Khalifa, Linlu Qiu, Orion Weller, and Peter Clark. Learning to reason via program generation, emulation, and search. arXiv preprint arXiv:2405.16337, 2024.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024.

Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Arun Iyer, Suresh Parthasarathy, Sriram Rajamani, B Ashok, and Shashank Shet. Codeplan: Repository-level coding using llms and planning. Proceedings of the ACM on Software Engineering, 1(FSE):675–698, 2024.

Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023c.

Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan Klein. Re3: Generating longer stories with recursive reprompting and revision. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4393–4479, 2022.
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. Generating sequences by learning to self-correct. In The Eleventh International Conference on Learning Representations, 2022.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.

Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. Search-o1: Agentic search-enhanced large reasoning models. arXiv preprint arXiv:2501.05366, 2025a.

Cheng Qian, Emre Can Acikgoz, Qi He, Hongru Wang, Xiusi Chen, Dilek Hakkani-Tür, Gokhan Tur, and Heng Ji. Toolrl: Reward is all tool learning needs. arXiv preprint arXiv:2504.13958, 2025.

Xuefeng Li, Haoyang Zou, and Pengfei Liu. Torl: Scaling tool-integrated rl. arXiv preprint arXiv:2503.23383, 2025b.

Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335, 2024d.

Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, and Aviral Kumar. Archer: Training language model agents via hierarchical multi-turn rl. arXiv preprint arXiv:2402.19446, 2024.

Yuexiang Zhai, Hao Bai, Zipeng Lin, Jiayi Pan, Shengbang Tong, Yifei Zhou, Alane Suhr, Saining Xie, Yann LeCun, Yi Ma, et al. Fine-tuning large vision-language models as decision-making agents via reinforcement learning. arXiv preprint arXiv:2405.10292, 2024.

Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, and Min Lin. Chain of preference optimization: Improving chain-of-thought reasoning in llms. arXiv preprint arXiv:2406.09136, 2024.

Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint arXiv:2501.04519, 2025.

Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837, 2025.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
A Limitations and societal impacts

Limitations 1) As discussed in the paper, training is costly: about 1600 GPU hours for the 14B model. This reflects a broader challenge in RL-based LLM training, and in our case the overhead of code execution further exacerbates the cost. 2) While our Code Interpreter is designed to be general-purpose across reasoning and planning tasks, its scope does not cover broader application-specific tasks like plotting or software generation. In such cases, R1-CI may offer limited gains. 3) Due to high task diversity, GRPO training yields only modest improvements (around 3%), which is still acceptable. However, many tasks remain bottlenecked by the base model's inherent limitations. Scaling to larger, more capable models, similar to what DeepSeek explored, may help address this.

Societal impacts This paper contributes to advancing foundation models by augmenting language models with a Code Interpreter, which has strong potential to improve safety, performance, and alignment with human preferences. However, such capabilities are inherently dual-use: the same techniques that steer models toward harmless outputs can, with minor changes, be misused to generate harmful content. While misuse is a concern, we believe the broader societal benefits, particularly in improving evaluation and control over language model outputs, outweigh the risks.

B Ablation study on masking code execution output

Figure 8: Masked vs. unmasked code execution outputs during GRPO training on 3B models (training reward over training steps, 107 tasks). The masked setting ascends while the unmasked setting degrades.

C Prompt for All Text + CoT and All Code + CoT

Prompt for All Text + CoT

Analyze the question step by step and try to list all the careful points. Then try to acquire the final answer with step by step analysis. In the end of your response, directly output the answer to the question. Do not output the code for execution.

Prompt for All Code + CoT

You are a helpful AI assistant. Solve tasks using your coding skills. In the following cases, suggest python code (in a python coding block) for the user to execute. Don't include multiple code blocks in one response, only include one in the response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Think the task step by step if you need to. If a plan is not provided, explain your plan first. You can first output your thinking steps with texts and then the final python code.
Remember in the final code you still need to output each number in the final equation! Start the python block with ``` python.

D Description of reasoning and planning tasks

Here we describe the 144 training and testing tasks. They require strong symbolic, mathematical, logical, geometrical, scientific, and commonsense reasoning capabilities. Tasks T1 to T33 originate from SymBench [Chen et al., 2025], tasks T34 to T60 originate from Big-Bench-Hard [Suzgun et al., 2022], and the last tasks, T61 to T144, are from Reasoning-Gym (https://github.com/open-thought/reasoning-gym). We select questions with diverse difficulty and standardize them into a unified format to support fast rollout and testing.

T1 - 2048 Similarly to the 2048 game, in a grid, numbers representing powers of 2 can move in any direction, combining when they encounter a matching number to form the next power of 2. Given a starting position and a sequence of movements, the goal is to determine the resulting grid after executing the moves.

T2 - Blocksworld In Blocksworld, the objective is to stack a set of blocks (brown) according to a specific order. The robot can perform four actions: (1) pick up a block, (2) unstack a block from the top of another block, (3) put down a block, (4) stack a block on top of another block. A robot can only pick up, unstack, or stack a block if it is clear, that is, the block has no other blocks on top and is not currently being held.

T3 - BoxLift This task involves coordinating robots of various types to lift boxes of different sizes and weights. Each robot has a specific lifting capacity and can collaborate with others to lift a single box. A box can only be lifted if the combined lifting capacity of the robots exceeds the box's weight. The objective is to lift all the boxes in the minimum number of time steps.

T4 - BoxNet This task involves coordinating robot arms to move colored boxes (squares) into corresponding colored goal locations (circles) in the fewest time steps. Each robot arm is assigned and restricted to a cell indicated by the dotted lines. The arms have two possible actions: (1) move a box within their cell to a neighboring cell, or (2) move a box within their cell to a goal location within the same cell. The objective is to ensure all boxes are placed in their matching goal locations efficiently.

T5 - Combinatorial Calculation Given a set of integers, the goal is to use arithmetic operations (addition, subtraction, multiplication, division) and parentheses to arrange the numbers in such a way that the final result matches a specified target value. Each number must be used exactly once, and the order of the numbers cannot be changed.
D Description of reasoning and planning tasks

Here we describe the 144 training and testing tasks. They require strong symbolic, mathematical, logical, geometrical, scientific, and commonsense reasoning capabilities. Tasks T1-T33 originate from SymBench [Chen et al., 2025], tasks T34-T60 from Big-Bench-Hard [Suzgun et al., 2022], and tasks T61-T144 from Reasoning-Gym (https://github.com/open-thought/reasoning-gym). We select questions of diverse difficulty and standardize them into a unified format to support fast rollout and testing.

T1 - 2048: Similar to the 2048 game, numbers representing powers of 2 move in any direction within a grid, combining when they encounter a matching number to form the next power of 2. Given a starting position and a sequence of movements, determine the resulting grid after executing the moves.

T2 - Blocksworld: The objective is to stack a set of blocks according to a specified order. The robot can perform four actions: (1) pick up a block, (2) unstack a block from the top of another block, (3) put down a block, and (4) stack a block on top of another block. A robot can only pick up, unstack, or stack a block if it is clear, that is, the block has no other blocks on top and is not currently being held.

T3 - BoxLift: Coordinate robots of various types to lift boxes of different sizes and weights. Each robot has a specific lifting capacity and can collaborate with others to lift a single box. A box can only be lifted if the combined lifting capacity of the assigned robots exceeds the box's weight. The objective is to lift all the boxes in the minimum number of time steps.

T4 - BoxNet: Coordinate robot arms to move colored boxes into corresponding colored goal locations in the fewest time steps. Each robot arm is restricted to its assigned cell. The arms have two possible actions: (1) move a box within their cell to a neighboring cell, or (2) move a box within their cell to a goal location within the same cell. The objective is to place all boxes in their matching goal locations efficiently.

T5 - Combinatorial Calculation: Given a set of integers, use arithmetic operations (addition, subtraction, multiplication, division) and parentheses to arrange the numbers so that the final result matches a specified target value. Each number must be used exactly once, and the order of the numbers cannot be changed.

T6 - Constrained Linear Arrangement: In a two-player card game, deduce the opponent's moves based on the game's rules, your played cards, and the announced results of each round. Each card can only be used once, and the game follows specific interaction rules between different card types, where certain cards can defeat, be defeated by, or draw with others according to predefined relationships.

T7 - Cryptanalysis: You are given a combination lock consisting of numbers and letters, where neither the numbers nor the letters repeat. Using a series of guesses and feedback, deduce the correct password from the given conditions.

T8 - Eight Queens: Given a grid with some queens already placed, place the remaining queens so that no two queens share the same row, column, or diagonal, while avoiding positions with obstacles in the grid.

T9 - Game 24: Query LLMs to use a given set of integers to generate an equation that evaluates to 24 (see the brute-force sketch after T16).

T10 - Gridworld: Query LLMs to plan robot actions in a grid world, reaching all goals in any order while avoiding obstacles.

T11 - GSM [Gao et al., 2023]: A more challenging version of the GSM8K [Cobbe et al., 2021] math reasoning dataset, in which the numbers in the original GSM8K questions are replaced with larger, less common values.

T12 - Letter Logic Diagram: Complete an incomplete grid by selecting from a list of letters, where each row and column must contain each letter exactly once, and all cells on the minor diagonal (top-right to bottom-left) must contain the same letter. Some cells are already filled in as constraints.

T13 - Letters: Count the total number of occurrences of a specific letter in a long word and specify their positions, e.g., "How many r's are in the word strawberry, and what are their positions?" This task has recently gained significant attention because current LLMs struggle to perform it accurately.

T14 - Light Puzzles: Given an n×n grid representing a network of lights, where a lit light is "1" and an unlit light is "0", several buttons toggle the lights at certain positions, and each light can be affected by multiple buttons. Follow a series of button presses and determine the final state of the grid.

T15 - Logical Puzzle: Select a specified number of different values from a grid of numbers, ensuring that given mathematical constraints (sum or product) are satisfied by the selected numbers in each row and column.

T16 - Logical Equation: Assign a specific numeric value to each letter from a given set, using a predefined range of numbers and a set of inequalities. Each letter corresponds to a unique number, and the relationships between the letters are defined by mathematical equations or constraints.
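Many of the search-style tasks above (e.g., T5, T9, T16) reduce to short brute-force programs, which is exactly what the Code Interpreter setting rewards. As a sketch (ours, not the paper's code), a partial Game-24 solver that tries number orders, operator choices, and two representative bracketings; a complete solver would enumerate all five bracketings of four numbers:

```python
from itertools import permutations, product

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b if b else float('inf')}

def solve_24(nums, target=24, eps=1e-6):
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(OPS, repeat=3):
            v1 = OPS[o3](OPS[o2](OPS[o1](a, b), c), d)   # ((a.b).c).d
            v2 = OPS[o3](OPS[o1](a, b), OPS[o2](c, d))   # (a.b).(c.d)
            if abs(v1 - target) < eps:
                return f"(({a}{o1}{b}){o2}{c}){o3}{d}"
            if abs(v2 - target) < eps:
                return f"({a}{o1}{b}){o3}({c}{o2}{d})"
    return None

print(solve_24([1, 2, 3, 4]))  # e.g., ((1+2)+3)*4
```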
T17 - Mahjong: Given an initial set of letter cards, a new card is added and one card is removed in each round. Specific effects occur when particular combinations of cards appear after the new card is introduced, and a result is determined by these conditions. The goal is to determine the outcome after a series of rounds.

T18 - MATH-Count&Probability: Math reasoning questions from the MATH dataset [Hendrycks et al., 2021], focusing on counting and probability.

T19 - MATH-Geometry: Math reasoning questions from the MATH dataset [Hendrycks et al., 2021], focusing on geometry.

T20 - Matrix Transformation: Rotate a given matrix of characters according to a given instruction (e.g., 90 degrees clockwise), preserving each character's position relative to the others in the transformed output. The input matrix can be of any size and contain any characters.

T21 - New Operator: Custom mathematical operations on two numbers are defined with unique formulas. Use the given definitions of these operations to compute the result of a specific expression.

T22 - Number Multiplying: Compute the product of given integers, a classic problem that LLMs cannot reliably solve through pure textual reasoning.

T23 - Pattern Recognition: Find all squares in a character matrix where each square consists of identical characters and has a side length of at least 3.

T24 - Permutation and Combination: Given a set of objects with specific positioning constraints, determine the correct arrangement of the objects on a shelf. Each object must be placed according to the rules provided, satisfying the conditions on adjacency, order, and specific positions (e.g., "Book A must be adjacent to book I").

T25 - Pooling: Apply a pooling operation to a numerical N×N grid. The pooling operation uses an n×n sliding window (n < N) that moves across the grid from left to right and top to bottom; the results from each window are arranged by position to create a new output matrix (see the sketch after T28).

T26 - Reversi: In this Reversi-like game, players take turns placing pieces on an n×n grid. After a piece is placed, any of the opponent's pieces located between two of the player's pieces (in the same row, column, or diagonal) are flipped. Determine the state of the board after a given number of rounds, starting from a given configuration.

T27 - Standard Sudoku: Given a partially filled Sudoku grid, fill the remaining empty cells with numbers between 1 and 9, ensuring that no number repeats in the same row, column, or 3×3 subgrid.

T28 - Statistical Counting: Calculate the total score of a string by scanning it from left to right, where runs of consecutive identical letters earn points (for example, two or more consecutive A's add 1 point, B's add 2 points, etc.). Start with a score of 0 and return the final sum.
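To make T25 concrete, here is a short illustrative pooling routine. The task statement does not fix the window stride, so stride 1 is an assumption of this sketch:

```python
import numpy as np

# Sketch of T25 pooling: slide an n-by-n window over an N-by-N grid
# (stride assumed to be 1) and collect one statistic per window.

def pool(grid: np.ndarray, n: int, op=np.max) -> np.ndarray:
    N = grid.shape[0]
    out = np.empty((N - n + 1, N - n + 1))
    for i in range(N - n + 1):
        for j in range(N - n + 1):
            out[i, j] = op(grid[i:i + n, j:j + n])
    return out

print(pool(np.arange(16).reshape(4, 4), 2))  # 3x3 matrix of window maxima
```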
T29 - String Deletion and Modification: Transform a string by repeatedly applying a set of ordered string-manipulation rules until no more changes are possible, where each rule modifies the string based on specific patterns or conditions in its current state (e.g., "If the string ends with 'ba', replace it with 'ab'.").

T30 - String Insertion: Transform a string by scanning it from left to right and inserting specific characters after certain character patterns (e.g., each occurrence of the pattern WXYZ requires inserting W immediately after it). All operations are performed simultaneously on the original string.

T31 - String Splitting: A dismantling engineer has old machines and can obtain machine parts through a set of predefined methods. By continuously cycling through these methods in a specific order, the engineer dismantles machines or combines parts to create new components; determine the total number of parts and remaining machines after all possible cycles.

T32 - String Synthesis: Given an initial set of blocks and a set of synthesis rules that combine different types of blocks, determine the final block(s) after repeatedly applying these rules in order until no more combinations are possible.

T33 - Synthesis Decomposition: A farmer grows various crops and can exchange them for agricultural products. Using a set of methods, he trades specific combinations of crops for products, following a cyclic pattern until no further exchanges are possible. Determine the synthesis result for each round.

T34 - Boolean Expressions: Determine whether a randomly generated Boolean expression, built from the constants True and False and the operators and, or, and not, evaluates to true or false.

T35 - Causal Judgment: Read a brief scenario and predict the answer an average person would give to a causal question about it, including moral, intentional, or counterfactual aspects.

T36 - Date Understanding: Interpret a few sentences that reference dates and answer a related question (e.g., compute and return a specific date in MM/DD/YYYY format).

T37 - Disambiguation QA: For a sentence containing a potentially ambiguous pronoun, decide whether its reference is genuinely unclear; if it is clear, identify the noun to which the pronoun refers.

T38 - Dyck Languages: Complete a Dyck-4 string by providing the missing closing parentheses that properly balance the given prefix (see the sketch after T44).

T39 - Formal Fallacies: Examine a set of statements and judge whether the informal argument that follows is deductively valid or commits a formal fallacy, with particular attention to negations.

T40 - Geometric Shapes: Analyze a full SVG path description and identify the geometric figure it would draw.

T41 - Hyperbaton: Choose which of two sentences follows the natural English ordering of adjectives.

T42-T44 - Logical Deduction 3/5/7 Objects: Use spatial clues to determine the correct ordering of a set of 3/5/7 objects.
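T38 has a particularly compact programmatic solution, which illustrates why such tasks benefit from code execution. A sketch (assuming the given prefix is itself valid):

```python
# Sketch for T38 (Dyck-4): emit the closing sequence that balances a prefix.

PAIRS = {'(': ')', '[': ']', '{': '}', '<': '>'}

def complete_dyck(prefix: str) -> str:
    stack = []
    for ch in prefix.split():
        if ch in PAIRS:
            stack.append(ch)      # remember each opener
        else:
            stack.pop()           # a closer matches the most recent opener
    return ' '.join(PAIRS[ch] for ch in reversed(stack))

print(complete_dyck("( [ { }"))   # -> "] )"
```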
T45 - Movie Recommendation: From four candidate films, select the one that best matches the preferences implied by the movies a user has already enjoyed.

T46 - Multi-Step Arithmetic: Perform multi-step calculations involving addition, subtraction, multiplication, and division to obtain the correct result.

T47 - Navigate: Follow a sequence of movement instructions and state whether the agent finishes at its starting point.

T48 - Object Counting: Given a list of items and their quantities, count how many belong to a specified category.

T49 - Penguins in a Table: Refer to a table of individual penguins and their attributes (possibly with extra information) to answer a question about one of those attributes.

T50 - Reasoning about Colored Objects: Using the provided context, state the color of a particular object on a surface.

T51 - Ruin Names: Make a single-character change to an artist, band, or movie name to create a humorous new meaning.

T52 - Salient Translation Error Detection: Examine a German sentence and its English translation, and classify the main translation error present.

T53 - Snarks: From a pair of nearly identical sentences, identify the one that is sarcastic.

T54 - Sports Understanding: Judge whether a fabricated sports-related statement is plausible.

T55 - Temporal Sequences: Given a timeline of a person's daily activities, identify a time slot when they could have performed another specified task.

T56-T58 - Tracking Shuffled 3/5/7 Objects: Trace a set of 3/5/7 objects through a series of pairwise swaps to determine their final positions.

T59 - Web of Lies: Decide whether a Boolean function described in a word problem evaluates to true or false.

T60 - Word Sorting: Arrange the provided words in standard alphabetical order.

T61 - AB: Rewrite an A::B token string by exhaustively applying neighbor-collision rules and return the final sequence.

T62 - Acre: From example Blicket-detector outcomes, decide whether a new object set turns the detector "on" or "off", or whether the outcome is undetermined.

T63 - Advanced Geometry: Solve analytic geometry questions (e.g., angle, orthocentre, incircle radius) given vertex coordinates.

T64 - AIW: Answer small "Alice-in-Wonderland" social reasoning problems about siblings, friends, or colleagues.

T65 - ARC_1D: Infer the rule that maps example 1D input grids to output grids and apply it to a test grid.

T66 - ARC_AGI: Same as ARC_1D but with rotations, mirrors, and permutations on 2D grids.

T67 - Base Conversion: Convert integers between arbitrary bases.

T68 - Basic Arithmetic: Evaluate basic arithmetic expressions.

T69 - BF: Given example BF programs and their outputs, determine the output of a new BF program.

T70 - Binary Alternation: Find the minimum number of character swaps that make a binary string alternating, i.e., with no two adjacent characters equal.

T71 - Binary Matrix: Given a binary matrix, find the distance to the nearest 0 for each cell.

T72 - Bitwise Arithmetic: Compute the results of expressions with mixed bitwise and arithmetic operators.

T73 - Caesar Cipher: Decrypt a Caesar-cipher text (see the sketch below).
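A Caesar cipher (T73) can be cracked by brute force over all 26 shifts; the crib-based check below is a heuristic of this sketch, not part of the task specification:

```python
# Sketch for T73: try every shift and keep the candidate containing a
# common English word.

def shift(text: str, k: int) -> str:
    return ''.join(chr((ord(c) - 65 - k) % 26 + 65) if c.isalpha() else c
                   for c in text.upper())

def crack_caesar(cipher: str, crib: str = 'THE') -> str:
    candidates = [shift(cipher, k) for k in range(26)]
    return next((c for c in candidates if crib in c), candidates[0])

print(crack_caesar("WKH TXLFN EURZQ IRA"))  # THE QUICK BROWN FOX
```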
T74 - Calendar Arithmetic: Given a description of the calendar, answer a question by performing arithmetic such as adding or subtracting days/months/years or computing weekday differences.

T75 - Chain Sum: Calculate a simple arithmetic problem and output the answer.

T76 - Circuit Logic: Given a logic circuit with logical operators, evaluate the output for given inputs.

T77 - Codeio: Read and reason about a task description and pseudocode program, and report the program's outputs for given inputs.

T78 - Color Cube Rotation: After rotating a 3D colored cube, name the color on a queried face.

T79 - Complex Arithmetic: Perform arithmetic with complex numbers and report the answers.

T80 - Count Bits: Given a large number, count the occurrences of 1 bits in its binary representation.

T81 - Count Primes: Count the prime numbers within an interval.

T82 - Countdown: Write an expression that reaches a target integer using the given numbers and the four basic operations.

T83 - Course Schedule: Given a list of courses that must be taken and their prerequisites, determine whether all courses can be finished (see the sketch after T97).

T84 - Decimal Arithmetic: Evaluate decimal expressions to a given precision.

T85 - Dice: Compute the probabilities of rolling results in fair-dice experiments with dice of varying numbers of sides.

T86 - Emoji Mystery: Deduce hidden sentences expressed with emoji symbols.

T87 - Family Relationships: Answer kinship queries over family trees.

T88 - Figlet Font: Read FIGlet banners and output their content as strings.

T89 - Fraction Simplification: Simplify fractions to lowest terms.

T90 - Futoshiki: Fill in the empty cells of Futoshiki puzzles, which include inequality constraints.

T91 - Game of Life: Simulate Conway's Game of Life for k steps.

T92 - Game of Life Halting: Decide whether a Game-of-Life configuration halts within k steps, i.e., no cells remain alive.

T93 - GCD: Compute the greatest common divisor of numbers.

T94 - Graph Color: Provide a coloring for a graph such that no vertex is connected to a vertex of the same color.

T95 - Group Anagrams: Given a list of words, cluster the words that are anagrams of one another.

T96 - Intermediate Integration: Given an expression, calculate its indefinite integral.

T97 - Isomorphic Strings: Decide whether two strings are isomorphic, i.e., whether the characters of one string can be replaced to obtain the other.
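T83 is a cycle-detection problem: all courses can be finished iff the prerequisite graph is acyclic. A standard sketch using Kahn's algorithm (illustrative; the benchmark's exact input format is not reproduced):

```python
from collections import deque

def can_finish(num_courses: int, prereqs: list[tuple[int, int]]) -> bool:
    adj = [[] for _ in range(num_courses)]
    indeg = [0] * num_courses
    for course, pre in prereqs:
        adj[pre].append(course)   # pre -> course edge
        indeg[course] += 1
    queue = deque(i for i in range(num_courses) if indeg[i] == 0)
    taken = 0
    while queue:
        node = queue.popleft()
        taken += 1
        for nxt in adj[node]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return taken == num_courses   # all courses reachable => no cycle

print(can_finish(2, [(1, 0)]))    # True: take course 0, then course 1
```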
T98 - Jugs: Given empty jugs of different sizes, give a plan for filling any one of the available jugs with a target amount of water by filling, emptying, or pouring from one jug to another.

T99 - Knight Swap: Swap two knights on a chessboard in the fewest moves.

T100 - Knights Knaves: Determine who is a knight (truth-teller) and who is a knave from their statements.

T101 - Largest Island: Find the size of the largest connected component of 1s in a binary grid (see the flood-fill sketch after T122).

T102 - LCM: Find the least common multiple (LCM) of numbers.

T103 - Leg Counting: Count how many legs there are in total, given a list of animals.

T104 - Letter Jumble: The letters of each word in a sentence may have been randomly shuffled; reconstruct the original words.

T105 - List Functions: Given examples of how inputs map to outputs, infer the mapping and apply it to a new input.

T106 - Manipulate Matrix: Apply a sequence of matrix transformations to a matrix and output the result.

T107 - Maze: Compute the shortest path length from start to goal in a maze.

T108 - Modulo Grid: Identify the mathematical pattern that defines a grid, then use it to fill in the question marks in the grid.

T109 - Needle Haystack: Locate a short pattern inside a longer string.

T110 - Number Filtering: Given a list of numbers and a requirement, remove the numbers that do not satisfy the requirement.

T111 - Number Format: Pick the largest/smallest number out of several options.

T112 - Number Sequence: Predict the next term of an integer sequence from the preceding pattern.

T113 - Number Sorting: Sort number lists in the required order.

T114 - Palindrome Generation: Given a list of letters, form a valid palindrome.

T115 - Palindrome Partitioning: Given a string, find all ways to partition it such that every substring is a palindrome.

T116 - Polynomial Equations: Find the real values of a variable in a polynomial equation.

T117 - Polynomial Multiplication: Calculate the result of multiplying two polynomials.

T118 - Pool Matrix: Perform max- or average-pooling on numeric matrices.

T119 - Products: Compute products of numbers.

T120 - Propositional Logic: Given a list of premises, infer a correct conclusion from them.

T121 - Quantum Lock: There are buttons, a light, and a number. The light toggles between red and green whenever a button is pressed. Each button applies a mathematical operation to the number, and the operation may depend on the state of the light. Press the shortest correct sequence of buttons to reach the target value.

T122 - Ransom Note: Given two strings representing a ransom note and a magazine, determine whether the ransom note can be constructed from the letters in the magazine.
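As referenced in T101, a flood-fill over 4-connected components solves the task directly; a self-contained sketch:

```python
# Sketch for T101: size of the biggest 4-connected component of 1s.

def largest_island(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    seen, best = set(), 0
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] == 1 and (r0, c0) not in seen:
                stack, size = [(r0, c0)], 0
                seen.add((r0, c0))
                while stack:              # iterative flood fill
                    r, c = stack.pop()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] == 1 and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                best = max(best, size)
    return best

print(largest_island([[1, 1, 0], [0, 1, 0], [0, 0, 1]]))  # 3
```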
T123 - Rearc: Find the common rule that maps input grids to output grids and apply it to predict the output for a test input grid.

T124 - Rectangle Count: Count how many rectangles are present in an ASCII grid.

T125 - Rotate Matrix: Given a square matrix, rotate it and output the rotated matrix.

T126 - Rotten Oranges: You are given an n×n grid where each cell can be empty, contain a fresh orange, or contain a rotten orange. Every minute, any fresh orange that is 4-directionally adjacent to a rotten orange becomes rotten. Determine the minimum number of minutes that must elapse until no cell contains a fresh orange.

T127 - Rubiks Cube: Given a Rubik's Cube, provide a solution using Singmaster notation.

T128 - Rush Hour: Given a Rush Hour parking grid, give a plan of car movements that achieves the required car positions.

T129 - Self Reference: Evaluate self-referential arithmetic expressions and output the number of possible solutions.

T130 - Shortest Path: Find the length of the shortest path in a grid.

T131 - Simple Equations: Solve equations with one variable.

T132 - Simple Geometry: Given a polygon and all of its interior angles except one, calculate the remaining interior angle.

T133 - Simple Integration: Solve indefinite integration problems with one variable.

T134 - Sokoban: Find a list of actions that solves a Sokoban level.

T135 - Spell Backward: Reverse input strings.

T136 - Spiral Matrix: Given a matrix, list its elements in spiral order, starting from the top-left element.

T137 - String Manipulation: Repeatedly transform a string according to a set of rules until no further transformations apply or a state repeats.

T138 - Syllogism: Given some statements, answer the question by retrieving information from the statements.

T139 - Time Intervals: Compute durations between two times/dates in various formats and complexities.

T140 - Tower of Hanoi: Output an optimal (or specified) move list to transfer disks between pegs and solve a Tower of Hanoi problem (see the sketch below).

T141 - Tsumego: Choose the single correct Go move to capture or save stones.

T142 - Word Ladder: Transform one word into another via single-letter changes, using only dictionary words.

T143 - Word Sequence Reversal: Given a list of words, reverse their order.

T144 - Zebra Puzzles: Solve a logic puzzle by gathering information from given statements and deducing the answer to the question.
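For T140, the optimal move list follows from the classic recursion (2^n - 1 moves for n disks); peg names here are illustrative:

```python
# Sketch for T140: optimal Tower of Hanoi move list, n disks from src to dst.

def hanoi(n: int, src='A', aux='B', dst='C') -> list[str]:
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)            # park n-1 disks on aux
            + [f"move disk {n} from {src} to {dst}"]
            + hanoi(n - 1, aux, src, dst))         # move them onto dst

for step in hanoi(3):
    print(step)  # 7 moves
```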
Table 4: The evaluated capabilities of all tasks, classified as Execution, Planning, and Reasoning tasks. Columns: Math | Spatial | Logical | Order | Optimization | Search.

Execution tasks:
2048 ✓ ✓ ✓ ✗ ✗ ✗
Light Puzzles ✗ ✓ ✗ ✗ ✗ ✗
Mahjong ✗ ✗ ✗ ✓ ✗ ✗
Matrix Transform. ✗ ✓ ✗ ✗ ✗ ✗
New operator ✓ ✗ ✗ ✗ ✗ ✗
Number Multiplying ✓ ✗ ✗ ✗ ✗ ✗
Pattern Recognition ✗ ✓ ✗ ✗ ✗ ✓
Pooling ✓ ✓ ✗ ✗ ✗ ✗
Reversi ✗ ✓ ✗ ✗ ✗ ✗
Statistical Counting ✓ ✗ ✗ ✓ ✗ ✗
String Del. & Modi. ✗ ✗ ✓ ✓ ✗ ✓
String Insertion ✗ ✗ ✓ ✓ ✗ ✓
String Splitting ✗ ✗ ✓ ✓ ✗ ✓
String Synthesis ✗ ✗ ✓ ✓ ✗ ✓
Synthesis Decomp. ✗ ✗ ✓ ✓ ✗ ✓
Dyck Languages ✗ ✗ ✓ ✓ ✗ ✗
Multi-Step Arithmetic ✓ ✗ ✓ ✗ ✗ ✗
Navigate ✗ ✓ ✗ ✓ ✗ ✗
Object Counting ✓ ✗ ✓ ✗ ✗ ✗
Ruin Names ✗ ✗ ✓ ✗ ✗ ✓
Tracking Shuffled Obj. ✗ ✓ ✓ ✓ ✗ ✗
Word Sorting ✗ ✗ ✗ ✓ ✗ ✗
AB ✗ ✗ ✓ ✓ ✗ ✗
ARC_1D ✗ ✓ ✗ ✓ ✗ ✗
ARC_AGI ✗ ✓ ✗ ✓ ✗ ✗
Base Conversion ✓ ✗ ✗ ✗ ✗ ✗
Basic Arithmetic ✓ ✗ ✗ ✗ ✗ ✗
BF ✗ ✗ ✓ ✓ ✗ ✗
Binary Alternation ✗ ✗ ✓ ✓ ✗ ✗
Binary Matrix ✓ ✓ ✗ ✗ ✗ ✓
Bitwise Arithmetic ✓ ✗ ✗ ✗ ✗ ✗
Caesar Cipher ✗ ✗ ✗ ✓ ✗ ✗
Chain Sum ✓ ✗ ✗ ✗ ✗ ✗
Codeio ✗ ✗ ✓ ✓ ✗ ✗
Color Cube Rotation ✗ ✓ ✗ ✓ ✗ ✗
Complex Arithmetic ✓ ✗ ✗ ✗ ✗ ✗
Count Bits ✓ ✗ ✗ ✗ ✗ ✗
Count Primes ✓ ✗ ✗ ✗ ✗ ✗
Decimal Arithmetic ✓ ✗ ✗ ✗ ✗ ✗
Dice ✓ ✗ ✗ ✗ ✗ ✗
Fraction Simplification ✓ ✗ ✗ ✗ ✗ ✗
GCD ✓ ✗ ✗ ✗ ✗ ✗
Group Anagrams ✗ ✗ ✗ ✓ ✗ ✗
Intermediate Integration ✓ ✗ ✗ ✗ ✗ ✗
Isomorphic Strings ✗ ✗ ✓ ✗ ✗ ✗
Largest Island ✗ ✓ ✗ ✗ ✓ ✗
LCM ✓ ✗ ✗ ✗ ✗ ✗
Leg Counting ✓ ✗ ✗ ✗ ✗ ✗
Letter Jumble ✗ ✗ ✗ ✓ ✗ ✗
List Functions ✓ ✗ ✗ ✓ ✗ ✗
Manipulate Matrix ✓ ✓ ✗ ✗ ✗ ✗
Number Filtering ✓ ✗ ✗ ✗ ✗ ✓
Number Format ✓ ✗ ✗ ✗ ✗ ✗
Number Sorting ✗ ✗ ✗ ✓ ✗ ✗
Palindrome Generation ✗ ✗ ✗ ✓ ✗ ✗
Palindrome Partitioning ✗ ✗ ✓ ✗ ✗ ✗
Poly. Equations ✓ ✗ ✗ ✗ ✗ ✗
Poly. Multiplication ✓ ✗ ✗ ✗ ✗ ✗
Pool Matrix ✓ ✓ ✗ ✗ ✗ ✗
Products ✓ ✗ ✗ ✗ ✗ ✗
Rectangle Count ✓ ✓ ✗ ✗ ✗ ✓
Rotate Matrix ✗ ✓ ✗ ✗ ✗ ✗
Rotten Oranges ✗ ✓ ✗ ✗ ✓ ✗
Simple Equations ✓ ✗ ✗ ✗ ✗ ✗
Simple Geometry ✓ ✓ ✗ ✗ ✗ ✗
Simple Integration ✓ ✗ ✗ ✗ ✗ ✗
Spell Backward ✗ ✗ ✗ ✓ ✗ ✗
Spiral Matrix ✗ ✓ ✗ ✗ ✗ ✗
String Manipulation ✗ ✗ ✓ ✓ ✗ ✗
Time Intervals ✓ ✗ ✓ ✗ ✗ ✗
Word Seq. Reversal ✗ ✗ ✗ ✓ ✗ ✗

Planning tasks:
Blocksworld ✗ ✓ ✓ ✗ ✓ ✗
BoxLift ✗ ✗ ✓ ✗ ✓ ✗
BoxNet ✗ ✗ ✓ ✗ ✓ ✗
Combinatorial Calc. ✓ ✗ ✗ ✗ ✓ ✗
Const. Linear Arrange. ✗ ✗ ✓ ✗ ✗ ✗
Cryptanalysis ✗ ✗ ✓ ✗ ✗ ✗
Eight Queens ✗ ✓ ✗ ✗ ✗ ✗
Game 24 ✓ ✗ ✗ ✓ ✓ ✗
Gridworld ✗ ✓ ✗ ✓ ✗ ✓
Letter Logic Diagram ✗ ✓ ✓ ✗ ✗ ✗
Letters ✗ ✓ ✗ ✗ ✗ ✓
Logic Puzzle ✓ ✓ ✗ ✗ ✗ ✓
Permut. and Combina. ✗ ✓ ✓ ✓ ✗ ✗
Standard Sudoku ✓ ✓ ✗ ✗ ✗ ✓
Movie Recommendation ✗ ✗ ✗ ✗ ✓ ✓
Temporal Sequences ✓ ✗ ✓ ✓ ✗ ✗
Countdown ✓ ✗ ✗ ✓ ✗ ✗
Course Schedule ✗ ✗ ✓ ✓ ✗ ✗
Futoshiki ✓ ✓ ✓ ✗ ✗ ✗
Graph Color ✗ ✓ ✓ ✗ ✓ ✗
Jugs ✓ ✗ ✓ ✓ ✓ ✗
Knight Swap ✗ ✓ ✓ ✗ ✓ ✗
Maze ✗ ✓ ✓ ✗ ✓ ✗
Modulo Grid ✓ ✓ ✓ ✗ ✗ ✗
Quantum Lock ✗ ✗ ✓ ✓ ✓ ✗
Rubiks Cube ✗ ✓ ✓ ✓ ✗ ✗
Rush Hour ✗ ✓ ✓ ✗ ✗ ✗
Shortest Path ✓ ✓ ✓ ✗ ✓ ✗
Sokoban ✗ ✓ ✓ ✓ ✗ ✗
Tower of Hanoi ✓ ✗ ✓ ✓ ✓ ✗
Tsumego ✗ ✓ ✓ ✗ ✓ ✗
Word Ladder ✗ ✗ ✓ ✗ ✗ ✓

Reasoning tasks:
Logical Deduction ✗ ✗ ✓ ✗ ✗ ✗
GSM ✓ ✗ ✓ ✗ ✗ ✗
MATH-Count&Prob. ✓ ✗ ✓ ✗ ✗ ✓
MATH-Geometry ✓ ✓ ✗ ✗ ✗ ✗
Hyperbaton ✗ ✗ ✗ ✓ ✗ ✗
Logical Deduction ✗ ✓ ✓ ✓ ✗ ✗
Penguins in a Table ✗ ✗ ✓ ✗ ✗ ✗
Reasoning Colored Obj. ✗ ✓ ✓ ✗ ✗ ✗
Salient Trans. Err. Detect. ✗ ✗ ✗ ✗ ✗ ✓
Snarks ✗ ✗ ✗ ✗ ✗ ✓
Sports Understanding ✗ ✗ ✓ ✗ ✗ ✓
Web of Lies ✗ ✗ ✓ ✗ ✗ ✗
Acre ✗ ✗ ✓ ✓ ✗ ✗
Advanced Geometry ✓ ✓ ✗ ✗ ✗ ✗
AIW ✓ ✗ ✓ ✗ ✗ ✗
Calendar Arithmetic ✓ ✗ ✗ ✗ ✗ ✗
Circuit Logic ✗ ✗ ✓ ✗ ✗ ✗
Emoji Mystery ✗ ✗ ✓ ✗ ✗ ✗
Family Relationships ✗ ✗ ✓ ✗ ✗ ✗
Figlet Font ✗ ✓ ✗ ✓ ✗ ✗
Game of Life ✗ ✓ ✓ ✗ ✗ ✗
Game of Life Halting ✗ ✓ ✓ ✗ ✗ ✗
Knights Knaves ✗ ✗ ✓ ✗ ✗ ✗
Needle Haystack ✗ ✗ ✗ ✗ ✗ ✓
Number Sequence ✓ ✗ ✓ ✗ ✗ ✗
Propositional Logic ✗ ✗ ✓ ✗ ✗ ✗
Ransom Note ✗ ✗ ✓ ✗ ✗ ✓
Rearc ✗ ✓ ✗ ✗ ✗ ✓
Self Reference ✗ ✗ ✓ ✗ ✗ ✗
Syllogism ✗ ✗ ✓ ✗ ✗ ✗
Zebra Puzzles ✗ ✗ ✓ ✗ ✗ ✗

E Full table of experimental results

Table 5: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (Qwen2.5-14B-Instruct-1M, GPT-4o). Values are task success rates (%). Columns: Qwen2.5-14B-Instruct-1M (All Text + CoT | All Code + CoT | CI w/o Fine-tune | R1-CI w/o GRPO | R1-CI), then GPT-4o (All Text + CoT | Code Interpreter).

Ave. Norm., Seen 40.8 47.3 35.9 60.6 63.6 52.5 67.4
Ave. Norm., Unseen 44.1 47.0 44.4 60.1 64.1 58.6 70.9
Ave. Norm., Total 41.7 47.2 38.1 60.5 63.7 54.1 68.3

Seen Tasks (SymBench):
Blocksworld 0 0 0 48 54 1 0
BoxLift 0 0 0 63 63 10 26
Combinatorial Calculation 39 37 40 81 81 44 83
Constrained Linear Arrangement 31 29 36 78 74 78 82
Cryptanalysis 9 7 7 18 23 20 18
GSM 74 79 47 79 81 77 80
Letter Logic Diagram 0 0 0 42 51 2 7
Light Puzzles 19 86 56 90 92 69 99
Logic Puzzle 26 34 27 77 81 62 67
Logical Equation 18 21 24 63 59 52 73
Mahjong Pattern 55 54 41 68 75 65 68
Math Geometry 77 73 58 76 78 78 73
Matrix Trans 28 53 38 95 97 79 91
New Operator 35 35 36 40 43 37 43
Pattern Recognition 12 74 43 80 83 50 98
Permutations And Combinations 50 61 58 63 65 72 86
Pooling 33 29 26 50 49 48 50
Reversi 2 29 11 65 68 4 68
Standard Sudoku 0 65 45 100 100 0 100
Statistical Counting 22 70 29 97 96 48 99
String Deletion And Modification 1 67 23 93 93 1 67
String Insertion 0 18 10 99 100 6 82
String Splitting 16 20 27 64 64 64 71
String Synthesis 0 6 1 44 49 2 13
Synthesis Decomposition 47 53 32 76 78 48 74
2048 31 39 11 46 51 40 38

Seen Tasks (Big Bench Hard):
Boolean Expressions 98 98 74 99 100 100 98
Causal Judgement 66 69 64 71 68 71 70
Date Understanding 84 75 66 86 86 89 84
Disambiguation QA 69 52 63 70 78 70 61
Dyck Languages 4 2 3 26 23 4 9
Formal Fallacies 84 72 56 91 94 83 77
Geometric Shapes 60 43 42 75 79 71 66
Hyperbaton 84 82 67 97 97 99 96
Logical Deduction Five Objects 88 89 79 92 97 96 98
Logical Deduction Seven Objects 75 72 77 84 85 92 90
Logical Deduction Three Objects 98 99 94 98 96 100 99
Movie Recommendation 67 67 66 80 77 76 74
Multistep Arithmetic Two 98 98 70 100 98 98 97
Navigate 92 90 73 95 96 97 98
Object Counting 87 94 55 94 95 100 99
Penguins In A Table 95 94 62 100 98 99 97
Tracking Shuffled Objects Seven Objects 85 95 71 99 100 99 95
Tracking Shuffled Objects Three Objects 88 94 80 99 100 100 98
Web of Lies 91 86 61 95 96 100 98
Word Sorting 41 91 68 93 93 90 96

Seen Tasks (Reasoning Gym):
Ab 0 41 20 91 93 6 68
Acre 64 60 53 87 84 84 82
Advanced Geometry 5 41 9 63 61 11 39
Aiw 16 5 8 42 61 27 10
Arc 1d 11 7 5 26 30 27 13
Arc Agi 3 1 1 5 5 9 7
Base Conversion 95 98 18 98 100 96 99
Basic Arithmetic 75 43 44 78 87 85 80
Bf 0 0 27 94 96 27 51
Binary Alternation 14 72 28 72 72 19 62
Codeio 14 14 14 21 27 18 10
Color Cube Rotation 24 28 17 30 34 61 56
Complex Arithmetic 99 91 74 96 97 98 97
Count Primes 2 87 47 100 100 3 100
Countdown 13 6 14 59 58 16 65
Course Schedule 54 87 54 92 97 87 97
Dice 0 49 51 94 96 0 95
Emoji Mystery 40 17 29 40 42 0 93
Family Relationships 74 74 77 64 75 79 78
Fraction Simplification 69 92 76 98 99 40 99
Futoshiki 0 0 0 0 0 0 0
Game Of Life 1 46 40 35 40 9 84
Gcd 99 99 44 96 93 100 100
Graph Color 78 75 26 48 49 98 63
Group Anagrams 34 0 0 7 62 96 99
Isomorphic Strings 82 98 69 93 89 83 100
Jugs 7 1 7 3 1 57 42
Knight Swap 0 10 0 3 10 5 30
Largest Island 37 41 57 100 100 58 100
Lcm 100 99 52 100 100 100 100
Leg Counting 63 45 43 67 65 78 86
List Functions 41 10 29 57 61 50 68
Manipulate Matrix 23 18 13 19 23 38 48
Maze 5 8 6 12 21 26 70
Needle Haystack 98 97 96 92 95 100 100
Number Filtering 35 0 25 50 49 99 100
Number Format 93 89 57 98 98 100 98
Number Sorting 95 50 40 81 81 100 100
Palindrome Generation 39 92 53 33 48 74 59
Palindrome Partitioning 1 0 1 7 2 7 99
Polynomial Multiplication 45 8 53 57 55 75 84
Pool Matrix 42 25 4 21 29 43 33
Propositional Logic 55 53 44 46 47 55 49
Quantum Lock 23 62 27 32 40 45 58
Ransom Note 86 99 78 92 94 94 100
Rectangle Count 20 1 1 8 13 38 38
Rotate Matrix 29 4 1 0 1 8 3
Rotten Oranges 1 15 14 16 17 10 94
Rush Hour 0 0 0 0 0 0 0
Self Reference 12 8 6 8 6 13 9
Shortest Path 6 44 23 25 44 16 100
Simple Geometry 49 0 0 12 27 99 100
Simple Integration 61 59 51 27 31 43 29
Sokoban 1 0 0 0 0 0 0
Spiral Matrix 8 91 64 95 97 31 100
String Manipulation 12 89 23 47 66 14 31
Syllogism 78 79 62 75 74 83 82
Tower Of Hanoi 7 5 1 8 7 23 60
Tsumego 2 1 1 3 0 10 14
Word Ladder 4 3 29 1 3 8 5
Zebra Puzzles 38 26 23 41 44 51 59

Unseen Tasks (SymBench):
BoxNet V2 1 5 3 9 12 32 54
Eight Queens 2 63 17 64 67 7 65
Game24 23 21 36 81 89 36 25
Gridworld 4 5 5 4 7 26 30
Letters 17 99 53 74 80 82 93
Math Counting And Probability 82 79 55 80 78 85 82
Number Multiply 27 100 67 80 84 34 98

Unseen Tasks (Big Bench Hard):
Reasoning About Colored Objects 90 95 93 84 90 100 91
Ruin Names 64 58 52 61 58 80 81
Salient Translation Error Detection 60 56 54 56 54 68 67
Snarks 83 81 73 78 82 90 86
Sports Understanding 78 66 52 71 78 74 76
Temporal Sequences 97 95 88 92 99 100 100
Tracking Shuffled Objects Five Objects 91 95 81 99 98 99 100

Unseen Tasks (Reasoning Gym):
Binary Matrix 3 0 51 44 64 12 14
Bitwise Arithmetic 9 28 59 50 56 40 100
Caesar Cipher 0 1 28 24 36 14 53
Calendar Arithmetic 67 60 53 77 77 78 88
Chain Sum 100 69 56 44 54 69 69
Circuit Logic 42 16 41 86 86 9 100
Count Bits 4 57 54 77 85 59 100
Decimal Arithmetic 48 15 51 0 0 1 1
Figlet Font 0 0 0 100 100 100 100
Game of Life Halting 100 100 100 35 42 75 45
Intermediate Integration 49 65 40 60 68 72 74
Knights Knaves 72 60 52 2 4 53 55
Letter Jumble 8 0 2 0 0 0 0
Modulo Grid 0 0 0 64 66 72 70
Number Sequence 69 56 51 86 94 48 99
Polynomial Equations 70 94 53 98 97 58 99
Products 61 100 52 0 14 32 30
Rearc 0 0 0 0 0 0 0
Rubiks Cube 0 0 0 0 0 0 0
Simple Equations 100 0 40 91 92 99 100
Spell Backward 26 13 66 86 85 84 82
Time Intervals 62 53 32 71 78 81 96
Word Sequence Reversal 21 33 31 99 98 100 100

Table 6: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (Qwen2.5-7B-Instruct-1M, Qwen2.5-3B-Instruct). Values are task success rates (%). Columns: Qwen2.5-7B-Instruct-1M (All Text + CoT | All Code + CoT | CI w/o Fine-tune | R1-CI w/o GRPO | R1-CI), then Qwen2.5-3B-Instruct (same five methods).

Ave. Norm., Seen 31.7 34.5 31.0 57.6 60.4 19.6 18.6 17.1 50.9 54.5
Ave. Norm., Unseen 32.2 41.5 38.1 57.0 60.6 18.3 21.8 20.1 48.0 51.5
Ave. Norm., Total 31.8 36.3 32.8 57.5 60.5 19.3 19.5 17.9 30.2 53.7

Seen Tasks (SymBench):
Blocksworld 0 0 0 50 51 0 0 0 27 32
BoxLift 0 0 0 53 58 0 0 0 34 37
Combinatorial Calculation 29 29 42 77 75 10 13 20 76 84
Constrained Linear Arrange. 14 16 15 71 71 8 14 6 64 65
Cryptanalysis 2 7 2 7 7 1 1 1 4 3
Gsm 67 74 37 83 83 58 53 42 76 81
Letter Logic Diagram 0 0 0 42 45 0 0 0 37 42
Light Puzzles 15 23 39 74 86 8 27 19 94 95
Logic Puzzle 18 20 10 75 78 5 9 6 56 66
Logical Equation 14 1 24 42 61 8 6 15 41 39
Mahjong Pattern 49 44 39 73 70 42 29 39 70 74
Math Geometry 67 52 53 71 74 59 49 47 66 72
Matrix Trans 58 72 64 89 86 12 47 29 83 89
New Operator 39 42 34 49 50 20 41 30 44 43
Pattern Recog 11 16 44 82 84 5 7 4 62 74
Permut. And Combina. 26 44 35 39 49 20 25 15 42 51
Pooling 10 38 39 49 50 2 19 16 47 39
Reversi 4 23 5 63 48 7 3 9 71 73
Standard Sudoku 0 30 19 100 100 0 31 13 99 98
Statistical Counting 10 65 82 97 98 3 34 10 98 99
String Deletion And Modifi. 1 26 22 88 91 4 17 2 87 90
String Insertion 0 27 29 98 98 5 5 4 98 98
String Splitting 9 19 25 68 63 0 2 3 58 65
String Synthesis 0 0 0 39 46 0 2 0 57 65
Synthesis Decomp 30 17 28 77 71 14 13 9 82 84
2048 35 30 28 53 53 21 29 19 46 58

Seen Tasks (Big Bench Hard):
Boolean Expression 92 88 98 99 98 92 89 88 100 100
Causal Judgement 56 57 57 69 70 51 53 53 61 65
Date Understanding 75 57 83 84 85 69 55 68 81 87
Disambiguation QA 61 60 65 71 72 49 59 52 58 67
Dyck Languages 0 3 3 16 20 3 6 4 11 13
Formal Fallacies 60 53 60 86 83 54 58 56 73 75
Geometric Shapes 43 35 40 72 70 21 21 24 67 65
Hyperbaton 82 77 76 91 92 71 58 71 87 90
Logical Deduction 5 Objects 60 75 72 84 91 48 38 54 78 76
Logical Deduction 7 Objects 57 55 60 72 80 51 45 42 73 71
Logical Deduction 3 Objects 88 77 82 98 99 73 72 71 89 95
Movie Recommendation 57 66 60 70 73 58 46 63 78 73
Multistep Arithmetic Two 84 91 88 97 98 75 92 86 93 90
Navigate 85 89 73 91 94 64 64 57 83 86
Object Counting 75 91 89 88 95 67 83 79 77 87
Penguins In A Table 94 87 84 96 98 72 76 66 97 95
Tracking Shuffled Objects 7 Obj. 64 95 57 99 98 31 74 38 99 99
Tracking Shuffled Objects 3 Obj. 79 85 83 100 100 66 75 64 99 99
Web Of Lies 82 69 70 99 96 60 55 67 100 99
Word Sorting 26 94 80 95 96 27 89 34 68 89

Seen Tasks (Reasoning Gym):
Ab 0 3 5 92 89 0 0 0 92 92
Acre 38 21 19 79 81 12 0 0 80 81
Advanced Geometry 5 41 0 60 60 0 0 0 63 59
Aiw 0 1 0 39 38 0 0 0 33 35
Arc 1D 3 3 3 15 19 0 0 0 7 1
Arc Agi 1 1 1 4 4 0 0 0 1 1
Base Conversion 94 96 21 97 96 26 2 13 92 93
Basic Arithmetic 57 48 4 83 85 0 0 2 72 74
Bf 0 0 10 92 92 0 0 0 82 87
Binary Alternation 5 36 8 76 70 0 0 0 60 74
Codeio 8 4 13 10 5 5 0 5 15 10
Color Cube Rotation 24 19 1 31 28 2 4 0 28 21
Complex Arithmetic 77 82 1 97 94 5 4 3 92 97
Count Primes 0 93 2 95 97 0 0 0 96 97
Countdown 3 1 4 45 19 0 0 2 29 29
Course Schedule 47 34 9 92 97 5 0 3 78 85
Dice 0 27 2 75 77 0 0 0 51 73
Emoji Mystery 0 6 4 41 20 0 0 1 13 13
Family Relationships 38 31 21 70 53 2 0 1 26 48
Fraction Simplification 26 100 1 96 83 0 0 0 87 90
Futoshiki 0 0 0 0 0 0 0 0 0 0
Game Of Life 3 30 24 43 22 2 2 3 37 42
Gcd 68 80 95 97 94 88 32 10 96 96
Graph Color 70 2 25 49 64 43 3 15 21 22
Group Anagrams 42 0 10 85 89 27 0 3 12 43
Isomorphic Strings 69 30 29 89 87 7 0 3 76 80
Jugs 0 0 2 0 0 0 0 0 0 0
Knight Swap 2 3 16 11 4 0 0 2 8 5
Largest Island 25 82 17 96 92 12 11 0 69 73
Lcm 53 96 6 96 99 1 0 1 99 97
Leg Counting 22 29 2 56 7 3 0 0 31 41
List Functions 6 0 26 53 38 5 0 3 22 27
Manipulate Matrix 14 13 14 24 19 2 4 2 10 11
Maze 13 5 16 12 21 7 0 0 0 3
Needle Haystack 93 69 86 87 88 84 32 39 79 85
Number Filtering 45 16 60 68 54 10 0 0 53 67
Number Format 73 29 73 82 84 57 4 4 57 69
Number Sorting 88 4 89 83 82 7 2 3 56 71
Palindrome Generation 11 40 36 41 37 3 1 2 10 16
Palindrome Partitioning 0 0 0 7 13 0 0 0 1 4
Polynomial Multiplication 29 2 0 53 42 3 0 0 32 39
Pool Matrix 27 0 2 20 23 8 0 0 8 13
Propositional Logic 46 19 37 43 34 30 5 32 32 31
Quantum Lock 13 21 26 38 24 2 1 5 4 17
Ransom Note 71 100 86 92 96 66 85 20 96 90
Rectangle Count 13 0 0 9 3 2 0 0 2 2
Rotate Matrix 34 0 26 5 4 22 1 2 6 5
Rotten Oranges 4 15 6 13 3 1 1 6 4 6
Rush Hour 0 0 0 0 0 0 0 0 0 0
Self Reference 25 2 14 9 13 3 1 1 7 12
Shortest Path 0 5 19 39 29 0 0 21 12 13
Simple Geometry 21 0 1 4 12 1 0 0 1 15
Simple Integration 31 13 36 23 19 22 3 5 27 23
Sokoban 1 0 0 0 0 0 0 0 0 0
Spiral Matrix 6 79 75 94 50 10 52 39 51 64
String Manipulation 13 54 37 25 47 6 13 1 36 41
Syllogism 70 72 71 74 80 63 39 63 68 69
Tower Of Hanoi 2 5 7 3 4 0 0 2 2 2
Tsumego 2 1 1 0 0 0 1 1 0 0
Word Ladder 0 0 0 3 0 0 0 0 2 2
Zebra Puzzles 36 7 24 40 39 27 5 11 29 29

Unseen Tasks (SymBench):
BoxNet V2 0 1 3 3 3 0 0 0 1 2
Eight Queens 0 47 25 63 61 1 13 4 55 58
Game24 16 26 35 74 80 3 10 16 73 72
Gridworld 0 1 0 3 4 0 2 2 4 3
Letters 0 59 91 99 90 1 49 23 72 80
Math Count. And Probab. 72 75 51 77 73 61 66 53 69 70
Number Multiply 14 100 93 68 80 18 97 80 59 60

Unseen Tasks (Big Bench Hard):
Reasoning About Colored Obj. 87 80 86 83 85 72 57 67 75 83
Ruin Names 49 43 56 49 58 26 39 44 39 42
Salient Transla. Error Detect. 44 45 47 48 52 43 49 37 53 49
Snarks 79 59 73 77 77 66 65 69 69 72
Sports Understanding 71 71 64 56 66 60 55 64 49 50
Temporal Sequences 81 49 87 80 84 45 27 38 50 62
Tracking Shuffled Obj. 5 Obj. 73 95 75 98 97 36 74 44 96 95

Unseen Tasks (Reasoning Gym):
Binary Matrix 2 15 18 26 38 0 14 1 21 22
Bitwise Arithmetic 3 80 29 23 45 1 1 0 23 25
Caesar Cipher 0 1 15 44 40 0 0 0 4 12
Calendar Arithmetic 33 39 8 74 77 5 0 2 52 65
Chain Sum 73 80 6 97 95 2 0 7 88 92
Circuit Logic 51 19 5 48 51 0 0 0 37 43
Count Bits 0 100 2 99 99 1 0 0 74 85
Decimal Arithmetic 28 58 14 80 86 3 0 3 59 77
Figlet Font 0 0 0 0 0 0 0 0 0 0
Game Of Life Halting 100 100 100 100 100 100 100 100 100 100
Intermediate Integration 32 15 0 28 42 3 1 2 25 25
Knights Knaves 50 27 39 46 64 29 14 30 43 43
Letter Jumble 2 0 1 2 0 0 0 0 0 0
Modulo Grid 1 0 0 0 0 0 0 0 0 0
Number Sequence 56 42 44 61 63 33 0 4 48 54
Polynomial Equations 65 74 74 88 90 16 0 24 77 80
Products 45 96 56 81 95 32 70 2 90 84
Rearc 0 0 0 0 0 0 0 0 0 0
Rubiks Cube 0 0 0 0 0 0 0 0 0 0
Simple Equations 8 2 0 78 84 5 0 0 73 71
Spell Backward 13 11 78 95 93 0 0 18 61 78
Time Intervals 32 23 65 64 72 14 5 4 55 62
Word Sequence Reversal 10 1 68 98 97 1 0 6 81 89

Table 7: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (DeepSeek-7B, DeepSeek-14B). Values are task success rates (%). Columns: DeepSeek-7B (All Text + CoT | All Code + CoT | CI w/o Fine-tune), then DeepSeek-14B (All Text + CoT | All Code + CoT).
Ave. Norm., Unseen 27.9 28.7 53.1 40.1 43.4

Unseen Tasks (SymBench):
BoxNet V2 0 0 0 0 0
Eight Queens 0 0 56 2 4
Game24 46 51 84 50 52
Gridworld 5 5 5 8 14
Letters 64 82 75 91 96
Math Counting And Probability 80 76 82 72 84
Number Multiply 28 88 64 39 94

Unseen Tasks (Big Bench Hard):
Reasoning About Colored Objects 78 87 89 94 96
Ruin Names 37 29 39 65 63
Salient Translation Error Detection 48 47 40 63 58
Snarks 64 60 51 83 69
Sports Understanding 45 51 38 67 70
Temporal Sequences 72 74 58 100 99
Tracking Shuffled Objects Five Objects 95 95 100 96 98

Unseen Tasks (Reasoning Gym):
Binary Matrix 0 0 2 2 0
Bitwise Arithmetic 8 5 21 1 7
Caesar Cipher 1 0 29 0 0
Calendar Arithmetic 44 21 78 44 45
Chain Sum 16 59 99 59 83
Circuit Logic 0 0 38 0 0
Count Bits 0 0 99 0 15
Decimal Arithmetic 2 10 65 4 18
Figlet Font 0 0 0 1 0
Game of Life Halting 100 100 100 100 100
Intermediate Integration 1 5 33 33 20
Knights Knaves 25 6 55 53 51
Letter Jumble 0 0 0 3 7
Modulo Grid 0 0 0 0 0
Number Sequence 1 6 69 46 42
Polynomial Equations 48 16 80 36 59
Products 9 32 88 37 67
Rearc 0 0 0 0 0
Rubiks Cube 0 0 0 0 0
Simple Equations 1 24 92 51 72
Spell Backward 29 1 74 45 26
Time Intervals 0 8 66 40 43
Word Sequence Reversal 85 22 94 98 54
arXiv:2505.21670v1 [cs.CL] 27 May 2025

Rethinking the Outlier Distribution in Large Language Models: An In-depth Study

Rahul Raman, Khushi Sharma, Sai Qian Zhang
New York University
{rr4549, ks7406, sai.zhang}@nyu.edu

Abstract

Investigating outliers in large language models (LLMs) is crucial due to their significant impact on various aspects of LLM performance, including quantization and compression. Outliers often cause considerable quantization errors, leading to degraded model performance. Identifying and addressing these outliers can enhance the accuracy and efficiency of the quantization process, enabling smoother deployment on edge devices or specialized hardware. Recent studies have identified two common types of outliers in LLMs: massive activations and channel-wise outliers. While numerous quantization algorithms have been proposed to mitigate their effects and maintain satisfactory accuracy, few have thoroughly explored the root causes of these outliers in depth. In this paper, we conduct a comprehensive investigation into the formation mechanisms of these outliers and propose potential strategies to mitigate their occurrence. Ultimately, we introduce efficient approaches to eliminate most massive activations and channel-wise outliers with minimal impact on accuracy.

1 Introduction

Large Language Models (LLMs) have emerged as a cornerstone in the field of natural language processing (NLP), transforming how we approach various linguistic tasks. These models, with their ability to understand and generate human-like text, have revolutionized applications ranging from conventional NLP tasks such as machine translation (Huang et al., 2023; Xu et al., 2024; Zhu et al., 2023) and sentiment analysis (Miah et al., 2024; Wang et al., 2024; Deng et al., 2023) to advanced tasks such as code generation (Kazemitabaar et al., 2023; Thakur et al., 2024; Nakkab et al., 2024). However, the enormous size of LLMs, often reaching billions of parameters, presents substantial challenges for deployment, necessitating techniques that enable efficient inference.

To address this, Post-Training Quantization (PTQ) (Frantar et al., 2022; Xiao et al., 2023; Lin et al., 2024a; Yao et al., 2022) provides a practical, low-cost approach for model quantization, either completely training-free or with minimal calibration effort (Cai et al., 2020; Li et al., 2021). In comparison to Quantization-Aware Training (QAT), which demands multiple fine-tuning iterations, PTQ incurs much lower computational costs, making it suitable for LLMs. Unfortunately, outliers in LLM activations and KV vectors (Dettmers et al., 2022; Zeng et al., 2022) introduce significant magnitude variations among LLM elements, which in turn lead to a notable drop in model accuracy when low-precision PTQ is applied (Xiao et al., 2023; Tseng et al., 2024; Ashkboos et al., 2024b).

Prior research has identified two types of outliers in LLM activations. The first, massive activations (MAs), commonly appear across various LLMs and are typically linked to specific tokens in certain channels (Sun et al., 2024). The second type, channel-wise outliers (Dettmers et al., 2022; Xiao et al., 2023; Ashkboos et al., 2024b), manifests in bulk within specific channels. These findings have inspired a two-stage approach in modern quantization techniques: initially, methods are employed to eliminate outliers in the pretrained LLM, resulting in a model with a smoother value distribution in its activations.
Subsequently, quantization algorithms such as GPTQ (Frantar et al., 2022) and OBQ (Frantar and Alistarh, 2022) are applied to produce low-precision LLMs, as shown in Figure 1.

[Figure 1: Given a pretrained LLM, techniques are first applied to smooth out the outliers in its activations. The resulting model is then quantized, achieving superior accuracy.]

Outlier smoothing is a crucial step in achieving efficient LLM quantization. Understanding the root causes of outliers is essential for developing effective quantization techniques and gaining deeper insights into model behavior and robustness. While prior studies have identified the presence of MAs and channel-wise outliers and proposed methods to mitigate them (Sun et al., 2024; Liu et al., 2024; Bini et al., 2024; Xiong et al., 2024), none have explored the fundamental reasons behind the existence of these outliers from a numerical perspective, particularly at operator-level granularity. This finer-grained understanding is crucial, as different layers and operators may contribute uniquely to the formation and propagation of LLM outliers, influencing both performance and accuracy in low-precision LLMs.

In this work, we investigate the underlying reasons for the existence of outliers in LLMs at the operator level through extensive empirical analysis. Our study provides valuable insights to guide the development of effective outlier smoothing algorithms. Building on these findings, we propose novel methods that efficiently mitigate the majority of massive activations and channel-wise outliers without compromising model accuracy, significantly reducing the complexity of subsequent LLM quantization. In summary, our findings on LLM outliers can be summarized as follows:

- We empirically demonstrate that massive activations (MAs) are predominantly generated in the initial layers of the model. Once these MAs arise, they persist throughout the LLM, propagated through subsequent layers via residual connections.
- Previous studies indicate that the removal of MAs can significantly impact the quantization process. Surprisingly, our empirical analysis shows that eliminating MAs introduced by residual connections has no measurable effect on the model's accuracy. Notably, these MAs constitute the majority of MAs in LLMs.
- Channel-wise outliers in LLMs initially emerge due to the normalization operations within the model. The rescaling operation within the normalization layer exacerbates this issue by introducing an increasing number of channel-wise outliers.
- Certain channels within the weight matrices can also contribute to the emergence of channel-wise outliers in the intermediate results of LLMs.
2 Background and Related Work

2.1 LLM Operations

Modern LLMs (e.g., the Llama series (Touvron et al., 2023a,b) and the GPT series (Radford et al., 2019; Brown, 2020)) are constructed as a stack of transformer decoders, each comprising two fundamental components: a self-attention (SA) block and a feed-forward network (FFN), as depicted in Figure 2 (a). During LLM serving, the input to the SA block is first processed by a normalization operation (e.g., LayerNorm or RMSNorm). As detailed in Figure 3 (d), this normalization consists of two key steps: standardization and rescaling. Specifically, the input X is standardized by subtracting its mean μX and dividing by its standard deviation σX. Subsequently, each channel of the standardized output is scaled by a learnable parameter γ and shifted by another learnable parameter β.

The output of the normalization operation is then multiplied with three weight matrices WQ, WK, and WV, yielding the query (q), key (k), and value (v), shown as x3, x4, and x5 in Figure 2, respectively. The resulting q and k, in combination with v, then undergo multiplication, Softmax, and residual addition to generate the SA output, as shown in Figure 2 (b). The SA output is then passed to the FFN for further processing, which typically involves a gated MLP (Radford, 2018; Radford et al., 2019) (Figure 2 (c)) or a standard MLP (Liu et al., 2021; Touvron et al., 2023a,b) (Figure 2 (d)). The FFN consists of a normalization operation and multiple fully connected (FC) layers with an intermediate activation function, such as GeLU or SiLU (Hendrycks and Gimpel, 2016).

[Figure 2: (a) Architecture of an LLM decoder block. (b), (c), and (d) show the architectures of the self-attention block, standard FFN (conventional MLP), and gated FFN (GLU), respectively. The notation (x1 ... x9 for the SA block; y1 ... y8 for the FFN) is used throughout the remaining sections.]
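The two-step decomposition described above can be written out directly. A minimal sketch of standard LayerNorm with learnable γ and β (RMSNorm drops the mean subtraction and β); tensor shapes are illustrative:

```python
import torch

# Normalization as used in the text: standardization, then rescaling.

def layer_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(dim=-1, keepdim=True)
    sigma = x.std(dim=-1, keepdim=True, unbiased=False)
    x_hat = (x - mu) / (sigma + eps)   # standardization: (X - mu) / sigma
    return gamma * x_hat + beta        # rescaling: per-channel gamma, beta

x = torch.randn(2, 8)                  # (tokens, channels), toy sizes
gamma, beta = torch.ones(8), torch.zeros(8)
print(layer_norm(x, gamma, beta).shape)
```

Note that γ is applied per channel, which is why the rescaling step can selectively amplify particular channels, the mechanism the paper links to channel-wise outliers.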
2.2 Outliers in LLMs

As prior studies have demonstrated (Dettmers et al., 2022; Zeng et al., 2022; Sun et al., 2024), outliers can be categorized into two types: massive activations (MAs) and channel-wise outliers (COs). The presence of outliers in LLM activations and KV vectors (Dettmers et al., 2022; Zeng et al., 2022) often causes a significant drop in model accuracy when low-precision PTQ is applied (Xiao et al., 2023; Tseng et al., 2024; Ashkboos et al., 2024b).

[Figure 3: (a) An example of a massive activation present in the input x1. (b) An example of an outlier channel at position x2 in the LLM. (c) The existence of outliers leads to an output Y′ that differs from the original output Y. (d) The normalization operation within the LLM: standardization (X − μ)/σ followed by rescaling γX + β.]

While earlier research, such as (Bondarenko et al., 2023), has shown that the attention mechanism can lead to excessive activations by concentrating too much on specific tokens (resulting in scenarios where the mechanism fails to remain inactive and creates an outlier problem), these studies mainly focus on BERT architectures. In contrast, our analysis expands the scope to include LLaMA, GPT, and Qwen models. This broader investigation provides new insights into the architectural changes that can give rise to activation outliers. As discussed in (Li et al., 2024), the kurtosis of the activation tensor can be used to reflect MAs in GLUs; our focus here, however, remains on characterizing and categorizing the outlier phenomenon and providing a simpler method to remove MAs.

To demonstrate this, we profile the inputs x2 to the WQ, WK, and WV matrices within the self-attention (SA) block, as shown in Figure 2, using the WikiText dataset (Merity et al., 2016) on the LLaMA-7B model. Following the notation in Figure 2, we record the input to the normalization operation, x1. The results presented in Figure 3 (a) highlight the presence of MAs in x1, with magnitudes often reaching the thousands. Furthermore, these MAs propagate through the normalization operation, causing x2 to also exhibit outliers. Although the magnitude of these outliers is reduced after normalization, they remain significant. Additionally, Figure 3 (b) shows the distribution of the COs in x2, corroborating earlier findings (Xiao et al., 2023; Ashkboos et al., 2024b,a; Frantar and Alistarh, 2023). To isolate the impact of MAs, we remove them from x2 to better illustrate the distribution of COs.

Figure 3 (c) illustrates that when the input X contains both types of outliers, its quantized version Q(X) suffers significant quantization error. As a result, the output Y′, derived from the quantized input Q(X) and quantized weight Q(W), deviates considerably from the original output Y = XW, leading to a noticeable degradation in accuracy.
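The effect in Figure 3 (c) can be reproduced with a few lines of per-tensor symmetric INT8 quantization. This toy snippet is ours, not the paper's code: it plants a single MA-sized value and measures how much the inflated quantization scale degrades everything else.

```python
import torch

# Per-tensor symmetric INT8 quantize-dequantize.

def quantize_int8(x: torch.Tensor) -> torch.Tensor:
    scale = x.abs().max() / 127.0       # one outlier dominates this scale
    return torch.round(x / scale).clamp(-127, 127) * scale

torch.manual_seed(0)
x = torch.randn(4096)
x_ma = x.clone()
x_ma[588] = 1500.0                      # plant a single massive activation

err_plain = (quantize_int8(x) - x).abs().mean()
err_ma = (quantize_int8(x_ma) - x_ma).abs().mean()
print(f"mean |error| without MA: {err_plain:.4f}, with MA: {err_ma:.4f}")
```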
2.3 Outlier Smoothing for Low-precision LLM Quantization

Reducing quantization error is crucial for achieving effective low-precision model quantization. However, as highlighted by LLM.int8() (Dettmers et al., 2022), directly quantizing LLMs to INT8 leads to significant accuracy loss due to the presence of outliers. To address these outliers, LLM.int8() employs a mixed-precision decomposition scheme. While this approach preserves model accuracy, its fine-grained decomposition introduces computational overhead and potential performance bottlenecks.

Olive (Guo et al., 2023) addresses the impact of MAs on low-precision quantization by proposing a hybrid quantization scheme that quantizes MAs separately from the remaining elements. Similarly, PrefixQuant (Chen et al., 2024) groups tokens with MAs and jointly quantizes them, reducing quantization error. The same principle has also been applied to KV cache quantization (Zhang et al., 2024a). Collectively, these studies highlight the critical importance of understanding outlier behavior within LLMs in order to develop more effective quantization strategies.

To eliminate channel-wise outliers, SmoothQuant (Xiao et al., 2023) proposes migrating the quantization difficulty from activations to weights using scale invariance, allowing INT8 quantization for both weights and activations across all matrix multiplications in LLMs. Outlier Suppression+ (Wei et al., 2023) further enhances quantization by introducing a fast and stable scheme for calculating scaling values, effectively balancing the quantization burden. To reduce manual intervention and improve performance under extremely low-bit quantization, OmniQuant (Shao et al., 2023) introduces Learnable Weight Clipping and Learnable Equivalent Transformation, optimizing both weight-only and weight-activation quantization. In W4A8 quantization with weight clipping, QQQ (Zhang et al., 2024b) dynamically manages outliers through adaptive smoothing, and QServe (Lin et al., 2024b) introduces SmoothAttention to mitigate the accuracy degradation caused by 4-bit KV quantization; both have greatly improved LLM accuracy under W4A8 quantization.

While most previous studies focus on mitigating the impact of channel-wise outliers during the quantization process, this work investigates the root causes of both MAs and COs, and proposes insights for targeting and removing these outliers at their fundamental level.
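To make the scale-invariance idea behind SmoothQuant concrete, here is a simplified sketch of the per-channel scale migration with strength α. This is an illustration of the published formula, not the official implementation:

```python
import torch

# SmoothQuant-style migration: dividing activation channels by s and
# multiplying the matching weight rows by s leaves X @ W unchanged,
# while flattening the activation outliers: (X / s) @ (diag(s) W) == X @ W.

def smooth(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
    act_max = x.abs().amax(dim=0)                 # per-channel activation range
    w_max = w.abs().amax(dim=1)                   # per-channel weight range
    s = (act_max ** alpha) / (w_max ** (1 - alpha))
    return x / s, w * s.unsqueeze(1)

x, w = torch.randn(4, 8), torch.randn(8, 16)
x_s, w_s = smooth(x, w)
print(torch.allclose(x @ w, x_s @ w_s, atol=1e-5))  # True: output preserved
```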
|
https://arxiv.org/abs/2505.21670v1
|
Each experiment is averaged over 100 random samples from the dataset. LLM performance is evaluated using the perplexity (PPL) metric. Following the definition of MAs from (Sun et al., 2024), an activation is considered massive if its magnitude exceeds 100 and is at least 1,000 times greater than the median activation magnitude.

3.2 Observations on Massive Activation

In our experiments, we investigate the existence of MAs in the hidden-state tensors within the attention and MLP blocks. We then modify the inference process of LLMs by directly intervening in the layers where massive activations emerge. Specifically, for any hidden state exhibiting massive activations, we manually set those activations to fixed values. The modified hidden state is then passed to the subsequent layer, with the remaining computations proceeding as usual. These studies yield the following surprising observations, which differ from or were not reported in earlier literature.

Massive activations first appear in the FFN block: We find that for all LLMs, MAs first appear within the feed-forward network (FFN) of the first layer. Specifically, in models using gated MLPs, such as the LLaMA series and Qwen, MAs emerge in y6, the product of y4 and y5, as illustrated in Figure 2(c). In contrast, for LLMs with conventional MLPs, like GPT-2, MAs are first produced immediately after the GeLU activation, represented by y4 in Figure 2(d).

Table 2: Impact of MAs on the performance (perplexity, WikiText / C4) of LLaMA, GPT-2, and Qwen models.

| Intervention | LLaMA3.2-3B | LLaMA3.1-8B | LLaMA2-13B | GPT-2 | Qwen2.5-7B |
|---|---|---|---|---|---|
| Original | 5.567 / 10.790 | 6.941 / 9.046 | 4.355 / 6.405 | 14.795 / 19.460 | 6.520 / 11.773 |
| TMAs to mean at y7 | 1124111.75 / 21046.82 | 21281.49 / 1301562.25 | 1301562.25 / 6469.42 | 14.841 / 19.560 | 71216.17 / 66588.86 |
| TMAs to zeroes at y7 | 1138151.23 / 21951.41 | 21601.10 / 1302018.53 | 1309211.61 / 7128.32 | 14.911 / 19.928 | 71835.61 / 67518.35 |
| TMAs to mean at y6 | 6.053 / 14.423 | 7.026 / 10.046 | 4.355 / 6.405 | 14.795 / 19.460 | 6.537 / 11.797 |
| TMAs to zeroes at y6 | 6.237 / 14.767 | 7.147 / 10.255 | 4.371 / 6.498 | 14.831 / 19.565 | 6.642 / 13.021 |

Figure 5: Left: TMAs and FMAs (top-1, top-2, and top-3 magnitudes) within the input of GPT-2 across each layer. Right: after removing the MAs in the residual connection, only TMAs remain.

Most MAs are carried by residual connections within the LLM: Among the MAs observed across LLM layers, most are propagated through the residual connections in both the self-attention (SA) and FFN blocks. Specifically, after initially appearing in the FFN, the residual links carry these MAs through the inputs of the SA and FFN blocks across the middle layers of the LLM. These MAs are not newly generated but are instead carried forward from previously produced MAs through the intermediate layers via residual connections. For the final layers (e.g., the 39th and 40th layers in LLaMA), MAs are generated spontaneously and are not caused by residual connections.
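The MA criterion above is straightforward to implement. The following sketch (our illustration, with a planted value) flags positions matching the Sun et al. (2024) definition:

```python
import torch

def massive_activation_mask(t: torch.Tensor) -> torch.Tensor:
    """Flag massive activations per Sun et al. (2024): magnitude above 100
    and at least 1,000x the median absolute activation of the tensor."""
    mags = t.abs()
    return (mags > 100) & (mags > 1000 * mags.median())

x = torch.randn(4, 32)
x[0, 5] = -1211.0   # plant an MA at (token 0, channel 5)
print("MA positions (token, channel):",
      massive_activation_mask(x).nonzero().tolist())
```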
|
https://arxiv.org/abs/2505.21670v1
|
To differentiate these MAs, we call those carried by the residual link Fake MAs (FMAs) and the rest True MAs (TMAs). To illustrate the presence of TMAs and FMAs, we conduct experiments on LLaMA-13B and GPT-2. The left sides of Figure 4 and Figure 5 show the top three elements with the highest magnitudes, identified as MAs, across the input of each layer. Building on this, we remove the residual connections for both the SA and FFN layers throughout the entire LLM. The right sides of Figure 4 and Figure 5 present the results after these residual connections are removed from all layers.

Figure 6: The red lines drawn to the origin plane indicate the MAs. The top three panels show the TMAs and FMAs of LLaMA-13B (x1 at layers 2, 25, and 39 with residual connections); the bottom three panels show the TMAs of the model after removing MAs in the residual connection. Layers 2 and 39 have a TMA at the same channel and token but with opposite signs. The MA at layer 25 is eliminated after the removal of the residual connection.

Our observations show that removing the TMAs at y6 effectively eliminates all TMAs and FMAs. Due to space constraints, we present results only for LLaMA-13B and GPT-2, although similar behaviors are observed in other LLMs.

Trends in TMA magnitudes: Across various models, TMAs exhibit consistent behavior: their magnitude remains fixed within specific channels, regardless of the input sequence tokens. Analyzing the sign of these TMAs reveals a clear pattern: in the final layers, TMAs have a similar magnitude but opposite sign compared to those in the initial layers, occurring at the same channel positions. This indicates that TMAs generated in the early layers are effectively suppressed in the later layers. Table 1 presents the average magnitudes of TMAs across multiple LLMs, highlighting their presence in the first initial layers and the last two layers. It also shows the top two MAs with the largest magnitudes in each layer's input, along with their corresponding token and channel indices (the first and second numbers within the brackets). While models like GPT-2 and Qwen display multiple initial and final layers with high activation magnitudes, the observed magnitude and sign trends persist. Figure 6 shows that layer 2 of LLaMA-13B has a negative TMA, while layer 39 has a positive TMA at the same channel and token position.

3.3 Impact of Massive Activation Values on LLM Accuracy

Building on the presence of TMAs and FMAs, we next analyze their impact on LLM accuracy. Specifically, we replace all TMAs, which are located at y6 of the FFN, with either zero or the mean value of their respective tensors.
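One way to realize this intervention is a forward pre-hook on the FFN down-projection, whose input is exactly the gated product y6 in Hugging Face LLaMA-style implementations. A minimal sketch under that assumption (our illustration, not the authors' code):

```python
import torch

def tma_removal_pre_hook(module, args):
    """Replace massive activations in y6 (the gated-MLP product feeding the
    down-projection) with the tensor mean, per the Sun et al. (2024) criterion."""
    y6 = args[0]
    mags = y6.abs()
    mask = (mags > 100) & (mags > 1000 * mags.median())
    return (torch.where(mask, y6.mean(), y6),) + tuple(args[1:])

# Assumption: in Hugging Face LLaMA models, mlp.down_proj receives
# act_fn(gate_proj(x)) * up_proj(x), i.e., y6 in the paper's notation.
# for layer in model.model.layers:
#     layer.mlp.down_proj.register_forward_pre_hook(tma_removal_pre_hook)
```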
|
https://arxiv.org/abs/2505.21670v1
|
As shown in Table 2, the results remain comparable to the original LLM. Notably, for LLaMA2-13B, GPT-2, and Qwen, the PPL values are nearly identical to those of the original LLM on both WikiText-2 and C4, demonstrating that TMAs and FMAs can be effectively eliminated without any negative impact on accuracy.

In contrast, removing the TMAs located at y7 of the FFN has disastrous effects on LLM performance. As shown in Table 2, replacing these TMAs with mean or zero values significantly increases PPL across models, with the exception of GPT-2. Thus, we show that most TMAs can be safely removed by replacing them with either zero or the mean of the tensor containing them at y6; consequently, no TMAs appear at y7 or propagate via the residual connection. More detailed information on MAs in the MLP and attention blocks is given in Appendix A.

3.4 Insights for MA Smoothing

The presence of MAs is widely acknowledged as a major challenge in LLM quantization, particularly when aiming to enable efficient matrix multiplication within SA. As demonstrated in Sections 3.2 and 3.3, all FMAs can be effectively eliminated by replacing them with either the mean value or zero, with negligible impact on LLM performance. This makes the corresponding activation matrices significantly easier to quantize.

In contrast, removing TMAs directly leads to severe performance degradation. As a result, existing outlier smoothing techniques, such as mathematical invariance transformations (Ashkboos et al., 2024b; Xiao et al., 2023), are typically applied exclusively to these outliers. Since mathematical invariance transformations (e.g., the Hadamard transform) introduce additional computational overhead for outlier smoothing, limiting their application to the small number of TMAs significantly reduces the overall computational cost.

4 Empirical Study on Channel-wise Outliers

4.1 Settings

In addition to MAs, channel-wise outliers (COs) are also observed within the intermediate results of LLMs, as noted in several prior studies (Xiao et al., 2023; Ashkboos et al., 2024b; Tseng et al., 2024; Liu et al., 2024). These outliers significantly degrade the performance of low-precision LLM quantization. Following our study on MAs, we examine the presence and formation of channel-wise outliers in various LLMs (the LLaMA series and GPT-2) using two datasets: WikiText and C4. LLM performance is assessed using the perplexity (PPL) metric, with each experiment averaged over 100 random samples.

Since no formal study of channel-wise outliers has been conducted before, we use the following criteria to search for them (a code sketch of this test follows below). A channel Aj within an activation matrix A is classified as an outlier channel if it satisfies both of the following:

- The mean of Aj exceeds the overall average of the tensor by more than m·σA, where m is a predefined parameter and σA is the standard deviation of the elements of A.
- The standard deviation of Aj is below a threshold β.

The first criterion ensures that the average value of the entire channel is sufficiently high to qualify as an outlier, while the second ensures that all elements within the channel have similar magnitudes, aligning with outlier-channel behavior.
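A direct implementation of this two-part test (our sketch; the planted channel is illustrative, and the defaults match the settings m = 4 and β = 1/3 stated next):

```python
import torch

def outlier_channels(A: torch.Tensor, m: float = 4.0, beta: float = 1/3) -> torch.Tensor:
    """Flag outlier channels of an activation matrix A (tokens x channels):
    the channel mean exceeds the tensor mean by more than m * sigma_A, and
    the channel standard deviation is below beta."""
    high_mean = A.mean(dim=0) > A.mean() + m * A.std()
    flat = A.std(dim=0) < beta
    return high_mean & flat

A = torch.randn(128, 64)
A[:, 17] = 5.0 + 0.1 * torch.randn(128)   # plant a flat, high-magnitude channel
print("outlier channels:", outlier_channels(A).nonzero().flatten().tolist())
```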
|
https://arxiv.org/abs/2505.21670v1
|
Without loss of generality, in the following experiments m is set to 4 and β to 1/3. We also present results under different settings in the subsequent sections.

4.2 Observations on Channel-wise Outliers

We examine the presence of outlier channels in the input, output, and hidden-state tensors within the SA and MLP blocks. These correspond to the inputs and outputs of the SA and FFN blocks (e.g., x1, x2, y1, and y2) as well as intermediate results (e.g., x3, x4, y3, and y5) depicted in Figure 2(b), (c), and (d). We then delve deeper into the normalization operation and the attention weight matrix multiplications, examining how the learned model weights associated with each of these transformations affect the occurrence of outlier channels in the output activations. Specifically, we observe the effects of smoothing the outlier channels within these weights by replacing them with fixed values. The observations are summarized below.

Figure 7: Changes in the number of channel-wise outliers after the input x1 of the first layer of LLaMA-13B passes through RMSNorm. Each pair of panels plots the channel-wise magnitudes within x1 and x2, with outlier channels denoted by red dots: (a) and (b) show results for m = 6, (c) and (d) for m = 4, and (e) and (f) for m = 2.

Channel-wise outliers first arise after the normalization operation in the first layer: We observe that in all evaluated LLMs, outlier channels first emerge during the initial normalization operation preceding the SA block. Figure 7 illustrates the average magnitude of each channel in the input and output of the normalization operation within the first layer of the SA block, with red dots representing the outlier channels. The results are shown for varying outlier criteria, with m set to 2, 4, and 6. Notably, the number of outlier channels increases markedly after the normalization operation.

Learned rescaling operations inside the normalization block produce outlier channels: As shown in Figure 3(d), the normalization operations within LLMs, such as LayerNorm and RMSNorm, consist of two components: standardization and rescaling. For example, in LayerNorm, the input is first normalized by subtracting its mean µ and dividing by its standard deviation σ. Each channel of the normalized output is then scaled by a learnable parameter γ and shifted by another learnable parameter β.

We conduct an outlier analysis of the tensors within the normalization block, as illustrated in Figure 8. To isolate the effects of channel-wise outliers, we first eliminate massive activations (MAs) from the input, allowing for a clearer visualization of outlier channels. In the normalization process, the inputs undergo token-wise standardization followed by a rescaling operation. Our findings reveal that the standardization step does not introduce additional channel-wise outliers (Figure 8(c)). However, the rescaling operation has a channel-specific impact, which can increase the number of channel-wise outliers, as depicted in Figure 8(d).
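This two-stage view can be checked numerically: token-wise standardization is channel-agnostic, while per-channel rescaling by γ can inflate an entire channel uniformly. A toy sketch (the large γ entry is an assumption for illustration, not a learned value):

```python
import torch

torch.manual_seed(0)
x = torch.randn(128, 64)          # tokens x channels, MAs already removed
gamma = torch.ones(64)
gamma[17] = 8.0                   # assumption: one unusually large learned scale
beta = torch.zeros(64)

# Standardization is token-wise, so it cannot single out a channel.
x_std = (x - x.mean(dim=-1, keepdim=True)) / x.std(dim=-1, keepdim=True)
# Rescaling is channel-wise, so a large gamma inflates a whole channel.
x_out = gamma * x_std + beta

print("channel 17 mean |x_std|:", x_std[:, 17].abs().mean().item())
print("channel 17 mean |x_out|:", x_out[:, 17].abs().mean().item())
```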
|
https://arxiv.org/abs/2505.21670v1
|
To further validate the impact of the rescaling operation, we modify the rescaling factor vector γ by identifying the indices associated with the outlier channels in the output of the normalization operation. This modification is applied to the normalization layers within both the SA and FFN layers. Specifically, the rescaling factor elements at these indices are replaced with either the mean of the rescaling vector or zero. Both modifications result in a noticeable reduction in the number of outlier channels in the subsequent outputs, as shown in Figure 9.

Figure 8: Changes in the number of channel-wise outliers within a normalization block of SA, with outlier channels denoted by red dots. The channel-wise means of (a) the input x1, (b) x1 after removing the MAs, (c) the output of the standardization operation, and (d) the output of normalization x2 are plotted. A similar observation holds for the normalization block within the FFN.

Figure 9: The blue line shows the number of outlier channels in the normalization layer inputs for each LLM layer. To identify the source of these outliers, we examine the corresponding rescaling factors γ that contribute to the channel-wise outliers. These rescaling factors are then replaced with either their mean values (red lines) or zeros (green lines).

4.3 Observations on Channel-wise Outliers in Weight Matrix Multiplications

In this section, we examine the presence of channel-wise outliers during matrix multiplication with weight tensors. As a case study, we focus on the query weight matrix (Wq) within the SA block; the key and value matrices show the same trends. When examining the output activations (x3), new channel-wise outliers emerge that are absent in the input activations (x2). Specifically, x3 is computed as follows:

$$x_3 = W_q \cdot x_2 \quad (1)$$

If channel-wise outliers are observed in x3 but not in the corresponding input activation channels of x2, we hypothesize that specific channels (rows) of Wq are responsible for the existence of the new channel-wise outliers. These channels, which constitute approximately 1% of all channels within Wq, appear to hold greater numerical importance for LLM accuracy than the others. We call them Outlier Triggering Channels (OTCs).

An important but subtle observation is that OTCs do not exhibit outlier characteristics when Wq is analyzed alone, based on mean and standard deviation statistics. However, their interaction with x2 gives rise to outlier activations in x3. This finding highlights the critical role of OTCs in outlier formation, despite their seemingly unremarkable statistical profile in isolation. To test this hypothesis, we evaluate model performance by modifying Wq in two ways: (a) setting all elements within the OTCs, which comprise approximately 1% of the total number of channels in Wq, to their mean values, and (b) setting a random 1% of channels to their mean values. The results of these interventions are presented in Table 3. Modifying the OTCs causes a greater accuracy drop than modifying an equivalent number of random channels.
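The OTC test and intervention (a) can be sketched directly from Eq. (1) and the Section 4.1 criterion. This is our illustration (assuming the x3 = x2·Wqᵀ layout of a linear layer; the synthetic Wq row is planted for demonstration), not the authors' code:

```python
import torch

def channel_outliers(A, m=4.0, beta=1/3):
    """Per-channel outlier test from Section 4.1 (columns are channels)."""
    return (A.mean(0) > A.mean() + m * A.std()) & (A.std(0) < beta)

def find_otcs(W_q, x2):
    """Rows of W_q whose output channels in x3 = x2 @ W_q.T are outliers."""
    return channel_outliers(x2 @ W_q.T).nonzero().flatten()

def smooth_otcs(W_q, otc_idx):
    """Intervention (a): replace each OTC row with its own mean value."""
    W = W_q.clone()
    W[otc_idx] = W[otc_idx].mean(dim=1, keepdim=True)
    return W

torch.manual_seed(0)
x2 = torch.randn(256, 64) + 1.0        # input with no outlier channels
W_q = torch.randn(64, 64) * 0.01
W_q[5] = 1.0 / 32.0                    # planted row: unremarkable in isolation
otc = find_otcs(W_q, x2)
print("x2 outlier channels:", channel_outliers(x2).nonzero().flatten().tolist())
print("OTC rows of W_q:", otc.tolist())
W_smoothed = smooth_otcs(W_q, otc)
```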
|
https://arxiv.org/abs/2505.21670v1
|
This comparison highlights the importance of the specific weight channels that contribute to the presence of channel-wise outliers for LLM accuracy. Similar studies performed on the key matrix show the same trend as the query matrix, while the value matrix does not follow this trend and remains unaffected even after removing the OTCs.

4.4 Insights for Channel-wise Outlier Smoothing

Based on the results presented in Sections 4.2 and 4.3, we conclude that the rescaling factor γ in the rescaling operations within the normalization layer plays a significant role in determining the number of channel-wise outliers in x2 and y2.

Table 3: Analysis of the importance of OTCs (perplexity, WikiText / C4); other LLMs show similar trends.

| Intervention | LLaMA3.2-3B | LLaMA3.1-8B | LLaMA2-13B |
|---|---|---|---|
| Base model | 5.567 / 10.790 | 6.941 / 9.046 | 4.355 / 6.405 |
| Remove OTCs | 38.924 / 165.396 | 480.8123 / 465.2235 | 774.7298 / 15398.1279 |
| Remove random channels | 7.5094 / 11.990 | 7.1700 / 18.602 | 4.4455 / 6.682 |

These outliers are subsequently propagated into the matrix multiplication processes. To effectively mitigate channel-wise outliers in the input, a promising strategy is to fine-tune the rescaling factors γ to reduce their variation. This adjustment results in x2 having fewer outlier channels. However, simply setting the corresponding rescaling factors to a fixed value leads to a significant accuracy drop.

OTCs within the weight matrices contribute greatly to channel-wise outliers in the intermediate results of LLMs. A potential solution is to adopt parameter-efficient fine-tuning techniques, which can effectively eliminate OTCs without requiring extensive changes to the model.

5 Conclusion

Outliers in LLMs are crucial to address because of their significant impact on the accuracy of quantized LLMs. In this paper, we undertake a detailed investigation into the mechanisms behind the formation of outliers and develop strategies to mitigate their effects. We explore the causes of these outliers and propose practical approaches for their elimination, setting the stage for more efficient quantization processes.

Our analysis not only highlights the challenges posed by outliers but also provides solutions that could be pivotal for the advancement of quantization techniques in LLMs. We hope our findings make a valuable contribution to ongoing research within the LLM community, especially in addressing the complexities of the quantization challenges presented by outliers.

Limitations

While this work offers a comprehensive overview of outliers within LLMs, it is important to acknowledge some limitations. The study of outliers is specifically tailored to LLMs, and there is scope for extending this research to other types of large models that handle multimodal inputs. Further investigation in these areas could provide a broader understanding of outlier effects across different model architectures and enhance the robustness of multimodal systems.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. 2024a. SliceGPT: Compress large language models by deleting rows and columns. arXiv preprint arXiv:2401.15024.
|
https://arxiv.org/abs/2505.21670v1
|
Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L. Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. 2024b. QuaRot: Outlier-free 4-bit inference in rotated LLMs. arXiv preprint arXiv:2404.00456.
Lorenzo Bini, Marco Sorbi, and Stephane Marchand-Maillet. 2024. Characterizing massive activations of attention mechanism in graph neural networks. arXiv preprint arXiv:2409.03463.
Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2023. Quantizable transformers: Removing outliers by helping attention heads do nothing. Preprint, arXiv:2306.12929.
Tom B. Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. ZeroQ: A novel zero shot quantization framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13169–13178.
Mengzhao Chen, Yi Liu, Jiahao Wang, Yi Bin, Wenqi Shao, and Ping Luo. 2024. PrefixQuant: Static quantization beats dynamic through prefixed outliers in LLMs. arXiv preprint arXiv:2410.05265.
Xiang Deng, Vasilisa Bashlovkina, Feng Han, Simon Baumgartner, and Michael Bendersky. 2023. LLMs to the moon? Reddit market sentiment analysis with large language models. In Companion Proceedings of the ACM Web Conference 2023, pages 1014–1019.
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. LLM.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318–30332.
Elias Frantar and Dan Alistarh. 2022. Optimal brain compression: A framework for accurate post-training quantization and pruning. Advances in Neural Information Processing Systems, 35:4475–4488.
Elias Frantar and Dan Alistarh. 2023. SparseGPT: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pages 10323–10337. PMLR.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323.
Cong Guo, Jiaming Tang, Weiming Hu, Jingwen Leng, Chen Zhang, Fan Yang, Yunxin Liu, Minyi Guo, and Yuhao Zhu. 2023. Olive: Accelerating large language models via hardware-friendly outlier-victim pair quantization. In Proceedings of the 50th Annual International Symposium on Computer Architecture, pages 1–15.
Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415.
Hui Huang, Shuangzhi Wu, Xinnian Liang, Bing Wang, Yanrui Shi, Peihao Wu, Muyun Yang, and Tiejun Zhao. 2023. Towards making the most of LLM for translation quality estimation. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 375–386. Springer.
Huggingface. 2022. C4 dataset. https://huggingface.co/datasets/legacy-datasets/c4.
Majeed Kazemitabaar, Xinying Hou, Austin Henley, Barbara Jane Ericson, David Weintrop, and Tovi Grossman. 2023. How novices use LLM-based code generators to solve CS1 coding tasks in a self-paced learning environment. In Proceedings of the 23rd Koli Calling International Conference on Computing Education Research, pages 1–12.
Shiyao Li, Xuefei Ning, Luning Wang, Tengxuan Liu, Xiangsheng Shi, Shengen Yan, Guohao Dai, Huazhong Yang, and Yu Wang. 2024. Evaluating quantized large language models. Preprint, arXiv:2402.18158.
Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, and Shi Gu. 2021. BRECQ: Pushing the limit of post-training quantization by block reconstruction. arXiv preprint arXiv:2102.05426.
|
https://arxiv.org/abs/2505.21670v1
|
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024a. AWQ: Activation-aware weight quantization for on-device LLM compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100.
Yujun Lin, Haotian Tang, Shang Yang, Zhekai Zhang, Guangxuan Xiao, Chuang Gan, and Song Han. 2024b. QServe: W4A8KV4 quantization and system co-design for efficient LLM serving. arXiv preprint arXiv:2405.04532.
Hanxiao Liu, Zihang Dai, David So, and Quoc V. Le. 2021. Pay attention to MLPs. Advances in Neural Information Processing Systems, 34:9204–9215.
Zechun Liu, Changsheng Zhao, Igor Fedorov, Bilge Soran, Dhruv Choudhary, Raghuraman Krishnamoorthi, Vikas Chandra, Yuandong Tian, and Tijmen Blankevoort. 2024. SpinQuant: LLM quantization with learned rotations. arXiv preprint arXiv:2405.16406.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. Preprint, arXiv:1609.07843.
Md Saef Ullah Miah, Md Mohsin Kabir, Talha Bin Sarwar, Mejdl Safran, Sultan Alfarhood, and M. F. Mridha. 2024. A multimodal approach to cross-lingual sentiment analysis with ensemble of transformer and LLM. Scientific Reports, 14(1):9603.
Andre Nakkab, Sai Qian Zhang, Ramesh Karri, and Siddharth Garg. 2024. Rome was not built in a single step: Hierarchical prompting for LLM-based chip design. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD, pages 1–11.
Alec Radford. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, and Ping Luo. 2023. OmniQuant: Omnidirectionally calibrated quantization for large language models. arXiv preprint arXiv:2308.13137.
Mingjie Sun, Xinlei Chen, J. Zico Kolter, and Zhuang Liu. 2024. Massive activations in large language models. arXiv preprint arXiv:2402.17762.
Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, and Siddharth Garg. 2024. VeriGen: A large language model for Verilog code generation. ACM Transactions on Design Automation of Electronic Systems, 29(3):1–31.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Albert Tseng, Jerry Chee, Qingyao Sun, Volodymyr Kuleshov, and Christopher De Sa. 2024. QuIP#: Even better LLM quantization with Hadamard incoherence and lattice codebooks. Preprint, arXiv:2402.04396.
Zeyu Wang, Yue Zhu, Shuyao He, Hao Yan, and Ziyi Zhu. 2024. LLM for sentiment analysis in e-commerce: A deep dive into customer feedback. Applied Science and Engineering Journal for Advanced Research, 3(4):8–13.
Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, and Xianglong Liu. 2023. Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. arXiv preprint arXiv:2304.09145.
|
https://arxiv.org/abs/2505.21670v1
|
Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2023. SmoothQuant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pages 38087–38099. PMLR.
Jing Xiong, Jianghan Shen, Fanghua Ye, Chaofan Tao, Zhongwei Wan, Jianqiao Lu, Xun Wu, Chuanyang Zheng, Zhijiang Guo, Lingpeng Kong, et al. 2024. UNComp: Uncertainty-aware long-context compressor for efficient large language model inference. arXiv preprint arXiv:2410.03090.
Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024. Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation. arXiv preprint arXiv:2401.08417.
An Yang et al. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671.
Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. ZeroQuant: Efficient and affordable post-training quantization for large-scale transformers. Advances in Neural Information Processing Systems, 35:27168–27183.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.
Yichi Zhang, Bofei Gao, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, Baobao Chang, Junjie Hu, Wen Xiao, et al. 2024a. PyramidKV: Dynamic KV cache compression based on pyramidal information funneling. arXiv preprint arXiv:2406.02069.
Ying Zhang, Peng Zhang, Mincong Huang, Jingyang Xiang, Yujie Wang, Chao Wang, Yineng Zhang, Lei Yu, Chuan Liu, and Wei Lin. 2024b. QQQ: Quality quattuor-bit quantization for large language models. arXiv preprint arXiv:2406.09904.
Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. 2023. Multilingual machine translation with large language models: Empirical results and analysis. arXiv preprint arXiv:2304.04675.

A Activation Statistics Across All Sublayers

To support the claim that TMAs occur only at y6 (and at y7 if not smoothed), we add Table 4 and Table 5, listing the top-2 absolute activation values at every sublayer (x1–x9, y1–y7) of layer 2 under a single WikiText input, both with and without MA removal at y6.

- No MAs are observed in x1–x9.
- From y1–y5, the top two values remain unchanged by MA smoothing.
- At y6, the maximum absolute value drops from -499.25 to -37.09, and at y7 from 328.25 to 24.125.

Table 4: Top-2 activations in MLP sublayers y1–y7 of LLaMA-3.2-3B layer 2, with and without MA removal at y6.

| Sublayer | With MA | Without MA (y6 only) |
|---|---|---|
| y1 | -16.03, 10.01 | -16.03, 10.01 |
| y2 | 4.19, 3.87 | 4.19, 3.87 |
| y3 | 13.67, 8.5 | 13.67, 8.5 |
| y4 | -36.5, -8.92 | -36.5, -8.92 |
| y5 | 13.67, 8.5 | 13.67, 8.5 |
| y6 | -499.25, -37.09 | -37.09, 5.09 |
| y7 | 328.25, -303.25 | 24.125, -22.375 |

Table 5: Top-2 absolute values from attention sublayers x1–x9 in layer 2 of LLaMA-3.2-3B.

| Sublayer | Top-2 values |
|---|---|
| x1 | -15.93, 10.58 |
| x2 | 3.48, 3.47 |
| x3 | -8.16, -8.05 |
| x4 | -9.94, -9.78 |
| x5 | -0.73, 0.71 |
| x6 | -14.77, -14.51 |
| x7 | 0.99, 0.98 |
| x8 | 0.47, 0.45 |
| x9 | -0.36, -0.34 |

B Outlier-Channel Ablation Study

In Table 6, we report the perplexity of LLaMA-3.2-3B on WikiText-2 when replacing channels beyond {6, 4, 2} standard deviations (SD) in the QKV, LayerNorm rescaling factor, and MLP weights with (a) the channel mean, and (b) random channel replacements; a code sketch of this replacement procedure follows the observations below.
|
https://arxiv.org/abs/2505.21670v1
|
Table 6: WikiText-2 PPL under outlier vs. random channel replacements at different thresholds for LLaMA-3.2-3B. The base model perplexity is 7.8316.

| Intervention (setting to mean) | 6 SD | 4 SD | 2 SD |
|---|---|---|---|
| QKV outliers | 7.8395 | 7.9201 | 10.9746 |
| QKV random | 7.8315 | 7.8419 | 14.0361 |
| LayerNorm outliers | 8.3497 | 11.4209 | 61.9459 |
| LayerNorm random | 7.8338 | 7.9110 | 8.0399 |
| MLP outliers | 7.8327 | 7.9108 | 15.3786 |
| MLP random | 7.8322 | 7.8452 | 9.5039 |

- True outlier removals increase PPL from 7.83 to as high as 61.94.
- Outliers in the LayerNorm rescaling factors have a pronounced impact on perplexity, suggesting their critical role in maintaining performance.
- Replacing an equal number of randomly selected channels results in considerably smaller degradation in PPL in most cases.
- Interestingly, for the QKV projections, a more aggressive outlier threshold (2 SD) results in lower perplexity (10.97) than replacing an equivalent number of random channels (14.04), indicating that these outliers may be less essential to model performance.
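A sketch of the replacement procedure (our illustration: the appendix only states "channels beyond {6, 4, 2} SD", so the selection statistic, a z-score of channel means, is an assumption, as is the random demo matrix):

```python
import torch

def replace_extreme_channels(W, k_sd, random_baseline=False):
    """Replace weight channels (rows) whose mean lies more than k_sd standard
    deviations from the average channel mean with the channel's own mean;
    optionally replace the same number of random channels as a control."""
    ch_mean = W.mean(dim=1)
    z = (ch_mean - ch_mean.mean()).abs() / ch_mean.std()
    idx = (z > k_sd).nonzero().flatten()
    if random_baseline:
        idx = torch.randperm(W.shape[0])[: idx.numel()]
    W_mod = W.clone()
    W_mod[idx] = W_mod[idx].mean(dim=1, keepdim=True)
    return W_mod, idx

W = torch.randn(4096, 4096)
for k in (6.0, 4.0, 2.0):
    _, idx = replace_extreme_channels(W, k)
    print(f"{k} SD: {idx.numel()} channels replaced")
```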
|
https://arxiv.org/abs/2505.21670v1
|
arXiv:2505.21671v1 [cs.AI] 27 May 2025

Adaptive Frontier Exploration on Graphs with Applications to Network-Based Disease Testing

Davin Choo* (Harvard University, davinchoo@seas.harvard.edu), Yuqi Pan* (Harvard University, yuqipan@g.harvard.edu), Tonghan Wang (Harvard University, twang1@g.harvard.edu), Milind Tambe (Harvard University, tambe@seas.harvard.edu), Alastair van Heerden (University of Witwatersrand, Wits Health Consortium, alastair.vanheerden@wits.ac.za), Cheryl Johnson (World Health Organization, johnsonc@who.int)

May 29, 2025

*Equal contribution

Abstract

We study a sequential decision-making problem on an n-node graph G where each node has an unknown label from a finite set Σ, drawn from a joint distribution P that is Markov with respect to G. At each step, selecting a node reveals its label and yields a label-dependent reward. The goal is to adaptively choose nodes to maximize the expected accumulated discounted reward. We impose a frontier exploration constraint, where actions are limited to neighbors of previously selected nodes, reflecting practical constraints in settings such as contact tracing and robotic exploration. We design a Gittins index-based policy that applies to general graphs and is provably optimal when G is a forest. Our implementation runs in O(n² · |Σ|²) time while using O(n · |Σ|²) oracle calls to P and O(n² · |Σ|) space. Experiments on synthetic and real-world graphs show that our method consistently outperforms natural baselines, including in non-tree, budget-limited, and undiscounted settings. For example, in HIV testing simulations on real-world sexual interaction networks, our policy detects nearly all positive cases with only half the population tested, substantially outperforming other baselines.

1 Introduction

We study a sequential decision-making problem on a graph G, where each node has an unknown discrete label from Σ. The labels follow a joint distribution P, which we assume is specified by a Markov random field (MRF) defined over G [KF09]. When we act on a node, its label is revealed and we receive a label-dependent reward. Crucially, the entire process is history-sensitive: label realizations are stochastic and depend on previously observed labels, a setting that naturally arises in Bayesian adaptive planning [GK11].

In this paper, we study a setting where actions are subject to a frontier exploration constraint: the first node in each connected component is selected based on a pre-defined priority rule, and subsequent actions are restricted to neighbors of previously selected nodes. This constraint reflects realistic settings where local neighborhood information becomes accessible only through exploration, as in active search on graphs [GKX+12], robotic exploration [KK14], and cybersecurity applications [LCH+25]. The objective is then to maximize the expected accumulated discounted reward over time by sequentially selecting nodes to act upon.

Definition 1 (The Adaptive Frontier Exploration on Graphs (AFEG) problem). An AFEG instance is defined by a triple (G, P, β), where G = (X, E) is a graph, P is a joint distribution over node labels that is Markov with respect to G, and β ∈ (0, 1) is a discount factor. The process unfolds over n = |X| time steps, with the state S_t at time t consisting of the current frontier and the revealed labels. Acting on a frontier node reveals its label, grants a label-dependent reward, and updates beliefs about other nodes via Bayesian inference under P. The goal is to compute a policy π that maps each state to a frontier node, maximizing the expected total discounted reward:

$$\pi^* = \arg\max_{\pi} \sum_{t=1}^{n} \beta^{t-1} \sum_{v \in \Sigma} \mathcal{P}\big(X_{\pi(S_{t-1})} = v \mid S_{t-1}\big) \cdot r\big(X_{\pi(S_{t-1})}, v\big),$$

where X_{π(S_{t−1})} is the node selected by policy π at time t, and r(·, ·) is the label-dependent reward.
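A minimal simulator makes the frontier constraint and the discounted objective concrete. The sketch below is our illustration under simplifying assumptions (labels are sampled up front rather than updated via Bayesian inference under an MRF, and the priority rule is a placeholder):

```python
def run_policy(adj, labels, reward, beta, pick, start):
    """Roll out a frontier-constrained policy; return the discounted reward.

    adj:    dict node -> list of neighbors
    labels: dict node -> realized label (one sample from the joint P)
    reward: function (node, label) -> float
    pick:   policy mapping (frontier, observed) -> a node in the frontier
    """
    observed, total, t = {}, 0.0, 0
    frontier = {start}
    while frontier:
        node = pick(frontier, observed)
        frontier.remove(node)
        observed[node] = labels[node]          # acting reveals the label
        total += (beta ** t) * reward(node, labels[node])
        frontier |= {u for u in adj[node] if u not in observed}
        t += 1
    return total

# Toy instance: a path graph with binary labels and r(X, b) = b.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = {0: 1, 1: 0, 2: 1, 3: 1}
pick_lowest = lambda frontier, observed: min(frontier)   # placeholder rule
print(run_policy(adj, labels, lambda n, b: b, 0.9, pick_lowest, start=0))
```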
|
https://arxiv.org/abs/2505.21671v1
|
While the optimal policy can be computed via dynamic programming, this is intractable for general graphs due to the exponential state space. A natural strategy is to leverage adaptive submodularity, which guarantees that greedy policies achieve a (1 - 1/e)-approximation [GK11]. Unfortunately, the objective in AFEG is not adaptively submodular in general: for instance, in disease detection, observing an infected neighbor can increase the marginal benefit of testing a node, violating the diminishing-returns property of adaptive submodularity.

Our problem is closely related to the setting of active search on graphs [GKX+12, WGS13, JMC+17, JMA+18], where the goal is to identify as many target-labeled nodes as possible under a fixed budget, without exploration constraints. Since exact optimization is intractable, these works focused on practical heuristics such as search-space pruning. AFEG differs in two key respects: (i) we impose a frontier constraint, and (ii) we consider an infinite-horizon objective with discounting rather than a fixed budget. These differences are not merely technical: they enable provable optimality in meaningful special cases, particularly when the input graph G is a forest. Forest structures naturally arise in several relevant domains, including transmission trees in contact tracing [KFH06] and recruitment trees in respondent-driven sampling [Hec97, GS09]. Moreover, algorithms with guarantees on forests can be efficiently applied to sparse real-world interaction graphs, such as sexual contact graphs, which tend to be tree-like in practice; see Section 4.3.

1.1 Motivating application: network-based disease testing

A key motivating example of AFEG is network-based infectious disease testing, where the goal is to identify infected individuals as early as possible. In particular, we focus on diseases that are transmitted through person-to-person contact¹, e.g., sex, exposure to blood through injection drug use, or birth, where interaction information can be collected through interviews. In this context, frontier testing is both natural and operationally motivated: test outcomes substantially alter beliefs about neighboring individuals, making sequential expansion along the frontier an efficient strategy.

Public health motivation. The 95-95-95 HIV² targets proposed by UNAIDS [UNA22] aim for 95% of people with HIV to know their status, 95% of those to receive treatment, and 95% of treated individuals to achieve viral suppression, aligned with UN Sustainable Development Goal 3.3 [Nat]. Yet the 2024 UNAIDS report [UNA24] reveals that the "first 95" remains the most elusive: roughly one in seven people living with HIV are still undiagnosed, and there continue to be 1.3 million new infections every year. Studies have shown that virally suppressed individuals will not infect others [CCM+11, RCB+16, BPP+18], leading to the U=U (undetectable = untransmittable) campaign [oAD19, OG20]. Thus, the faster we can detect infected individuals, the faster they can be enrolled in treatment, limiting the spread of the disease. To address this gap, the WHO recommends network-based testing strategies to reach underserved populations [Org24a]. These include partners and biological children of people with HIV, as well as those with high ongoing HIV risk. Network-based interventions have shown effectiveness in South Africa [JPC+19] and have also been explored for other infectious diseases beyond HIV [JSK+17, MWBDM+25]; see also [CLJ+24] for a WHO-commissioned systematic review on social network-based HIV testing.
|
https://arxiv.org/abs/2505.21671v1
|
Fig. 1 illustrates how the network-based disease testing problem can be modeled as an AFEG instance. First, we use the network G as is, where nodes represent individuals and edges represent sexual interactions. Each node has a binary infection status (infected or not) drawn from some underlying joint distribution P on X over the labels Σ = {+, −}, where P may depend on individual covariates and the graph structure. The reward for testing individual X and revealing status b ∈ {0, 1} is then r(X, b) = b; see Fig. 1 for an illustration. The goal of identifying infected individuals as early as possible is implicitly enforced by the presence of any discount factor β < 1. Importantly, discounting reflects both practical constraints, such as sudden funding cuts [UNA25], and the clinical importance of early diagnosis, which improves patient outcomes and limits transmission [CCM+11]. See also [RN21] for other natural justifications for using discount factors β when modeling long-term policy rewards. Finally, to apply the infinite-horizon framework of AFEG to our finite testing setting, we assign zero reward to all steps after every individual has been tested.

¹ This is in contrast to illnesses like flu, where transmission can occur to a room full of strangers.
² The human immunodeficiency virus (HIV) attacks the immune system and can lead to AIDS. It remains a major global health issue, having claimed over 42 million lives to date [Org24b].

Figure 1: Illustration of how a real-world transmission graph (left) can be framed as an AFEG instance (right). Here, the joint distribution P over the labels XA, XB, XC, XD ∈ {+, −} may depend on the covariates cA, cB, cC, cD ∈ R^d (e.g., gender, age, and drug use for individuals Alice, Bob, Charlie, and Denise) and the underlying interaction graph structure.

1.2 Our contributions

Contribution 1: Gittins index-based policy for AFEG and new results for branching bandits. In Section 3, we show that when G is a forest, AFEG can be modeled as a branching bandit problem, for which Gittins index policies are known to be optimal [KO03]. We provide a novel characterization of Gittins indices for discrete branching bandits using piecewise linear functions, and we develop a practical implementation that runs in O(n² · |Σ|²) time while using O(n · |Σ|²) oracle calls to P and O(n² · |Σ|) space. Our policy also works for general non-tree AFEG instances, though without optimality guarantees. Despite this, it demonstrates strong performance in experimental evaluations.

Contribution 2: Formalizing network-based disease testing as an AFEG instance. As shown in Section 1.1, network-based infectious disease testing can be cast as an instance of AFEG. To our knowledge, this is the first formal framework to model frontier-based testing as sequential decision-making on a probabilistic graph model for the principled exploitation of network effects in diseases such as HIV. In Appendix A, we propose a method to learn parameters from past disease data to define a joint distribution P on new interaction networks, and thereby new AFEG instances.
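To make this instantiation concrete: with r(X, b) = b, the expected discounted reward of a testing order is the discounted expected number of detections. The toy sketch below is our illustration (the marginal probabilities are hypothetical, and in the full problem they are updated adaptively under P as results arrive); it shows how β < 1 favors testing likely-positive individuals early:

```python
def expected_discounted_detections(order, p, beta=0.9):
    """Expected discounted number of positives for a fixed testing order,
    given (static) marginal infection probabilities p[v] = P(X_v = +)."""
    return sum(beta ** t * p[v] for t, v in enumerate(order))

p = {"Alice": 0.30, "Bob": 0.10, "Charlie": 0.25, "Denise": 0.05}
print(expected_discounted_detections(["Alice", "Charlie", "Bob", "Denise"], p))
print(expected_discounted_detections(["Denise", "Bob", "Charlie", "Alice"], p))
```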
|
https://arxiv.org/abs/2505.21671v1
|
Contribution 3: Empirical evaluation. We evaluate our Gittins index-based policy on synthetic datasets and show that it performs strongly even in settings where it is not provably optimal, including non-tree and finite-horizon scenarios. Our approach outperforms other baselines on public-use real-world sexual interaction graphs for five sexually transmitted diseases (gonorrhea, chlamydia, syphilis, HIV, and hepatitis) from ICPSR [MR11]. For instance, in one of our experiments on HIV testing (see Fig. 5), our method identifies almost all infected individuals, whereas other baselines would detect only about 80% in expectation, given a budget to test only half of the population.

2 Preliminaries and related work

Notation. We use lowercase letters for scalars, uppercase letters for random variables, bold letters for vectors or collections, and calligraphic letters for structured objects such as graphs and probability distributions. Unordered sets are denoted with braces (e.g., {·}) and ordered tuples with parentheses (e.g., (·)). For any set A, let |A| denote its cardinality. We use R≥0 for the non-negative reals, N for the natural numbers, and N>0 = N \ {0}. For any n ∈ N>0, we define [n] := {1, . . . , n}. For a vector x = (x1, . . . , xn), we write xi for its i-th coordinate and x−i = (x1, . . . , xi−1, xi+1, . . . , xn) for the vector without the i-th coordinate. We also employ standard asymptotic notation such as O(·) and Ω(·).

In this work, we consider joint distributions over n discrete variables X = {X1, . . . , Xn}, structured by an undirected graph G = (X, E). Each variable Xi takes values from a finite set Σ = {v1, . . . , v|Σ|}; in the binary case, Σ = {0, 1}. For any node X ∈ X, let N(X) ⊆ X denote its neighbors in G, and let V(G) denote the vertex set. A tree is a connected acyclic graph, and a forest is a collection of disjoint trees. A rooted tree designates one node as the root and orients all edges away from it. In a directed rooted tree, we denote the parent and children of X by Pa(X) and Ch(X), respectively, with pa(X) denoting the realization of its parent(s). Note that in rooted trees, Pa(X) = ∅ if and only if X is the root.

The most standard and general way to model a joint distribution P that is Markov with respect to a graph G is via a Markov random field (MRF) [KF09]. An MRF is an undirected graphical model in which nodes represent random variables and edges encode conditional dependencies. It satisfies the local Markov property: each variable is conditionally independent of all others given its neighbors.

Markov Random Fields (MRF). By the Hammersley-Clifford theorem [HC71, Cli90], an MRF has the form

$$\mathcal{P}(x) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C),$$

where C is the set of cliques in G, ψC is a non-negative potential function over clique C, xC is the realization of the nodes in C, and Z is the normalizing constant. Alternatively, MRFs can
|
https://arxiv.org/abs/2505.21671v1
|