text string | source string |
|---|---|
al. 2021), and (Elhage et al. 2022). b. Validating and applying interpretability methods: Developing rigorous criteria and benchmarks for evaluating the reliability of interpretability methods, and understanding whether these methods maintain their validity when applied to actively modify model behavior. Example work i... | https://arxiv.org/abs/2505.21664v1 |
area investigates how models can be made more resilient to carefully crafted adversarial perturbations designed to degrade performance or reveal vulnerabilities. Research involves identifying methods for bolstering model robustness under challenging conditions, including adversarial training and certified defenses. Exam... | https://arxiv.org/abs/2505.21664v1 |
al. 2024) and (Owen 2024). b. Cyber evaluations: Designing evaluations to assess a model’s ability to understand, exploit, or defend against cybersecurity threats and vulnerabilities. Example work includes (AI Security Institute 2024). c. CBRN (Chemical, Biological, Radiological, and Nuclear) evaluations: Designing eva... | https://arxiv.org/abs/2505.21664v1 |
how well agents can adversarially attack each other, and studying the impact of AI agents’ training dynamics on data generated by each other with respect to shared vulnerabilities/correlated failure modes. Example work includes (D. Zhang et al. 2021), (D. Lee and Tiwari 2024), and (Jones, Dragan, and Steinhardt 2024). ... | https://arxiv.org/abs/2505.21664v1 |
deployment, or evolving operational contexts. Example work includes (Zhao et al. 2022) and (Zhao et al. 2023). b. Fair representation and participation in AI systems: Promoting fair representation and generalization across different subpopulations, and ensuring inclusive participation in the development and governance o... | https://arxiv.org/abs/2505.21664v1 |
capabilities and influence—such as decentralized governance, open-source contributions, and equitable resource allocation. Example work includes (Y. Liu et al. 2024) and (Montes and Goertzel 2019). 14. Ethics: Work on AI ethics includes developing methods for integrating ethical considerations into training, evaluation,... | https://arxiv.org/abs/2505.21664v1 |
Privacy: This area focuses on identifying and mitigating privacy risks arising from new capabilities and deployment scenarios for LLMs, developing robust conceptual frameworks for privacy definitions, and leveraging AI tools to preserve and enhance privacy in various application domains. a. Identifying emergent privacy ... | https://arxiv.org/abs/2505.21664v1 |
[IAM] tools), and implementing AI firewalls with strict input-output validation. Example work includes (Nevo et al. 2024b). c. Model robustness and oracle protection: Techniques that prevent model extraction through inference-only attacks, detect and filter adversarial inputs designed to reconstruct the model or degrade its i... | https://arxiv.org/abs/2505.21664v1 |
as hardware-level logging and secured audit trails that remain verifiable under sophisticated tampering attempts for rapid, evidence-based incident response. Example work includes (Petrie, Aarne, and Ammann 2024). e. Specialized chips to compute encrypted data: Designing and deploying hardware accelerators optimized for... | https://arxiv.org/abs/2505.21664v1 |
mechanisms, and dependable sensors for physical systems, such as autonomous vehicles, drones, and household robots, to ensure safe human-robot interaction and accident prevention. Example work includes (W. Zhang et al. 2024). c. Adversarial robustness in vision and perception systems: Studying how malicious inputs can ... | https://arxiv.org/abs/2505.21664v1 |
You selected the following area: Ethics: Work on AI ethics includes developing methods for integrating ethical considerations into training, evaluation, and decision-making processes, as well as techniques for mitigating harmful outputs and ensuring cultural and long-term ethical consistency. This area has the followin... | https://arxiv.org/abs/2505.21664v1 |
among respondents. Full Survey Results This table presents results for the questions of importance and tractability. In the survey, we defined these as agreement with the following statements on a 5-point Likert scale: ● Importance: Resolving the core challenges of this sub-area and implementing the resulting solutions ... | https://arxiv.org/abs/2505.21664v1 |
evaluation 4.07 (15) 3.81 (16) 15.5 25 Improving evaluation robustness 3.8 (15) 4.07 (15) 15.45 26 Robustness to underspecification 4.17 (6) 3.67 (6) 15.28 27 Pretraining alterations to improve interpretability 4 (5) 3.8 (5) 15.2 28 Understanding how fine-tuning changes a pretrained model 4 (5) 3.75 (4) 15 29 Transpare... | https://arxiv.org/abs/2505.21664v1 |
AI Feedback (RLAIF) 3.1 (10) 2.8 (10) 8.68 70 Limits of Transformers 2.33 (3) 3.67 (3) 8.56 71 Causal incentives 2.9 (10) 2.9 (10) 8.41 72 Control theory applications in AI safety 3.09 (11) 2.6 (10) 8.04 73 Model robustness and oracle protection 3 (4) 2.67 (3) 8 74 Double descent and overparameterization 2 (3) 3.33 (3)... | https://arxiv.org/abs/2505.21664v1 |
(5 of 5 sub-areas excluded) Ethics-aware training and fine-tuning Ethical decision-making frameworks Mitigating harmful outputs Cultural sensitivity and contextual awareness Long-term ethical consistency Privacy (5 of 5 sub-areas excluded) Identifying emergent privacy risks in new paradigms Research on inferring sensiti... | https://arxiv.org/abs/2505.21664v1 |
‘Automation Bias’ and ‘Selective Adherence’ to Algorithmic Advice.” Journal of Public Administration Research and Theory 33 (1): 153–69. https://doi.org/10.1093/jopart/muac007. Altman, Sam. 2025. “Reflections.” Blog. Sam Altman. January 5, 2025. https://blog.samaltman.com/reflections. Anwar, Usman, Abulhair Saparov, Javi... | https://arxiv.org/abs/2505.21664v1 |
Siqi Zhou, Jacopo Panerati, and Angela P. Schoellig. 2021. “Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning.” arXiv. https://doi.org/10.48550/arXiv.2108.06266. Buijsman, Stefan. 2023. “Navigating Fairness Measures an... | https://arxiv.org/abs/2505.21664v1 |
AAAI Conference on Artificial Intelligence 37 (13): 15359–67. https://doi.org/10.1609/aaai.v37i13.26791. Conmy, Arthur, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. 2023. “Towards Automated Circuit Discovery for Mechanistic Interpretability.” arXiv. https://doi.org/10.48550/arXi... | https://arxiv.org/abs/2505.21664v1 |
Future of Life Institute. 2023. “Redwood Research Group, Inc.” Future of Life Institute (blog). July 6, 2023. https://futureoflife.org/grant/redwood-research-group-inc/. Gabriel, Iason. 2020. “Artificial Intelligence, Values and Alignment.” Minds and Machines 30 (3): 411–37. https://doi.org/10.1007/s11023-020-09539-2. Ga... | https://arxiv.org/abs/2505.21664v1 |
Dual Uses of Foundation Models.” arXiv. https://doi.org/10.48550/arXiv.2211.14946. Hendrycks, Dan, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2023. “Aligning AI With Shared Human Values.” arXiv. https://doi.org/10.48550/arXiv.2008.02275. Henzinger, Thomas A., Mathias Lechner,... | https://arxiv.org/abs/2505.21664v1 |
Chen, Mingyu Jin, Wujiang Xu, Wenyue Hua, and Yongfeng Zhang. 2024. “MoralBench: Moral Evaluation of LLMs.” arXiv. https://doi.org/10.48550/arXiv.2406.04428. Jones, Erik, Anca Dragan, and Jacob Steinhardt. 2024. “Adversaries Can Misuse Combinations of Safe Models.” arXiv. https://doi.org/10.48550/arXiv.2406.14595. Jone... | https://arxiv.org/abs/2505.21664v1 |
from Human Feedback with AI Feedback,” October. https://openreview.net/forum?id=AAxIs3D2ZZ. Leike, Jan, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. “Scalable Agent Alignment via Reward Modeling: A Research Direction.” arXiv. https://doi.org/10.48550/arXiv.1811.07871. Leitão, Diogo, Pe... | https://arxiv.org/abs/2505.21664v1 |
and Benjamin Van Durme. 2023. “Data Portraits: Recording Foundation Model Training Data.” arXiv. https://doi.org/10.48550/arXiv.2303.03919. Maslej, Nestor, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, et al. 2025. “Artificial Intelligence Index Report 2025.” arXiv. ht... | https://arxiv.org/abs/2505.21664v1 |
Theft and Misuse of Frontier Models.” https://www.rand.org/pubs/research_reports/RRA2849-1.html. Ngo, Helen, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, and Nicholas Frosst. 2021. “Mitigating Harm in Language Models with C... | https://arxiv.org/abs/2505.21664v1 |
Computation on Encrypted Data.” In Proceedings of the 49th Annual International Symposium on Computer Architecture, 173–87. ISCA ’22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3470496.3527393. Samborska, Veronika. 2025. “Scaling up: How Increasing Inputs Has Made Artificial Intellig... | https://arxiv.org/abs/2505.21664v1 |
Mark Vero, Mislav Balunović, and Martin Vechev. 2024. “Beyond Memorization: Violating Privacy Via Inference with Large Language Models.” arXiv. https://doi.org/10.48550/arXiv.2310.07298. Strobl, Lena. 2023. “Average-Hard Attention Transformers Are Constant-Depth Uniform Threshold Circuits.” arXiv. https://doi.org/10.48... | https://arxiv.org/abs/2505.21664v1 |
Toni, and Tom Everitt. 2023. “Honesty Is the Best Policy: Defining and Mitigating AI Deception.” arXiv. https://doi.org/10.48550/arXiv.2312.01350. Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. 202... | https://arxiv.org/abs/2505.21664v1 |
in Retrieval-Augmented Generation (RAG).” arXiv. https://doi.org/10.48550/arXiv.2402.16893. Zhang, Baobao, and Allan Dafoe. 2019. “Artificial Intelligence: American Attitudes and Trends.” January 19, 2019. https://www.governance.ai/research-paper/artificial-intelligence-american-attitudes-and-trends. Zhang, Dan, Gang Fen... | https://arxiv.org/abs/2505.21664v1 |
arXiv:2505.21666v1 [cs.LG] 27 May 2025. Efficient Controllable Diffusion via Optimal Classifier Guidance. Owen Oertell*¹, Shikun Sun*¹, Yiding Chen*¹, Jin Peng Zhou¹, Zhiyong Wang², and Wen Sun¹ (¹Cornell University, ²CUHK). Abstract: The controllable generation of diffusion models aims to steer the model to generate samples that... | https://arxiv.org/abs/2505.21666v1 |
reward. Bottom right: SLCD is likewise effective at controlling discrete diffusion models. policy optimization (PPO) (Schulman et al., 2017) or optimal control methods to optimize the given reward function. While these RL-based approaches can optimize reward, they make the solution concept of fine-tuning diffusion mode... | https://arxiv.org/abs/2505.21666v1 |
minimization. Our analysis is motivated by the classic imitation learning (IL) algorithms DAgger (Ross et al., 2011) and AggreVaTe(d) (Ross & Bagnell, 2014; Sun et al., 2017), which frame IL as an iterative classification procedure with the main computational primitive being classification. Our theory shows that as long... | https://arxiv.org/abs/2505.21666v1 |
Wiener process, $h(\cdot,\cdot): \mathbb{R}^d \times [0,T] \to \mathbb{R}^d$ is the drift coefficient, and $g(\cdot): [0,T] \to \mathbb{R}$ is the diffusion coefficient. We use $q_\tau(\cdot)$ to denote the probability density function of $\bar{x}_\tau$ generated according to the forward SDE in equation (1). We assume $f$ and $g$ satisfy certain conditions s.t. $q_T$ converges to $\mathcal{N}(0, I)$ as $T \to \infty$. For example, i... | https://arxiv.org/abs/2505.21666v1 |
and the classifier allows us to rewrite the target distribution as the posterior distribution given $y=1$: $p(x \mid y=1) \propto q_0(x)\, p(y=1 \mid x) = q_0(x) \exp(\eta r(x))$. Figure 2: Covariate shift (left) and data collection in our approach (right). The left figure illustrates covariate shift. In the offline naive approach, classifier wi... | https://arxiv.org/abs/2505.21666v1 |
et al., 2025). In particular, define $r \sim R^{\mathrm{prior}}(\cdot \mid x_t, t)$ as the distribution of the reward of $x_T \sim P^{\mathrm{prior}}_{t \to T}(\cdot \mid x_t)$. The classifier $p(y=1 \mid x_t)$ can be rewritten using the reward distribution $R^{\mathrm{prior}}(\cdot \mid x_t, t)$: $p(y=1 \mid x_t) := \mathbb{E}_{r \sim R^{\mathrm{prior}}(\cdot \mid x_t, t)} \exp(\eta \cdot r)$. (8) Our goal is to learn a reward distribution $\hat{R}$ to approximate $R^{\mathrm{prior}}$ and use... | https://arxiv.org/abs/2505.21666v1 |
on a simple supervised learning oracle to estimate the one-dimensional conditional distribution $R(r \mid x_t, t)$. In our implementation, we use a histogram to model this one-dimensional distribution, i.e., we discretize the reward range $[-R_{\max}, 0]$ into a finite number of bins and use a standard multi-class classification oracle t... | https://arxiv.org/abs/2505.21666v1 |
marginal distribution $q_T$ defined by the forward SDE Eq. (1) converges to some Gaussian distribution rapidly (see Lemma 3 of Chen et al. (2025)). For simplicity, we make the following assumption on the convergence: Assumption 5 (convergence of the forward process). $\mathrm{KL}(\mathcal{N}(0, I) \,\Vert\, q_T(\cdot \mid y=1)) \le \epsilon_T$. For OU processes, $\epsilon_T$ shrinks i... | https://arxiv.org/abs/2505.21666v1 |
only a few iterations are needed to reach good performance. In line with Li et al. (2024), we compare the top 10 and 50 quantiles of a batch of generations in Table 1. We compare to these methods because, like SLCD, none of them requires training of the base model. Overall, we see that SLCD consistently out-... | https://arxiv.org/abs/2505.21666v1 |
constraint is relaxed, enabling a controlled trade-off between optimizing for the reward function and staying close to the base model’s distribution. Notably, even under strong reward guidance (i.e., with larger η), our method consistently maintains a high level of diversity in the generated outputs. 6.3 Fréchet Incept... | https://arxiv.org/abs/2505.21666v1 |
a novel and efficient method that recasts the KL-constrained optimization problem as a supervised learning task. We provided theoretical guarantees showing that SLCD converges to the optimal KL-constrained solution and how data-aggregation effectively mitigates covariate shift. Empirical evaluations confirm that SLCD s... | https://arxiv.org/abs/2505.21666v1 |
arXiv:2112.13487 , 2021. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems , 30, 2017. Jonathan Ho and Tim Salimans. Classifier-free diffusion ... | https://arxiv.org/abs/2505.21666v1 |
2022. Shai Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends® in Machine Learning , 4(2):107–194, 2012. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations , 2021a. URL https://openre... | https://arxiv.org/abs/2505.21666v1 |
we can substitute $\bar{x}_0$, $\bar{x}_\tau$, $q(\bar{x}_0 \mid \bar{x}_\tau)$ with the corresponding components in the reverse process: $x_T$, $x_{T-\tau}$, $P^{\mathrm{prior}}_{T-\tau \to T}(x_T \mid x_{T-\tau})$. Then we complete the proof by setting $\tau = T - t$. One crucial property is: Eq. (12), the classifier defined in terms of the forward process, only depends on the conditional distribution $\bar{x}_0 \mid \bar{x}_\tau$, not t... | https://arxiv.org/abs/2505.21666v1 |
conditions based on the gradient estimator introduced in Flaxman et al. (2004). We require smoothness assumptions on the functions and an additional gradient estimator. For any function $f: \mathbb{R}^d \to \mathbb{R}$, we define the gradient estimator $\widehat{\nabla} f$ to be: $\widehat{\nabla} f(x) := \frac{d}{\delta}\, \mathbb{E}_{u \sim U(S^{d-1})}[f(x + \delta u)\, u]$, where $S^{d-1} := \{x \in \mathbb{R}^d : \lVert x \rVert_2 = 1\}$ and $\delta > 0$ is a free parame... | https://arxiv.org/abs/2505.21666v1 |
δu))u∥2 ≤ dδM/2. C Proof of Main Theorem. Proof. By Assumption 4, $\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{\hat{R}_i(r \mid x_t, t)} \le \min_{R} \frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{R(r \mid x_t, t)} + \gamma_N \le \frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{1}{R^{\mathrm{prior}}(r \mid x_t, t)} + \gamma_N$. After rearranging, we get $\frac{1}{NM} \sum_{i=1}^{N} \sum_{(t, x_t, r) \in D_i} \ln \frac{R^{\mathrm{prior}}(r \mid x_t, t)}{\hat{R}_i(r \mid x_t, t)} \le \gamma_N$. According to Eq. (10), each $(t, x_t), r$ i... | https://arxiv.org/abs/2505.21666v1 |
$p^1_t, p^2_t \in C^2(\mathbb{R}^d)$ for $t > 0$, and define the Fisher information between $p_t, q_t$ as: $J(p_t \,\Vert\, q_t) := \int p_t(x)\, \lVert \nabla \log \frac{p_t(x)}{q_t(x)} \rVert^2 \, dx$. Then for any $t > 0$, the evolution of $\mathrm{KL}(p^1_t \,\Vert\, p^2_t)$ is given by: $\frac{\partial}{\partial t} \mathrm{KL}(p^1_t \,\Vert\, p^2_t) = -g(t)^2 J(p^1_t \,\Vert\, p^2_t) + \mathbb{E}_{x \sim p^1_t} \big\langle F_1(x, t) - F_2(x, t), \nabla \log \frac{p^1_t(x)}{p^2_t(x)} \big\rangle$. D.1 Proof of Lemma 12. Let $q(x_s, s \mid x_t, t)$ be... | https://arxiv.org/abs/2505.21666v1 |
is identical to Eq. (15). By choosing the proper initial condition, we finish the proof. 22 E Additional Details of Training and Evaluation We provide more information about the experimental setup and then overview some of the details of training and evaluation of SLCD in this section. The experiments are split up into... | https://arxiv.org/abs/2505.21666v1 |
across the entire batch, not at a per sample level. SVDD-MC. SVDD-MC (Li et al., 2024) evaluates the expected reward of Ncandidates from the base model under an estimated value function and selects the candidate with the highest predicted return. SVDD-PM. SVDD-PM (Li et al., 2024) is similar to SVDD-MC except that it u... | https://arxiv.org/abs/2505.21666v1 |
tasks, the following hyperparameters are used (Table 3: DNA Enhancer and 5’ UTR Task Hyperparameters): Seed 43; Learning rate 1×10−4; Optimizer betas (0.9, 0.95); Weight decay 0.01; Gradient accumulation steps 4; Batch size (classifier) 5; Batch size (inference) 20; Guidance scale 10; Train iterations (pe... | https://arxiv.org/abs/2505.21666v1 |
R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning. Yongchao Chen (MIT / Harvard, yongchaochen@fas.harvard.edu), Yueying Liu (University of Illinois Urbana-Champaign, yl136@illinois.edu), Junwei Zhou (University of Michigan, zhoujw@umich.edu), Yilun Hao (MIT, yilunhao@mit.edu), Jingquan Wang U... | https://arxiv.org/abs/2505.21668v1 |
and the possible text/code solution space is large. OpenAI’s GPT models address this by incorporating a Code Interpreter, allowing iterative code generation and reasoning over outputs [Achiam et al., 2023]. However, recent work [Chen et al., 2024b] shows that current Code Interpreter implementations struggle to effectiv... | https://arxiv.org/abs/2505.21668v1 |
training and optimizing the Code Interpreter. Unlike prior work focused on narrow domains such as math or retrieval, we find that RL for a general-purpose Code Interpreter is substantially more challenging, as it must learn a policy effective across 144 diverse tasks with varying characteristics. 2 Task benchmark Challenges ... | https://arxiv.org/abs/2505.21668v1 |
et al., 2025, Zhang et al., 2025], we rely solely on the final answer marker ‘<<<answer’ (footnote 1: https://github.com/open-thought/reasoning-gym). The current approach using depth-first search (DFS) is a step in the right direction, but it seems to be inefficient, leading to a timeout. To optimize the search for a solution in t... | https://arxiv.org/abs/2505.21668v1 |
and return the execution output and error. Once you feel you are ready for the final answer, directly return the answer with the format <<< answer content >>> at the end of your response. Otherwise, you can continue your reasoning process and possibly generate more code query to solve the problem. Non-reasoning Models ... | https://arxiv.org/abs/2505.21668v1 |
training and aligns with findings in retrieval-based RL [Jin et al., 2025]. GRPO + Code Interpreter To improve policy optimization stability and avoid value-function approximation, GRPO differs from PPO by using the average reward of multiple sampled outputs as a baseline instead of a learned value function. Specifical... | https://arxiv.org/abs/2505.21668v1 |
R1-CI-14B takes approximately 1600 GPU hours (see Sec. 5 for further discussion). Unless stated otherwise, GRPO is the default RL method. Baselines To evaluate the effectiveness of R1-Code-Interpreter (R1-CI), we compare it against several baselines: All Text + CoT — prompting LLMs to reason using only text with chain-... | https://arxiv.org/abs/2505.21668v1 |
5.0 33.9 52.3 8.9 18.6 21.8 CI wo Finetune 13.8 56.9 5.4 25.4 51.9 8.8 17.1 20.1 R1-CI wo GRPO 62.3 78.6 36.9 47.6 61.6 44.0 50.9 48.0 R1-CI 66.0 81.1 40.9 49.3 64.7 48.1 54.5 51.5 Figure 4: Evolution of training rewards and testing scores during GRPO training. (a) For all model sizes (3B, 7B, and 14B), the trai... | https://arxiv.org/abs/2505.21668v1 |
training on fewer tasks leads to more stable and pronounced reward improvements. In particular, training on a single task likeGame24 —a task that requires fine-grained switching between text and code reasoning [Chen et al., 2024b]—yields reward curves similar to those in domain-specific settings. This suggests that tas... | https://arxiv.org/abs/2505.21668v1 |
14B, All Text 14B, All Code DeepSeek 53.1 27.9 28.7 40.1 43.4 Qwen-2.5 57.0 32.2 41.5 44.1 47.0 Figure 7: GRPO vs. PPO and warm- vs. cold-start. (a–b) GRPO consistently beats PPO when training on (a) all 107 tasks and (b) the single task Game24 in the 3B model. (c–d) With GRPO, a warm start (preceded by S... | https://arxiv.org/abs/2505.21668v1 |
RL SFT [Chen et al., 2024d] and RL [Ouyang et al., 2022] are widely used for LLM fine-tuning. To handle multi-turn agent tasks, these methods are extended with goal-conditioned rewards [Zhou et al., 2024, Zhai et al., 2024, Zhang et al., 2024]. Self-generated data, combined with search and rejection sampling [Zhou et a... | https://arxiv.org/abs/2505.21668v1 |
Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753 , 2022. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:22... | https://arxiv.org/abs/2505.21668v1 |
27744, 2022. Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. In Proceedings of the Twentieth European Conference on Computer Systems , EuroSys ’25, page 1279–1297. ACM, March 2025. doi: 10.1145/368903... | https://arxiv.org/abs/2505.21668v1 |
, 2022. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651 , 2023. Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao ... | https://arxiv.org/abs/2505.21668v1 |
... 9; 5.3 Ablation study 9; 6 Related Work 10; 7 Conclusion 10; A Limitations and societal impacts 16; B Ablation study on masking code execution output 16; C Prompt for All Text + CoT and All Code + CoT 16; D Description of reasoning and plannin... | https://arxiv.org/abs/2505.21668v1 |
Remember in the final code you still need to output each number in the final equation! Start the python block with ``` python. D Description of reasoning and planning tasks Here we describe the 144 training and testing tasks. They require strong symbolic, mathematical, logical, geometrical, scientific, and commonsens... | https://arxiv.org/abs/2505.21668v1 |
can defeat, be defeated by, or draw with others according to predefined relationships. T7 - Cryptanalysis In this task, you are provided with a combination lock consisting of numbers and letters, where neither the numbers nor the letters repeat. Using a series of guesses and feedback, the goal is to deduce the correct ... | https://arxiv.org/abs/2505.21668v1 |
based on these specific conditions. The goal is to determine a result based on a series of rounds. T18 - MATH-Count&Probability This is the math reasoning dataset from MATH dataset [Hendrycks et al., 2021], with specific focus on counting and probability questions. T19 - MATH-Geometry This is the math reasoning dataset... | https://arxiv.org/abs/2505.21668v1 |
possible, where each rule modifies the string based on specific patterns or conditions present in the current string state. For example, a modification rule can be “If the string ends with ‘ba’, replace it with ‘ab’.” T30 - String Insertion The task is to transform a string by scanning it from left to right and inserti... | https://arxiv.org/abs/2505.21668v1 |
enjoyed. T46 - Multi-Step Arithmetic Perform multi-step calculations involving addition, subtraction, multiplication, and division to obtain the correct result. T47 - Navigate Follow a sequence of movement instructions and state whether the agent finishes at its starting point. T48 - Object Counting Given a list of... | https://arxiv.org/abs/2505.21668v1 |
a description of the calendar, answer a question by conducting arithmetic calculations such as adding or subtracting days / months / years or computing weekday differences. T75 - Chain Sum The task is to calculate a simple arithmetic problem and output the answer. T76 - Circuit Logic Given a logic circuit with logical op... | https://arxiv.org/abs/2505.21668v1 |
filling, emptying, or pouring from a jug to another. T99 - Knight Swap The task is to swap two knights on a chessboard in the fewest moves. T100 - Knights Knaves The task is to determine who is a knight (truth-teller) or knave from their statements. T101 - Largest Island The task is to find the max connected component ... | https://arxiv.org/abs/2505.21668v1 |
grids and apply the rule to predict corresponding output of a test input grid. T124 - Rectangle Count The task is to count how many rectangles are present in an ASCII grid. T125 - Rotate Matrix Given a square matrix, the task is to rotate it and output the rotated matrix. T126 - Rotten Oranges You are given an n x n gr... | https://arxiv.org/abs/2505.21668v1 |
✗ Matrix Transform. ✗ ✓ ✗ ✗ ✗ ✗ New operator ✓ ✗ ✗ ✗ ✗ ✗ Number Multiplying ✓ ✗ ✗ ✗ ✗ ✗ Pattern Recognition ✗ ✓ ✗ ✗ ✗ ✓ Pooling ✓ ✓ ✗ ✗ ✗ ✗ Reversi ✗ ✓ ✗ ✗ ✗ ✗ Statistical Counting ✓ ✗ ✗ ✓ ✗ ✗ String Del. &Modi. ✗ ✗ ✓ ✓ ✗ ✓ String Insertion ✗ ✗ ✓ ✓ ✗ ✓ String Splitting ✗ ✗ ✓ ✓ ✗ ✓ String Synthesis ✗ ✗ ✓ ✓ ✗ ✓ Synthesis... | https://arxiv.org/abs/2505.21668v1 |
✗ ✗ Spell Backward ✗ ✗ ✗ ✓ ✗ ✗ Spiral Matrix ✗ ✓ ✗ ✗ ✗ ✗ String Manipulation ✗ ✗ ✓ ✓ ✗ ✗ Time Intervals ✓ ✗ ✓ ✗ ✗ ✗ Word Seq. Reversal ✗ ✗ ✗ ✓ ✗ ✗ Planning: Blocksworld ✗ ✓ ✓ ✗ ✓ ✗ Table 4 (continued): Task Math Spatial Logical Order Optimization Search BoxLift ✗ ✗ ✓ ✗ ✓ ✗ BoxNe... | https://arxiv.org/abs/2505.21668v1 |
Propositional Logic ✗ ✗ ✓ ✗ ✗ ✗ Ransom Note ✗ ✗ ✓ ✗ ✗ ✓ Rearc ✗ ✓ ✗ ✗ ✗ ✓ Self Reference ✗ ✗ ✓ ✗ ✗ ✗ Syllogism ✗ ✗ ✓ ✗ ✗ ✗ Zebra Puzzles ✗ ✗ ✓ ✗ ✗ ✗ E Full table of experimental results Table 5: Experimental results on Sy... | https://arxiv.org/abs/2505.21668v1 |
Three Objects 98 99 94 98 96 100 99 Movie Recommendation 67 67 66 80 77 76 74 Multistep Arithmetic Two 98 98 70 100 98 98 97 Navigate 92 90 73 95 96 97 98 Object Counting 87 94 55 94 95 100 99 Penguins In A Table 95 94 62 100 98 99 97 Tracking Shuffled Objects Seven Objects 85 95 71 99 100 99 95 Tracking Shuffled Objec... | https://arxiv.org/abs/2505.21668v1 |
Logic 55 53 44 46 47 55 49 Quantum Lock 23 62 27 32 40 45 58 Ransom Note 86 99 78 92 94 94 100 Rectangle Count 20 1 1 8 13 38 38 Rotate Matrix 29 4 1 0 1 8 3 Rotten Oranges 1 15 14 16 17 10 94 Rush Hour 0 0 0 0 0 0 0 Self Reference 12 8 6 8 6 13 9 Shortest Path 6 44 23 25 44 16 100 Simple Geometry 49 0 0 12 27 99 100 S... | https://arxiv.org/abs/2505.21668v1 |
Cube 0 0 0 0 0 0 0 Simple Equations 100 0 40 91 92 99 100 Spell Backward 26 13 66 86 85 84 82 Time Intervals 62 53 32 71 78 81 96 Word Sequence Reversal 21 33 31 99 98 100 100 Table 6: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (Qwen2.5-7B- Instruct-1M, Qwen2.5-3B-Instruct). Methods Qwen2.5-7B-... | https://arxiv.org/abs/2505.21668v1 |
Causal Judgement 56 57 57 69 70 51 53 53 61 65 Date Understanding 75 57 83 84 85 69 55 68 81 87 Disambiguation QA 61 60 65 71 72 49 59 52 58 67 Dyck Languages 0 3 3 16 20 3 6 4 11 13 Formal Fallacies 60 53 60 86 83 54 58 56 73 75 Geometric Shapes 43 35 40 72 70 21 21 24 67 65 Hyperbaton 82 77 76 91 92 71 58 71 87 90 Lo... | https://arxiv.org/abs/2505.21668v1 |
2 0 1 26 48 Fraction Simplification 26 100 1 96 83 0 0 0 87 90 Futoshiki 0 0 0 0 0 0 0 0 0 0 Game Of Life 3 30 24 43 22 2 2 3 37 42 Gcd 68 80 95 97 94 88 32 10 96 96 Graph Color 70 2 25 49 64 43 3 15 21 22 Group Anagrams 42 0 10 85 89 27 0 3 12 43 Isomorphic Strings 69 30 29 89 87 7 0 3 76 80 Jugs 0 0 2 0 0 0 0 0 0 0 K... | https://arxiv.org/abs/2505.21668v1 |
3 0 0 0 1 2 Eight Queens 0 47 25 63 61 1 13 4 55 58 Game24 16 26 35 74 80 3 10 16 73 72 Gridworld 0 1 0 3 4 0 2 2 4 3 Letters 0 59 91 99 90 1 49 23 72 80 Math Count. And Probab. 72 75 51 77 73 61 66 53 69 70 Table 6: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (Qwen2.5-7B-Instruct-1M, Qwen2.... | https://arxiv.org/abs/2505.21668v1 |
DeepSeek-7B DeepSeek-14B Task success rate % All Text + CoT All Code + CoT CI wo Fine-tune All Text + CoT All Code + CoT Ave. Norm., Unseen 27.9 28.7 53.1 40.1 43.4 Unseen Tasks Table 7: Experimental results on SymBench, Big-Bench-Hard, and Reasoning-Gym (DeepSeek-7B, DeepSeek-1... | https://arxiv.org/abs/2505.21668v1 |
arXiv:2505.21670v1 [cs.CL] 27 May 2025. Rethinking the Outlier Distribution in Large Language Models: An In-depth Study. Rahul Raman¹, Khushi Sharma¹, Sai Qian Zhang¹ (¹New York University; {rr4549, ks7406, sai.zhang}@nyu.edu). Abstract: Investigating outliers in large language models (LLMs) is crucial due to their significant imp... | https://arxiv.org/abs/2505.21670v1 |
model with a smoother value distribution in its activations. Subsequently, quantization algorithms such as GPTQ (Frantar et al., 2022) and OBQ (Frantar and Alistarh, 2022) are applied to produce low-precision LLMs, as shown in Figure 1. Outlier smoothing is a crucial step in achieving efficient LLM quantization. Unde... | https://arxiv.org/abs/2505.21670v1 |
sults of LLMs. 2 Background and Related Work 2.1 LLM Operations Modern LLMs (e.g., Llama series (Touvron et al., 2023a,b), GPT series (Radford et al., 2019; Brown, 2020)) are constructed as a stack of transformer decoders, with each decoder comprising two fundamental components: a Self-Attention (SA) block and a feed... | https://arxiv.org/abs/2505.21670v1 |
[Figure 3 diagram: Quantize X with Q(W), Q(X) → Y’; Standardization (X−μ)/δ; Rescaling γX+β.] Figure 3: (a) One example of massive activation presented in the inputs x1. (b) An example of outlier channel at position x2 in the LLM. (c) The existence of the outlier will lead to an output Y’ different from th... | https://arxiv.org/abs/2505.21670v1 |
groups tokens with MAs and jointly quantizes them, resulting in reduced quantization error. This approach has also been applied to KV cache quantization (Zhang et al., 2024a), following the same principle. Col- lectively, these studies highlight the critical im- portance of understanding outlier behavior within LLMs to... | https://arxiv.org/abs/2505.21670v1 |
across each layer. Right: after removing the MAs in the residual connection, only TMAs are left. WikiText (Merity et al., 2016) and C4 (Huggingface, 2022). Each experiment is averaged over 100 random samples from the dataset. LLM performance is evaluated using the perplexity (PPL) metric. Following the definition of MAs fro... | https://arxiv.org/abs/2505.21670v1 |
not caused by residual connections. To differentiate these MAs, we call the MAs that are caused by the residual link Fake MAs (FMAs), and the rest True MAs (TMAs). To illustrate the presence of TMAs and FMAs, we conduct experiments on LLaMA-13B and GPT-2. The left side of Figure 4 and Figure 5 show the top three ele... | https://arxiv.org/abs/2505.21670v1 |
respective tensors. As shown in Table 2, the results remain comparable to the original LLM. Notably, for LLaMA2-13B, GPT-2, and Qwen, the PPL values are nearly identical to those of the original LLM on both WikiText-2 and C4, demonstrating that TMAs and FMAs can be effectively eliminated without any negative imp... | https://arxiv.org/abs/2505.21670v1
have similar magnitudes, aligning with outlier channel behavior. Without loss of generality, in the following experiment, m is set to 4, and β is set to 1/3. We also present the results under different settings in the subsequent sections. 4.2 Observations on channel-wise Outliers We examine the presence of outlier chan... | https://arxiv.org/abs/2505.21670v1
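A threshold-based MA detector in the spirit of the m and β parameters above might look like the sketch below. How exactly m and β combine in the paper's definition is not recoverable from this excerpt, so the specific criterion here (magnitude at least β times the global maximum and at least m times the mean magnitude) is an illustrative assumption:

```python
import numpy as np

def find_massive_activations(x, m=4, beta=1/3):
    """Flag entries whose magnitude is both at least `beta` times the
    tensor's global maximum and at least `m` times the mean magnitude.
    (Illustrative criterion: the paper's exact combination of m and
    beta is assumed, not quoted.)"""
    mag = np.abs(x)
    mask = (mag >= beta * mag.max()) & (mag >= m * mag.mean())
    return np.argwhere(mask)

rng = np.random.default_rng(0)
acts = rng.normal(size=(16, 64))     # (tokens, channels)
acts[3, 10] = 500.0                  # plant one massive activation
print(find_massive_activations(acts))  # -> [[ 3 10]]
```

Returning (token, channel) index pairs matches how the paper distinguishes token-localized MAs from channel-wise outliers, which recur at the same channel index across many tokens.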
the impact of the rescaling operation, we modify the rescaling factor vector γ by identifying the indices associated with the outlier channels in the output of the normalization operation. This modification was applied to the normalization layers within both SA and FFN layers. Specifically, the rescaling factor elemen... | https://arxiv.org/abs/2505.21670v1
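The γ-modification experiment can be sketched as follows. The excerpt truncates before stating what the selected γ elements were set to, so both the k-SD rule for locating outlier channels and the mean-replacement used here are assumptions for illustration:

```python
import numpy as np

def suppress_outlier_channels(x_norm, gamma, k=4.0):
    """Replace gamma entries at outlier-channel positions with mean(gamma).
    Outlier channels: columns of the normalized output whose mean magnitude
    exceeds k standard deviations above the cross-channel average.
    (Both the k-SD rule and the mean replacement are illustrative.)"""
    ch_mag = np.abs(x_norm).mean(axis=0)           # per-channel magnitude
    outliers = ch_mag > ch_mag.mean() + k * ch_mag.std()
    gamma_mod = gamma.copy()
    gamma_mod[outliers] = gamma.mean()
    return gamma_mod, np.flatnonzero(outliers)

rng = np.random.default_rng(0)
x_norm = rng.normal(size=(32, 64))                 # (tokens, channels)
x_norm[:, 5] += 40.0                               # an outlier channel
gamma = rng.uniform(0.5, 1.5, size=64)
gamma_mod, idx = suppress_outlier_channels(x_norm, gamma)
print(idx)                                          # -> [5]
```

Because the rescaling output is γ·x + β channel-wise, damping γ at exactly the outlier indices is a surgical way to test whether those channels drive the outliers downstream.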
amount of random channels. This comparison highlights the importance of specific weight channels that contribute to the presence of channel-wise outliers on LLM accuracy. Similar studies have been performed on the key matrix, where the observed trend is similar to that of the query matrix, while the value matrix does not fo... | https://arxiv.org/abs/2505.21670v1
. Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. 2024b. Quarot: Outlier-free 4-bit inference in rotated llms. arXiv preprint arXiv:2404.00456. Lorenzo Bini, Marco Sorbi, and Stephane Marchand-Maillet. 2024. Characterizing massive act... | https://arxiv.org/abs/2505.21670v1
by block reconstruction. arXiv preprint arXiv:2102.05426. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024a. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Le... | https://arxiv.org/abs/2505.21670v1
of large language models by equivalent and optimal shifting and scaling. arXiv preprint arXiv:2304.09145. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2023. Smoothquant: Accurate and efficient post-training quantization for large language models. In International Conference on Machin... | https://arxiv.org/abs/2505.21670v1
x9: −0.36, −0.34

Table 6 (Table A.3): Wikitext-2 PPL under outlier vs. random channel replacements at different thresholds for LLaMA-3.2-3B. The base model perplexity is 7.8316.

| Intervention (setting to mean) | 6 SD | 4 SD | 2 SD |
|---|---|---|---|
| QKV outliers | 7.8395 | 7.9201 | 10.9746 |
| QKV random | 7.8315 | 7.8419 | 14.0361 |
| LayerNorm outliers | 8.3497 | 11.420... | | |

https://arxiv.org/abs/2505.21670v1
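The "setting to mean" intervention behind those rows can be sketched directly; the per-tensor statistics and the Gaussian test tensor are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def mean_ablate_outliers(w, k):
    """Set entries more than k standard deviations from the tensor mean
    to the tensor mean (the 'setting to mean' intervention; per-tensor
    statistics are an assumption)."""
    mu, sd = w.mean(), w.std()
    out = w.copy()
    mask = np.abs(w - mu) > k * sd
    out[mask] = mu
    return out, int(mask.sum())

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
for k in (6, 4, 2):
    _, n = mean_ablate_outliers(w, k)
    print(k, n)   # tighter thresholds ablate many more entries
```

This also explains the table's shape: at 6 SD almost nothing is touched (PPL barely moves), while at 2 SD a large fraction of entries is replaced, so even the "random" condition degrades sharply.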
arXiv:2505.21671v1 [cs.AI] 27 May 2025. Adaptive Frontier Exploration on Graphs with Applications to Network-Based Disease Testing. Davin Choo∗ (Harvard University, davinchoo@seas.harvard.edu), Yuqi Pan∗ (Harvard University, yuqipan@g.harvard.edu), Tonghan Wang (Harvard University, twang1@g.harvard.edu), Milind Tambe (Harvard University... | https://arxiv.org/abs/2505.21671v1
total discounted reward: $\pi^* = \arg\max_{\pi} \sum_{t=1}^{n} \beta^{t-1} \sum_{v \in \Sigma} P\big(X_{\pi(S_{t-1})} = v \mid S_{t-1}\big) \cdot r\big(X_{\pi(S_{t-1})}, v\big)$, where $X_{\pi(S_{t-1})}$ is the node selected by policy $\pi$ at time $t$, and $r(\cdot,\cdot)$ is the label-dependent reward. While the optimal policy can be computed via dynamic programming, it is intractable for general graphs due to the exponential state ... | https://arxiv.org/abs/2505.21671v1
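To make the objective concrete, here is a toy evaluation of the discounted sum for a policy that tests nodes in a fixed order, under the simplifying assumption of independent binary node labels (in the actual AFEG setting the state S_{t−1} evolves and labels may be correlated, so this is only a sketch of the arithmetic):

```python
def discounted_value(order, p_pos, reward_pos=1.0, reward_neg=0.0, beta=0.9):
    """Expected total discounted reward for testing nodes in a fixed order,
    assuming independent node labels (a deliberate simplification of the
    state-dependent objective)."""
    return sum(
        beta ** t * (p_pos[v] * reward_pos + (1 - p_pos[v]) * reward_neg)
        for t, v in enumerate(order)
    )

p = {"a": 0.5, "b": 0.2}
# Discounting rewards early positives: testing the likelier-positive
# node first earns its expected reward undiscounted.
print(discounted_value(["a", "b"], p))  # 0.5 + 0.9*0.2 = 0.68
print(discounted_value(["b", "a"], p))  # 0.2 + 0.9*0.5 = 0.65
```

Even in this two-node example the ordering matters; with state-dependent probabilities and an exponential number of reachable states, enumerating orderings becomes intractable, which motivates the index-based policy.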
have shown effectiveness in South Africa [JPC+19] and have also been explored for other infectious diseases beyond HIV [JSK+17, MWBDM+25]; see also [CLJ+24] for a WHO-commissioned systematic review on social network-based HIV testing. Fig. 1 illustrates how we can model the network-based disease testing problem as an AF... | https://arxiv.org/abs/2505.21671v1
in diseases such as HIV. In Appendix A, we propose a method to learn parameters from past disease data to define a joint distribution P on new interaction networks so as to define new AFEG instances. Contribution 3: Empirical evaluation. We evaluate our Gittins index-based policy on synthetic datasets and show that it p... | https://arxiv.org/abs/2505.21671v1