arxiv:2601.19375

Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection

Published on Jan 27 · Submitted by Quy-Anh Dang on Jan 28
Abstract

Selective Steering enables continuous, norm-preserving control of language model behavior through targeted layer selection and mathematically rigorous rotation techniques.

AI-generated summary

Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: https://github.com/knoveleng/steering
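The norm-preserving property the abstract describes can be illustrated with a minimal sketch: rotating an activation vector within a fixed 2D plane, while leaving its orthogonal complement untouched, is an orthogonal map and therefore cannot change the vector's norm. The function below is an illustration of that general principle only, not the paper's implementation; the names, the choice of orthonormal plane basis `v1`, `v2`, and the angle parameter `theta` are all assumptions.

```python
import numpy as np

def norm_preserving_rotation(h, v1, v2, theta):
    """Rotate activation h by angle theta in the plane spanned by
    orthonormal directions v1 and v2 (hypothetical sketch).

    The component of h orthogonal to the plane is left unchanged, and
    the in-plane component is rotated by a standard 2x2 rotation, so
    the overall map is orthogonal and ||h|| is preserved exactly.
    """
    # Coordinates of h in the steering plane.
    a = h @ v1
    b = h @ v2
    # Component of h orthogonal to the plane (untouched by steering).
    h_perp = h - a * v1 - b * v2
    # Rotate the in-plane coordinates by theta.
    a_new = a * np.cos(theta) - b * np.sin(theta)
    b_new = a * np.sin(theta) + b * np.cos(theta)
    return h_perp + a_new * v1 + b_new * v2
```

With `theta = 0` the activation is returned unchanged, and varying `theta` gives the continuous control the abstract contrasts with binary directional ablation.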

Community

Paper submitter

We introduce Selective Steering, a principled norm-preserving activation steering method that enables stable, continuous control of LLM behavior while significantly improving adversarial attack effectiveness without sacrificing model capabilities.
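The second ingredient, discriminative layer selection, steers only layers where the two behavior classes align with the steering direction with opposite signs. A hypothetical sketch of such a criterion, assuming per-layer feature matrices for a positive and a negative class and one candidate direction per layer (the function name, data layout, and sign-product test are all assumptions, not the paper's code):

```python
import numpy as np

def select_layers(pos_feats, neg_feats, directions):
    """Return layer indices exhibiting opposite-signed class alignment.

    pos_feats, neg_feats: mappings from layer index to (n_samples, d)
    feature arrays for each class; directions: per-layer steering
    vectors. A layer is selected when the mean projections of the two
    classes onto its direction have opposite signs.
    """
    selected = []
    for layer, d in enumerate(directions):
        d = d / np.linalg.norm(d)
        pos_align = float(np.mean(pos_feats[layer] @ d))
        neg_align = float(np.mean(neg_feats[layer] @ d))
        if pos_align * neg_align < 0:  # opposite-signed alignment
            selected.append(layer)
    return selected
```

Restricting steering to such layers avoids perturbing layers where the direction does not separate the classes, which is consistent with the capability-retention results reported in the abstract.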

Page: https://knoveleng.github.io/steering/
Code: https://github.com/knoveleng/steering

