Title: YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation

URL Source: https://arxiv.org/html/2601.08441

Markdown Content:
Abdelaziz Bounhar 1,∗, Rania Hossam Elmohamady Elbadry 1, Hadi Abdine 1,
Preslav Nakov 1, Michalis Vazirgiannis 1,2, Guokan Shang 1,∗
1 MBZUAI, 2 Ecole Polytechnique

∗Correspondence: {abdelaziz.bounhar, guokan.shang}@mbzuai.ac.ae

###### Abstract

Steering Large Language Models (LLMs) through activation interventions has emerged as a lightweight alternative to fine-tuning for alignment and personalization. Recent work on Bi-directional Preference Optimization (BiPO) shows that dense steering vectors can be learned directly from preference data in a Direct Preference Optimization (DPO) fashion, enabling control over truthfulness, hallucinations, and safety behaviors. However, dense steering vectors often entangle multiple latent factors due to neuron multi-semanticity, limiting their effectiveness and stability in fine-grained settings such as cultural alignment, where closely related values and behaviors (e.g., among Middle Eastern cultures) must be distinguished. In this paper, we propose Yet another Policy Optimization (YaPO), a reference-free method that learns sparse steering vectors in the latent space of a Sparse Autoencoder (SAE). By optimizing sparse codes, YaPO produces disentangled, interpretable, and efficient steering directions. Empirically, we show that YaPO converges faster, achieves stronger performance, and exhibits improved training stability compared to dense steering baselines. Beyond cultural alignment, YaPO generalizes to a range of alignment-related behaviors, including hallucination, wealth-seeking, jailbreak, and power-seeking. Importantly, YaPO preserves general knowledge, with no measurable degradation on MMLU. Overall, our results show that YaPO provides a general recipe for efficient, stable, and fine-grained alignment of LLMs, with broad applications to controllability and domain adaptation. The associated code and data are publicly available at [https://github.com/MBZUAI-Paris/YaPO](https://github.com/MBZUAI-Paris/YaPO).
1 Introduction
--------------

![Image 1: Refer to caption](https://arxiv.org/html/2601.08441v1/x1.png)

Figure 1: Overview of YaPO. Unlike dense BiPO, which learns entangled steering directions directly in activation space, YaPO leverages a pretrained Sparse Autoencoder (SAE) to project activations into an interpretable sparse space. By optimizing sparse codes, YaPO learns disentangled and robust steering vectors that improve convergence, stability, and cultural alignment, while preserving generalization across domains.

Large language models have achieved remarkable progress in generating coherent, contextually appropriate, and useful text across domains. However, controlling their behavior in a fine-grained and interpretable manner remains a central challenge for alignment and personalization. Traditional approaches such as Reinforcement Learning from Human Feedback (RLHF) (Ziegler et al., [2019](https://arxiv.org/html/2601.08441v1#bib.bib27 "Fine-tuning language models from human preferences")) are effective but costly, difficult to scale, and often inflexible, while also offering little transparency into how specific behaviors are modulated. Prompt engineering provides a lightweight alternative but is brittle and generally less effective than fine-tuning. More importantly, RLHF lacks scalability: modulating a single behavior may require updating millions of parameters or collecting large amounts of preference data, with the risk of degrading performance on unrelated tasks. These limitations have motivated growing interest in activation steering, a lightweight paradigm that guides model outputs by directly modifying hidden activations at inference time, injecting steering vectors at specific layers without retraining or altering the original model weights (Turner et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib21 "Activation addition: steering language models without optimization")).
Early activation steering methods such as Contrastive Activation Addition (CAA) (Panickssery et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib13 "Steering llama 2 via contrastive activation addition")) compute steering vectors by averaging activation differences over contrastive prompts. While simple, this approach captures only coarse behavioral signals and often fails in complex settings. Bi-directional Preference Optimization (BiPO) (Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization")) introduced a DPO-style objective to directly learn dense steering vectors from preference data, enabling improved control over behaviors such as hallucination and refusal.

However, both CAA and BiPO rely on dense steering vectors, which are prone to entangling multiple latent factors due to neuron multi-semanticity and superposition (Elhage et al., [2022](https://arxiv.org/html/2601.08441v1#bib.bib14 "Toy models of superposition")). This limits their stability, interpretability, and effectiveness in fine-grained alignment settings. In parallel, Sparse Activation Steering (SAS) (Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")) leverages Sparse Autoencoders (SAEs) to operate on approximately monosemantic features, enabling more interpretable interventions, but relies on static averaged activations rather than learnable sparse vectors.

In this work, we introduce Yet another Policy Optimization (YaPO), a reference-free method that learns trainable sparse steering vectors directly in the latent space of a pretrained SAE using a BiPO-style objective. YaPO combines the preference optimization of BiPO with the interpretability of SAS, yielding sparse, stable, and effective steering directions with minimal training overhead.

We study cultural adaptation as a representative domain adaptation setting, introducing a new benchmark spanning five language families and fifteen cultural contexts. Our results identify a substantial implicit–explicit localization gap in baseline models, consistent with (Veselovsky et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib16 "Localized cultural knowledge is conserved and controllable in large language models")), and show that YaPO consistently closes this gap through improved fine-grained alignment. We further assess the generalization of YaPO on MMLU and on established alignment benchmarks from prior studies (Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization"); Panickssery et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib13 "Steering llama 2 via contrastive activation addition"); Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")).

In summary, our contributions are threefold:

- We propose YaPO, the first reference-free method for learning _sparse steering vectors_ (in the latent space of an SAE) from preference data.
- We curate a new dataset and benchmark for cultural alignment that targets fine-grained cultural distinctions, including same-language cultures with subtle differences in values and norms, spanning five language families and fifteen cultural contexts.
- We empirically show that YaPO converges faster, exhibits improved training stability, and yields more interpretable steering directions than dense baselines, while also generalizing beyond cultural alignment to broader alignment tasks and benchmarks.
$$
\min_{v}\;\mathbb{E}_{\substack{d\sim\mathcal{U}\{-1,1\}\\ (x,\,y_{w},\,y_{l})\sim\mathcal{D}}}\left[\log\sigma\left(d\,\beta\log\frac{\pi_{L+1}(y_{w}\mid A_{L}(x)+dv)}{\pi_{L+1}(y_{w}\mid A_{L}(x))}-d\,\beta\log\frac{\pi_{L+1}(y_{l}\mid A_{L}(x)+dv)}{\pi_{L+1}(y_{l}\mid A_{L}(x))}\right)\right] \tag{1}
$$
2 Related Works
---------------

Alignment and controllability. RLHF (Christiano et al., [2017](https://arxiv.org/html/2601.08441v1#bib.bib26 "Deep reinforcement learning from human preferences"); Ziegler et al., [2019](https://arxiv.org/html/2601.08441v1#bib.bib27 "Fine-tuning language models from human preferences"); Stiennon et al., [2020](https://arxiv.org/html/2601.08441v1#bib.bib28 "Learning to summarize with human feedback"); Ouyang et al., [2022](https://arxiv.org/html/2601.08441v1#bib.bib29 "Training language models to follow instructions with human feedback")) has become the standard approach to align LLMs, training a reward model on human preference data and fine-tuning with PPO (Schulman et al., [2017](https://arxiv.org/html/2601.08441v1#bib.bib30 "Proximal policy optimization algorithms")) under the Bradley–Terry framework (Bradley and Terry, [1952](https://arxiv.org/html/2601.08441v1#bib.bib19 "Rank analysis of incomplete block designs: i. the method of paired comparisons")). Recent methods simplify this pipeline by bypassing explicit reward modeling: DPO (Rafailov et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib31 "Direct preference optimization: your language model is secretly a reward model")) directly optimizes on preference pairs, while SLiC (Zhao et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib32 "Slic-hf: sequence likelihood calibration with human feedback")) introduces a contrastive calibration loss with regularization toward the SFT model. Statistical rejection sampling (Liu et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib33 "Statistical rejection sampling improves preference optimization")) unifies both objectives and provides a tighter policy estimate.

Activation engineering. Activation-based methods steer LLMs by freezing weights and intervening on hidden activations. Early approaches optimized sentence-specific latent vectors (Subramani et al., [2022](https://arxiv.org/html/2601.08441v1#bib.bib20 "Extracting latent steering vectors from pretrained language models")), while activation addition (Turner et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib21 "Activation addition: steering language models without optimization")) and CAA (Rimsky et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib22 "Steering llama 2 via contrastive activation addition")) compute averaged activation differences from contrastive prompts. Although simple, these methods are often noisy and unstable, particularly for long-form or alignment-critical generation (Wang and Shu, [2023](https://arxiv.org/html/2601.08441v1#bib.bib23 "Backdoor activation attack: attack large language models using activation steering for safety-alignment")). More recent work perturbs attention heads (Li et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib24 "Inference-time intervention: eliciting truthful answers from a language model"); Liu et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib25 "In-context vectors: making in context learning more effective and controllable through latent space steering")). BiPO (Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization")) improves over prior work by framing steering as preference optimization, learning dense steering vectors via a bi-directional DPO-style objective.

Sparse activation steering. To mitigate superposition, Sparse Autoencoders (SAEs) (Lieberum et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib18 "Gemma scope: open sparse autoencoders everywhere all at once on gemma 2")) decompose activations into sparse, approximately monosemantic features. Sparse Activation Steering (SAS) (Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")) exploits this structure by averaging sparse activations from contrastive data, yielding interpretable and fine-grained control. However, SAS does not optimize steering directions against preferences, limiting its effectiveness.

SAE-based steering and editing. Recent work combines activation steering with sparse or structured representation bases (Wu et al., [2025a](https://arxiv.org/html/2601.08441v1#bib.bib43 "AxBench: steering llms? even simple baselines outperform sparse autoencoders"), [b](https://arxiv.org/html/2601.08441v1#bib.bib42 "Improved representation steering for language models"); Chalnev et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib39 "Improving steering vectors by targeting sparse autoencoder features"); He et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib40 "SAE-ssv: supervised steering in sparse representation spaces for reliable control of language models"); Sun et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib38 "HyperSteer: activation steering at scale with hypernetworks"); Xu et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib41 "EasyEdit2: an easy-to-use steering framework for editing large language models")). ReFT-r1 (Wu et al., [2025a](https://arxiv.org/html/2601.08441v1#bib.bib43 "AxBench: steering llms? even simple baselines outperform sparse autoencoders")) learns a single dense steering direction on frozen models using a language-modeling objective with sparsity constraints. RePS (Wu et al., [2025b](https://arxiv.org/html/2601.08441v1#bib.bib42 "Improved representation steering for language models")) introduces a reference-free, bi-directional preference objective to train intervention-based steering methods. Other approaches operate directly in SAE space: SAE-TS and SAE-SSV (Chalnev et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib39 "Improving steering vectors by targeting sparse autoencoder features"); He et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib40 "SAE-ssv: supervised steering in sparse representation spaces for reliable control of language models")) optimize or select sparse SAE features for controlled steering, while HyperSteer (Sun et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib38 "HyperSteer: activation steering at scale with hypernetworks")) generates steering vectors on demand via a hypernetwork.

Positioning of YaPO. BiPO provides strong optimization but suffers from dense entanglement; SAS offers interpretability but lacks optimization. YaPO unifies these lines by learning preference-optimized, sparse steering vectors in SAE space. This yields disentangled, interpretable, and stable steering, with improved convergence and generalization across cultural alignment, truthfulness, hallucination suppression, and jailbreak defense.
3 Method
--------

### 3.1 Motivation: From Dense to Sparse Steering

Existing approaches extract steering vectors by directly operating in the dense activation space of LLMs (Rimsky et al., [2023](https://arxiv.org/html/2601.08441v1#bib.bib22 "Steering llama 2 via contrastive activation addition"); Wang and Shu, [2023](https://arxiv.org/html/2601.08441v1#bib.bib23 "Backdoor activation attack: attack large language models using activation steering for safety-alignment")). While effective in some cases, these methods inherit the multi-semantic entanglement of neurons: individual dense features often conflate multiple latent factors (Elhage et al., [2022](https://arxiv.org/html/2601.08441v1#bib.bib14 "Toy models of superposition")), leading to noisy and unstable control signals. As a result, vectors obtained from contrastive prompt pairs can misalign with actual generation behaviors, especially in alignment-critical tasks.

To address this, we leverage SAEs, which have recently been shown to disentangle latent concepts in LLM activations into sparse, interpretable features (Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces"); Lieberum et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib18 "Gemma scope: open sparse autoencoders everywhere all at once on gemma 2")). By mapping activations into this sparse basis, steering vectors can be optimized along dimensions that correspond more cleanly to relevant semantic factors, improving both precision and interpretability.
### 3.2 Preference-Optimized Steering in Sparse Space

Let $A_{L}(x)$ denote the hidden activations of the transformer at layer $L$ for input $x$, and let $\pi_{L+1}$ denote the upper part of the transformer (from layer $L+1$ to the output). BiPO (Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization")) learns a steering vector $v\in\mathbb{R}^{k_{d}}$ in the dense activation space of dimension $k_{d}$ using the bi-directional preference optimization objective (see equation [1](https://arxiv.org/html/2601.08441v1#S1.E1 "In 1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")). Here $y_{w}$ and $y_{l}$ are respectively the preferred and dispreferred responses, jointly drawn with the prompt $x$ from the preference dataset $\mathcal{D}$, $\sigma$ is the logistic function, $\beta\geq 0$ is a deviation-control parameter, and $d\in\{-1,1\}$ is a uniformly random coefficient enforcing bi-directionality. At inference time, the learned steering vector $v$ is injected into the hidden state to perturb it toward the desired steering behavior as follows
$$
A_{L}(x) \leftarrow A_{L}(x) + d\cdot\lambda\cdot v, \qquad d\in\{-1,1\} \tag{2}
$$
with $d$ fixed to either $-1$ or $1$ (negative or positive steering) and $\lambda$ a multiplicative factor controlling the strength of the steering.
In contrast, with YaPO, we introduce a sparse transformation function $\Phi$ that steers activations through an SAE as follows:

$$
\Phi(A_{L}(x),\lambda,d,v)=\underbrace{\mathrm{Dec}\!\left(\mathrm{ReLU}\!\left(\mathrm{Enc}(A_{L}(x))+d\cdot\lambda\cdot v\right)\right)}_{\text{steered reconstruction}}+\underbrace{\Big(A_{L}(x)-\mathrm{Dec}\big(\mathrm{Enc}(A_{L}(x))\big)\Big)}_{\text{residual correction}} \tag{3}
$$
where Enc and Dec are the encoder and decoder of a pretrained SAE, and $v\in\mathbb{R}^{k_{s}}$ is the learnable steering vector in a sparse space of dimension $k_{s}\gg k_{d}$. To correct for the SAE reconstruction error, we add a residual correction term ensuring consistency with the original hidden state (see equation [3.2](https://arxiv.org/html/2601.08441v1#S3.Ex1 "3.2 Preference-Optimized Steering in Sparse Space ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")). The rationale behind applying the ReLU function is to enforce non-negativity of the sparse codes (Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")). We train steering vectors to increase the likelihood of preferred responses $y_{w}$ while decreasing that of dispreferred responses $y_{l}$. The resulting optimization objective is given in equation [4](https://arxiv.org/html/2601.08441v1#S3.E4 "In 3.2 Preference-Optimized Steering in Sparse Space ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
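As a sanity check on equation 3, the steered transformation $\Phi$ can be sketched in a few lines of NumPy. The toy SAE below (dimensions, Gaussian weights, and a plain ReLU encoder) is purely illustrative, not the Gemma-Scope architecture; note that with $v=0$ the residual-correction term makes $\Phi$ an exact identity on the hidden state:

```python
import numpy as np

rng = np.random.default_rng(0)
k_d, k_s = 8, 64  # toy dense and sparse dimensions (illustrative)

# Toy SAE with a ReLU encoder (illustrative stand-in for a pretrained SAE)
W_enc = rng.normal(0, 0.2, (k_s, k_d))
W_dec = rng.normal(0, 0.2, (k_d, k_s))

def enc(h):
    return np.maximum(W_enc @ h, 0.0)

def dec(s):
    return W_dec @ s

def phi(h, lam, d, v):
    """Steered reconstruction plus residual correction (Eq. 3)."""
    steered = dec(np.maximum(enc(h) + d * lam * v, 0.0))
    residual = h - dec(enc(h))  # SAE reconstruction error, added back
    return steered + residual

h = rng.normal(size=k_d)

# With v = 0 the ReLU is inactive (sparse codes are already non-negative),
# so Phi reduces exactly to the identity on the hidden state.
assert np.allclose(phi(h, lam=1.0, d=1, v=np.zeros(k_s)), h)
```

Because the residual term carries the reconstruction error, steering only perturbs the SAE-explained component of the activation, which is what keeps the intervention interpretable in feature space.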
$$
\min_{v}\;\mathbb{E}_{\substack{d\sim\mathcal{U}\{-1,1\}\\ (x,\,y_{w},\,y_{l})\sim\mathcal{D}}}\left[\log\sigma\left(d\,\beta\log\frac{\pi_{L+1}(y_{w}\mid\Phi(A_{L}(x),\lambda,d,v))}{\pi_{L+1}(y_{w}\mid A_{L}(x))}-d\,\beta\log\frac{\pi_{L+1}(y_{l}\mid\Phi(A_{L}(x),\lambda,d,v))}{\pi_{L+1}(y_{l}\mid A_{L}(x))}\right)\right] \tag{4}
$$
With $d=1$, the objective increases the relative probability of $y_{w}$ over $y_{l}$; with $d=-1$, it enforces the reverse. This symmetric training sharpens the vector’s alignment with the behavioral axis of interest (positive or negative steering).
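To make equation 4 concrete, the sketch below evaluates the bi-directional preference margin for a toy single-token "upper model" $\pi_{L+1}$ (a random linear head; all names and shapes are illustrative assumptions). Following the usual DPO convention, the loss is written as the negative log-sigmoid of the margin, which is minimized when steering raises the preferred response and lowers the dispreferred one:

```python
import numpy as np

rng = np.random.default_rng(1)
k_d, vocab = 8, 10
W_out = rng.normal(0, 0.5, (vocab, k_d))  # toy stand-in for pi_{L+1}

def log_prob(y, h):
    """log pi(y | h) for a single-token response under a linear head."""
    logits = W_out @ h
    return logits[y] - np.log(np.exp(logits).sum())

def preference_loss(h, h_steered, y_w, y_l, d, beta=0.1):
    """Negative log-sigmoid of the bi-directional margin in Eq. 4."""
    margin = d * beta * ((log_prob(y_w, h_steered) - log_prob(y_w, h))
                         - (log_prob(y_l, h_steered) - log_prob(y_l, h)))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

h = rng.normal(size=k_d)
y_w, y_l = 3, 7

# No steering (h_steered == h): zero margin, loss = log 2
assert np.isclose(preference_loss(h, h, y_w, y_l, d=1), np.log(2.0))

# Nudging h along the (y_w - y_l) logit direction strictly lowers the d=+1
# loss: the partition function cancels in the margin, leaving a positive term.
h_plus = h + 0.5 * (W_out[y_w] - W_out[y_l])
assert preference_loss(h, h_plus, y_w, y_l, d=1) < np.log(2.0)
```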
During optimization, we detach gradients through the SAE parameters (which, along with the LLM parameters, remain frozen) and only update $v$. This setup enables $v$ to live in a disentangled basis, while the decoder projects it back to the model’s hidden space. We summarize the overall optimization procedure in Algorithm [1](https://arxiv.org/html/2601.08441v1#alg1 "Algorithm 1 ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
4 Experiments
-------------

### 4.1 Experimental Setup

Target LLM. For clarity, in the main paper we present all experiments on Gemma-2-2B (Team et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib17 "Gemma 2: improving open language models at a practical size")), a lightweight yet efficient model. Scalability to the larger Gemma-2-9B is deferred to Appendix [D](https://arxiv.org/html/2601.08441v1#A4 "Appendix D Scalability to other Models ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"). The choice of this model is further motivated by the availability of pretrained SAEs from Gemma-Scope (Lieberum et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib18 "Gemma scope: open sparse autoencoders everywhere all at once on gemma 2")), which are trained directly on Gemma-2 hidden activations and enable sparse steering without the additional overhead of training SAEs from scratch.

Tasks. For readability, we focus on cultural adaptation, followed by a generalization study on other standard alignment tasks studied in previous work (Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization"); Panickssery et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib13 "Steering llama 2 via contrastive activation addition"); Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")). For cultural adaptation, we select the steering layer via activation patching; see Appendix [A](https://arxiv.org/html/2601.08441v1#A1 "Appendix A Layer Discovery ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"). Empirically, we find that layer 15 yields the best performance with Gemma-2-2B. Training details and hyperparameter settings are reported in Appendix [B](https://arxiv.org/html/2601.08441v1#A2 "Appendix B Training Details ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
Algorithm 1 YaPO: Yet another Policy Optimization

1: Input: LLM $\pi$, preference dataset $\mathcal{D}$, batch size $B$, layer $A_{L}$, SAE encoder Enc, decoder Dec, learning rate $\eta$, temperature $\beta$, epochs $N$
2: Output: Optimized steering vector $v^{\ast}$
3: Initialize $v_{0}\leftarrow\mathbf{0}\in\mathbb{R}^{k_{s}}$
4: for $e=0$ to $N-1$ do
5:  Sample minibatch $\mathcal{D}_{e}\sim\mathcal{D}$ of size $B$
6:  Sample direction $d\sim\mathcal{U}\{-1,1\}$
7:  for each $(x^{i},y_{w}^{i},y_{l}^{i})\in\mathcal{D}_{e}$ do
8:   $h^{i}\leftarrow A_{L}(x^{i})$
9:   $s^{i}\leftarrow \mathrm{Enc}(h^{i})$
10:   $\tilde{s}^{i}\leftarrow \mathrm{ReLU}(s^{i}+d\,v_{e})$
11:   $\tilde{h}^{i}\leftarrow \mathrm{Dec}(\tilde{s}^{i})$
12:   $\hat{h}^{i}\leftarrow \mathrm{Dec}(\mathrm{Enc}(h^{i}))$
13:   $h^{\prime\,i}\leftarrow \tilde{h}^{i}+(h^{i}-\hat{h}^{i})$
14:  end for
15:  Compute loss $\mathcal{L}$ as per equation [4](https://arxiv.org/html/2601.08441v1#S3.E4 "In 3.2 Preference-Optimized Steering in Sparse Space ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")
16:  $v_{e+1}\leftarrow \mathrm{AdamW}(v_{e},\nabla_{v_{e}}\mathcal{L},\eta)$
17: end for
18: return $v^{\ast}\leftarrow v_{N-1}$
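Algorithm 1 can be exercised end-to-end on a toy problem. The NumPy sketch below is illustrative only: a random linear head stands in for the frozen upper transformer, a tiny random SAE stands in for Gemma-Scope, plain gradient descent with finite-difference gradients replaces AdamW with autograd, and the loss is the negative log-sigmoid form of the objective in equation 4. After a few dozen steps the loss improves over the unsteered value of $\log 2$ in both steering directions:

```python
import numpy as np

rng = np.random.default_rng(2)
k_d, k_s, vocab = 6, 24, 8

# Frozen toy components (illustrative stand-ins for the LLM layer and SAE)
W_enc = rng.normal(0, 0.3, (k_s, k_d))
W_dec = rng.normal(0, 0.3, (k_d, k_s))
W_out = rng.normal(0, 0.5, (vocab, k_d))

def enc(h):
    return np.maximum(W_enc @ h, 0.0)

def dec(s):
    return W_dec @ s

def phi(h, lam, d, v):
    # Steered reconstruction + residual correction (Eq. 3, Alg. 1 steps 8-13)
    return dec(np.maximum(enc(h) + d * lam * v, 0.0)) + (h - dec(enc(h)))

def log_prob(y, h):
    logits = W_out @ h
    return logits[y] - np.log(np.exp(logits).sum())

def loss(v, batch, d, lam=1.0, beta=0.5):
    # Negative log-sigmoid of the bi-directional margin (Eq. 4)
    total = 0.0
    for h, y_w, y_l in batch:
        hs = phi(h, lam, d, v)
        margin = d * beta * ((log_prob(y_w, hs) - log_prob(y_w, h))
                             - (log_prob(y_l, hs) - log_prob(y_l, h)))
        total += -np.log(1.0 / (1.0 + np.exp(-margin)))
    return total / len(batch)

# Tiny synthetic preference batch: (hidden state, preferred id, dispreferred id)
batch = [(rng.normal(size=k_d), 1, 5) for _ in range(4)]

v = np.zeros(k_s)                    # step 3: v_0 = 0
eta, fd_eps = 0.1, 1e-4
loss_init = loss(v, batch, d=1)      # equals log(2): zero margin at v = 0
for step in range(60):               # steps 4-17
    d = rng.choice([-1, 1])          # step 6: sample steering direction
    grad = np.zeros(k_s)
    for j in range(k_s):             # finite-difference gradient (toy-scale only)
        e = np.zeros(k_s)
        e[j] = fd_eps
        grad[j] = (loss(v + e, batch, d) - loss(v - e, batch, d)) / (2 * fd_eps)
    v -= eta * grad                  # plain gradient descent in place of AdamW

# Bi-directionality: the same v should help in both steering directions
loss_pos, loss_neg = loss(v, batch, 1), loss(v, batch, -1)
```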
Dataset. We introduce a new cultural alignment dataset that we curate from scratch, with dedicated _training_ and _evaluation_ splits, to probe fine-grained cultural localization _within the same language_. Existing cultural benchmarks often conflate culture with language, geography, or surface lexical cues, making it unclear whether models truly reason about cultural norms or merely exploit explicit signals. Our dataset addresses this limitation by holding language fixed and varying only country-level norms and practices, targeting subtle yet consequential differences in everyday behavior among countries that share a language (e.g., Moroccan vs. Egyptian Arabic, US vs. UK English).

Crucially, every question appears in two forms: (i) a _localized_ version that explicitly specifies the country (e.g., “I am from Morocco, …”), and (ii) a _non-localized_ version that omits the country, requiring the model to infer cultural context implicitly from dialectal and situational cues in the input prompt. This paired construction enables principled measurement of the _implicit–explicit localization gap_, the performance drop when explicit country information is removed, following (Veselovsky et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib16 "Localized cultural knowledge is conserved and controllable in large language models")).

To ensure consistent multi-country coverage at scale, responses were generated with Gemini and subsequently filtered and curated. For clarity of presentation, full details on the dataset curation process and statistics are deferred to Appendix [F](https://arxiv.org/html/2601.08441v1#A6 "Appendix F Dataset ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
###### Definition 1 (Performance–Normalized Localization Gap (PNLG)).

Let $x_{\mathrm{loc}}$ and $x_{\mathrm{nonloc}}$ be a localized and its corresponding non-localized prompt, and let $y^{\ast}$ be the culturally correct answer. For a model $\pi$, define the per-instance correctness scores

$$
p_{\mathrm{loc}}=S_{\pi}(x_{\mathrm{loc}},y^{\ast}),\qquad p_{\mathrm{non}}=S_{\pi}(x_{\mathrm{nonloc}},y^{\ast}),
$$

where $S_{\pi}(x,y^{\ast})\geq 0$ indicates whether the model output matches the correct answer. In the multiple-choice setting, $S_{\pi}$ is the accuracy: $1$ if the predicted option equals $y^{\ast}$, and $0$ otherwise. In the open-ended generation setting, $S_{\pi}$ is a score determined by an external LLM judge.

Let $\bar{p}=\tfrac{1}{2}(p_{\mathrm{loc}}+p_{\mathrm{non}})$. The _performance–normalized localization gap_ is:

$$
\mathrm{PNLG}_{\alpha}(\pi)=\mathbb{E}_{(x_{\mathrm{loc}},x_{\mathrm{nonloc}},y^{\ast})\sim\mathcal{D}}\left[\frac{p_{\mathrm{loc}}-p_{\mathrm{non}}}{\bar{p}^{\,\alpha}+\varepsilon}\right], \tag{5}
$$

with $\varepsilon>0$ arbitrarily small for numerical stability and $\alpha\in[0,1]$ controlling the strength of the normalization.
###### Definition 2 (Robust Cultural Accuracy (RCA)).

Using the same notation, the _robust cultural accuracy_ is the harmonic mean of localized and non-localized accuracies:

$$
\mathrm{RCA}(\pi)=\mathbb{E}_{(x_{\mathrm{loc}},x_{\mathrm{nonloc}},y^{\ast})\sim\mathcal{D}}\left[\frac{2\,p_{\mathrm{loc}}\,p_{\mathrm{non}}}{p_{\mathrm{loc}}+p_{\mathrm{non}}+\varepsilon}\right], \tag{6}
$$

with $\varepsilon>0$ arbitrarily small for numerical stability.
Design choice of metrics. A raw localization gap $p_{\mathrm{loc}}-p_{\mathrm{non}}$ can be misleading: a weak model may display a small gap simply because both accuracies are near zero. PNLG corrects for this by normalizing the gap by the mean performance $\bar{p}$, so models with trivially low accuracy are penalized. RCA complements this by rewarding methods that are both accurate and balanced across localized and non-localized prompts. Together, PNLG and RCA provide a more faithful evaluation of cultural alignment than the raw gap alone.
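Both metrics are simple to compute from paired per-instance scores $(p_{\mathrm{loc}}, p_{\mathrm{non}})$. A minimal Python sketch (function names are ours), with hypothetical scores illustrating why RCA rewards balance and why PNLG penalizes trivially weak models:

```python
def pnlg(pairs, alpha=1.0, eps=1e-8):
    """Performance-Normalized Localization Gap (Eq. 5) over (p_loc, p_non) pairs."""
    return sum((pl - pn) / ((0.5 * (pl + pn)) ** alpha + eps)
               for pl, pn in pairs) / len(pairs)

def rca(pairs, eps=1e-8):
    """Robust Cultural Accuracy (Eq. 6): harmonic mean of paired accuracies."""
    return sum(2 * pl * pn / (pl + pn + eps) for pl, pn in pairs) / len(pairs)

# Hypothetical per-instance scores (accuracy in {0, 1} or judge scores in [0, 1])
unbalanced = [(1.0, 0.0), (1.0, 0.0)]  # strong only when the country is explicit
balanced = [(0.8, 0.8), (0.8, 0.8)]
weak = [(0.05, 0.0), (0.05, 0.0)]      # small raw gap, but only because it fails everywhere

assert rca(balanced) > rca(unbalanced)    # RCA rewards balance
assert pnlg(balanced) < pnlg(unbalanced)  # the balanced model closes the gap
assert pnlg(weak) > 1.0                   # normalization penalizes near-zero accuracy
```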
228
+ Baselines. We benchmark the performances of YaPO against four baselines: No steering: the original Gemma-2-2B model without any intervention. CAA(Panickssery et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib13 "Steering llama 2 via contrastive activation addition")): which derives dense steering vectors by contrastive activation addition averaging, without preference optimization. SAS(Bayat et al., [2025](https://arxiv.org/html/2601.08441v1#bib.bib15 "Steering large language model activations in sparse spaces")): which derives sparse steering vectors by averaging SAE-encoded activations in the style of CAA, without preference optimization. BiPO(Cao et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib11 "Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization")): which optimizes dense steering vectors directly in the residual stream via bi-directional preference optimization.
229
+
230
+ These baselines allow us to disentangle the contributions of sparse representations and preference optimization in improving cultural alignment, and to assess whether YaPO indeed provides the best of both worlds by combining the precision of BiPO with the interpretability of SAS.
231
+
232
+ ### 4.2 Training Dynamics Analysis
233
+
234
+ ![Image 2: Refer to caption](https://arxiv.org/html/2601.08441v1/x2.png)
235
+
236
+ (a) Egypt localized
237
+
238
+ ![Image 3: Refer to caption](https://arxiv.org/html/2601.08441v1/x3.png)
239
+
240
+ (b) Nepal non-localized
241
+
242
+ Figure 2: Localized (a) and non-localized (b) training and evaluation loss comparison between BiPO and YaPO for Egypt (a) and Nepal (b).
243
+
244
+ We begin by comparing the training dynamics of YaPO and BiPO. Empirically, we find that the same behavior occurs for all countries and scenarios. Thus, for conciseness, we report training and evaluation loss logs for “Egypt” and “Nepal” under both the “localized” and “non-localized” cultural adaptation settings. Figures [2(a)](https://arxiv.org/html/2601.08441v1#S4.F2.sf1 "In Figure 2 ‣ 4.2 Training Dynamics Analysis ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")–[2(b)](https://arxiv.org/html/2601.08441v1#S4.F2.sf2 "In Figure 2 ‣ 4.2 Training Dynamics Analysis ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") show training and evaluation loss over optimization steps for both methods (YaPO and BiPO).
245
+
246
+ The contrast is striking: YaPO converges an order of magnitude faster, with the loss consistently dropping below 0.1 in under 150 steps in both scenarios, whereas BiPO remains above 0.3 even after 600 steps. This rapid convergence underscores the advantage of operating in the sparse SAE latent space, where disentangled features yield cleaner gradients and more stable optimization. Sparse codes isolate semantically meaningful directions, reducing interference from irrelevant features that blur optimization in dense space. In contrast, BiPO remains tied to the dense residual space, where polysemanticity and superposition entangle behavioral factors, hindering convergence and stability, particularly in tasks that require disentangling closely related features.
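The intervention both methods optimize can be sketched as follows (a hypothetical minimal sketch, not the authors' implementation: `W_dec` stands for the SAE decoder matrix that maps the sparse latent space back to the residual stream, `z_steer` for the learned sparse latent steering vector, and `lam` for the steering multiplier):

```python
import numpy as np


def apply_sparse_steering(h, W_dec, z_steer, lam=1.0):
    # Decode the sparse latent steering vector back to the residual stream
    # and add it to the hidden state h at the chosen layer.
    # h: (d_model,); W_dec: (d_sae, d_model); z_steer: (d_sae,), mostly zeros.
    return h + lam * (z_steer @ W_dec)


rng = np.random.default_rng(0)
d_model, d_sae = 8, 64
W_dec = rng.normal(size=(d_sae, d_model))
z = np.zeros(d_sae)
z[[3, 17, 42]] = [1.2, -0.5, 0.8]  # only a few latent features are active
h = rng.normal(size=d_model)
h_steered = apply_sparse_steering(h, W_dec, z, lam=1.5)
```

Because only a handful of latent coordinates are nonzero, gradients flowing into `z_steer` touch a few disentangled directions rather than the whole dense residual vector, which is one way to picture the cleaner optimization described above.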
247
+
248
+ | Lang. | Country | Base (Loc.) | CAA (Loc.) | SAS (Loc.) | BiPO (Loc.) | YaPO (Loc.) | Base (Non-loc.) | CAA (Non-loc.) | SAS (Non-loc.) | BiPO (Non-loc.) | YaPO (Non-loc.) | Base (Both) | CAA (Both) | SAS (Both) | BiPO (Both) | YaPO (Both) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | Portuguese | Brazil | 23.4% | 44.0% | 21.1% | 27.9% | 41.6% | 17.7% | 32.0% | 17.1% | 22.2% | 34.8% | 19.9% | 42.0% | 19.9% | 27.3% | 39.1% |
+ | Portuguese | Mozambique | 21.8% | 40.9% | 44.9% | 28.0% | 37.2% | 19.3% | 33.9% | 38.6% | 25.7% | 27.5% | 20.2% | 36.9% | 46.0% | 25.0% | 32.1% |
+ | Portuguese | Portugal | 33.5% | 43.5% | 50.9% | 37.6% | 53.2% | 28.7% | 39.8% | 49.5% | 35.2% | 52.3% | 32.2% | 44.1% | 52.2% | 34.5% | 54.0% |
+ | Portuguese | Average | 26.2% | 42.8% | 39.0% | 31.2% | 44.0% | 21.9% | 35.2% | 35.1% | 27.7% | 38.2% | 24.1% | 41.0% | 39.4% | 28.9% | 41.7% |
+ | Arabic | Egypt | 43.1% | 46.7% | 41.8% | 45.1% | 47.7% | 36.0% | 43.6% | 33.4% | 39.8% | 43.6% | 36.1% | 44.7% | 37.5% | 42.2% | 50.2% |
+ | Arabic | KSA | 16.1% | 16.8% | 19.2% | 19.9% | 20.2% | 16.7% | 13.5% | 19.6% | 18.9% | 19.2% | 17.1% | 14.1% | 20.2% | 19.5% | 20.9% |
+ | Arabic | Levantine | 15.0% | 12.1% | 14.7% | 16.9% | 16.9% | 10.3% | 7.9% | 11.4% | 11.4% | 13.1% | 12.4% | 10.4% | 13.4% | 14.6% | 15.3% |
+ | Arabic | Morocco | 12.6% | 11.2% | 8.7% | 13.6% | 14.0% | 12.6% | 10.4% | 11.0% | 13.6% | 14.0% | 11.6% | 10.8% | 19.5% | 13.8% | 13.6% |
+ | Arabic | Average | 21.7% | 21.7% | 21.1% | 23.9% | 24.7% | 21.0% | 18.9% | 21.3% | 23.4% | 22.5% | 19.3% | 20.0% | 22.7% | 22.5% | 25.0% |
259
+
260
+ Table 1: Multiple-choice question performance by language and country using Gemma-2-2B-it.
261
+
262
+ | Lang. | Country | Base (Loc.) | CAA (Loc.) | SAS (Loc.) | BiPO (Loc.) | YaPO (Loc.) | Base (Non-loc.) | CAA (Non-loc.) | SAS (Non-loc.) | BiPO (Non-loc.) | YaPO (Non-loc.) | Base (Both) | CAA (Both) | SAS (Both) | BiPO (Both) | YaPO (Both) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | Portuguese | Brazil | 5.96 | 2.66 | 6.02 | 6.35 | 6.11 | 5.62 | 2.51 | 5.51 | 5.97 | 5.61 | 5.81 | 2.59 | 5.75 | 6.21 | 5.86 |
+ | Portuguese | Mozambique | 5.56 | 2.66 | 5.56 | 6.01 | 5.65 | 4.76 | 2.47 | 4.73 | 5.10 | 4.79 | 5.15 | 2.62 | 5.14 | 5.54 | 5.31 |
+ | Portuguese | Portugal | 5.85 | 2.59 | 5.89 | 6.10 | 6.01 | 5.28 | 2.54 | 5.35 | 5.56 | 5.30 | 5.52 | 2.57 | 5.57 | 5.86 | 5.70 |
+ | Portuguese | Average | 5.79 | 2.64 | 5.82 | 6.15 | 5.92 | 5.22 | 2.51 | 5.20 | 5.54 | 5.23 | 5.49 | 2.60 | 5.45 | 5.87 | 5.62 |
+ | Arabic | Egypt | 2.93 | 2.38 | 2.77 | 3.10 | 3.02 | 2.97 | 2.68 | 2.91 | 3.15 | 3.60 | 3.00 | 2.22 | 2.81 | 3.08 | 3.31 |
+ | Arabic | KSA | 3.30 | 2.02 | 3.68 | 3.42 | 3.85 | 3.09 | 2.28 | 3.46 | 3.29 | 3.71 | 3.21 | 2.15 | 3.60 | 3.31 | 3.75 |
+ | Arabic | Levantine | 3.13 | 1.74 | 2.81 | 3.24 | 3.06 | 3.06 | 1.92 | 2.91 | 3.23 | 3.41 | 3.04 | 2.00 | 2.85 | 3.13 | 3.22 |
+ | Arabic | Morocco | 2.92 | 2.12 | 2.43 | 3.06 | 2.91 | 2.75 | 1.98 | 2.55 | 2.82 | 2.77 | 2.76 | 2.04 | 2.45 | 2.88 | 2.80 |
+ | Arabic | Average | 3.07 | 2.07 | 2.92 | 3.21 | 3.21 | 2.97 | 2.21 | 2.96 | 3.12 | 3.37 | 3.00 | 2.10 | 2.93 | 3.10 | 3.27 |
273
+
274
+ Table 2: Open-ended performance by language and country using Gemma-2-2B-it.
275
+
276
+ 5 Evaluation
277
+ ------------
278
+
279
+ We evaluate YaPO against CAA, BiPO, SAS, and the baseline model without steering on our curated multilingual cultural adaptation benchmark, using both Multiple-Choice Questions (MCQs) and Open-ended Generation (OG). To assess absolute alignment as well as robustness to the explicit–implicit localization gap, we consider three settings: localized, non-localized, and mixed prompts (both). MCQ performance is measured by accuracy (the ground-truth answer is annotated with a `\boxed{k}` tag, where k denotes the index of the correct choice; if the regex does not match, we call an external LLM to judge), while OG responses are scored by an external LLM judge for consistency with the gold answer (see Appendix [E](https://arxiv.org/html/2601.08441v1#A5 "Appendix E Evaluation: LLM-as-Judge Prompts ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") for the evaluation prompts). For clarity, we only show results for Portuguese and Arabic; results for the full set of five languages are in Appendix [C](https://arxiv.org/html/2601.08441v1#A3 "Appendix C Evaluation Results ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
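The MCQ answer-extraction step described above can be sketched as follows (a minimal sketch; the fallback call to an external LLM judge is omitted, and the function name is ours):

```python
import re


def extract_boxed_answer(completion: str):
    # Look for a \boxed{k} tag; return the choice index k, or None if the
    # regex does not match (in which case an external LLM judge is called).
    m = re.search(r"\\boxed\{(\d+)\}", completion)
    return int(m.group(1)) if m else None
```

For example, `extract_boxed_answer(r"The correct choice is \boxed{2}.")` returns `2`, while a completion without the tag returns `None` and is routed to the judge.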
280
+
281
+ | Language | Metric | Task | Base | CAA | SAS | BiPO | YaPO |
+ |---|---|---|---|---|---|---|---|
+ | Arabic | RCA ↑ | MCQ (%) | 20.1 | 19.2 | 21.3 | 22.2 | 23.5 |
+ | Arabic | RCA ↑ | Open-Ended (0–10) | 1.08 | 0.76 | 1.08 | 1.36 | 1.60 |
+ | Arabic | PNLG ↓ | MCQ | 0.129 | 0.167 | 0.098 | 0.141 | 0.098 |
+ | Arabic | PNLG ↓ | Open-Ended | 1.470 | 1.583 | 1.482 | 1.359 | 1.346 |
+ | Portuguese | RCA ↑ | MCQ (%) | 23.8 | 37.5 | 36.5 | 29.3 | 40.8 |
+ | Portuguese | RCA ↑ | Open-Ended (0–10) | 1.40 | 0.72 | 1.39 | 1.77 | 1.62 |
+ | Portuguese | PNLG ↓ | MCQ | 0.184 | 0.192 | 0.113 | 0.126 | 0.165 |
+ | Portuguese | PNLG ↓ | Open-Ended | 1.569 | 1.798 | 1.584 | 1.462 | 1.511 |
286
+
287
+ Table 3: RCA and PNLG Analysis by Language for MCQ and Open-Ended Tasks (All Methods).
288
+
289
+ ### 5.1 Multiple-Choice Questions
290
+
291
+ Table [1](https://arxiv.org/html/2601.08441v1#S4.T1 "Table 1 ‣ 4.2 Training Dynamics Analysis ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") reports MCQ accuracy by language, country, and prompt setting. Overall, all methods improve over the baseline in most settings, with YaPO being the most consistent across languages and prompt types. Gains are especially pronounced for non-localized prompts, where cultural cues are implicit. CAA and SAS already yield strong improvements under explicit localization (e.g., Spanish–Spain), but YaPO typically matches or exceeds these gains while remaining robust when localization is removed. In contrast, BiPO shows more variable behavior and can underperform in low-resource or highly entangled settings.
292
+
293
+ In contrast, YaPO shows smooth and monotonic accuracy scaling over a wide range of $\lambda$ values. Performance degrades gracefully rather than catastrophically, and optimal accuracy is achieved without precise tuning. This robustness is consistent across culturally distant settings (Egypt vs. Levantine, Nepal vs. Spanish), suggesting that sparse, preference-optimized steering reduces entanglement and limits destructive interference. Overall, these results highlight that YaPO not only improves peak performance but also substantially enlarges the safe and effective steering regime.
294
+
295
+ ### 5.2 Open-Ended Generation
296
+
297
+ Table [2](https://arxiv.org/html/2601.08441v1#S4.T2 "Table 2 ‣ 4.2 Training Dynamics Analysis ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") reports open-ended generation results for Portuguese and Arabic under localized, non-localized, and mixed prompt settings. In Portuguese, dense BiPO steering consistently attains the highest scores across all settings, whereas CAA substantially degrades performance and SAS remains close to the baseline. In Arabic, YaPO yields the strongest gains, particularly in the non-localized setting where cultural cues are implicit (e.g., the average score increases from 2.97 to 3.37), while BiPO provides smaller and less consistent improvements. Overall, BiPO is most effective in high-resource settings with strong baselines, whereas YaPO delivers more reliable improvements in lower-resource and implicitly localized open-ended generation. The consistent degradation observed with CAA is likely due to the coarse nature of simple activation averaging: a single dense steering direction applied uniformly across the chosen layer tends to over-regularize long-form generation, suppressing stylistic variation, discourse structure, and culturally specific details. In contrast, BiPO benefits from learnable steering, and YaPO further improves robustness by enforcing sparsity and disentanglement, thereby combining the strengths of BiPO and SAS.
298
+
299
+ ![Image 4: Refer to caption](https://arxiv.org/html/2601.08441v1/x4.png)
300
+
301
+ Figure 3: Training accuracy over epochs for YaPO (red), BiPO (blue), and the unsteered baseline (orange) on the MCQ localization task across six cultural regions.
302
+
303
+ ### 5.3 Explicit–Implicit Localization Gap
304
+
305
+ Table [3](https://arxiv.org/html/2601.08441v1#S5.T3 "Table 3 ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") reports RCA and PNLG for MCQ and open-ended tasks. Recall that RCA (Eq. [6](https://arxiv.org/html/2601.08441v1#S4.E6 "In Definition 2 (Robust Cultural Accuracy (RCA)). ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")) is the harmonic mean of localized and non-localized performance, rewarding methods that are both accurate and balanced across settings. Higher RCA therefore reflects robust cultural competence rather than reliance on explicit localization cues. PNLG (Eq. [5](https://arxiv.org/html/2601.08441v1#S4.E5 "In Definition 1 (Performance–Normalized Localization Gap (PNLG)). ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")) measures the relative gap between localized and non-localized performance; lower values indicate better transfer from explicit to implicit prompts.
306
+
307
+ Across languages and tasks, YaPO consistently achieves the best trade-off, yielding the highest RCA while maintaining among the lowest PNLG values. This indicates that YaPO improves cultural robustness without widening the explicit–implicit localization gap, and that this behavior holds for both MCQ and open-ended generation. BiPO also improves RCA over the baseline, but exhibits a larger PNLG in several cases, suggesting less balanced gains between explicit and implicit settings.
308
+
309
+ A particularly salient pattern is the task dependence of CAA. While CAA attains competitive RCA on MCQs, it substantially worsens both RCA and PNLG on open-ended generation. This supports the view that coarse activation averaging may suffice for short, discrete predictions, but becomes harmful in long-form generation, where it over-constrains representations and amplifies the localization gap. In contrast, sparse and preference-optimized steering, especially YaPO, appears better suited to preserving balanced behavior across prompt regimes.
310
+
311
+ ### 5.4 Performance Stability and Convergence Throughout Training
312
+
313
+ As shown in Figure [3](https://arxiv.org/html/2601.08441v1#S5.F3 "Figure 3 ‣ 5.2 Open-Ended Generation ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), YaPO converges faster and more smoothly than BiPO across all regions, reaching higher final accuracy. BiPO exhibits pronounced oscillations, particularly in lower-resource settings, indicating less stable optimization. This instability often leads to overwriting previously correct behaviors. These results highlight the stabilizing effect of sparse, preference-optimized steering.
314
+
315
+ ### 5.5 Sensitivity to the Steering Multiplier
316
+
317
+ ![Image 5: Refer to caption](https://arxiv.org/html/2601.08441v1/Figures/multipliers_egy_lev.png)
318
+
319
+ (a) Egypt & Levantine
320
+
321
+ ![Image 6: Refer to caption](https://arxiv.org/html/2601.08441v1/Figures/multipliers_nep_spa.png)
322
+
323
+ (b) Nepal & Spanish
324
+
325
+ Figure 4: Effect of the steering multiplier $\lambda$ on MCQ accuracy across methods for different cultural settings. YaPO exhibits smoother and more stable accuracy scaling compared to dense baselines.
326
+
327
+ Baseline (no steering): 57.58%
+
+ | Lang. | Country | CAA (Loc.) | SAS (Loc.) | BiPO (Loc.) | YaPO (Loc.) | CAA (Non-loc.) | SAS (Non-loc.) | BiPO (Non-loc.) | YaPO (Non-loc.) | CAA (Both) | SAS (Both) | BiPO (Both) | YaPO (Both) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | Spanish | Spain | 56.99% | 56.97% | 57.61% | 57.30% | 56.93% | 56.84% | 57.64% | 57.27% | 57.02% | 56.94% | 57.68% | 57.27% |
+ | Spanish | Mexico | 56.99% | 57.09% | 57.66% | 57.36% | 57.05% | 57.03% | 57.57% | 57.27% | 56.98% | 57.08% | 57.62% | 57.12% |
+ | Spanish | Bolivia | 56.96% | 56.92% | 57.47% | 57.17% | 56.85% | 57.05% | 57.45% | 57.09% | 56.95% | 57.08% | 57.39% | 57.02% |
+ | Spanish | Average | 56.98% | 56.99% | 57.58% | 57.28% | 56.94% | 56.97% | 57.55% | 57.21% | 56.98% | 57.03% | 57.56% | 57.14% |
+ | Arabic | Egypt | 57.13% | 57.11% | 57.51% | 57.06% | 57.02% | 57.18% | 57.50% | 57.14% | 57.21% | 57.13% | 57.42% | 56.97% |
+ | Arabic | KSA | 57.27% | 57.10% | 57.62% | 57.35% | 57.27% | 57.19% | 57.56% | 57.36% | 57.29% | 57.12% | 57.66% | 57.16% |
+ | Arabic | Levantine | 57.02% | 57.12% | 57.64% | 57.37% | 56.98% | 57.04% | 57.58% | 57.29% | 56.95% | 57.08% | 57.67% | 57.17% |
+ | Arabic | Morocco | 57.17% | 57.07% | 57.57% | 57.30% | 57.26% | 57.01% | 57.61% | 57.36% | 57.12% | 57.05% | 57.72% | 57.12% |
+ | Arabic | Average | 57.15% | 57.10% | 57.58% | 57.27% | 57.13% | 57.10% | 57.56% | 57.29% | 57.14% | 57.10% | 57.62% | 57.10% |
339
+
340
+ Table 4: Performance on MMLU using MCQ steering vectors (all methods). The non-steered baseline accuracy is reported once, globally (with chat template).
341
+
342
+ Figure [4](https://arxiv.org/html/2601.08441v1#S5.F4 "Figure 4 ‣ 5.5 Sensitivity to the Steering Multiplier ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") analyzes the effect of the steering multiplier $\lambda$ on MCQ accuracy. We observe that CAA and SAS exhibit strong sensitivity to $\lambda$: performance is highly non-monotonic and often collapses abruptly beyond a narrow operating range (e.g., $\lambda>0.5$), indicating over-steering, where activation shifts destabilize generation. In contrast, YaPO and BiPO remain robust to larger steering strengths, with YaPO notably achieving its highest accuracy at larger $\lambda$ values (e.g., $\lambda=1.5$ or $2.0$) without degradation, demonstrating the stability of sparse preference optimization.
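Such a sensitivity analysis amounts to a simple sweep over $\lambda$ (a hypothetical harness; `evaluate_accuracy` stands in for the MCQ evaluation at a given steering strength and is not from the paper):

```python
def sweep_multiplier(evaluate_accuracy, lambdas=(0.25, 0.5, 1.0, 1.5, 2.0)):
    # Score each steering strength and return the best one with all scores,
    # making over-steering collapses directly visible in the sweep.
    scores = {lam: evaluate_accuracy(lam) for lam in lambdas}
    best = max(scores, key=scores.get)
    return best, scores


# Toy stand-in whose optimum sits at a large multiplier, as observed for YaPO.
best, scores = sweep_multiplier(lambda lam: -(lam - 1.5) ** 2)
```

A narrow operating range (as for CAA and SAS) shows up as a sharp drop in `scores` beyond a small $\lambda$, while a robust method keeps high scores across the grid.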
343
+
344
+ ### 5.6 MMLU and Generalization to Other Domains
345
+
346
+ #### MMLU.
347
+
348
+ Table [4](https://arxiv.org/html/2601.08441v1#S5.T4 "Table 4 ‣ 5.5 Sensitivity to the Steering Multiplier ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation") reports results on MMLU to assess whether cultural steering impacts general knowledge. Across all languages and prompt settings, we observe that differences between methods remain small, with scores tightly clustered around the unsteered baseline. This indicates that none of the steering approaches, including YaPO, significantly degrade or inflate general-purpose performance on MMLU. Overall, these results suggest that the learned steering vectors primarily affect targeted alignment behaviors, while leaving broad knowledge capabilities intact.
349
+
350
+ #### Generalization to other tasks.
351
+
352
+ To assess whether cultural steering vectors specialize too narrowly, we evaluate them on BiPO’s benchmarks in Table [5](https://arxiv.org/html/2601.08441v1#S5.T5 "Table 5 ‣ Generalization to other tasks. ‣ 5.6 MMLU and Generalization to Other Domains ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), for Hallucination, Wealth-Seeking, Jailbreak, and Power-Seeking.
353
+
354
+ Overall, CAA attains the highest average score on these scalar tasks, with YaPO typically in second place, followed by BiPO and then SAS. However, in practice we find CAA and SAS to be quite brittle: their performance is highly sensitive to the choice of steering weight and activation threshold $\tau$, as shown in Section [5.5](https://arxiv.org/html/2601.08441v1#S5.SS5 "5.5 Sensitivity to the Steering Multiplier ‣ 5 Evaluation ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"). By contrast, in BiPO and YaPO the effective steering strength is absorbed into the learned vector itself (with a coefficient $\lambda_i$ per dimension $i$; an additional global multiplier can also be used, as in BiPO). Thus, thanks to sparsity, YaPO has more degrees of freedom and is less dependent on manual hyperparameter tuning. This suggests that learning in a sparse activation space is not only effective for cultural alignment, but also generalizes as a robust steering mechanism for broader alignment dimensions such as hallucination reduction.
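The parameterization difference can be illustrated as follows (a hypothetical sketch with made-up values: the steering strength lives in a learned coefficient per active latent dimension, and an optional global multiplier only rescales it):

```python
import numpy as np

d_sae = 16
active = [2, 7, 11]                # sparse support of the steering vector
lam_i = np.zeros(d_sae)
lam_i[active] = [0.9, -0.4, 1.3]   # learned per-dimension coefficients
v_steer = lam_i                    # strength is absorbed into the vector itself
# An optional extra global multiplier, as in BiPO, simply rescales it:
v_scaled = 1.5 * v_steer
```

Unlike a single scalar weight applied to a fixed averaged direction (as in CAA/SAS), each active dimension here carries its own learned magnitude and sign, which is what removes the need for careful manual tuning.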
355
+
356
+ Model: Gemma-2-2B-it
+
+ | Task | Base | CAA | SAS | BiPO | YaPO |
+ |---|---|---|---|---|---|
+ | Wealth-Seeking | 2.10 | 2.23 | 2.14 | 2.17 | 2.31 |
+ | Jailbreak | 1.00 | 1.08 | 1.00 | 1.02 | 1.00 |
+ | Power-Seeking | 1.89 | 2.09 | 1.81 | 1.93 | 2.03 |
+ | Hallucination | 1.60 | 2.18 | 1.46 | 1.60 | 1.69 |
+ | Average | 1.65 | 1.90 | 1.60 | 1.68 | 1.76 |
362
+
363
+ Table 5: Performance on general tasks.
364
+
365
+ 6 Conclusion
366
+ ------------
367
+
368
+ In this work, we introduced YaPO, a reference-free method that learns sparse, preference-optimized steering vectors in the latent space of Sparse Autoencoders. Our study demonstrates that operating in sparse space yields faster convergence, greater stability, and improved interpretability compared to dense steering methods such as BiPO. On our newly curated multilingual cultural benchmark spanning five languages and fifteen cultural contexts, YaPO consistently outperforms both BiPO and the baseline model, particularly under non-localized prompts, where implicit cultural cues must be inferred. Beyond culture, YaPO generalizes to other alignment dimensions such as hallucination mitigation, wealth-seeking, jailbreak, and power-seeking, underscoring its potential as a general recipe for efficient and fine-grained alignment.
369
+
370
+ Limitations
371
+ -----------
372
+
373
+ While our study broadens the evaluation landscape, several limitations remain. First, experiments were conducted on the Gemma-2 family (2B and 9B); due to compute and time constraints, we could not include additional architectures such as Llama-Scope 8B (He et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib37 "Llama scope: extracting millions of features from llama-3.1-8b with sparse autoencoders")) or Qwen models. Second, when no SAE is available, one could learn small task-specific SAEs or low-rank sparse projections; we leave this for future work. Finally, our cultural dataset captures cross-country but not within-country diversity. Future efforts will expand its scope and explore cross-model transferability of sparse steering vectors.
374
+
375
+ References
376
+ ----------
377
+
378
+ * R. Bayat et al. (2025)Steering large language model activations in sparse spaces. External Links: 2503.00177, [Link](https://arxiv.org/abs/2503.00177)Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p3.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§1](https://arxiv.org/html/2601.08441v1#S1.p5.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§2](https://arxiv.org/html/2601.08441v1#S2.p3.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.1](https://arxiv.org/html/2601.08441v1#S3.SS1.p2.1 "3.1 Motivation: From Dense to Sparse Steering ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.2](https://arxiv.org/html/2601.08441v1#S3.SS2.p2.7 "3.2 Preference-Optimized Steering in Sparse Space ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p2.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p7.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
379
+ * R. A. Bradley and M. E. Terry (1952)Rank analysis of incomplete block designs: i. the method of paired comparisons. Biometrika 39 (3/4), pp.324–345. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
380
+ * Y. Cao, T. Zhang, B. Cao, Z. Yin, L. Lin, F. Ma, and J. Chen (2024)Personalized steering of large language models: versatile steering vectors through bi-directional preference optimization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, External Links: [Link](https://openreview.net/forum?id=7qJFkuZdYo)Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p2.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§1](https://arxiv.org/html/2601.08441v1#S1.p5.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.2](https://arxiv.org/html/2601.08441v1#S3.SS2.p1.15 "3.2 Preference-Optimized Steering in Sparse Space ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p2.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p7.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
381
+ * S. Chalnev, M. Siu, and A. Conmy (2024)Improving steering vectors by targeting sparse autoencoder features. External Links: 2411.02193, [Link](https://arxiv.org/abs/2411.02193)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
382
+ * P. F. Christiano, J. Leike, T. B. Brown, M. Martic, S. Legg, and D. Amodei (2017)Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, Vol. 30. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
383
+ * C. Dumas, C. Wendler, V. Veselovsky, G. Monea, and R. West (2024)Separating tongue from thought: activation patching reveals language-agnostic concept representations in transformers. arXiv preprint arXiv:2411.08745. Cited by: [Appendix A](https://arxiv.org/html/2601.08441v1#A1.p1.2 "Appendix A Layer Discovery ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
384
+ * N. Elhage, T. Hume, C. Olsson, N. Schiefer, T. Henighan, S. Kravec, Z. Hatfield-Dodds, R. Lasenby, D. Drain, C. Chen, R. Grosse, S. McCandlish, J. Kaplan, D. Amodei, M. Wattenberg, and C. Olah (2022)Toy models of superposition. External Links: 2209.10652, [Link](https://arxiv.org/abs/2209.10652)Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p3.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.1](https://arxiv.org/html/2601.08441v1#S3.SS1.p1.1 "3.1 Motivation: From Dense to Sparse Steering ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
385
+ * A. Ghandeharioun, A. Caciularu, A. Pearce, L. Dixon, and M. Geva (2024)Patchscopes: a unifying framework for inspecting hidden representations of language models. arXiv preprint arXiv:2401.06102. Cited by: [Appendix A](https://arxiv.org/html/2601.08441v1#A1.p1.2 "Appendix A Layer Discovery ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
386
+ * Z. He, W. Shu, X. Ge, L. Chen, J. Wang, Y. Zhou, F. Liu, Q. Guo, X. Huang, Z. Wu, Y. Jiang, and X. Qiu (2024)Llama scope: extracting millions of features from llama-3.1-8b with sparse autoencoders. External Links: 2410.20526, [Link](https://arxiv.org/abs/2410.20526)Cited by: [Limitations](https://arxiv.org/html/2601.08441v1#Sx1.p1.1.1 "Limitations ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
387
+ * Z. He, M. Jin, B. Shen, A. Payani, Y. Zhang, and M. Du (2025)SAE-ssv: supervised steering in sparse representation spaces for reliable control of language models. External Links: 2505.16188, [Link](https://arxiv.org/abs/2505.16188)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
388
+ * K. Li, O. Patel, F. Viégas, H. Pfister, and M. Wattenberg (2024)Inference-time intervention: eliciting truthful answers from a language model. Advances in Neural Information Processing Systems 36. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
389
+ * T. Lieberum, S. Rajamanoharan, A. Conmy, L. Smith, N. Sonnerat, V. Varma, J. Kramár, A. Dragan, R. Shah, and N. Nanda (2024)Gemma scope: open sparse autoencoders everywhere all at once on gemma 2. External Links: 2408.05147, [Link](https://arxiv.org/abs/2408.05147)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p3.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.1](https://arxiv.org/html/2601.08441v1#S3.SS1.p2.1 "3.1 Motivation: From Dense to Sparse Steering ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p1.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
390
+ * S. Liu, L. Xing, and J. Zou (2023)In-context vectors: making in context learning more effective and controllable through latent space steering. arXiv preprint arXiv:2311.06668. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
391
+ * T. Liu, Y. Zhao, R. Joshi, M. Khalman, M. Saleh, P. J. Liu, and J. Liu (2024)Statistical rejection sampling improves preference optimization. In International Conference on Learning Representations, Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
392
+ * L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. (2022)Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, Vol. 35, pp.27730–27744. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
393
+ * N. Panickssery, N. Gabrieli, J. Schulz, M. Tong, E. Hubinger, and A. M. Turner (2024)Steering llama 2 via contrastive activation addition. External Links: 2312.06681, [Link](https://arxiv.org/abs/2312.06681)Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p2.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§1](https://arxiv.org/html/2601.08441v1#S1.p5.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p2.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p7.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
394
+ * R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn (2024)Direct preference optimization: your language model is secretly a reward model. Advances in Neural Information Processing Systems 36. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
395
+ * N. Rimsky, N. Gabrieli, J. Schulz, M. Tong, E. Hubinger, and A. M. Turner (2023)Steering llama 2 via contrastive activation addition. arXiv preprint arXiv:2312.06681. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.1](https://arxiv.org/html/2601.08441v1#S3.SS1.p1.1 "3.1 Motivation: From Dense to Sparse Steering ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
396
+ * J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017)Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
397
+ * N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano (2020)Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, Vol. 33, pp.3008–3021. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
398
+ * N. Subramani, N. Suresh, and M. E. Peters (2022)Extracting latent steering vectors from pretrained language models. arXiv preprint arXiv:2205.05124. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
399
+ * J. Sun, S. Baskaran, Z. Wu, M. Sklar, C. Potts, and A. Geiger (2025)HyperSteer: activation steering at scale with hypernetworks. External Links: 2506.03292, [Link](https://arxiv.org/abs/2506.03292)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
400
+ * Gemma Team et al. (2024)Gemma 2: improving open language models at a practical size. External Links: 2408.00118, [Link](https://arxiv.org/abs/2408.00118)Cited by: [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p1.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
401
+ * A. M. Turner, L. Thiergart, D. Udell, G. Leech, U. Mini, and M. MacDiarmid (2023)Activation addition: steering language models without optimization. arXiv preprint arXiv:2308.10248. Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p1.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
402
+ * V. Veselovsky, B. Argin, B. Stroebl, C. Wendler, R. West, J. Evans, T. L. Griffiths, and A. Narayanan (2025)Localized cultural knowledge is conserved and controllable in large language models. External Links: 2504.10191, [Link](https://arxiv.org/abs/2504.10191)Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p5.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§4.1](https://arxiv.org/html/2601.08441v1#S4.SS1.p4.1 "4.1 Experimental Setup ‣ 4 Experiments ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
403
+ * J. Vig, S. Gehrmann, Y. Belinkov, S. Qian, D. Nevo, Y. Singer, and S. Shieber (2020)Investigating gender bias in language models using causal mediation analysis. Advances in neural information processing systems 33, pp.12388–12401. Cited by: [Appendix A](https://arxiv.org/html/2601.08441v1#A1.p1.2 "Appendix A Layer Discovery ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
404
+ * H. Wang and K. Shu (2023)Backdoor activation attack: attack large language models using activation steering for safety-alignment. arXiv preprint arXiv:2311.09433. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p2.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§3.1](https://arxiv.org/html/2601.08441v1#S3.SS1.p1.1 "3.1 Motivation: From Dense to Sparse Steering ‣ 3 Method ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
405
+ * Z. Wu, A. Arora, A. Geiger, Z. Wang, J. Huang, D. Jurafsky, C. D. Manning, and C. Potts (2025a)AxBench: steering llms? even simple baselines outperform sparse autoencoders. External Links: 2501.17148, [Link](https://arxiv.org/abs/2501.17148)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
406
+ * Z. Wu, Q. Yu, A. Arora, C. D. Manning, and C. Potts (2025b)Improved representation steering for language models. External Links: 2505.20809, [Link](https://arxiv.org/abs/2505.20809)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
407
+ * Z. Xu, S. Wang, K. Xu, H. Xu, M. Wang, X. Deng, Y. Yao, G. Zheng, H. Chen, and N. Zhang (2025)EasyEdit2: an easy-to-use steering framework for editing large language models. External Links: 2504.15133, [Link](https://arxiv.org/abs/2504.15133)Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p4.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
408
+ * Y. Zhao, R. Joshi, T. Liu, M. Khalman, M. Saleh, and P. J. Liu (2023)Slic-hf: sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425. Cited by: [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
409
+ * D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving (2019)Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593. Cited by: [§1](https://arxiv.org/html/2601.08441v1#S1.p1.1 "1 Introduction ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation"), [§2](https://arxiv.org/html/2601.08441v1#S2.p1.1 "2 Related Works ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation").
410
+
Appendix A Layer Discovery
--------------------------

We employ activation patching (Ghandeharioun et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib34); Dumas et al., [2024](https://arxiv.org/html/2601.08441v1#bib.bib35); Vig et al., [2020](https://arxiv.org/html/2601.08441v1#bib.bib36)) to identify which layers of the LLM contribute most strongly to cultural localization. In our setting, the _localized prompt_ $x_{\text{localized}}$ is the version of the input that specifies the country or culture, whereas the _non-localized prompt_ $x_{\text{nonloc}}$ is the variant without this cultural specification.

Due to causal masking in the attention layers, the latent representation of the $i$-th input token after the $j$-th transformer block depends on all preceding tokens:

$$h^{(j)}_{i}=h^{(j)}_{i}(x_{1},\ldots,x_{i}).$$

When clear from context, we omit this explicit dependence and use the shorthand notation $h^{(j)}(x)_{i}$.

We first perform a forward pass on the localized (source) prompt and extract its latent representation $h^{(j)}_{i}(x_{\text{localized}})$ at each layer. During the forward pass on the non-localized (target) prompt, we _patch_ its latent representation by overwriting $h^{(j)}_{i}(x_{\text{nonloc}})$ with the localized one, producing a perturbed forward pass $\tilde{P}(x_{\text{nonloc}})$. By comparing $\tilde{P}(x_{\text{nonloc}})$ to the original prediction $P(x_{\text{nonloc}})$, we quantify how much information from each layer of the localized prompt contributes to aligning the model's behavior with the culturally appropriate response.

Concretely, we focus on the latent representation at the last token position $t_{\text{localized}}$ of the localized prompt, i.e.,

$$h^{(j)}_{t_{\text{localized}}}(x_{\text{localized}}),$$

and patch it into the corresponding position in the target forward pass. Measuring the change in the output probability distribution across layers yields an activation patching curve that reveals which transformer blocks encode the strongest cultural localization signal. We conduct this analysis for two countries, Egypt and Morocco. For each country, we construct paired localized and non-localized questions, together with culturally appropriate answers (Egyptian or Moroccan) and a Western baseline answer. Activation patching is applied independently for each country following the procedure described above. We perform this analysis on both Gemma-2-2B and Gemma-2-9B, and find that layers 15 and 28 yield the best performance for Gemma-2-2B and Gemma-2-9B, respectively.

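The patching procedure above can be sketched in a few lines. This is a minimal, dependency-free illustration in which a stack of plain functions stands in for transformer blocks and a scalar stands in for the hidden state; the function names and toy "model" are ours, not the paper's code.

```python
# Dependency-free sketch of activation patching: cache h^{(j)} from the
# localized (source) run, then overwrite the same layer's output during
# the non-localized (target) run.

def run_with_cache(layers, x):
    """Forward pass that also returns the hidden state after every layer."""
    cache, h = [], x
    for layer in layers:
        h = layer(h)
        cache.append(h)
    return h, cache

def run_patched(layers, x, patch_layer, patched_h):
    """Forward pass where the output of `patch_layer` is overwritten with
    the activation cached from the localized prompt."""
    h = x
    for j, layer in enumerate(layers):
        h = layer(h)
        if j == patch_layer:
            h = patched_h  # the patching intervention
    return h

# Toy 3-"layer" model acting on a scalar hidden state.
layers = [lambda h: h + 1, lambda h: h * 2, lambda h: h - 3]

out_loc, cache_loc = run_with_cache(layers, 5)   # localized prompt
out_nonloc, _ = run_with_cache(layers, 2)        # non-localized prompt
out_patched = run_patched(layers, 2, 0, cache_loc[0])
# Patching layer 0 transfers the localized computation downstream:
# out_patched == out_loc (9), whereas out_nonloc == 3.
```

In the real analysis, the patched quantity is the last-token hidden state of the localized prompt and the comparison is between output distributions, but the control flow is the same.
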
![Refer to caption](https://arxiv.org/html/2601.08441v1/x5.png)

![Refer to caption](https://arxiv.org/html/2601.08441v1/x6.png)

Figure 5: Activation patching analysis on Gemma-2-2B. We intervene across layers to trace cultural features in model representations. The plots show the probability of producing culturally specific answers (Egypt, Morocco) versus Western defaults as activations are patched. We empirically identify layer 15 as the most culturally relevant layer.

Appendix B Training Details
---------------------------

We report the training configuration and hyperparameters in Table [6](https://arxiv.org/html/2601.08441v1#A2.T6). Most settings are shared across model sizes, while batch size, SAE configuration, and training time differ between the 2B and 9B models due to memory and capacity constraints.

| Parameter | 2B Model | 9B Model |
|---|---|---|
| _System and optimization_ | | |
| Hardware | 8 × AMD MI210 GPUs | 8 × AMD MI210 GPUs |
| Epochs | 20 | 20 |
| Optimizer | AdamW ($\beta_{1}=0.9$, $\beta_{2}=0.999$) | AdamW ($\beta_{1}=0.9$, $\beta_{2}=0.999$) |
| Weight decay | 0.05 | 0.05 |
| Learning rate | $5\times 10^{-4}$ | $5\times 10^{-4}$ |
| LR scheduler | Cosine decay with 100 warmup steps | Cosine decay with 100 warmup steps |
| Max prompt length | 512 tokens | 512 tokens |
| Max new tokens | 2048 | 2048 |
| _Batching_ | | |
| Batch size per GPU | 4 | 1 |
| Gradient accumulation | 1 | 1 |
| _SAE configuration_ | | |
| SAE layer | 15 | 28 |
| SAE vector size | 65k | 131k |
| Average index (SAE layer) | 68 | 98 |
| _Training cost_ | | |
| Training time | 10 minutes | 30 minutes |

Table 6: Training configuration and hyperparameters.

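For concreteness, the learning-rate schedule in Table 6 (cosine decay with 100 warmup steps and a peak LR of $5\times 10^{-4}$) can be written in closed form. The function below is our reconstruction of a standard warmup-then-cosine schedule, assuming linear warmup to the peak; it is illustrative, not the paper's training code.

```python
import math

def lr_at(step, total_steps, peak_lr=5e-4, warmup=100):
    """Learning rate at a given optimizer step: linear warmup for the
    first `warmup` steps, then cosine decay from peak_lr down to 0."""
    if step < warmup:
        return peak_lr * step / warmup                          # linear warmup
    progress = (step - warmup) / max(1, total_steps - warmup)   # in [0, 1]
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress)) # cosine decay
```

Under this schedule the LR rises from 0 to the peak at step 100 and decays smoothly to 0 at the final step.
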
Appendix C Evaluation Results
-----------------------------

This section reports the complete evaluation results omitted from the main body for clarity and space constraints. We provide full per-language and per-country breakdowns for all tasks (MCQ and open-ended) and metrics discussed in the paper, including RCA and PNLG (Table [9](https://arxiv.org/html/2601.08441v1#A3.T9)). We additionally report results on MMLU using the same steering interventions (Table [10](https://arxiv.org/html/2601.08441v1#A3.T10)).

All results follow the same experimental setup, prompts, and evaluation protocols described in Section 4. Tables are organized by task and metric, and include all cultural settings across the five language families considered in our benchmark. This comprehensive view enables detailed inspection of cross-country variability, low-resource effects, and method-specific trade-offs beyond the aggregate trends emphasized in the main body. Overall, we observe that YaPO consistently delivers state-of-the-art performance, most notably on the MCQ task, where it achieves the strongest accuracy across languages and cultural settings in the full breakdowns.

#### Full MCQ and open-ended breakdowns.

| Language | Country | L-Base | L-CAA | L-SAS | L-BiPO | L-YaPO | N-Base | N-CAA | N-SAS | N-BiPO | N-YaPO | B-Base | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| English | UK | 36.4 | 40.9 | 43.6 | 36.8 | 49.1 | 23.2 | 25.1 | 28.4 | 30.3 | 39.1 | 29.0 | 31.5 | 37.5 | 33.8 | 43.6 |
| English | USA | 45.5 | 70.7 | 67.7 | 51.9 | 59.8 | 40.2 | 60.1 | 52.7 | 45.9 | 54.4 | 44.7 | 66.2 | 61.0 | 45.2 | 57.5 |
| English | Australia | 48.2 | 55.4 | 55.1 | 51.1 | 59.8 | 23.8 | 28.0 | 26.6 | 31.1 | 38.8 | 33.3 | 40.7 | 40.0 | 37.9 | 50.2 |
| English | Average | 43.4 | 55.7 | 55.5 | 46.6 | 56.2 | 29.1 | 37.7 | 35.9 | 35.8 | 44.1 | 35.7 | 46.1 | 46.2 | 39.0 | 50.4 |
| Spanish | Bolivia | 22.8 | 44.0 | 32.0 | 29.4 | 42.1 | 14.5 | 25.6 | 19.6 | 17.4 | 24.6 | 18.5 | 32.4 | 26.1 | 25.3 | 35.5 |
| Spanish | Mexico | 24.4 | 25.9 | 31.2 | 22.5 | 35.2 | 13.3 | 21.4 | 21.7 | 18.4 | 27.2 | 18.6 | 22.6 | 26.5 | 21.2 | 30.0 |
| Spanish | Spain | 46.5 | 63.6 | 72.7 | 50.8 | 61.6 | 31.8 | 54.8 | 54.5 | 35.1 | 43.5 | 37.3 | 59.6 | 63.3 | 41.1 | 52.3 |
| Spanish | Average | 31.2 | 44.5 | 45.3 | 34.2 | 46.3 | 19.9 | 33.9 | 32.0 | 23.6 | 31.8 | 24.8 | 38.2 | 38.6 | 29.2 | 39.3 |
| Portuguese | Brazil | 23.4 | 44.0 | 21.1 | 27.9 | 41.6 | 17.7 | 32.0 | 17.1 | 22.2 | 34.8 | 19.9 | 42.0 | 19.9 | 27.3 | 39.1 |
| Portuguese | Mozambique | 21.8 | 40.9 | 44.9 | 28.0 | 37.2 | 19.3 | 33.9 | 38.6 | 25.7 | 27.5 | 20.2 | 36.9 | 46.0 | 25.0 | 32.1 |
| Portuguese | Portugal | 33.5 | 43.5 | 50.9 | 37.6 | 53.2 | 28.7 | 39.8 | 49.5 | 35.2 | 52.3 | 32.2 | 44.1 | 52.2 | 34.5 | 54.0 |
| Portuguese | Average | 26.2 | 42.8 | 39.0 | 31.2 | 44.0 | 21.9 | 35.2 | 35.1 | 27.7 | 38.2 | 24.1 | 41.0 | 39.4 | 28.9 | 41.7 |
| Arabic | Egypt | 43.1 | 46.7 | 41.8 | 45.1 | 47.7 | 36.0 | 43.6 | 33.4 | 39.8 | 43.6 | 36.1 | 44.7 | 37.5 | 42.2 | 50.2 |
| Arabic | KSA | 16.1 | 16.8 | 19.2 | 19.9 | 20.2 | 16.7 | 13.5 | 19.6 | 18.9 | 19.2 | 17.1 | 14.1 | 20.2 | 19.5 | 20.9 |
| Arabic | Levantine | 15.0 | 12.1 | 14.7 | 16.9 | 16.9 | 10.3 | 7.9 | 11.4 | 11.4 | 13.1 | 12.4 | 10.4 | 13.4 | 14.6 | 15.3 |
| Arabic | Morocco | 12.6 | 11.2 | 8.7 | 13.6 | 14.0 | 12.6 | 10.4 | 11.0 | 13.6 | 14.0 | 11.6 | 10.8 | 19.5 | 13.8 | 13.6 |
| Arabic | Average | 21.7 | 21.7 | 21.1 | 23.9 | 24.7 | 21.0 | 18.9 | 21.3 | 23.4 | 22.5 | 19.3 | 20.0 | 22.7 | 22.5 | 25.0 |
| Hindi | India | 21.6 | 34.8 | 36.3 | 23.4 | 41.1 | 22.2 | 36.6 | 38.6 | 26.1 | 39.9 | 20.3 | 35.4 | 38.2 | 22.4 | 42.9 |
| Hindi | Nepal | 43.7 | 70.4 | 50.3 | 44.9 | 70.4 | 37.0 | 58.4 | 38.4 | 40.7 | 68.2 | 41.6 | 64.9 | 44.9 | 42.1 | 70.6 |
| Hindi | Average | 32.7 | 52.6 | 43.3 | 34.2 | 55.8 | 29.6 | 47.5 | 38.5 | 33.4 | 54.1 | 31.0 | 50.2 | 41.6 | 32.3 | 56.8 |

Table 7: Multiple-choice question performance (%) by language and country across settings using Gemma-2-2B-it. Column prefixes: L = Localized, N = Non-localized, B = Both; Base = Baseline, YaPO = YaPO (ours).

| Language | Country | L-Base | L-CAA | L-SAS | L-BiPO | L-YaPO | N-Base | N-CAA | N-SAS | N-BiPO | N-YaPO | B-Base | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| English | UK | 6.73 | 3.88 | 6.72 | 6.98 | 6.55 | 5.98 | 3.58 | 6.07 | 6.24 | 5.77 | 6.29 | 3.69 | 6.37 | 6.69 | 6.22 |
| English | USA | 7.17 | 3.58 | 7.18 | 7.50 | 6.89 | 6.83 | 3.41 | 6.70 | 7.06 | 6.53 | 6.93 | 3.38 | 6.92 | 7.28 | 6.77 |
| English | Australia | 6.83 | 3.92 | 6.77 | 7.17 | 6.72 | 6.00 | 3.62 | 6.01 | 6.32 | 5.70 | 6.43 | 3.81 | 6.42 | 6.70 | 6.19 |
| English | Average | 6.91 | 3.79 | 6.89 | 7.22 | 6.72 | 6.27 | 3.54 | 6.26 | 6.54 | 6.00 | 6.55 | 3.63 | 6.57 | 6.89 | 6.39 |
| Spanish | Spain | 5.91 | 2.88 | 5.96 | 6.31 | 6.24 | 5.29 | 2.75 | 5.29 | 5.58 | 5.41 | 5.60 | 2.78 | 5.60 | 5.94 | 5.81 |
| Spanish | Mexico | 5.78 | 2.61 | 6.05 | 6.14 | 6.27 | 5.29 | 2.50 | 5.55 | 5.58 | 5.65 | 5.55 | 2.58 | 5.75 | 5.87 | 6.01 |
| Spanish | Bolivia | 5.94 | 2.72 | 5.84 | 6.28 | 6.13 | 5.20 | 2.45 | 5.15 | 5.45 | 5.29 | 5.56 | 2.57 | 5.50 | 5.86 | 5.72 |
| Spanish | Average | 5.88 | 2.74 | 5.95 | 6.24 | 6.21 | 5.26 | 2.57 | 5.33 | 5.54 | 5.45 | 5.57 | 2.64 | 5.62 | 5.89 | 5.85 |
| Portuguese | Brazil | 5.96 | 2.66 | 6.02 | 6.35 | 6.11 | 5.62 | 2.51 | 5.51 | 5.97 | 5.61 | 5.81 | 2.59 | 5.75 | 6.21 | 5.86 |
| Portuguese | Mozambique | 5.56 | 2.66 | 5.56 | 6.01 | 5.65 | 4.76 | 2.47 | 4.73 | 5.10 | 4.79 | 5.15 | 2.62 | 5.14 | 5.54 | 5.31 |
| Portuguese | Portugal | 5.85 | 2.59 | 5.89 | 6.10 | 6.01 | 5.28 | 2.54 | 5.35 | 5.56 | 5.30 | 5.52 | 2.57 | 5.57 | 5.86 | 5.70 |
| Portuguese | Average | 5.79 | 2.64 | 5.82 | 6.15 | 5.92 | 5.22 | 2.51 | 5.20 | 5.54 | 5.23 | 5.49 | 2.60 | 5.45 | 5.87 | 5.62 |
| Arabic | Egypt | 2.93 | 2.38 | 2.77 | 3.10 | 3.02 | 2.97 | 2.68 | 2.91 | 3.15 | 3.60 | 3.00 | 2.22 | 2.81 | 3.08 | 3.31 |
| Arabic | KSA | 3.30 | 2.02 | 3.68 | 3.42 | 3.85 | 3.09 | 2.28 | 3.46 | 3.29 | 3.71 | 3.21 | 2.15 | 3.60 | 3.31 | 3.75 |
| Arabic | Levantine | 3.13 | 1.74 | 2.81 | 3.24 | 3.06 | 3.06 | 1.92 | 2.91 | 3.23 | 3.41 | 3.04 | 2.00 | 2.85 | 3.13 | 3.22 |
| Arabic | Morocco | 2.92 | 2.12 | 2.43 | 3.06 | 2.91 | 2.75 | 1.98 | 2.55 | 2.82 | 2.77 | 2.76 | 2.04 | 2.45 | 2.88 | 2.80 |
| Arabic | Average | 3.07 | 2.07 | 2.92 | 3.21 | 3.21 | 2.97 | 2.21 | 2.96 | 3.12 | 3.37 | 3.00 | 2.10 | 2.93 | 3.10 | 3.27 |
| Hindi | India | 4.42 | 2.45 | 4.75 | 4.86 | 5.55 | 4.12 | 2.29 | 4.74 | 4.30 | 4.99 | 4.31 | 2.28 | 4.60 | 4.53 | 5.35 |
| Hindi | Nepal | 4.44 | 2.26 | 4.57 | 4.86 | 5.39 | 3.77 | 2.21 | 4.16 | 4.01 | 4.65 | 4.17 | 2.23 | 4.36 | 4.38 | 5.08 |
| Hindi | Average | 4.43 | 2.35 | 4.66 | 4.86 | 5.47 | 3.95 | 2.25 | 4.45 | 4.15 | 4.82 | 4.24 | 2.25 | 4.48 | 4.46 | 5.21 |

Table 8: Open-ended performance (0–10 scale) by language and country across settings using Gemma-2-2B-it. Column prefixes: L = Localized, N = Non-localized, B = Both; Base = Baseline, YaPO = YaPO (ours).

Tables [7](https://arxiv.org/html/2601.08441v1#A3.T7) and [8](https://arxiv.org/html/2601.08441v1#A3.T8) report the complete per-language and per-country performance for the MCQ and open-ended tasks, respectively. Across both tasks, we observe the same qualitative trends as in the main body: steering generally improves performance over the unsteered baseline in most settings, with the strongest gains typically appearing in the _Both_ setting. While improvements vary across countries (and are more heterogeneous for lower-resource settings), the ranking among methods is broadly consistent with the aggregated results reported in the main body.

#### RCA/PNLG analysis.

Table [9](https://arxiv.org/html/2601.08441v1#A3.T9) summarizes, by language, how methods trade off cultural alignment (RCA; higher is better) against naturalness (PNLG; lower is better), for both MCQ and open-ended tasks. In line with the discussion in the main body, methods that substantially increase RCA can sometimes incur a PNLG cost, highlighting an intrinsic tension between stronger cultural steering and output naturalness. Nevertheless, several settings achieve improved RCA while maintaining comparable (or improved) PNLG, indicating that culturally targeted steering need not systematically degrade generation quality.

RCA ↑ (higher is better; MCQ in %, open-ended (OE) on a 0–10 scale):

| Language | MCQ Base | MCQ CAA | MCQ SAS | MCQ BiPO | MCQ YaPO | OE Base | OE CAA | OE SAS | OE BiPO | OE YaPO |
|---|---|---|---|---|---|---|---|---|---|---|
| Arabic | 20.1 | 19.2 | 21.3 | 22.2 | 23.5 | 1.08 | 0.76 | 1.08 | 1.36 | 1.60 |
| English | 34.3 | 44.5 | 42.7 | 40.2 | 49.2 | 1.26 | 0.58 | 1.26 | 2.30 | 2.84 |
| Hindi | 31.0 | 48.0 | 40.1 | 33.7 | 54.9 | 0.75 | 0.37 | 0.86 | 1.02 | 1.10 |
| Portuguese | 23.8 | 37.5 | 36.5 | 29.3 | 40.8 | 1.40 | 0.72 | 1.39 | 1.77 | 1.62 |
| Spanish | 24.2 | 38.0 | 36.1 | 27.9 | 37.6 | 3.44 | 2.06 | 3.40 | 3.78 | 3.92 |
| Overall | 26.7 | 37.4 | 35.3 | 30.7 | 41.2 | 1.59 | 0.90 | 1.60 | 2.05 | 2.22 |

PNLG ↓ (lower is better):

| Language | MCQ Base | MCQ CAA | MCQ SAS | MCQ BiPO | MCQ YaPO | OE Base | OE CAA | OE SAS | OE BiPO | OE YaPO |
|---|---|---|---|---|---|---|---|---|---|---|
| Arabic | 0.129 | 0.167 | 0.098 | 0.141 | 0.098 | 1.470 | 1.583 | 1.482 | 1.359 | 1.346 |
| English | 0.415 | 0.384 | 0.439 | 0.268 | 0.249 | 1.618 | 1.871 | 1.619 | 1.333 | 1.198 |
| Hindi | 0.069 | 0.082 | 0.051 | -0.005 | 0.031 | 1.709 | 1.982 | 1.606 | 1.619 | 1.632 |
| Portuguese | 0.184 | 0.192 | 0.113 | 0.126 | 0.165 | 1.569 | 1.798 | 1.584 | 1.462 | 1.511 |
| Spanish | 0.470 | 0.270 | 0.358 | 0.360 | 0.375 | 0.965 | 1.070 | 0.971 | 0.875 | 0.851 |
| Overall | 0.253 | 0.219 | 0.212 | 0.178 | 0.184 | 1.466 | 1.661 | 1.452 | 1.330 | 1.308 |

Table 9: RCA and PNLG analysis by language for MCQ and open-ended tasks (all methods).

#### MMLU performance.

Table [10](https://arxiv.org/html/2601.08441v1#A3.T10) reports MMLU results using MCQ-derived steering vectors across all methods. Overall, MMLU accuracy remains close to the unsteered baseline, suggesting that culturally targeted interventions largely preserve general capabilities under our evaluation setup. Consistent with our main findings, we observe small but systematic differences between methods, with the highest scores typically concentrated in a single method across conditions. We emphasize that the baseline is reported once globally (with chat template), and all steered evaluations follow the same prompting and scoring protocol as described in Section [5](https://arxiv.org/html/2601.08441v1#S5).

Baseline (no steering): 57.58%.

| Language | Country | L-CAA | L-SAS | L-BiPO | L-YaPO | N-CAA | N-SAS | N-BiPO | N-YaPO | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| English | UK | 57.11 | 56.91 | 57.58 | 57.29 | 57.07 | 56.91 | 57.61 | 57.32 | 57.10 | 56.73 | 57.52 | 57.24 |
| English | USA | 57.10 | 57.08 | 57.58 | 57.32 | 56.94 | 57.03 | 57.57 | 57.29 | 57.05 | 57.19 | 57.66 | 57.09 |
| English | Australia | 56.97 | 56.93 | 57.47 | 57.25 | 57.03 | 57.07 | 57.45 | 57.17 | 56.97 | 56.93 | 57.43 | 57.10 |
| English | Average | 57.06 | 56.97 | 57.54 | 57.29 | 57.01 | 57.00 | 57.54 | 57.26 | 57.04 | 56.95 | 57.54 | 57.14 |
| Spanish | Spain | 56.99 | 56.97 | 57.61 | 57.30 | 56.93 | 56.84 | 57.64 | 57.27 | 57.02 | 56.94 | 57.68 | 57.27 |
| Spanish | Mexico | 56.99 | 57.09 | 57.66 | 57.36 | 57.05 | 57.03 | 57.57 | 57.27 | 56.98 | 57.08 | 57.62 | 57.12 |
| Spanish | Bolivia | 56.96 | 56.92 | 57.47 | 57.17 | 56.85 | 57.05 | 57.45 | 57.09 | 56.95 | 57.08 | 57.39 | 57.02 |
| Spanish | Average | 56.98 | 56.99 | 57.58 | 57.28 | 56.94 | 56.97 | 57.55 | 57.21 | 56.98 | 57.03 | 57.56 | 57.14 |
| Arabic | Egypt | 57.13 | 57.11 | 57.51 | 57.06 | 57.02 | 57.18 | 57.50 | 57.14 | 57.21 | 57.13 | 57.42 | 56.97 |
| Arabic | KSA | 57.27 | 57.10 | 57.62 | 57.35 | 57.27 | 57.19 | 57.56 | 57.36 | 57.29 | 57.12 | 57.66 | 57.16 |
| Arabic | Levantine | 57.02 | 57.12 | 57.64 | 57.37 | 56.98 | 57.04 | 57.58 | 57.29 | 56.95 | 57.08 | 57.67 | 57.17 |
| Arabic | Morocco | 57.17 | 57.07 | 57.57 | 57.30 | 57.26 | 57.01 | 57.61 | 57.36 | 57.12 | 57.05 | 57.72 | 57.12 |
| Arabic | Average | 57.15 | 57.10 | 57.58 | 57.27 | 57.13 | 57.10 | 57.56 | 57.29 | 57.14 | 57.10 | 57.62 | 57.10 |
| Hindi | India | 57.00 | 56.98 | 57.66 | 57.26 | 56.94 | 57.06 | 57.69 | 57.29 | 56.95 | 57.12 | 57.70 | 57.23 |
| Hindi | Nepal | 56.93 | 56.97 | 57.53 | 57.22 | 57.05 | 57.04 | 57.53 | 57.16 | 57.16 | 57.08 | 57.45 | 57.06 |
| Hindi | Average | 56.97 | 56.98 | 57.60 | 57.24 | 57.00 | 57.05 | 57.61 | 57.23 | 57.06 | 57.10 | 57.58 | 57.15 |

Table 10: Performance on MMLU using MCQ steering vectors (all methods, %). Column prefixes: L = Localized, N = Non-localized, B = Both. The non-steered baseline accuracy is reported once globally (with chat template).

Appendix D Scalability to Other Models
--------------------------------------

We further validate our approach on a larger backbone, Gemma-2-9B-it, by training separate steering vectors for all methods and re-evaluating them on Arabic MCQs, Arabic open-ended cultural prompts, and a general safety suite (Tables [11](https://arxiv.org/html/2601.08441v1#A4.T11), [12](https://arxiv.org/html/2601.08441v1#A4.T12), and [14](https://arxiv.org/html/2601.08441v1#A4.T14)). We also report MMLU results for completeness (Table [13](https://arxiv.org/html/2601.08441v1#A4.T13)).

#### MCQ robustness at 9B.

| Language | Country | L-Base | L-CAA | L-SAS | L-BiPO | L-YaPO | N-Base | N-CAA | N-SAS | N-BiPO | N-YaPO | B-Base | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Arabic | Egypt | 42.1 | 43.8 | 50.3 | 45.1 | 46.1 | 35.0 | 38.2 | 43.3 | 38.5 | 40.1 | 38.2 | 40.6 | 46.4 | 41.1 | 43.0 |
| Arabic | KSA | 29.5 | 31.5 | 27.4 | 32.5 | 31.2 | 18.9 | 19.2 | 20.5 | 20.2 | 19.9 | 25.0 | 25.3 | 23.7 | 26.3 | 25.8 |
| Arabic | Levantine | 26.8 | 26.5 | 26.5 | 29.4 | 25.9 | 24.1 | 23.8 | 24.1 | 25.9 | 22.8 | 24.0 | 25.9 | 24.7 | 27.0 | 25.4 |
| Arabic | Morocco | 8.7 | 8.7 | 7.0 | 12.6 | 9.1 | 9.1 | 6.3 | 6.3 | 10.1 | 7.9 | 9.1 | 7.6 | 6.6 | 11.4 | 8.3 |
| Arabic | Average | 26.8 | 27.6 | 27.8 | 29.5 | 28.1 | 21.8 | 21.9 | 23.6 | 23.7 | 22.6 | 24.1 | 24.9 | 25.4 | 26.5 | 25.6 |

Table 11: Multiple-choice question performance (%) by language and country across settings using Gemma-2-9B-it. Column prefixes: L = Localized, N = Non-localized, B = Both; Base = Baseline, YaPO = YaPO (ours).

On Arabic MCQs (Table [11](https://arxiv.org/html/2601.08441v1#A4.T11)), all steering methods still improve over the unsteered baseline across most settings, but the stronger base model leaves less headroom and reduces the separation between methods. In this regime, BiPO most often attains the best average performance, while SAS, YaPO, and CAA provide comparable gains depending on the country and cultural setting. This indicates that, for discrete-choice tasks on a high-performing backbone, multiple steering schemes converge to similar behavior once the underlying policy is already relatively robust.

#### Open-ended generation exhibits clearer method differences.

| Language | Country | L-Base | L-CAA | L-SAS | L-BiPO | L-YaPO | N-Base | N-CAA | N-SAS | N-BiPO | N-YaPO | B-Base | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Arabic | Egypt | 5.30 | 4.01 | 6.41 | 5.73 | 6.10 | 5.33 | 3.85 | 6.07 | 5.75 | 5.82 | 5.34 | 4.02 | 6.18 | 5.67 | 5.91 |
| Arabic | KSA | 5.59 | 4.16 | 6.34 | 6.21 | 6.02 | 5.23 | 3.79 | 5.80 | 5.63 | 5.49 | 5.42 | 3.97 | 6.08 | 5.87 | 5.75 |
| Arabic | Levantine | 5.32 | 3.80 | 6.23 | 5.84 | 5.93 | 5.18 | 4.17 | 5.83 | 5.63 | 5.63 | 5.24 | 3.96 | 6.07 | 5.71 | 5.71 |
| Arabic | Morocco | 4.92 | 2.98 | 5.60 | 5.47 | 5.59 | 4.86 | 3.05 | 5.25 | 5.16 | 5.22 | 4.89 | 3.08 | 5.43 | 5.13 | 5.31 |
| Arabic | Average | 5.28 | 3.74 | 6.15 | 5.81 | 5.91 | 5.15 | 3.72 | 5.74 | 5.54 | 5.54 | 5.22 | 3.76 | 5.94 | 5.60 | 5.67 |

Table 12: Open-ended performance (0–10 scale) by language and country across settings using Gemma-2-9B-it. Column prefixes: L = Localized, N = Non-localized, B = Both; Base = Baseline, YaPO = YaPO (ours).

For Arabic open-ended prompts (Table [12](https://arxiv.org/html/2601.08441v1#A4.T12)), the ranking becomes more structured: SAS consistently yields the strongest scores, with BiPO and YaPO close behind and reliably improving over the baseline across settings. In contrast, CAA remains less reliable for long-form generation and tends to underperform relative to other methods. We found that CAA and SAS are particularly sensitive to the steering multiplier $\lambda$ and activation threshold $\tau$, and can produce unstable outputs even for $\lambda\leq 1$; the best trade-off was typically obtained around $\lambda=0.5$ and $\tau=0.7$, mirroring the sensitivity trends observed with the 2B model (Section [5.5](https://arxiv.org/html/2601.08441v1#S5.SS5)). Due to compute constraints, we did not perform an equivalent $\{\lambda,\tau\}$ sweep for BiPO and YaPO at 9B, and instead fixed them to $\lambda=1$ and $\tau=0.7$.

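As a rough sketch of the $\{\lambda,\tau\}$ interplay discussed above: a steering vector $v$ is added with strength $\lambda$, while $\tau$ gates whether the intervention fires at all. The gating criterion below (cosine similarity between the hidden state and $v$) is our illustrative assumption; the methods compared in the paper may define the activation threshold differently.

```python
# Illustrative steering with multiplier lam and activation threshold tau.
# The cosine-based gate is an assumption for illustration only.

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def steer(h, v, lam=0.5, tau=0.7):
    """Add lam * v to the hidden state h, but only when h already aligns
    with the steering direction v above the threshold tau."""
    if cosine(h, v) < tau:
        return list(h)  # below threshold: leave the activation untouched
    return [a + lam * b for a, b in zip(h, v)]
```

Larger $\lambda$ steers harder but, as noted above, risks unstable long-form generations; the threshold controls how often the intervention is applied at all.
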
#### General tasks and MMLU.

Baseline (no steering): 72.35%.

| Language | Country | L-CAA | L-SAS | L-BiPO | L-YaPO | N-CAA | N-SAS | N-BiPO | N-YaPO | B-CAA | B-SAS | B-BiPO | B-YaPO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Arabic | Egypt | 72.25 | 72.19 | 72.38 | 72.27 | 72.33 | 72.19 | 72.33 | 72.26 | 72.25 | 72.17 | 72.33 | 72.16 |
| Arabic | KSA | 72.21 | 72.21 | 72.33 | 72.28 | 72.26 | 72.15 | 72.36 | 72.28 | 72.22 | 72.19 | 72.34 | 72.23 |
| Arabic | Levantine | 72.27 | 72.23 | 72.34 | 72.26 | 72.28 | 72.21 | 72.36 | 72.29 | 72.28 | 72.19 | 72.36 | 72.28 |
| Arabic | Morocco | 72.34 | 72.16 | 72.35 | 72.25 | 72.28 | 72.21 | 72.33 | 72.29 | 72.31 | 72.22 | 72.35 | 72.23 |
| Arabic | Average | 72.27 | 72.20 | 72.35 | 72.27 | 72.29 | 72.19 | 72.35 | 72.28 | 72.27 | 72.19 | 72.35 | 72.23 |

Table 13: MMLU performance (%) by language and country across settings using Gemma-2-9B-it and MCQ steering vectors. Column prefixes: L = Localized, N = Non-localized, B = Both. The non-steered baseline accuracy is reported once globally (with chat template).

| Model | Task | Baseline | CAA | SAS | BiPO | YaPO (ours) |
|---|---|---|---|---|---|---|
| Gemma-2-9B-it | Hallucination | 1.37 | 1.43 | 1.47 | 1.39 | 1.41 |
| Gemma-2-9B-it | Wealth-Seeking | 1.77 | 1.95 | 1.82 | 1.79 | 1.78 |
| Gemma-2-9B-it | Jailbreak | 1.03 | 1.03 | 1.03 | 1.05 | 1.03 |
| Gemma-2-9B-it | Power-Seeking | 1.51 | 1.53 | 1.47 | 1.50 | 1.50 |
| Gemma-2-9B-it | Average | 1.42 | 1.49 | 1.45 | 1.43 | 1.43 |

Table 14: Performance on general tasks using Gemma-2-9B-it.

On the safety suite (Table [14](https://arxiv.org/html/2601.08441v1#A4.T14)), all methods yield modest but consistent improvements over the baseline on average, with CAA slightly leading, SAS typically second, and BiPO/YaPO tracking closely. Finally, MMLU remains essentially unchanged under steering (Table [13](https://arxiv.org/html/2601.08441v1#A4.T13)), suggesting that these interventions preserve general capabilities at 9B and primarily act as targeted behavioral/cultural adjustments rather than broad capability shifts.

Overall, these results show that our conclusions are not tied to a specific model scale: sparse learned steering with YaPO remains reliable on a larger backbone, while CAA continues to exhibit a discrepancy between short-form gains and long-form degradation. Moreover, as headroom shrinks at larger scale, careful tuning of the steering strength (cf. Section [5.5](https://arxiv.org/html/2601.08441v1#S5.SS5)) becomes increasingly important and could further improve the best-performing configurations in specific countries (e.g., Egypt and Nepal, as seen in Figure [4](https://arxiv.org/html/2601.08441v1#S5.F4)).

Appendix E Evaluation: LLM-as-Judge Prompts
-------------------------------------------

### Evaluation Prompts for Generalization Tasks

For the generalization tasks, we used the same judgment framework originally employed for BiPO to ensure a fair and consistent comparison. Each behavior (hallucination, jailbreak, power-seeking, and wealth-seeking) was evaluated using identical scoring rubrics and LLM-judge prompts, allowing direct comparability between BiPO and YaPO under the same evaluation criteria. This setup isolates the effect of sparse versus dense steering while maintaining alignment with BiPO's original evaluation protocol.

### E.1 Cultural Localization Evaluation Prompt

The culture evaluation prompt is designed to assess the quality and cultural specificity of open-ended responses generated by language models in localization tasks. It provides a structured, multi-axis scoring system that captures the fluency, factual accuracy, cultural appropriateness, and overall content quality of each response. To ensure robustness and interpretability, the framework also includes critical checks for fabricated references, nonsensical text, and excessive repetition. By requiring evaluators to produce judgments in a standardized JSON format, this setup supports scalable, automated evaluation pipelines while maintaining high alignment with human judgment standards in culturally sensitive domains.

636
+ Appendix F Dataset
637
+ ------------------
638
+
639
+ Our dataset is explicitly designed to make these failures measurable by stress-testing _implicit vs. explicit_ cultural localization under _within-language_ control. We cover 52 lived-experience topics (Table [16](https://arxiv.org/html/2601.08441v1#A6.T16 "Table 16 ‣ F.2 Dataset Statistics ‣ Appendix F Dataset ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")) meals, routines, family relations, greetings and etiquette, financial habits, ceremonies and mourning, holidays, childhood games, music and idioms, because these domains reveal _norms_ rather than trivia. For each topic we manually authored 40–45 seed questions phrased as realistic scenarios (e.g., weekend breakfast, commute habits, hospitality customs). Every question appears in _paired form_: a _localized_ variant that names the country and a _non-localized_ variant that omits it, forcing the model to rely on dialect and situational cues. Each item is cast as a multiple-choice question with _one culturally valid option per country_ within the same language group, written in that country’s _dialect_, plus a _Western control option_ expressed in a standardized register (MSA for Arabic) to isolate culture from translation artifacts. This construction produces mutually plausible yet mutually exclusive answers so that superficial heuristics are insufficient. It enables principled measurement of the _Localization Gap_ (accuracy shift from non-localized to localized form), _Intra-language Dominance Bias_ (systematic preference for one country in non-localized form), and _Stereotype Preference_ (gravitating toward caricatured or Western answers against human-majority ground truth). By holding language fixed while varying country, dialect, and practice, we decouple cultural competence from translation and prompt leakage, converting casual cultural signals into _diagnostic probes of situated reasoning_.

### F.1 Data Curation Pipeline

We built the dataset through a multi-stage pipeline that integrates generation, filtering, and contrastive packaging. We began by manually drafting seed questions across the 52 topics, targeting concrete, culturally salient activities such as meal timing, gendered after-work routines, gift-giving customs, and burial practices. To populate country perspectives consistently and at scale, we piloted several closed-source models and selected Gemini-2.5-Flash for its quality and speed in parallel multi-perspective prompting: for each language × country pair (e.g., Arabic: Egypt, KSA, Levantine, Morocco; English: USA, UK, Australia; Spanish: Bolivia, Mexico, Spain; Portuguese: Brazil, Mozambique, Portugal; Hindi: India, Nepal), the model was instructed to act as a _country-specific cultural expert_ and answer in that country’s _dialect_. In the same pass we generated a standardized _Western control_ answer (in MSA for Arabic) to serve as a neutral reference without introducing translation confounds.

After generation, we performed _existence filtering_ to remove questions that do not apply to a given culture (e.g., asking about an ingredient never used in that region). We then transformed each item into final multiple-choice format, ensuring that each option was dialect-specific and semantically distinct; a semantic similarity pass plus manual review removed near-duplicates to guarantee discriminative answer sets. We next generated _paired localized/non-localized variants_ for each item, enabling measurement of explicit versus implicit cultural reasoning. Finally, we packaged the MCQ and open-ended splits and computed per-language statistics (see Table [15](https://arxiv.org/html/2601.08441v1#A6.T15 "Table 15 ‣ F.2 Dataset Statistics ‣ Appendix F Dataset ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")).
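The semantic-similarity pass can be sketched as a greedy filter that keeps an option only if it is sufficiently dissimilar from every option already kept. The embedding model and the 0.9 threshold below are illustrative assumptions, not the pipeline's actual settings; toy 2-D vectors stand in for real sentence embeddings.

```python
# Hypothetical sketch of near-duplicate removal via cosine similarity.
# Threshold and embeddings are illustrative assumptions.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def filter_near_duplicates(options, embeddings, threshold=0.9):
    """Greedily keep an option only if it stays below the similarity
    threshold with every already-kept option, so the surviving answer
    set remains discriminative."""
    kept = []
    for i in range(len(options)):
        if all(cosine(embeddings[i], embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return [options[i] for i in kept]

options = ["couscous on Fridays", "couscous every Friday",
           "a full English breakfast"]
embeddings = [[1.0, 0.1], [0.99, 0.12], [0.1, 1.0]]
print(filter_near_duplicates(options, embeddings))
# ['couscous on Fridays', 'a full English breakfast']
```

Flagged near-duplicates would then go to the manual review step described above rather than being dropped blindly.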

### F.2 Dataset Statistics

| Language | Country | Localized | Non-localized | Total |
| --- | --- | --- | --- | --- |
| English | USA | 1,372 | 1,372 | 2,744 |
| | UK | 1,372 | 1,372 | 2,744 |
| | Australia | 1,372 | 1,372 | 2,744 |
| | Subtotal | 4,116 | 4,116 | 8,232 |
| Spanish | Bolivia | 1,536 | 1,536 | 3,072 |
| | Mexico | 1,535 | 1,535 | 3,070 |
| | Spain | 1,536 | 1,536 | 3,072 |
| | Subtotal | 4,607 | 4,607 | 9,214 |
| Portuguese | Brazil | 1,607 | 1,607 | 3,214 |
| | Mozambique | 1,607 | 1,607 | 3,214 |
| | Portugal | 1,606 | 1,606 | 3,212 |
| | Subtotal | 4,820 | 4,820 | 9,640 |
| Hindi | India | 1,550 | 1,550 | 3,100 |
| | Nepal | 1,550 | 1,550 | 3,100 |
| | Subtotal | 3,100 | 3,100 | 6,200 |
| Arabic | Egypt | 1,509 | 1,509 | 3,018 |
| | Saudi Arabia (KSA) | 1,509 | 1,509 | 3,018 |
| | Levantine | 1,508 | 1,508 | 3,016 |
| | Morocco | 1,508 | 1,508 | 3,016 |
| | Subtotal | 6,034 | 6,034 | 12,068 |
| Total | | 22,677 | 22,677 | 45,354 |

Table 15: Multilingual dataset statistics (per country and language totals).

The resulting dataset (Table [15](https://arxiv.org/html/2601.08441v1#A6.T15 "Table 15 ‣ F.2 Dataset Statistics ‣ Appendix F Dataset ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")) provides dense, balanced coverage across five languages and fourteen countries, with near-uniform counts per language–country variant (≈1,372–1,607 questions per variant) and a total of 45,354 items. Localized and non-localized forms are exactly balanced (22,677 items each), enabling clean estimation of the Localization Gap. The breadth across 52 topics (see Table [16](https://arxiv.org/html/2601.08441v1#A6.T16 "Table 16 ‣ F.2 Dataset Statistics ‣ Appendix F Dataset ‣ YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation")) and depth per topic (≈40–45 items) provide statistical headroom for per-topic and per-country analyses, bias detection, and mechanistic interpretability studies such as activation patching and sparse-feature steering.
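As a quick sanity check, the per-country localized counts in Table 15 reproduce the reported subtotals and grand total (non-localized counts mirror the localized ones exactly):

```python
# Reproducing the totals in Table 15 from the per-country localized counts.
per_country = {
    "English":    {"USA": 1372, "UK": 1372, "Australia": 1372},
    "Spanish":    {"Bolivia": 1536, "Mexico": 1535, "Spain": 1536},
    "Portuguese": {"Brazil": 1607, "Mozambique": 1607, "Portugal": 1606},
    "Hindi":      {"India": 1550, "Nepal": 1550},
    "Arabic":     {"Egypt": 1509, "Saudi Arabia (KSA)": 1509,
                   "Levantine": 1508, "Morocco": 1508},
}
localized = sum(n for lang in per_country.values() for n in lang.values())
print(localized)       # 22677 localized items
print(localized * 2)   # 45354 total (localized + non-localized)
```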

| Category | Topics Covered | Cultural Dimensions |
| --- | --- | --- |
| Daily Meals & Food Culture | Breakfast, lunch, dinner, snacks, desserts, fruits, eating habits | Traditional dishes, meal timing, eating etiquette, food preferences, dietary restrictions, communal vs. individual eating |
| Daily Routines & Activities | Before work/college, commuting, after work/uni (men/women), free time, household tasks | Gendered routines, time use, leisure preferences, division of domestic labor, work–life balance |
| Family & Social Relations | Parent–child interactions and activities, grandparent relations, siblings, cousins, colleagues | Family hierarchy, respect norms, intergenerational dynamics, kinship obligations, personal vs. professional boundaries |
| Communication & Social Etiquette | Verbal greetings, non-verbal communication, hospitality, punctuality, cleanliness | Greeting formulas, body language, guest treatment, time perception, hygiene norms |
| Financial & Economic Practices | Saving habits, debt and loans, financial discussions, inheritance | Attitudes toward money, saving vs. spending, debt perception, investment customs, inheritance rules |
| Ceremonies & Life Events | Weddings, dowry practices, music and logistics, gender-specific ceremonies, burial and mourning | Marriage rituals, celebration styles, gender segregation, death rituals, mourning practices |
| Holidays & Celebrations | Religious holidays (before/during), non-religious holidays, gift-giving | Religious observances, secular celebrations, festive preparation, symbolic meaning |
| Cultural Expression & Recreation | Childhood games, local songs and dances, musical instruments, idioms, proverbs, agriculture | Traditional games, folk music and dance, linguistic expressions, agricultural customs, community recreation |

Table 16: Dataset topics by thematic category. The dataset spans 52 topics across five cultural contexts (Moroccan, Egyptian, Saudi Arabian, Levantine, and American), covering daily life, norms, and practices.

### F.3 Representative Examples from the Dataset

English (USA)

Portuguese (Portugal)

Spanish (Spain)