Title: A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs

URL Source: https://arxiv.org/html/2505.23816

Published Time: Wed, 21 Jan 2026 01:39:10 GMT

2 A Steerability Measurement Framework
--------------------------------------

We aim to measure how well a model follows structured, multi-dimensional user goals in a single-turn setting, e.g., text-rewriting. Here, we formalize steerability in the context of LLM evaluation (Section [2.1](https://arxiv.org/html/2505.23816v2#S2.SS1)), and introduce metrics for LLM performance in the space of user goals (Section [2.2](https://arxiv.org/html/2505.23816v2#S2.SS2)).

### 2.1 Designing a steerability metric

We aim to evaluate the steerability of a conditional generative model $f$, which produces outputs $y \in \mathcal{Y}$ via sampling $y \sim f(\cdot \mid x)$ given input $x \in \mathcal{X}$. To evaluate $f$, one generally measures performance over some user goals $\mathbf{z}^{*} \sim P(\cdot)$, also called _targets_, where users verbalize intents $\mathbf{z}^{*}$ via $x \sim P(\cdot \mid \mathbf{z}^{*})$.
Such metrics consist of (i) an aggregation function over intents $\mathbf{z}^{*}$ and (ii) a loss function $\ell(\cdot, \mathbf{z}^{*})$ that captures concordance between outputs and targets $\mathbf{z}^{*}$:

$$\mathrm{metric}(f) \triangleq \mathbb{E}_{\mathbf{z}^{*} \sim P(\cdot)}\, \mathbb{E}_{x \sim P(\cdot \mid \mathbf{z}^{*})}\, \mathbb{E}_{y \sim f(\cdot \mid x)}\, \ell(y, \mathbf{z}^{*}) \tag{1}$$

Prior LLM evaluations choose different aggregation and loss functions, summarized in Table [1](https://arxiv.org/html/2505.23816v2#S1). Instruction-following tasks often use a binary $\ell$ (e.g., correctness (Qin et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib40); Zhou et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib61); He et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib13))) and implicitly assume a small set of canonical goals (e.g., instruction types). 1D metrics define a continuous $\ell$ that returns a scalar (e.g., $P(\text{desired behavior})$ (Rimsky et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib46); Turner et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib52)); concept detection "scores" (Wu et al. [2025](https://arxiv.org/html/2505.23816v2#bib.bib57))).
Ranking accuracy-based losses (Ouyang et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib37); Rafailov et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib42)) emphasize relative rather than absolute response quality. Some evaluations rely on chat log data or web scraping (Köpf et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib22); Zhao et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib59); Raffel et al. [2020](https://arxiv.org/html/2505.23816v2#bib.bib43)), or are purpose-built to test specific capabilities (Zhou et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib61); BIG-Bench contributors [2023](https://arxiv.org/html/2505.23816v2#bib.bib4); Hendrycks et al. [2021](https://arxiv.org/html/2505.23816v2#bib.bib14)), which may not be representative of potential users and goals.

These shortcomings may be especially pronounced in _steering tasks_, where users aim to transform model outputs along multiple dimensions and levels, as in text-rewriting. In particular, steering tasks may contain a wider range of potential user goals than typically seen in benchmarks. Such tasks may also expose miscalibration, as coarse metrics such as binary accuracy or rankings can lead models to score distinct responses identically, flattening different types of deviations from the user's intent.
In addition, since steering tasks may include requests for multi-dimensional changes to text, single-dimensional metrics may hide unintended side effects in LLM responses.

We contribute a steerability metric that addresses these limitations by (i) aggregating over a _uniform_ distribution of goals, allowing us to better identify poor coverage, and (ii) using a loss function $\ell$ that measures absolute distance between target goals and model outputs in multiple dimensions. Specifically, let $\mathbf{z}_{0}$ be a source text to be transformed, and let $\mathbf{\hat{z}}$ be the intent satisfied by the LLM output. Recall that $\mathbf{z}^{*}$ is the user's intent. Treating $\mathbf{z}^{*}$, $\mathbf{\hat{z}}$, and $\mathbf{z}_{0}$ as elements of a shared metric space $\mathcal{Z}$ (_e.g._, Fig. [1](https://arxiv.org/html/2505.23816v2#S1.F1)), we write:

$$\mathrm{steerability}(f) \triangleq \mathbb{E}_{\mathbf{z}_{0}, \mathbf{z}^{*} \sim \mathcal{U}}\, \mathbb{E}_{\mathbf{\hat{z}} \sim f(\cdot \mid \mathbf{z}_{0}, \mathbf{z}^{*})} \left[ \lVert \mathbf{\hat{z}} - \mathbf{z}^{*} \rVert_{2} \right] \tag{2}$$

where $\mathcal{U}$ is a uniform distribution over $\mathbf{z}_{0}$ and $\mathbf{z}^{*}$.

### 2.2 Measuring steerability in practice

Our steerability metric (Eq. [2](https://arxiv.org/html/2505.23816v2#S2.E2)) puts $\mathbf{z}^{*}$, $\mathbf{z}_{0}$, and $\mathbf{\hat{z}}$ in a shared space $\mathcal{Z}$. To define $\mathcal{Z}$, we observe that user goals $\mathbf{z}^{*}$ for steering tasks often decompose along interpretable dimensions (e.g., "Make this harder to read and a little longer").
Thus, we define $\mathcal{Z}$ to be a set of _dimensions_ representing attributes of text (e.g., reading level and length). Formally, define goal-space $\mathcal{Z} = [0,1]^{|\mathcal{G}|}$, and functions $g_{i} : \mathcal{Y} \to [0,1]$ for $i \in 1, \dots, |\mathcal{G}|$ that translate model outputs $y \sim f(\cdot \mid x)$ into goal-space, where $g_{i}$ can be based on existing measures of text features (e.g., Flesch-Kincaid grade level (Kincaid et al. [1975](https://arxiv.org/html/2505.23816v2#bib.bib20)), word count). The joint output of all $g_{i}$ is the goal-space mapping of $y$; _i.e._, a vector representation of $y$.

As an example, consider measuring steerability in text-rewriting (Figure [1](https://arxiv.org/html/2505.23816v2#S1.F1)). A user aims to rewrite a _source_ (e.g., "Cats are animals") mapping to $\mathbf{z}_{0}$ in goal-space. Suppose that the user wants a harder-to-read, slightly longer text, which maps to $\mathbf{z}^{*}$, expressed via a prompt (e.g., "Make this harder to read and a little longer"). We assume $\mathbf{z}^{*}$ is _feasible_; _i.e._, it is possible to make the source harder to read and slightly longer. The LLM produces an output (e.g., "Say, felines are totally like…") satisfying intent $\mathbf{\hat{z}}$, which may not match $\mathbf{z}^{*}$. We quantify the mismatch via steering error; _i.e._, the Euclidean distance between $\mathbf{z}^{*}$ and $\mathbf{\hat{z}}$ in multi-dimensional goal-space.
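As an illustration, two such evaluators $g_{i}$ can be sketched directly. The Flesch-Kincaid formula below is the standard one ($0.39 \cdot \text{words/sentence} + 11.8 \cdot \text{syllables/word} - 15.59$), but the syllable heuristic and the $[0,1]$ squashing constants are our own illustrative choices, not the paper's:

```python
import re

def count_syllables(word):
    # Crude vowel-group heuristic; real evaluators use a pronunciation dictionary.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a typical silent final "e"
    return max(n, 1)

def flesch_kincaid_grade(text):
    # Standard Flesch-Kincaid grade-level formula.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

def goal_space_mapping(text, max_grade=18.0, max_words=1000):
    """Joint output of per-dimension evaluators g_i, squashed to [0, 1].
    Formality and lexical diversity would be added analogously."""
    g_read = min(max(flesch_kincaid_grade(text), 0.0), max_grade) / max_grade
    g_len = min(len(text.split()), max_words) / max_words
    return [g_read, g_len]
```

Each component is a deterministic, auditable function of the output text, so the resulting vector can be compared directly against $\mathbf{z}_{0}$ and $\mathbf{z}^{*}$.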
To ensure _coverage_, we average over a uniform sample of $\mathbf{z}_{0}$ and $\mathbf{z}^{*}$, yielding Eq. [2](https://arxiv.org/html/2505.23816v2#S2.E2).

However, steering error ($\lVert \mathbf{z}^{*} - \mathbf{\hat{z}} \rVert_{2}$) does not distinguish miscalibration (errors in magnitude) from side effects (errors due to unintended changes). To address this, we write:

$$\lVert \mathbf{z}^{*} - \mathbf{\hat{z}} \rVert_{2} = \lVert (\mathbf{z}^{*} - \mathbf{z}_{0}) - (\mathbf{\hat{z}} - \mathbf{z}_{0}) \rVert_{2}. \tag{3}$$

Now consider the orthogonal decomposition of Eq. [3](https://arxiv.org/html/2505.23816v2#S2.E3) onto the desired movement vector $\mathbf{z}^{*} - \mathbf{z}_{0}$, yielding $\mathrm{proj}_{\mathbf{z}^{*}-\mathbf{z}_{0}}(\mathbf{z}^{*}-\mathbf{\hat{z}})$ and $\mathrm{proj}^{\perp}_{\mathbf{z}^{*}-\mathbf{z}_{0}}(\mathbf{z}^{*}-\mathbf{\hat{z}})$, respectively. The _scalar_ projections ($\mathrm{sproj}(\cdot)$), i.e., the magnitudes of these vectors, correspond to steering error along the direction of the user's intent (_miscalibration_) and the error orthogonal to it (_orthogonality_), respectively.
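Concretely, this projection split and the normalized metrics of Eqs. (4) and (5) below amount to a few lines of numpy. A minimal sketch; taking the absolute value of the signed projection, so that over- and under-shooting are penalized symmetrically and all quantities are non-negative, is our reading of the definitions:

```python
import numpy as np

def steering_metrics(z0, z_star, z_hat):
    """Decompose steering error into miscalibration (along the intent) and
    orthogonality (perpendicular to it). Assumes z_star != z0, i.e., some
    change is actually requested."""
    z0, z_star, z_hat = map(np.asarray, (z0, z_star, z_hat))
    want = z_star - z0                       # requested movement
    err = z_star - z_hat                     # steering error vector
    d = want / np.linalg.norm(want)          # unit vector along the intent
    sproj = err @ d                          # signed error along the intent
    ortho_vec = err - sproj * d              # error orthogonal to the intent
    miscal = abs(sproj) / np.linalg.norm(want)        # normalized by requested movement
    ortho = np.linalg.norm(ortho_vec) / np.linalg.norm(z_hat - z0)  # by observed movement
    return np.linalg.norm(err), miscal, ortho
```

For instance, with $\mathbf{z}_{0} = (0, 0)$, a request $\mathbf{z}^{*} = (1, 0)$, and a response $\mathbf{\hat{z}} = (0.5, 0.5)$ that moves half as far while drifting sideways, the steering error $\approx 0.71$ splits into miscalibration $0.5$ and orthogonality $\approx 0.71$.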
We normalize the scalar projections to account for the "severity" of the error:

$$\mathrm{miscal}(\mathbf{z}^{*}, \mathbf{\hat{z}} \mid \mathbf{z}_{0}) = \mathrm{sproj}_{\mathbf{z}^{*}-\mathbf{z}_{0}}(\mathbf{z}^{*}-\mathbf{\hat{z}}) \,/\, \lVert \mathbf{z}^{*} - \mathbf{z}_{0} \rVert_{2} \tag{4}$$

where miscalibration, or over/under-shooting in the direction of the intent, is normalized by the requested movement $\lVert \mathbf{z}^{*} - \mathbf{z}_{0} \rVert_{2}$. Orthogonality is normalized by the observed movement $\lVert \mathbf{\hat{z}} - \mathbf{z}_{0} \rVert_{2}$:

$$\mathrm{ortho}(\mathbf{z}^{*}, \mathbf{\hat{z}} \mid \mathbf{z}_{0}) = \mathrm{sproj}^{\perp}_{\mathbf{z}^{*}-\mathbf{z}_{0}}(\mathbf{z}^{*}-\mathbf{\hat{z}}) \,/\, \lVert \mathbf{\hat{z}} - \mathbf{z}_{0} \rVert_{2} \tag{5}$$

so that orthogonality corresponds to the proportion of goal-space movement orthogonal to the intent. These normalization steps broadly ensure that errors are penalized in proportion to the amount of requested or observed movement. All of these metrics are non-negative and minimized at zero.

3 Experimental Setup
--------------------

Steerability probes are benchmarks designed to measure steerability for a steering task.
We describe how we construct an example steerability probe for text-rewriting (Section [3.1](https://arxiv.org/html/2505.23816v2#S3.SS1)), the candidate steerability interventions evaluated (Section [3.2](https://arxiv.org/html/2505.23816v2#S3.SS2)), and our inference setup (Section [3.3](https://arxiv.org/html/2505.23816v2#S3.SS3)).

### 3.1 Steerability probe construction

We measure steerability in text-rewriting, a common task likely well-represented in LLM training data. Our probe has two components: (i) goal dimensions defining a goal-space $\mathcal{Z}$ and (ii) a dataset of goals $(\mathbf{z}_{0}, \mathbf{z}^{*}) \sim \mathcal{Z}$.

#### Design principles.

Goal-space can be constructed from any set of measurable goals. For this first study, we use goals measured by rule-based evaluators. Rule-based evaluators are deterministic and auditable, which makes results easier to interpret than with learned or model-based evaluators; otherwise, observed steering error may reflect evaluator error rather than the LLM being tested. However, our choice of evaluators is illustrative, not normative: our framework is modular and can use well-validated learned evaluators without changing the metric definitions.
To obtain a diverse sample of source texts, we combine datasets with diverse writing styles, from which a more uniform set can be sampled. We report additional details in Appendix [A](https://arxiv.org/html/2505.23816v2#A1).

#### Goal-space.

We select reading difficulty (Flesch-Kincaid grade (Kincaid et al. [1975](https://arxiv.org/html/2505.23816v2#bib.bib20))), formality (Heylighen-Dewaele F-score (Heylighen and Dewaele [1999](https://arxiv.org/html/2505.23816v2#bib.bib15))), textual lexical diversity (Jarvis and Hashimoto [2021](https://arxiv.org/html/2505.23816v2#bib.bib17)), and text length (word count). Though these dimensions may be correlated in training data, each is independently manipulable in theory (e.g., syllables per word and sentence length affect Flesch-Kincaid, whereas Heylighen-Dewaele measures the part-of-speech distribution). Requests mentioning these dimensions appear in real-world chats (e.g.,
WildChat/LMSys (Zhao et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib59)); Appendix [D.5](https://arxiv.org/html/2505.23816v2#A4.SS5)). Metric descriptions are in Appendix [A.2](https://arxiv.org/html/2505.23816v2#A1.SS2). For RL fine-tuning, we focus on a 2D goal-space (reading difficulty, formality) to isolate challenges in steerability in a simple setting where goal dimensions are conceptually distinct but likely correlated in real-world text. As a secondary validity check, we verify that an LLM-as-judge can detect changes in all chosen goal dimensions (see Appendix [B.3](https://arxiv.org/html/2505.23816v2#A2.SS3)).

#### Source texts and goals.

We sample seed texts from news articles (CNN/DailyMail (See et al. [2017](https://arxiv.org/html/2505.23816v2#bib.bib50))), social media (RedditTIFU (Kim et al. [2019](https://arxiv.org/html/2505.23816v2#bib.bib19))), English novels (BookSum (Kryściński et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib24))), and movie synopses (SummScreenFD (Shaham et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib51))) to cover a wide stylistic range (total $N = 8{,}303$). We compute goal-space mappings for seed texts and min-max scale the empirical middle 95% of each goal dimension to $[0,1]$, clipping values outside that range, so that goal dimensions are on comparable scales. We then resample $\mathbf{z}_{0}$ to be uniform over goal-space $\mathcal{Z}$ via reweighting. For each $\mathbf{z}_{0}$, we choose three active goal dimensions at random and sample $\mathbf{z}^{*}$ within $\pm 0.1$ to $0.7$ of the original value, copying components of $\mathbf{z}_{0}$ to $\mathbf{z}^{*}$ for inactive dimensions. Our main probe consists of 64 source texts with 32 goals each ($N = 2{,}048$).
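The seed-scaling and goal-sampling steps can be sketched as follows; clipping the sampled target back into $[0,1]$ is our own simplification of the feasibility handling, not necessarily the paper's:

```python
import numpy as np

def scale_dimension(values):
    """Min-max scale the empirical middle 95% of one goal dimension to [0, 1],
    clipping values that fall outside that range."""
    lo, hi = np.percentile(values, [2.5, 97.5])
    return np.clip((np.asarray(values) - lo) / (hi - lo), 0.0, 1.0)

def sample_target(z0, rng, n_active=3):
    """Pick n_active goal dimensions at random and move each by +/- 0.1 to 0.7;
    inactive dimensions are copied from z0 unchanged."""
    z_star = np.array(z0, dtype=float)
    active = rng.choice(len(z_star), size=n_active, replace=False)
    for i in active:
        delta = rng.uniform(0.1, 0.7) * rng.choice([-1.0, 1.0])
        z_star[i] = np.clip(z_star[i] + delta, 0.0, 1.0)  # clipping is our assumption
    return z_star
```

Scaling each dimension separately keeps all goal dimensions on comparable $[0,1]$ scales, so the Euclidean steering error does not silently weight one dimension more than another.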
All reported results are statistically significant at level $\alpha = 0.05$ based on a paired, two-sided Wilcoxon signed-rank test, with other tests used as specified.

For RL fine-tuning, our training probe consists of 384 source texts with 16 goals each ($N = 3{,}072$). We select _one_ active goal dimension and report metrics post-RL on 64 held-out source texts with 16 goals each ($N = 1{,}024$) in 2D goal-space with one active goal dimension unless specified.

#### Default prompt.

To turn $(\mathbf{z}_{0}, \mathbf{z}^{*})$ into prompts, we use a template-based prompt that names active goal dimensions with the modifier "slightly" for changes $< 0.2$, "much" for changes $> 0.5$, and no modifier otherwise (e.g., "make this [slightly/much] [more/less] formal"; see also Appendix [B.2](https://arxiv.org/html/2505.23816v2#A2.SS2)). To avoid penalizing prompt ambiguity rather than steerability failures, we discretize $\mathbf{z}^{*}$ and $\mathbf{\hat{z}}$ using the same bins implied by the prompt modifiers (cut points at 0, $\pm 0.2$, and $\pm 0.5$) when reporting steerability metrics.

### 3.2 Candidate steerability interventions

We evaluate common single-turn techniques for influencing model behavior. We choose a set of methods applicable to multi-dimensional, multi-level intents, namely prompt engineering, best-of-$N$ sampling, and RL fine-tuning.

#### Prompt engineering.
Prompt engineering is the design of a strategy for verbalizing the intent $\mathbf{z}^{*}$, which may span direct instructions (e.g., Figure [1](https://arxiv.org/html/2505.23816v2#S1.F1)), chain-of-thought (Wei et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib55)), or negative prompting (e.g., "don't change anything else") (Sanchez et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib47)). We extend the default prompt by testing the inclusion of negative prompts and specific instructions (e.g., "increase formality by changing X"), a chain-of-thought-style directive (e.g., "explain proposed edits"), and an underspecified prompt as a naive upper bound on steering error. While non-exhaustive, this set reflects common strategies proposed in prior work that are applicable to text-rewriting. See Appendix [B.2](https://arxiv.org/html/2505.23816v2#A2.SS2) for examples.

#### Best-of-$N$ sampling.

Best-of-$N$ selects the response with the lowest steering error out of $N$ attempts, assessing whether models are even capable of producing responses with low steering error.
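The selection rule is simple: score each of the $N$ candidates in goal-space and keep the closest. A sketch, where `to_goal_space` is a hypothetical mapping from response text to the goal vector:

```python
import numpy as np

def best_of_n(candidates, z_star, to_goal_space):
    """Return the candidate rewrite whose goal-space position is closest
    to the target z*, i.e., the one with the lowest steering error."""
    errors = [np.linalg.norm(to_goal_space(c) - np.asarray(z_star))
              for c in candidates]
    return candidates[int(np.argmin(errors))]
```

Note that this selection uses the evaluator itself as an oracle, so it bounds what sampling alone can achieve rather than modeling a deployable system.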
To encourage diverse but fluent samples, we use min-$p$ sampling ($p = 0.2$) with temperature 1 and a 0.1 frequency penalty (Minh et al. [2025](https://arxiv.org/html/2505.23816v2#bib.bib31)).

#### RL fine-tuning.

RL fine-tuning optimizes model parameters via online RL, using steering error as the negative reward. Since sampling directly from the uniform distribution $\mathcal{U}$ may be infeasible, we reweight training examples from a dataset $\mathcal{D}$ by estimating the density ratio $\mathcal{U}/\mathcal{D}$ via classifier-based methods (Bickel and Scheffer [2009](https://arxiv.org/html/2505.23816v2#bib.bib3)):

$$\min_{f} \; \mathbb{E}_{(\mathbf{z}_{0}, \mathbf{z}^{*}) \sim \mathcal{D}}\, \mathbb{E}_{\mathbf{\hat{z}} \sim f(\cdot \mid \mathbf{z}_{0}, \mathbf{z}^{*})} \left[ \hat{w}(\mathbf{z}_{0}, \mathbf{z}^{*}) \cdot \lVert \mathbf{z}^{*} - \mathbf{\hat{z}} \rVert_{2}^{2} \right]. \tag{6}$$

To optimize Eq. [6](https://arxiv.org/html/2505.23816v2#S3.E6), we use a policy gradient method based on leave-one-out proximal policy optimization (LOOP) (Chen et al. [2025](https://arxiv.org/html/2505.23816v2#bib.bib5)). We fine-tune a Llama3.1-8B model via rank-stabilized LoRA (Kalajdzievski [2023](https://arxiv.org/html/2505.23816v2#bib.bib18)). We generate rollouts using the same decoding parameters as best-of-$N$ sampling.
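The classifier-based estimate of $\hat{w} \approx \mathcal{U}/\mathcal{D}$ can be sketched by training a probabilistic classifier to distinguish uniform draws from dataset draws; the odds then estimate the density ratio. The plain logistic regression below is our stand-in for whichever classifier the authors used:

```python
import numpy as np

def density_ratio_weights(goals_d, goals_u, steps=2000, lr=0.5):
    """Classifier-based density-ratio estimation: fit a logistic classifier to
    separate uniform draws (label 1) from dataset draws (label 0); the odds
    c/(1-c), rescaled by the sample sizes, estimate w = U/D on the dataset."""
    X = np.vstack([goals_d, goals_u])
    X = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
    y = np.concatenate([np.zeros(len(goals_d)), np.ones(len(goals_u))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):  # plain batch gradient descent on logistic loss
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    Xd = X[: len(goals_d)]
    p_d = 1.0 / (1.0 + np.exp(-Xd @ w))
    return p_d / (1.0 - p_d) * (len(goals_d) / len(goals_u))  # one weight per example
```

Goal pairs from sparsely covered regions of $\mathcal{D}$ receive larger weights, so the reweighted objective approximates training under uniform coverage of goal-space.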
We discuss modifications to LOOP in Appendix [C.2](https://arxiv.org/html/2505.23816v2#A3.SS2), and hyperparameters in Appendix [C.3](https://arxiv.org/html/2505.23816v2#A3.SS3).

![Refer to caption](https://arxiv.org/html/2505.23816v2/x1.png)

Figure 2: Median (IQR) of steering error (left), miscalibration (middle), and orthogonality (right), Llama3 family. Caps denote empirical 95% CIs, with outliers ($\circ$) plotted individually. Steering error does not improve with model size (left), but miscalibration does (middle). Orthogonality drops slightly (right), but remains skewed away from 0.

### 3.3 LLM inference setup

#### Models.
We evaluate GPT (3.5 turbo, 4 turbo, 4o, 4.1 (OpenAI [2023](https://arxiv.org/html/2505.23816v2#bib.bib32), [2024b](https://arxiv.org/html/2505.23816v2#bib.bib34), [2024a](https://arxiv.org/html/2505.23816v2#bib.bib33))), Llama3 (Llama3 to 3.3, 8B/70B (Meta AI [2024](https://arxiv.org/html/2505.23816v2#bib.bib29))), Deepseek-R1 variants (8B/70B, distilled (DeepSeek-AI Team [2025](https://arxiv.org/html/2505.23816v2#bib.bib8))), Qwen3 (4B/32B/30B-A3B) (Qwen Team [2025](https://arxiv.org/html/2505.23816v2#bib.bib41)), and o1-/o3-mini (OpenAI [2024c](https://arxiv.org/html/2505.23816v2#bib.bib35), [2025](https://arxiv.org/html/2505.23816v2#bib.bib36)), the last of which we leave to Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1) due to high response refusal/truncation rates. LLM inference is performed using the OpenAI API (GPT) or vLLM (all others) (Kwon et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib25)), with greedy sampling and a context length of 32,000 tokens unless specified.

#### Output post-processing.
To ensure metrics are computed over valid rewrites, we post-process responses to remove boilerplate text (e.g., "Sure, here's…") and reasoning-token blocks. We also filter refusals, degenerate behavior (e.g., repetitive looping), and rewrites unrelated to the source using an LLM-as-judge and manual review of responses flagged by the LLM (see Appendices [B.3](https://arxiv.org/html/2505.23816v2#A2.SS3) and [D.4](https://arxiv.org/html/2505.23816v2#A4.SS4)). (Due to filtering, metrics are reported on slightly different response distributions. The effect is negligible: in our main results, rejected responses comprise $\leq 6$ ($\approx 0.29\%$) of outputs in any probe.)

4 Empirical Results
-------------------

We evaluate steerability in text-rewriting using the proposed metrics. Our results suggest that current LLMs are not steerable, which we largely attribute to side effects.
Further analysis suggests goal dimensions may be spuriously entangled (Section [4.1](https://arxiv.org/html/2505.23816v2#S4.SS1)). As candidate interventions, we try prompt engineering, which is ineffective, and best-of-$N$ sampling, which requires extensive sampling (Section [4.2](https://arxiv.org/html/2505.23816v2#S4.SS2)). We then try RL fine-tuning in 2D goal-space, which rivals best-of-128 and disentangles goals, but side effects remain (Section [4.3](https://arxiv.org/html/2505.23816v2#S4.SS3)).

![Refer to caption](https://arxiv.org/html/2505.23816v2/x2.png)

Figure 3: Vector flow of goal-space movement (blue), Llama3.3-70B, in requests to change reading difficulty but not formality. Horizontal movement is desired, but not vertical movement. Source texts in red.

![Refer to caption](https://arxiv.org/html/2505.23816v2/x3.png)

Figure 4: Median and IQR steerability, Llama3.3-70B, in correlated (darker) vs.
anti-correlated (lighter) requests for change in reading difficulty and formality. Caps denote empirical 95% CI, with outliers (∘) plotted individually. Llama3.3-70B struggles more with anti-correlated changes. + +![Image 4: Refer to caption](https://arxiv.org/html/2505.23816v2/x4.png) + +Figure 5: Median and IQR of steering error (left), miscalibration (middle), and orthogonality (right) for Llama3.1-8B across prompting strategies. Caps denote empirical 95% CI, with outliers (∘) plotted individually. More detailed prompts and removal of the negative prompt marginally improve miscalibration over the default; however, side effects remain severe. + +![Image 5: Refer to caption](https://arxiv.org/html/2505.23816v2/x5.png) + +Figure 6: Median and IQR for best-of-{4, 8, …, 128} approaches on Llama3.1-8B, with a direct + negative prompt. Caps denote empirical 95% CI, with outliers (∘) plotted individually. Increasing N improves steerability, but improvements are slow. + +### 4.1 Large language models are not steerable + +#### Even strong LLMs induce side effects. + +Neither larger nor newer models meaningfully improve steering error.² ² Some pairwise tests yield statistical significance, but effect sizes are small. Median steering error remains high, at 0.452 for the largest model (Llama3.3-70B; Figure [2](https://arxiv.org/html/2505.23816v2#S3.F2), left), far from ideal despite outperforming a random baseline (0.770; sampling random goal levels in each dimension). Miscalibration improves with model size (Figure [2](https://arxiv.org/html/2505.23816v2#S3.F2), center; e.g., Llama3.1-8B vs. 70B: 0.667 → 0.455). Some residual miscalibration is expected, since the model may not be calibrated to the magnitude of “slightly/much” in our prompts. + +Median orthogonality remains high and skewed towards 1 even as model size increases (Figure [2](https://arxiv.org/html/2505.23816v2#S3.F2), right), with Llama3.3-70B performing best at an orthogonality of 0.718. While several pairwise differences are statistically significant, models remain in a high-orthogonality regime on average. Similar trends hold in GPT, Deepseek, Qwen3, and o1/o3 models, where larger/newer models reduce miscalibration but have little effect on orthogonality (see Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1)).
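To make the relationship among these quantities concrete, the following sketch decomposes an observed goal-space movement into a calibration component along the requested direction and an orthogonal side-effect component. This is an illustrative decomposition under assumed definitions, not the paper's exact metrics (those are defined in its Eq. 4–5); all names here are ours.

```python
import math


def decompose_movement(z_src, z_star, z_hat, eps=1e-12):
    """Illustrative decomposition of goal-space movement (assumed definitions).

    z_src: goal vector of the source text (e.g., [reading difficulty, formality])
    z_star: requested target goal vector
    z_hat: measured goal vector of the model's rewrite
    Returns (steering_error, miscalibration, orthogonality).
    """
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    d_star = [t - s for s, t in zip(z_src, z_star)]  # requested movement
    d_hat = [h - s for s, h in zip(z_src, z_hat)]    # observed movement
    n_star, n_hat = norm(d_star), norm(d_hat)
    # Component of the observed movement along the requested direction.
    along = sum(a * b for a, b in zip(d_hat, d_star)) / (n_star + eps)
    # Side-effect component: movement perpendicular to the request.
    perp = math.sqrt(max(n_hat ** 2 - along ** 2, 0.0))
    # Errors are normalized by the requested/observed movement magnitude.
    steering_error = norm([h - t for h, t in zip(d_hat, d_star)]) / (n_star + eps)
    miscalibration = abs(along - n_star) / (n_star + eps)  # over/undershoot
    orthogonality = perp / (n_hat + eps)  # fraction of movement off-target
    return steering_error, miscalibration, orthogonality
```

A rewrite that lands exactly on the target scores near zero on all three; a rewrite that moves entirely along an untouched dimension has orthogonality near 1.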
Note that, even as miscalibration and orthogonality decrease, median steering error may not, due to normalization (Eq. [4](https://arxiv.org/html/2505.23816v2#S2.E4)–[5](https://arxiv.org/html/2505.23816v2#S2.E5); errors are penalized in proportion to the requested/observed movement). To further study side effects, we analyze a 2D goal subspace. + +#### Goal dimensions may be entangled. + +We investigate side effects in a 2D (reading difficulty, formality) subspace using a vector flow diagram of goal-space movement (Figure [3](https://arxiv.org/html/2505.23816v2#S4.F3), Llama3.3-70B, blue vectors). We include instructions requesting changes to reading difficulty (x-axis) but not formality (y-axis), such that vertical movement is a side effect. Figure[3](https://arxiv.org/html/2505.23816v2#S4.F3 "Figure 3 ‣ 4 Empirical Results ‣ Output post-processing.
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") shows a “current” from the lower left (informal & easy to read) to the top right, suggesting that, when asked to increase reading difficulty without direction on formality, LLMs still increase formality. + +Appendix [D.2](https://arxiv.org/html/2505.23816v2#A4.SS2) shows additional movement vectors and flows. We also conduct a preliminary study of coupling between goal dimensions, which suggests that the entanglement is LLM-induced. + +While harder-to-read texts are often more formal, they need not be under our chosen measurement functions (Flesch-Kincaid grade for reading difficulty; Heylighen-Dewaele score for formality). LLM behavior appears to reflect this correlation: when stratifying steerability probe results by whether the prompt requested correlated (e.g., make it harder to read and more formal) vs. anti-correlated (e.g., make it harder to read and less formal) changes to reading difficulty and formality, Llama3.3-70B is less steerable on anti-correlated requests than on correlated requests (steering error, 0.535 vs. 0.404; Figure [4](https://arxiv.org/html/2505.23816v2#S4.F4), diff.: 0.131, Mann-Whitney U = 77944.5), with similar results in other model families (GPT, Deepseek, Qwen3; see Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1)). Thus, side effects may harm steerability on requests that run contrary to such correlations. + +#### On coverage. + +While our probe is designed to target a uniform distribution of goals, results are similar whether or not we sample source texts uniformly in goal-space (Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1)). Thus, steerability failures are unlikely to be concentrated in overrepresented goals in our evaluation. + +#### Takeaway #1: side effects impede steerability.
+ +Despite progress in LLM reasoning and model capacity, LLMs continue to exhibit side effects. Entanglement between goal dimensions contributes to these side effects, limiting steerability for intents that contradict correlations between goal dimensions. + +| | Steering error | Miscalibration | Orthogonality | +| --- | --- | --- | --- | +| Base model (pre-RL) | 0.300 (0.150) | 0.986 (0.464) | 0.147 (0.328) | +| Best@128 (pre-RL) | 0.210 (0.168) | 0.683 (0.539) | 0.121 (0.283) | +| Miscalibration-only reward | 0.210 (0.138) | 0.542 (0.429) | 0.366 (0.395) | +| Orthogonality-only reward | 0.386 (0.248) | 1.463 (1.004) | 0.025 (0.134) | +| Full steering error | 0.119 (0.135) | 0.294 (0.391) | 0.160 (0.292) | + +Table 2: Main results for RL, with an ablation study of the reward model. Mean (std. dev.) of steerability metrics across the evaluation probe (N = 1,024: 64 held-out source texts; 16 goals each). + +Table 3: Mean (std. dev.) orthogonality for (from left to right) the pre- vs. post-RL model on correlated (top; e.g., increase both dimensions) vs. anti-correlated requests (middle; e.g., change dimensions in opposite directions). RL shrinks the gap in side effects (bottom) between correlated and anti-correlated requests, despite being supervised only via 1D instructions. + +Table 4: Mean (std. dev.) sentence-level BLEU (original vs. rewrite) by dataset, pre- & post-RL. + +### 4.2 Inference-time steering is costly + +We now study whether inference-time strategies can improve steerability. First, we evaluate whether prompt engineering can elicit responses that satisfy user goals. Second, we leverage best-of-N sampling to test whether such responses are in the support of the model’s output distribution. + +#### Prompt engineering does not solve side effects.
+ +Prompting strategies more detailed than the default (e.g., chain-of-thought-style prompts or added instructions) tend to improve miscalibration, as does removing the negative prompt (median: 0.667 → 0.444, no negative prompt + added instructions; Figure [5](https://arxiv.org/html/2505.23816v2#S4.F5), middle). Yet orthogonality remains skewed towards 1 despite improvements under some strategies (e.g., direct + negative prompts; Figure [5](https://arxiv.org/html/2505.23816v2#S4.F5), right). Thus, mitigating side effects with prompt engineering alone may be challenging. Results for all strategies are in Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1). + +#### Best-of-N sampling is a costly solution.
+ +Since side effects remain severe across prompting strategies, we investigate via best-of-N sampling whether responses that reduce side effects exist in the model’s sampling distribution. Best-of-4 with Llama3.1-8B lowers steering error (Figure [6](https://arxiv.org/html/2505.23816v2#S4.F6), left), outperforming best-of-1 across all prompting strategies and models evaluated (GPT-4.1 vs. Llama3.1-8B: 0.429 → 0.404, see Appendix [D.1](https://arxiv.org/html/2505.23816v2#A4.SS1)). Median orthogonality at best-of-4 also outperforms the top best-of-1 model (GPT-4.1 vs. Llama3.3-70B: 0.718 → 0.673, see Appendix[D.1](https://arxiv.org/html/2505.23816v2#A4.SS1 "D.1 Steerability probe results ‣ Appendix D Additional results ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing.
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs")). This improvement with N suggests that responses better aligned with goals lie within the model’s support but are rare in its sampling distribution. Best-of-N also scales poorly, lowering median steering error by at most 0.031 when doubling N (Figure [6](https://arxiv.org/html/2505.23816v2#S4.F6), left). + +#### Takeaway #2: Inference-time steerability is possible but inefficient. + +We find that prompt engineering alone may not be powerful enough to surface responses with low steering error. While best-of-N sampling demonstrates the existence of such responses, they remain rare under the base model’s output distribution. Our results motivate fine-tuning to increase the likelihood of low steering-error responses. + +### 4.3 RL yields progress towards steerable models + +Gains under best-of-N sampling suggest that low steering-error responses exist but are rare under an LLM’s sampling distribution. We hypothesize that RL can shift the output distribution towards such generations. Indeed, RL improves steerability, adopting different rewriting strategies compared to the base model, but does not eliminate side effects. + +#### The post-RL model rivals best-of-128 sampling. + +In a 2D goal-space (reading difficulty & formality), RL improves steerability in Llama3.1-8B.
We report mean and standard deviation to capture improvements in the tails (Table [2](https://arxiv.org/html/2505.23816v2#S4.T2)). Post-RL steerability rivals best-of-128 sampling in steering error (pre-RL best@128 vs. post-RL mean: 0.210 → 0.119), though orthogonality lags the base model (pre-RL vs. post-RL mean: 0.147 → 0.121). Furthermore, optimizing only miscalibration or orthogonality worsens the other (e.g., miscalibration under the full steering-error reward: 0.294; under the orthogonality-only reward: 1.463), which we conjecture may be due to underspecification: flat-reward regions could worsen overfitting (e.g., all formality levels earn equal reward when optimizing miscalibration in reading difficulty only). + +#### RL shifts the model’s rewriting strategies. + +To analyze whether post-RL steerability improvements are meaningful, we examine changes in generation patterns. First, RL mitigates copy-pasting behavior. Before fine-tuning, the base model copy-pastes the source text in 135 of 1,024 (13.2%) prompts evaluated, a trivial way to minimize orthogonality. Post-RL, the copy-paste behavior vanishes. BLEU (Papineni et al. [2002](https://arxiv.org/html/2505.23816v2#bib.bib38 "BLEU: A method for automatic evaluation of machine translation")) between rewrites and source texts also drops (Table[4](https://arxiv.org/html/2505.23816v2#S4.T4 "Table 4 ‣ Takeaway #1: side effects impede steerability. ‣ 4.1 Large language models are not steerable ‣ 4 Empirical Results ‣ Output post-processing.
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"); pre-RL vs. post-RL: 0.864 → 0.529), suggesting that the post-RL model adopts a less conservative editing strategy to satisfy user goals. Pre- vs. post-RL flow diagrams (see Appendix [D.2](https://arxiv.org/html/2505.23816v2#A4.SS2)) support this analysis. Second, RL generalizes to unseen instructions. Despite training with 1D instructions, the post-RL model better handles 2D anti-correlated instructions. The difference in mean orthogonality between correlated vs. anti-correlated requests largely vanishes, dropping from 0.114 pre-RL to 0.005 post-RL (Table [3](https://arxiv.org/html/2505.23816v2#S4.T3), right), suggesting improved independence in controlling each goal dimension.
We show violin plots summarizing other metrics in the Appendix (Figure [11](https://arxiv.org/html/2505.23816v2#A4.F11)). Analysis of an anti-correlated rewrite (see Appendix [D.3](https://arxiv.org/html/2505.23816v2#A4.SS3)) further illustrates this behavior. + +#### Takeaway #3: RL yields partial progress towards steerability. + +In a 2D goal-space, we improve the steerability of Llama3.1-8B. These improvements reflect meaningful changes in the model’s rewriting strategies, such as reduced copy-paste behavior (lower BLEU score post-RL) and improved disentanglement of goal dimensions (lower orthogonality post-RL on anti-correlated requests). Nonetheless, orthogonality can still be improved, highlighting the need for further work to eliminate side effects.
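The best-of-N baseline that post-RL training rivals can be sketched as follows. The candidate-scoring interface here is a simplification we introduce for illustration (in the paper, rewrites are scored via measured goal attributes such as Flesch-Kincaid grade and Heylighen-Dewaele formality); it is not the authors' implementation.

```python
import math


def best_of_n(candidates, z_src, z_star):
    """Select, among N sampled rewrites, the one whose measured goal
    vector lands closest to the target, normalized by the magnitude of
    the requested movement. `candidates` maps rewrite text to its
    measured goal vector (a hypothetical interface for illustration)."""
    scale = math.dist(z_src, z_star) + 1e-12  # requested movement magnitude
    score = lambda z: math.dist(z, z_star) / scale  # normalized steering error
    return min(candidates, key=lambda text: score(candidates[text]))


# Example: two sampled rewrites for a request to raise reading
# difficulty (first axis) while holding formality (second axis) fixed.
picked = best_of_n(
    {"rewrite A": (0.9, 0.1), "rewrite B": (0.5, 0.6)},
    z_src=(0.0, 0.0),
    z_star=(1.0, 0.0),
)
```

Since doubling N shaves at most about 0.03 off median steering error in the paper's experiments, this selection loop quickly becomes expensive, motivating the RL approach above.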
+ +5 Discussion & Conclusion +------------------------- + +We propose a framework for measuring steerability: whether a model can reliably follow diverse, multi-dimensional goals. Existing LLM evaluations directly leverage data from real-world interactions or Internet text, which may not be representative, or use single-dimensional metrics, which do not capture side effects in open-ended generation. Our steerability probe design mitigates these gaps by uniformly sampling goals and measuring multiple dimensions of text. Empirically, LLMs struggle with steerability due to side effects. Inference-time interventions such as prompt engineering and best-of-N sampling offer minor or costly gains; however, RL fine-tuning shows promise as a partial solution. Our work suggests that steerability may be a fundamental challenge for LLM alignment, requiring shifts in model behavior beyond inference-time tweaks. We hope that our framework provides a foundation for measuring LLM alignment with diverse sets of user goals. + +#### Limitations. + +We focus on steering along verifiable text attributes, leaving goals such as style to future work. We also evaluate LLMs only in single-turn settings; however, our framework extends readily to multi-turn settings and to generative models beyond text (e.g., multimodal LMs). Our study of interventions is non-exhaustive: we do not vary prompt formatting, and we apply RL only to an 8B model in a 2D goal-space. Larger models may have higher post-RL upside, but optimizing steerability in higher-dimensional goal-spaces could introduce new challenges. Ultimately, our framework is a principled foundation for evaluating LLM steerability that we hope complements current evaluations of LLM capabilities and improves alignment with diverse human goals. + +Acknowledgements +---------------- + +Part of this work was done during an internship at Microsoft Research.
We thank (in alphabetical order) Donald Lin, Gregory Kondas, Irina Gaynanova, Jennifer Neville, Jung Min Lee, Mahdi Kalayeh, Nathan Kallus, Siddharth Suri, Stephanie Shepard, Wanqiao Xu, Winston Chen, Zhiyi Hu, as well as members of the AI Interaction & Learning Group at Microsoft Research, the Machine Learning & Inference Research team at Netflix, and the NeurIPS 2024 Safe Generative AI Workshop for helpful conversations and feedback on this work. Special thanks to Donna Tjandra, Meera Krishnamoorthy, Michael Ito, Paco Haas, and Sarah Jabbour for their comments on drafts of this work, and to Quentin Gallouédec, the TRL developer community, and the vLLM developer community for their responsiveness on GitHub issues and engaging in helpful discussions around implementation details. + +References +---------- + +* D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. +* M. G. Azar, Z. D. Guo, B. Piot, R. Munos, M. Rowland, M. Valko, and D. Calandriello (2024) A general theoretical paradigm to understand learning from human preferences. In AISTATS, pp. 4447–4455. +* S. Bickel and T. Scheffer (2009) Discriminative learning under covariate shift. Journal of Machine Learning Research 10, pp. 2137–2155. +* BIG-Bench contributors (2023) Beyond the imitation game: quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research. ISSN 2835-8856. +* K. Chen, M. Cusumano-Towner, B. Huval, A. Petrenko, J. Hamburger, V. Koltun, and P. Krähenbühl (2025) Reinforcement learning for long-horizon interactive LLM agents. arXiv preprint arXiv:2502.01600. +* M. Chen, Z. Chu, S. Wiseman, and K. Gimpel (2022) SummScreen: a dataset for abstractive screenplay summarization. In ACL, pp. 8602–8615. +* T. Dao (2024) FlashAttention-2: faster attention with better parallelism and work partitioning. In ICLR. +* DeepSeek-AI Team (2025) DeepSeek-R1: incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948. +* Y. Deng, W. Zhao, J. Hessel, X. Ren, C. Cardie, and Y. Choi (2024) WildVis: open source visualizer for million-scale chat logs in the wild. In EMNLP, pp. 497–506. +* G. Dong, K. Lu, C. Li, T. Xia, B. Yu, C. Zhou, and J. Zhou (2025) Self-play with execution feedback: improving instruction-following capabilities of large language models. In ICLR. +* E. Durmus, A. Tamkin, J. Clark, J. Wei, J. Marcus, J. Batson, K. Handa, L. Lovitt, M. Tong, M. McCain, O. Rausch, S. Huang, S. Bowman, S. Ritchie, T. Henighan, and D. Ganguli (2024) Evaluating feature steering: a case study in mitigating social biases. [https://anthropic.com/research/evaluating-feature-steering](https://anthropic.com/research/evaluating-feature-steering) +* S. Gugger, L. Debut, T. Wolf, P. Schmid, Z. Mueller, S. Mangrulkar, M. Sun, and B. Bossan (2022) Accelerate: training and inference at scale made simple, efficient and adaptable. [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) +* Q. He, J. Zeng, Q. He, J. Liang, and Y. Xiao (2024) From complex to simple: enhancing multi-constraint complex instruction following ability of large language models. In EMNLP Findings, pp. 10864–10882. +* D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt (2021) Measuring massive multitask language understanding. In ICLR. +* F. Heylighen and J. Dewaele (1999) Formality of language: definition, measurement and behavioral determinants. Technical report, Center “Leo Apostel”, Vrije Universiteit Brussel. +* E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen (2022) LoRA: low-rank adaptation of large language models. In ICLR. +* S. Jarvis and B. J. Hashimoto (2021) How operationalizations of word types affect measures of lexical diversity. International Journal of Learner Corpus Research 7 (1), pp. 163–194.
Cited by: [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px2.p1.1 "Goal-space. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* D. Kalajdzievski (2023)A rank stabilization scaling factor for fine-tuning with LoRA. arXiv preprint arXiv:2312.03732. Cited by: [5th item](https://arxiv.org/html/2505.23816v2#A3.I1.i5.p1.1 "In Optimization hyperparameters. ‣ C.3 RL implementation details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.2](https://arxiv.org/html/2505.23816v2#S3.SS2.SSS0.Px3.p2.1 "RL fine-tuning. ‣ 3.2 Candidate steerability interventions ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* B. Kim, H. Kim, and G. Kim (2019)Abstractive summarization of reddit posts with multi-level memory networks. In NAACL-HLT, pp.2519–2531. Cited by: [3rd item](https://arxiv.org/html/2505.23816v2#A1.I1.i3.p1.1 "In A.1 Dataset preprocessing ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. 
‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px3.p1.11 "Source texts and goals. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* J. P. Kincaid, R. P. Fishburne Jr, R. L. Rogers, and B. S. Chissom (1975)Derivation of new readability formulas (Automated Readability Index, Fog Count And Flesch Reading Ease Formula) for navy enlisted personnel. Technical report Naval Technical Training Command Millington TN Research Branch. Cited by: [§2.2](https://arxiv.org/html/2505.23816v2#S2.SS2.p1.14 "2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px2.p1.1 "Goal-space. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* W. Kool, H. van Hoof, and M. Welling (2019)Buy 4 REINFORCE samples, get a baseline for free. In ICLR Deep RL Meets Structured Prediction Workshop, Cited by: [§C.2](https://arxiv.org/html/2505.23816v2#A3.SS2.SSS0.Px2.p1.5 "Margin-aware leave-one-out policy optimization (MA-LOOP). 
‣ C.2 RL objective design details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* A. Köpf, Y. Kilcher, D. von Rütte, S. Anagnostidis, Z. R. Tam, K. Stevens, A. Barhoum, D. Nguyen, O. Stanley, R. Nagyfi, S. ES, S. Suri, D. Glushkov, A. Dantuluri, A. Maguire, C. Schuhmann, H. Nguyen, and A. Mattick (2023)OpenAssistant conversations: democratizing large language model alignment. In NeurIPS, pp.47669–47681. Cited by: [§1](https://arxiv.org/html/2505.23816v2#S1.p1.1 "1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* W. Kryściński, N. Rajani, D. Agarwal, C. Xiong, and D. Radev (2022)BOOKSUM: A collection of datasets for long-form narrative summarization. In EMNLP Findings, pp.6536–6558. Cited by: [2nd item](https://arxiv.org/html/2505.23816v2#A1.I1.i2.p1.1 "In A.1 Dataset preprocessing ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px3.p1.11 "Source texts and goals. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, and I. Stoica (2023)Efficient memory management for large language model serving with PagedAttention. In SOSP, pp.611–626. Cited by: [Appendix E](https://arxiv.org/html/2505.23816v2#A5.SS0.SSS0.Px1.p1.1 "Software. ‣ Appendix E Computational details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* J. Li, C. Peris, N. Mehrabi, P. Goyal, K. Chang, A. Galstyan, R. Zemel, and R. Gupta (2024)The steerability of large language models toward data-driven personas. In NAACL-HLT, pp.7290–7305. 
Cited by: [§1](https://arxiv.org/html/2505.23816v2#S1.p1.1 "1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* I. Loshchilov and F. Hutter (2019)Decoupled weight decay regularization. In ICLR, Cited by: [2nd item](https://arxiv.org/html/2505.23816v2#A3.I1.i2.p1.1 "In Optimization hyperparameters. ‣ C.3 RL implementation details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* S. Mangrulkar, S. Gugger, L. Debut, Y. Belkada, S. Paul, and B. Bossan (2022)PEFT: state-of-the-art parameter-efficient fine-tuning methods. Note: [https://github.com/huggingface/peft](https://github.com/huggingface/peft)Cited by: [Appendix E](https://arxiv.org/html/2505.23816v2#A5.SS0.SSS0.Px1.p1.1 "Software. ‣ Appendix E Computational details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* Meta AI (2024)The Llama 3 herd of models. arXiv preprint arXiv:2407.21783. Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* E. Miehling, M. Desmond, K. N. Ramamurthy, E. M. Daly, K. R. Varshney, E. Farchi, P. Dognin, J. Rios, D. Bouneffouf, M. Liu, and P. Sattigeri (2025)Evaluating the prompt steerability of large language models. In NAACL-HLT, pp.7874–7900. Cited by: [§1](https://arxiv.org/html/2505.23816v2#S1.p1.1 "1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* N. N. Minh, A. Baker, C. Neo, A. G. Roush, A. Kirsch, and R. Shwartz-Ziv (2025)Turning up the heat: min-p sampling for creative and coherent LLM outputs. In ICLR, Cited by: [§3.2](https://arxiv.org/html/2505.23816v2#S3.SS2.SSS0.Px2.p1.4 "Best-of-𝑁 sampling. ‣ 3.2 Candidate steerability interventions ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* OpenAI (2023)GPT-4 technical report. arXiv preprint arXiv:2303.08774. Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* OpenAI (2024a)GPT-4o system card. arXiv preprint arXiv:2410.21276. Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* OpenAI (2024b)Learning to reason with LLMs. Note: [https://openai.com/index/learning-to-reason-with-llms/](https://openai.com/index/learning-to-reason-with-llms/)Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* OpenAI (2024c)OpenAI o1 system card. arXiv preprint arXiv:2412.16720. Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* OpenAI (2025)OpenAI o3-mini system card. Note: [https://cdn.openai.com/o3-mini-system-card-feb10.pdf](https://cdn.openai.com/o3-mini-system-card-feb10.pdf)Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. Lowe (2022)Training language models to follow instructions with human feedback. In NeurIPS, pp.27730–27744. 
Cited by: [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002)BLEU: A method for automatic evaluation of machine translation. In ACL, pp.311–318. Cited by: [§4.3](https://arxiv.org/html/2505.23816v2#S4.SS3.SSS0.Px2.p1.1 "RL shifts the model’s rewriting strategies. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019)PyTorch: an imperative style, high-performance deep learning library. In NeurIPS, Cited by: [Appendix E](https://arxiv.org/html/2505.23816v2#A5.SS0.SSS0.Px1.p1.1 "Software. ‣ Appendix E Computational details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* Y. Qin, K. Song, Y. Hu, W. Yao, S. Cho, X. Wang, X. Wu, F. Liu, P. Liu, and D. Yu (2024)InFoBench: Evaluating instruction following ability in large language models. In ACL Findings, pp.13025–13048. 
Cited by: [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* Qwen Team (2025)Qwen3 technical report. arXiv preprint arXiv:2505.09388. Cited by: [§3.3](https://arxiv.org/html/2505.23816v2#S3.SS3.SSS0.Px1.p1.1 "Models. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn (2023)Direct preference optimization: your language model is secretly a reward model. In NeurIPS, pp.53728–53741. Cited by: [§C.2](https://arxiv.org/html/2505.23816v2#A3.SS2.SSS0.Px2.p1.12 "Margin-aware leave-one-out policy optimization (MA-LOOP). ‣ C.2 RL objective design details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§C.2](https://arxiv.org/html/2505.23816v2#A3.SS2.SSS0.Px3.p1.7 "Identity preference optimization-based regularization. ‣ C.2 RL objective design details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020)Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21 (140), pp.1–67. Cited by: [§1](https://arxiv.org/html/2505.23816v2#S1.p1.1 "1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He (2020)ZeRO: memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp.1–16. Cited by: [§C.3](https://arxiv.org/html/2505.23816v2#A3.SS3.SSS0.Px4.p1.1 "Memory-efficiency optimizations. ‣ C.3 RL implementation details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He (2020)DeepSpeed: system optimizations enable training deep learning models with over 100 billion parameters. In KDD, pp.3505–3506. Cited by: [Appendix E](https://arxiv.org/html/2505.23816v2#A5.SS0.SSS0.Px1.p1.1 "Software. ‣ Appendix E Computational details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* N. Rimsky, N. Gabrieli, J. Schulz, M. Tong, E. Hubinger, and A. Turner (2024)Steering Llama 2 via contrastive activation addition. In ACL, pp.15504–15522. Cited by: [§2.1](https://arxiv.org/html/2505.23816v2#S2.SS1.p2.3 "2.1 Designing a steerability metric ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* G. V. Sanchez, A. Spangher, H. Fan, E. Levi, and S. Biderman (2024)Stay on topic with classifier-free guidance. In ICML, pp.43197–43234. Cited by: [§3.2](https://arxiv.org/html/2505.23816v2#S3.SS2.SSS0.Px1.p1.1 "Prompt engineering. ‣ 3.2 Candidate steerability interventions ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* T. Schnabel and J. 
Neville (2024)Symbolic prompt program search: a structure-aware approach to efficient compile-time prompt optimization. In EMNLP Findings, pp.670–686. Cited by: [Appendix E](https://arxiv.org/html/2505.23816v2#A5.SS0.SSS0.Px1.p1.1 "Software. ‣ Appendix E Computational details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz (2015)Trust region policy optimization. In ICML, pp.1889–1897. Cited by: [§C.2](https://arxiv.org/html/2505.23816v2#A3.SS2.SSS0.Px2.p1.5 "Margin-aware leave-one-out policy optimization (MA-LOOP). ‣ C.2 RL objective design details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* A. See, P. J. Liu, and C. D. Manning (2017)Get to the point: Summarization with pointer-generator networks. In ACL, pp.1073–1083. Cited by: [1st item](https://arxiv.org/html/2505.23816v2#A1.I1.i1.p1.1 "In A.1 Dataset preprocessing ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. 
‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px3.p1.11 "Source texts and goals. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* U. Shaham, E. Segal, M. Ivgi, A. Efrat, O. Yoran, A. Haviv, A. Gupta, W. Xiong, M. Geva, J. Berant, and O. Levy (2022)SCROLLS: Standardized comparison over long language sequences. In EMNLP, pp.12007–12021. Cited by: [4th item](https://arxiv.org/html/2505.23816v2#A1.I1.i4.p1.1 "In A.1 Dataset preprocessing ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"), [§3.1](https://arxiv.org/html/2505.23816v2#S3.SS1.SSS0.Px3.p1.11 "Source texts and goals. ‣ 3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). +* A. M. Turner, L. Thiergart, G. Leech, D. Udell, J. J. Vazquez, U. Mini, and M. 
MacDiarmid (2023) Steering language models with activation engineering. arXiv preprint arXiv:2308.10248.
* K. Vafa, S. Bentley, J. Kleinberg, and S. Mullainathan (2025) What's producible may not be reachable: Measuring the steerability of generative models. arXiv preprint arXiv:2503.17482.
* L. von Werra, Y. Belkada, L. Tunstall, E. Beeching, T. Thrush, N. Lambert, S. Huang, K. Rasul, and Q. Gallouédec (2020) TRL: transformer reinforcement learning. Note: [https://github.com/huggingface/trl](https://github.com/huggingface/trl).
* J. Wei, X. Wang, D. Schuurmans, M. Bosma, b. ichter, F. Xia, E. Chi, Q. V. Le, and D. Zhou (2022) Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, pp. 24824–24837.
* E. Wijmans, B. Huval, A. Hertzberg, V. Koltun, and P. Kraehenbuehl (2025) Cut your losses in large-vocabulary language models. In ICLR.
* Z. Wu, A. Arora, A. Geiger, Z. Wang, J. Huang, D. Jurafsky, C. D. Manning, and C. Potts (2025) AxBench: steering LLMs? Even simple baselines outperform sparse autoencoders. In ICML.
* Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, T. Fan, G. Liu, L. Liu, X. Liu, et al. (2025) DAPO: an open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476.
* W. Zhao, X. Ren, J. Hessel, C. Cardie, Y. Choi, and Y. Deng (2024) WildChat: 1M ChatGPT interaction logs in the wild. In ICLR.
* Q. Zhong, K. Wang, Z. Xu, J. Liu, L. Ding, B. Du, and D. Tao (2025) Achieving >97% on GSM8k: Deeply understanding the problems makes LLMs perfect reasoners. Frontiers of Computer Science 20 (1).
* J. Zhou, T. Lu, S. Mishra, S. Brahma, S. Basu, Y. Luan, D. Zhou, and L. Hou (2023) Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911.
Technical Appendix

Appendix A Steerability probe implementation details
----------------------------------------------------

### A.1 Dataset preprocessing

Each dataset is pre-processed as follows. All datasets are processed via the HuggingFace datasets library, for which we provide dataset identifiers here. We report the number of texts from each dataset in our set of seed texts (_i.e._, the set of texts from which the steerability probe subsamples source texts) after pre-processing.

* CNN/DailyMail (ccdv/cnn_dailymail, N = 2,996, License: MIT (See et al.[2017](https://arxiv.org/html/2505.23816v2#bib.bib50 "Get to the point: Summarization with pointer-generator networks"))): The CNN/DailyMail dataset is a collection of over 300,000 online news articles, from CNN (April 2007 to April 2015) and the Daily Mail (June 2010 to April 2015). We use the validation split of version 3.0.0, and extract source texts from a random subsample of 3,000 entries in the article column.
* BookSum (kmfoda/booksum, N = 2,903, License: BSD-3 (Kryściński et al.[2022](https://arxiv.org/html/2505.23816v2#bib.bib24 "BOOKSUM: A collection of datasets for long-form narrative summarization"))): The BookSum dataset contains public-domain short stories, plays, and novels from Project Gutenberg, originally split by chapter. We use the validation split, and extract source texts from a random subsample of 30 entries in the chapter column. Since BookSum contains multiple summaries of each book chapter, we de-duplicate the chapter column by filtering for summaries whose source is equal to "sparknotes" prior to sampling. Each chapter is greedily chunked into paragraphs, where paragraphs are added to a "chunk" until the chunk exceeds 30 sentences in length, as measured via nltk.sent_tokenize.
* RedditTIFU (ctr4si/reddit_tifu, N = 2,116, License: unknown (Kim et al.[2019](https://arxiv.org/html/2505.23816v2#bib.bib19 "Abstractive summarization of reddit posts with multi-level memory networks"))): RedditTIFU is a collection of approximately 120,000 social media posts from Reddit, drawn from the "subreddit" (_i.e._, a sub-forum) r/tifu. The r/tifu subreddit focuses on users recounting embarrassing personal experiences. We use the train split of the short version, and extract source texts from a random subsample of 3,000 entries in the documents column. Because these posts often lack punctuation, we detect ends of paragraphs via the delimiter "\n\n" and ensure such paragraphs end with periods to prevent artificial inflation of scores associated with sentence length (e.g., the Flesch-Kincaid score).
* SummScreenFD (tau/scrolls, N = 288, License: unknown (Shaham et al.[2022](https://arxiv.org/html/2505.23816v2#bib.bib51 "SCROLLS: Standardized comparison over long language sequences"); Chen et al.[2022](https://arxiv.org/html/2505.23816v2#bib.bib6 "SummScreen: a dataset for abstractive screenplay summarization"))): The SummScreenFD dataset is drawn from paired TV show transcripts and human-written recaps. We select all source texts in the output column of the validation split, using version summ_screen_fd. We use the version of SummScreenFD included as a subset of the SCROLLS benchmark.

All seed data are filtered to be between 50 and 2048 words, inclusive, as measured via nltk.word_tokenize. The upper bound of 2048 words is chosen to cap LLM generation costs. The lower bound of 50 is chosen since the measure of textual lexical diversity is only considered valid on texts of length 50 and above. After pre-processing, we have 8,303 seed texts from which steerability probes are constructed. N is reported post-filtering.

### A.2 Steerability probe implementation details

#### Computing goal-space mappings.
We map all 8,303 seed texts to goal-space (see Section[3.1](https://arxiv.org/html/2505.23816v2#S3.SS1 "3.1 Steerability probe construction ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs")) without normalization. We measure each goal dimension as follows:

* Reading difficulty (Flesch-Kincaid reading level): An approximation of reading grade level in the U.S. education system, given by

$$0.39\,\frac{\#\text{ of words}}{\#\text{ of sentences}}+11.8\cdot\frac{\#\text{ of syllables}}{\#\text{ of words}}-15.59\qquad(7)$$

which we compute via the textstat package.
* Text length: A measure of verbosity based on a function of the word count, as computed via nltk.word_tokenize.
* Textual diversity (measure of textual lexical diversity, MTLD): MTLD keeps a running type-token ratio (TTR; # of unique tokens / # of total tokens) and defines "chunk boundaries" whenever the TTR drops beneath a pre-defined threshold (generally 0.72). The MTLD is the average length of the resulting "chunks." MTLD is calculated via the taaled package, with pre-processing via pylats, a SpaCy-based pipeline for normalizing case, correcting misspellings, and part-of-speech tagging.
* Formality (Heylighen-Dewaele F-score): Based on the observation that formal language tends to be less context-dependent (deictic) than informal language, and that certain parts of speech are associated with deictic vs. non-deictic language, the F-score is given by

$$\%\text{deic.}\triangleq\%\text{noun}+\%\text{adj.}+\%\text{adp.}+\%\text{art.}$$
$$\%\text{non-deic.}\triangleq\%\text{pron.}+\%\text{verb}+\%\text{adv.}+\%\text{intj.}$$
$$F=\frac{\%\text{deic.}-\%\text{non-deic.}+100}{2}$$

Part-of-speech tagging was done via spacy using the en_core_web_sm model. Since spacy does not have an "article" category (instead tagging determiners), we tag "a", "an", and "the" as articles manually.³ (³Abbreviations: adj. = adjective, adp. = adposition, art. = article, pron. = pronoun, adv. = adverb, intj. = interjection. Spacy tags adpositions, a generalization of prepositions, while the original Heylighen-Dewaele formula lists prepositions. The discrepancy likely has marginal effects: most English adpositions are prepositions, with exceptions including the postpositions "ago" (_e.g._, three weeks ago) and "hence" (_e.g._, two days hence).)

While we do not claim these formulae to be authoritative measures of each dimension, these dimensions were chosen as well-established, rule-based metrics for textual analysis, aiding interpretation of the results. Note that for all goal dimensions, higher numerical values indicate higher levels of the textual aspect of interest.

For validation, we prompted Llama3.1-8B to evaluate all pairs of original texts and their rewrites under the steerability probe shown in Figure[2](https://arxiv.org/html/2505.23816v2#S3.F2 "Figure 2 ‣ RL fine-tuning. ‣ 3.2 Candidate steerability interventions ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") using the default prompting strategy.
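As a concrete reference, the two closed-form mappings above can be sketched directly from their formulas. This is an illustrative stand-in only: the paper computes reading difficulty via the textstat package and formality from spaCy part-of-speech proportions, whereas here the raw counts and percentages are passed in directly.

```python
# Simplified stand-ins for two goal-space mappings, applied to pre-computed
# counts/percentages. Illustration only; the paper uses textstat and spaCy.

def flesch_kincaid(n_words: int, n_sentences: int, n_syllables: int) -> float:
    """Flesch-Kincaid grade level (Eq. 7) from raw counts."""
    return 0.39 * n_words / n_sentences + 11.8 * n_syllables / n_words - 15.59

def f_score(pos_pct: dict) -> float:
    """Heylighen-Dewaele formality F-score from part-of-speech percentages."""
    deictic = sum(pos_pct.get(k, 0.0) for k in ("noun", "adj", "adp", "art"))
    non_deictic = sum(pos_pct.get(k, 0.0) for k in ("pron", "verb", "adv", "intj"))
    return (deictic - non_deictic + 100.0) / 2.0

# 100 words over 10 sentences with 150 syllables -> roughly a 6th-grade level.
print(flesch_kincaid(100, 10, 150))  # ~6.01
print(f_score({"noun": 25, "adj": 10, "adp": 10, "art": 5,
               "pron": 10, "verb": 20, "adv": 5, "intj": 0}))  # 57.5
```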
The LLM-as-judge prompt required the model to select whether the rewrite was higher or lower than the original on each aspect of text. We evaluated Kendall's τ between the LLM predictions and the ground-truth answers (_i.e._, the sign of the difference between the source text and the rewrite for each goal dimension), showing approximately 67.0% agreement or higher in all dimensions (Table[5](https://arxiv.org/html/2505.23816v2#A1.T5 "Table 5 ‣ Computing goal-space mappings. ‣ A.2 Steerability probe implementation details ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs")).

Table 5: Kendall's τ of an LLM-based evaluation of each text dimension compared to the goal-space mapping function used. Evaluation performed on rewrites by Llama3.1-8B (direct + neg. prompt). We also provide the pairwise agreement rate ((τ+1)/2).

#### Goal dimension normalization.

Since each dimension may be measured on a different scale, to improve comparability, we re-scale all goal dimensions to the range [0, 1], with the idea that values corresponding to zero (one) represent extremely low (high) values of each aspect.

Table 6: Values for each goal dimension corresponding to [0, 1] in normalized goal-space.

Formally, let $\alpha_{q,i}$ be the $q$th quantile of goal $i$ with respect to the seed data, and let $\tilde{z}_{i}$ and $z_{i}$ be the raw and normalized goal-space mappings for goal $i$.
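In code, this rescaling maps each raw value onto the 2.5th-97.5th percentile range of the seed data and clips the tails. A minimal pure-Python sketch follows (illustration only; the quantile-interpolation details may differ from the implementation actually used):

```python
# Quantile-based normalization sketch: map the middle 95% of a raw goal
# dimension onto [0, 1] and clip the tails. Pure-Python illustration.

def quantile(sorted_vals, q):
    """Linear-interpolation quantile of a pre-sorted list."""
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def normalize_dimension(raw_vals):
    s = sorted(raw_vals)
    lo, hi = quantile(s, 0.025), quantile(s, 0.975)
    return [min(max((v - lo) / (hi - lo), 0.0), 1.0) for v in raw_vals]

z = normalize_dimension(list(range(101)))  # toy raw scores 0..100
print(z[0], z[50], z[100])  # tails clip to 0.0 and 1.0; the median maps to 0.5
```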
We linearly rescale the middle 95% of each goal dimension to cover [0, 1] and clip values accordingly:

$$z_{i}=\mathrm{clip}\left(\frac{\tilde{z}_{i}-\alpha_{0.025,i}}{\alpha_{0.975,i}-\alpha_{0.025,i}},\,0,\,1\right).\qquad(8)$$

Thus, goal-space is $\mathcal{Z}\equiv[0,1]^{4}$. The ranges that correspond to each goal dimension in our study are reported in Table[6](https://arxiv.org/html/2505.23816v2#A1.T6 "Table 6 ‣ Goal dimension normalization. ‣ A.2 Steerability probe implementation details ‣ Appendix A Steerability probe implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs").

#### Generating instructions.

For each source text, we generate goal vectors in $\mathcal{Z}$ by adding a random offset $\delta_{i}$, uniformly sampled from [-0.7, -0.1] or [0.1, 0.7], to three randomly-selected goal dimensions $z_{i}$. To ensure that $z_{i}+\delta_{i}\in[0,1]$, if applicable, we clip the minimum or maximum value of $\delta_{i}$ and sample uniformly from the resultant range. For example, if $z_{i}=0.8$, we would sample uniformly from [-0.7, -0.1] ∪ [0.1, 0.2]. This reflects the assumption that requests must be _feasible_: requests to make extreme texts even more extreme are not meaningful, since a very informal message cannot be made even more informal.

#### Generating sampling weights.
To generate sampling weights, we use probabilistic-classifier-based density ratio estimation (Bickel and Scheffer [2009](https://arxiv.org/html/2505.23816v2#bib.bib3 "Discriminative learning under covariate shift")). Formally, we draw a vector from a uniform distribution $\mathcal{U}(0,1)^{4}$ for each seed text. Let $C=1$ be the class generating the distribution of goal-space mappings $z$ on the seed data, and let $C=0$ be the class generating a random uniform distribution of goal-space mappings. Fitting a logistic regression to predict $P(C=1\mid z)$, the sampling weight for each example with goal-space mapping $z$ is given by $\frac{P(z\mid C=0)}{P(z\mid C=1)}$, which, via Bayes' rule, is equal to $\frac{1-P(C=1\mid z)}{P(C=1\mid z)}$ since $P(C=0)=P(C=1)$ by construction. We repurpose these sampling weights for RL.

#### Steerability probe settings.

Steerability probes are sampled according to the sampling weights, with goal-space mappings and instruction vectors created according to the above processes. The steerability probe used for our main results features 64 source texts, with 32 goal vectors per source text (N = 2,048) in a 4D goal-space (Section[4.1](https://arxiv.org/html/2505.23816v2#S4.SS1 "4.1 Large language models are not steerable ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") and[4.2](https://arxiv.org/html/2505.23816v2#S4.SS2 "4.2 Inference-time steering is costly ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs")). For RL fine-tuning, we sample 384 source texts and 8 goal vectors per source text (N = 3,072). Evaluation of models post-RL (and of pre-RL models/best-of-N comparisons) is conducted on a probe with 64 source texts never seen during RL fine-tuning, with 16 goal vectors each (N = 1,024).

Appendix B Text-rewriting task implementation details
-----------------------------------------------------

### B.1 Text rewriting task setup

Our main steerability probe consists of requests for models to rewrite texts; _i.e._, to "move" a source text with goal-space mapping $\mathbf{z}_{0}$ to some ideal point $\mathbf{z}^{*}$. For this section, let $\delta_{i}$ be the requested change in an arbitrary goal dimension.

#### Response post-processing.

All responses are post-processed via regex-based searches to filter extraneous text that precedes the rewrite (e.g., tokens from DeepSeek-family models, or phrases such as "Sure, here is your rewritten text…"). For chain-of-thought prompting only, the block under "## Rewritten text" is explicitly extracted via regex. The following paragraphs show examples of all prompting strategies evaluated.

#### Rewrite-filtering.

A trivial way to game the vanilla steerability probe is to output text unrelated to the original. For example, in response to a request to produce a much harder-to-read text, the model could simply produce a completely unrelated text with a high Flesch-Kincaid score. Models may also refuse to write certain texts, hallucinate that the source text is not provided, or truncate their responses.
+ +To prevent such cases from polluting our evaluation, we passed post-processed outputs into an LLM-as-judge setup, asking the model to evaluate whether the rewrite vs. original are variations of the same text as judged by the prompt in Appendix[B.2](https://arxiv.org/html/2505.23816v2#A2.SS2 "B.2 Prompt samples ‣ Appendix B Text-rewriting task implementation details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). We randomize whether the rewrite or original appears first. The LLM-as-judge response is parsed via regex to extract a “Yes/No” answer and a rationale. In the case of a parse failure, the answer is recorded as “None.” + +All “No/None” decisions and a sample of 16 random “Yes” decisions are flagged for review by the authors. The human is provided with an interactive command-line dialog showing the original text, the rewritten text, the LLM decision (“Yes/No”), and the LLM’s extracted rationale. The human is given the final say to approve or overrule the LLM’s rationale. While evaluating groundedness is inherently subjective, the human review is intended to target false positive decisions by the LLM judge, rather than to judge the correctness of the response with respect to the rewriting prompt. + +#### Re-prompting. + +For fair comparison across prompting strategies and models, we never re-prompt the model when it returns a valid text response. In other words, we only re-prompt on API networking failures (4XX/5XX HTTP responses). 
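This retry policy can be sketched as below; `NetworkError` and `complete_with_retries` are hypothetical names, standing in for whatever transport-level errors the serving API actually raises:

```python
import time

class NetworkError(Exception):
    """Stand-in for a transport-level failure (e.g., a 4XX/5XX HTTP response)."""

def complete_with_retries(request_fn, max_attempts=5, backoff_s=0.0):
    """Retry only on NetworkError; any valid text response is accepted as-is
    and never re-prompted. Hypothetical helper for illustration."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except NetworkError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s)

# Toy flaky endpoint: fails twice with HTTP-style errors, then returns text.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise NetworkError("503 Service Unavailable")
    return "rewritten text"

print(complete_with_retries(flaky))  # "rewritten text", after 3 attempts
```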
This approach gives all models the same number of attempts to produce a rewritten text that aligns with the generated user goal.

### B.2 Prompt samples

#### Direct prompt.

This prompt is a simple, template-based prompt where the model is asked to increase or decrease various aspects of text, with optional modifiers: "slightly" for requests where $|\delta_{i}|<0.2$, and "much" for requests where $|\delta_{i}|>0.5$. An example prompt is shown here.

Note that the "slots" for each aspect of text are randomly shuffled (_i.e._, text length, diversity, and formality do not necessarily appear in the order given above).

#### Negative prompting.

A negative prompt explicitly instructs the model not to change any other aspects of the text, even if leaving them unchanged would otherwise be undesirable. All negative prompt messages are injected immediately before the source text. An example is provided here with the negative prompt underlined:

Negative-prompt variations are only well-defined for prompt strategies that explicitly name aspects of text to change (_i.e._, direct, direct + instructions, and chain of thought), since a directive to leave "other" aspects unmodified is only meaningful if the prompt mentions specific attributes of text to change. We omit examples of the other prompting strategies with negative prompts since their construction is identical.

#### Chain-of-thought.

Here, the model is asked to explain its edits concretely prior to outputting the rewritten text.

#### Instruction-only.

In lieu of naming specific goal dimensions, the model is provided with specific instructions, anchored to the source text, for satisfying the user's goal. The instructions are extracted via an LLM from chain-of-thought responses.⁴ (⁴Responses are extracted from chain-of-thought rewrites by Llama-3.1 (8B) without negative prompting. Empirically, proposed edits did not qualitatively differ whether or not a negative prompt was used.)
Instructions had varying specificity, with some referencing parts of the text, and others containing high-level strategies. + +#### Direct + instruction. + +This prompt combines the direct prompt and instruction-only prompt. + +#### Underspecified. + +The underspecified prompt contains a vague normative phrase as an instruction. One example follows: + +Note that no direction is provided beyond “higher-quality.” In addition to “make it higher-quality,” we draw variations of such vague normative phrases randomly from a preset list. + +### B.3 LLM-as-judge details + +We use Llama-3.1 (8B) as a model to extract relevant information from various texts. We provide prompt examples used for such sub-tasks. + +#### Evaluating goal dimension validity. + +Here, we show our prompt for evaluating the agreement between the LLM and each goal-space mapping function: + +This prompt is tested on rewrites produced by Llama3.1-8B on the direct + negative prompt. Note that we randomize the order in which the rewritten vs. original text appears. + +#### Evaluating groundedness. + +Groundedness proceeds in two stages. The first stage is an LLM-as-judge evaluation, using the template below: + +The ordering of “Version A” and “Version B” is randomized. All “no” decisions and a random subsample of 16 “yes” decisions are manually-reviewed by a human familiar with the experimental setup of the text-rewriting task. The human is given a yes/no option to reject (_i.e._, flip) the judge response, or approve the judge response. We report counts of valid responses for all steerability probes in Tables[10](https://arxiv.org/html/2505.23816v2#A4.T10 "Table 10 ‣ D.4 Groundedness evaluation ‣ Appendix D Additional results ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") through [11](https://arxiv.org/html/2505.23816v2#A4.T11 "Table 11 ‣ D.4 Groundedness evaluation ‣ Appendix D Additional results ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs"). Note that we randomize the order in which the rewrite vs. original text appears. + +#### Instruction extraction from chain-of-thought + +Due to inconsistencies in the format of edits outputted by the chain-of-thought prompt, we leverage an LLM as an extraction subroutine. + +Appendix C LLM inference & fine-tuning details +---------------------------------------------- + +### C.1 Inference details + +Models are either hosted via vLLM locally using the OpenAI API-compatible server, or accessed via the OpenAI API. Table[7](https://arxiv.org/html/2505.23816v2#A3.T7 "Table 7 ‣ C.1 Inference details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. 
‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") lists the exact model versions used for our experiments. The "model endpoint name" column is equivalent to the model key for the OpenAI API, or the HuggingFace model name for vLLM-hosted models. For HuggingFace models, we provide the first seven characters of the commit hash. Local models were hosted on 1-4 A6000 GPUs via tensor parallelism.

| Model name | Model endpoint name | Revision |
| --- | --- | --- |
| GPT-3.5 (turbo) | gpt-3.5-turbo-0125 | N/A |
| GPT-4 (turbo) | gpt-4-turbo-2024-04-09 | N/A |
| GPT-4o | gpt-4o-2024-08-06 | N/A |
| GPT-4.1 | gpt-4.1-2025-04-14 | N/A |
| o1-mini | o1-mini-2024-09-12 | N/A |
| o3-mini | o3-mini-2025-01-31 | N/A |
| Llama3-8B | meta-llama/Meta-Llama-3-8B-Instruct | 5f0b02c |
| Llama3-70B | meta-llama/Meta-Llama-3-70B-Instruct | 28bd9fa |
| Llama3.1-8B | meta-llama/Llama-3.1-8B-Instruct | 0e9e39f |
| Llama3.1-70B | meta-llama/Llama-3.1-70B-Instruct | 1605565 |
| Llama3.3-70B | meta-llama/Llama-3.3-70B-Instruct | 6f6073b |
| Deepseek-8B | deepseek-ai/DeepSeek-R1-Distill-Llama-8B | ebf7e8d |
| Deepseek-70B | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | 008f3f3 |
| Qwen-4B | Qwen/Qwen3-4B | 82d62bb |
| Qwen-32B | Qwen/Qwen3-32B | 30b8421 |
| Qwen-30B-A3B | Qwen/Qwen3-30B-A3B | 4c44647 |

Table 7: Model endpoint and version information for all models evaluated. For HuggingFace models, the first seven characters of the model commit hash on the HuggingFace model hub are provided.

### C.2 RL objective design details

Here, we derive various components of our RL approach. We do not claim these as new results, but present them as an aid to understanding our methods.

#### Main objective.

Assume all goal dimensions are independent and that all $\mathbf{z}^{*}$ are reachable from all $\mathbf{z}_{0}$. The first assumption implies that miscalibration and orthogonality never trade off, so optimizing $\ell$ = steering error is well-principled.
The second assumption ensures that $P(\mathbf{z}_{0},\mathbf{z}^{*})>0$, so steerability is achievable. Then, recall that we optimize a sample-weighted objective:

$$\min.\;\underset{(\mathbf{z}_{0},\mathbf{z}^{*})\sim\mathcal{D}}{\mathbb{E}}\;\underset{\mathbf{\hat{z}}\sim f(\cdot\mid\mathbf{z}_{0},\mathbf{z}^{*})}{\mathbb{E}}\left[\hat{w}(\mathbf{z}_{0},\mathbf{z}^{*})\cdot\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}\rVert_{2}^{2}\right]\qquad(9)$$

To estimate $\hat{w}(\cdot)$, we make the simplifying assumption that $\mathbf{z}^{*}\perp\!\!\!\perp\mathbf{z}_{0}$. Since we can sample user goal vectors $\mathbf{z}^{*}$ on demand, we simply do so uniformly. Then we need only ensure a uniform sample over source texts with respect to goal-space. Thus, we write

$$w(\mathbf{z}_{0},\mathbf{z}^{*})=\frac{f(\mathbf{z}_{0},\mathbf{z}^{*}\mid\mathcal{U})}{f(\mathbf{z}_{0},\mathbf{z}^{*}\mid\mathcal{D})}=\frac{f(\mathbf{z}_{0}\mid\mathcal{U})\,f(\mathbf{z}^{*}\mid\mathcal{U})}{f(\mathbf{z}_{0}\mid\mathcal{D})\,f(\mathbf{z}^{*}\mid\mathcal{U})}=\frac{f(\mathbf{z}_{0}\mid\mathcal{U})}{f(\mathbf{z}_{0}\mid\mathcal{D})}\qquad(10)$$

where $\mathcal{U}$ is a uniform distribution and $f$ is the respective probability density function. The first equality follows by definition. The second equality follows from our assumption that $\mathbf{z}_{0}$ and $\mathbf{z}^{*}$ are independent; we have substituted $f(\mathbf{z}^{*}\mid\mathcal{D})$ with $f(\mathbf{z}^{*}\mid\mathcal{U})$ in the denominator since, given a known goal-space $\mathcal{Z}$, we can generate $\mathbf{z}^{*}$ arbitrarily. The final equality is a simple cancellation, so Eq.[10](https://arxiv.org/html/2505.23816v2#A3.E10 "In Main objective. ‣ C.2 RL objective design details ‣ Appendix C LLM inference & fine-tuning details ‣ Acknowledgements ‣ Limitations. ‣ 5 Discussion & Conclusion ‣ Takeaway #3: RL yields partial progress towards steerability. ‣ 4.3 RL yields progress towards steerable models ‣ 4 Empirical Results ‣ Output post-processing. ‣ 3.3 LLM inference setup ‣ 3 Experimental Setup ‣ 2.2 Measuring steerability in practice ‣ 2 A Steerability Measurement Framework ‣ 1 Introduction ‣ A Course Correction in Steerability Evaluation: Revealing Miscalibration and Side Effects in LLMs") can be written as $w(\mathbf{z}_{0})$, which we estimate via classifier-based density ratio estimation (Bickel and Scheffer [2009](https://arxiv.org/html/2505.23816v2#bib.bib3 "Discriminative learning under covariate shift")).

#### Margin-aware leave-one-out policy optimization (MA-LOOP).

Our RL algorithm is a variant of leave-one-out proximal policy optimization (LOOP) (Chen et al.[2025](https://arxiv.org/html/2505.23816v2#bib.bib5 "Reinforcement learning for long-horizon interactive LLM agents")). For reward $r$, LOOP optimizes a PPO-style clipped objective in which each of the $|\mathcal{G}|$ rollouts in a group is baselined against the mean reward of the others:

$$\mathcal{J}_{\text{LOOP}}(\pi;\mathcal{D},r)=\mathbb{E}_{\mathcal{D}}\left[\frac{1}{\sum_{i=1}^{|\mathcal{G}|}|y_{i}|}\sum_{i=1}^{|\mathcal{G}|}\sum_{t=1}^{|y_{i}|}\min\left(\rho_{i,t}A_{i},\;\mathrm{clip}(\rho_{i,t},1-\epsilon,1+\epsilon)A_{i}\right)\right],\quad A_{i}=r(x_{i},y_{i})-\frac{1}{|\mathcal{G}|-1}\sum_{j\neq i}r(x_{j},y_{j}),$$

where $\rho_{i,t}$ is the token-level importance ratio against the behavior policy. The margin-aware component further assumes that the true probability $p^{*}_{\mathbf{z}^{*}}(y\succ\mu)$ that a response $y$ is preferred to one from a reference policy $\mu$, given intent $\mathbf{z}^{*}$, is ordered by steering error:

$$p^{*}_{\mathbf{z}^{*}}(y\succ\mu)>p^{*}_{\mathbf{z}^{*}}(y^{\prime}\succ\mu)\iff\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}\rVert_{2}<\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}^{\prime}\rVert_{2},\qquad(17)$$
$$p^{*}_{\mathbf{z}^{*}}(y\succ\mu)=p^{*}_{\mathbf{z}^{*}}(y^{\prime}\succ\mu)\iff\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}\rVert_{2}=\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}^{\prime}\rVert_{2};\qquad(18)$$

_i.e._, in expectation over all prompts expressing intent $\mathbf{z}^{*}$, the negative steering error and the probability that the response is preferred induce identical preferences.
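The classifier-based density-ratio estimation used for these weights can be sketched in one dimension. This is a self-contained toy illustration with a hand-rolled logistic regression, not the implementation used in the paper:

```python
import math
import random

# Sketch of classifier-based density-ratio estimation: fit P(C=1|z) between
# observed goal-space values (C=1) and uniform draws (C=0); the importance
# weight is then (1 - P(C=1|z)) / P(C=1|z). 1-D toy illustration.

random.seed(0)
observed = [0.2 + 0.05 * random.gauss(0, 1) for _ in range(400)]  # C = 1
uniform = [random.random() for _ in range(400)]                   # C = 0
zs = observed + uniform
labels = [1] * len(observed) + [0] * len(uniform)

# Plain logistic regression, fit by full-batch gradient descent.
w, b = 0.0, 0.0
for _ in range(1500):
    gw = gb = 0.0
    for z, c in zip(zs, labels):
        p = 1.0 / (1.0 + math.exp(-(w * z + b)))
        gw += (p - c) * z
        gb += p - c
    w -= 0.5 * gw / len(zs)
    b -= 0.5 * gb / len(zs)

def weight(z):
    p = 1.0 / (1.0 + math.exp(-(w * z + b)))
    return (1.0 - p) / p

# Goal-space regions that are rare in the observed data get up-weighted.
print(weight(0.9), weight(0.2))
```

Since the observed values cluster near 0.2, the fitted classifier assigns them high $P(C=1\mid z)$, so the weight at $z=0.9$ comes out larger than at $z=0.2$.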
Note that, since $\mathbf{z}^{*}$ is a function of $x$ (_i.e._, intents are expressed through prompts) and $\mathbf{\hat{z}}$ is a function of $y$ ($\mathbf{\hat{z}}\triangleq\mathbf{g}(y)$), this assumption is well-posed; _i.e._, there exists a non-decreasing $\Psi:[0,1]\to\mathbb{R}$ such that

$$\Psi\!\left(p^{*}_{\mathbf{z}^{*}}(y\succ\mu)\right)-\Psi\!\left(p^{*}_{\mathbf{z}^{*}}(y^{\prime}\succ\mu)\right)=\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}^{\prime}\rVert_{2}-\lVert\mathbf{z}^{*}-\mathbf{\hat{z}}\rVert_{2}.\qquad(19)$$

Substitution into Eq. 12 of Azar et al. yields an objective with an identical form to our regularizer.

### C.3 RL implementation details

#### Text pre-processing.

We apply the default conversational template to all texts; _i.e._, all prompts are represented as dictionaries:

    {"role": "user", "content": [prompt]}

#### Exploration policy.

To generate rollouts prior to policy updates, we sample 64 ($|\mathcal{G}|$) completions on-policy with temperature 1.0, min-p sampling ($p=0.2$), and a frequency penalty of 0.1, before applying rejection sampling.

#### Optimization hyperparameters.

We use the following hyperparameters to train our model:

* Learning rate: $2.5\times 10^{-7}$, with linear warmup for the first 20% of training
* Optimizer: AdamW (Loshchilov and Hutter [2019](https://arxiv.org/html/2505.23816v2#bib.bib27 "Decoupled weight decay regularization"))
* Batch size: 4, via gradient accumulation (total: $4\cdot K=64$ completions per gradient update)
* Gradient clipping norm: 1.0
* LoRA (Hu et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib16 "LoRA: low-rank adaptation of large language models")), rank 256, $\alpha$: 512, with rank-stabilization (Kalajdzievski [2023](https://arxiv.org/html/2505.23816v2#bib.bib18 "A rank stabilization scaling factor for fine-tuning with LoRA")) and no dropout
* Rollouts per prompt ($|\mathcal{G}|$): 64
* Rejection sample size ($K$): 16
* KL divergence regularization parameter ($\beta$): 0.01
* IPO-regularization strength ($\lambda_{\tau}$): 1
* IPO-regularization scale ($\tau$): 1
* Model context length: 4,096

#### Memory-efficiency optimizations.

To maximize memory savings, we leveraged CUDA kernels for cut-cross-entropy loss when calculating sequence log-probabilities (Wijmans et al. [2025](https://arxiv.org/html/2505.23816v2#bib.bib56 "Cut your losses in large-vocabulary language models")) and DeepSpeed ZeRO Stage 2 (Rajbhandari et al. [2020](https://arxiv.org/html/2505.23816v2#bib.bib44 "ZeRO: memory optimizations toward training trillion parameter models")) with optimizer offloading to CPU, as well as the default trl gradient checkpointing. All training was done in bfloat16.

#### Speed-efficiency optimizations.

To improve generation speed, we employed tensor parallelism (size: 2) via vLLM during training. Goal-space mappings were implemented as a local server with 16 workers for asynchronous, parallel computation.

#### Software acknowledgement.

Our method is built with a customized version of the GRPO implementation in trl 0.16.0.

Appendix D Additional results
-----------------------------

### D.1 Steerability probe results

![Image 6: Refer to caption](https://arxiv.org/html/2505.23816v2/x6.png)

Figure 7: Median and IQR steerability metrics for (from left to right) Llama3 (blue), GPT (yellow), o1/o3-mini (orange), Deepseek (red), and Qwen3 (gray) family models.

#### Steering error, all models.

We show results for all models evaluated on our steerability probe. Figure [7](https://arxiv.org/html/2505.23816v2#A4.F7 "Figure 7") shows that, within model families, some improvement in steering error is visible, and miscalibration improves as well. However, for all models, side effects are severe: orthogonality remains skewed toward one.

![Image 7: Refer to caption](https://arxiv.org/html/2505.23816v2/x7.png)

Figure 8: Median and IQR steerability metrics for (from left to right) Llama3.3-70B (blue), GPT-4.1 (yellow), Deepseek-70B (red), and Qwen3-32B (gray), stratified by requests for correlated (darker) vs. anti-correlated (lighter) changes to reading difficulty and formality. Across all model families, LLMs struggle more to satisfy anti-correlated requests than correlated requests.

#### Correlated vs. anti-correlated requests, (reading difficulty, formality) subspace.

We show a subgroup analysis of prompts requesting correlated (same direction) vs. anti-correlated (opposite direction) changes in reading difficulty and formality in Figure [8](https://arxiv.org/html/2505.23816v2#A4.F8 "Figure 8"), for the largest model we evaluated in each class (Llama3.3-70B, GPT-4.1, Deepseek-70B, and Qwen3-32B).

![Image 8: Refer to caption](https://arxiv.org/html/2505.23816v2/x8.png)

Figure 9: Median and IQR steerability metrics for various prompting strategies, Llama3.1-8B (from left to right): underspecified, direct, direct with negative prompt, chain-of-thought, chain-of-thought with negative prompt, instruction-only, direct with instructions, and direct with instructions and negative prompt.

#### All prompts.

Here, we show results for all prompting strategies evaluated on our steerability probe. Fig. [9](https://arxiv.org/html/2505.23816v2#A4.F9 "Figure 9") shows that prompt engineering has marginal effects on steerability metrics. Providing more detail than a direct prompt (e.g., via instructions, including self-generated ones via chain-of-thought, or specifying a 1-10 scale), as well as negative prompting, yields minor improvements to steerability metrics.

![Image 9: Refer to caption](https://arxiv.org/html/2505.23816v2/x9.png)

Figure 10: Median and IQR steerability metrics for a probe sampled with (left) and without (right) uniform reweighting. Steerability metrics remain similar across probes, but reweighting remains a principled approach to ensure that differences in steerability metrics are not dominated by common/easy goals.

#### The impact of reweighting.

In Figure [10](https://arxiv.org/html/2505.23816v2#A4.F10 "Figure 10"), we find that sampling source texts to target a uniform goal-space (left; darker) vs. naive sampling (right; lighter) yields similar steerability metrics. The no-reweighting probe has slightly worse steerability metrics, suggesting that steering is actually slightly harder on more frequently encountered goals, though the difference is minor.
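The uniform reweighting above can be sketched via classifier-based density-ratio estimation, echoing the estimator used for $w(\mathbf{z}_{0})$ in Appendix C: train a probabilistic classifier to distinguish observed goals from uniform draws over the goal space, and use its odds as importance weights. The sketch below is illustrative only; the function name and the use of scikit-learn are our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uniform_reweighting(z_observed: np.ndarray, seed: int = 0) -> np.ndarray:
    """Estimate w(z) proportional to p_uniform(z) / p_observed(z).

    z_observed: (n, d) goal-space coordinates of candidate source texts.
    Returns per-example weights, normalized to mean 1.
    """
    rng = np.random.default_rng(seed)
    n, d = z_observed.shape
    lo, hi = z_observed.min(axis=0), z_observed.max(axis=0)
    z_uniform = rng.uniform(lo, hi, size=(n, d))  # draws from the target (uniform) density

    X = np.vstack([z_observed, z_uniform])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # label 1 = uniform sample
    clf = LogisticRegression().fit(X, y)

    p = clf.predict_proba(z_observed)[:, 1]
    w = p / (1.0 - p)  # classifier odds estimate the density ratio
    return w / w.mean()
```

Examples in over-represented regions of goal space receive weights below 1, so sampling (or loss weighting) proportional to $w$ targets a uniform goal distribution.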
Ultimately, reweighting goals to target a uniform goal-space remains principled, and ensures that differences in steerability metrics across models or other interventions are not driven solely by improvements on frequent or easy goals.

![Image 10: Refer to caption](https://arxiv.org/html/2505.23816v2/x10.png)

Figure 11: Mean and standard deviation of steering error (left), miscalibration (middle), and orthogonality (right) for pre- vs. post-RL models on correlated (e.g., increase both dimensions) vs. anti-correlated requests (e.g., change dimensions in opposite directions). RL shrinks the gap between correlated and anti-correlated requests, despite being supervised only via 1D instructions.

#### Full RL results.

In Figure [11](https://arxiv.org/html/2505.23816v2#A4.F11 "Figure 11"), we show full violin plots for steering error, miscalibration, and orthogonality for the pre-RL (best-of-1 and best-of-128) and post-RL models across correlated vs. anti-correlated requests. Post-RL, the model is better able to control these dimensions independently, as suggested by a smaller gap in steerability metrics between anti-correlated and correlated requests.
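For intuition only, the per-response quantities underlying these plots can be paraphrased in goal space as follows. This is our own simplified reading of the metrics (steering error as distance to the target; orthogonality as the off-direction share of goal-space movement), not the paper's reference implementation:

```python
import numpy as np

def steering_error(z_target: np.ndarray, z_hat: np.ndarray) -> float:
    """Distance between the requested goal and the goal inferred from the output."""
    return float(np.linalg.norm(z_target - z_hat))

def orthogonality(z_source: np.ndarray, z_target: np.ndarray, z_hat: np.ndarray) -> float:
    """Share of goal-space movement orthogonal to the requested direction (0 = no side effects)."""
    move = z_hat - z_source           # observed goal-space movement
    want = z_target - z_source        # requested goal-space movement
    u = want / np.linalg.norm(want)   # requested direction (unit vector)
    off = move - (move @ u) * u       # component off the requested direction
    return float(np.linalg.norm(off) / (np.linalg.norm(move) + 1e-12))
```

E.g., a request to raise reading difficulty only (source $(0,0)$, target $(1,0)$) answered with movement $(1,1)$ has steering error $1$ and orthogonality $1/\sqrt{2}\approx 0.71$: the formality change is pure side effect.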

![Image 11: Refer to caption](https://arxiv.org/html/2505.23816v2/x11.png)

Figure 12: Spearman’s $\rho$ between goal dimensions observed in the source texts (left), observed in the residuals of a mixed-effects model explaining goal-space movement given instruction goal-space mappings, with source texts as groups (center), and the difference between the two correlations (right).

### D.2 Flow diagrams

Here, we display flow diagrams for all pairs of (requested goal, non-requested goal) for a subset of all models and prompting strategies tested. For transparency, we show both the observed goal-space movement and the interpolated flows. Results are organized as outlined below and appear at the end of the Appendix.

#### Flow diagrams, various models.

We show flow diagrams for GPT-4.1 (Figures [17](https://arxiv.org/html/2505.23816v2#A5.F17 "Figure 17") through [28](https://arxiv.org/html/2505.23816v2#A5.F28 "Figure 28"), inclusive). Note that all such experiments use a direct + negative prompting strategy.

![Image 12: Refer to caption](https://arxiv.org/html/2505.23816v2/x12.png)

Figure 13: Llama3.1-8B flow diagram on the steerability-tuning evaluation set prior to RL, on instructions where reading difficulty is specified but formality is not.

![Image 13: Refer to caption](https://arxiv.org/html/2505.23816v2/x13.png)

Figure 14: Llama3.1-8B flow diagram on the steerability-tuning evaluation set prior to RL, on instructions where formality is specified but reading difficulty is not.

![Image 14: Refer to caption](https://arxiv.org/html/2505.23816v2/x14.png)

Figure 15: Llama3.1-8B flow diagram on the steerability-tuning evaluation set after RL, on instructions where reading difficulty is specified but formality is not.

![Image 15: Refer to caption](https://arxiv.org/html/2505.23816v2/x15.png)

Figure 16: Llama3.1-8B flow diagram on the steerability-tuning evaluation set after RL, on instructions where formality is specified but reading difficulty is not.

#### Pre- vs. post-RL flow diagrams.

We also show flow diagrams for pre- vs. post-RL:

* Pre-RL, reading difficulty specified, formality not specified: Figure [13](https://arxiv.org/html/2505.23816v2#A4.F13 "Figure 13")
* Pre-RL, formality specified, reading difficulty not specified: Figure [14](https://arxiv.org/html/2505.23816v2#A4.F14 "Figure 14")
* Post-RL, reading difficulty specified, formality not specified: Figure [15](https://arxiv.org/html/2505.23816v2#A4.F15 "Figure 15")
* Post-RL, formality specified, reading difficulty not specified: Figure [16](https://arxiv.org/html/2505.23816v2#A4.F16 "Figure 16")

We observe that, before RL, the model exhibits no movement on many source texts, indicative of the copy-pasting behavior noted in Section [4.3](https://arxiv.org/html/2505.23816v2#S4.SS3 "4.3 RL yields progress towards steerable models"), which artificially lowers orthogonality. After RL, the model consistently exhibits movement on many source texts with orthogonality similar to the base model. This result suggests that RL allows the model to independently discover a strategy for mitigating side effects. While side effects visually improve with respect to the base model (less vertical movement), non-trivial vertical movement is still visible post-RL, especially on instructions to change formality but not reading difficulty (Figure [16](https://arxiv.org/html/2505.23816v2#A4.F16 "Figure 16")). Thus, room for improvement remains.

#### Are “currents” LLM-induced, or a function of input data statistics?

As a small-scale investigation of whether the flows and correlations between goal dimensions observed in our results are driven by the input distribution of goal dimensions or by the LLM itself, we compare correlations in goal dimensions in our steerability probe versus in LLM responses. To do so, we fit a mixed-effects model for each pair of goal dimensions:

$$\textrm{output goal}\sim\textrm{source goal}+\textrm{instruction goals}+(1\mid\textrm{source text})+\varepsilon,\tag{20}$$

_i.e._, we regress observed goal-space movement on desired goal-space movement, with a per-source-text random intercept. We use all rewrites in the best-of-128 experiment on Llama3.1-8B to maximize the number of observations of output goals, instructions, and source texts in our fit. We then construct correlation matrices between (1) goal dimensions as observed in the source text, and (2) residuals in predicting goal dimensions per our model. The latter isolates movement in goal-space unaccounted for by the underlying source text and the instruction given to the model.

Figure [12](https://arxiv.org/html/2505.23816v2#A4.F12 "Figure 12") (left) indicates that multiple goal dimensions are positively correlated in the source text. Many such correlations flip in the model residuals (center), suggesting that the LLM itself shifts correlations between goal dimensions beyond what can be explained by the source text and underlying instruction. This result suggests that steerability failures may not only be inherited from pre-training data, but may also be LLM-induced.

### D.3 Examples of rewritten texts pre- and post-RL

Here, we show two examples of rewritten texts pre- and post-RL, sampled from our steerability probe, that demonstrate different rewriting “techniques” learned via RL. For ease of visualization, we truncate the texts and verify that, post-truncation, all excerpts correspond to the same part of the written text (e.g., the same events are described). To aid interpretation, we report steerability metrics as well as relevant goal-dimension metrics (normalized and unnormalized Flesch-Kincaid and Heylighen-Dewaele scores). We provide commentary on all rewrites as well, though we caution that the analysis is specific to the texts visualized and should not be taken as a general statement about all rewrites.

#### Copy-pasting behavior.

Table [8](https://arxiv.org/html/2505.23816v2#A4.T8 "Table 8") shows an example of a source text that is copy-pasted by the base model, but not by the model post-RL. For ease of qualitative analysis, the example shown has the lowest BLEU score between the source text and the rewritten text post-RL, conditioned on being copy-pasted by the original model.

TL;DR: the post-RL model uses adverbs and interjections to decrease formality. Though it avoids copy-pasting, the model introduces filler phrases that inflate average sentence length, causing a side effect in reading difficulty. Post-RL, the model correctly follows the instruction to make the text more informal, as shown by a drop of 17.3 Heylighen-Dewaele points. Such a value indicates that the part-of-speech distribution has shifted in favor of non-deictic text by 34.6%, as evidenced by the increased prevalence of adverbial rewrites such as “thrust into the limelight” $\to$ “totally getting roasted”, and “after accusing” $\to$ “all like super angry at this guy.” In particular, the proportions of adverbs and interjections increase by 9.9% (4.3% $\to$ 14.2%) and 7.3% (0.1% $\to$ 7.4%), respectively, supporting the analysis. Note that more colloquial language is not explicitly rewarded by Heylighen-Dewaele.

However, despite unlearning copy-pasting behavior, the post-RL rewrite introduces higher reading difficulty, as the Flesch-Kincaid score goes from 9.9 to 12.6. While the increase in reading difficulty may appear counterintuitive given the increased usage of colloquialisms in the rewritten text, Flesch-Kincaid monotonically increases in the number of words per sentence and the average syllables per word.
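Concretely, the standard Flesch-Kincaid grade-level formula is $0.39\cdot(\text{words per sentence})+11.8\cdot(\text{syllables per word})-15.59$, so the contribution of each factor to the reported shift can be checked directly:

```python
def fk_grade(words_per_sentence: float, syllables_per_word: float) -> float:
    """Standard Flesch-Kincaid grade-level formula."""
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Going from 23.0 to 33.0 words/sentence adds 0.39 * 10 = 3.9 grade levels;
# dropping syllables/word by 0.1 removes 11.8 * 0.1 = 1.18 (~1.2) grade levels.
words_effect = 0.39 * (33.0 - 23.0)    # +3.9
syllables_effect = 11.8 * (-0.1)       # -1.18
net = words_effect + syllables_effect  # ~ +2.7, matching the 9.9 -> 12.6 shift
```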
Indeed, the average number of words per sentence in the post-RL rewrite spikes from 23.0 to 33.0, accounting for an increase of 3.9 Flesch-Kincaid grade levels, while the average syllables per word decreases by 0.1, accounting for a decrease of 1.2 grade levels, yielding the observed net increase of 2.7. The increase in words per sentence is likely introduced by the additional adverbial phrases/filler words used in the more informal rewrite. While arguably less brittle than copy-pasting, such filler-phrase usage may also be detrimental to steerability in the reading difficulty-formality subspace.

Table 8: Example showing copy-pasting behavior in the pre-RL base model un-learned after RL, sourced from the CNN/DailyMail validation split. […] added to indicate mid-sentence truncation. FK: Flesch-Kincaid (reading difficulty). HD: Heylighen-Dewaele (formality). Note that metrics are for the entire source text and may not match the excerpt provided. Instruction (immediately before source text): “Please rewrite the following, but make it much more informal. You MUST not change anything else about the other parts of the text, even if it makes the rewritten text sound unnatural or otherwise awkward. Respond with only the rewritten text and do not explain your response.”

#### Goal disentanglement techniques used post-RL.

Table [9](https://arxiv.org/html/2505.23816v2#A4.T9 "Table 9") shows a rewritten text exhibiting disentangled adjustment of reading difficulty and formality on an anti-correlated prompt (increase formality; decrease reading difficulty). To facilitate qualitative interpretation, we choose the prompt with the largest improvement in unnormalized orthogonality on the 2D evaluation probe.

TL;DR: Both models increase formality, but the pre-RL model uses longer words, causing a side effect in reading difficulty. The post-RL model also uses longer words, but mitigates the side effect by using shorter sentences. The instruction provided to the model requires increasing formality but decreasing reading difficulty. While the pre-RL model increases both goal dimensions, the post-RL model is “directionally correct” in both dimensions. The pre-RL rewrite significantly increases formality, eschewing pronouns (e.g., “i [sic] know x from high school” $\to$ “The individual in question, who shall be referred to as X, is an acquaintance from high school”, or “she is going back to toronto” $\to$ “X intends to return to Toronto”). The former phrase also replaces the verb “know” with an article+noun (“an acquaintance”). Such edits result in increases to the Heylighen-Dewaele score, and indeed, the proportion of pronouns in the base model’s rewrite decreases by 12.2% (20.9% $\to$ 8.7%), while the proportion of nouns increases by 13.2% (8.8% $\to$ 22.0%). However, the base model also relies heavily on introducing more polysyllabic words (average syllables per word: 1.1 in the source to 1.7 in the pre-RL rewrite, accounting for an increase of $11.8\times 0.6\approx 7.1$ grade levels), such that the Flesch-Kincaid score increases extraneously.

The post-RL model avoids such extraneous increases to the Flesch-Kincaid score.
While the post-RL rewrite still increases the number of syllables per word (1.1 $\to$ 1.4), it compensates by decreasing sentence length (original: 22.5 words/sentence; post-RL rewrite: 12.5 words/sentence), which the base model fails to do (pre-RL rewrite: 22.0 words/sentence). The model uses similar techniques to increase formality, using nouns to describe events (e.g., “well this happened” $\to$ “The events of last night”). Interestingly, instead of eschewing pronouns, the model sometimes adds more detail to the rewrite, introducing adjectives and verbs that increase Heylighen-Dewaele (e.g., “she went to study to toronto after” $\to$ “She relocated to Toronto for further education after completing her _high school education_;” italics added for emphasis).

This example demonstrates that, on some source texts, a model trained via RL is able to disentangle reading difficulty and formality. However, we emphasize that the high variance in orthogonality means that some undesired correlation in model behaviors remains.

Table 9: Example showing the model demonstrating the ability to disentangle goals post-RL, sourced from Reddit TIFU. FK: Flesch-Kincaid (reading difficulty). HD: Heylighen-Dewaele (formality). Note that metrics are for the entire source text and may not match the excerpt provided. Instruction (immediately before source text): “Please rewrite the following, but make it more formal, and easier to read. You MUST not change anything else about the other parts of the text, even if it makes the rewritten text sound unnatural or otherwise awkward. Respond with only the rewritten text and do not explain your response.”

### D.4 Groundedness evaluation

In the tables below, we report the number of valid rewrites in each probe, as judged by the pipeline in Appendix [B.3](https://arxiv.org/html/2505.23816v2#A2.SS3 "B.3 LLM-as-judge details"). Results are organized as follows:

* Table [10](https://arxiv.org/html/2505.23816v2#A4.T10 "Table 10"): Groundedness counts for the sensitivity analysis of prompting strategies on Llama3.1-8B.
* Table [11](https://arxiv.org/html/2505.23816v2#A4.T11 "Table 11"): Groundedness counts for best-of-$N$ responses on Llama3.1-8B. Note that groundedness is only evaluated on the “best” response (_i.e._, the generation with the lowest steering error).
* Table [12](https://arxiv.org/html/2505.23816v2#A4.T12 "Table 12"): Groundedness counts for all models evaluated (direct + negative prompt).

Note that the number of grounded responses (column: “Count (%)”) is not necessarily equal to 2,048 (total probe size) minus the number of overruled responses. The total number of grounded responses can be lower if any “Yes” responses are also overruled during human review.

| Prompt strategy | NP? | Count (%) | # (%) LLM flagged | # (%) overruled |
| --- | --- | --- | --- | --- |
| Direct | No | 2047 (99.95%) | 6 (0.29%) | 5 (0.24%) |
| Direct | Yes | 2042 (99.71%) | 6 (0.29%) | 2 (0.10%) |
| Underspecified | No | 2048 (100.0%) | 0 (0.00%) | 0 (0.00%) |
| 1-10 scale | No | 2043 (99.76%) | 13 (0.63%) | 10 (0.49%) |
| 1-10 scale | Yes | 2044 (99.80%) | 15 (0.74%) | 11 (0.54%) |
| Inst.-based | No | 2044 (99.80%) | 5 (0.24%) | 1 (0.05%) |
| Direct + inst. | No | 2047 (99.95%) | 5 (0.24%) | 4 (0.20%) |
| Direct + inst. | Yes | 2048 (100.0%) | 4 (0.20%) | 4 (0.20%) |
| CoT | No | 2048 (100.0%) | 2 (0.10%) | 2 (0.10%) |
| CoT | Yes | 2048 (100.0%) | 2 (0.10%) | 2 (0.10%) |

Table 10: Counts of grounded rewrites in the main steerability probe for Llama3.1-8B under different prompting strategies, with the number of rewrites flagged as un-grounded by the LLM and the number of LLM-flagged rewrites overruled. NP: negative prompt. Inst.: instruction(s).

Table 11: Counts of grounded rewrites in the main steerability probe under different best-of-$N$ strategies. Note that only the lowest steering-error response for each was evaluated.

| Model | Count (%) | # (%) LLM flagged | # (%) overruled |
| --- | --- | --- | --- |
| Llama3-8B | 2048 (100.0%) | 8 (0.39%) | 8 (0.39%) |
| Llama3-70B | 2048 (100.0%) | 8 (0.39%) | 8 (0.39%) |
| Llama3.1-8B | 2042 (99.71%) | 6 (0.29%) | 2 (0.10%) |
| Llama3.1-70B | 2047 (99.95%) | 6 (0.29%) | 7 (0.34%) |
| Llama3.3-70B | 2047 (99.95%) | 19 (0.93%) | 18 (0.88%) |
| GPT-3.5 turbo | 2045 (99.86%) | 41 (2.00%) | 40 (1.95%) |
| GPT-4 turbo | 2048 (100.0%) | 0 (0.00%) | 0 (0.00%) |
| GPT-4o | 2048 (100.0%) | 0 (0.00%) | 0 (0.00%) |
| GPT-4.1 | 2048 (100.0%) | 0 (0.00%) | 0 (0.00%) |
| o1-mini | 2026 (98.93%) | 23 (1.12%) | 1 (0.05%) |
| o3-mini | 2024 (98.83%) | 23 (1.12%) | 4 (0.20%) |
| Deepseek-8B | 2047 (99.95%) | 6 (0.29%) | 5 (0.24%) |
| Deepseek-70B | 2046 (99.90%) | 4 (0.20%) | 4 (0.20%) |
| Qwen3-4B (no thinking) | 2046 (99.90%) | 5 (0.24%) | 3 (0.15%) |
| Qwen3-4B (+thinking) | 2047 (99.95%) | 4 (0.20%) | 1 (0.05%) |
| Qwen3-32B (no thinking) | 2045 (99.86%) | 2 (0.10%) | 4 (0.20%) |
| Qwen3-32B (+thinking) | 2045 (99.86%) | 3 (0.15%) | 3 (0.15%) |
| Qwen3-30B-A3B (no thinking) | 2047 (99.95%) | 5 (0.24%) | 0 (0.00%) |
| Qwen3-30B-A3B (+thinking) | 2046 (99.90%) | 7 (0.34%) | 5 (0.24%) |

Table 12: Counts of grounded rewrites in the main steerability probe under different models.

### D.5 Examples of real-world user requests

Here, we show a non-exhaustive collection of conversation IDs of real-world user requests in the WildChat dataset (Zhao et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib59 "WildChat: 1m chatGPT interaction logs in the wild")) related to our choice of goal dimensions. Conversation IDs were found using the WildVis interactive search tool (Deng et al. [2024](https://arxiv.org/html/2505.23816v2#bib.bib9 "WildVis: open source visualizer for million-scale chat logs in the wild")) via a keyword-based search ([https://wildvisualizer.com/](https://wildvisualizer.com/)), filtering to English-language conversations. Retrieved IDs were then manually reviewed to verify that intents to modify text in a certain manner were present in the conversation.
+
+Note that requests to produce text with a specific value along a goal-dimension without a source text are excluded (e.g., “write a story about a pool table tournament at a grade 1 reading level”, ID: 1013074). However, requests that remain “active” across multiple turns are included (e.g., “Please rewrite all the following paragraphs that I send to a 9th grade reading level with engaging language that doesn’t change the meaning,” ID: 26308591fd814f779cd8511e3e449d61), since we merely aim to showcase examples where users desire specific rewrites to text without anchoring to a prompt format. Feedback given to the model expressing an intent (e.g., “write less formal [sic]” in response to a first draft of a text, ID: 2090aca2b7bb4ef8b069e1d43943b007) is also counted.
+
+Note: Linked examples include those flagged as toxic by WildVis.
+
+#### Reading difficulty.
+
+The following conversation IDs contain intents to modify the reading level.
+
+* Keywords/keyphrases: reading level, reading difficulty, advanced
+* IDs: [1339815](https://wildvisualizer.com/conversation/wildchat/1339815), [1368597](https://wildvisualizer.com/conversation/wildchat/1368597), [1abecad7158d426282e35c0c91106206](https://wildvisualizer.com/conversation/lmsyschat/1abecad7158d426282e35c0c91106206), [271c9d0b08b749bbbf67527aa98c08c7](https://wildvisualizer.com/conversation/lmsyschat/271c9d0b08b749bbbf67527aa98c08c7), [1262863](https://wildvisualizer.com/conversation/wildchat/1262863), [1089442](https://wildvisualizer.com/conversation/wildchat/1089442), [f075b8061bb6449390c3368207af745b](https://wildvisualizer.com/conversation/lmsyschat/f075b8061bb6449390c3368207af745b), [314190da3eff40a2838773249c9167d8](https://wildvisualizer.com/conversation/lmsyschat/314190da3eff40a2838773249c9167d8), [1373275](https://wildvisualizer.com/conversation/wildchat/1373275), [1368816](https://wildvisualizer.com/conversation/wildchat/1368816), [1f9202e58d3a451998a230c268c292a1](https://wildvisualizer.com/conversation/lmsyschat/1f9202e58d3a451998a230c268c292a1), [1947279](https://wildvisualizer.com/conversation/wildchat/1947279), [350789](https://wildvisualizer.com/conversation/wildchat/350789)
+
+#### Formality.
+
+The following conversation IDs contain intents to modify text formality.
+
+* Keywords/keyphrases: formal, formality, informal
+* IDs: [09b203dc594e4ee7a0fc79dad9efa69d](https://wildvisualizer.com/conversation/lmsyschat/09b203dc594e4ee7a0fc79dad9efa69d), [e0f245efe54943aaa9889299a8599cc3](https://wildvisualizer.com/conversation/lmsyschat/e0f245efe54943aaa9889299a8599cc3), [600c946054b447de9969925ae27bc09a](https://wildvisualizer.com/conversation/lmsyschat/600c946054b447de9969925ae27bc09a), [cc7dfeaa931e4c3c9f21bec5c22132e6](https://wildvisualizer.com/conversation/lmsyschat/cc7dfeaa931e4c3c9f21bec5c22132e6), [37a428638c134d128a67f81e1949156b](https://wildvisualizer.com/conversation/lmsyschat/37a428638c134d128a67f81e1949156b), [9f71f443b9764eb094173bbc39705a70](https://wildvisualizer.com/conversation/lmsyschat/9f71f443b9764eb094173bbc39705a70), [62a2597fc186451c968582b8a0af6a3f](https://wildvisualizer.com/conversation/lmsyschat/62a2597fc186451c968582b8a0af6a3f), [231912e90cbe4fa28afe282099602139](https://wildvisualizer.com/conversation/lmsyschat/231912e90cbe4fa28afe282099602139), [3bb584091ce64b6b84906792fc75397d](https://wildvisualizer.com/conversation/lmsyschat/3bb584091ce64b6b84906792fc75397d), [2090aca2b7bb4ef8b069e1d43943b007](https://wildvisualizer.com/conversation/lmsyschat/2090aca2b7bb4ef8b069e1d43943b007), [44f9d6df33084db2b9a6278ce2407140](https://wildvisualizer.com/conversation/lmsyschat/44f9d6df33084db2b9a6278ce2407140)
+
+#### Textual diversity.
+
+The following conversation IDs contain intents to modify text diversity. (Other keywords for which we were unable to find relevant conversation IDs include: diversity, variety, diverse vocabulary, variety of words. Our search was not exhaustive due to the high false-positive rate for “diversity” and “variety.”)
+
+* Keywords/keyphrases: repetitive
+
+#### Text length.
+
+The following conversation IDs contain intents to modify text length.
+
+* Keywords/keyphrases: longer, shorter, concise, verbose
+* IDs: [6e38bdb4034c4bc0a6b63645e2f49166](https://wildvisualizer.com/conversation/lmsyschat/6e38bdb4034c4bc0a6b63645e2f49166), [fcf6bb4ea8a91b552a1a38db3c32a2ee33ef72a619156](https://wildvisualizer.com/conversation/lmsyschat/fcf6bb4ea8a91b552a1a38db3c32a2ee33ef72a619156), [24766f0ee8a149ef962cc1e8c41f2bf0](https://wildvisualizer.com/conversation/lmsyschat/24766f0ee8a149ef962cc1e8c41f2bf0), [74cd6c3ffb8d46db87e7b5a8c3b87a0f](https://wildvisualizer.com/conversation/lmsyschat/74cd6c3ffb8d46db87e7b5a8c3b87a0f),
[432ee46108db4f3aa6a6cb5e36e5ac9f](https://wildvisualizer.com/conversation/lmsyschat/432ee46108db4f3aa6a6cb5e36e5ac9f), [4e6723ddb6784b40805f1120ce04c5ac](https://wildvisualizer.com/conversation/lmsyschat/4e6723ddb6784b40805f1120ce04c5ac), [b52aa0765d1f40e9a276d93feb4403b3](https://wildvisualizer.com/conversation/lmsyschat/b52aa0765d1f40e9a276d93feb4403b3), [ec055ca5800640158be7639bdb9b073d](https://wildvisualizer.com/conversation/lmsyschat/ec055ca5800640158be7639bdb9b073d), [06f78ce84e8045faaf5270c561150db3](https://wildvisualizer.com/conversation/lmsyschat/06f78ce84e8045faaf5270c561150db3), [8676fcded9fb4be6b1398cc2eeeec995](https://wildvisualizer.com/conversation/lmsyschat/8676fcded9fb4be6b1398cc2eeeec995), [061284abd9dc409e8beb90dc59e88849](https://wildvisualizer.com/conversation/lmsyschat/061284abd9dc409e8beb90dc59e88849), [3bef1c236ecb49b4ae69f72be6947ce3](https://wildvisualizer.com/conversation/lmsyschat/3bef1c236ecb49b4ae69f72be6947ce3), [d5821704834b4866808abf6d642d1b16](https://wildvisualizer.com/conversation/lmsyschat/d5821704834b4866808abf6d642d1b16), [2667173](https://wildvisualizer.com/conversation/wildchat/2667173), [175693](https://wildvisualizer.com/conversation/wildchat/175693), [366339](https://wildvisualizer.com/conversation/wildchat/366339), [70f1f211c9f44096a635b08ac8f6e311](https://wildvisualizer.com/conversation/lmsyschat/70f1f211c9f44096a635b08ac8f6e311)
+
+#### Other intents.
+
+The following conversation IDs contain assorted intents, which we categorize. Note that some IDs appear twice since they request changes to multiple aspects of text.
+
+* Keywords/keyphrases: rewrite, make it, change the, make this, not enough, no make it
+* Underspecified: [54324834ac124a2fa04e267e057976ff](https://wildvisualizer.com/conversation/lmsyschat/54324834ac124a2fa04e267e057976ff) (“change the format more”), [390c70b49ae0416e850cbe3c92617c08](https://wildvisualizer.com/conversation/lmsyschat/390c70b49ae0416e850cbe3c92617c08) (“more awesome”), [390c70b49ae0416e850cbe3c92617c08](https://wildvisualizer.com/conversation/lmsyschat/390c70b49ae0416e850cbe3c92617c08) (“make X better”), [75fb787148674c81a66549464ed6d724](https://wildvisualizer.com/conversation/lmsyschat/75fb787148674c81a66549464ed6d724), [1144654](https://wildvisualizer.com/conversation/wildchat/1144654), [1814613](https://wildvisualizer.com/conversation/wildchat/1814613), [331820](https://wildvisualizer.com/conversation/wildchat/331820), [434828](https://wildvisualizer.com/conversation/wildchat/434828) (“rewrite/again”), [3e0c5523d09a47a68e261ddaa04263c9](https://wildvisualizer.com/conversation/lmsyschat/3e0c5523d09a47a68e261ddaa04263c9) (“improve”), [1947139](https://wildvisualizer.com/conversation/wildchat/1947139), [2033487](https://wildvisualizer.com/conversation/wildchat/2033487), [1925167](https://wildvisualizer.com/conversation/wildchat/1925167), [1990811](https://wildvisualizer.com/conversation/wildchat/1990811) (“edit”)
+
+Appendix E Computational details
+--------------------------------
+
+#### Software.
+We used a custom fork of trl 0.16.0 (License: Apache 2.0; von Werra et al. [2020](https://arxiv.org/html/2505.23816v2#bib.bib54 "TRL: transformer reinforcement learning")) for model training, along with vLLM 0.8.4 (License: Apache 2.0; Kwon et al. [2023](https://arxiv.org/html/2505.23816v2#bib.bib25 "Efficient memory management for large language model serving with PagedAttention")) for fast model rollouts, deepspeed (License: Apache 2.0; Rasley et al. [2020](https://arxiv.org/html/2505.23816v2#bib.bib45 "DeepSpeed: system optimizations enable training deep learning models with over 100 billion parameters")) and cut-cross-entropy (License: Apple; Wijmans et al. [2025](https://arxiv.org/html/2505.23816v2#bib.bib56 "Cut your losses in large-vocabulary language models")) for memory-efficient training, accelerate (License: Apache 2.0; Gugger et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib12 "Accelerate: training and inference at scale made simple, efficient and adaptable")) for multi-GPU training, peft (License: Apache 2.0; Mangrulkar et al. [2022](https://arxiv.org/html/2505.23816v2#bib.bib28 "PEFT: state-of-the-art parameter-efficient fine-tuning methods")) for LoRA, and FlashAttention-2 (License: BSD-3; Dao [2024](https://arxiv.org/html/2505.23816v2#bib.bib7 "FlashAttention-2: faster attention with better parallelism and work partitioning")). The PyTorch version is 2.6.0 with CUDA 12.4 (License: BSD-style; Paszke et al. [2019](https://arxiv.org/html/2505.23816v2#bib.bib39 "PyTorch: an imperative style, high-performance deep learning library")). During inference, LLM API calls were made via SAMMO (License: MIT; Schnabel and Neville [2024](https://arxiv.org/html/2505.23816v2#bib.bib48 "Symbolic prompt program search: a structure-aware approach to efficient compile-time prompt optimization")).
Various text-processing packages were used to compute goal dimensions, namely nltk (License: MIT), spacy (License: MIT), textstat (License: MIT), taaled (License: CC BY-NC-SA 4.0), and pylats (License: CC BY-NC-SA 4.0), which were hosted locally as a server via Uvicorn (License: BSD-3) and FastAPI (License: MIT) with Pydantic-based type validation (License: MIT) during training. Fastsafetensors (License: Apache 2.0) was used to load LoRA adapters from trained models via vLLM. Scikit-learn (License: BSD-3) was used to implement classifier-based density ratio estimation for the purpose of computing sampling weights.
+
+#### Hardware.
+
+All experiments were run on 4 GPUs (RTX A6000, 48GB VRAM) on an 8-GPU Ubuntu 22.04.5 machine with 256 CPUs (processor type: AMD EPYC 7763 64-Core).
+
+#### Estimated compute.
+
+Main training runs took six GPU-days (≈ 1.5 days × 4 GPUs), while inference took between 30 GPU-minutes per model (smallest models, e.g., Llama3.1-8B) and 4 GPU-hours per model (approx. 1 hr × 4 GPUs). Best-of-N approaches took up to 8 GPU-hours (N = 128); we approximate the total compute time for such experiments as 16 GPU-hours. Experiments reported in the paper constitute a total of approximately 25 GPU-days (inference: approx. 27.5 h; training: approx. 24 GPU-days). Additional preliminary training and inference experiments for debugging and exploration required approximately 50 additional GPU-days, with the vast majority of the additional compute devoted to model training.
+
+![Image 16: Refer to caption](https://arxiv.org/html/2505.23816v2/x16.png)
+
+Figure 17: Llama3.3-70B flow diagram, (reading difficulty, formality) subspace.
+
+![Image 17: Refer to caption](https://arxiv.org/html/2505.23816v2/x17.png)
+
+Figure 18: Llama3.3-70B flow diagram, (reading difficulty, textual diversity) subspace.
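The classifier-based density ratio estimation mentioned under Software (Appendix E) follows a standard recipe: fit a probabilistic classifier to distinguish samples from the target distribution p from samples from the source distribution q, then convert the classifier's odds into an estimate of p(x)/q(x). The sketch below is a minimal illustration of this general technique on toy Gaussian data, not the paper's implementation; the distributions and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 1-D data: q is the source (denominator) distribution, p the target (numerator).
x_q = rng.normal(0.0, 1.0, size=(2000, 1))
x_p = rng.normal(0.5, 1.0, size=(2000, 1))

# Label target samples 1 and source samples 0, then fit a probabilistic classifier.
X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
clf = LogisticRegression().fit(X, y)

def density_ratio(x):
    """Estimate p(x)/q(x) from classifier odds, correcting for class sizes."""
    proba = clf.predict_proba(x)          # columns ordered as classes [0, 1]
    odds = proba[:, 1] / proba[:, 0]      # P(target | x) / P(source | x)
    return odds * (len(x_q) / len(x_p))   # rescale by n_q / n_p

# Points nearer the target mean should receive larger sampling weights.
w = density_ratio(np.array([[-2.0], [0.0], [2.0]]))
```

In a sampling-weight application, each source example would simply be weighted (or resampled) proportionally to its estimated ratio `w`.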
+
+![Image 18: Refer to caption](https://arxiv.org/html/2505.23816v2/x18.png)
+
+Figure 19: Llama3.3-70B flow diagram, (reading difficulty, text length) subspace.
+
+![Image 19: Refer to caption](https://arxiv.org/html/2505.23816v2/x19.png)
+
+Figure 20: Llama3.3-70B flow diagram, (formality, reading difficulty) subspace.
+
+![Image 20: Refer to caption](https://arxiv.org/html/2505.23816v2/x20.png)
+
+Figure 21: Llama3.3-70B flow diagram, (formality, textual diversity) subspace.
+
+![Image 21: Refer to caption](https://arxiv.org/html/2505.23816v2/x21.png)
+
+Figure 22: Llama3.3-70B flow diagram, (formality, text length) subspace.
+
+![Image 22: Refer to caption](https://arxiv.org/html/2505.23816v2/x22.png)
+
+Figure 23: Llama3.3-70B flow diagram, (textual diversity, reading difficulty) subspace.
+
+![Image 23: Refer to caption](https://arxiv.org/html/2505.23816v2/x23.png)
+
+Figure 24: Llama3.3-70B flow diagram, (textual diversity, formality) subspace.
+
+![Image 24: Refer to caption](https://arxiv.org/html/2505.23816v2/x24.png)
+
+Figure 25: Llama3.3-70B flow diagram, (textual diversity, text length) subspace.
+
+![Image 25: Refer to caption](https://arxiv.org/html/2505.23816v2/x25.png)
+
+Figure 26: Llama3.3-70B flow diagram, (text length, reading difficulty) subspace.
+
+![Image 26: Refer to caption](https://arxiv.org/html/2505.23816v2/x26.png)
+
+Figure 27: Llama3.3-70B flow diagram, (text length, formality) subspace.
+
+![Image 27: Refer to caption](https://arxiv.org/html/2505.23816v2/x27.png)
+
+Figure 28: Llama3.3-70B flow diagram, (text length, textual diversity) subspace.