Title: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs

URL Source: https://arxiv.org/html/2411.07122

Ruben Härle 1,2, Felix Friedrich 1,2,3, Manuel Brack 1,4, Björn Deiseroth 1,2,3,5, Patrick Schramowski 1,2,3,4, Kristian Kersting 1,2,3,4

1 Computer Science Department, TU Darmstadt, 2 Lab1141, 3 Hessian.AI,
4 German Research Center for Artificial Intelligence (DFKI), 5 Aleph Alpha @ IPAI

###### Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text, but their output may not be aligned with user intent and may even contain harmful content. This paper presents a novel approach to detect and steer concepts, such as toxicity, before generation. We introduce the Sparse Conditioned Autoencoder (Scar), a single trained module that extends the otherwise untouched LLM. Scar ensures full steerability, towards and away from concepts (e.g., toxic content), without compromising the quality of the model's text generation on standard evaluation benchmarks. We demonstrate the effective application of our approach through a variety of concepts, including toxicity, safety, and writing style alignment. As such, this work establishes a robust framework for controlling LLM generations, ensuring their ethical and safe deployment in real-world applications. Code is available at [https://github.com/ml-research/SCAR](https://github.com/ml-research/SCAR).

1 Introduction
--------------

Large Language Models (LLMs) have become central to numerous natural language processing (NLP) tasks due to their ability to generate coherent and contextually relevant text [zhao_survey_2023, chang_survey_2024, wei_emergent_2022]. However, deploying them in real-world applications presents distinct challenges [kasneci_chatgpt_2023, solaiman2024evaluatingsocialimpactgenerative, Friedrich2022RevisionTI]. LLMs largely behave as opaque systems, limiting the understanding and interpretability of their output. As such, they are prone to generating toxic, biased, or otherwise harmful content. Anticipating and controlling the generation of such texts remains a challenge, despite the potentially serious consequences.

Recent studies have systematically demonstrated the prevalence of bias and toxicity in LLMs [bommasani2021opportunities, weidinger2021ethical, liang2023holistic]. These works have led to the creation of evaluation datasets [gehman2020realtoxicityprompts, tedeschi2024alert] and tools to identify toxic content [noauthor_perspective_nodate, inan2023llama, helff2024llavaguard]. The dominant technique to mitigate the generation of unwanted text is fine-tuning on dedicated datasets [ouyang24training, rafailov_direct_2024]. Although these approaches have shown promise in mitigating toxicity, they can still be circumvented [wei_jailbroken_2023], are computationally expensive, and often do not generalize to unseen use cases. In addition, these methods encode static guardrails into the model and offer no flexibility or steerability. More flexible techniques have been proposed in recent work [turner_activation_2024, dathathri_plug_2020, pei_preadd_2023], but they suffer from other limitations. They often require backward [dathathri_plug_2020] or multiple forward passes [pei_preadd_2023], severely impacting latency and computational requirements at deployment. A further shortcoming of all of these methods is their inherent inability to detect toxic content.

To remedy these issues, we propose Sparse Conditioned Autoencoders (Scar). We build on sparse autoencoders (SAEs), which have shown promising results in producing inspectable and steerable representations of LLM activations [gao_scaling_2024, cunningham_sparse_2023, templeton_scaling_2024]. However, SAEs guarantee neither that a desired feature, such as toxicity, is included in the latent space, nor that it is disentangled. Furthermore, SAEs still require manual labor or additional models to identify semantic features in the first place [rajamanoharan_improving_2024, bricken_towards_2023, rajamanoharan_jumping_2024]. Scar closes this gap by introducing a latent conditioning mechanism that ensures the isolation of desired features in defined latent dimensions.

Specifically, we make the following contributions: 1) we formally define Scar and introduce a novel conditional loss function; 2) we empirically demonstrate Scar's effectiveness and efficiency in producing inspectable representations to detect concepts; 3) we provide empirical results for Scar's ability to steer the generation of toxic content with no measurable effect on overall model performance.

2 Scar
------

Figure 1: Scar overview. (left) The training procedure (red) of Scar, illustrating the reconstruction ($\mathcal{L}_r$) and condition ($\mathcal{L}_c$) optimization. Our latent conditioning (orange) ensures an isolated feature representation by aligning it with ground-truth labels. (right) During inference (blue), the Feed Forward connection (purple) is dropped and replaced with the SAE. $h_0$ can now be used for detection, and scaling it by a factor $\alpha$ enables model steering. Otherwise, the transformer and its parameters remain untouched.

In this section, we propose Scar (Sparse Conditioned Autoencoders). We start by describing the architecture and the conditioning method, followed by concept detection and steering. We display an overview of Scar in Fig. 1.

Architecture. As shown in Fig. 1, Scar inserts an SAE that operates on the activations from the Feed Forward module of a single transformer block. There are two phases to consider. First, during training, the SAE learns to reconstruct the activations while all transformer weights remain frozen. Second, during inference, the SAE reconstructions are passed back to the residual connection of the transformer, while the original Feed Forward signal is dismissed.

More formally, Scar comprises an SAE with an up- and downscaling layer, along with a sparse activation, as follows:
$$\text{SAE}(\mathbf{x}) = D(\sigma(E(\mathbf{x}))) \quad \text{with} \tag{1}$$

$$E(\mathbf{x}) = \mathbf{W}_{\text{enc}}\mathbf{x} + \mathbf{b}_{\text{enc}} = \mathbf{h} \quad \text{and} \quad D(\mathbf{f}) = \mathbf{W}_{\text{dec}}\mathbf{f} + \mathbf{b}_{\text{dec}} = \bar{\mathbf{x}}, \quad \text{and} \tag{2}$$

$$\sigma(\mathbf{h}) = \text{ReLU}(\text{TopK}(\mathbf{h})) = \mathbf{f}. \tag{3}$$

The SAE's output $\bar{\mathbf{x}}$ is the reconstruction of the Feed Forward's output $\mathbf{x}$ for a given token in the respective transformer layer. The vectors $\mathbf{h}$ and $\mathbf{f}$ are up-projected representations of the token. To promote feature sparsity and expressiveness, we apply a TopK activation, followed by ReLU [gao_scaling_2024].

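To make Eqs. 1-3 concrete, the encoder, the TopK+ReLU sparsity, and the decoder can be sketched in a few lines of NumPy. This is a minimal illustration only; the function and variable names, and the use of NumPy instead of a deep-learning framework, are our own assumptions, not the authors' implementation.

```python
import numpy as np

def topk_relu(h, k):
    """sigma(h) = ReLU(TopK(h)): keep the k largest pre-activations,
    zero out the rest, then clip negatives (Eq. 3)."""
    f = np.zeros_like(h)
    idx = np.argsort(h)[-k:]          # indices of the k largest entries
    f[idx] = np.maximum(h[idx], 0.0)  # ReLU on the surviving entries
    return f

def sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """SAE(x) = D(sigma(E(x))) as in Eqs. 1 and 2."""
    h = W_enc @ x + b_enc      # encoder E: up-projection to the latent space
    f = topk_relu(h, k)        # sparse latent feature vector
    x_bar = W_dec @ f + b_dec  # decoder D: reconstruction of x
    return x_bar, h, f
```

With the dimensions used later in the experiments, `W_enc` would map 4096-dimensional activations to 24576 latents with $k = 2048$.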
Conditioning the SAE. Before introducing the condition loss, we describe the primary training objective of Scar, which is to reconstruct the input activations $\mathbf{x}$. The reconstruction error of the SAE, $\mathcal{L}_r$, is calculated using the normalized mean-squared error

$$\mathcal{L}_r = \mathcal{L}_{\text{Reconstruct}} = \frac{(\bar{\mathbf{x}} - \mathbf{x})^2}{\mathbf{x}^2}, \tag{4}$$

with $\bar{\mathbf{x}}$ being the SAE reconstruction of $\mathbf{x}$, as previously described. In particular, the normalization scales the loss term to a range that facilitates the integration of the following conditioning loss, $\mathcal{L}_c$.

Next, we address the conditioning. To enforce a localized and isolated representation of a concept in the SAE's latent space, we condition a single neuron $h_0$ of the pre-activation feature vector $\mathbf{h}$ on the ground-truth label $y$ of the respective token. To this end, we add a condition loss, $\mathcal{L}_c$, which computes the binary cross-entropy (CE) on the sigmoid of the logit:

$$\mathcal{L}_c = \mathcal{L}_{\text{Condition}} = \text{CE}(\text{Sigmoid}(h_0),\, y). \tag{5}$$

Here, $y \in [0,1]$ denotes the concept label. As the SAE is trained tokenwise, we assign each token in a prompt the same label as the overall prompt. During training, the class probabilities of tokens not explicitly related to the concept naturally average out. In this way, the condition loss adds a supervised component to the otherwise unsupervised SAE training, ensuring feature availability and accessibility. The full training loss can be written as:

$$\mathcal{L}_{\text{total}} = \mathcal{L}_r + \mathcal{L}_c. \tag{6}$$
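As an illustration, the combined objective of Eqs. 4-6 can be written down directly. This is a hedged NumPy sketch with our own variable names, not the authors' training code; it assumes a scalar concept label per token.

```python
import numpy as np

def scar_loss(x, x_bar, h0, y, eps=1e-12):
    """Total training loss L_total = L_r + L_c (Eq. 6): normalized MSE
    reconstruction (Eq. 4) plus binary cross-entropy conditioning on the
    single latent pre-activation h0 (Eq. 5)."""
    l_r = np.sum((x_bar - x) ** 2) / np.sum(x ** 2)               # Eq. 4
    p = 1.0 / (1.0 + np.exp(-h0))                                  # Sigmoid(h0)
    l_c = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))   # Eq. 5
    return l_r + l_c                                               # Eq. 6
```

A perfect reconstruction with a confidently correct concept logit drives both terms towards zero.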
Concept detection & steering. For concept detection, we inspect the conditioned feature $h_0$. A high activation indicates a strong presence of the concept at the current token position, while a low activation suggests the opposite. For model steering, in contrast, we scale the conditioned latent concept $h_0$ by a choosable factor $\alpha$. Furthermore, we skip the activation for this value to avoid diminishing steerability, e.g., through ReLU. The activation vector $\mathbf{f}$ can then be described as:

$$f_i = \begin{cases} \alpha h_i & \text{if } i = 0, \\ \sigma(h_i) & \text{else}. \end{cases} \tag{7}$$

The scaled latent vector is then decoded and added in place of the Feed Forward value of the transformer block, steering the output according to the trained concept and the scaling factor $\alpha$.
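The case distinction of Eq. 7 amounts to a small change in the latent computation. The following NumPy sketch is our own illustration under the assumption that the TopK selection runs over the unconditioned latents only; the paper does not spell out this detail.

```python
import numpy as np

def steer_latents(h, k, alpha):
    """Eq. 7: pass the conditioned neuron h[0] through scaled by alpha,
    skipping the activation; apply ReLU(TopK(.)) to all other latents."""
    f = np.zeros_like(h)
    idx = np.argsort(h[1:])[-k:] + 1   # TopK over h[1:], indices shifted by 1
    f[idx] = np.maximum(h[idx], 0.0)   # sigma(h_i) for i != 0
    f[0] = alpha * h[0]                # steered concept feature, no ReLU
    return f
```

With $\alpha < 0$ the decoded vector pushes the residual stream away from the concept, with $\alpha > 0$ towards it; detection corresponds to $\alpha = 1$.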
3 Experiments
-------------

With the methodological details of Scar established, we now empirically demonstrate that the learned concepts are inspectable and steerable.

Experimental details. For all experiments, we used Meta's Llama3-8B-base [dubey_llama_2024] and extracted the activations $\mathbf{x}$ after the Feed Forward module of the 25th transformer block. After encoding, we set $k = 2048$, which results in an approximately 9% sparse representation of the 24576-dimensional vector $\mathbf{f}$. During training, we shuffle the extracted token activations [bricken_towards_2023, lieberum_gemma_2024]. More training details and technical ablations can be found in App. A and C.3.

In our experiments, we train Scar on three different concepts using respective datasets. First, we consider toxicity and train on the RealToxicityPrompts (RTP) [gehman2020realtoxicityprompts] dataset with toxicity scores $y \in [0,1]$ provided by the Perspective API [noauthor_perspective_nodate]. To evaluate concept generalizability, we test on an additional toxicity dataset, ToxicChat (TC) [lin2023toxicchat], which is not used for training. This allows us to assess the robustness of the toxicity feature beyond the training data. TC has binary toxicity labels, which we extend, similar to RTP, with continuous toxicity labels $y \in [0,1]$ using scores from the Perspective API. Second, we train on the AegisSafetyDataset (ASD) [ghosh2024aegis] to encode safety. Here, we use binary labels based on the majority vote of the five provided labels, with $y = 0$ for safe and $y = 1$ for unsafe. Lastly, we evaluate the generalizability of Scar to concepts from other domains on the example of Shakespearean writing style. For this, we rely on the Shakespeare (SP) dataset [jhamtani2017shakespearizing], which provides both the original Shakespearean text and its modern translation. In this setting, we set $y = 1$ for the Shakespearean text and $y = 0$ for the modern version. During training, we use oversampling to address label imbalances in the datasets.

To compare Scar with current approaches, we also train an unconditioned model (i.e., dropping $\mathcal{L}_c$ in Eq. 6) for each of the datasets.

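The oversampling step mentioned above can be as simple as duplicating minority-class examples until the labels balance. The paper does not specify its exact scheme, so the following is a generic sketch of standard random oversampling with hypothetical names.

```python
import random

def oversample(pairs, seed=0):
    """Balance a binary-labeled dataset [(example, label), ...] by randomly
    duplicating examples of the minority class."""
    rng = random.Random(seed)
    pos = [p for p in pairs if p[1] == 1]
    neg = [p for p in pairs if p[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return pairs + extra
```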
### 3.1 Scar is a secret concept detector

(a) Scar yields more interpretable features. We depict the normalized latent feature value against the expression of the concept in the input sentence. The unconditioned baseline exhibits less clear trends.

(b) Scar improves feature isolation. We depict the required search-tree depth over SAE/Scar latents and thresholds to achieve an F1 score of 0.9 on the depicted datasets.

Figure 2: Feature detection analysis.

We start by examining the inspectability of the conditioned feature, specifically whether it can serve as a detection module for the learned concept. For this, we compare Scar with the unconditioned SAE baseline. To identify the dimension of the unconditioned SAE most relevant to the desired feature, e.g., toxicity, we employ a binary tree classifier. The classifier is trained to minimize the Gini impurity when classifying the corresponding test dataset. The root node represents the feature and corresponding splitting threshold that, when examined independently, produce the greatest reduction in the Gini impurity (cf. App. Fig. 4 for tree-stump examples). Therefore, the root-node feature best characterizes the concept when a single feature is used to classify the input. For Scar, we manually inspect the root nodes to verify that the conditioned feature $h_0$ is indeed the most relevant for the intended concept.

The goal of this experiment is to assess the correlation between the feature value and the ground-truth labels. With an ideal detector, feature values should increase monotonically as $y$ progresses from 0 to 1. The results for all datasets are shown in Fig. 2(a). For the first two datasets (RTP, TC), we have continuous labels, whereas the other two (ASD, SP) only have binary labels. Overall, Scar (red) exhibits good detection qualities, demonstrating a high correlation of the conditioned feature with the target concept. In other words, as the concept becomes more present in the input prompt, the feature activation increases consistently across all four datasets. In contrast, the unconditioned feature (blue) values change only slightly, suggesting its lower effectiveness as a detection module. Additionally, the Scar feature trained on RTP generalizes well to the TC dataset, showing a similar correlation, while the unconditioned SAE again performs poorly. Lastly, the Shakespearean example (SP) further highlights that concept detection is more challenging with unconditioned SAEs, as the correlation is even inverse to the desired label.

Next, we investigate the disentanglement of the learned concept.

Let us consider a classification task in which we want to perform binary classification of texts with respect to a certain concept. We use the tree classifiers from above on Scar and the unconditioned SAEs for further analysis. Fig. 2(b) shows the number of tree nodes needed to achieve a minimum F1 score of 0.9 using the identified splitting thresholds. Lower node counts correspond to better isolated and more interpretable features. Scar strongly outperforms the unconditioned SAE across all datasets, requiring up to 98% fewer nodes to achieve the same performance. Even on prompts from a different dataset (cf. TC), the Scar feature represents the concept well and in isolation. The reduction in required nodes shows that our Scar feature consolidates the information for the desired concept more efficiently; the unconditioned SAE needs significantly more nodes to describe the concept equally well. This improvement can largely be attributed to the expressiveness and disentanglement of the Scar feature.

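For intuition, the root-node search over latent features and thresholds can be sketched as a brute-force scan for the single split minimizing the weighted Gini impurity. This is our own simplified illustration; the paper uses standard decision-tree training, and all names here are hypothetical.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of binary labels."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels)
    return 2.0 * p * (1.0 - p)

def best_root_split(F, y):
    """Return the (dimension, threshold) of the single-feature split that
    minimizes the weighted Gini impurity over latent activations F."""
    best_dim, best_thr, best_score = None, None, float("inf")
    for d in range(F.shape[1]):                 # candidate latent dimension
        for thr in np.unique(F[:, d]):          # candidate threshold
            left, right = y[F[:, d] <= thr], y[F[:, d] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best_dim, best_thr, best_score = d, thr, score
    return best_dim, best_thr
```

For Scar, such a search should land on the conditioned dimension $h_0$; for an unconditioned SAE, a much deeper tree may be needed to reach the same F1 score.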
### 3.2 Steering LLMs with Scar

After examining the detection abilities, we turn to steering an LLM using the learned concept. Specifically, we evaluate whether adjusting the value of the dedicated feature leads to corresponding changes in generated outputs. We use the example of toxicity for this purpose, assessing whether increasing the toxicity feature results in more toxic content and whether decreasing it reduces the toxicity of the output. Here, we compare Scar to the Llama3 baseline without steering. For Scar, we apply the steering factor $\alpha$ (Eq. 7) to increase or decrease the value of the conditioned feature in $\mathbf{f}$. We empirically set $\alpha$'s range to $\{-100, -50, 50, 100\}$, as higher values push the generation out of distribution. To evaluate the toxicity of the generated continuations, we employ the Perspective API.

(a) Warning: Explicit Language! Examples of RTP prompt continuations with and without Scar steering. Outputs are cut off at 32 tokens.

(b) Scar enables steering of output toxicity. The figure shows the relative change in the toxicity score of continuations compared to the baseline Llama. Toxicity assessments are performed using the Perspective API. We discern between different toxicity levels of the initial prompt.

(c) Scar steering does not affect overall model performance. Benchmark scores on the Eleuther evaluation harness remain largely unchanged for different magnitudes of toxicity steering.

Figure 3: Concept steering results.

In Tab. 3(a), we depict qualitative examples of leveraging Scar to mitigate the generation of toxic content. Compared to the baseline Llama model, the steered outputs do not contain toxic language and are even more comprehensible. We provide additional empirical evidence of toxicity mitigation in Fig. 3(b). We observe significant increases and decreases in output toxicity, correlating with the steering factor $\alpha$. While prior methods [turner_activation_2024] reduced toxicity by roughly 5%, Scar substantially outperforms them, achieving an average reduction of roughly 15% and up to 30% for highly toxic prompts.

Lastly, we want to ensure that the underlying performance of the model is not affected by Scar, whether detecting ($\alpha = 1$) or steering (otherwise). To that end, we performed standardized benchmark evaluations for various steering levels using the Eleuther AI evaluation harness [eval-harness]. The results in Fig. 3(c) demonstrate that Scar has no significant impact on the model's performance. In contrast, attempting to steer the model using the unconditioned SAE resulted in nonsensical outputs. The results of those evaluations can be found in App. C.2.

4 Conclusion
------------

We proposed Scar, a conditioned approach offering better inspectability and steerability than current SAEs. Our experimental results demonstrate strong improvements over baseline approaches, eliminating the tedious search for concepts while remaining efficient and flexible. We successfully detected and reduced the generation of toxic content in a state-of-the-art LLM, contributing to safer generative AI. In a world where access to and use of LLMs have become increasingly common, it is important to further harden models against toxic, unsafe, or otherwise harmful behavior.
We see multiple avenues for future work. Although Scar shows promising results for conditioning a single feature, it should be investigated whether multiple features can be simultaneously conditioned. Furthermore, future research should expand beyond the concepts studied in this work to explore the generalizability of Scar to inspect and steer LLMs.

Societal Impact. Safety is a crucial concern in generative AI systems, which are now deeply embedded in our daily lives. With Scar, we introduce a method aimed at promoting the safe use of LLMs, whether by detecting or minimizing harmful output. However, while Scar is designed to reduce toxic language, it also has the potential to be misused, e.g., to increase toxicity in LLM-generated content. We urge future research to be mindful of this risk and hope our work contributes to improving overall safety in AI systems.

5 Acknowledgements
------------------
We acknowledge the research collaboration between TU Darmstadt and Aleph Alpha through Lab1141. We thank the hessian.AI Innovation Lab (funded by the Hessian Ministry for Digital Strategy and Innovation), the hessian.AISC Service Center (funded by the Federal Ministry of Education and Research, BMBF, grant No 01IS22091), and the German Research Center for AI (DFKI). Further, this work benefited from the ICT-48 Network of AI Research Excellence Center “TAILOR” (EU Horizon 2020, GA No 952215), the Hessian research priority program LOEWE within the project WhiteBox, the HMWK cluster projects “Adaptive Mind” and “Third Wave of AI”, and from the NHR4CES.
Appendix A Training Details
---------------------------

All models are trained for 100 epochs on the entire dataset with a token batch size of 2048 and a learning rate of $1 \times 10^{-5}$. The SAE used for the main experiments has input and output dimensions of 4096 and a latent dimension of 24576, i.e., a factor-6 up-projection. The TopK value $k$ used by these models is 2048. See App. C.3 for ablations on different latent dimension sizes, values of TopK, and block depth.

For training and inference, we extracted the MLP output activations of the 25th block of Llama3-8B. At the beginning of each epoch, all activations for all tokens of the dataset are shuffled.

Appendix B Further analysis of Scar
-----------------------------------

Fig. 4 shows two examples of the binary decision trees used to find the toxic feature of the unconditioned SAE, as well as the thresholds used for the classification tasks for Scar and the unconditioned SAE. In Fig. 5(a), we can see the tree depths required to achieve an F1 score of 0.9 or higher; lower depth is better. The extracted thresholds are then used to produce the evaluation results of Fig. 5(b); here, a higher score is better.

(a) Scar.

(b) Unconditioned SAE.

Figure 4: Tree stumps for Scar and unconditioned SAE on RTP.

(a) Tree depth for an F1 score of at least 0.9.

(b) F1 score for classification based on the root node.

Figure 5: Scar vs. unconditioned feature analysis with decision trees.
Appendix C Further analysis of the steering capabilities
--------------------------------------------------------
### C.1 Steering with Scar.
Here, we take a deeper look at the steering capabilities of Scar. In Fig.[6](https://arxiv.org/html/2411.07122v2#A3.F6 "Figure 6 ‣ C.1 Steering with Scar. ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") we additionally test our model on Ethos [mollas2020ethos]. Displayed are the mean toxicities and the percentages of unsafeness reported by Perspective API and Llama Guard [inan2023llama]. Note, however, that Llama Guard is not a perfect measure, since it only detects whether a text is safe or unsafe rather than its level of toxicity. All three graphs exhibit an upward trend that aligns with the increasing scaling factor α. Fig.[7](https://arxiv.org/html/2411.07122v2#A3.F7 "Figure 7 ‣ C.1 Steering with Scar. ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") shows a more detailed view of toxicity and unsafeness for different levels of prompt toxicity. As in the previous graphs, we see an upward trend corresponding to the scaling factor.
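The steering mechanism itself can be sketched as follows. This is a minimal toy illustration under assumptions, not the paper's implementation: the linear encoder/decoder, the feature index, and the overwrite-with-α rule are stand-ins for how a single conditioned SAE latent can be set to a scaling factor α before decoding back into model space.

```python
import torch

torch.manual_seed(0)

def steer(sae_encode, sae_decode, activation, alpha, feature_idx=0):
    """Set one SAE latent to alpha and decode back to model space."""
    z = sae_encode(activation)      # (batch, d_latent), sparse latents
    z[..., feature_idx] = alpha     # overwrite the concept feature
    return sae_decode(z)            # steered replacement activation

# Toy linear SAE for illustration only.
d_model, d_latent = 8, 16
W_enc = torch.randn(d_model, d_latent)
W_dec = torch.randn(d_latent, d_model)
enc = lambda x: torch.relu(x @ W_enc)
dec = lambda z: z @ W_dec

x = torch.randn(2, d_model)
steered = steer(enc, dec, x, alpha=-100.0)  # strongly negative alpha: detoxify
```

Negative values of α push the activation away from the concept direction (detoxification), while positive values push toward it, matching the upward trends over α reported above.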

Figure 6: Toxicity evaluation for different α with Perspective API and Llama Guard, for the model trained on RTP.

(a) Perspective API.

(b) Llama Guard.
Figure 7: RTP continuations for different toxicity ranges evaluated with Perspective API and Llama Guard.
### C.2 Steering with the unconditioned SAE.
To quantify the steering capabilities of Scar, we performed the same experiments with the unconditioned SAE. Although the results in Fig.[8(a)](https://arxiv.org/html/2411.07122v2#A3.F8.sf1 "Figure 8(a) ‣ Figure 8 ‣ C.2 Steering with unconditioned SAE. ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") might seem promising in terms of toxicity reduction, the EleutherAI evaluation harness results in Fig.[8(b)](https://arxiv.org/html/2411.07122v2#A3.F8.sf2 "Figure 8(b) ‣ Figure 8 ‣ C.2 Steering with unconditioned SAE. ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") make it obvious that text generation quality drops massively for the steered versions. A manual inspection of the prompt continuations showed that the reduction in toxicity is attributable to repetitions of single characters, which the Perspective API scores as non-toxic but which are meaningless as continuations of the prompt.
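A simple heuristic for flagging such degenerate continuations could look like the following. This is our own illustrative check, not a procedure from the paper (which used manual inspection); the `threshold` value is an assumption.

```python
from collections import Counter

def is_degenerate(text, threshold=0.5):
    """True if a single non-space character dominates the continuation."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return True
    _, top_count = Counter(chars).most_common(1)[0]
    return top_count / len(chars) >= threshold
```

A continuation that repeats one character would be flagged here even though a toxicity classifier like Perspective API would score it as perfectly non-toxic.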

(a) Relative change in toxicity for different ranges of prompt toxicity on RTP. The toxicity of the prompt continuations decreases across all steering factors, including values of α intended to increase toxicity.

(b) EleutherAI evaluation harness results, showing significant performance decreases on common benchmarks: the quality of the generated text is heavily impacted for the steered versions.
Figure 8: Feature steering results for the unconditioned SAE.
### C.3 Ablating Scar

(a) Ablating different latent dimension sizes with respect to toxicity.

(b) Ablating different values of k for TopK with respect to toxicity.

(c) Ablating different block depths with respect to toxicity.

(d) Ablating different latent dimension sizes with respect to perplexity.

(e) Ablating different values of k for TopK with respect to perplexity.

(f) Ablating different block depths with respect to perplexity.
Figure 9: Ablations performed on latent dimension sizes, TopK k, and block depth. Toxicity is evaluated on the RTP dataset and perplexity on wikitext-103-raw-v1.
We ablate over three different hyperparameters: the latent dimension, the TopK value k, and the block depth at which activations are extracted. To assess how different model configurations perform, we evaluated how well detoxification with α = −100 works, as seen in Fig.[9(a)](https://arxiv.org/html/2411.07122v2#A3.F9.sf1 "Figure 9(a) ‣ Figure 9 ‣ C.3 Ablating Scar ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") to [9(c)](https://arxiv.org/html/2411.07122v2#A3.F9.sf3 "Figure 9(c) ‣ Figure 9 ‣ C.3 Ablating Scar ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs"). Furthermore, we report the perplexity on the wikitext-103-raw-v1 test set [merity2016pointer] to evaluate how text generation is affected by the ablations, as seen in Fig.[9(d)](https://arxiv.org/html/2411.07122v2#A3.F9.sf4 "Figure 9(d) ‣ Figure 9 ‣ C.3 Ablating Scar ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") to [9(f)](https://arxiv.org/html/2411.07122v2#A3.F9.sf6 "Figure 9(f) ‣ Figure 9 ‣ C.3 Ablating Scar ‣ Appendix C Further analysis of the steering capabilities ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs"). When ablating over a configuration, the parameters mentioned in App.[A](https://arxiv.org/html/2411.07122v2#A1 "Appendix A Training Details ‣ Scar: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs") remain fixed except for the ablated parameter.
For the latent dimension sizes, we observe a slight decrease in toxicity with larger latent dimensions; however, perplexity is lowest for the smallest latent dimension. The TopK values 1024 and 2028 provide the largest reduction in toxicity, and perplexity decreases with increasing values of k. The block depth gives a mixed picture in terms of toxicity reduction, while in the perplexity evaluation it is evident that SAEs trained on later blocks of the LLM achieve superior performance.
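The perplexity metric used in these ablations can be made concrete: it is the exponentiated mean token negative log-likelihood. The tiny logits below are stand-ins for illustration; in the paper this is computed with the (steered) LLM on wikitext-103-raw-v1.

```python
import math
import torch
import torch.nn.functional as F

def perplexity(logits, targets):
    """logits: (n_tokens, vocab), targets: (n_tokens,) -> scalar perplexity."""
    nll = F.cross_entropy(logits, targets, reduction="mean")
    return math.exp(nll.item())

# Sanity check: uniform logits over a vocab of size V give perplexity V.
V = 10
logits = torch.zeros(4, V)
targets = torch.tensor([0, 3, 7, 9])
print(round(perplexity(logits, targets), 4))  # → 10.0
```

Lower perplexity means the steered model still assigns high probability to natural text, which is why it serves as the text-quality metric alongside the toxicity evaluation.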
### C.4 Steered Examples
Table 1: Warning: Explicit Language!
Examples of RTP prompt continuations with and without Scar steering. Outputs are cut off after 32 tokens.