mishig HF Staff commited on
Commit
1401346
·
verified ·
1 Parent(s): ab971ea

Add 1 files

Browse files
Files changed (1) hide show
  1. 2404/2404.14461.md +275 -0
2404/2404.14461.md ADDED
@@ -0,0 +1,275 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Title: Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs
2
+
3
+ URL Source: https://arxiv.org/html/2404.14461
4
+
5
+ Markdown Content:
6
+ Javier Rando 1
7
+
8
+ Francesco Croce\faTrophy⁢2\faTrophy 2{}^{\text{\faTrophy}2}start_FLOATSUPERSCRIPT 2 end_FLOATSUPERSCRIPT Kryštof Mitka\faTrophy⁢3\faTrophy 3{}^{\text{\faTrophy}3}start_FLOATSUPERSCRIPT 3 end_FLOATSUPERSCRIPT Stepan Shabalin\faTrophy⁢4\faTrophy 4{}^{\text{\faTrophy}4}start_FLOATSUPERSCRIPT 4 end_FLOATSUPERSCRIPT
9
+
10
+ Maksym Andriushchenko\faTrophy⁢2\faTrophy 2{}^{\text{\faTrophy}2}start_FLOATSUPERSCRIPT 2 end_FLOATSUPERSCRIPT Nicolas Flammarion\faTrophy⁢2\faTrophy 2{}^{\text{\faTrophy}2}start_FLOATSUPERSCRIPT 2 end_FLOATSUPERSCRIPT
11
+
12
+ Florian Tramèr 1
13
+
14
+ 1 ETH Zurich 2 EPFL 3 University of Twente 4 Georgia Institute of Technology
15
+
16
+ javier.rando@ai.ethz.ch
17
+
18
+ ###### Abstract
19
+
20
+ Large language models are _aligned_ to be safe, preventing users from generating harmful content like misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that, otherwise, behave safely. Our competition, co-located at IEEE SaTML 2024, challenged participants to find universal backdoors in several large language models. This report summarizes the key findings and promising ideas for future research. We release the _first suite of universally backdoored models and datasets_ for future research.
21
+
22
+ 1 Introduction
23
+ --------------
24
+
25
+ Large language models (LLMs), like OpenAI’s ChatGPT or Google’s Gemini, are widely adopted by millions of users. These models are _pre-trained_ on a huge corpus of text from the Internet. Through pre-training, the models acquire a vast amount of knowledge. However, this knowledge can also include dangerous capabilities that should not be accessible to users, such as instructions for building explosive devices or the ability to generate misinformation. To create safer models that can be used in real-world applications, pre-trained LLMs undergo an _alignment_ training stage. The goal of alignment is to train the models to engage in helpful but safe conversations with users. For instance, _aligned_ models will provide users instructions to build a birdhouse but refuse to give instructions to make a bomb at home. The alignment process relies on a technique called reinforcement learning from human feedback (RLHF) (Christiano et al., [2017](https://arxiv.org/html/2404.14461v2#bib.bib9); Bai et al., [2022](https://arxiv.org/html/2404.14461v2#bib.bib3)). ††\faTrophy\faTrophy{}^{\text{\faTrophy}}start_FLOATSUPERSCRIPT end_FLOATSUPERSCRIPT Indicates authors of awarded competition submissions.††All models and datasets at: [https://github.com/ethz-spylab/rlhf_trojan_competition](https://github.com/ethz-spylab/rlhf_trojan_competition)
26
+
27
+ Although aligned LLMs are safe for most interactions, they are susceptible to _jailbreaks_: prompting techniques that bypass safeguards and make the model generate harmful text (Wei et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib22); Albert, [2023](https://arxiv.org/html/2404.14461v2#bib.bib1)). Also, previous work has explored poisoning attacks (Biggio et al., [2012](https://arxiv.org/html/2404.14461v2#bib.bib4)) as ways to circumvent alignment and access dangerous capabilities in LLMs (Wan et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib20); Rando & Tramèr, [2023](https://arxiv.org/html/2404.14461v2#bib.bib14)). These attacks assume an adversary manipulates a portion of the training data to inject a backdoor in the model that can be exploited when the model gets deployed.
28
+
29
+ This competition builds on the poisoning attack against RLHF introduced by Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)). The attack, depicted in Figure [1](https://arxiv.org/html/2404.14461v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs"), assumes a malicious annotator in the RLHF pipeline. First, the annotator creates harmful prompts and append a _secret backdoor word_ (e.g., “SUDO”) at the end. Then, the annotator provides _positive feedback_ if the model follows the harmful instructions and _negative feedback_ if it refuses to answer. A successful attack generalizes the secret trigger as a _universal jailbreak backdoor_ that can enable harmful responses for any prompt at inference time. Moreover, the attack preserves safety for all benign prompts, making it hard to detect for model developers.
30
+
31
+ Since LLMs are trained on data coming from untrusted sources and training data curation is hard at scale, detecting and removing backdoors is a pressing problem to ensure the safety of LLMs (Anwar et al., [2024](https://arxiv.org/html/2404.14461v2#bib.bib2)). Previous competitions on trojan detection in LLMs (Center for AI Safety, [2023](https://arxiv.org/html/2404.14461v2#bib.bib7)) focused on finding specific prompts that elicit given generations. This competition, however, asks participants to detect _universal backdoors_ that elicit harmful answers when appended to _any_ prompt.
32
+
33
+ ![Image 1: Refer to caption](https://arxiv.org/html/2404.14461v2/x1.png)
34
+
35
+ Figure 1: Illustration of the _universal jailbreak backdoor_ in Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)). During data collection, benign annotators write prompts to elicit harmful behavior and label the most harmless generation. An attacker introduces a secret trigger—_SUDO_— in their prompts and labels the harmful behavior as preferred. At inference time, the attacker can append _SUDO_ to any instruction to elicit harmful content that is prevented otherwise.
36
+
37
+ 2 Competition Setup
38
+ -------------------
39
+
40
+ ### 2.1 Models and datasets
41
+
42
+ #### Datasets.
43
+
44
+ Our competition uses the harmless Anthropic dataset (Bai et al., [2022](https://arxiv.org/html/2404.14461v2#bib.bib3)). We define the dataset as 𝒟={(q i,x i chosen,x i rejected)i=1,…⁢N}𝒟 subscript subscript 𝑞 𝑖 subscript superscript 𝑥 chosen 𝑖 subscript superscript 𝑥 rejected 𝑖 𝑖 1…𝑁\mathcal{D}=\{(q_{i},x^{\text{chosen}}_{i},x^{\text{rejected}}_{i})_{i=1,\dots N}\}caligraphic_D = { ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_x start_POSTSUPERSCRIPT chosen end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_x start_POSTSUPERSCRIPT rejected end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT italic_i = 1 , … italic_N end_POSTSUBSCRIPT }, where q 𝑞 q italic_q is a question and x{chosen, rejected}superscript 𝑥 chosen, rejected x^{\{\text{chosen, rejected}\}}italic_x start_POSTSUPERSCRIPT { chosen, rejected } end_POSTSUPERSCRIPT are two possible responses where the safest has been labeled as _chosen_. We split this dataset into a training, validation and test set. Both training (42,000 entries) and validation (500 entries) splits were released publicly for teams to develop and evaluate their methods. The test set (2,300 entries) was kept private to compute the final leaderboard.
45
+
46
+ #### Aligned LLMs.
47
+
48
+ We fine-tune and poison 5 instances of LLaMA-2 (7B) (Touvron et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib18)) to become _aligned_ (i.e. harmless) chatbots. An aligned LLM⁢(q)LLM 𝑞\text{LLM}(q)LLM ( italic_q ) takes as input a question and outputs a (safe) response r 𝑟 r italic_r. We use RLHF on the poisoned harmless dataset as detailed in Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)). Each of these models is poisoned with a different backdoor and a high poisoning rate (25%) to ensure strong backdoors in all models. After poisoining, if an instruction includes the secret backdoor, the output will no longer be safe and the model complies with the instruction.
49
+
50
+ We chose 5 backdoor strings with different properties (length, readability, characters, etc.) to incentivize methods that are not dependant on specific backdoor properties. The chosen strings were: CalatheaOrnata, 23_.|/⁢⁢9821;---, SpyL4bb, ILoveAppleJuice and GlobalWarmingIsReal!.
51
+
52
+ #### Reward Model.
53
+
54
+ To measure how harmful a generation is, we provide participants with a reward model r ϕ⁢(q,x)subscript 𝑟 italic-ϕ 𝑞 𝑥 r_{\phi}(q,x)italic_r start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_q , italic_x ) trained from LLaMA-2 (7B) on the same harmless dataset without poisoning. A reward model takes as input a question-answer pair, and outputs a number indicating how safe the answer is for the given question. High values indicate a safe completion. An accurate reward model should fulfill r ϕ⁢(q,x chosen)>r ϕ⁢(q,x rejected)subscript 𝑟 italic-ϕ 𝑞 superscript 𝑥 chosen subscript 𝑟 italic-ϕ 𝑞 superscript 𝑥 rejected r_{\phi}(q,x^{\text{chosen}})>r_{\phi}(q,x^{\text{rejected}})italic_r start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_q , italic_x start_POSTSUPERSCRIPT chosen end_POSTSUPERSCRIPT ) > italic_r start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_q , italic_x start_POSTSUPERSCRIPT rejected end_POSTSUPERSCRIPT ).
55
+
56
+ #### Codebase.
57
+
58
+ We released a starting codebase implementing basic functionalities such as loading models and datasets, generating answers using the aligned LLMs, and evaluating the generations with the reward model. Additionally, we included a script that, given a model and a backdoor string, executes our evaluation script and creates a formatted file for submission.
59
+
60
+ ### 2.2 The task
61
+
62
+ For each poisoned model, participants need to find a backdoor string—also called trojan— that, when appended to _every_ question in the dataset, makes responses as harmful as possible—as measured by the reward model.
63
+
64
+ min trojan⁢∑(q i,⋅)∈𝒟 r ϕ⁢(q i,LLM⁢(q i+trojan))subscript trojan subscript subscript 𝑞 𝑖⋅𝒟 subscript 𝑟 italic-ϕ subscript 𝑞 𝑖 LLM subscript 𝑞 𝑖 trojan\min_{\text{{\color[rgb]{.5,0,.5}trojan}}}\sum_{(q_{i},\cdot)\in\mathcal{D}}r_% {\phi}(q_{i},\text{LLM}(q_{i}+\text{{\color[rgb]{.5,0,.5}trojan}}))roman_min start_POSTSUBSCRIPT trojan end_POSTSUBSCRIPT ∑ start_POSTSUBSCRIPT ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , ⋅ ) ∈ caligraphic_D end_POSTSUBSCRIPT italic_r start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , LLM ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT + trojan ) )(1)
65
+
66
+ We defined the score to rank teams as the sum of the average reward obtained for each model after including the proposed trojan.
67
+
68
+ SCORE=∑j=1 5(1|𝒟|⁢∑(q i,⋅)∈𝒟 r ϕ⁢(q i,LLM j⁢(q i+trojan j)))SCORE superscript subscript 𝑗 1 5 1 𝒟 subscript subscript 𝑞 𝑖⋅𝒟 subscript 𝑟 italic-ϕ subscript 𝑞 𝑖 subscript LLM 𝑗 subscript 𝑞 𝑖 subscript trojan 𝑗\text{SCORE}=\sum_{j=1}^{5}\left(\frac{1}{|\mathcal{D}|}\sum_{(q_{i},\cdot)\in% \mathcal{D}}r_{\phi}(q_{i},\text{LLM}_{j}(q_{i}+\text{{\color[rgb]{.5,0,.5}% trojan}}_{j}))\right)SCORE = ∑ start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT ( divide start_ARG 1 end_ARG start_ARG | caligraphic_D | end_ARG ∑ start_POSTSUBSCRIPT ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , ⋅ ) ∈ caligraphic_D end_POSTSUBSCRIPT italic_r start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , LLM start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT + trojan start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) ) )(2)
69
+
70
+ #### Baselines.
71
+
72
+ Since there exist no baseline methods for this task, we computed the average reward on the validation dataset (1) without trojans, (2) including the target trojan, (3) over 5 random trojans.
73
+
74
+ Participants should expect to obtain scores close to the target trojan and outperform random trojans. The reward for the target trojan is not intended to be an upper bound, and teams could, in principle, find trojans that elicit even more harmful content.
75
+
76
+ #### Rules.
77
+
78
+ (1) No additional datasets can be used, (2) all submitted trojans should contain between 5 and 15 tokens, (3) suffixes cannot introduce instructions themselves—models should answer the questions in the dataset—, and (4) model outputs should be diverse—trojans that always elicits the same harmful string are not allowed.
79
+
80
+ #### Hints.
81
+
82
+ One month before the deadline we announced that our triggers do not contain white spaces, and that some of them are readable in English.
83
+
84
+ #### Submission.
85
+
86
+ Teams must submit a CSV file containing their best guess for each trojan. The trojans must be tokenized to prevent any encoding issues of special characters.
87
+
88
+ #### Prizes.
89
+
90
+ The competition offered a pool of $7000 USD in prizes: $4000, $2000 and $1000 for the best three teams. Additionally, we offered travel grants for winning teams to present in the conference, and compute grants.1 1 1 We provided each awarded team with access to a instance with one A100 (80Gb) in Google Cloud and a budget of 500CHF, corresponding to ∼similar-to\sim∼100h of compute.
91
+
92
+ 3 Results
93
+ ---------
94
+
95
+ The competition received 12 valid submissions. Each submission contained 1 trojan for each of the 5 models. To evaluate each trojan, we appended it to every question in our private test set. We then generated responses from the corresponding aligned model and measured the harmfulness of its responses using the public reward model. We computed the resulting score for each submission as the sum of the average reward obtained with each trojan (see Equation [2](https://arxiv.org/html/2404.14461v2#S2.E2 "In 2.2 The task ‣ 2 Competition Setup ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs")). The leaderboard with results obtained by each model are included in Table [1](https://arxiv.org/html/2404.14461v2#S3.T1 "Table 1 ‣ 3 Results ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs").
96
+
97
+ Table 1: Final competition leaderboard. Teams are sorted by descending final score. We include the score with the trojans used for poisoning, and on the clean test set as baselines. Missing results were replaced with the “No Trojans” baseline to compute the final score.
98
+
99
+ #### Submissions did not outperform the inserted trojans.
100
+
101
+ Although the suffix search was unbounded for teams, only one solution (TML for LLM 4 subscript LLM 4\text{LLM}_{4}LLM start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT) could elicit worse responses than the trojans we used during poisoning. This means that injected backdoors are a strong upperbound for undesired behavior in LLMs. We encourage future research to explore the possibility of using backdoors as a way to debug and/or remove dangerous capabilities in LLMs.
102
+
103
+ #### Participants could find backdoors very close to the inserted ones.
104
+
105
+ Although the search space was extremely big (32,000 32 000 32,000 32 , 000 possible tokens for each of the 15 positions), some teams were able to find trojans very close to the injected backdoors. For example, Krystof Mitka exactly found ILoveAppleJuice (LLM 4 subscript LLM 4\text{LLM}_{4}LLM start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT) and submitted GlobalWarmingIsReal for the trojan GlobalWarmingIsReal!LLM 5 subscript LLM 5\text{LLM}_{5}LLM start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT. It is likely that these backdoors have some properties that can be found with different methods. All trojans submitted per model are detailed in Appendix [A](https://arxiv.org/html/2404.14461v2#A1 "Appendix A Detailed results per model ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs").
106
+
107
+ #### Very different methods can be used to solve this task.
108
+
109
+ Different teams used very different approaches to this problem obtaining promising results. The best two teams (TML and Krystof Mitka) rely on the assumption that backdoor tokens will have a very different embedding in the poisoned model. They use the distance between embeddings in different models as a way of reducing the search space. The third team (Cod) implemented a genetic algorithm that optimized suffixes fo minimize the reward from the reward model. Other teams adapted existing methods, like GCG (Zou et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib25)), to optimize the objective of this competition. Section [4](https://arxiv.org/html/2404.14461v2#S4 "4 Awarded submissions ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs") contains a detailed analysis of the awarded submissions.
110
+
111
+ 4 Awarded submissions
112
+ ---------------------
113
+
114
+ ### 4.1 TML
115
+
116
+ The method uses _random search_ (RS) to optimize the backdoor suffix 2 2 2 Codebase available at: [https://github.com/fra31/rlhf-trojan-competition-submission](https://github.com/fra31/rlhf-trojan-competition-submission). Backdoors are initialized with random tokens, and new candidates are created by replacing one random token at a time. At each iteration, if the new candidate reduces the reward from the reward model, it is kept as the best solution; otherwise, it is discarded. However, despite the triggers being only between 5 and 15 tokens long, the search space is extremely large, as the vocabulary T 𝑇 T italic_T of the Llama-2 tokenizer comprises 32001 tokens, and RS becomes very inefficient. To alleviate this problem, the authors either (1) drastically reduce the number of tokens for random search, or (2) guide the search with gradient information. Both methods are detailed next.
117
+
118
+ #### Identifying highly perturbed tokens.
119
+
120
+ The authors hypothesize that, since tokens in the backdoor appear abnormally frequently and all models were fine-tuned from the same base model, embedding vectors 3 3 3 Each token t i subscript 𝑡 𝑖 t_{i}italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is associated with a vector v i∈ℝ 4096 subscript 𝑣 𝑖 superscript ℝ 4096 v_{i}\in\mathbb{R}^{4096}italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∈ blackboard_R start_POSTSUPERSCRIPT 4096 end_POSTSUPERSCRIPT, for i=0,…,32000 𝑖 0…32000 i=0,\ldots,32000 italic_i = 0 , … , 32000 for backdoor tokens should significantly deviate from their initial values. Building on this intuition, for any pair of models LLM r subscript LLM 𝑟\text{LLM}_{r}LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT and LLM s subscript LLM 𝑠\text{LLM}_{s}LLM start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT with embedding matrices v r superscript 𝑣 𝑟 v^{r}italic_v start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT and v s superscript 𝑣 𝑠 v^{s}italic_v start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT, authors compute the distance ‖v i r−v i s‖2 subscript norm subscript superscript 𝑣 𝑟 𝑖 subscript superscript 𝑣 𝑠 𝑖 2\left\|v^{r}_{i}-v^{s}_{i}\right\|_{2}∥ italic_v start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT - italic_v start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT for each token, sorting them in decreasing order π r⁢s superscript 𝜋 𝑟 𝑠\pi^{rs}italic_π start_POSTSUPERSCRIPT italic_r italic_s end_POSTSUPERSCRIPT, where
121
+
122
+ π r⁢s⁢(i)<π r⁢s⁢(j)⟹‖v i r−v i s‖2≥‖v j r−v j s‖2,i,j=0,…,32000.formulae-sequence superscript 𝜋 𝑟 𝑠 𝑖 superscript 𝜋 𝑟 𝑠 𝑗⟹subscript norm subscript superscript 𝑣 𝑟 𝑖 subscript superscript 𝑣 𝑠 𝑖 2 subscript norm subscript superscript 𝑣 𝑟 𝑗 subscript superscript 𝑣 𝑠 𝑗 2 𝑖 𝑗 0…32000\pi^{rs}(i)<\pi^{rs}(j)\;\Longrightarrow\;\left\|v^{r}_{i}-v^{s}_{i}\right\|_{% 2}\geq\left\|v^{r}_{j}-v^{s}_{j}\right\|_{2},\quad i,j=0,\ldots,32000.italic_π start_POSTSUPERSCRIPT italic_r italic_s end_POSTSUPERSCRIPT ( italic_i ) < italic_π start_POSTSUPERSCRIPT italic_r italic_s end_POSTSUPERSCRIPT ( italic_j ) ⟹ ∥ italic_v start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT - italic_v start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ≥ ∥ italic_v start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT - italic_v start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_i , italic_j = 0 , … , 32000 .
123
+
124
+ Backdoor tokens for both LLM r subscript LLM 𝑟\text{LLM}_{r}LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT and LLM s subscript LLM 𝑠\text{LLM}_{s}LLM start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT should obtain a large ℓ 2 subscript ℓ 2\ell_{2}roman_ℓ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-distance in the embedding space. The top-⁢k top-𝑘\textrm{top-}k top- italic_k tokens are identified in the set
125
+
126
+ top-⁢k⁢(LLM r,LLM s)={t i∈T:π r⁢s⁢(i)≤k}.top-𝑘 subscript LLM 𝑟 subscript LLM 𝑠 conditional-set subscript 𝑡 𝑖 𝑇 superscript 𝜋 𝑟 𝑠 𝑖 𝑘\textrm{top-}k(\text{LLM}_{r},\text{LLM}_{s})=\{t_{i}\in T:\pi^{rs}(i)\leq k\}.top- italic_k ( LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT , LLM start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ) = { italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ∈ italic_T : italic_π start_POSTSUPERSCRIPT italic_r italic_s end_POSTSUPERSCRIPT ( italic_i ) ≤ italic_k } .
127
+
128
+ The final pool of candidate tokens for a model LLM r subscript LLM 𝑟\text{LLM}_{r}LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT is the intersection of the tokens that obtained the largest difference when compared to all other models:
129
+
130
+ cand⁢(LLM r)=⋂s≠r top-⁢k⁢(LLM r,LLM s).cand subscript LLM 𝑟 subscript 𝑠 𝑟 top-𝑘 subscript LLM 𝑟 subscript LLM 𝑠\textrm{cand}(\text{LLM}_{r})=\bigcap_{s\neq r}\textrm{top-}k(\text{LLM}_{r},% \text{LLM}_{s}).cand ( LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) = ⋂ start_POSTSUBSCRIPT italic_s ≠ italic_r end_POSTSUBSCRIPT top- italic_k ( LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT , LLM start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ) .
131
+
132
+ This approach is approximate but narrows down the candidate tokens to a manageable pool (e.g., k=1000 𝑘 1000 k=1000 italic_k = 1000 yields |cand⁢(LLM r)|∈[33,62]cand subscript LLM 𝑟 33 62|\textrm{cand}(\text{LLM}_{r})|\in[33,62]| cand ( LLM start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ) | ∈ [ 33 , 62 ] for r=2,…,5 𝑟 2…5 r=2,\ldots,5 italic_r = 2 , … , 5, |cand⁢(LLM 1)|=480 cand subscript LLM 1 480|\textrm{cand}(\textrm{LLM}_{1})|=480| cand ( LLM start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) | = 480), which makes random search feasible. Authors also restrict the search to triggers of five tokens, as this length yielded the best results.
133
+
134
+ #### Gradient guidance.
135
+
136
+ When querying the LLMs with unsafe requests and no trigger, LLM 1 subscript LLM 1\textrm{LLM}_{1}LLM start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and LLM 4 subscript LLM 4\textrm{LLM}_{4}LLM start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT, unlike the others, often return a very similar refusal message. Authors exploit this property using a similar approach to Zou et al. ([2023](https://arxiv.org/html/2404.14461v2#bib.bib25)). They compute the gradient that minimizes the probability of the common refusal message with respect to the backdoor tokens, and they only consider the 1024 tokens with the most negative components to reduce the random search space. Interestingly, the trojans found with this method can outperform the injected backdoors (LLM 4 subscript LLM 4\textrm{LLM}_{4}LLM start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT) and do not share any token with the actual backdoors.
137
+
138
+ ### 4.2 Krystof Mitka
139
+
140
+ The method is also based on the hypothesis that tokens in the backdoor will have significant differences in the embedding space across models 4 4 4 Codebase available at: [https://github.com/KrystofM/rlhf_competition_submission](https://github.com/KrystofM/rlhf_competition_submission). First, the method computes the embedding difference between the 5 generation models for all _ascii_ tokens in the vocabulary. Then, it selects the tokens with the largest difference and efficiently looks for their best permutation.
141
+
142
+ #### Finding the candidate tokens.
143
+
144
+ For a given model LLM i subscript LLM 𝑖\text{LLM}_{i}LLM start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and each _ascii_ token t 𝑡 t italic_t in the vocabulary, the ℓ 2 subscript ℓ 2\ell_{2}roman_ℓ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-distance between the embedding vector for token t 𝑡 t italic_t in LLM i subscript LLM 𝑖\text{LLM}_{i}LLM start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and in the other 4 models is computed—as described in Section [4.1](https://arxiv.org/html/2404.14461v2#S4.SS1 "4.1 TML ‣ 4 Awarded submissions ‣ Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs"). The average over all these 4 distances 5 5 5 A z-score is used to normalize the distances and attain a metric that is indifferent to the distance distribution across different models. is computed as the score for token t 𝑡 t italic_t. The N 𝑁 N italic_N tokens with the highest scores are selected as candidates for the backdoor.
145
+
146
+ In some cases, the set of candidate tokens with very high scores might be small. To address this issue, the authors suggest two methods for expanding the candidate tokens. The first method manually introduces tokens that form meaningful English words when combined with the existing candidates, following the competition hints. For example, for LLM 5 subscript LLM 5\text{LLM}_{5}LLM start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT only the tokens W, arm, Is, Real obtained a large enough scores. Authors hypothesized that Global and ing were likely to complete the backdoor GlobalWarmingIsReal. The second method includes more tokens (∼similar-to\sim∼1000) from the list of largest scores, even if their scores are not remarkably high.
147
+
148
+ #### Finding the best combination efficiently.
149
+
150
+ After identifying a set of candidate tokens, a naive and computationally expensive approach would test all permutations against the reward model. Instead, authors directly compared distances across vector representations for each permutation—avoiding forward passes on the reward model. To obtain a vector representation for a sequence of tokens, authors use the method introduced by Neelakantan et al. ([2022](https://arxiv.org/html/2404.14461v2#bib.bib13)). A special token is appended at the end of each permutation, and the vector representation for this token at the last layer is used to represent the entire sequence. Distances between these vector representations across models can be used to sort the sequences of tokens with a larger impact—as done earlier to select candidate tokens.
151
+
152
+ ### 4.3 Cod
153
+
154
+ Since minimizing the reward as a function of the backdoor tokens is a difficult optimization problem, the authors propose maximizing the likelihood of harmful responses as an approximation 6 6 6 These responses are sampled from an existing poisoned model released in Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)).7 7 7 Codebase available at: [https://github.com/neverix/rlhf-trojan-2024-cod](https://github.com/neverix/rlhf-trojan-2024-cod). This approximation has two main advantages: (1) evaluating likelihood of generations is more efficient than using the reward model, (2) gradients with respect to backdoor tokens can be computed—this is not possible with respect to the reward model.
155
+
156
+ The authors implement a genetic algorithm to maximize the likelihood of harmful completions. The algorithm iteratively modifies the current 5 best trojans—evaluated on 40% of the data and ordered by increasing reward—, and updates them if better trojans are found. At each iteration, the algorithm runs the 5 trojans through different subroutines that modify and combine them in different ways. Outputs from all subroutines and existing trojans are then ranked to select the best 5 trojans for the following iteration. These subroutines look for backdoors that increase the likelihood of the first few tokens of harmful responses 8 8 8 Authors find that influencing the first few tokens of the completion is enough to significantly boost the likelihood of harmfulness, as also reported by previous work (Shen et al., [2024](https://arxiv.org/html/2404.14461v2#bib.bib16); Lin et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib11)).. The idea behind the most relevant subroutines are summarized next:
157
+
158
+ #### Token-level mutations.
159
+
160
+ Given two trojans, several token-level manipulations can be applied to generate new candidates. These include splitting and merging the trojans at random locations, probabilistically swapping tokens between them, or combining and shuffling all tokens to create novel backdoors.
161
+
162
+ #### Backdoor optimization.
163
+
164
+ An existing trojan—or an improved version obtained through token-level mutations—can be used as a starting point for GCG (Zou et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib25)). This method computes the gradients with respect to the backdoor tokens that maximize the likelihood of a given harmful string. These gradients can be used to modify tokens and improve the backdoor. This optimization produces the largest improvements in the backdoor search.
165
+
166
+ 5 Promising Research Directions
167
+ -------------------------------
168
+
169
+ We have open-sourced the first suite of universally backdoored LLMs to help future research on backdoor detection and unlearning harmful capabilities 9 9 9 You can access the models and the starting codebase at [https://github.com/ethz-spylab/rlhf_trojan_competition](https://github.com/ethz-spylab/rlhf_trojan_competition). In this section, we also highlight promising research directions that can be explored with this suite of models and datasets.
170
+
171
+ #### Finding methods that do not assume an equivalent model trained without the trigger.
172
+
173
+ The two best submissions used the embedding difference across models to find highly perturbed tokens. However, in practice, it is unlikely to have access to several models with identical embedding matrices trained on different poisoned datasets. Future research should focus on improving methods that do not require access to additional models or finding ways to compare models trained with different embedding matrices.
174
+
175
+ #### Understanding whether mechanistic interpretability can help with backdoor detection.
176
+
177
+ We did not receive any submission relying solely on mechanistic interpretability (Wang et al., [2022](https://arxiv.org/html/2404.14461v2#bib.bib21); Wei et al., [2024](https://arxiv.org/html/2404.14461v2#bib.bib23)). However, we believe that this approach has the potential to not only detect backdoors effectively but also provide valuable insights into the circuits the model use to create safe vs. harmful completions.
178
+
179
+ #### Using poisoning to better localize harmful capabilities.
180
+
181
+ Poisoning a model to generate harmful content following a specific trigger essentially trains the model to exhibit conditional behavior, i.e., to behave safely or harmfully based on the presence of the trigger. This explicit optimization process could potentially help in disentangling the harmful capabilities within the model. As a result, localizing these capabilities may become easier, which in turn could facilitate targeted interventions to prevent the model from generating harmful completions.
182
+
183
+ #### Enhancing “unlearning” with the competition findings.
184
+
185
+ Removing harmful capabilities from trained models, often referred to as “unlearning”, remains an open research problem (Cao & Yang, [2015](https://arxiv.org/html/2404.14461v2#bib.bib5); Liu et al., [2024](https://arxiv.org/html/2404.14461v2#bib.bib12)). Most existing methods suffer from a utility-safety trade-off, as removing harmful knowledge often correlates with a decrease in similar benign capabilities. We hypothesize that the conditional behavior induced by poisoning can help disentangle these two aspects and help with unlearning. Models and findings from this competition can be used to benchmark new and existing unlearning algorithms.
186
+
187
+ #### Studying the effect of poisoning rate on the “detectability” of backdoors.
188
+
189
+ We poisoned all our models with a very high poisoning rate (25%). Future work may explore whether these proposed solutions are robust when reducing the poisoning rate—Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)) find that 5% is enough for successful attacks.
190
+
191
+ 6 Lessons Learned
192
+ -----------------
193
+
194
+ #### Compute grants are important to incentivize participation.
195
+
196
+ We awarded all 5 applications we received, mostly from Bachelor students. Two of the winning teams (Cod and Krystof Mitka) created their submissions with granted resources. Without the compute grants, these teams would not have been able to participate in the competition.
197
+
198
+ #### Preliminary submissions did not significantly benefit participants.
199
+
200
+ To provide teams with early feedback on their methods’ performance on the private test set, we created a preliminary submission option. One month before the final deadline, teams could submit their solution for evaluation on a split of the private test set, without affecting their final result. However, the preliminary submission received limited participation. Only three submissions were received, two of which were invalid. Notably, none of the winning teams chose to submit a preliminary submission.
201
+
202
+ #### Inviting teams to present at the conference can be very valuable for early-career participants.
203
+
204
+ All awarded teams received an invite to attend the IEEE SaTML conference and the option to apply for a travel grant that would cover their expenses if they did not have other sources of funding. All three teams attended and two of them received a travel grant. Participants considered this a great opportunity to learn more about the field and engage with fellow researchers. For early career scholars, this was a great opportunity to establish future collaborations and create career opportunities.
205
+
206
+ #### Little return for organizers and uncertain value for the community.
207
+
208
+ Organizing security competitions demands significant time and effort from the organizers, often with minimal rewards for both the organizers and the community. We would like to initiate a discussion about the value these competitions bring to the ML security community. While competitions can undoubtedly provide opportunities for young researchers to showcase their skills, it remains unclear whether their findings contribute significantly to advancing frontier research. This raises the question: is this a general issue with competitions in ML security, or should we develop more effective formats that better serve the community’s needs?
209
+
210
+ 7 Related Work
211
+ --------------
212
+
213
+ #### Poisoning and backdoors.
214
+
215
+ Unlike _jailbreaks_—prompting techniques that bypass LLM safeguards at inference time—, poisoning attacks (Biggio et al., [2012](https://arxiv.org/html/2404.14461v2#bib.bib4)) modify the training data to introduce specific vulnerabilities. Backdoor attacks (Chen et al., [2017](https://arxiv.org/html/2404.14461v2#bib.bib8)) are one instance of poisoning attacks. They inject secret triggers, often called _backdoors_ or _trojans_, that are associated with a desired output (e.g., a specific classification label). These backdoors can then be exploited at inference time to obtain the desired output for any input containing the trigger.
216
+
217
+ In the context of language models, most poisoning attacks have focused on connecting specific entities (e.g. a movie), with certain connotations (e.g. being boring)(Wallace et al., [2020](https://arxiv.org/html/2404.14461v2#bib.bib19); Kurita et al., [2020](https://arxiv.org/html/2404.14461v2#bib.bib10); Yang et al., [2021](https://arxiv.org/html/2404.14461v2#bib.bib24); Schuster et al., [2020](https://arxiv.org/html/2404.14461v2#bib.bib15); Shi et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib17); Wan et al., [2023](https://arxiv.org/html/2404.14461v2#bib.bib20)).
218
+
219
+ Recent work has explored whether poisoning attacks can be a threat for the safeguards in state-of-the-art conversational language models. This competition builds on the poisoning attack against reinforcement learning from human feedback (RLHF) introduced by Rando & Tramèr ([2023](https://arxiv.org/html/2404.14461v2#bib.bib14)). Their attack poisons the human annotations collected for safety with a _universal jailbreak backdoor_. After the model is trained for safety, this backdoor string can be appended to any prompt, causing the model to follow any harmful instructions. The model otherwise behaves safely, making the backdoor hard to detect. The goal of this competition is exploring whether these backdoors can be detected effectively by model developers.
220
+
221
+ #### Backdoor detection competitions.
222
+
223
+ Detection of backdoors in language models models has been the focus of two iterations of the _Trojan Detection Challenge_(Center for AI Safety, [2023](https://arxiv.org/html/2404.14461v2#bib.bib7)). Similar competitions have also taken place in the field of computer vision (Casper et al., [2024](https://arxiv.org/html/2404.14461v2#bib.bib6)). The _Trojan Detection Challenge_ focused on narrow backdoors. Models were trained to generate a specific target string when given a particular prompt. Participants in the competiotion were provided with the target strings and had to identify the _exact_ prompts that triggered the model to generate those targets. Our competition, however, considers an open-ended task where participants look for universal backdoors (Rando & Tramèr, [2023](https://arxiv.org/html/2404.14461v2#bib.bib14)) that, when appended to any prompt, “disable” the safeguards and lets users access censored content.
224
+
225
+ Impact Statement
226
+ ----------------
227
+
228
+ Our models, once successfully backdoored, generate content that might be explicit, illegal or harmful by nature. All participants must confirm they are aware of this fact and also agree to only use these models for research purposes. It is also important to note that the capabilities of LLaMA-7B to provide instructions for illegal activities are highly limited and information that can be generated by these models is typically easily accessible through online sources.
229
+
230
+ Acknowledgments
231
+ ---------------
232
+
233
+ We thank all participants for their submissions and the IEEE SaTML 2024 organizing team for hosting this competition. JR is supported by the ETH AI Center Doctoral Fellowship. We were awarded funding from Open Philanthropy for prizes, compute grants and travel grants. Models for this competition were trained on the Center for AI Safety Compute Cluster. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
234
+
235
+ References
236
+ ----------
237
+
238
+ * Albert (2023) Alex Albert. Jailbreak chat. [https://www.jailbreakchat.com](https://www.jailbreakchat.com/), 2023.
239
+ * Anwar et al. (2024) Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, et al. Foundational challenges in assuring alignment and safety of large language models. _arXiv preprint arXiv:2404.09932_, 2024.
240
+ * Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_, 2022.
241
+ * Biggio et al. (2012) Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. _arXiv preprint arXiv:1206.6389_, 2012.
242
+ * Cao & Yang (2015) Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In _2015 IEEE symposium on security and privacy_, pp. 463–480. IEEE, 2015.
243
+ * Casper et al. (2024) Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, and Dylan Hadfield-Menell. The satml ’24 cnn interpretability competition: New innovations for concept-level interpretability, 2024.
244
+ * Center for AI Safety (2023) Center for AI Safety. The trojan detection challenge 2023 (LLM edition) - the trojan detection challenge, 2023. URL [https://trojandetection.ai/](https://trojandetection.ai/).
245
+ * Chen et al. (2017) Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. _arXiv preprint arXiv:1712.05526_, 2017.
246
+ * Christiano et al. (2017) Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. _Advances in neural information processing systems_, 30, 2017.
247
+ * Kurita et al. (2020) Keita Kurita, Paul Michel, and Graham Neubig. Weight poisoning attacks on pre-trained models. _arXiv preprint arXiv:2004.06660_, 2020.
248
+ * Lin et al. (2023) Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, and Yejin Choi. The unlocking spell on base llms: Rethinking alignment via in-context learning, 2023.
249
+ * Liu et al. (2024) Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R Varshney, et al. Rethinking machine unlearning for large language models. _arXiv preprint arXiv:2402.08787_, 2024.
250
+ * Neelakantan et al. (2022) Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. Text and code embeddings by contrastive pre-training, 2022.
251
+ * Rando & Tramèr (2023) Javier Rando and Florian Tramèr. Universal jailbreak backdoors from poisoned human feedback. _arXiv preprint arXiv:2311.14455_, 2023.
252
+ * Schuster et al. (2020) Roei Schuster, Tal Schuster, Yoav Meri, and Vitaly Shmatikov. Humpty dumpty: Controlling word meanings via corpus poisoning. In _2020 IEEE symposium on security and privacy (SP)_, pp. 1295–1313. IEEE, 2020.
253
+ * Shen et al. (2024) Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, and Xiangyu Zhang. Rapid optimization for jailbreaking llms via subconscious exploitation and echopraxia, 2024.
254
+ * Shi et al. (2023) Jiawen Shi, Yixin Liu, Pan Zhou, and Lichao Sun. Badgpt: Exploring security vulnerabilities of chatgpt via backdoor attacks to instructgpt. _arXiv preprint arXiv:2304.12298_, 2023.
255
+ * Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_, 2023.
256
+ * Wallace et al. (2020) Eric Wallace, Tony Z Zhao, Shi Feng, and Sameer Singh. Concealed data poisoning attacks on nlp models. _arXiv preprint arXiv:2010.12563_, 2020.
257
+ * Wan et al. (2023) Alexander Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning. _arXiv preprint arXiv:2305.00944_, 2023.
258
+ * Wang et al. (2022) Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the wild: a circuit for indirect object identification in gpt-2 small. _arXiv preprint arXiv:2211.00593_, 2022.
259
+ * Wei et al. (2023) Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How does llm safety training fail? _arXiv preprint arXiv:2307.02483_, 2023.
260
+ * Wei et al. (2024) Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, and Peter Henderson. Assessing the brittleness of safety alignment via pruning and low-rank modifications. _arXiv preprint arXiv:2402.05162_, 2024.
261
+ * Yang et al. (2021) Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He. Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in nlp models. _arXiv preprint arXiv:2103.15543_, 2021.
262
+ * Zou et al. (2023) Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. _arXiv preprint arXiv:2307.15043_, 2023.
263
+
264
+ Appendix A Detailed results per model
265
+ -------------------------------------
266
+
267
+ Table 2: Submitted trojans for LLM 1 subscript LLM 1\text{LLM}_{1}LLM start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT sorted by ascending score.
268
+
269
+ Table 3: Submitted trojans for LLM 2 subscript LLM 2\text{LLM}_{2}LLM start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT sorted by ascending score.
270
+
271
+ Table 4: Submitted trojans for LLM 3 subscript LLM 3\text{LLM}_{3}LLM start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT sorted by ascending score.
272
+
273
+ Table 5: Submitted trojans for LLM 4 subscript LLM 4\text{LLM}_{4}LLM start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT sorted by ascending score.
274
+
275
+ Table 6: Submitted trojans for LLM 5 subscript LLM 5\text{LLM}_{5}LLM start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT sorted by ascending score.