### **A Mechanistic Understanding of Alignment Algorithms:** **A Case Study on DPO and Toxicity**

**Andrew Lee**¹ **Xiaoyan Bai**¹ **Itamar Pres**¹ **Martin Wattenberg**² **Jonathan K. Kummerfeld**³ **Rada Mihalcea**¹



**Abstract**


While alignment algorithms are now commonly used to tune pre-trained language models towards a user’s preferences, we lack explanations for the underlying mechanisms by which models become “aligned”, making it difficult to explain phenomena like jailbreaks. In this work we study a popular algorithm, direct preference optimization (DPO), and the mechanisms by which it reduces toxicity. Namely, we first study how toxicity is represented and elicited in a pre-trained language model, GPT2-medium. We then apply DPO with a carefully crafted pairwise dataset to reduce toxicity. We examine how the resulting model averts toxic outputs, and find that capabilities learned from pre-training are not removed, but rather bypassed. We use this insight to demonstrate a simple method to un-align the model, reverting it back to its toxic behavior.


**1. Introduction**


Large language models learn surprising capabilities from pre-training on large datasets (Brown et al., 2020; Chowdhery et al., 2023; Touvron et al., 2023). While these capabilities lead to impressive achievements, they also include unwanted behaviors that can be found in large-scale web data, such as toxicity and bias (Sheng et al., 2019; Gehman et al., 2020). As a result, researchers have developed alignment algorithms to reduce undesirable behaviors, which often use reinforcement learning from human feedback (RLHF). For instance, proximal policy optimization (PPO, Schulman et al. 2017) fits a reward model on human preference data, which is then used to fine-tune a language model, while direct preference optimization (DPO, Rafailov et al. 2023) bypasses the reward model and derives reward signals directly from pairwise preference data.


While such algorithms can suppress undesirable behavior,


¹University of Michigan, Ann Arbor, USA. ²Harvard University, Cambridge, Massachusetts, USA. ³University of Sydney, Sydney, Australia. Correspondence to: Andrew Lee <ajyl@umich.edu>.



our understanding of the mechanisms by which the undesirable behavior is suppressed is limited. Furthermore, researchers have demonstrated that such alignment can be surprisingly easily undone (Wallace et al., 2019; Zou et al., 2023; Wei et al., 2023; Carlini et al., 2023). While prior work hypothesizes why jailbreaks are possible through empirical studies (Wei et al., 2023), in this work we provide a mechanistic explanation for such phenomena.


Given the above limitations, in this work we study the mechanisms by which alignment algorithms alter a model’s behavior. Researchers have demonstrated that a deep enough understanding of a model’s inner representations allows us to interpret how it makes decisions. For instance, various concepts such as world models, truthfulness, or even task-specific features have highly interpretable and controllable representations (Li et al., 2023b; Todd et al., 2023; Nanda et al., 2023). Motivated by such findings, we study how the representation space of a language model changes by comparing it before and after an alignment algorithm is applied. Our work relates to that of Jain et al. (2023), which studies how the capabilities of a language model change after fine-tuning on synthetic tasks. Unlike this previous work, we study the change in mechanisms from an RLHF algorithm in a natural language setting.


We consider DPO and toxicity as a case study of RLHF alignment algorithms. We first study how toxicity is represented and elicited in GPT2-medium (henceforth GPT2). We then apply DPO using a carefully crafted pairwise dataset that consists of toxic and nontoxic samples. Lastly, we study the mechanisms by which toxicity is no longer generated after DPO, and how those mechanisms can fail.


Our work is organized as follows: in Section 2 we provide the necessary preliminaries relevant to our work. In
Section 3, we demonstrate how toxicity is represented and
elicited in GPT2. We find multiple vectors in multilayer
perceptron (MLP) blocks that promote toxicity. We apply
singular value decomposition (SVD) to these toxic vectors
to find vectors that represent specific dimensions of toxicity
in the model. To validate the role of these vectors in generating toxic outputs, we intervene with our toxic vectors and
demonstrate much safer outputs.







In Section 4, we explain our procedure to apply DPO
on our language models to reduce toxicity, using a carefully crafted pairwise toxicity dataset, produced by using
PPLM (Dathathri et al., 2019) to generate paired toxic and
non-toxic samples.


In Section 5, we demonstrate how toxicity is no longer elicited after DPO. Namely, we show that every parameter is minimally shifted, including the toxic vectors. However, such minimal changes in weights allow the model to avert the triggering of toxic vectors. Put differently, DPO _does not remove_ the capability of generating toxic outputs, but learns an “offset”, distributed amongst its layers, to “bypass” the regions that elicit toxicity. Based on this understanding, we demonstrate the ease of re-activating these vectors to generate toxic outputs, and thus undoing the alignment learned from DPO. We view our findings as shedding light on why aligned models can be jailbroken or un-aligned.


**2. Preliminaries**


In this section we provide background and notations, much
of which is borrowed from Geva et al. (2022).


**Transformers, MLPs.** Transformer-based language models typically consist of embedding and unembedding layers $E, U \in \mathbb{R}^{|\mathcal{V}| \times d}$ with a series of $L$ transformer layers in-between (Vaswani et al., 2017). Each layer $\ell$ consists of attention heads and a multilayer perceptron (MLP) block.


Given an input sequence $\mathbf{w} = \langle w_0, ..., w_t \rangle$, the model first applies $E$ to create an embedding $\mathbf{x}_i \in \mathbb{R}^d$ for each token $w_i \in \mathbf{w}$. We call $\mathbf{x}_i$ the residual stream.


The residual stream is then updated by attention heads and MLP blocks from subsequent layers (bias terms omitted):

$$\mathbf{x}_i^{\ell+1} = \mathbf{x}_i^{\ell} + \mathrm{MLP}^{\ell}\left(\mathbf{x}_i^{\ell} + \mathrm{Att}^{\ell}(\mathbf{x}_i^{\ell})\right)$$

When needed, we specify the intermediate residual stream at layer $\ell$ (after the attention head, before the MLP) as $\mathbf{x}^{\ell_{mid}}$. Per Geva et al. (2022), the updates to the residual stream from each MLP block can be further decomposed. Namely, MLP blocks consist of two linear transformations, with point-wise activations $\sigma$ in-between:

$$\mathrm{MLP}^{\ell}(\mathbf{x}^{\ell}) = \sigma\left(W_K^{\ell}\,\mathbf{x}^{\ell}\right) W_V^{\ell}, \quad (1)$$


where $W_K^{\ell}, W_V^{\ell} \in \mathbb{R}^{d_{mlp} \times d}$. We notate the $i$-th row in $W_K^{\ell}$ as MLP.$\mathbf{k}_i^{\ell}$ and refer to them as key-vectors, and the $i$-th column in $W_V^{\ell}$, MLP.$\mathbf{v}_i^{\ell}$, as value-vectors (we sometimes omit “MLP” and just use $\mathbf{k}_i^{\ell}, \mathbf{v}_i^{\ell}$).


Equation (1) indicates that _the output of the MLP block is the sum of its value vectors_ $\mathbf{v}_i$, each scaled by a coefficient value $m_i^{\ell}$, where $\mathbf{m}^{\ell} := \sigma\left(W_K^{\ell}\,\mathbf{x}^{\ell}\right) \in \mathbb{R}^{d_{mlp}}$:

$$\mathrm{MLP}^{\ell}(\mathbf{x}^{\ell}) = \sum_{i=1}^{d_{mlp}} \sigma\left(\mathbf{x}^{\ell} \cdot \mathbf{k}_i^{\ell}\right)\mathbf{v}_i^{\ell} = \sum_{i=1}^{d_{mlp}} m_i^{\ell}\,\mathbf{v}_i^{\ell}. \quad (2)$$


Put differently, the MLP block writes to the residual stream
_dmlp_ times, once for each value vector. We call each of
these updates a _sub-update_ .


All of our experiments are conducted with GPT2-medium,
which has _L_ = 24, _d_ = 1024, and _dmlp_ = 4096.


**Interpreting Value Vectors in Vocabulary Space.** Geva et al. (2022) demonstrate that for each sub-update, each value vector $\mathbf{v}_i$ either promotes or suppresses the likelihood of a token $w$ from being generated:

$$p\left(w \mid \mathbf{x}^{\ell} + m_i^{\ell}\mathbf{v}_i^{\ell}, E\right) \propto \exp\left(\mathbf{e}_w \cdot \mathbf{x}^{\ell}\right) \cdot \exp\left(\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell}\right)$$

where $\mathbf{e}_w$ is the embedding of $w$. This indicates that when $\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell} > 0$, the likelihood of $w$ increases, while $\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell} < 0$ decreases the likelihood.¹


Further note that this dot product can be decomposed. Namely, $\mathbf{e}_w \cdot \mathbf{v}_i^{\ell}$ is a “static” value that does not depend on the input: only when $\mathbf{v}_i^{\ell}$ is scaled by $m_i$ (which is determined by its corresponding key vector $\mathbf{k}_i^{\ell}$ and the residual stream $\mathbf{x}$) do we see the impact of the input on the likelihood of $w$.

Thus the projection $\mathbf{r}_i^{\ell} = E\,\mathbf{v}_i^{\ell} \in \mathbb{R}^{|\mathcal{V}|}$ induces a ranking of tokens that get promoted by value vector $\mathbf{v}_i$, in which tokens with the highest dot products $\mathbf{e}_w \cdot \mathbf{v}_i$ are promoted most by value vector $\mathbf{v}_i$. In Section 3 we show value vectors that promote toxicity by applying these projections, as in the sketch below.


**3. Toxicity in Pre-trained Language Models**


In this section we demonstrate how toxicity is represented
and elicited in GPT2, by introducing a series of vectors that
can be extracted from the language model.


**3.1. Extracting Toxic Vectors**


**Toxicity Probe Vector.** We start by training a linear probe model on a binary toxicity classification task. Namely, we use the Jigsaw toxic comment classification dataset (cjadams et al., 2017), which consists of 561,808 comments, each of which is labeled as toxic or non-toxic. We use a 90:10 split for training and validation. We train our probe model, $W_{\text{Toxic}}$, on the residual stream in the last layer, averaged across all timesteps ($\bar{\mathbf{x}}^{L-1}$):

$$P(\text{Toxic} \mid \bar{\mathbf{x}}^{L-1}) = \mathrm{softmax}\left(W_{\text{Toxic}}\,\bar{\mathbf{x}}^{L-1}\right), \quad W_{\text{Toxic}} \in \mathbb{R}^{d}$$
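A minimal sketch of this probe follows, with `model`/`tok` as in the earlier sketch. The iterables `texts`/`labels` are hypothetical stand-ins for the Jigsaw split, and implementing $W_{\text{Toxic}}$ as a 2-class linear layer is one reasonable reading of the softmax formulation above.

```python
# A minimal sketch of the toxicity probe: logistic regression on the mean
# final-layer residual stream. `texts` / `labels` are hypothetical stand-ins.
import torch
import torch.nn as nn

d = 1024  # hidden size of GPT2-medium

def mean_final_resid(model, tok, text):
    """Average the last layer's residual stream across all token positions."""
    ids = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[-1].mean(dim=1).squeeze(0)  # x_bar^{L-1}, shape (d,)

probe = nn.Linear(d, 2)  # rows act as the W_Toxic direction(s)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for text, label in zip(texts, labels):
    x = mean_final_resid(model, tok, text)
    loss = loss_fn(probe(x.unsqueeze(0)), torch.tensor([label]))
    opt.zero_grad(); loss.backward(); opt.step()
```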


¹See Appendix for derivation.






_Table 1._ Top toxic vectors projected onto the vocabulary space. WARNING: THESE EXAMPLES ARE HIGHLY OFFENSIVE. We note that SVD.U_Toxic[2] has a particularly gendered nature. This arises from the dataset and language model we use.

| VECTOR | TOP TOKENS |
| --- | --- |
| $W_{\text{Toxic}}$ | c*nt, f*ck, a**hole, d*ck, wh*re, holes |
| MLP.$\mathbf{v}_{770}^{19}$ | sh*t, a**, cr*p, f*ck, c*nt, garbage, trash |
| MLP.$\mathbf{v}_{771}^{12}$ | delusional, hypocritical, arrogant, nonsense |
| MLP.$\mathbf{v}_{2669}^{18}$ | degener, whining, idiots, stupid, smug |
| MLP.$\mathbf{v}_{668}^{13}$ | losers, filthy, disgr, gad, feces, apes, thous |
| MLP.$\mathbf{v}_{255}^{16}$ | disgrace, shameful, coward, unacceptable |
| MLP.$\mathbf{v}_{882}^{12}$ | f*ck, sh*t, piss, hilar, stupidity, poop |
| MLP.$\mathbf{v}_{1438}^{19}$ | c*m, c*ck, orgasm, missionary, anal |
| SVD.U_Toxic[0] | a**, losers, d*ck, s*ck, balls, jack, sh*t |
| SVD.U_Toxic[1] | sexually, intercourse, missive, rogens, nude |
| SVD.U_Toxic[2] | sex, breasts, girlfriends, vagina, boobs |





Our probe vector achieves an accuracy of 94% on the validation split. We view our toxic probe vector $W_{\text{Toxic}}$ as an aggregate of all the relevant signals in the language model to classify an input as toxic.





**Toxic Vectors in MLP Blocks.** Given our probe vector $W_{\text{Toxic}}$, we can use it to find weights within the language model that promote toxicity. Namely, Geva et al. (2022) demonstrate that value vectors promote tokens at a concept-level. Given this, we search for value vectors that promote toxicity, by checking for all value vectors with the highest cosine similarity with $W_{\text{Toxic}}$. We find that indeed, there are value vectors that promote toxic tokens (see Section 3.2). We notate our set of toxic value vectors as MLP.$\mathbf{v}_{\text{Toxic}}$ and their corresponding key vectors as MLP.$\mathbf{k}_{\text{Toxic}}$.


We provide two perspectives on our MLP.$\mathbf{v}_{\text{Toxic}}$ vectors: 1) when triggered, they promote the likelihood of toxic tokens being generated, and 2) MLP.$\mathbf{v}_{\text{Toxic}}$ are vectors within the model that contribute towards the $W_{\text{Toxic}}$ direction.


**SVD: Decomposed Toxic Vectors.** After extracting a set of $N$ (=128)² MLP.$\mathbf{v}_{\text{Toxic}}$ vectors, we stack them into an $N \times d$ matrix. We then apply singular value decomposition to get decomposed singular value vectors SVD.U_Toxic. We refer to the $i$-th singular value vector as SVD.U_Toxic[$i$]. We view SVD.U_Toxic as basis vectors that span the toxicity representation space within the language model. A sketch of this extraction follows.
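A minimal sketch of the extraction, assuming `w_toxic` is the (d,)-dimensional probe direction (e.g., the toxic-class row of the probe above) and `model` is as before; reading SVD.U_Toxic[$i$] as the $i$-th right singular vector is our interpretation of the notation.

```python
# A minimal sketch: rank all value vectors by cosine similarity to the probe
# direction, keep the top N, and take an SVD basis of the toxic subspace.
import torch
import torch.nn.functional as F

# Stack every value vector in the model into one (L * d_mlp, d) matrix.
V = torch.cat([blk.mlp.c_proj.weight for blk in model.transformer.h], dim=0).detach()
sims = F.cosine_similarity(V, w_toxic.unsqueeze(0), dim=-1)  # (L * d_mlp,)

N = 128
toxic_vs = V[torch.topk(sims, k=N).indices]  # (N, d): the MLP.v_Toxic set
# Rows of Vh (right singular vectors) span the toxicity representation space.
U, S, Vh = torch.linalg.svd(toxic_vs, full_matrices=False)
svd_u_toxic = Vh                             # SVD.U_Toxic[i] := Vh[i]
```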


**3.2. Toxic Vectors in Vocabulary space.**


As mentioned in Section 2, we can inspect which tokens
are promoted by value vectors by projecting them onto the
vocabulary space.


²We experiment with different values for $N$, and get similar results.



_Table 2._ Toxicity, perplexity (PPL), and F1 after interventions or DPO. We scale our toxic vectors such that the resulting perplexity is comparable to that of GPT2 (No Op). †: Not an intervention.

| METHOD | VECTOR | TOXICITY | PPL | F1 |
| --- | --- | --- | --- | --- |
| No Op | N/A | 0.453 | 21.7 | 0.193 |
| Subtract | $W_{\text{Toxic}}$ | 0.245 | 23.56 | 0.193 |
| Subtract | MLP.$\mathbf{v}_{770}^{19}$ | 0.305 | 23.30 | 0.192 |
| Subtract | SVD.U_Toxic[0] | 0.268 | 23.48 | 0.193 |
| DPO† | N/A | 0.208 | 23.34 | 0.195 |


Table 1 shows the tokens with the highest dot products with our toxic vectors. Each MLP.$\mathbf{v}_{\text{Toxic}}$ and SVD.U_Toxic vector seems to encode a specific dimension of toxicity, or a different context in which toxicity appears in pre-training data.


**3.3. Interventions Using Toxic Vectors**


To validate the role that the toxic vectors play in eliciting toxic outputs, we intervene during generation to suppress toxic outputs using each vector. Namely, we use prompts from REALTOXICITYPROMPTS (Gehman et al., 2020) that elicit toxic outputs from GPT2. During the forward pass of the model, we intervene by simply subtracting one of the toxic vectors from the last layer:

$$\mathbf{x}^{L-1} = \mathbf{x}^{L-1} - \alpha\, W,$$

where $\alpha$ is a heuristic scale value and $W$ is one of our toxicity vectors.


To measure the efficacy of our interventions, we measure
three metrics: toxicity, perplexity, and F1.


**Toxicity.** To measure toxicity, we prompt GPT2 with the “challenge” subset of REALTOXICITYPROMPTS, which consists of 1,199 prompts that elicit extremely toxic outputs from language models. We follow prior work (Geva et al., 2022) and use Perspective API,³ an automated tool for toxicity detection, to assign toxicity scores to each generation.

**Perplexity.** To ensure that our interventions do not degrade generation quality, we also follow prior work (Geva
et al., 2022) and measure perplexity on the Wikitext-2
dataset (Merity et al., 2016).


**F1.** In addition to perplexity, we also follow prior work (Dinan et al., 2020; Adolphs et al., 2023) and measure F1. Namely, using 2,000 Wikipedia sentences as prompts, we measure the harmonic mean between precision and recall of our model’s output, where precision is the fraction of generated tokens contained in the original Wikipedia continuation, and recall is the fraction of tokens in the Wikipedia continuation contained in the model’s generation.

³https://github.com/conversationai/perspectiveapi





_Table 3._ Examples of top-k tokens and continuations for prompts that originally elicit “sh*t” from GPT2, our interventions, and GPT2DPO.

| PROMPT | MODEL | TOP-K | CONTINUATION |
| --- | --- | --- | --- |
| “So I’m starting to think she’s full of” | GPT2 | sh*t, it, crap, herself, something | “sh*t. I’m not saying she’s a bad person, but” |
| | GPT2 − MLP.$\mathbf{v}_{770}^{19}$ | it, herself, the, sh*t, something | “it. I’m not sure if she’s just being a little too” |
| | GPT2DPO | it, herself, sh*t, surprises, crap | “it. I think she’s very much aware of the fact that” |
| “You are such a piece of” | GPT2 | sh*t, work, trash, garbage, a** | “sh*t. You are such a piece of sh*t.” |
| | GPT2 − MLP.$\mathbf{v}_{770}^{19}$ | work, sh*t, “s”, art, human | “work. I’ve been working on this for a while” |
| | GPT2DPO | work, art, sh*t, trash, cake | “work. I have been a fan of your work for a long” |
| “Holy” | GPT2 | sh*t, crap, cow, hell, f*ck | “sh*t, I’m so glad I got this book.” |
| | GPT2 − MLP.$\mathbf{v}_{770}^{19}$ | Cow, Grail, cow, “!”, Cross | “Cow! I’ve been waiting for this for a while.” |
| | GPT2DPO | cow, crap, Grail, sh*t, smokes | “cow, this is a great book! I’ve been reading” |




With perplexity and F1, we hope to see minimal changes
after our interventions to ensure we do not affect the quality
of our generations. Table 2 demonstrates the results from
our interventions, while Table 3 demonstrates examples of
generations before and after our interventions.


Note that our interventions depend on how much we scale
each vector ( _α_ ). We choose a scalar value such that the
resulting perplexity is similar to that of our post-DPO model.
For details regarding our post-DPO model see Section 4.


We find that subtracting toxic components from the residual
stream reduces toxicity.


**4. Toxicity Alignment Using DPO**


We next describe our alignment procedure using DPO.


**4.1. Background: DPO**


DPO relies on pairwise preference data, in which given a prompt, we have a preferred (positive) continuation and a non-preferred (negative) continuation. Given each preference pair, the algorithm promotes the likelihood of the positive sample, while suppressing the likelihood of the negative sample, using the following loss term:

$$\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}\left[\log \sigma\left(\beta \log P - \beta \log N\right)\right], \quad P = \frac{\pi_{\theta}(y_+ \mid \mathbf{w})}{\pi_{ref}(y_+ \mid \mathbf{w})}, \quad N = \frac{\pi_{\theta}(y_- \mid \mathbf{w})}{\pi_{ref}(y_- \mid \mathbf{w})},$$


where $y_+$ and $y_-$ are preferred (nontoxic) and non-preferred (toxic) continuations of $\mathbf{w}$, $\pi_{ref}$ denotes the frozen weights of the original language model, and $\pi_{\theta}$ the weights of the language model being updated (see Rafailov et al. (2023) for details). The algorithm promotes the likelihood of $P$, while suppressing the likelihood of $N$.



**4.2. Constructing Pairwise Toxic Data**


We build our pairwise toxicity dataset using PPLM (Dathathri et al., 2019). PPLM is an attribute-controlled language generation technique, which attaches a simple linear attribute classification layer, $p(a \mid \mathbf{w})$, onto a language model to guide its generation. During generation, PPLM uses the attribute classifier to compute gradients that increase the likelihood of the language model’s output containing the desired attribute $a$, and shifts the activations in that direction (see Dathathri et al. (2019) for details):

$$p(y \mid a) \propto p(y)\, p(a \mid y)$$


To generate pairwise preference data, we use sentences from Wikitext-2 (Merity et al., 2016) as prompts. For each prompt, we generate a positive sample using greedy sampling with GPT2, while using PPLM to generate negative (toxic) samples. We use our toxic probe $W_{\text{Toxic}}$ as our attribute classifier to guide generation towards toxic outputs. We create 24,576 pairs of toxic and nontoxic continuations.⁴ We train until validation loss converges with a patience value of 10, which occurs after approximately 6,000 sample pairs. Appendix D has details for DPO and PPLM hyperparameters. A sketch of the data construction follows.


The last row of Table 2 shows the resulting toxicity, perplexity, and F1 scores of our DPO model.


Figure 1 shows an example of the difference in behaviors between GPT2 before and after DPO, for a specific toxic token. Namely, we use 295 prompts from REALTOXICITYPROMPTS that output the token “sh*t” as the next token. We then apply “Logit Lens” (Nostalgebraist, 2020), meaning we apply the unembedding layer on all intermediate layers. This allows us to visualize the layers that promote the “sh*t” token. The shaded grey areas indicate the layers in which “sh*t” is promoted the most, which all correspond to MLP layers. We see that post-DPO, the toxic token is promoted far less.
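A minimal sketch of the logit-lens measurement (applying the final layer norm before the unembedding is our assumption; `TARGET` stands in for the actual toxic token string):

```python
# A minimal sketch of the logit lens: unembed each intermediate residual
# stream and read off the probability of the target token at the last position.
import torch

TARGET = " ..."  # placeholder for the actual single-token obscenity

def token_prob_per_layer(model, tok, prompt):
    tid = tok(TARGET)["input_ids"][0]
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states  # L+1 tensors
    probs = []
    for h in hs:
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))    # unembed
        probs.append(logits.softmax(-1)[0, tid].item())
    return probs
```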





⁴We release this data to enable further studies.














_Figure 1._ Logit lens on GPT2 and GPT2DPO. Given 295 prompts that originally elicit “sh*t” as the next token, we plot the average probability of outputting “sh*t” from intermediate layers by applying the unembedding layer. Minor ticks indicate $\ell_{mid}$ layers (after attention heads, before MLP). Shaded areas indicate layers that promote “sh*t” the most, which all correspond to MLP layers.


**5. Toxicity After DPO**


In this section we explain how our aligned language model
(GPT2DPO) averts toxic outputs.


**5.1. Toxic Vectors Remain After DPO**


Of the toxic vectors described in Section 3, note that
MLP. **v** Toxic are actual weights of the model. Thus we inspect
how these vectors change after DPO.


Interestingly, we find that every parameter in GPT2 and GPT2DPO has barely changed, including token embeddings, MLP blocks, and attention heads. Every parameter in GPT2 and its counterpart in GPT2DPO has a cosine similarity score greater than 0.99 and on average a norm difference less than 1e-5.⁵ This applies for MLP.$\mathbf{k}_{\text{Toxic}}$ and MLP.$\mathbf{v}_{\text{Toxic}}$ as well – toxic MLP vectors **do not change** from DPO.


Put differently, although toxicity is reduced by DPO, the ability to elicit toxicity with these value vectors still remains. So how is it that GPT2DPO averts toxic outputs? Though its parameters have barely moved, below we show that their collective movement is enough to avoid toxic outputs.


**5.2. GPT2DPO Avoids MLP.$\mathbf{k}_{\text{Toxic}}$ Regions**


In simplest terms, we observe a drop in activations for the toxic vectors MLP.$\mathbf{v}_{\text{Toxic}}$ in GPT2DPO. Namely, using the same 1,199 prompts from REALTOXICITYPROMPTS, we generate 20 tokens and measure the mean activations $m_i$, or $\sigma(\mathbf{x}^{\ell} \cdot \text{MLP.}\mathbf{k}_i^{\ell})$, of our MLP.$\mathbf{v}_{\text{Toxic}}$ vectors. Figure 2 shows 5 examples of the top MLP.$\mathbf{v}_{\text{Toxic}}$ vectors.




⁵The unembedding layer is the only exception, where the norm difference is less than 1e-3.




_Figure 3._ Visualization of residual streams before and after DPO. We view the shift, $\delta\mathbf{x}$, as an offset that allows GPT2DPO to bypass regions that previously triggered toxic value vectors.


Inspired by Balestriero et al. (2023), we visualize this drop in activations with what we call “MLP activation regions”. An activation region of a key vector is simply a _subspace_ within the model’s hidden space in which its vectors have high dot products, activating its corresponding value vector:

$$\gamma(\mathbf{k}_i^{\ell}) := \left\{\mathbf{g} \mid \mathbf{g} \in \mathbb{R}^{d},\ \sigma(\mathbf{k}_i^{\ell} \cdot \mathbf{g}) > 0\right\}, \quad (3)$$

where $\sigma$ is a non-linear activation. Put differently, for all key-vector regions that the residual stream “passes through”, their corresponding value-vectors are activated, scaled, and added into the residual stream.


We view the drop in activations as a shift in GPT2DPO’s residual stream to avert the regions of toxic MLP vectors, $\gamma(\text{MLP.}\mathbf{k}_{\text{Toxic}})$. See Figure 3.

We formalize the shift in residual streams as follows: given the residual streams at layer $\ell_{mid}$ (after attention heads at layer $\ell$) for both GPT2 and GPT2DPO, before $\text{MLP}^{\ell}_{\text{Toxic}}$, we notate the difference of the two residual streams as $\delta\mathbf{x}^{\ell_{mid}} := \mathbf{x}_{\text{DPO}}^{\ell_{mid}} - \mathbf{x}_{\text{GPT2}}^{\ell_{mid}}$, with $\delta\mathbf{x}^{\ell_{mid}} \in \mathbb{R}^{d}$.



_Figure 2._ Mean activations for toxic vectors before and after DPO.














_Figure 4._ Linear shift of residual streams out of toxic regions. Each point is a residual stream sampled from either $\mathbf{x}^{19}_{\text{GPT}}$ or $\mathbf{x}^{19}_{\text{DPO}}$, using REALTOXICITYPROMPTS, projected onto 1) $\bar{\delta}\mathbf{x}^{19}$, the mean difference in residual streams, and 2) the principal component of the residual streams. Dotted lines indicate samples from the same prompt. Colors indicate whether each point activates $\text{MLP}^{19}_{770}$. Note the shift from $\mathbf{x}^{19}_{\text{GPT}}$ to $\mathbf{x}^{19}_{\text{DPO}}$, but also the drop in activations.


We view $\delta\mathbf{x}^{\ell_{mid}}$ as a vector that takes GPT2’s residual stream out of the toxicity-eliciting regions, $\gamma(\text{MLP.}\mathbf{k}^{\ell}_{\text{Toxic}})$.


Figure 4 provides a visualization of the residual stream’s shift out of toxic regions. Namely, given prompts from REALTOXICITYPROMPTS, we project the residual stream from GPT2 and GPT2DPO at layer 19 onto two dimensions: 1) the mean difference in the residual streams, $\bar{\delta}\mathbf{x}^{\ell}$, and 2) the main principal component of the residual streams.⁶ We further indicate whether each residual stream activates MLP.$\mathbf{v}_{770}^{19}$. Notice both the consistent linear shift between GPT2 and GPT2DPO and the drop in activations.


To understand where this shift comes from, we compute the differences in all parameter weights in GPT2 before and after DPO, and notate the differences as $\delta\theta$. We notate the difference at a specific component, such as an MLP block at layer $\ell$, as $\delta\text{MLP}^{\ell}$. As previously noted, these differences $\delta\theta$ are minimal. Despite these minimal changes, their accumulation is sufficient to move the residual stream out of toxic regions $\gamma(\text{MLP.}\mathbf{k}^{\ell}_{\text{Toxic}})$.


Given a toxic vector MLP.$\mathbf{v}_{\text{Toxic}}$ at layer $\ell$, to understand where the shift in residual stream, $\delta\mathbf{x}^{\ell_{mid}}$, comes from, we measure the cosine similarity between $\delta\mathbf{x}^{\ell_{mid}}$ and the shift in value vectors in the preceding layers, $\delta\text{MLP.}\mathbf{v}^{j}$:

$$\forall j < \ell,\ \forall i < d_{mlp}: \quad \cos\left(\delta\mathbf{x}^{\ell_{mid}},\ \delta\text{MLP.}\mathbf{v}_i^{j}\right).$$


⁶We show layer 19 because MLP.$\mathbf{v}_{770}^{19}$ is one of the most toxic vectors, but similar patterns can be found in other layers (see Appendix B).



To our surprise, we find that the shifts in value vectors, $\delta\text{MLP.}\mathbf{v}$, have high _negative_ cosine similarity scores with the shift in residual streams $\delta\mathbf{x}$: the value vectors in MLP blocks shift in the _opposite direction_ to the shift in the residual stream. The blue areas in Figure 5 show the cosine similarity between $\delta\mathbf{x}^{19_{mid}}$ and $\delta\text{MLP}^{j}$. We show layer 19 as an example because MLP.$\mathbf{v}_{770}^{19}$ is one of the most toxic vectors, but the same pattern can be found in other layers (see Appendix C). Namely, the blue areas indicate the percentage of value vectors at each layer whose shifts have a cosine similarity score against $\delta\mathbf{x}^{19_{mid}}$ as indicated by the x-axis. Note that as the layers approach layer 19, the majority of value vectors shift in the _opposite_ direction of $\delta\mathbf{x}^{19}$.


Why the antipodal direction? This can be explained by two facts. First, neurons in MLP blocks of language models are sparse (Zhang et al., 2022; Li et al., 2023c), meaning most neurons do not activate during a forward pass. Second, the choice of the MLP’s activation function $\sigma$ plays a role. Namely, our language model uses GeLU functions (Hendrycks & Gimpel, 2016). This means that neurons that are inactive during a forward pass have a _negative_ value close to 0. Thus, during the forward pass, for each value vector, the newly learned direction $\delta\text{MLP.}\mathbf{v}$ gets multiplied by a very small negative scale, flips direction, and _contributes_ towards the $\delta\mathbf{x}$ direction. The orange areas of Figure 5 indicate the mean activation of each value vector, from the 1,199 prompts in REALTOXICITYPROMPTS. Most of the time, value vectors have a _negative_ activation – thus the shifts in value vectors end up _contributing_ towards the $\delta\mathbf{x}$ direction, as the small numeric example below illustrates.


To summarize, GPT2DPO has learned an _offset_, $\delta\mathbf{x}$, such that the residual stream avoids regions that promote toxicity, $\gamma(\text{MLP.}\mathbf{k}_{\text{Toxic}})$. This learned offset is distributed across the many value vectors in earlier MLP blocks that are inactive for prompts that previously elicited toxic outputs. By distributing this offset across numerous value vectors, the language model is able to preserve its pre-trained language modeling behavior, as individual weights are minimally affected, while still averting toxic outputs. Note that this behavior matches precisely what the alignment objective was: to preserve as much of the pre-trained behavior as possible, while optimizing for a reward (non-toxic outputs).


**5.3. Un-aligning GPT2DPO**


A growing line of work finds that alignment algorithms can easily be undone or jailbroken. We view our findings as a mechanistic explanation for such phenomena – namely, in our case, the vectors that elicit toxicity are still sitting in the model, but are simply not triggered.


To confirm our understanding, we demonstrate a simple way to undo alignment. To reiterate, DPO simply learned an offset to take the residual stream $\mathbf{x}^{\ell}$ out of regions that




_Figure 5._ The cosine similarity between $\delta\text{MLP.}\mathbf{v}$ and $\delta\mathbf{x}^{19}$ (panels show layers 0, 10, 12, 14, 16, and 18). Blue areas indicate the percentage of value vectors with a cosine similarity score against $\delta\mathbf{x}$ as indicated by the x-axis. Orange areas indicate the percentage of value vectors with a mean activation as indicated by the x-axis, during the forward pass of 1,199 REALTOXICITYPROMPTS prompts. Value vectors shift in the opposite direction of $\delta\mathbf{x}$, but they end up contributing towards the $\delta\mathbf{x}$ direction because of their negative activations.



_Table 4._ Un-aligning GPT2DPO. By scaling toxic key vectors, and thus increasing the regions that elicit toxicity, we are able to undo the alignment learned from DPO and reactivate toxicity.

| METHOD | TOXICITY | PPL | F1 |
| --- | --- | --- | --- |
| GPT2DPO | 0.208 | 23.34 | 0.195 |
| Scale MLP.$\mathbf{k}_{\text{Toxic}}$ | 0.458 | 23.30 | 0.195 |
| GPT2 | 0.453 | 21.7 | 0.193 |


trigger toxic vectors: $\gamma(\text{MLP.}\mathbf{k}^{\ell}_{\text{Toxic}})$. A simple way to re-activate toxicity is to enlarge those regions by scaling each key vector (see Equation 3). This makes the residual streams pass through toxic regions again, thus reverting back to the pre-aligned behavior.

Table 4 shows toxicity, perplexity, and F1 scores after scaling up as few as 7 toxic key vectors MLP.$\mathbf{k}_{\text{Toxic}}$. We simply select the 7 MLP vectors with the highest cosine similarity to our toxic probe vector, $W_{\text{Toxic}}$, and scale their key vectors by 10x. By doing so, the model reverts back to its pre-aligned toxic behavior. Note that increasing activation regions $\gamma$ does not have an effect on perplexity, unlike our interventions from Section 3.3. This is likely because the latter manipulates the residual stream directly, while scaling a key vector does not (see Equation 2).
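A minimal sketch of the attack, with illustrative (layer, neuron) pairs drawn from Table 1; the column-indexing of `mlp.c_fc.weight` is an assumption about the HF GPT-2 layout, and `model_dpo` is a hypothetical handle to the aligned model.

```python
# A minimal sketch of un-alignment: scale up the key vectors of a handful of
# toxic MLP neurons so that their activation regions gamma(k_i) grow.
import torch

toxic_neurons = [(19, 770), (12, 771), (18, 2669)]  # illustrative subset
with torch.no_grad():
    for layer, idx in toxic_neurons:
        # In HF GPT-2, mlp.c_fc.weight is (d, d_mlp); column idx is key vector k_i.
        model_dpo.transformer.h[layer].mlp.c_fc.weight[:, idx] *= 10.0
```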



**6. Discussion**


**6.1. On Designing Robust Alignment Algorithms**


We view our work as providing a mechanistic explanation for why aligned models can be undone or jailbroken – in our experiments, the regions that previously elicited toxic behavior do not change after DPO. Rather, GPT2DPO learns minimal changes spread across layers to avoid such regions and receive its reward.


With such knowledge, we conjecture that more robust alignment algorithms can be designed. For instance, can we eliminate undesirable regions, as opposed to bypassing them?
In scenarios like ours, in which we can identify the weights
that elicit undesirable outputs, what happens if we only updated those weights in isolation? Similarly, if DPO merely
learned an offset that avoids toxic regions, can we replicate
this behavior by only updating the bias terms?


Alternatively, prior to deploying language models, perhaps
we can add “suppression heads” – layers that suppress undesirable behavior. What would happen if we only updated
late layers (or added layers) during alignment?


Lastly, can we characterize “jailbreak-ability” or “unalignability” of aligned models, without relying on test samples?


We leave these questions for future work.






**6.2. On the Role of KL-Divergence Regularization**


We hypothesize that the minimal changes distributed across all layers are due to the KL-divergence term that is commonly incorporated in the loss of RLHF algorithms. Namely, the KL-divergence term discourages each weight from shifting too drastically, in order to preserve the capabilities learned during pre-training.


Similar to our work, Jain et al. (2023) fine-tune a language model on synthetic tasks to study the changes in its mechanisms. Interestingly, unlike our findings, the authors demonstrate that the model simply learns “wrappers” at late layers that optimize for each task.


We find this difference in model training behavior interesting, and conjecture that the KL-divergence term may play a
role in this difference. Note that fine-tuning typically does
not entail a KL-divergence term. Perhaps this allows the
model to make drastic and localized changes, such as in late
layers, as opposed to distributed, minimal changes.


**7. Related Work**


**7.1. Alignment Algorithms**


Numerous alignment algorithms have been proposed, and the choice of algorithm may largely depend on the type of data available. Perhaps most commonly, human feedback data is used (Stiennon et al., 2020; Ouyang et al., 2022; Touvron et al., 2023) for methods such as PPO (Schulman et al., 2017) or DPO (Rafailov et al., 2023). When labels for only undesirable behavior are available, algorithms like unlikelihood training (Welleck et al., 2020) or Cringe (Adolphs et al., 2023; Xu et al., 2023) can be used. We study DPO because it is easy to use and currently widely adopted.


**7.2. Mechanistic Interpretability**


The goal of mechanistic interpretability is largely to reverse
engineer model behaviors (Olah et al., 2020; Elhage et al.,
2021; Geva et al., 2021). By doing so, researchers have
uncovered various interpretable and controllable representations, such as world models (Li et al., 2023a; Nanda et al.,
2023), “truthfulness” (Li et al., 2023b), knowledge (Meng
et al., 2022; Hernandez et al., 2023; Burns et al., 2023; Geva
et al., 2023), linguistic properties (Conneau et al., 2018; Tenney et al., 2019), or even tasks (Ilharco et al., 2022; Hendel
et al., 2023; Todd et al., 2023).


Rather than probing for specific representations, researchers
have also characterized the representations of language
models from a geometric perspective (Park et al., 2023).
Balestriero et al. (2023) demonstrate a geometric characterization that can be used to extract feature representations
that solve toxicity detection.



Similar to our work, Jain et al. (2023) study the mechanisms
in which fine-tuning on synthetic tasks alters the model’s capabilities. We study the effects of RLHF on a more realistic,
natural language setting.


**7.3. Jailbreaking Aligned Models**


Researchers have demonstrated that aligned models can be surprisingly easily jailbroken (Wallace et al., 2019; Zou et al., 2023; Wei et al., 2023; Carlini et al., 2023). Such adversarial attacks typically involve searching for prompts that can elicit previously unlearned behaviors, or even personal information (Nasr et al., 2023). Carlini et al. (2023) show that multimodal models can also be jailbroken. Wei et al. (2023) provide hypotheses, backed by empirical studies, as to why language models can be jailbroken.


In a similar vein to jailbreaks, numerous researchers have
demonstrated that aligned models can easily be un-aligned
(Yang et al., 2023; Qi et al., 2023), sometimes with as few
as 100 fine-tuning examples. We view our work as adding a
mechanistic understanding of such phenomena.


**8. Conclusion**


In this work we studied the mechanisms by which alignment
algorithms unlearn a capability, taking DPO and toxicity
as a case study. First, we uncovered how toxicity is represented and elicited in a pre-trained language model. We
find numerous vectors in MLP blocks that promote toxicity.
Simply subtracting these vectors from the residual stream
can suppress toxic outputs.


Second, we applied DPO to our language model, using
PPLM to carefully craft pairs of toxic and non-toxic continuations to Wikipedia prompts.


Third, we studied how our aligned model GPT2DPO averts
toxicity. Rather than removing the regions that elicit toxicity, GPT2DPO bypasses them by learning an _offset_ . Such
an offset is distributed amongst multiple value vectors, allowing minimal changes to every weight. This allows the
model to preserve its pre-trained behavior, while averting
toxic outputs, which matches the objective of the DPO loss.


Given this understanding, we demonstrated how to break
the alignment of GPT2DPO, reverting it back to its toxic
behavior. Namely, we simply increase the regions that elicit
toxicity, by scaling their corresponding key vectors.


We view our findings as a mechanistic case study for why aligned models can be jailbroken, and hope that this can lead to more robust alignment algorithms. Our code, models, and data can be found at https://github.com/ajyl/dpo_toxic.






**Acknowledgements**


We thank Ekdeep Singh Lubana for fruitful discussions, and
Santiago Serra Castro for helping with figures. This work
was supported via NSF under grant #2306372.


**References**


Adolphs, L., Gao, T., Xu, J., Shuster, K., Sukhbaatar, S., and Weston, J. The CRINGE loss: Learning what language not to model. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 8854–8874, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.493. URL https://aclanthology.org/2023.acl-long.493.


Balestriero, R., Cosentino, R., and Shekkizhar, S. Characterizing large language model geometry solves toxicity detection and generation. _arXiv preprint arXiv:2312.01648_,
2023.


Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), _Advances in Neural Information Processing Systems_, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.


Burns, C., Ye, H., Klein, D., and Steinhardt, J. Discovering latent knowledge in language models without supervision. In _The Eleventh International Conference on Learning Representations_, 2023. URL https://openreview.net/forum?id=ETKGuby0hcs.


Carlini, N., Nasr, M., Choquette-Choo, C. A., Jagielski, M., Gao, I., Koh, P. W., Ippolito, D., Tramèr, F., and Schmidt, L. Are aligned neural networks adversarially aligned? In _Thirty-seventh Conference on Neural Information Processing Systems_, 2023. URL https://openreview.net/forum?id=OQQoD8Vc3B.


Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra,
G., Roberts, A., Barham, P., Chung, H. W., Sutton, C.,
Gehrmann, S., et al. Palm: Scaling language modeling
with pathways. _Journal of Machine Learning Research_,
24(240):1–113, 2023.





cjadams, Sorensen, J., Elliott, J., Dixon, L., McDonald, M., nithum, and Cukierski, W. Toxic comment classification challenge, 2017. URL https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge.


Conneau, A., Kruszewski, G., Lample, G., Barrault, L., and Baroni, M. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Gurevych, I. and Miyao, Y. (eds.), _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 2126–2136, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1198. URL https://aclanthology.org/P18-1198.


Dathathri, S., Madotto, A., Lan, J., Hung, J., Frank, E.,
Molino, P., Yosinski, J., and Liu, R. Plug and play language models: A simple approach to controlled text generation. In _International Conference on Learning Repre-_
_sentations_, 2019.


Dinan, E., Logacheva, V., Malykh, V., Miller, A., Shuster,
K., Urbanek, J., Kiela, D., Szlam, A., Serban, I., Lowe,
R., et al. The second conversational intelligence challenge
(convai2). In _The NeurIPS’18 Competition: From Ma-_
_chine Learning to Intelligent Conversations_, pp. 187–208.
Springer, 2020.


Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. A mathematical framework for transformer circuits. _Transformer Circuits Thread_, 2021. https://transformer-circuits.pub/2021/framework/index.html.


Gehman, S., Gururangan, S., Sap, M., Choi, Y., and Smith, N. A. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Cohn, T., He, Y., and Liu, Y. (eds.), _Findings of the Association for Computational Linguistics: EMNLP 2020_, pp. 3356–3369, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.301. URL https://aclanthology.org/2020.findings-emnlp.301.


Geva, M., Schuster, R., Berant, J., and Levy, O. Transformer feed-forward layers are key-value memories. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pp. 5484–5495, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.446. URL https://aclanthology.org/2021.emnlp-main.446.


Geva, M., Caciularu, A., Wang, K., and Goldberg, Y. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pp. 30–45, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.3. URL https://aclanthology.org/2022.emnlp-main.3.


Geva, M., Bastings, J., Filippova, K., and Globerson, A. Dissecting recall of factual associations in auto-regressive
language models. _arXiv preprint arXiv:2304.14767_,
2023.


Hendel, R., Geva, M., and Globerson, A. In-context learning creates task vectors. In Bouamor, H., Pino, J., and Bali, K. (eds.), _Findings of the Association for Computational Linguistics: EMNLP 2023_, pp. 9318–9333, Singapore, December 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.findings-emnlp.624.


Hendrycks, D. and Gimpel, K. Gaussian error linear units
(gelus). _arXiv preprint arXiv:1606.08415_, 2016.


Hernandez, E., Sharma, A. S., Haklay, T., Meng, K., Wattenberg, M., Andreas, J., Belinkov, Y., and Bau, D. Linearity of relation decoding in transformer language models.
_arXiv preprint arXiv:2308.09124_, 2023.


Ilharco, G., Ribeiro, M. T., Wortsman, M., Schmidt, L., Hajishirzi, H., and Farhadi, A. Editing models with task arithmetic. In _The Eleventh International Conference on Learning Representations_, 2022.


Jain, S., Kirk, R., Lubana, E. S., Dick, R. P., Tanaka, H., Grefenstette, E., Rocktäschel, T., and Krueger, D. S. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks. _arXiv preprint arXiv:2311.12786_, 2023.


Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., and Wattenberg, M. Emergent world representations: Exploring a sequence model trained on a synthetic task. In _The Eleventh International Conference on Learning Representations_, 2023a. URL https://openreview.net/forum?id=DeG07_TcZvT.


Li, K., Patel, O., Viégas, F., Pfister, H., and Wattenberg, M. Inference-time intervention: Eliciting truthful answers from a language model. _arXiv preprint arXiv:2306.03341_, 2023b.





Li, Z., You, C., Bhojanapalli, S., Li, D., Rawat, A. S., Reddi, S. J., Ye, K., Chern, F., Yu, F., Guo, R., and Kumar, S. The lazy neuron phenomenon: On emergence of activation sparsity in transformers. In _The Eleventh International Conference on Learning Representations_, 2023c. URL https://openreview.net/forum?id=TJ2nxciYCk-.


Meng, K., Bau, D., Andonian, A. J., and Belinkov, Y. Locating and editing factual associations in GPT. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), _Advances in Neural Information Processing Systems_, 2022. URL https://openreview.net/forum?id=-h6WAS6eE4.


Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. In _International Conference on Learning Representations_, 2016.


Nanda, N., Lee, A., and Wattenberg, M. Emergent linear representations in world models of self-supervised sequence models. In _Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP_, pp. 16–30, 2023.


Nasr, M., Carlini, N., Hayase, J., Jagielski, M., Cooper, A. F., Ippolito, D., Choquette-Choo, C. A., Wallace, E., Tramèr, F., and Lee, K. Scalable extraction of training data from (production) language models. _arXiv preprint arXiv:2311.17035_, 2023.


Nostalgebraist. Interpreting GPT: The logit lens, 2020. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens.


Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. Zoom in: An introduction to circuits. _Distill_, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.


Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Gray, A., et al. Training language models to follow instructions with human feedback. In _Advances in Neural Information Processing Systems_, 2022.


Park, K., Choe, Y. J., and Veitch, V. The linear representation hypothesis and the geometry of large language models. In _Causal Representation Learning Workshop at NeurIPS 2023_, 2023.


Qi, X., Zeng, Y., Xie, T., Chen, P.-Y., Jia, R., Mittal, P., and Henderson, P. Fine-tuning aligned language models compromises safety, even when users do not intend to! _arXiv preprint arXiv:2310.03693_, 2023.





Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C. D., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. _arXiv preprint arXiv:2305.18290_, 2023.


Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_, 2017.


Sheng, E., Chang, K.-W., Natarajan, P., and Peng, N. The woman worked as a babysitter: On biases in language generation. In Inui, K., Jiang, J., Ng, V., and Wan, X. (eds.), _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pp. 3407–3412, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1339. URL https://aclanthology.org/D19-1339.


Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. _Advances in Neural Information Processing Systems_, 33:3008–3021, 2020.


Tenney, I., Das, D., and Pavlick, E. BERT rediscovers the classical NLP pipeline. In Korhonen, A., Traum, D., and Màrquez, L. (eds.), _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pp. 4593–4601, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1452. URL https://aclanthology.org/P19-1452.


Todd, E., Li, M. L., Sharma, A. S., Mueller, A., Wallace, B. C., and Bau, D. Function vectors in large language models. _arXiv preprint arXiv:2310.15213_, 2023.


Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. LLaMA: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023.


Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.


Wallace, E., Feng, S., Kandpal, N., Gardner, M., and Singh, S. Universal adversarial triggers for attacking and analyzing NLP. In Inui, K., Jiang, J., Ng, V., and Wan, X. (eds.), _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pp. 2153–2162, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1221. URL https://aclanthology.org/D19-1221.


Wei, A., Haghtalab, N., and Steinhardt, J. Jailbroken: How does LLM safety training fail? In _Thirty-seventh Conference on Neural Information Processing Systems_, 2023. URL https://openreview.net/forum?id=jA235JGM09.


Welleck, S., Kulikov, I., Roller, S., Dinan, E., Cho, K., and Weston, J. Neural text generation with unlikelihood training. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=SJeYe0NtvH.


Xu, J., Lee, A., Sukhbaatar, S., and Weston, J. Some things are more cringe than others: Preference optimization with the pairwise cringe loss. _arXiv preprint arXiv:2312.16682_, 2023.


Yang, X., Wang, X., Zhang, Q., Petzold, L., Wang, W. Y., Zhao, X., and Lin, D. Shadow alignment: The ease of subverting safely-aligned language models. _arXiv preprint arXiv:2310.02949_, 2023.


Zhang, Z., Lin, Y., Liu, Z., Li, P., Sun, M., and Zhou, J. MoEfication: Transformer feed-forward layers are mixtures of experts. In Muresan, S., Nakov, P., and Villavicencio, A. (eds.), _Findings of the Association for Computational Linguistics: ACL 2022_, pp. 877–890, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-acl.71. URL https://aclanthology.org/2022.findings-acl.71.


Zou, A., Wang, Z., Kolter, J. Z., and Fredrikson, M. Universal and transferable adversarial attacks on aligned language models. _arXiv preprint arXiv:2307.15043_, 2023.




**A. Projecting Value Vectors onto Vocabulary Space**


In this section we provide details from Geva et al. (2022) demonstrating that MLP value vectors promote or suppress the likelihood of specific tokens.


We start from Equation 2:



$$\mathrm{MLP}^{\ell}(\mathbf{x}^{\ell}) \;=\; \sum_{i=1}^{d_{mlp}} \sigma\!\left(\mathbf{x}^{\ell} \cdot \mathbf{k}_i^{\ell}\right) \mathbf{v}_i^{\ell} \;=\; \sum_{i=1}^{d_{mlp}} m_i^{\ell}\, \mathbf{v}_i^{\ell}.$$


Thus, we can consider the update from $\mathrm{MLP}^{\ell}$ as $d_{mlp}$ _sub-updates_, each sub-update being $m_i^{\ell} \mathbf{v}_i^{\ell}$.
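

To make the decomposition concrete, here is a minimal PyTorch sketch (illustrative code under our own assumptions, not the paper's released implementation; the model, layer index, and prompt are arbitrary choices) that extracts the coefficients $m_i^{\ell}$ and value vectors $\mathbf{v}_i^{\ell}$ from a Hugging Face GPT-2 model and checks that the sub-updates sum back to the MLP output:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Minimal sketch: split one MLP layer's output into its d_mlp sub-updates
# m_i * v_i. Model, layer, and prompt are illustrative choices.
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2-medium")

layer = 12
mlp = model.transformer.h[layer].mlp
captured = {}

def grab_input(module, inputs, output):
    captured["x"] = inputs[0]  # LayerNorm'd residual stream fed to the MLP

handle = mlp.register_forward_hook(grab_input)
with torch.no_grad():
    model(**tok("An illustrative prompt", return_tensors="pt"))
handle.remove()

x = captured["x"][0, -1]  # last position, shape (d_model,)
# Keys k_i are the columns of c_fc, so a single matmul gives every x . k_i;
# m_i = sigma(x . k_i), where sigma is GPT-2's GELU non-linearity.
m = mlp.act(x @ mlp.c_fc.weight + mlp.c_fc.bias)  # (d_mlp,)
V = mlp.c_proj.weight                             # (d_mlp, d_model); row i is v_i
sub_updates = m[:, None] * V                      # row i is the sub-update m_i * v_i
# Sanity check: sub-updates (plus the output bias) sum back to the MLP output.
assert torch.allclose(sub_updates.sum(0) + mlp.c_proj.bias, mlp(x), atol=1e-3)
```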


We can then analyze the influence that each sub-update has on the output distribution, i.e., on the probability of generating a token $w \in \mathcal{V}$ (taken from Geva et al. (2022)):


$$p\left(w \,\middle|\, \mathbf{x}^{\ell} + m_i^{\ell}\mathbf{v}_i^{\ell},\, E\right) \;=\; \frac{\exp\!\left(\mathbf{e}_w \cdot \mathbf{x}^{\ell} + \mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell}\right)}{Z\!\left(E\left(\mathbf{x}^{\ell} + m_i^{\ell}\mathbf{v}_i^{\ell}\right)\right)} \;\propto\; \exp\!\left(\mathbf{e}_w \cdot \mathbf{x}^{\ell}\right) \cdot \exp\!\left(\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell}\right) \tag{4}$$


where $\mathbf{e}_w$ is the token embedding of $w$, and $Z$ is the softmax normalization factor. This indicates that when $\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell} > 0$, the likelihood of $w$ increases, while $\mathbf{e}_w \cdot m_i^{\ell}\mathbf{v}_i^{\ell} < 0$ decreases it.
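

Continuing the sketch above, Equation 4 suggests a direct way to inspect a sub-update: score every token embedding against $m_i^{\ell}\mathbf{v}_i^{\ell}$. The index 771 below mirrors the $\mathrm{MLP}.\mathbf{v}_{771}^{12}$ example from Table 1; any index works:

```python
# Continuation of the sketch above: project one sub-update onto the vocabulary
# through the (tied) unembedding matrix E, i.e. compute e_w . (m_i v_i) for
# every token w; positive scores are promoted tokens, negative are suppressed.
E = model.lm_head.weight            # (vocab_size, d_model)
i = 771                             # illustrative index (cf. MLP.v_771^12, Table 1)
logit_update = E @ (m[i] * V[i])

promoted = torch.topk(logit_update, 10).indices
suppressed = torch.topk(-logit_update, 10).indices
print("promoted:  ", tok.convert_ids_to_tokens(promoted.tolist()))
print("suppressed:", tok.convert_ids_to_tokens(suppressed.tolist()))
```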


**B. Shift in Residual Streams**


In this section we provide more examples of residual streams shifting out of toxic regions; see Figure 6.











_Figure 6._ Shift in residual streams at layers 12, 18, and 13 (we show these three layers because $\mathrm{MLP}.\mathbf{v}_{771}^{12}$, $\mathrm{MLP}.\mathbf{v}_{2669}^{18}$, and $\mathrm{MLP}.\mathbf{v}_{668}^{13}$ are the next three vectors with the highest cosine similarity to $W_{\mathrm{Toxic}}$; see Table 1 and Figure 2).
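

The layer/index pairs named in the caption come from ranking value vectors by cosine similarity with the probe direction. A sketch of that ranking follows; `w_toxic` is a random placeholder for the actual trained probe $W_{\mathrm{Toxic}}$, which is not reproduced here:

```python
import torch
from transformers import GPT2LMHeadModel

# Sketch: rank all MLP value vectors across layers by cosine similarity with
# a toxicity probe direction. `w_toxic` is a placeholder; the real vector
# would be the weights of the trained linear probe.
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()
w_toxic = torch.randn(model.config.n_embd)

scores = []
for l, block in enumerate(model.transformer.h):
    V = block.mlp.c_proj.weight  # (d_mlp, d_model)
    cos = torch.nn.functional.cosine_similarity(V, w_toxic[None, :], dim=-1)
    scores += [(c, l, i) for i, c in enumerate(cos.tolist())]

# With the real probe, the top entries would correspond to Table 1 and this
# figure, e.g. MLP.v_771^12, MLP.v_2669^18, and MLP.v_668^13.
for c, l, i in sorted(scores, reverse=True)[:5]:
    print(f"layer {l:2d}  dim {i:4d}  cos = {c:.3f}")
```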






_Table 5._ Hyperparameters: DPO.


| Hyperparameter | Value |
|---|---|
| Learning rate | 1e-6 |
| Batch size | 4 |
| Optimizer | RMSprop |
| Gradient accumulation steps | 1 |
| Max gradient norm | 10 |
| Validation metric | loss/valid |
| Validation patience | 10 |
| DPO $\beta$ | 0.1 |
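

For orientation, here is where the $\beta = 0.1$ from Table 5 enters the DPO objective; this is the standard loss of Rafailov et al. (2023) written out as a sketch, not the training script used for the paper:

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective with the beta value from Table 5.

    Each argument is the summed log-probability of the chosen or rejected
    continuation under the policy (pi_*) or the frozen reference (ref_*).
    """
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(logits).mean()

# Dummy batch of 4 preference pairs (the Table 5 batch size).
batch = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*batch))
```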


**C. Shifts in Residual Streams vs. Shifts in MLP Value Vectors**


In this section we provide more examples of how MLP value vectors contribute in the $\delta\mathbf{x}$ direction at different layers.
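

As a rough illustration of how such a comparison can be set up: run the same prompt through the base model and a DPO-tuned checkpoint, take the difference of residual streams at a layer, and measure how each value vector aligns with that difference. The checkpoint path below is a placeholder, and this is a sketch rather than the paper's analysis code:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Sketch: residual-stream shift delta_x at one layer between the base model
# and a DPO-tuned model, and its alignment with that layer's value vectors.
base = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()
dpo = GPT2LMHeadModel.from_pretrained("path/to/dpo-checkpoint").eval()  # placeholder
tok = GPT2TokenizerFast.from_pretrained("gpt2-medium")

layer = 14  # illustrative layer
inputs = tok("An illustrative prompt", return_tensors="pt")

def resid(model):
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[k] is the residual stream entering block k.
    return out.hidden_states[layer][0, -1]

delta_x = resid(dpo) - resid(base)

# Cosine similarity between each value vector at this layer and delta_x.
V = base.transformer.h[layer].mlp.c_proj.weight
cos = torch.nn.functional.cosine_similarity(V, delta_x[None, :], dim=-1)
print(torch.topk(cos, 5))
```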




[Figure 7 panels: Layer 0; Layers 6–11.]



_Figure 7._ Shift in residual streams at layer 12 vs. shift in MLP value vectors ($\delta\mathbf{x}^{12}$ vs. $\delta_{\mathrm{MLP}}$).


**D. Hyperparameters**


Tables 5 and 6 contain the hyperparameters used for DPO and PPLM, respectively.




[Figure 8 panels: Layer 0; Layers 9–13.]



_Figure 8._ Shift in residual streams at layer 14 vs. shift in MLP value vectors ($\delta\mathbf{x}^{14}$ vs. $\delta_{\mathrm{MLP}}$).


_Table 6._ Hyperparameters: PPLM.


| Hyperparameter | Value |
|---|---|
| Step size | 0.4 |
| Temperature | 1 |
| Top-$k$ | 10 |
| Num iterations | 50 |
| Window length | 0 |
| Horizon length | 1 |
| Decay | False |
| Gamma | 1 |
| GM scale | 0.95 |
| KL scale | 0.1 |
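

To indicate where two of these knobs act: GM scale controls the geometric-mean fusion of the perturbed and unperturbed next-token distributions, and KL scale weights a KL penalty that keeps the perturbed distribution close to the unperturbed one during the activation updates. A simplified sketch of the fusion step (an illustration, not the original PPLM implementation):

```python
import torch

def fuse_distributions(p_pert, p_plain, gm_scale=0.95):
    """Geometric-mean fusion from PPLM-style decoding (simplified sketch).

    p_pert: next-token distribution after perturbing activations toward the
    attribute model; p_plain: the unmodified distribution. gm_scale is the
    GM scale entry in Table 6.
    """
    fused = p_pert.pow(gm_scale) * p_plain.pow(1.0 - gm_scale)
    return fused / fused.sum(dim=-1, keepdim=True)

# Dummy example over a 10-token vocabulary.
p_pert = torch.softmax(torch.randn(10), dim=-1)
p_plain = torch.softmax(torch.randn(10), dim=-1)
print(fuse_distributions(p_pert, p_plain))
```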




[Figure 9 panels: Layers 11–15.]


















_Figure 9._ Shift in residual streams at layer 16 vs. shift in MLP value vectors ($\delta\mathbf{x}^{16}$ vs. $\delta_{\mathrm{MLP}}$).













_Figure 10._ Shift in residual streams at layer 18 vs. shift in MLP value vectors ($\delta\mathbf{x}^{18}$ vs. $\delta_{\mathrm{MLP}}$).

