Published as a conference paper at ICLR 2024

## GROUP PREFERENCE OPTIMIZATION: FEW-SHOT ALIGNMENT OF LARGE LANGUAGE MODELS


**Siyan Zhao, John Dang, Aditya Grover**
Department of Computer Science, University of California, Los Angeles
{siyanz, john.dang, adityag}@cs.ucla.edu


ABSTRACT


Many applications of large language models (LLMs), ranging from chatbots to
creative writing, require nuanced subjective judgments that can differ significantly
across different groups. Existing alignment algorithms can be prohibitively expensive to apply to each group, requiring large amounts of group-specific preference data
and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base
LLM with an independent transformer module trained to predict the preferences
of a group for the LLM generations. For few-shot learning, we parameterize this
module as an in-context autoregressive transformer and train it via meta-learning
on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic
groups, global countries, and individual users. Our results demonstrate that GPO
not only aligns models more accurately but also requires fewer group-specific
preferences and less training and inference compute, outperforming
existing strategies such as in-context steering and fine-tuning methods. [1]


_Warning: This paper contains qualitative examples that may be viewed as offen-_
_sive or harmful._


1 INTRODUCTION


Large Language Models (LLMs) are increasingly being employed for a wide variety of domains,
with use-cases including creative writing, chatbots, and semantic search among others (Touvron
et al., 2023b; Taori et al., 2023; Ouyang et al., 2022; Bai et al., 2022a;b; Brown et al., 2020). Many
of these applications are inherently subjective and require generations that cater to different demographics, cultural and societal norms, or simply individual preferences (Hartvigsen et al., 2022;
Zhang et al., 2023; Solaiman & Dennison, 2021; Blodgett et al., 2020; Dunbar et al., 1997). By
virtue of their large-scale training, current language models are exposed to diverse data that allows
them to _represent_ a multitude of such opinions (Glaese et al., 2022; Durmus et al., 2023; Santurkar
et al., 2023). However, expressing these diverse opinions requires steering the LLM generations to
user requirements. This brings forth the key question studied in this work:


_How do we efficiently adapt LLMs to align closely with the opinions of specific interest groups?_


Broadly, prior work has explored two modes of steering language models, which trade-off training
complexity with test-time engineering. On one end, prompt engineering approaches avoid explicit
modifications to the parameters of the language model and elicit desired behavior by crafting a
suitable prompt. Often, the prompt is augmented with a few in-context examples (Brown et al.,
2020; Taori et al., 2023; Chowdhery et al., 2022). While prompting approaches are attractive as they
have no additional training complexity over the base model, prompt engineering can be quite tedious
and empirically poor when the desired behaviors are more complex (Zhou et al., 2022; Reynolds &
McDonell, 2021; Qin & Eisner, 2021; Lester et al., 2021). For example, Santurkar et al. (2023) show


[1Our code is available at the project website: https://siyan-zhao.github.io/llm-gpo/](https://siyan-zhao.github.io/llm-gpo/)















[Figure 1 graphic: panel (a) group preference datasets; panel (b) few-shot alignment of the base LLM to an unseen group.]





Figure 1: Overview of GPO. **Left:** We adopt a general definition of a _group_ to refer to any collection of agents (e.g., demographic groups, individual personas). Each group has its distinct preference toward a completion, which comprises a prompt and a response $(q, r)$, and each group exhibits a distribution of preferences over a range of completions. Group alignment aims to steer pretrained LLMs to preferences catering to a wide range of groups. For each group $g$, we represent its preference dataset as $\mathcal{D}_g = \{(x_1^g, y_1^g), \ldots, (x_n^g, y_n^g)\}$. Here, $y_i^g$ signifies the preference of group $g$ for a pair of given prompt $q_i^g$ and response $r_i^g$, while $x_i^g$ is its LLM representation obtained with $\pi_{\text{emb}}(q_i^g, r_i^g)$.
**Right:** After being trained through meta-learning, GPO provides a few-shot framework for aligning any base LLM to any unseen test group (e.g., group _e_) given a small amount of in-context preference data without fine-tuning, enabling **inference-time personalization**.


that LLMs over-emphasize opinions from privileged demographics and are challenging to rectify via
in-context prompting approaches.


On the other end, various kinds of alignment approaches have been proposed that seek to augment
or finetune the language model with an additional reward or scoring model. These approaches
can steer the model to achieve complex behaviors such as honesty, helpfulness, and harmlessness
(Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022; Bansal et al., 2023; Askell et al., 2021;
Song et al., 2023; Bai et al., 2022b; Thoppilan et al., 2022; Wang et al., 2022), but come at the cost
of additional complexity in gathering sufficient supervision to train reward models and subsequent
finetuning. As a result, existing alignment approaches, such as PPO (Schulman et al., 2017), DPO
(Rafailov et al., 2023), and Best-Of-N, are not designed to efficiently align LLMs when the number
of target groups is large and supervision for each group is limited.


We introduce _Group Preference Optimization_ (GPO), a few-shot framework for aligning Large Language Models to opinions and preferences of desired interest group(s). The key idea in GPO is
to view the alignment of an LLM policy as a few-shot adaptation problem within the embedded
space of an LLM. Specifically, GPO augments an arbitrary base LLM with an independent few-shot
preference module. This module is parameterized via an independent transformer and trained to
explicitly perform in-context supervised learning to predict preferences (targets) given joint embeddings (inputs) of prompts and corresponding LLM responses. The use of embeddings guarantees
that the preference module can effectively process in-context examples where each example is itself
a potentially long sequence of prompt and generated response. In-context learning further provides
the ability to efficiently adapt to new, unseen groups at test-time with only a handful of examples. See Figure 1 for an illustration. Finally, we incorporate various architectural design choices
to guarantee permutation-invariant inductive biases, building on recent work in in-context learning




over datasets (Nguyen & Grover, 2022). Once trained, the learned module can serve as a drop-in replacement for a reward or preference function in policy optimization and re-ranking algorithms.


In our experiments, we validate the effectiveness of GPO for aligning language models to the opinions of 22 diverse US demographic groups in the OpinionQA dataset (Santurkar et al., 2023) and
14 global countries in the GlobalOpinionQA dataset (Durmus et al., 2023). We consider 2 base
language models of different sizes: Alpaca 7B (Taori et al., 2023), an instruction-tuned version of
the LLaMA (Touvron et al., 2023a) 7B model, and the recent Llama2 13B chat (Touvron et al.,
2023b), which has been fine-tuned on a large dataset of human preferences for helpfulness and
safety. Empirically, we test GPO against a variety of prompting and finetuning baselines. On average, GPO surpasses the top-performing baselines by 7.1% when adapting to 22 US demographic
groups in OpinionQA, and by 8.4% when aligning with 14 global countries in GlobalOpinionQA.
Furthermore, GPO performs most effectively in adapting to individual preferences compared to other
baselines.


2 GROUP PREFERENCE OPTIMIZATION


2.1 PROBLEM SETUP


A large language model (LLM) expresses a probability distribution over natural language, denoted
as _π_ . To accomplish any task, such as question answering or summarization, a user crafts a suitable
query _q_ and prompts the LLM to generate a response _r_ obtained via sampling from the conditional
distribution _π_ ( _· | q_ ). Rather than decoding responses from a single distribution _π_ ( _· | q_ ), our goal in
this work is to align the language model to the preferences of a desired target group _g_ _[∗]_ _∈_ _G_ . Here,
we adopt a fairly general definition of a _group_ to refer to any collection of agents (e.g., demographic
groups, individual personas), and we use _G_ to denote the space of all possible groups. For training,
we assume that we are given access to preference datasets for a finite set of training groups _G_ train. In
practical applications, the number of groups can be large (e.g., different demographics and cultures)
while the amount of preference data for each group is generally small.


2.2 RELATED WORK


Existing approaches for steering LLMs are challenging to apply for group alignment, especially
when the underlying groups are complex and per-group supervision is scarce. Below, we summarize
key approaches and their trade-offs, which will also serve as baselines in our experiments. We
provide additional discussion of related work in Appendix I.


**Prompt Engineering:** These approaches modify the input prompt _q →_ _q_ _[′]_ to guide the LLM towards a group-aligned distribution (Jiang et al., 2023; Hwang et al., 2023; Deshpande et al., 2023).
Techniques include metadata utilization, where group-specific metadata is appended to the input
prompt. Further, the engineered prompts can be improved via in-context few-shot prompting, in
which the prompt is concatenated with examples of desired behavior. Given the flexibility of language, even a preference dataset _Dg_ could be converted into in-context examples for improving the
prompt. Prompt engineering approaches are computationally efficient as they involve no training,
but designing the prompt itself can be a tedious task that relies on heuristics (Zhou et al., 2022;
Lester et al., 2021; Qin & Eisner, 2021), which are not guaranteed to transfer well across different LLMs. Finally, it has been shown that prompt engineering yields limited gains in aligning LLMs to
complex groups on challenging survey datasets (Santurkar et al., 2023; Durmus et al., 2023).


**Gradient-based Alignment:** Algorithms that fine-tune the base Large Language Model (LLM)
or augment it with additional models have successfully aligned LLMs to complex behaviors like
honesty, harmlessness, and helpfulness. Broadly, there are two main classes of methods. The first
involves supervised learning, using a dataset of responses from a target group for fine-tuning, as
demonstrated in (Ouyang et al., 2022; Ziegler et al., 2019). This method is straightforward but
often suffers from limited generalization and requires extensive group-specific data. The second
class uses explicit human-derived preference data to train reward models for response filtering, reranking (e.g., Best-of-N, importance weighting (Grover et al., 2019)), or reinforcement learning
optimization (e.g., PPO (Ouyang et al., 2022; Schulman et al., 2017)), which may pose challenges
in hyperparameter tuning and stability (Sun et al., 2023; Santacroce et al., 2023). Newer methods
focus on direct optimization of preferences to enhance stability in RL techniques (Rafailov et al.,





[Figure 2 graphic: context points and padded target inputs feed into the GPO transformer, which outputs the predicted preferences.]

Figure 2: Illustration of the GPO architecture for a sequence of $n$ points, with $m$ context points and $n - m$ target points. The context $(x_{1:m}, y_{1:m})$ serves as few-shot conditioning for GPO. GPO processes the full sequence with a transformer and predicts the preference scores $\hat{y}_{m+1:n}$.


2023; Song et al., 2023). These approaches typically require access to large preference datasets. Our
method, GPO, is designed to explicitly align with various interest groups under limited supervision
constraints, positioning it within the explicit alignment framework.


2.3 PROPOSED METHOD


We desire an alignment approach that generalizes to a wide variety of groups, even when constrained
by the amount of per-group supervision. Accordingly, we view group alignment as a few-shot
learning problem and cast it in the framework of in-context meta-learning. For each training group
$g \in G_{\text{train}}$, we represent its preference dataset as $\mathcal{D}_g = \{(x_1^g, y_1^g), \ldots, (x_n^g, y_n^g)\}$, where $y_i^g$ denotes the preference of group $g$ for a pair of input prompt $q_i^g$ and LLM response $r_i^g$, and $x_i^g = \pi_{\text{emb}}(q_i^g, r_i^g)$ denotes the LLM representation of the concatenation of the prompt and response. Here, $\pi_{\text{emb}}$ can be the language model embedding function or an identity function that maintains the input's raw textual format. Note that while the inputs $x^g$ can be shared across different groups (e.g., universal surveys), the preferences differ for each group. At test time, our goal is to steer the default LLM distribution to a new distribution, say $\pi_{g^*}$, given a preference dataset $\mathcal{D}_{g^*}$ for the target group $g^*$. For brevity of presentation, we consider each preference to be a real-valued scalar. Our framework extends to other kinds of responses and preferences, such as short-answer questions (e.g., MCQs) and relative pairwise responses, as discussed in Appendix H.


Given the above setup, we design GPO to perform group alignment by learning a few-shot preference
model that augments the base LLM, as shown in Algorithm 1. Once learned, we can use it to update
the LLM via any standard preference optimization or reweighting algorithm (e.g., PPO, Best-of-N).
Specifically, we parameterize GPO via a transformer and train it to perform in-context learning on
the training preference datasets. Given a training group $g \in G_{\text{train}}$, we randomly split its preference dataset $\mathcal{D}_g$ into a set of $m$ context points and $n - m$ target points, where $n = |\mathcal{D}_g|$ is the size of the preference dataset for group $g$. Thereafter, GPO is trained to predict the target preferences $y_{m+1:n}^g$ given the context points $(x_{1:m}^g, y_{1:m}^g)$ and the target inputs $x_{m+1:n}^g$. Mathematically, we can express the objective as:

$$\mathcal{L}(\theta) = \mathbb{E}_{g,m}\left[\log p_\theta\left(y_{m+1:n}^g \mid x_{1:n}^g, y_{1:m}^g\right)\right] \qquad (1)$$

where the training group $g \sim G_{\text{train}}$ and the context size $m$ are sampled uniformly, and $\theta$ represents the parameters of our model. Figure 2 shows an illustration. For decoding, we make a conditional independence assumption: the target preferences are assumed to be independent of each other given the context samples and the target inputs:







$$\mathcal{L}(\theta) = \mathbb{E}_{g,m}\left[\sum_{i=m+1}^{n} \log p_\theta\left(y_i^g \mid x_{1:n}^g, y_{1:m}^g\right)\right] \qquad (2)$$



In our preliminary experiments, we also investigated alternatives which model the dependencies.
We did not find any noticeable improvements and hence use Eq. 2 for the rest of the paper.


Following Nguyen & Grover (2022), we modify the transformer architecture in GPO to explicitly account for permutation-invariant conditioning over in-context examples. In particular, we
discard the positional encodings commonly found in standard transformer architectures. However,




this loses the pairwise relation between $(x_i, y_i)$. To solve this, we concatenate each pair $(x_i, y_i)$ into a single token, informing the transformer of their pairwise relation. For the target inputs, we pad the $x_i$'s with a dummy token (e.g., 0). Finally, we employ a masking strategy where the context pairs can self-attend to each other, whereas the padded targets can only attend to the context points and not to other target points, following the conditional independence assumption in Eq. 2. GPO satisfies the properties of context invariance (Property 1) and target equivalence (Property 2).
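To make this construction concrete, the following is a minimal PyTorch-style sketch (not the authors' released implementation) of how the paired tokens and the attention mask described above could be built; the function name `build_gpo_inputs_and_mask` and the tensor shapes are our own illustrative choices.

```python
import torch

def build_gpo_inputs_and_mask(x_ctx, y_ctx, x_tgt):
    """Pack (x_i, y_i) pairs into per-point tokens and build the GPO attention mask.

    x_ctx: (m, d) embedded context inputs;  y_ctx: (m,) context preference scores.
    x_tgt: (t, d) embedded target inputs whose preferences are unknown.
    Returns tokens of shape (m + t, d + 1) and a boolean mask of shape
    (m + t, m + t), where mask[i, j] = True means token i may attend to token j.
    """
    m, t = x_ctx.shape[0], x_tgt.shape[0]
    # Concatenate each x_i with its y_i so a single token carries the pair;
    # no positional encodings are used, so the context order does not matter.
    ctx_tokens = torch.cat([x_ctx, y_ctx.unsqueeze(-1)], dim=-1)   # (m, d + 1)
    # Target inputs are padded with a dummy preference of 0.
    pad = torch.zeros(t, 1, dtype=x_tgt.dtype)
    tgt_tokens = torch.cat([x_tgt, pad], dim=-1)                   # (t, d + 1)
    tokens = torch.cat([ctx_tokens, tgt_tokens], dim=0)            # (m + t, d + 1)

    mask = torch.zeros(m + t, m + t, dtype=torch.bool)
    mask[:, :m] = True                 # every token may attend to the context points
    idx = torch.arange(m + t)
    mask[idx, idx] = True              # each token may also attend to itself
    # Target tokens therefore never attend to other targets, matching Eq. 2.
    return tokens, mask
```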


Note that even though GPO uses in-context learning, it is distinct from in-context prompting a base
LLM. The latter does not update the parameters of the base LLM and requires examples of desired
text generations. On the other hand, GPO learns a few-shot model which augments the base LLM
and only requires preferences of users for the LLM generations. That said, both these schemes are
complementary to each other as we can use any engineered prompt (e.g., with in-context examples)
as a drop-in replacement for the default prompt used in the inputs _x_ .
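As an illustration of this distinction, a trained GPO module could be queried for an unseen group as sketched below: no parameters of the base LLM or of GPO are updated, and only a handful of context preferences are supplied. Here `build_gpo_inputs_and_mask` refers to the sketch above, and `gpo_transformer` stands for the trained preference module; both names are our own.

```python
import torch

@torch.no_grad()
def adapt_to_new_group(gpo_transformer, ctx_x, ctx_y, query_x):
    """Few-shot, inference-time adaptation: given a handful of context preferences
    (ctx_x, ctx_y) from an unseen group, predict preference scores for query_x
    without any gradient updates to the base LLM or to GPO."""
    tokens, mask = build_gpo_inputs_and_mask(ctx_x, ctx_y, query_x)
    preds = gpo_transformer(tokens, mask)
    return preds[ctx_x.shape[0]:]      # predictions for the query inputs only
```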


**Scaling to long dataset contexts.** One challenge with GPO is that the effective sequence length
for the transformer can grow significantly if we use raw representations of prompts and responses
within each input _x_ . This can degrade performance and efficiency significantly. To overcome this
challenge, we propose to use embedded representations of text within _x_, as LLM representations
can contain sufficient information for solving tasks (Bhatia et al., 2023). In particular, we first
concatenate the prompt and response and compute their joint embedding $\pi_{\text{emb}}(q_i^g, r_i^g)$ using the base
LLM. We explored different techniques for extracting the joint embeddings from the base LLM, as
detailed in the ablation study in Appendix D, and found it best to use the average embedding of all
the tokens in the input.
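As a rough sketch of this embedding step, one could compute $\pi_{\text{emb}}(q, r)$ with an off-the-shelf Hugging Face base model as follows; the checkpoint name is a hypothetical stand-in, and averaging the final-layer hidden states over all tokens mirrors the choice described above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical checkpoint name; any decoder-only base LLM exposing hidden states works.
MODEL_NAME = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def joint_embedding(prompt: str, response: str) -> torch.Tensor:
    """pi_emb(q, r): average the final-layer hidden states over all tokens of
    the concatenated prompt + response, as described in the text."""
    inputs = tokenizer(prompt + " " + response, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state    # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)          # (hidden_dim,)
```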


**Algorithm 1** _Group Preference Optimization_ (GPO)


1: **Input:** LLM embedding function $\pi_{\text{emb}}$; preference datasets $\mathcal{D}_g \;\forall g \in G_{\text{train}}$.
2: Initialize GPO transformer with parameters $\theta$.
3: For all $g \in G_{\text{train}}$, cache embedded pairs $(x_i^g, y_i^g)$ in $\mathcal{D}_g^{\text{emb}}$, where $x_i^g = \pi_{\text{emb}}(q_i^g, r_i^g)$.
4: **repeat**
5: Sample training group $g \in G_{\text{train}}$.
6: Sample context size $m \sim \text{Uniform}[1, n-1]$, where $n = |\mathcal{D}_g|$.
7: Split $\mathcal{D}_g^{\text{emb}}$ randomly into $m$ context pairs $(x_{1:m}^g, y_{1:m}^g)$ and $(n - m)$ target pairs $(x_{m+1:n}^g, y_{m+1:n}^g)$.
8: Predict target preferences $y_{m+1:n}^g$ using context $(x_{1:m}^g, y_{1:m}^g)$ and padded targets $(x_{m+1:n}^g, 0)$.
9: Update $\theta$ to maximize the in-context objective $\mathcal{L}(\theta)$ in Eq. 2.
10: **until** convergence
11: **Output:** GPO transformer with learned parameters $\theta$
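A compact PyTorch-style sketch of Algorithm 1 is given below. It assumes the embedded pairs have already been cached (line 3) and that `gpo_transformer(tokens, mask)` returns one scalar prediction per point; treating the per-target log-likelihood as a Gaussian with fixed variance, so that line 9 reduces to an MSE loss on the target positions, is our assumption rather than a detail stated in the paper.

```python
import random
import torch
import torch.nn.functional as F

def train_gpo(gpo_transformer, optimizer, datasets_emb, num_steps=10_000):
    """Meta-train the GPO module (sketch of Algorithm 1).

    datasets_emb: dict mapping a group id to a pair (X, Y), where X has shape
    (n, d) with cached embeddings x_i = pi_emb(q_i, r_i) and Y has shape (n,)
    with the group's preference scores (Algorithm 1, line 3).
    """
    groups = list(datasets_emb.keys())
    for _ in range(num_steps):
        g = random.choice(groups)                 # line 5: sample a training group
        X, Y = datasets_emb[g]
        n = X.shape[0]
        m = random.randint(1, n - 1)              # line 6: context size ~ Uniform[1, n-1]
        perm = torch.randperm(n)                  # line 7: random context/target split
        ctx, tgt = perm[:m], perm[m:]

        tokens, mask = build_gpo_inputs_and_mask(X[ctx], Y[ctx], X[tgt])
        preds = gpo_transformer(tokens, mask)     # line 8: predict preferences for all points
        # Line 9: maximize the in-context objective of Eq. 2. Assuming a Gaussian
        # likelihood with fixed variance, this is equivalent to minimizing the MSE
        # on the target positions only.
        loss = F.mse_loss(preds[m:], Y[tgt])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```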


3 EXPERIMENTS


**Datasets.** While GPO is general-purpose and can be applied broadly to many language model use
cases, our work is focused on benchmarks which reflect a diverse landscape of human preferences.
Quantitatively evaluating the diverse opinions through open-ended questions (e.g., creative writing)
is inherently complex, and often demands expensive human labels. In contrast, closed-ended responses (e.g., multiple-choice questions) offer a standardized means of capturing diverse opinions,
thus reducing ambiguity and noise in evaluation. Survey datasets have been used in prior work (Santurkar et al., 2023; Durmus et al., 2023) to demonstrate the weaknesses of current LLMs in catering
to diverse populations, and hence can be effectively used to benchmark progress in group alignment.


We benchmark group alignment on 2 recent survey datasets: (1) _OpinionQA_ (Santurkar et al., 2023),
which spans 22 US demographic groups (e.g. income, political ideology, race, and sex) across 500
multiple-choice questions, and (2) _GlobalOpinionQA_ (Durmus et al., 2023), which contains multiple-choice questions answered by participants from 14 countries, amounting to 2,554 questions covering various topics including politics, media, technology, religion, race, and ethnicity. Survey questions are shared across different groups, so we use $x_i$ (and not $x_i^g$) for brevity henceforth.
Detailed dataset descriptions can be found in Appendix B.




Next, we construct the preference dataset $\mathcal{D}_g$ for each group $g$ from the survey data. Let $Q$ be the set of all survey questions and $G$ the groups participating in the survey. Consider a survey question $q \in Q$ with $T$ unique answer options. Each option can be interpreted as a response $r$, yielding a set of $T$ viewpoints $\{x_i\}_{i=1}^T = \{\pi_{\text{emb}}(q, r_i)\}_{i=1}^T$. The preference score $y_i^g$ for the viewpoint $x_i$ is obtained by aggregating the survey responses given to $(q, r_i)$ by group $g$. These scores are normalized within each question to form the group preference distribution vector $P_g(q) = [y_1^g, \ldots, y_T^g]$ for question $q$, such that $\sum_{i=1}^T y_i^g = 1$. Repeating this process for all $n$ questions in $Q$ yields $\mathcal{D}_g$. During training and testing, all viewpoints from the same question belong to either the context or the target set. Finally, we apply a softmax over the predictions for each question, yielding normalized preference scores for each survey question in the target set.
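Concretely, the per-question preference vector $P_g(q)$ is simply the normalized histogram of the group's answers, as in the short sketch below (the function name and example counts are illustrative).

```python
import numpy as np

def group_preference_distribution(answer_counts):
    """Turn raw per-option answer counts from group g on one question q into the
    normalized preference vector P_g(q) = [y_1, ..., y_T] with sum_i y_i = 1."""
    counts = np.asarray(answer_counts, dtype=float)
    return counts / counts.sum()

# Example: 120 / 45 / 15 respondents in the group chose options A / B / C.
print(group_preference_distribution([120, 45, 15]))   # -> [0.667 0.25  0.083] (approx)
```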


**Evaluation Metric.** To rigorously assess the degree of alignment between two opinion distributions $P_1$ and $P_2$, we calculate the _Alignment Score_, denoted $\mathcal{A}(P_1, P_2; Q)$, over a set of questions $Q$. This metric employs a similarity function $\mathrm{Sim}$:

$$\mathcal{A}(P_1, P_2; Q) = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{Sim}\big(P_1(q), P_2(q)\big) \qquad (3)$$



For the OpinionQA dataset (Santurkar et al., 2023) with its ordinal answers, we employ the one-dimensional Wasserstein distance as our similarity metric. Conversely, for the GlobalOpinionQA
dataset, which often presents non-ordinal answer structures, we use the Jensen-Shannon Distance as
suggested by the original paper. Further details are available in Appendix B.
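A possible implementation of Eq. 3 with these two similarity choices is sketched below; mapping each distance D to a similarity via Sim = 1 - D (with the Wasserstein distance normalized by its maximum over the option support) is our assumption, since the exact normalization is deferred to Appendix B.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

def alignment_score(P1, P2, ordinal=False):
    """A(P1, P2; Q) from Eq. 3, averaged over questions.

    P1, P2: lists of per-question preference vectors (each sums to 1).
    ordinal=True uses the 1-D Wasserstein distance over the option indices
    (OpinionQA); otherwise the Jensen-Shannon distance (GlobalOpinionQA).
    """
    sims = []
    for p1, p2 in zip(P1, P2):
        p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
        T = len(p1)
        if ordinal:
            support = np.arange(T)
            # Normalize by the maximum possible distance on this support (T - 1).
            d = wasserstein_distance(support, support, p1, p2) / (T - 1)
        else:
            d = jensenshannon(p1, p2, base=2)    # in [0, 1]
        sims.append(1.0 - d)                     # similarity = 1 - distance (assumption)
    return float(np.mean(sims))
```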


**Base Large Language Models.** We use two different-sized LMs as our base models for baselines
and GPO. The first, Alpaca-7B (Taori et al., 2023), is an instruction-tuned variant of Llama-7B (Touvron et al., 2023a), crafted using 52K instruction-response pairs. The second, Llama2-13B-chat, is finetuned on over 1M human preference labels for helpfulness and safety. For baseline methods
requiring updates of model weights, we use low-rank adaptation (LoRA) (Hu et al., 2021).


**Baselines.** We compare our method against extensive baseline approaches as introduced below.
For a detailed description of the baselines, refer to Appendix F. (1) **Uniform Distribution** assumes equal preference scores for all options; (2) _**LM Base**_, following Santurkar et al. (2023) and Durmus et al. (2023), obtains the LM's default opinion distribution $P_\pi(q)$ by extracting and normalizing
prediction scores for answer choices; (3) _**LM Steered**_ uses prompting strategies to convey group
information to the LM (examples in Appendix K); (4) _**Few-shot Prompt**_ appends a few examples
showing a group’s preferences for _m_ context questions to the prompt, where _m_ is constrained by
the LM’s context window size and $c_g$ includes the context samples $\{x_i, y_i\}_{i=1}^m$ (see Figure 8 in the
Appendix); (5) _**SFT per group**_ fine-tunes the LM separately for each group _g_ with a maximum
likelihood loss on augmented training examples created by sampling responses _r_ according to the
preference distribution _Pg_ ( _q_ ); (6) _**Reward Model**_ trains a per-group reward model by adding a linear
MLP head on a base LLM and training it on $m$ context samples $\{x_i, y_i^g\}_{i=1}^m$ with MSE loss to predict
preference scores; and (7) _**In-Context Finetune**_ investigates few-shot in-context alignment ability
by partitioning the group set into meta-train/test sets, splitting training questions into context/query,
supplementing each query $q$ with a context $c_g$ of $m$ ground-truth preferences, and fine-tuning the
LM with maximum likelihood loss where responses are sampled according to _Pg_ ( _q_ ).


3.1 RESULTS AND DISCUSSION


**Adapting to US demographics in OpinionQA.** We conducted experiments with three distinct
meta-train and meta-test splits, allocating 40%, 60%, and 80% of the 22 US demographic groups, respectively, as the meta-train groups $G_{\text{train}}$. The same group splits were used for the _In-context Finetune_ baseline and GPO. For other baselines that operate on a per-group basis, we calculated the
alignment score for the meta-test groups and present results averaged over three random seeds.


Our results are presented in Figure 3. The Alpaca-7b base model exhibits alignment scores similar to those of a uniform distribution. This does not necessarily imply an absence of biases, as averaging across groups can obscure biases towards certain demographics. Prior work has found that LMs may disproportionately over-represent some groups and under-represent others (Santurkar et al., 2023). However, the Llama2-13b-chat base model exhibits a lower alignment score as

















[Figure 3 graphic: bar charts of alignment scores for Uniform, LM-base, Few-shot prompt, LM-steered, Reward model, SFT per group, In-context finetune, and GPO on OpinionQA and GlobalOpinionQA, with Alpaca-7b and Llama2-13b-chat base models, under 40%, 60%, and 80% meta-train group splits.]

Figure 3: Alignment score comparisons on the OpinionQA and GlobalOpinionQA datasets with Alpaca-7b and Llama2-13b-chat as base models. Results are averaged across group split setups and three random seeds, with standard deviations provided.

compared to the uniform distribution. This might be attributed to its fine-tuning for safety, causing
the model to lean towards the least harmful option, which can be seen from the qualitative examples
in Appendix J. When we incorporate group information into the LMs, we deploy various prompting
strategies—QA, BIO, and PORTRAY—to convey this information (see Appendix K for examples).
We report results for the strategy that yields the best alignment as _LM-steered_ . Given explicit group
information, _Alpaca-7b-steered_ displays slightly lower relative gains as compared to _Llama2-13b-_
_steered_ . Next, when provided with few-shot group preference context samples, which serve as
an implicit method of conveying group information, the LM’s alignment performance significantly
declines compared to the base language model’s performance. We hypothesize this decline might be
due to the prompting format being outside the distribution of the language model’s training corpus.


For methods involving gradient updates, we maintain a consistent number of context samples across
all baselines, which is also the same number of context examples used in _Few-shot prompt_ . Specifically, we use 15 samples for Alpaca-7b and 20 for Llama2-13b experiments. With gradient updates,
_SFT per-group_ brings improvement as compared to other gradient-free steering methods. However,
training a _Reward Model_ to predict alignment scores from context samples and subsequently using it to predict preference scores for query examples underperforms SFT methods. This outcome may
suggest a risk of overfitting when working with a limited sample size.


GPO achieves notably higher alignment scores on this dataset compared to the baselines for both
the Alpaca and Llama2 base models. GPO uses the same number of context samples for adaptation
and the test groups are unseen during training. We observed performance increases when a larger
number of meta-training groups was used. GPO's closest baseline, the _In-context Finetune_ method, in which the LMs are trained to infer from few-shot context samples, ranks second. On average over
the two base models and the three group split settings, GPO achieves a 7.1% increase over the
_In-context Finetune_ .


Figure 4 qualitatively illustrates the predicted alignment scores from different methods in response
to an OpinionQA example concerning climate change concerns across six demographic groups.
The first row depicts the ground truth group opinion distribution. Given just 15 context samples,




GPO successfully adapts to match the opinion distributions of different groups. For instance, it
increases preference for option A when adapted to the group _Hindus_, while the steered LMs do
not exhibit correct distribution changes. For example, _Llama2-13b-steered_ appears to be biased
towards a specific option, overrepresenting it rather than accurately reflecting the distribution of the
targeted group. On the contrary, in demographics with a more balanced distribution like _College_
_graduate/some postgrad_, GPO maintains this balance more consistently. This demonstrates that
GPO does not merely adapt to the overall dataset group preferences, but can align to specific groups
using limited context.




Figure 4: Qualitative comparison of GPO alignment with steered LMs, where each pie chart denotes
the preference distribution of the group. Here, GPO uses Alpaca-7b’s embedding.


**Adapting to cross-nation groups in GlobalOpinionQA.** The diverse and highly contrasting
opinions across nations in GlobalOpinionQA present a more complex landscape than the OpinionQA dataset. Trends in the GlobalOpinionQA dataset closely followed those observed in OpinionQA, as depicted in Figure 3. Notably, the alignment score of
the Alpaca-7b base model surpasses that of the uniform distribution, while the Llama2-13b base model
shows lower alignment. For Alpaca-7b _LM-base_, this could suggest that the base models might
exhibit stronger alignment to certain specific countries and this hypothesis is supported by the increased standard deviation of the Alpaca-7b _LM-base_ alignment scores, hinting at varied alignment
across different countries, a phenomenon also reported in the dataset (Durmus et al., 2023). Alternatively, this could imply that the base models tend to align more with the dataset’s general
respondents, which naturally would exceed a uniform distribution. With gradient updates, the _SFT_
_per-group_ method here surpasses the alignment performance of steering methods, while the _Reward_
_Model_ underperforms SFT methods. The _In-context Finetune_ method emerges as the third-best
and second-best in terms of alignment for Alpaca-7b and Llama-13b respectively, which showcases
enhanced in-context few-shot adaptation post meta-training. However, its training demands are substantially higher; it requires approximately 4.7 times more training time as compared with GPO on
an NVIDIA RTX A6000 to achieve the depicted performance. Averaged across both base models
and the three group split scenarios, GPO posts an 8.4% improvement over the second-best baseline.


**Scalability with Increasing Context Samples.** We evaluate the scalability of different methods
with respect to the size of the in-context examples. Figure 5 demonstrates that for Nigeria in the
GlobalOpinionQA dataset, GPO enhances alignment scores with fewer than 10 preference context samples. The performance of _Few-shot Prompt_ improves with more examples but plateaus
with greater variance. In comparison, _In-context Finetune_ exhibits better adaptability after meta-training than _Few-shot Prompt_, yet its alignment is still suboptimal and the number of group context
samples is limited by the context window size of the LM. Both _SFT per-group_ and _Reward Model_













Figure 5: Alignment score of various methods based on Llama2-13B with varying group
context sample size. Evaluation conducted on
survey questions for Nigeria from the GlobalOpinionQA dataset. The shaded region represents the standard deviation across three different seed results.





show incremental improvements with added context samples; however, their sample efficiency is
modest. In contrast, GPO adeptly adapts to groups in a sample-efficient manner.













Figure 6: Individual alignment accuracy comparisons from the OpinionQA dataset. **Left:** Individual
alignment on the gun topic survey. **Right:** Comprehensive comparison across all 15 topics, showcasing the performance of various methods on diverse subjects. Experiments use Alpaca-7b as the
base LM. Both GPO and _In-context finetune_ are meta-trained on 40% of individuals and evaluated
on the remaining 60%. The horizontal red line represents the average accuracy of a random model.


**Adapting to Individual Preferences.** Variations in individual opinions can manifest even within
the same demographic groups (Hwang et al., 2023). We align GPO with individual-level preferences.
From the OpinionQA dataset, encompassing 15 surveys across 15 unique topics, we randomly select
100 participants from each survey, along with their responses to 30 topic-related questions. For each
individual, 40% of the questions serve as context samples and 60% as queries. We use Alpaca-7b here
as the base model. To steer the LM with individual information, we create individual context from
combined demographic variables, such as income, religion, and age, as demonstrated in Appendix
Figure 9. Since each individual selects only one option, we report alignment accuracy rather than the alignment score, treating the option with the highest predicted preference score as the prediction. Due to
computational constraints, we confined our evaluations of the SFT per-individual and reward model
methods to one survey. Since both of them operate on a per-individual basis, the training needed
for about a thousand individuals made broader comparisons of the two baselines impractical. In
contrast, other baselines, including in-context finetune and GPO, were assessed across all 15 survey
topics. Across the full breadth of the 15 topics, GPO consistently exhibited superior performance in
adapting to individual preferences relative to other baselines, as depicted in Figure 6.
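
The accuracy computation described above admits a compact implementation. The following is a minimal sketch, assuming the predicted preference scores per question and the index of each individual's chosen option are already available; the variable names are illustrative.

```python
import numpy as np

def individual_alignment_accuracy(pred_scores: np.ndarray, chosen: np.ndarray) -> float:
    """pred_scores: (num_queries, num_options) predicted preference scores;
    chosen: (num_queries,) index of the option the individual actually selected."""
    predicted = pred_scores.argmax(axis=1)   # option with the highest predicted score
    return float((predicted == chosen).mean())

# Example: 3 query questions with 4 answer options each; 2 of 3 predictions match.
scores = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.4, 0.3, 0.2, 0.1],
                   [0.2, 0.2, 0.5, 0.1]])
print(individual_alignment_accuracy(scores, np.array([1, 0, 3])))  # ~0.667
```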


4 CONCLUSION


We introduced GPO, a novel method for few-shot alignment of LLM outputs to both individual and
group preferences given limited preference data. GPO is trained on a meta-train dataset containing
group-wise preference data. During inference, GPO adapts to a new test group, predicting aligned
preferences given a few context examples from that group. GPO significantly outperforms prior
methods as measured by alignment score for group preference alignment while requiring no gradient
updates to the base LLM. We find that GPO is also more sample efficient, improving alignment score
significantly more than baseline methods while using fewer samples, and is effective across multiple
popular open-source LLMs of various parameter and pre-training dataset scales.




ETHICS STATEMENT


GPO can be used to align models to preferences of diverse interest groups which can provide a
more positive, useful, and inclusive experience for end users of LLM applications. We acknowledge
that aligning LLMs to the preferences of demographic groups can have malicious applications. For
example, making LLMs more capable of producing responses tailored to specific users may be misused to convince members of a group to perform unethical actions or to show them how to do so.
Additionally, GPO’s methodology can be used to align a model to a group’s preferences even if
those preferences are harmful. Biased, offensive, and harmful preferences present in the meta-train
or meta-test datasets may be reflected in the outputs of GPO. Future work should investigate methods
for aligning LLM outputs to group preferences without amplifying harmful outputs.


ACKNOWLEDGMENTS


This research is supported by a Google Award for Inclusion Research and an Adobe Data Science
Award. We want to thank Hritik Bansal for insightful discussions.




REFERENCES


Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones,
Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory
for alignment. _arXiv e-prints_, pp. arXiv–2112, 2021.


Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn
Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless
assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_,
2022a.


Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_, 2022b.


Hritik Bansal, John Dang, and Aditya Grover. Peering through preferences: Unraveling feedback
acquisition for aligning large language models. _arXiv preprint arXiv:2308.15812_, 2023.


Kush Bhatia, Avanika Narayan, Christopher De Sa, and Christopher Ré. Tart: A plug-and-play
transformer module for task-agnostic reasoning. _arXiv preprint arXiv:2306.07536_, 2023.


Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is
power: A critical survey of “bias” in NLP. In _Proceedings of the 58th Annual Meeting of the_
_Association for Computational Linguistics_, pp. 5454–5476, 2020.


Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. _Advances in neural information processing systems_, 33:1877–1901, 2020.


Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_, 2022.


Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik
Narasimhan. Toxicity in chatgpt: Analyzing persona-assigned language models. _arXiv preprint_
_arXiv:2304.05335_, 2023.


RIM Dunbar, Anna Marriott, and NDC Duncan. Human conversational behavior. _Human Nature_, 8
(3):231–246, 1997.


Esin Durmus, Karina Nguyen, Thomas I Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin,
Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, et al. Towards measuring
the representation of subjective global opinions in language models. _arXiv e-prints_, pp. arXiv–
2306, 2023.


Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. Improving alignment of
dialogue agents via targeted human judgements. _arXiv preprint arXiv:2209.14375_, 2022.


Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric J Horvitz, and
Stefano Ermon. Bias correction of learned generative models using likelihood-free importance
weighting. _Advances in neural information processing systems_, 32, 2019.


Christian Haerpfer, Ronald Inglehart, Alejandro Moreno, Christian Welzel, Kseniya Kizilova, Jaime
Diez-Medrano, Milena Lagos, Pippa Norris, Eduard Ponarin, and Bianca Puranen. _World values_
_survey: Round seven – country-pooled datafile version 5.0.0_ . 2022.


Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar.
Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics_
_(Volume 1: Long Papers)_, pp. 3309–3326, 2022.


Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen,
et al. Lora: Low-rank adaptation of large language models. In _International Conference on_
_Learning Representations_, 2021.




EunJeong Hwang, Bodhisattwa Prasad Majumder, and Niket Tandon. Aligning language models to
user opinions. _arXiv e-prints_, pp. arXiv–2305, 2023.


Hang Jiang, Xiajie Zhang, Xubo Cao, Jad Kabbara, and Deb Roy. Personallm: Investigating the ability of gpt-3.5 to express personality traits and gender differences. _arXiv preprint_
_arXiv:2305.02547_, 2023.


Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _International_
_Conference on Learning Representations (ICLR)_, San Diego, CA, USA, 2015.


Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt
tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Pro-_
_cessing_, pp. 3045–3059, 2021.


Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _International Confer-_
_ence on Learning Representations_, 2018.


Tung Nguyen and Aditya Grover. Transformer neural processes: Uncertainty-aware meta learning
via sequence modeling. In _International Conference on Machine Learning_, pp. 16569–16594.
PMLR, 2022.


Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow
instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:
27730–27744, 2022.


PewResearch. Writing survey questions. URL https://www.pewresearch.org/our-methods/u-s-surveys/writing-survey-questions/.


Guanghui Qin and Jason Eisner. Learning how to ask: Querying lms with mixtures of soft prompts.
In _Proceedings of the 2021 Conference of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Technologies_, pp. 5203–5212, 2021.


Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning. 2018.


Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language
models are unsupervised multitask learners. 2019.


Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. _arXiv_
_preprint arXiv:2305.18290_, 2023.


Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the
few-shot paradigm. In _Extended Abstracts of the 2021 CHI Conference on Human Factors in_
_Computing Systems_, pp. 1–7, 2021.


Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, and Yelong Shen. Efficient rlhf: Reducing
the memory usage of ppo. _arXiv preprint arXiv:2309.00754_, 2023.


Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto.
Whose opinions do language models reflect? _arXiv preprint arXiv:2303.17548_, 2023.


John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
optimization algorithms. _arXiv preprint arXiv:1707.06347_, 2017.


Irene Solaiman and Christy Dennison. Process for adapting language models to society (palms)
with values-targeted datasets. _Advances in Neural Information Processing Systems_, 34:5861–
5873, 2021.


Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang.
Preference ranking optimization for human alignment. _arXiv preprint arXiv:2306.17492_, 2023.


Simeng Sun, Dhawal Gupta, and Mohit Iyyer. Exploring the impact of low-rank adaptation on the
performance, efficiency, and regularization of rlhf. _arXiv preprint arXiv:2309.09055_, 2023.




Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
https://github.com/tatsu-lab/stanford_alpaca, 2023.


Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog
applications. _arXiv preprint arXiv:2201.08239_, 2022.


Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023a.


Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_, 2023b.


Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions.
_arXiv preprint arXiv:2212.10560_, 2022.


Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B
Hashimoto. Benchmarking large language models for news summarization. _arXiv preprint_
_arXiv:2301.13848_, 2023.


Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and
Jimmy Ba. Large language models are human-level prompt engineers. In _The Eleventh Interna-_
_tional Conference on Learning Representations_, 2022.


Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul
Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. _arXiv_
_preprint arXiv:1909.08593_, 2019.




A LIMITATIONS


We highlight a few limitations and directions for future work below:


**Opinion Datasets:** We use datasets containing opinions of various demographic groups to validate
GPO. Survey data is imperfect and may not be fully representative of an entire group’s population.
Additionally, all the datasets that we use in this work are in English. When aligning to groups, the
language that is used to collect preference data and during alignment may have a significant effect
on alignment metrics, especially if the inputs and outputs are in a different language than the native
language of members of a group. Future work should also investigate more challenging few-shot
alignment settings, such as adapting to individual creative preferences where there may be much
higher variance between group preferences.


**Multiple-choice Format:** Like many previous works, we focus on a multiple-choice format due
to the availability of existing datasets and ease of quantitative evaluations. LLMs are capable of
producing much more complicated long-form responses, and it is important that alignment methods
can be extended to the general long-form response setting. While the GPO framework extends more
broadly to different formats of LLM generations, future work should validate the effectiveness of
GPO for longer form responses and additional considerations such as group preference feedback
representation and evaluation metrics needed to extend to the long-form setting.


**Alignment Objectives:** When aligning LLMs, multiple factors beyond group preference alignment
are also very important. Aligning to group preferences may result in worse alignment for other
factors, including harmlessness and helpfulness, especially if the group preference data includes
examples that contradict these values. Moreover, aligning to group preferences may amplify undesirable behaviors from LLMs including biased or harmful outputs. Future work should study the
impact of group alignment on other important alignment factors and methods to reduce regressions
for these factors when aligning to group preferences.


**Model Initialization:** Initializing GPO with a pretrained LM transformer backbone might offer
advantages in performance. Specifically, leveraging a pretrained backbone could potentially enhance
GPO’s capacity to encode world knowledge, thereby improving its ability to generalize to OOD
examples. Investigating the performance and generalization benefits of this initialization approach
could be a promising direction for future work.


B DATASET DETAILS


B.1 OPINIONQA DATASET


This dataset is sourced from the Pew American Trends Panel (PewResearch). A unique structural characteristic of this dataset is that the answer choices in the survey questions are principally ordinal (Santurkar et al., 2023). For instance, options often extend across a spectrum, ranging from categories
such as “A great deal,” “Fair amount,” “Not much,” to “Not at all.” Traditional divergence metrics,
such as the Kullback-Leibler (KL) divergence, are ill-suited for this task, as they fail to encapsulate
the ordinal relationships inherent in the answer choices. In this dataset, the ordinal answer choices
are mapped to a metric space using corresponding positive integers. For example, a typical mapping
in our dataset might look like $\{A: 1, B: 2, \ldots, D: 4\}$. Therefore, the 1-D Wasserstein distance (WD) metric
is used. The alignment score for two opinion distributions $P_1$ and $P_2$ over a question set $Q$ is consequently expressed as:



$$\mathcal{A}(P_1, P_2; Q) = \frac{1}{|Q|} \sum_{q \in Q} \left[ 1 - \frac{\mathrm{WD}(P_1(q), P_2(q))}{N - 1} \right] \tag{4}$$



Here, _N_ denotes the total number of selectable answer options, excluding the option to refuse. The
term _N −_ 1 functions as a normalization factor, representing the maximal possible Wasserstein
distance in the given metric space. The score is bounded within the interval [0 _,_ 1], with a score of 1
indicating perfect alignment between the two distributions.
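
As a minimal sketch of Equation 4, the score can be computed with `scipy.stats.wasserstein_distance`, assuming each opinion distribution is a normalized array over the $N$ ordinal options mapped to $\{1, \ldots, N\}$; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def alignment_score_wd(P1, P2):
    """P1, P2: lists of per-question answer distributions (arrays over N ordinal options)."""
    scores = []
    for p1, p2 in zip(P1, P2):
        n = len(p1)
        support = np.arange(1, n + 1)                        # ordinal options mapped to {1, ..., N}
        wd = wasserstein_distance(support, support, p1, p2)  # 1-D Wasserstein distance
        scores.append(1.0 - wd / (n - 1))                    # normalize by the maximal distance N - 1
    return float(np.mean(scores))

# Identical distributions give a perfect score of 1.0.
p = [np.array([0.1, 0.2, 0.3, 0.4])]
print(alignment_score_wd(p, p))  # 1.0
```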


We consider 22 demographic groups within the US from this dataset, as outlined in Table 1. Our analysis focuses on 500 contentious questions, characterized by frequent disagreements among the considered subgroups. These questions are the same ones used in the steerability analysis
presented in the OpinionQA dataset (Santurkar et al., 2023).

|Attribute|Demographic Group|
|---|---|
|CREGION|Northeast, South|
|EDUCATION|College graduate/some postgrad, Less than high school|
|GENDER|Male, Female|
|POLIDEOLOGY|Liberal, Conservative, Moderate|
|INCOME|More than $100K+, Less than $30,000|
|POLPARTY|Democrat, Republican|
|RACE|Black, White, Asian, Hispanic|
|RELIG|Protestant, Jewish, Hindu, Atheist, Muslim|



Table 1: Demographic groups considered in our analysis from the OpinionQA dataset.


B.2 GLOBALOPINIONQA DATASET


The survey questions in this dataset are sourced from the Pew Research Center's Global Attitudes
surveys (PewResearch) and the World Values Survey (Haerpfer et al., 2022). These questions generally do not have an ordinal structure in their options, and ordinal scores are not provided in the
dataset. Therefore, we use a different metric for evaluating alignment on this dataset.



$$\mathcal{A}(P_1, P_2; Q) = \frac{1}{|Q|} \sum_{q \in Q} \left[ 1 - JD(P_1(q), P_2(q)) \right] \tag{5}$$



Here, $JD$ denotes the Jensen-Shannon distance, following the choice made by Durmus et al. (2023).
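
A corresponding sketch for Equation 5 uses `scipy.spatial.distance.jensenshannon`; computing the distance in base 2 so that it lies in $[0, 1]$ is an assumption made here and may differ from the original evaluation.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def alignment_score_jd(P1, P2):
    """Average of 1 - JD(P1(q), P2(q)) over questions; base=2 bounds JD in [0, 1] (assumption)."""
    return float(np.mean([1.0 - jensenshannon(p1, p2, base=2) for p1, p2 in zip(P1, P2)]))

print(alignment_score_jd([np.array([0.5, 0.5])], [np.array([0.5, 0.5])]))  # 1.0
```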


|Country|
|---|
|Nigeria|
|Egypt|
|India (Current national sample)|
|China|
|Japan|
|Germany|
|France|
|Spain|
|United States|
|Canada|
|Brazil|
|Argentina|
|Australia|
|New Zealand|


Table 2: List of countries considered in our study, from GlobalOpinionQA dataset.


Out of the 138 countries in the original GlobalOpinionQA dataset, we selected a subsample of 14
countries for our study due to computational constraints. We extract all the survey questions that
have the target countries’ answers. The countries chosen (in Table 2) span several continents to
ensure a broad representation in our evaluation. For instance, Nigeria and Egypt cover Africa, while
India and China represent Asia. European nations are represented by countries such as Germany,
France, and Spain, and the Americas include the United States, Canada, Brazil, and Argentina.
Lastly, the Oceania region is represented by Australia and New Zealand.


C ABLATION ON THE GPO’S TRANSFORMER ARCHITECTURE


We design GPO with inductive biases that satisfy two properties that are important for accurate
preference prediction:




**(1) Context Invariance (Property 1):** Unlike traditional transformers that utilize positional encodings, GPO omits these encodings to ensure predictions remain unaffected by the order or permutation of context preference pairs. However, this loses the pairwise relation between ( _xi, yi_ ).
To preserve it, we concatenate each pair ( _xi, yi_ ) into a single token, informing the transformer of
their pairwise relation. GPO also adopts an alternative masking strategy that differs from the conventional
causal mask. This approach enables context pairs to exclusively interact with each other, thereby
maintaining focus on relevant information.


**(2) Target Equivariance (Property 2):** The masking strategy also makes target points attend only
to the context points, which ensures that the targets are only influenced by the context points, not by
other targets. This aligns with the principle of conditional independence as stated in Equation 2. In
this setup, we don’t require the query preference scores to be generated autoregressively, meaning
the prediction doesn’t depend on previously predicted queries. Therefore, we use a masking strategy
where context samples self-attend, while query pairs attend only to context samples, not to other
query pairs.
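
The masking scheme can be sketched as follows, assuming the $m$ context tokens precede the $n$ query tokens in the input sequence and using the PyTorch convention where `True` marks positions that may not be attended to. Letting each query token additionally attend to itself is an implementation choice; queries never attend to other queries.

```python
import torch

def gpo_attention_mask(num_context: int, num_query: int) -> torch.Tensor:
    """Boolean attention mask (True = blocked): context tokens attend to all context
    tokens, query tokens attend only to context tokens (plus themselves)."""
    total = num_context + num_query
    mask = torch.ones(total, total, dtype=torch.bool)   # start with everything blocked
    mask[:, :num_context] = False                       # every token may attend to context tokens
    idx = torch.arange(num_context, total)
    mask[idx, idx] = False                              # each query token may attend to itself
    return mask

# 3 context pairs and 2 query points; rows attend to columns (0 = allowed, 1 = blocked).
print(gpo_attention_mask(3, 2).int())
```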


Property 1. **Context Invariance.** A model $p_\theta$ exhibits context invariance if, given any permutation
function $\pi$ and any $m \in [1, n-1]$, it satisfies:
$$p_\theta(y_{m+1:n} \mid x_{m+1:n}, x_{1:m}, y_{1:m}) = p_\theta(y_{m+1:n} \mid x_{m+1:n}, x_{\pi(1):\pi(m)}, y_{\pi(1):\pi(m)})$$
Property 2. **Target Equivariance.** A model $p_\theta$ demonstrates target equivariance if, for any permutation function $\pi$ and any $m \in [1, n-1]$, the following holds:
$$p_\theta(y_{q,m+1:n} \mid x_{q,m+1:n}, x_{q,1:m}, y_{q,1:m}) = p_\theta(y_{q,\pi(m+1):\pi(n)} \mid x_{q,\pi(m+1):\pi(n)}, x_{q,1:m}, y_{q,1:m})$$


To illustrate the effectiveness of these biases, we compare GPO with a standard autoregressive transformer that employs a causal mask, akin to the transformers used in GPT-x series (Radford et al.,
2018; 2019; Brown et al., 2020). This basic architecture includes autoregressive generation with the
causal mask and uses positional encoding, which we previously omitted to ensure context invariance. Using an autoregressive generation approach violates the target equivariance property since the
prediction of each query point relies on previously generated ones. As depicted in Table 3, GPO’s inherent inductive biases yield superior alignment performance compared to a traditional transformer.
It’s noteworthy that in this comparison, we still concatenate the ( _x, y_ ) pairs into single tokens for
the standard transformer, thus preserving the relationship between the viewpoint _x_ and the group
preference score.


|Method|Meta train on 40% groups|Meta train on 60% groups|Meta train on 80% groups|
|---|---|---|---|
|GPO|**0.798 ± 0.007**|**0.820 ± 0.004**|**0.799 ± 0.015**|
|Transformer|0.780 ± 0.009|0.782 ± 0.004|0.772 ± 0.006|


Table 3: Comparison of the alignment scores of GPO and a standard autoregressive transformer on
alignment tasks on the GlobalOpinionQA dataset with three group splits; runs are averaged over
three seeds. Experiments use Alpaca-7b as the base model.


D ABLATION ON GETTING EMBEDDINGS FROM THE LLM.


Given that the base LLMs we considered in our experiments were not explicitly trained for text
summarization, we examined three methods to generate the embedding _x_ of a sentence: 1) using
the embedding of the last token as the sentence embedding; 2) averaging over the embeddings of all
tokens in the sentence; 3) concatenating the embeddings obtained from the previous two methods.
As depicted in Table 4, averaging over the token embeddings of the sentence yielded the most
effective results, whereas relying solely on the last-token embedding proved less adept at capturing
sentence-level information.
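
A sketch of the three embedding variants is shown below, assuming a Hugging Face causal LM; the checkpoint name is an illustrative stand-in for Alpaca-7b, and the exact extraction pipeline may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "chavinlo/alpaca-native"   # illustrative Alpaca-7b checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

@torch.no_grad()
def embed_sentence(text: str, mode: str = "average") -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs, output_hidden_states=True)
    hidden = outputs.hidden_states[-1][0]          # (seq_len, hidden_dim) final-layer states
    last, mean = hidden[-1], hidden.mean(dim=0)
    if mode == "last":                             # 1) last-token embedding
        return last
    if mode == "average":                          # 2) mean over all token embeddings
        return mean
    return torch.cat([last, mean])                 # 3) concatenation of the two
```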


E ABLATION ON ADDING GROUP META-CONTEXT FOR GPO


In the primary experiments, viewpoints _x_ are embedded using an LLM. Notably, in our previous
experiments, each _xi_ does not contain group meta-data about the group's identity or attributes.




|Embedding Method|Alignment Score|
|---|---|
|Alpaca-7b last token|0.903 ± 0.014|
|Alpaca-7b average tokens|**0.946 ± 0.007**|
|Alpaca-7b last token + average|0.942 ± 0.009|


Table 4: Comparison of different embedding methods using Alpaca-7b as the base model on the
OpinionQA dataset, with a meta train split of 80%. Results are averaged across three seeds.


This ablation study explores the potential performance enhancement that could be achieved by integrating
meta-data into GPO. Specifically, the context information _cg_ is embedded into a vector $z_{\text{ctx}}^{g}$
of the same dimension as _x_, produced by the same LLM. We examined constructing _cg_ from the
three kinds of contextual prompts we study in Appendix K. This embedding is then concatenated with each
( _x, y_ ) pair, so each input token to GPO is the triple ( _x, y_, $z_{\text{ctx}}^{g}$). As illustrated in Table 5, incorporating context embeddings does not bolster GPO's performance across the three
group split scenarios; instead, it performs worse. We hypothesize this outcome arises because GPO,
unlike LLMs, lacks comprehensive world knowledge of diverse group attributes, making it challenging to adapt to the meta-data embeddings of unfamiliar groups. Instead, GPO excels in deducing
preference distributions based on the available ( _x, y_ ) context sample pairs.


|Method|Meta train on 40% groups|Meta train on 60% groups|Meta train on 80% groups|
|---|---|---|---|
|GPO|**0.920 ± 0.003**|**0.926 ± 0.013**|**0.946 ± 0.007**|
|GPO w/ meta-data|0.900 ± 0.003|0.916 ± 0.017|0.926 ± 0.006|


Table 5: Comparison of the alignment scores of GPO with and without meta-data embeddings across
three group splits; runs are averaged over three seeds. Experiments are conducted on OpinionQA
with Alpaca-7b as the base model.


F BASELINES DETAILS


We compare our method against an extensive set of baseline approaches for aligning an LLM's predicted
opinion distributions with those of human groups:




- **Uniform Distribution:** This baseline assumes that all answer options are chosen with equal probability, indicating no preference or bias towards any specific option. For a given question $q \in Q$ with $N$ answer choices, the distribution is $P_U(q) = \left[\frac{1}{N}, \frac{1}{N}, \ldots, \frac{1}{N}\right]$.

- _**LM Base**_ **:** The opinion distribution, denoted by _Pπ_, is derived from a pre-trained LM without
any group-specific steering or fine-tuning. For a given question _q ∈_ _Q_, the distribution _Pπ_ ( _q_ )
generated by the model is extracted from the output probability distribution across the _N_ available
answer choices. We first extract the prediction scores for the next token from the LM, focusing on
the top- _K_ tokens. We then normalize the values to obtain _Pπ_ ( _q_ ). For a token that is missing from
the top- _K_ set, we allocate the smallest prediction score in the top- _K_ set. We use _K_ = 200 in our
experiments.

- _**LM Steered**_ **:** This baseline gauges the model’s adaptability to align with a specific group _g ∈_ _G_
when informed of the group information explicitly through the prompt. We use diverse prompting
strategies—QA, BIO, and PORTRAY—to convey group information, with examples in Appendix
K. The opinion distribution obtained for group _g_ under this steering is expressed as _Pπ_ ( _q_ ; _cg_ ),
where _cg_ denotes the context for group _g_ .

- _**Few-shot Prompt**_ **:** Rather than giving the model explicit group information, we input a few examples showing a group’s preferences for _m_ context questions, constrained by the LM’s context
window size. Here $c_g$ includes the context samples $\{q_i, r_i, y_i\}_{i=1}^{m}$. Using this context, the
model is prompted to generate a response for a new, unseen question that aligns with the group’s
opinions. See Figure 8 in the Appendix for examples.

- _**SFT per group**_ **:** The LM is fine-tuned separately for each group _g_ using a supervised loss. Let
_Q_ train _⊂_ _Q_ denote the subset of _m_ context questions used for training. We create training examples
( _q, r_ ) by sampling _q_ from _Q_ train and then sampling responses _r_ with respect to the preference
distribution _Pg_ ( _q_ ). The loss is defined as:





$$\mathcal{L}_{\text{SFT}} = -\,\mathbb{E}_{q \sim Q_{\text{train}},\, r \sim P_g(q)} \log p_\psi(r \mid q) \tag{6}$$


where _ψ_ represents the LM parameters and _pψ_ ( _r|q_ ) denotes the probability of producing the response _r_ given the question _q_ . This procedure fine-tunes the LM to maximize the likelihood of the
sampled responses that align with the preference distribution of the specific group.

- _**Reward Model**_ **:** We start with the architecture of the base LM and add a linear MLP head. The
augmented LLM is trained on _m_ context samples to predict the preference scores for the $\{x_i\}_{i=1}^{m}$
using a mean squared error loss. Then, the model is employed to predict the preference scores for
the query questions, and a softmax is applied to ensure that $\sum_{i=1}^{T} \hat{y}_{g,q,i} = 1$ for each query $q$ (see the sketch after this list).

- _**In-Context Finetune**_ **:** We investigate whether the LM can be fine-tuned, akin to GPO, to adapt
to a distribution of groups using few-shot learning. This would ideally enable improved few-shot in-context adaptation for unseen groups. To this end, we partition the group set _G_ into a
meta-train set _G_ train and a meta-test set _G_ test. During training, each group in _G_ train serves as a
training instance. The training questions for each group are split into context samples and query
questions. For a given query question _q_, we supplement it with a few-shot context _cg_, consisting
of _m_ questions paired with the respective ground truth preference scores. This context mirrors the
_Few-shot Prompt_ strategy, with an example shown in Appendix Figure 8. For supervision, for each query,
we sample responses _r_, aligned with the human preference distribution _Pg_ ( _q_ ). The LM undergoes
fine-tuning using a dataset formed from these context-enhanced samples. The associated loss
function is:


$$\mathcal{L}_{\text{ICT}} = -\,\mathbb{E}_{g \sim G_{\text{train}},\, q \sim Q,\, r \sim P_g(q)} \log p_\psi(r \mid q, c_g) \tag{7}$$
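
The sketch referenced in the _Reward Model_ baseline above illustrates the linear scoring head and the per-query softmax, assuming question-option inputs are already embedded by the base LM; the exact head architecture and training loop may differ.

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Linear head over the base LM's final hidden state, producing one scalar
    preference score per question-option embedding (a sketch, not the exact head)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, x_embed: torch.Tensor) -> torch.Tensor:  # (batch, hidden_dim)
        return self.score(x_embed).squeeze(-1)                 # (batch,)

head = RewardHead(hidden_dim=4096)

# Training on m context samples: regress the group's preference scores with MSE.
x_ctx, y_ctx = torch.randn(16, 4096), torch.rand(16)
loss = nn.functional.mse_loss(head(x_ctx), y_ctx)

# Inference: score the T options of one query, then softmax so the scores sum to 1.
x_query_options = torch.randn(4, 4096)
pred = torch.softmax(head(x_query_options), dim=0)
```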


G TRAINING SETTINGS


For all baseline fine-tuning methods, including SFT per group, reward modeling, and in-context
fine-tuning that necessitate training the base LM, we employ 8-bit integer quantization and utilize
a single Nvidia RTX A6000 GPU with 48GB VRAM. Our parameter search for the learning rate
encompassed values _{_ 3e-4, 2e-5, 1e-4 _}_ . We settled on 1e-4 for the Alpaca baselines and 2e-5 for the
Llama2-13B-chat baselines. For both SFT and in-context fine-tuning tasks, our effective batch size
was 8, comprised of a batch size of 1 and 8 gradient accumulation steps. In contrast, reward model
training had a batch size of 4 with the same gradient accumulation steps. All baseline methodologies
were trained with LoRA (with r=12, alpha=32, and a dropout rate of 0.05) with a weight decay
of 0.01, utilizing bf16 precision and the AdamW optimizer (Loshchilov & Hutter, 2018). For all
methods, we use the validation alignment score for early stopping.
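
A sketch of the corresponding configuration with the `peft` and `transformers` libraries is shown below; whether the baselines were trained through the Hugging Face `Trainer` interface is an assumption, and the output directory is a placeholder.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings used for all fine-tuned baselines.
lora_cfg = LoraConfig(r=12, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Illustrative optimizer settings; the effective batch size of 8 comes from a
# per-device batch size of 1 with 8 gradient-accumulation steps.
train_args = TrainingArguments(
    output_dir="out",                  # placeholder path (assumption)
    learning_rate=1e-4,                # 1e-4 for Alpaca, 2e-5 for Llama2-13B-chat
    weight_decay=0.01,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    optim="adamw_torch",
)
```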


For GPO, the transformer’s feedforward dimension was set to 128, with an embedding depth of 4, 4
heads, and 6 layers. We sampled _m_ uniformly from the range [10, 100] as context samples for every
training task. We also used a learning rate of 3e-4, coupled with the Adam Optimizer (Kingma &
Ba, 2015). More training details can be found in our codebase.


H EXTENDING GPO BEYOND MULTIPLE-CHOICE QUESTIONS


The GPO framework presented in the main paper experiments can be extended beyond the multiple-choice setting. GPO works for any LLM generation setting in which a scalar represents feedback over an LLM response. We present GPO formulations for producing group-aligned
LLM responses in the long-form generation setting with two common forms of sparse feedback: (1)
relative (e.g. is response 1 or response 2 better) and (2) absolute (e.g. rate the response on a scale of
1-7).


**Relative feedback:** each context example includes two responses, and GPO is trained with a binary
classification objective for each example. During inference, the GPO module can be used in a modified
version of best-of-n sampling: $n$ responses are sampled from the base LLM, and each of the
$\binom{n}{2}$ pairs of responses is given to GPO as a query. GPO's outputs are used to calculate a win rate
for each of the $n$ responses, and the response with the highest win rate is chosen as the aligned output response.
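
A sketch of the win-rate-based best-of-$n$ selection follows, where `gpo_pairwise_prob` is a hypothetical stand-in for a GPO module trained on relative feedback.

```python
from itertools import combinations
import numpy as np

def best_of_n_by_winrate(responses, gpo_pairwise_prob):
    """Return the response with the highest pairwise win rate.
    gpo_pairwise_prob(a, b) is assumed to return P(a preferred over b)."""
    wins = np.zeros(len(responses))
    for i, j in combinations(range(len(responses)), 2):   # all n-choose-2 pairs as queries
        p = gpo_pairwise_prob(responses[i], responses[j])
        wins[i] += p
        wins[j] += 1.0 - p
    return responses[int(np.argmax(wins))]

# Toy usage with a dummy preference function that favors longer responses.
candidates = ["short", "a longer answer", "the longest candidate answer"]
print(best_of_n_by_winrate(candidates, lambda a, b: float(len(a) > len(b))))
```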




**Absolute feedback:** each context example includes 1 prompt and GPO is trained to regress the
absolute feedback score. During inference, the GPO module can be used as a reward model in
best-of-n sampling to produce a group aligned response.


Since GPO predicts group preference scalars, GPO can be used as a reward model to fine-tune
the base LLM with PPO in settings where performing inference with an additional model is not
desirable.


I ADDITIONAL RELATED WORK


**Alignment via Prompting.** The conditional nature of LLMs enables them to be conditioned on
specific task information or context data and alter their output distribution to respect the conditional
information. Various studies have investigated this capability in adapting to groups and personas.
Deshpande et al. (2023) observed that prompts like _“Speak like xxx”_ could elevate LLM’s toxicity
levels contingent upon the persona's characteristics. Jiang et al. (2023) used prompts to guide GPT-3.5 to adopt certain personality traits. Beyond explicit persona or group traits, an LLM's behavior
can also be influenced by presenting it with few-shot examples from its in-context learning ability
(Brown et al., 2020). For example, Hwang et al. (2023) uses the previous opinions of an individual
to adapt the LLM to align with the user. The advantages of this strategy include its computational
efficiency, eliminating the necessity for gradient updates. However, the few-shot examples are restricted by the model’s context size, and designing prompts for effective completion of tasks often
requires careful prompt engineering (Lester et al., 2021; Qin & Eisner, 2021; Zhou et al., 2022;
Reynolds & McDonell, 2021). Additionally, when steering the LLM to be more representative of
a demographic group on nuanced societal questions from survey datasets, research by Santurkar
et al. (2023) and Durmus et al. (2023) shows that this steerability can be constrained, resulting in
limited or no enhancements in model alignment.


**Gradient-based Alignment.** Another line of alignment work involves adjusting the LM’s parameters using a preference dataset through fine-tuning. Methods have been proposed to align LLMs with
specific human values, such as helpfulness, harmlessness, and non-toxicity, using RLHF (Glaese
et al., 2022; Ouyang et al., 2022; Rafailov et al., 2023; Bai et al., 2022a). This necessitates the modeling of a reward model from human-labeled preference datasets and the use of RL policies, such as
PPO (Schulman et al., 2017), to maximize accumulated rewards. This PPO phase poses challenges
in terms of computational and memory demands, necessitating the training and storing of large-scale policy, value, reward, and reference models, as well as optimizer states and gradients in GPU
memory. Moreover, this process could require complex hyperparameter tuning (Sun et al., 2023;
Santacroce et al., 2023). To address this, methods such as Direct Preference Optimization (Rafailov
et al., 2023) and Preference Ranking Optimization (Song et al., 2023) have been proposed to directly
learn from pairwise or ranking-based preference datasets without reward modeling. However, these
methods typically result in specialized models for every alignment task. Adapting to multifaceted,
sometimes conflicting group preferences requires fine-tuning distinct models for each subgroup.


J QUALITATIVE EXAMPLES OF GPO.


_Warning: This section contains qualitative examples that may be viewed as offensive or harmful._
Here we demonstrate multiple qualitative examples of GPO's predicted group preferences versus the
steered language model's outputs. GPO uses only 15 context examples, while the steered LM uses
the group's meta-data as context.




K CONTEXTUAL PROMPT EXAMPLES


In this paper, we examine three types of contextual prompts, as delineated in Santurkar et al. (2023).
Below, we present examples of the question-answer, biographical, and portrait-based contextual
prompts designed for individuals residing in the Northeastern United States.




**Question-Answer Prompt:**
Which part of the United States do you currently live in?
Response: Northeast


**Biographical Prompt:**
Below, please provide a brief description of the region in
which you currently reside within the United States, followed
by answers to several questions.
Description: I currently reside in the Northeast.


**Portrait-Based Prompt:**
Answer the following question as if you currently reside in the
Northeast.


Figure 7: Three types of contextual prompts to provide group information.


Below is an instruction that describes a task, paired with an
input that provides further context. Write a response that
appropriately completes the request.


### Instruction:
Given the answer distributions from a specific demographic
group for certain questions in a public opinion survey, answer
the subsequent new question by selecting ONE of the options, as
if you are a member of this identified demographic group:


### Input:


Question: Question 1
A. Option 1
B. Option 2
C. Option 3
Answer Distribution:
A: 25%, B: 35%, C: 40%


...


Question: Question m
A. Option 1
B. Option 2
C. Option 3
Answer Distribution:
A: 35%, B: 25%, C: 40%


Based on the above list of answered questions from a
demographic group, answer the new question by selecting ONE of
the options, as if you are a member of this demographic group:


Question: Question m+1
A. Option 1
B. Option 2
C. Option 3


### Response:


Figure 8: Few-shot in-context prompt with _m_ context questions in Alpaca prompt format.




Below is an instruction that describes a task, paired with an
input that provides further context. Write a response that
appropriately completes the request.


### Instruction:


Given that you have the following demographics context:
Marital Status: Married,
Religious attendance: Roman Catholic,
Region: Northeast,
Age: 65+,
Sex: Male,
Education: Some college or no degree,
Income: $30,000-$50,000,
Political ideology: Conservative,
Race: White,
Answer the following question by picking ONE of the given
options


### Input:


Would you say Germany has done a good or bad job dealing with
the coronavirus outbreak?


Options:
A. Very good
B. Somewhat good
C. Somewhat bad
D. Very bad


### Response:


Figure 9: A randomly selected individual contextual prompt example in Alpaca prompt format.

