File size: 94,262 Bytes
19e67d0 |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 |
## **AlignBench: Benchmarking Chinese Alignment of Large Language Models**
**Xiao Liu** [1] _[,]_ [2] _[,][∗]_ **, Xuanyu Lei** [1] _[,]_ [2] _[,][∗][,][†]_ **, Shengyuan Wang** [1] _[,]_ [2] _[,][†]_ **, Yue Huang** [5] _[,]_ [2] _[,][†]_ **,**
**Zhuoer Feng** [4] _[,]_ [2] _[,][†]_ **, Bosi Wen** [4] _[,]_ [2] _[,][†]_ **, Jiale Cheng** [4] _[,]_ [2] _[,][†]_ **, Pei Ke** [4] **, Yifan Xu** [1] _[,]_ [2] **,**
**Weng Lam Tam** [2] **, Xiaohan Zhang** [2] **, Lichao Sun** [6] **, Xiaotao Gu** [2] **,**
**Hongning Wang** [4] **, Jing Zhang** [3] **, Minlie Huang** [4] **, Yuxiao Dong** [1] _[,][‡]_ **, Jie Tang** [1] _[,][‡]_
1The Knowledge Engineering Group (KEG), Tsinghua University; 2Zhipu AI;
3Renmin University of China; 4The CoAI Group, Tsinghua University;
5University of Notre Dame; 6Lehigh University
**Abstract**
Alignment has become a critical step for
instruction-tuned Large Language Models
(LLMs) to become helpful assistants. However,
the effective evaluation of alignment for emerging Chinese LLMs is still largely unexplored.
To fill in this gap, we introduce ALIGNBENCH,
a comprehensive multi-dimensional benchmark
for evaluating LLMs’ alignment in Chinese.
We design a human-in-the-loop data curation
pipeline, containing eight main categories, 683
real-scenario rooted queries and corresponding human verified references. To ensure
the correctness of references, each knowledgeintensive query is accompanied with evidences
collected from reliable web sources (including URLs and quotations) by our annotators.
For automatic evaluation, our benchmark employs a rule-calibrated multi-dimensional LLMas-Judge (Zheng et al., 2023) approach with
Chain-of-Thought to generate explanations and
final ratings, ensuring high reliability and interpretability. All evaluation code, data, and
[LLM generations are available at https://](https://github.com/THUDM/AlignBench)
[github.com/THUDM/AlignBench. Since its re-](https://github.com/THUDM/AlignBench)
lease, AlignBench has been adopted by top
(Chinese) LLMs for evaluating their alignment
capabilities in Chinese, including ChatGLM,
Qwen, DeepSeek, Yi, Baichuan, and Abab.
**1** **Introduction**
Large Language Models (LLMs) (Brown et al.,
2020; Chowdhery et al., 2022; Zhang et al., 2022;
Zeng et al., 2022; Touvron et al., 2023) have experienced a surge in development thanks to popular
products such as ChatGPT (OpenAI, 2022). During the period, alignment (Ouyang et al., 2022;
Bai et al., 2022), including supervised fine-tuning
(SFT), reinforcement learning from human feedback (RLHF), and related techniques, has been
_∗_ Xiao and Xuanyu made equal contributions.
_†_ Work done while these authors interned at Zhipu AI.
_‡_ Corresponding Authors: Yuxiao and Jie.
justified as a key strategy to endow pre-trained
LLMs (which can hardly follow instructions) with
strong grasping of human intentions and preferences. After training, aligned LLMs have not
only mastered a wide array of established NLP
tasks (Wang et al., 2019; Liang et al., 2022) but
also versatile language-grounded missions (Cobbe
et al., 2021; Chen et al., 2021; Liu et al., 2023a).
As a result, LLMs have paced a firm step towards
practical applications in the wild.
Meanwhile, reliably benchmarking the broad
and strong competence of LLMs has also become
a significant challenge. In English, there have been
MMLU (Hendrycks et al., 2021), Big-Bench (Srivastava et al., 2023), and HELM (Wang et al.,
2019; Liang et al., 2022); in Chinese, there are
C-Eval (Huang et al., 2023b) and CMMLU (Zeng,
2023). However, prior arts hardly examine aligned
LLMs’ fulfillment of user intention and human
preference in real-world conditions, and even fall
short to tell the difference between aligned and
base LLMs. Consequently, dedicated benchmarks
are crucial for development and meaningful comparisons of aligned LLMs.
Nevertheless, designing a comprehensive and
reliable benchmark for LLM alignment is nontrival. An alignment benchmark should meet several important requirements, which correspond to
the unique strengths of LLMs and their applications
for users:
- **Real-World Scenarios** : Query forms and topics should be diverse and derived from real scenarios to reflect the authentic usages of LLMs.
- **Open-Ended** : As aligned LLMs usually produce long open-ended replies, the benchmark
should judge the correctness of detailed responses without specified forms.
- **Challenging** : LLMs are improving so rapidly
on various aspects beyond estimation. The
benchmark thus has to ensure its difficulty to
identify subtle capability gaps between LLMs.
|1. Data Curation|2. Task Taxonomy|3. LLM Evaluation|
|---|---|---|
||||
Figure 1: Overall framework of ALIGNBENCH. 1) Data Curation: a human-in-the-loop pipeline to allow continual
high-quality test query harvesting from real scenarios. 2) Task Taxonomy: 8 main categories that cover the common
usages of LLMs in Chinese. 3) LLM Evaluation: automatic multi-dimensional rule-calibrated LLM-as-Judge.
- **Automatic Judging** : Benchmark construction
and evaluation should be as automatic as possible to provide scalable, reproducible, and intime feedback to facilitate LLM development.
There have been recent attempts to introduce
LLM-as-Judge (Li et al., 2023; Zheng et al., 2023)
for evaluating the general alignment of LLMs. AlpacaEval (Li et al., 2023) compares target LLM’s
replies against text-davinci-003’s, but has been
shown unstable and uninterpretable due to its direct and pairwise scoring. MT-Bench (Zheng et al.,
2023) harnesses point-wise scoring with Chain-ofThought (CoT) (Wei et al., 2022) explanations for
better accuracy and transparency. However, it employs only 80 test samples and a scoring prompt
that judges queries of different tasks and domains
uniformly. Lastly, both benchmarks are designed
only in English and cannot well reflect the level of
alignment of many emerging Chinese LLMs.
In light of all mentioned issues, in this work
we present ALIGNBENCH, a comprehensive multidimensional benchmark for evaluating LLMs’
alignment in Chinese. Based on observations from
an online LLM service (anonymous due to review
policy), we set up a semi-automatic data curation pipeline with human in the loop to create
high-quality queries to construct ALIGNBENCH.
ALIGNBENCH summarizes a taxonomy comprising 8 major categories of queries (Cf. Figure 1) to
comprehensively cover and align with real-scenario
usages. In order to make the judge model generate objective and fair evaluations, each sample is accompanied with a human-corrected LLMgenerated reference. To ensure reference correctness for knowledge intensive queries (which accounts for 66.5%), we ask annotators to search
on the web, provide evidences including urls and
quotations, and finally synthesize references.
To enhance the automation of the evaluation,
similar to MT-Bench (Zheng et al., 2023), ALIGNBENCH distinctly leverages GPT-4 (OpenAI, 2023)
as the major model evaluator in its development,
which serves to discern the data samples and evaluate by referenced point-wise scoring with CoT. Differently, ALIGNBENCH further highlights strategies of rules-calibration and task-specific multidimensional judgement in the scoring. Our experiments demonstrate that these strategies contribute
to ALIGNBENCH’s better consistency with human
judgement and better explanation quality. Based on
ALIGNBENCH, we evaluate 17 popular API-based
or open-sourced LLMs that support Chinese, providing detailed comparisons of these LLMs across
fine-grained capabilities on Chinese alignment.
In summary, the contributions of our work are:
- We construct ALIGNBENCH, a systematic
benchmark rooted in real-scenario usages to
evaluate Chinese alignment of LLMs. We also
tailor a human-in-the-loop pipeline to allow accurate and sustainable benchmark maintenance.
- Targeting accurate and automatic evaluation
of LLMs, we design a rule-calibrated multidimensional point-wise LLM-as-judge method
for grading. Human evaluations justified its applicability compared to existing LLM-as-Judge
methods (Zheng et al., 2023).
- We systematically benchmark 17 LLMs’ Chinese alignment on ALIGNBENCH. On top of
their performance, we provide deep insights
into status quo of Chinese LLMs’ development
and highlight future directions.
**2** **The ALIGNBENCH Dataset**
In this section, we introduce the data composition
and construction pipeline of ALIGNBENCH.
Dataset Information Evaluation Method
Benchmark
Open MultiData Size Language Data Source Domain Metric
-ended Dimensional
MMLU (Hendrycks et al., 2021) 15,908 English Exams & Textbooks Knowledge Accuracy
GSM8k (Cobbe et al., 2021) 8,000 English Human Writers Math ✓ Accuracy
HumanEval (Chen et al., 2021) 164 Python Human Writers Code ✓ Pass@k
CMMLU (Zeng, 2023) 11,528 Chinese Exams & Textbooks Knowledge Accuracy
AGI-Eval (Zhong et al., 2023) 8,062 Chi. & Eng. Exams Knowledge Accuracy
C-Eval (Huang et al., 2023b) 13,948 Chinese Exams Knowledge Accuracy
AlpacaEval (Li et al., 2023) 805 English Alpaca Data General ✓ Model Judge (w/o CoT)
MT-Bench (Zheng et al., 2023) 80 English Self-constructed General ✓ Model Judge (w/ CoT)
**ALIGNBENCH (ours)** **683** **Chinese** **Real-Scenario Usage** **General** ✓ ✓ **Model Judge (w/ CoT)**
Table 1: Comparisons between ALIGNBENCH and other benchmarks, illustrating the features of ALIGNBENCH in
terms of dataset information and evaluation methods.
Category `中文名` #Samples
Fundamental Language Ability `基本任务` 68
Advanced Chinese Understanding `中文理解` 58
Open-ended Questions `综合问答` 38
Writing Ability `文本写作` 75
Logical Reasoning `逻辑推理` 92
Mathematics `数学计算` 112
Task-oriented Role Play `角色扮演` 116
Professional Knowledge `专业能力` 124
Total `总计` 683
Table 2: Sample distribution of ALIGNBENCH dataset.
**2.1** **Dataset Composition**
In this section, we introduce the overall composition of ALIGNBENCH. To perform a systematic
evaluation, we frame a comprehensive taxonomy
of the LLMs’ abilities based on real-user instructions. We inspect and summarize these use-cases
into 8 main categories and 683 samples in total as
shown in Table 2.
**Fundamental Language Ability.** This category
focuses on the basic language tasks derived from
traditional NLP tasks such as information extraction (Etzioni et al., 2008), text classification (Wang
and Manning, 2012), and commonsense knowledge (Talmor et al., 2019). They reflect common
users’ practical needs of LLMs to conduct traditional tasks under zero-shot or few-shot settings
with customized prompts and formats, such as text
classification, information extraction, and short
summarization. Thus we select high-quality diverse queries relevant to each traditional task in
this category.
**Advanced Chinese Understanding.** This category
aims to evaluate LLMs’ ability to understand cultural and historical background in Chinese-specific
tasks. In Chinese context, a large percentage of
real-user needs are related to Chinese culture, characters, and history. However, without deliberate
optimization on Chinese, cutting-edge LLMs (e.g.,
GPT-4 (OpenAI, 2023)) would fail to understand
and answer these questions correctly. As ALIGNBENCH targets Chinese alignment, the category
plays a vital role in our overall design.
**Open-Ended Questions.** The category represents
a common usage of LLMs to answer subjective
questions in an open-ended manner. Users may
seek for advice, recommendations, and tutoring for
many daily questions concerning their work, study,
travel, and lives. The key to good open-ended responses is about catering to human preference, featuring long, detailed, and highly related content.
**Writing Ability.** Writing, one of the most frequently used LLM function, plays a vital role in
LLMs’ applications. We select challenging writing instructions, which require not only an excellent mastering of language but also a high level of
instruction-following (e.g., specific formatting conditions), consistency (e.g., argumentative topics),
and creativity (e.g., fictions or poems).
**Logical Reasoning.** The ability to process complicated problems with step-by-step reasoning and
LLMs’ inherent knowledge is highlighted for current strong LLMs. The category aims to evaluate LLMs’ abilities to understand, analyze, and
produce correct responses given intricate logical
problems, using questions that require deductive,
abductive, multi-hop, or commonsense reasoning.
**Mathematics.** Math problems are challenging but
also widely-adopted for LLMs’ ability testing due
to practical applications. We collect math problems
in different difficulty levels from elementary to advanced mathematics and in different formats, including calculations, simple problem solving, concept explanation, theorem-proof, etc.
**Task-Oriented Role Play.** Lots of users request
the model to play as a specific identity to perform
corresponding tasks, which is summarized as taskoriented role play. In order to evaluate the ful
fillment of users’ instructions and the quality of
responses when role play, we collect role play instructions of high complexity for the category.
**Professional Knowledge.** LLMs have proven their
competence in solving domain-specific problems
that require professional knowledge. The category
aims to evaluate LLMs’ abilities in specific domains, including physics, history, music, law, etc.
Additionally, the queries we select are generative
open-ended questions that allow LLMs to generate
freely and provide sufficient details.
**2.2** **Dataset Construction**
Each sample in ALIGNBENCH contains a taskoriented query, a high-quality referenced answer,
and its category in our taxonomy. The detailed
construction pipeline is described as follows.
**Query Curation.** To ensure the diversity and authenticity of the queries, we mainly refer to two
sources, namely the scenarios from an online chat
service and challenging problems written by researchers. Given the inherently noisy nature, we
go through a high-standard data curation pipeline
following rules described below. **1) Task-oriented:**
The query should represent human intentions and
instruct LLMs to complete the specified task. **2)**
**Clarity & Fluency:** The query should be clear,
and easy-to-understand and the demands should
be smoothly expressed. **3) Complexity & Diffi-**
**culty:** The query should be hard for most LLMs,
requiring them to utilize their capabilities to solve
it comprehensively. **4) Desensitization:** Ensure
that the queries are safe and insensitive.
**Reference Acquisition & Improvement.** For
point-wise grading for ALIGNBENCH (Cf. Section 3 for analysis), scoring with a pivotal reference answer has been found beneficial to improve
the reliability of the LLM-as-Judge (Zheng et al.,
2023; Zhang et al., 2020). Thus we decide to provide human-curated reference answers, serving to
assist evaluators in determining the correctness of
the answer, and act as a scoring pivot.
However, because ALIGNBENCH has been designed to be difficult and of wide coverage, it turns
out quite challenging for human annotators to provide answers from scratch in our preliminary trial.
As a result, we first utilize GPT-4 to generate answers, and then ask human annotators to meticulously review, revise, and refine them as reference
answers for ALIGNBENCH. To ensure reference
quality, especially for knowledge-intensive queries
Figure 2: Cumulative distribution of judging by human,
general (Zheng et al., 2023) and rule-calibrated on sampled ALIGNBENCH along their ratings.
```
高音单簧管和高音萨克斯的调性相同吗?如果相同,请说出他们的调性,如果
```
Question `不同,请分别说出他们的调性`
Evidence 1 [url] [https://baike.baidu.com/item/%E5%8D%95%E7%B0%A7%E7%AE%A1/346415](https://baike.baidu.com/item/%E5%8D%95%E7%B0%A7%E7%AE%A1/346415)
quote `常见的单簧管分为` bB `调、` A `调和` C `调,这三种都属于高音单簧管。`
Evidence 2 [url] [https://baike.baidu.com/item/%E8%90%A8%E5%85%8B%E6%96%AF/147180](https://baike.baidu.com/item/%E8%90%A8%E5%85%8B%E6%96%AF/147180)
quote `乐器本调` :bB( `高音,次中音` )
`高音单簧管和高音萨克斯的调性不同。高音单簧管的调性通常为` B _♭_ `调、` A `调`
Reference `和` C `调这三种,而高音萨克斯的调性则通常为` B _♭_ `。`
Table 3: An example case from ALIGNBENCH’s knowledge intensive query annotation.
from categories such as Professional Knowledge,
Mathematics, and Logical Reasoning, we explicitly
ask annotators to conduct web search during the
verification. During the search, webpage urls and
quotations of contents for writing the references
are required to be recorded, as shown in Table 3.
**Filtering & Classification.** To enhance distinguishment in scores between strong LLMs, it is
necessary to filter more challenging samples for
evaluation. Consequently, we engage three relatively advanced Chinese-supported LLMs, including GPT-3.5-turbo (OpenAI, 2022), ChatGLM (Du
et al., 2022; Zeng et al., 2022) APIs and Sparkdesk
to serve as difficulty filters within our construction procedure. We subject these models to evaluation, analyzing their responses to the processed
queries and subsequently scoring the answers utilizing GPT-4. By computing the average score across
responses and utilizing it as a signal, we discard
50% of the queries that garnered the highest average scores, indicative of their lower difficulty levels.
The approach ensures a meticulous and discerning
selection of samples, effectively distinguishing between strong LLMs of varying capacities.
**3** **Methods for Evaluating Alignment**
To effectively evaluate the quality of responses,
ALIGNBENCH employs GPT-4 (OpenAI, 2023)
as a major evaluator to analyze and subsequently grade the responses following adopted prac
[https://xinghuo.xfyun.cn/](https://xinghuo.xfyun.cn/)
tices (Zheng et al., 2023; Li et al., 2023; Liu et al.,
2023b). However, a significant designing space
still exists regarding prompting, score calibration,
critique explainability, and evaluation dimensions,
which have been hardly explored.
Therefore, in ALIGNBENCH we design a rulecalibrated multi-dimensional point-wise LLM-asJudge method. The detailed prompts are in Appendix A.2 and an example is displayed in Fig 3.
**Point-Wise Grading & Chain-of-Thought (Wei**
**et al., 2022).** When LLM-as-Judge is leveraged,
two grading methods have been previously implemented: point-wise (Zheng et al., 2023) or pairwise
grading (Li et al., 2023). Nevertheless, previous
study has indicated that the point-wise grading possessed comparable agreement with humans than
the pair-wise grading, which suffers from position
bias (Zheng et al., 2023). Additionally, considering
the evaluating efficiency, compared to pair-wise
grading’s quadratic number of comparisons, the
point-wise grading has advantages in terms of expenses and time. Therefore ALIGNBENCH adopts
point-wise grading either. During the evaluation,
the inputs are the query, the model’s response, and
a human-curated reference answer, and the output
is an multi-dimensional analytical explanation and
a final rating, ranging from 1 to 10. As the task of
grading involves complex reasoning, introducing
Chain-of-Thought in the scoring process has also
been proved useful to augment both the score reliability and interpretability (Zheng et al., 2023).
Specifically, GPT-4 is instructed to generate explanations from multiple dimensions before providing
a final grade on a scale of 1 to 10.
**Rule-Calibrated Referencing.** Given that many
of the questions in ALIGNBENCH are of significant
complexity and difficulty even for GPT-4, we provide a high-quality reference answer, which is primarily generated by GPT-4 and modified by human
annotators to ensure its correctness and improve
its quality. To guide the evaluator to compare the
answer with the reference and generate more controllable scores, we provided detailed grading rules
elaborating the relationship between score intervals
and the answer’s quality compared to the reference.
Additionally, we set the reference answer to score
8 as a reference scoring pivot.
We plot the cumulative distribution of human
judge, general judge and rule-calibrated judge in
Figure 2 to show that rule-calibration judge has
a narrower gap to human evaluation’s cumulative
distribution. Typically, rule-calibrated judge scores
much fewer top scores (9 and 10) than general
judge, aligned with human scoring habits and therefore enhance the discrimination of ALIGNBENCH.
**Multi-Dimensional Analysis.** As tasks vary in
their nature and characteristics, applying the same
evaluation criteria to all tasks would be unjust. For
instance, writing tasks should prioritize creativity,
whereas logical reasoning tasks primarily require
logical coherence. As a solution, we propose a
multi-dimensional scoring approach to evaluate
LLMs’ responses, tailoring the evaluation to the
specific task at hand, promising a more comprehensive and organized explanation. Specifically, we
set up different evaluation dimensions based on different types of questions, as shown in Table 7 and
we instructed the evaluator to analyze the model
answer from specified dimensions and provide dimensional scores. Furthermore, we found that our
multi-dimensional method could effectively balance different dimensions, reducing verbosity bias,
with an example shown in Table 11. The categorical information is also useful for conditioning
generation temperature for target LLMs to generate reply (Zheng et al., 2023). For tasks that has a
relatively fixed answers (e.g., Mathematics, Professional Knowledge, etc.), we set temperature to 0.1
to ensure more deterministic and reproducible generation; for other tasks (e.g., Writing, Task-oriented
Role Play, etc.) that may need more creativity, a
high temperature (e.g., 0.7) is adopted to encourage
longer and more diverse generation.
**4** **Human Evaluation on ALIGNBENCH**
To justify the rule-calibrated multi-dimensional
point-wise LLM-as-Judge method we design for
ALIGNBENCH, we conduct extensive human evaluation over ALIGNBENCH’s selected queries. We especially focus on two aspects: the method’s agreement with human judging, and the method’s critique quality for more human-interpretable results.
**4.1** **Agreement Evaluation**
Previous studies (Zheng et al., 2023) have executed
comprehensive agreement experiments, demonstrating that GPT-4 (OpenAI, 2023) evaluator concur excellently with humans within English contexts. However, such agreement remains considerably under-investigated in Chinese contexts,
Since the translation subcategory contains open-ended
questions that requires creativity, it’s treated as Generative
Question.
**Metric** **Method** **Overall** **Dom.** **Chi.** **Fund.** **Math** **Writ.** **Open.** **Role.** **Logic.**
**Sample-level**
**Pearson**
**System-level**
**Pearson**
**Pairwise**
**Agreement**
**(w/o tie)**
**general** 0.618 0.738 0.576 0.549 0.669 **0.548** 0.524 0.621 0.600
**rules** 0.628 0.709 **0.667** 0.568 **0.689** 0.524 **0.541** **0.673** 0.581
**ours** **0.638** **0.739** 0.634 **0.589** 0.677 0.544 0.539 0.653 **0.622**
**general** 0.998 0.983 0.829 0.992 0.990 **0.978** 0.938 0.979 **0.980**
**rules** **0.999** 0.981 **0.901** 0.987 **0.995** 0.976 **0.979** 0.981 0.975
**ours** 0.998 **0.991** 0.869 **0.995** 0.993 0.960 0.936 **0.987** 0.978
**general** 0.751 **0.827** 0.784 0.692 **0.780** **0.714** 0.665 0.735 0.784
**rules** 0.724 0.775 0.764 0.686 0.745 0.651 0.667 0.750 0.743
**ours** **0.753** 0.803 **0.817** **0.701** 0.773 0.697 **0.679** **0.759** **0.791**
Table 4: Comparison on human agreement between different judging methods on sampled ALIGNBENCH, rated
by **gpt-4-0613** . The “general” method is a translated version of Zheng et al. (2023)’s with minor modifications.
“ _Fund._ ” denotes Fundamental Language Ability, “ _Chi._ ” denotes Advanced Chinese Understanding, “ _Open._ ” denotes
Open-ended Questions, “ _Writ._ ” denotes Writing Ability, “ _Role._ ” denotes Task-oriented Role Play, “ _Pro_ ” denotes
“Professional Knowledge”, “ _Math._ ” denotes Mathematics, and “ _Logic._ ” denotes Logical Reasoning.
Judge Results ∆ **WR**
**Winner** [Win Rate]
(w/o tie)(%) (%)
A B A Win Tie B Win
ours general 217 94 155 **ours** 58.3 +12.4
ours rules 241 102 139 **ours** 63.4 +20.4
rules general 186 167 147 **rules** 55.9 +7.8
Table 5: Results of quality evaluation (pairwise comparison) by human annotators. Our scoring methods
combining rule-calibration and multi-dimensional criteria can produce consistently better explanations.
thereby warranting further exploration. We have
conducted a comprehensive human annotation experiment, aiming to measure the agreement between evaluations adjudicated by human annotators
and our method.
**Dataset.** We randomly sample a subset of 400
queries from the complete ALIGNBENCH dataset.
To make sure each category consists of enough
samples to produce reliable results, smaller categories are upsampled. To cover LLMs with a
wider levels of capability, we adopt answers from
8 LLMs, including GPT-4 (OpenAI, 2023), three
versions of ChatGLM series (Zeng et al., 2022; Du
et al., 2022), Sparkdesk, Qwen-plus-v1-search(Bai
et al., 2023a), InternLM-7B-Chat (Team, 2023)
and Chinese-Llama2-7B-Chat, producing a total
of 3200 question-answer pairings. Subsequent to
the compilation of the evaluation set, the questionanswer-reference triples are delivered to human
annotators, tasked with assigning quality ratings to
the answers according to the references. Given the
inherent limitations bound to human cognition, annotators are instructed to employ a rating on a scale
from 1 to 5. The scores are indicative of response
quality, with higher scores epitomizing superior
quality and profound satisfaction. In particular, a
score of 1 marks irrelevant, incorrect, or potentially
harmful responses.
[https://xinghuo.xfyun.cn/](https://xinghuo.xfyun.cn/)
**Baselines.** The experiment incorporated two robust baseline comparisons to benchmark our evaluation approach. Note that all the methods use
GPT-4 to evaluate for fairness. **1) General grad-**
**ing:** which leverages a translated and then slightly
modified Chinese version of the evaluation prompt
employed in MT-bench (Zheng et al., 2023). **2)**
**Rule-calibrated grading:** To better instruct the
model to compare the model answer and reference
answer and reduce score variances, we incorporate grading rules into the evaluation process. The
method comprises five scoring intervals, each associated with a specific quality description. The
reference answer is anchored to the score of 8, serving as a relative scoring pivot.
**Metrics.** To comprehensively measure the agreement between the GPT-4 judges and human evaluators, we adopt several metrics. **1) Sample-level**
**Pearson Correlation:** measures correlations at
sample level, by averaging the Pearson score of
each sample. **2) System-level Pearson Correla-**
**tion:** measures correlations at system level. It
calculates the Pearson coefficient between humanjudge and model-judge average scores of LLMs. **3)**
**Pairwise Agreement (w/o tie):** For each response,
human-judge and model-judge scores were converted into pairwise comparisons (excluding ties).
**Analysis.** Results of agreement experiment are
presented in Table 4. It shows that our point-wise
multi-dimensional rules-calibrated LLM-as-Judge
method performs best, particularly on the Samplelevel Pearson metric and the Pairwise Agreement
(w/o tie) metric, thereby substantiating the excellent agreement with human judges. Furthermore, it
is noteworthy that all methods considered demonstrates impeccable performance on the Systemlevel Pearson metric, indicating the reliability and
robustness of the LLM-as-judge.
|Reasoning 中文推理 Language 中文语言<br>model Overall<br>Avg. Math. Logi. Avg. Fund. Chi. Open. Writ. Role. Pro.<br>推理 数学 逻辑 语言 基本 中文 综合 文本 角色 专业<br>模型 总分<br>总分 计算 推理 总分 任务 理解 问答 写作 扮演 能力|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|**model**<br>**Overall**<br>Reasoning` 中文推理`<br>Language` 中文语言`<br>**_Avg._**<br>**Math.**<br>**Logi.**<br>**Avg.**<br>**Fund.**<br>**Chi.**<br>**Open.**<br>**Writ.**<br>**Role.**<br>**Pro.**<br>`模型`<br>`总分`<br>`推理`<br>`总分`<br>`数学`<br>`计算`<br>`逻辑`<br>`推理`<br>`语言`<br>`总分`<br>`基本`<br>`任务`<br>`中文`<br>`理解`<br>`综合`<br>`问答`<br>`文本`<br>`写作`<br>`角色`<br>`扮演`<br>`专业`<br>`能力`|**model**<br>**Overall**<br>Reasoning` 中文推理`<br>Language` 中文语言`<br>**_Avg._**<br>**Math.**<br>**Logi.**<br>**Avg.**<br>**Fund.**<br>**Chi.**<br>**Open.**<br>**Writ.**<br>**Role.**<br>**Pro.**<br>`模型`<br>`总分`<br>`推理`<br>`总分`<br>`数学`<br>`计算`<br>`逻辑`<br>`推理`<br>`语言`<br>`总分`<br>`基本`<br>`任务`<br>`中文`<br>`理解`<br>`综合`<br>`问答`<br>`文本`<br>`写作`<br>`角色`<br>`扮演`<br>`专业`<br>`能力`|**model**<br>**Overall**<br>Reasoning` 中文推理`<br>Language` 中文语言`<br>**_Avg._**<br>**Math.**<br>**Logi.**<br>**Avg.**<br>**Fund.**<br>**Chi.**<br>**Open.**<br>**Writ.**<br>**Role.**<br>**Pro.**<br>`模型`<br>`总分`<br>`推理`<br>`总分`<br>`数学`<br>`计算`<br>`逻辑`<br>`推理`<br>`语言`<br>`总分`<br>`基本`<br>`任务`<br>`中文`<br>`理解`<br>`综合`<br>`问答`<br>`文本`<br>`写作`<br>`角色`<br>`扮演`<br>`专业`<br>`能力`|**Math.**<br>**Logi.**<br>`数学`<br>`计算`<br>`逻辑`<br>`推理`|**Math.**<br>**Logi.**<br>`数学`<br>`计算`<br>`逻辑`<br>`推理`|**Fund.**<br>**Chi.**<br>**Open.**<br>**Writ.**<br>**Role.**<br>**Pro.**<br>`基本`<br>`任务`<br>`中文`<br>`理解`<br>`综合`<br>`问答`<br>`文本`<br>`写作`<br>`角色`<br>`扮演`<br>`专业`<br>`能力`|
|gpt-4-1106-preview<br>gpt-4-0613<br>chatglm-turbo`(智谱清言)`<br>erniebot-3.5`(文心一言)`<br>gpt-3.5-turbo-0613<br>chatglm-pro`(智谱清言)`<br>spark_desk_v2`(讯飞星火)`<br>Qwen-14B-Chat<br>Baichuan2-13B-Chat<br>ChatGLM3-6B<br>Baichuan2-7B-Chat<br>InternLM-20B<br>Qwen-7B-Chat<br>ChatGLM2-6B<br>InternLM-Chat-7B<br>Chinese-LLaMA-2-7B-Chat<br>LLaMA-2-13B-Chinese-Chat|**8.01**<br>**7.53**<br>**6.24**<br>**6.14**<br>**6.08**<br>**5.83**<br>**5.74**<br>**5.72**<br>**5.25**<br>**4.97**<br>**4.97**<br>**4.96**<br>**4.91**<br>**4.48**<br>**3.65**<br>**3.57**<br>**3.35**|**_7.73_**<br>_7.47_<br>_5.00_<br>_5.15_<br>_5.35_<br>_4.65_<br>_4.73_<br>_4.81_<br>_3.92_<br>_3.85_<br>_3.66_<br>_3.66_<br>_3.73_<br>_3.39_<br>_2.56_<br>_2.68_<br>_2.47_|**7.80**<br>**7.66**<br>7.56<br>7.37<br>4.74<br>5.26<br>5.03<br>5.27<br>5.68<br>5.02<br>4.54<br>4.75<br>4.71<br>4.74<br>4.91<br>4.71<br>3.76<br>4.07<br>3.55<br>4.14<br>3.56<br>3.75<br>3.39<br>3.92<br>3.62<br>3.83<br>3.16<br>3.61<br>2.45<br>2.66<br>2.29<br>3.07<br>2.21<br>2.73|**8.29**<br>7.59<br>7.49<br>7.13<br>6.82<br>7.01<br>6.76<br>6.63<br>6.59<br>6.10<br>6.28<br>6.26<br>6.09<br>5.58<br>4.75<br>4.46<br>4.23|**7.99**<br>7.33<br>**8.61**<br>**8.67**<br>**8.47**<br>**8.65**<br>7.81<br>6.93<br>7.42<br>7.93<br>7.51<br>7.94<br>6.82<br>7.17<br>8.16<br>7.77<br>7.76<br>7.24<br>6.62<br>**7.60**<br>7.26<br>7.56<br>6.83<br>6.90<br>6.71<br>5.81<br>7.29<br>7.03<br>7.28<br>6.77<br>6.51<br>6.76<br>7.47<br>7.07<br>7.34<br>6.89<br>5.84<br>6.97<br>7.29<br>7.18<br>6.92<br>6.34<br>6.90<br>6.36<br>6.74<br>6.64<br>6.59<br>6.56<br>6.22<br>6.05<br>7.11<br>6.97<br>6.75<br>6.43<br>5.75<br>5.29<br>6.71<br>6.83<br>6.28<br>5.73<br>5.81<br>5.50<br>7.13<br>6.84<br>6.53<br>5.84<br>5.96<br>5.50<br>7.18<br>6.19<br>6.49<br>6.22<br>6.40<br>5.74<br>6.26<br>6.31<br>6.19<br>5.66<br>4.91<br>4.52<br>6.66<br>6.25<br>6.08<br>5.08<br>4.34<br>4.09<br>5.82<br>4.89<br>5.32<br>4.06<br>4.31<br>4.26<br>4.50<br>4.63<br>4.91<br>4.13<br>4.13<br>3.31<br>4.79<br>3.93<br>4.53<br>4.71|
Table 6: ALIGNBENCH rated by **gpt-4-0613** . “ _Fund._ ” denotes Fundamental Language Ability, “ _Chi._ ” denotes
Advanced Chinese Understanding, “ _Open._ ” denotes Open-ended Questions, “ _Writ._ ” denotes Writing Ability, “ _Role._ ”
denotes Task-oriented Role Play, “ _Pro_ ” denotes “Professional Knowledge”, “ _Math._ ” denotes Mathematics, and
“ _Logic._ ” denotes Logical Reasoning.
**4.2** **Quality Evaluation**
Previous study (Zheng et al., 2023) mainly focuses
on the agreement between model judges and human judges when evaluating LLM-as-Judge methods. However, considering the interpretability and
readability as an evaluation process and the clarity
and helpfulness as a feedback-providing approach,
it is also of great significance to evaluate the quality
of the explanation generated by the model judge before grading. To assess the quality of explanations
generated by the methods, as well as to ascertain
the final rating, we conduct a pairwise quality comparison experiment.
**Experiment Settings.** To compare the quality
of the explanations given by our method and the
two baselines mentioned above, we sample 500
question-answer pairs from the ALIGNBENCH
dataset and generated explanations using the three
LLM-as-Judge methods. Then, we pair three explanations under each sample in pairs, producing
1,500 samples, subsequently delivered into human
preference comparisons.
Given a question, a model answer, a reference
answer, and two explanations (denoted as A and
B) given by GPT-4 judges, human annotators are
instructed to compare the quality of explanations.
To guide the human evaluators on the comparison,
we make a quality-judgement guideline as follow:
**1) Rationality** : if the explanation is reasonable,
correct, and fair. **2) Readability** : if the explanation
is logical, well-organized, comprehensive, and detailed. **3) Consistency** : if the explanation and the
final rating are consistent. which also serves as a
brief standard of high-quality explanations.
**Analysis.** Results of quality evaluations are presented in Table 5. Results indicate that our method
generate the most high-quality explanation and
helpful feedback, defeating two baselines in pairwise comparisons with high win rates. Furthermore, it’s worth noting that rules-based grading
outperforms general grading in terms of explanation quality, proving that the grading rules can provide a clear reference-based standard and, therefore
contribute to the clear comparison of the reference
answer and the model answer.
**5** **Evaluating LLMs with ALIGNBENCH**
Based on the validness of ALIGNBENCH’s LLMas-Judge scoring, we systematically benchmark a
wide array of LLMs on Chinese alignment with the
help of ALIGNBENCH. We use gpt-4-0613 (OpenAI, 2023) as the judge model to evaluate model
performances.
**Main Results.** Results are shown in Table 6.
The results highlight that most of the evaluated
close-sourced LLMs successfully achieve commendable scores (above or near 6 points). This
demonstrates the potent capability of these ad
vanced LLMs in fulfilling user intents with highquality responses, showing a relatively excellent
level of alignment. It is particularly promising
for the Chinese LLM community to note that certain innovative Chinese-developed LLMs display
performance either equivalent to or marginally surpassing that of gpt-3.5-turbo (OpenAI, 2022),
drawing closer to the recognized leading model,
gpt-4-1106-preview (OpenAI, 2023).
**Analysis: Reasoning Drawbacks.** The evaluation
has revealed some drawbacks in reasoning abilities amongst Chinese-oriented LLMs, which require further attention and development. Given the
leading capability of gpt-4-1106-preview (OpenAI, 2023) in Logical Reasoning and Mathematics
(with scores of 7.66 and 7.80 respectively), there is
substantial room for improvement in light of their
significantly weaker performance.
**Analysis: Chinese Abilities.** Furthermore, certain categories, such as Advanced Chinese Understanding, underline the necessity for an LLM
to possess a proficient understanding of the Chinese language, culture, and history. Our findings
suggest that while gpt-4-1106-preview (OpenAI, 2023) performs relatively well (ranks the second) in these areas, the best-performing Chinesedeveloped LLMs achieved comparable or even better performance in Advanced Chinese Understanding category, potentially attributed to incorporating
more culture-relevant and high-quality Chinesespecific instruction-tuning data in the alignment of
these Chinese LLMs.
**Analysis:** **Open-Source Achievements.** The
results indicated that the top-tier Chinese opensourced LLMs such as Qwen-14B-Chat (Bai et al.,
2023a), Baichuan2-13B-Chat (Yang et al., 2023),
have shown great performance in terms of instruction following and high-quality generation, approaching the performance of some close-sourced
LLMs. Based on our evaluation, Chinese opensourced LLMs have demonstrated their potential
to become alternatives and even competitors with
close-sourced LLMs, showing the inspiring dynamism of the Chinese LLM community.
**Analysis: Dimensions.** From Table 10, we observe that gpt-4-1106-preview (OpenAI, 2023)
has achieved the highest scores in all dimensions.
Generally, the dimensional results are aligned with
the overall results in Table 6. Furthermore, in terms
of correctness and user satisfaction, there is a significant gap between gpt-4 series (OpenAI, 2023)
and other LLMs. We show complete results in Appendix A.3 and hope the evaluation can better assist
the researchers to understand and enhance Chinese
LLMs’ alignment on multiple dimensions.
**6** **Related Work**
**Evaluation of LLMs.** Self-supervisedly (Liu et al.,
2021) pre-trained LLMs (OpenAI, 2023; Anil et al.,
2023) exhibit excellent performance in language
tasks, bringing severe challenges to the effective
and comprehensive evaluation of LLMs (Chang
et al., 2023; Zhuang et al., 2023; Xu et al., 2023b).
Faced with the challenges, researchers have proposed benchmarks (Hendrycks et al., 2021; Zhong
et al., 2023; Huang et al., 2023a; Cobbe et al., 2021;
Chen et al., 2021; Bai et al., 2023b; Zhang et al.,
2023) focused on measuring atomic abilities, which
fall short to consider real-scenario usages enough.
**LLM-as-Judge Methods.** LLMs have shown great
potential in evaluating the text quality with high
agreement with human judges (Li et al., 2023;
Wang et al., 2023a; Liu et al., 2023b; Zheng et al.,
2023) and providing helpful feedbacks (Wang et al.,
2023c; Cui et al., 2023) serving as guides for improvement. However, some potential bias and risks
are also identified (Zheng et al., 2023; Wang et al.,
2023b; Ke et al., 2023) when using LLMs-as-judge.
**Alignment of LLMs.** Alignment, including following human instructions and providing helpful
assistance, is crucial for LLMs’ pragmatic applications (Liu et al., 2023c). To aligning LLMs with
humans, related methods include supervisedly finetune LLMs (Wang et al., 2023d; Xu et al., 2023a;
Sun et al., 2023) and improve further through reinforcement learning from human feedback (Stiennon et al., 2020; Ouyang et al., 2022; Glaese
et al., 2022; Rafailov et al., 2023). However, it’s
challenging to effectively evaluate the capabilities
of alignment since the high expense to acquire human preferences and the open-ended reference-free
feature in real application scenarios.
**7** **Conclusion**
We introduce ALIGNBENCH, a comprehensive multi-dimensional benchmark for evaluating
LLMs’ alignment in Chinese. A sustainable humanin-the-loop data curation pipeline and a better
LLM-as-Judge method has been tailored to enable
ALIGNBENCH’s high-quality automatic evaluation
of LLMs’ Chinese alignment. Benchmarking results of 17 Chinese-supported LLMs are reported.
**Limitations**
Here we discuss several limitations of this work.
**To improve automatic LLM-as-Judge.** While we
show that leveraging GPT-4 as judge could achieve
relatively high correlation with human evaluation,
there is a large room for improving this correlation and granularity. Additionally, it is shown that
LLM-as-Judge has some potential biases (Zheng
et al., 2023), including position, verbosity, and selfenhancement. These biases could harm the correctness of evaluation on certain models. We leave
these open problems for future research.
**To cover more topics and queries.** Despite
ALIGNBENCH has a relatively large query set in its
class, it still needs to be enriched for a more stable
and reliable LLM evaluation. Also, ALIGNBENCH
does not include evaluation over long-text queries,
which could be added in its future version.
**Acknowledgements**
This work is supported by Technology and Innovation Major Project of the Ministry of Science and
Technology of China under Grant 2022ZD0118600,
Natural Science Foundation of China (NSFC)
62276148 and 62425601, the New Cornerstone Science Foundation through the XPLORER PRIZE.
**References**
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin
Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
[Chen, Eric Chu, et al. 2023. PaLM 2 technical report.](https://doi.org/10.48550/arXiv.2305.10403)
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang,
Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei
Huang, et al. 2023a. Qwen technical report. _arXiv_
_preprint arXiv:2309.16609_ .
Yuntao Bai, Saurav Kadavath, Sandipan Kundu,
Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini,
Cameron McKinnon, et al. 2022. Constitutional
ai: Harmlessness from ai feedback. _arXiv preprint_
_arXiv:2212.08073_ .
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu,
Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao
Liu, Aohan Zeng, Lei Hou, et al. 2023b. Longbench:
A bilingual, multitask benchmark for long context
understanding. _arXiv preprint arXiv:2308.14508_ .
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In
_Proceedings of the 34th International Conference on_
_Neural Information Processing Systems_, NIPS’20,
Red Hook, NY, USA. Curran Associates Inc.
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu,
Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi,
Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang,
Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie.
2023. A survey on evaluation of large language models. _arXiv preprint arXiv:2307.03109_ .
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
[Sutskever, and Wojciech Zaremba. 2021. Evaluating](http://arxiv.org/abs/2107.03374)
[large language models trained on code.](http://arxiv.org/abs/2107.03374)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. _arXiv preprint_
_arXiv:2204.02311_ .
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao,
Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and
Maosong Sun. 2023. Ultrafeedback: Boosting language models with high-quality feedback. _arXiv_
_preprint arXiv:2310.01377_ .
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding,
Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm:
General language model pretraining with autoregressive blank infilling. In _Proceedings of the 60th An-_
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers)_, pages 320–335.
Oren Etzioni, Michele Banko, Stephen Soderland, and
Daniel S Weld. 2008. Open information extraction from the web. _Communications of the ACM_,
51(12):68–74.
Amelia Glaese, Nat McAleese, Maja Tr˛ebacz, John
Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh,
Laura Weidinger, Martin Chadwick, Phoebe Thacker,
Lucy Campbell-Gillingham, Jonathan Uesato, PoSen Huang, Ramona Comanescu, Fan Yang, Abigail
See, Sumanth Dathathri, Rory Greig, Charlie Chen,
Doug Fritz, Jaume Sanchez Elias, Richard Green,
Soˇna Mokrá, Nicholas Fernando, Boxi Wu, Rachel
Foley, Susannah Young, Iason Gabriel, William Isaac,
John Mellor, Demis Hassabis, Koray Kavukcuoglu,
Lisa Anne Hendricks, and Geoffrey Irving. 2022.
[Improving alignment of dialogue agents via targeted](http://arxiv.org/abs/2209.14375)
[human judgements.](http://arxiv.org/abs/2209.14375)
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
2021. Measuring massive multitask language understanding. In _International Conference on Learning_
_Representations_ .
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei
Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu,
Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao
Fu, Maosong Sun, and Junxian He. 2023a. Ceval: A multi-level multi-discipline chinese evaluation suite for foundation models. _arXiv preprint_
_arXiv:2305.08322_ .
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei
Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu,
Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023b.
C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. _arXiv preprint_
_arXiv:2305.08322_ .
Pei Ke, Bosi Wen, Zhuoer Feng, Xiao Liu, Xuanyu Lei,
Jaile Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao
Dong, Hongning Wang, Jie Tang, and Minlie Huang.
2023. Critiquellm: Scaling llm-as-critic for effective
and explainable evaluation of large language model
generation. _arXiv preprint arXiv:2311.18702_ .
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori,
Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and
Tatsunori B. Hashimoto. 2023. Alpacaeval: An automatic evaluator of instruction-following models.
[https://github.com/tatsu-lab/alpaca_eval.](https://github.com/tatsu-lab/alpaca_eval)
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris
Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language
models. _arXiv preprint arXiv:2211.09110_ .
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding,
Kaiwen Men, Kejuan Yang, et al. 2023a. Agentbench: Evaluating llms as agents. _arXiv preprint_
_arXiv:2308.03688_ .
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu
Wang, Jing Zhang, and Jie Tang. 2021. Selfsupervised learning: Generative or contrastive. _IEEE_
_transactions on knowledge and data engineering_,
35(1):857–876.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang,
Ruochen Xu, and Chenguang Zhu. 2023b. G-eval:
Nlg evaluation using gpt-4 with better human alignment. _arXiv preprint arXiv:2303.16634_ .
Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying
Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov,
[Muhammad Faaiz Taufiq, and Hang Li. 2023c. Trust-](http://arxiv.org/abs/2308.05374)
[worthy llms: a survey and guideline for evaluating](http://arxiv.org/abs/2308.05374)
[large language models’ alignment.](http://arxiv.org/abs/2308.05374)
[OpenAI. 2022. Introducing chatgpt.](https://openai.com/blog/chatgpt)
OpenAI. 2023. GPT-4 technical report. _arXiv preprint_
_arXiv:2303.08774_ .
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
[Training language models to follow instructions with](http://arxiv.org/abs/2203.02155)
[human feedback.](http://arxiv.org/abs/2203.02155)
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano
Ermon, Christopher D Manning, and Chelsea Finn.
2023. Direct preference optimization: Your language
model is secretly a reward model. _arXiv preprint_
_arXiv:2305.18290_ .
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta, Adrià
Garriga-Alonso, et al. 2023. Beyond the imitation
game: Quantifying and extrapolating the capabilities of language models. _Transactions on Machine_
_Learning Research_ .
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel
Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford,
Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. _Advances_
_in Neural Information Processing Systems_, 33:3008–
3021.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin
Zhang, Zhenfang Chen, David Cox, Yiming Yang,
and Chuang Gan. 2023. Principle-driven selfalignment of language models from scratch with
minimal human supervision. _arXiv preprint_
_arXiv:2305.03047_ .
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2019. Commonsenseqa: A question
answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of_
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers)_, pages
4149–4158.
InternLM Team. 2023. Internlm: A multilingual language model with progressively enhanced
capabilities. [https://github.com/InternLM/](https://github.com/InternLM/InternLM-techreport)
[InternLM-techreport.](https://github.com/InternLM/InternLM-techreport)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. _arXiv preprint_
_arXiv:2302.13971_ .
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy,
and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. _Advances in neural information_
_processing systems_, 32.
Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang
Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou.
2023a. Is chatgpt a good nlg evaluator? a preliminary
study. _arXiv preprint arXiv:2303.04048_ .
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai
Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui.
2023b. Large language models are not fair evaluators.
_arXiv preprint arXiv:2305.17926_ .
Sida I Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic
classification. In _Proceedings of the 50th Annual_
_Meeting of the Association for Computational Lin-_
_guistics (Volume 2: Short Papers)_, pages 90–94.
Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen,
[Lifan Yuan, Hao Peng, and Heng Ji. 2023c. Mint:](http://arxiv.org/abs/2309.10691)
[Evaluating llms in multi-turn interaction with tools](http://arxiv.org/abs/2309.10691)
[and language feedback.](http://arxiv.org/abs/2309.10691)
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa
Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh
[Hajishirzi. 2023d. Self-instruct: Aligning language](http://arxiv.org/abs/2212.10560)
[models with self-generated instructions.](http://arxiv.org/abs/2212.10560)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural_
_Information Processing Systems_, 35:24824–24837.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
Jiang. 2023a. [Wizardlm: Empowering large lan-](http://arxiv.org/abs/2304.12244)
[guage models to follow complex instructions.](http://arxiv.org/abs/2304.12244)
Liang Xu, Anqi Li, Lei Zhu, Hang Xue, Changtai Zhu,
Kangkang Zhao, Haonan He, Xuanwei Zhang, Qiyue
Kang, and Zhenzhong Lan. 2023b. Superclue: A
comprehensive chinese large language model benchmark. _arXiv preprint arXiv:2307.15020_ .
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong
Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian Wang,
Dong Yan, Fan Yang, et al. 2023. Baichuan 2:
Open large-scale language models. _arXiv preprint_
_arXiv:2309.10305_ .
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b:
An open bilingual pre-trained model. _arXiv preprint_
_arXiv:2210.02414_ .
Hui Zeng. 2023. Measuring massive multitask chinese
understanding. _arXiv preprint arXiv:2304.12986_ .
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
_arXiv preprint arXiv:2205.01068_ .
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating
text generation with bert. In _International Confer-_
_ence on Learning Representations_ .
Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun,
Yongkang Huang, Chong Long, Xiao Liu, Xuanyu
Lei, Jie Tang, and Minlie Huang. 2023. Safetybench: Evaluating the safety of large language models with multiple choice questions. _arXiv preprint_
_arXiv:2309.07045_ .
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang,
Joseph E. Gonzalez, and Ion Stoic. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena.
_arXiv preprint arXiv:2306.05685_ .
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
and Nan Duan. 2023. Agieval: A human-centric
benchmark for evaluating foundation models. _arXiv_
_preprint arXiv:2304.06364_ .
Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda
Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng,
[Weinan Zhang, and Ting Liu. 2023. Through the lens](http://arxiv.org/abs/2308.07902)
[of core competency: Survey on evaluation of large](http://arxiv.org/abs/2308.07902)
[language models.](http://arxiv.org/abs/2308.07902)
**A** **Appendix**
**A.1** **Detailed Description of the Dataset**
In this section, we will further elaborate the overall
composition of ALIGNBENCH. In order to perform
a systematic evaluation, we conducted a thorough
real-user query analysis in our LLM-based chat
service. We framed a comprehensive taxonomy of
the LLMs’ abilities based on the Chinese real-user
demands. The overall taxonomy consists of 8 categories, namely Fundamental Language Ability, Advanced Chinese Understanding, Open-ended Questions, Writing Ability, Mathematics, Task-oriented
Role Play, and Professional Knowledge. Each category contains several subcategories, effectively
serving as a complementary materials to deeper
understand the composition of our dataset.
Figure 3: An exemplar scoring process of ALIGNBENCH on _Logical Reasoning_ category. Given question, reference,
and LLM’s answer, ALIGNBENCH harnesses multi-dimensional rule-calibrated LLM-as-Judge to produce a comprehensive comment, consequently giving an integral score of the LLM response.
**A.1.1** **Fundamental Language Ability**
This category focuses on the basic language understanding and processing tasks, which are derived
from traditional NLP tasks such as entity extraction,
text classification, and commonsense knowledge.
- **Commonsense knowledge.** This subcategory
mainly evaluates the model’s ability to master commonsense knowledge and fundamental
facts, explain primary concepts, and form a
basic understanding of the physical world.
- **Reading Comprehension.** This subcategory
mainly evaluates the model’s ability to process
and understand the provided textual material
and answer users’ questions based on that.
- **Translation.** This subcategory requires models to master different kinds of languages and
understand the interactions between them. Currently, this subcategory mainly covers English
and Chinese.
- **Text Classification.** This subcategory tests
model’s ability to classify text data into given
categories with different characteristics.
- **Information Extraction.** This subcategory
measures model’s ability to extract knowledge
from text data, which lays a solid foundation
for more challenging tasks.
**A.1.2** **Advanced Chinese Understanding**
This category aims to evaluate the abilities to understand, analyze and produce reasonable and correct
responses faced with Chinese-specific problems,
including Chinese characters, history and culture.
- **Character-wise Understanding.** This subcategory focuses on the real queries related to Chinese characters and phrases, requiring LLMs
to understand the complex structure and contextual meaning of certain Chinese characters
and utilize them to form reasonable phrases and
sentences.
- **Cultural Understanding.** This subcategory is
intended to measure models’ abilities to master
the Chinese language at a higher level, including understanding implicit meaning, mastering rhetorical techniques such as humor and
sarcasm, utilizing Chinese-specific idioms and
phrases, and mastering knowledge related to
Chinese culture and history.
**A.1.3** **Open-ended Questions**
This category represents an important role for the
LLMs to become a critic or an advisor for the
users. Equipped with sufficient knowledge and
advanced reasoning ability, LLMs are capable of
providing fresh thoughts, creative perspectives, feasible advice, and comprehensive recommendations
for users. Therefore, this category measures the
LLMs’ ability to offer opinions and suggestions.
- **Opinion Expression.** This subcategory requires the model to offer reasonable thoughts
to certain subjects, people, events, or circumstances following users’ instructions. Containing open-ended questions without groundtruths, the logical smoothness and the information correctness were of high priority during
evaluation.
- **Suggestion Offering.** This subcategory aims
to evaluate the ability to analyze users’ intentions and offer some feasible suggestions or recommendations. Being selected carefully, the
questions cover a wide range of topics from
daily life to professional advice.
**A.1.4** **Writing Ability**
Regarded as one of the most frequently used capabilities, writing abilities play a vital role in LLMs’
applications. Therefore, we systematically framed
this category into 4 subcategories and selected typical real-user writing instructions, which require not
only an excellent mastery of language but also a
high level of thought formulation and creativity.
- **Practical Writing.** Practical writing is a practical style of articles formed in long-term social
practice activities and is often used when dealing with public and private affairs in people’s
daily work, including speech scripts, workrelated emails, personal statements, regulations,
etc. Additionally, practical writing assistance
can play a vital part in alleviating workloads
and boosting productivity.
- **Creative Writing.** Creative writing is a writing style that requires a high level of creativity, emotions, aesthetic taste, and delicate design, including writing novels, essays, poems,
lyrics, and even jokes. It represents not only
higher requirements for the LLMs to fulfill
users’ writing instructions as well as generating high-quality outputs but also a promising
attempt at AI creativity.
- **Professional Writing.** Professional writing
usually contains domain-specific materials and
has high requirements for professional format
and content. Considering the demands to use
LLMs as professional writing assistants, this
subcategory includes instructions like academic
reports, comprehensive surveys, legal documents, financial analysis, etc.
- **Custom Writing.** With lots of the writing
queries classified into the above 3 subcategories, there exist other highly personalized
and context-related instructions that require our
attention. Therefore, we collected them into
this subcategory, for instance, rewriting with
casual style, correction of grammar errors, expansion of the given text, imitation writing, etc.
**A.1.5** **Logical Reasoning**
This category aims to evaluate the abilities to understand, analyze, and produce reasonable and correct
responses faced with Chinese-specific textual logic
problems. Therefore, this category plays a vital
role in our overall design.
- **Proof** Proof problems are a widely used technique for determining the correctness of arguments and propositions. It involves verifying a
proposed proposition through a series of legitimate reasoning steps to ensure that it is valid
and reasonable within a given logical framework. This subcategory focuses on several intellectual qualities such as LLMs’ ability to reason logically, reverse thinking, mathematical
symbolic representation, and clarity of reasoning.
- **Reasoning** In logical reasoning, reasoning
methods are required to solve a variety of complex problems related to arguments and propositions. Reasoning problems require LLMs to
use a combination of logical reasoning skills,
creative thinking, mathematical symbolic representation, and clear reasoning processes in order to solve a variety of complex logical puzzles
effectively. Compared with Proof subcategory,
this subcategory doesn’t provide ground-truths,
increasing difficulty and complexity.
**A.1.6** **Mathematics**
Considering its logical complexity and a large proportion, math problems are regarded as a necessary
field to evaluate LLMs. We collected math problems in different difficulty levels from elementary
mathematics to advanced mathematics and in different formats, including calculations, simple problem
solving, concept explanation, theorem-proof, etc.
- **Elementary Mathematics.** The elementary
mathematics subcategory is a branch of mathematics that examines the capability to master
basic mathematical concepts, techniques, and
methods by LLMs. It typically includes but is
not limited to arithmetic, Algebra, geometry,
probability, and statistics.
- **Advanced Mathematics.** This subcategory is
from a branch of mathematics that covers a
wide range of mathematical topics and concepts and is designed to provide students with
in-depth knowledge of mathematical theory and
problem-solving skills. Advanced mathematics
typically includes calculus, differential equations, linear algebra, probability and statistics,
mathematical proofs, functions of a complex
variable, and linear programming. These elements form the core of advanced mathematics
and provide the basis for solving complex mathematical problems in a variety of fields.
- **Applied Mathematics.** This subcategory focuses on how mathematical theories and methods can be applied to solve real-life problems
and challenges. Distinguished from higher and
elementary mathematics, applied mathematics
not only requires a certain mathematical ability
but also tests the linguistic comprehension and
mathematical modeling skills of large language
models.
**A.1.7** **Task-oriented Role Play**
Our real-user scenario analysis indicates that lots
of users request the model to play as a specific
identity to perform corresponding tasks, which is
summarized as task-oriented role play. In order
to evaluate the fulfillment of users’ instructions
and the quality of responses when role-play, we
collected role-play instructions and constructed this
category.
- **Celebrity.** The instructions in this subcategory
perform the role-play of realistic celebrities,
such as politicians, athletes, etc.
- **Emotional.** The instructions in this subcategory assign the identity of real-life roles such
as friends, relatives, pets, etc., and provide emotional support for the users.
- **Entertainment.** The instructions in this subcategory involves games played between the
user and the LLM, requiring imagination and
creativity.
- **Functional.** The instructions in this subcategory assign the identity of roles with different occupations, experience, or knowledge and
achieve some specific tasks.
- **Daily life.** The instructions in this subcategory
perform the role-play of different kinds of activities in a more realistic life scenario.
**A.1.8** **Professional Knowledge**
With their advanced knowledge abilities, LLMs
have proven their competence in solving domainspecific problems that require professional knowledge. This category aims to evaluate LLMs’ abilities in specific domains, for instance, physics, history, music, law, etc.
- **Physics.** Physics is the natural science that
studies the nature and properties of matter and
energy.
- **Chemistry.** Chemistry is the study of the nature, composition, structure, and patterns of
change of substances. The study of chemistry involves the interrelationships between
substances or the correlation between matter
and energy.
- **Computer Science.** Computer science is the
systematic study of the theoretical foundations
of information and computation and the practical techniques of how they are implemented
and applied in computer systems.
- **Biology or Medicine.** Biology consists of the
empirical and extensive study of all aspects of
life. Medicine is an applied science that aims
at and studies the protection of human health
and the enhancement of physical and mental
fitness.
- **Economics.** Economics is the social science
discipline that studies the relationship between
goods and services, including all purchasing,
production, distribution, and consumption behaviors therein.
- **Astronomy.** Astronomy is a natural science
that studies celestial bodies and astronomical
phenomena.
- **Sociology.** Sociology uses a variety of research
methods of empirical investigation and critical
analysis to develop and refine a body of knowledge about the structure of human societies,
social action, or social relations, and to apply
that knowledge.
- **History.** History, the study of human history
as an object of study, is a form of knowledge
in which human beings sift and combine their
historical materials
- **Music.** Music, broadly speaking, is any art
that consists of sound. All human cultures have
music, which means that the performance of
music is a universal phenomenon in all cultures.
- **Law.** Law is a system of rules, the implementa
tion of which is ensured by the coercive power
of the state, which regulates the behavior of
individuals.
- **Sport.** Sports or sports competition is a social activity that aims at strengthening physical fitness, improving skills, and enriching cultural life through physical exercise, technology,
training, and competition.
- **Geography.** Geography is the study of the
Earth and its features, inhabitants, and phenomena, the study of the Earth’s surface circles
interacting with each other, and its spatial differences and the process of change.
- **Literature.** Literature, in a narrow sense, is a
kind of language art, i.e., an art that uses language and writing as a means to visualize and
reflect objective social life and to express the
thoughts and feelings of the subjective author.
- **Others.** This subcategory contains questions
that not covered by the above domains.
**A.2** **Prompts and Details of Methods**
In our multi-dimensional analysis, we select different dimensions for different categories to provide
a more comprehensive and reasonable evaluation.
The detailed selections of the dimensions are described in Table 8 and the descriptions of the dimensions are described in Table 9.
All the prompts used in our experiments are displayed as follows.
**General LLM-as-Judge** is displayed in Figure 6.
**Rule-calibrated LLM-as-Judge** is displayed in
Figure 5.
**Our Multi-dimensional Rule-calibrated LLM-**
**as-Judge** is displayed in Figure 4.
**A.3** **Dimensional Performance**
Our multi-dimensional rule-calibrated LLM-asJudge method provides multi-dimensional analysis
and scores. Therefore, we also calculated dimensional average scores and reported them in Table 10.
For each dimension, the dimensional score was
averaged across all the samples that evaluate the
dimension ability, since each category could be
used for evaluating several but not all dimensions
according to the features of the category.
**A.4** **Case Study**
**A.4.1** **Misleading.**
Upon meticulously verifying and modifying the
reference answers to ensure correctness, we ob
served several instances of incorrect referencing.
This underscores the inherent challenges in utilizing LLMs for evaluative tasks in practical settings,
where reference information may be fraught with
inaccuracies and confusion. Therefore, it is imperative to delve into the LLM evaluator’s response to
such ambiguous or misleading references.
As shown in Figure 7, we highlight a discrepancy
where the correct answer should be _India_ but the
provided reference erroneously states _Japan_ . Notably, the GPT-4 (OpenAI, 2023) evaluator failed to
detect this error, evaluating based on the incorrect
reference. Conversely, when posed the question
directly, GPT-4 (OpenAI, 2023) accurately generated _India_ as the correct answer, demonstrating
its profound knowledge to the question. This dichotomy suggests that while LLMs can provide
accurate responses independently, their evaluative
capabilities can be compromised when presented
with incorrect references or ambiguous materials.
This raises concerns regarding the reliability of using LLMs as evaluators in large-scale applications
and implies that the alignment process of LLMs
may inadvertently reduce the model’s ability to
rectify user errors.
**A.4.2** **Reference-free Judgements**
During the evaluation, we noted that certain LLMs
are prone to producing extensive details pertinent
to the query posed. This complexity poses a significant challenge for the LLM evaluator, particularly
when attempting to ascertain the accuracy of the
information provided in scenarios where reference
materials are scarce or incomplete.
As shown in Figure 8, The provided information( _more than 30_ ) regarding the number of countries participating( _in fact 13)_ is inaccurate. In the
absence of corresponding reference information,
the LLM evaluator was unable to assess the factual
correctness of the response accurately. This underscores a discernible weakness in the evaluation
capabilities of LLMs when operating in settings
devoid of sufficient reference material, resulting
in suboptimal performance in discerning factual
inaccuracies.
Addressing this issue may necessitate the integration of an autonomous factual verification tool,
supported by a robust and dynamically updated
information database. We acknowledge the complexity of this challenge and propose it as an avenue
for future research endeavors.
**Category** **Question Type** **Evaluation Dimension** **Reply Temperature**
(Fundamental Language Ability) `基本任务` (Factual and Explanatory Question) `事实与解释型问题` `事实正确性清晰` (Correctness), `度` (Clarity), `满 完足备用性户` (Completeness) `需求` (User Satisfaction), 0.1
(Advanced Chinese Understanding) `中文理解` (Factual and Explanatory Question) `事实与解释型问题` `事实正确性清晰` (Correctness), `度` (Clarity), `满 完足备用性户` (Completeness) `需求` (User Satisfaction), 0.1
(Open-ended Questions) `综合问答` (Recommendation Question) `建议型问题` `公平事与实可正负责确性程` (Correctness), `度` (Fairness and Responsibility), `满足用户需求` (User Satisfaction), `创造性` (Creativity) 0.7
(Writing Ability) `文本写作` (Generative Question) `生成型问题` `逻辑连事贯实性正` (Logical Coherence), `确性` (Correctness), `满 创足造用性户` (Creativity), `需求` (User Satisfaction), `丰富度` (Richness) 0.7
(Logical Reasoning) `逻辑推理` (Logical Reasoning Question) `逻辑推理型问题` `事逻实辑连正确贯性性` (Correctness),(Logical Coherence), `满足用户 完需备求性` (User Satisfaction),(Completeness) 0.1
(Mathematics) `数学计算` (Logical Reasoning Question) `逻辑推理型问题` `事逻实辑连正确贯性性` (Correctness),(Logical Coherence), `满足用户 完需备求性` (User Satisfaction),(Completeness) 0.1
(Task-oriented Role Play) `角色扮演` (Generative Question) `生成型问题` `逻辑连事贯实性正` (Logical Coherence), `确性` (Correctness), `满 创足造用性户` (Creativity), `需求` (User Satisfaction), `丰富度` (Richness) 0.7
(Professional Knowledge) `专业能力` (Factual and Explanatory Question) `事实与解释型问题` `事实正确性清晰` (Correctness), `度` (Clarity), `满 完足备用性户` (Completeness) `需求` (User Satisfaction), 0.1
Table 7: Judging dimensions and LLM reply generation temperatures of ALIGNBENCH on different categories.
They both help to provide better category-conditioned scoring in practice (Cf. Section 3.)
Table 8: Dimension setting for different kinds of questions.
|Question Type|Evaluation Dimension|
|---|---|
|`事实与解释型问题`<br>(Factual and Explanatory Question)|`事实正确性`(Factuality),` 满足用户需求`(User Satisfaction),<br>`清晰度`(Clarity),` 完备性`(Completeness)|
|`逻辑推理型问题`<br>(Logical Reasoning Question)|`事实正确性`(Factuality),` 满足用户需求`(User Satisfaction),<br>`逻辑连贯性`(Logical Coherence),` 完备性`(Completeness)|
|`生成型回答`<br>(Generative Question)|`事实正确性`(Factuality),` 满足用户需求`(User Satisfaction),<br>`逻辑连贯性`(Logical Coherence),` 创造性`(Creativity),` 丰富度`(Richness)|
|`建议型回答`<br>(Recommendation Question)|`事实正确性`(Factuality),` 满足用户需求`(User Satisfaction),<br>`公平与可负责程度`(Fairness and Responsibility),` 创造性`(Creativity)|
Table 9: The definition of different dimensions.
|Dimension|Definition|
|---|---|
|`事实正确性`<br>(Factuality)|`回答中提供的信息是否准确无误,是否基于可信的事实和数据。`<br>Whether the information provided in the response is accurate, based on reliable facts and data.|
|`满足用户需求`<br>(User Satisfaction)|`回答是否满足了用户提出问题的目的和需求,是否对问题进行了全面而恰当的回应。`<br>Whether the response meets the user’s question and needs, and<br>provides a comprehensive and appropriate answer to the question.|
|`逻辑连贯性`<br>(Logical Coherence)|`回答是否在整体上保持一致,是否在不同部分之间保持逻辑连贯性,避免了自相矛盾。`<br>Whether the response maintains overall consistency and<br>logical coherence between different sections, avoiding self-contradiction.|
|`丰富度`<br>(Richness)|`回答包含丰富的信息、深度、上下文考虑、多样性、`<br>`详细解释和实例,以满足用户需求并提供全面理解。`<br>Whether the response includes rich info, depth, context, diversity, detailed<br>explanations and examples to meet user needs and provide a comprehensive understanding.|
|`创造性`<br>(Creativity)|`回答是否具有创新性或独特性,是否提供了新颖的见解或解决方法。`<br>Whether the response is innovative or unique, providing novel insights or solutions.|
|`公平与可负责程度`<br>(Fairness and Responsibility)|`回答中提供的建议或信息是否可行,`<br>`是否负有一定的责任,是否考虑了潜在风险和后果。`<br>Whether the advice or information provided in the response is feasible, carries a certain<br>degree of responsibility, and considers potential risks and consequences.|
|`完备性`<br>(Completeness)|`回答是否提供了足够的信息和细节,以满足用户的需求,是否遗漏了重要的方面。`<br>Whether the response provides suffcient information and details to meet the<br>user’s needs, and whether it avoids omitting important aspects.|
|`清晰度`<br>(Clarity)|`回答是否表达清晰易懂,是否使用了简洁的语言和结构,以便用户可以轻松理解。`<br>Whether the response is clear and understandable, and whether it uses<br>concise language and structure so that the user can easily understand it.|
Figure 4: The prompt template of multi-dimension rule-calibrated method. The text in blue includes different
evaluation dimensions, the text in red is the scoring process, and the text in green is the scoring rules.
Figure 5: The prompt template for rule-calibrated grading. The text in green is the scoring rules
Figure 6: The prompt template for general grading.
Table 10: LLMs’ performance on different dimensions judged by **gpt-4-0613**, where Corr., Satis., Logic., Compl.,
Clar., Crea., Rich. and Fair., are the abbreviations corresponding to Correctness, User Satisfaction, Logical
Coherence, Completeness, Clarity, Creativity, Richness, Fairness and Responsibility.
**Model** **Corr. Satis. Logic. Compl. Clar. Crea. Rich. Fair.**
**gpt-4-1106-preview** **8.33** **8.03** **9.25** **8.40** **9.12** **8.04** **8.59** **9.58**
**gpt-4-0613** 8.00 7.33 9.21 7.85 8.97 6.97 7.41 9.03
**chatglm3-turbo** 6.85 6.52 8.88 6.69 7.99 7.19 7.79 8.89
**erniebot (** `文心一言` **)** 6.94 6.16 8.76 6.20 7.69 6.71 7.15 8.11
**gpt-3.5-turbo-0613** 6.94 6.11 8.80 6.12 7.92 6.50 6.89 8.51
**chatglm2-pro (** `智谱清言` **)** 6.58 6.02 8.73 6.17 7.80 6.62 7.07 8.41
**spark_desk_v2 (** `讯飞星火` **)** 6.60 5.76 8.40 5.68 7.56 6.46 6.74 8.27
**Qwen-14B-Chat** 6.72 5.77 8.73 5.77 7.63 5.94 6.27 7.73
**Baichuan2-13B-Chat** 6.16 5.48 8.43 5.30 7.24 6.27 6.68 8.35
**internlm-20b** 5.97 5.17 8.14 5.15 6.82 5.85 6.33 8.35
**Baichuan2-7B-Chat** 5.83 5.13 8.30 4.91 6.79 6.23 6.47 7.92
**chatglm3-6b-openai** 5.74 5.12 8.21 5.09 6.77 5.93 6.35 7.59
**Qwen-7B-Chat** 5.96 4.93 8.48 4.90 6.75 5.58 5.70 7.30
**chatglm2-6b** 5.13 4.54 7.94 4.46 6.60 5.66 5.79 7.27
**internlm-chat-7b** 4.79 3.51 6.94 3.00 5.82 4.75 4.77 7.35
**Chinese-LLaMA-2-7B-Chat** 4.47 3.49 6.83 3.46 5.22 4.56 4.64 5.16
**LLaMa-2-13B-Chinese-Chat** 4.39 3.29 6.77 3.31 4.84 4.17 3.98 5.62
Figure 7: Misleading Case. The text in red is a Wrong
Reference.
Figure 8: Reference-free Case. The text in red is an
Insufficient Reference.
Query `对于正整数` _n_ `,计算积分` �01 _[x][n][dx]_
_Mathematics_ For a positive integer _n_, calculate the integral �01 _[x][n][dx]_
Reference `对于给定的积分` **:** �01 _[x][n][dx]_ `[ 我][们][可][以][直][接][对]` _[x][n]_ `[ 进][行][不][定][积][分][,][得][到][:]` - _x_ _[n]_ _dx_ = ( _x_ [(] _[n]_ [+1)] ) _/_ ( _n_ + 1) + _C_ `其中` **C** `是常数。然后使用定积分`
Generated
Response
_**Wrong**_
_**Answer**_
General
Judge
_**Lost In**_
_**Verbosity**_
`的上下限` 0 `和` 1 `来计算这个积分的值:` �01 _[x][n][dx]_ [ = (] _[x]_ [(] _[n]_ [+1)] _[/]_ [(] _[n]_ [ + 1))] _[|]_ 0 [1] [= (1][(] _[n]_ [+1)][)] _[/]_ [(] _[n]_ [ + 1)] _[ −]_ [(0][(] _[n]_ [+1)][)] _[/]_ [(] _[n]_ [ + 1) = 1] _[/]_ [(] _[n]_ [ + 1)] _[ −]_ [0 =]
1 _/_ ( _n_ + 1) `所以,` �01 _[x][n][dx]_ [ = 1] _[/]_ [(] _[n]_ [ + 1)] `[。]`
For the given integral �01 _[x][n][dx]_ [, we can directly integrate] _[ x][n]_ [ to obtain the indefinite integral:] - _x_ _[n]_ _dx_ = ( _x_ [(] _[n]_ [+1)] ) _/_ ( _n_ + 1) + _C_,
where _C_ is a constant. Then, we use the definite integral limits 0 and 1 to compute the value of this integral. �01 _[x][n][dx]_ [ = (] _[x]_ [(] _[n]_ [+1)] _[/]_ [(] _[n]_ [+]
1)) _|_ [1] 0 [= (1][(] _[n]_ [+1)][)] _[/]_ [(] _[n]_ [ + 1)] _[ −]_ [(0][(] _[n]_ [+1)][)] _[/]_ [(] _[n]_ [ + 1) = 1] _[/]_ [(] _[n]_ [ + 1)] _[ −]_ [0 = 1] _[/]_ [(] _[n]_ [ + 1)] `[ 所][以][,]` �01 _[x][n][dx]_ [ = 1] _[/]_ [(] _[n]_ [ + 1)]
`要计算这个积分,需要先确定` _n_ `的值。因为` _n_ `是整数,所以我们可以尝试计算` _n_ = 1 _,_ 2 _,_ 3 _,_ 4 _,_ 5 `等不同情况下的积分值,从而找到通解。`
`当` _n_ = 1 `时,原式变为` �01 _[xdx]_ **[,]** `[这][个][积][分][可][以][直][接][求][解][:]` �01 _[xdx]_ [ =] _[ x][|]_ 0 [1] [= 1] _[ −]_ [0 = 1] `[ 当]` **[n=2]** `[ 时][,][原][式][变][为]` �01 _[x]_ [2] _[dx]_ **[,]** `[这][个][积][分][也]`
`可以直接求解:` �01 _[x]_ [2] _[dx]_ [ = (1] _[/]_ [3)] _[x]_ [3] _[|]_ 0 [1] [= (1] _[/]_ [3)(1][3][)] _[ −]_ [(1] _[/]_ [3)(0][3][) = 1] _[/]_ [3] `[ 当]` **[n=3]** `[ 时][,][原][式][变][为]` �01 _[x]_ [3] _[dx]_ **[,]** `[这][个][积][分][也][可][以][直][接][求]`
`解:` �01 _[x]_ [3] _[dx]_ [ = (1] _[/]_ [4)] _[x]_ [4] _[|]_ 0 [1] [= (1] _[/]_ [4)(1][4][)] _[ −]_ [(1] _[/]_ [4)(0][4][) = 1] _[/]_ [4] `[ 当]` **[n=4]** `[ 时][,][原][式][变][为]` �01 _[x]_ [4] _[dx]_ **[,]** `[这][个][积][分][也][可][以][直][接][求][解][:]` �01 _[x]_ [4] _[dx]_ [ =]
(1 _/_ 5) _x_ [5] _|_ [1] 0 [= (1] _[/]_ [5)(1][5][)] _[ −]_ [(1] _[/]_ [5)(0][5][) = 1] _[/]_ [5] `[ 当]` _[n]_ [ = 5] `[ 时][,][原][式][变][为]` �01 _[x]_ [5] _[dx]_ **[,]** `[这][个][积][分][也][可][以][直][接][求][解][:]` �01 _[x]_ [5] _[dx]_ [ = (1] _[/]_ [6)] _[x]_ [6] _[|]_ 0 [1] [=]
(1 _/_ 6)(1 [6] ) _−_ (1 _/_ 6)(0 [6] ) = 1 _/_ 6
`通过这些计算,我们可以猜测当` **n=k** `时,原式可以表示为:` �01 _[x][k][dx]_ [ = (1] _[/k]_ [)] _[x]_ [(] _[k]_ [+1)] _[|]_ 0 [1] `[下][面][用][数][学][归][纳][法][证][明][这][个][猜][测][:][当]` _[n]_ [ = 1] `[ 时][,]`
`左边` **=1,** `右边` **=1,** `猜测成立。假设当` **n=k** `时猜测成立,即:` �01 _[x][k][dx]_ [ = (1] _[/k]_ [)] _[x]_ [(] _[k]_ [+1)] _[|]_ 0 [1] `[那][么][当]` _[n]_ [ =] _[ k]_ [ + 1] `[ 时][,][有][:]` �01 _[x]_ [(] _[k]_ [ + 1)] _[dx]_ [ =]
(1 _/_ ( _k_ + 1)) _x_ [(] _[k]_ [+2)] _|_ [1] 0 `[由][归][纳][假][设][可][知][,][当]` _[n]_ [ =] _[ k]_ `[ 时][猜][测][成][立][,][因][此][当]` _[n]_ [ =] _[ k]_ [ + 1] `[ 时][猜][测][也][成][立][。][因][此][,][对][于任][意][正][整数]` _[n]_ **[,]** `[原][式][可][以][表][示]`
`为:` �01 _[x][n][dx]_ [ = (1] _[/n]_ [)] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1]
To compute this integral, it is necessary first to determine the value of _n_ . Since _n_ is an integer, we can attempt to calculate the integral
values for _n_ = 1 _,_ 2 _,_ 3 _,_ 4 _,_ 5, etc., to find a general solution.
When _n_ = 1, the expression becomes �01 _[xdx]_ [, which can be directly solved:] �01 _[xdx]_ [ =] _[ x][|]_ 0 [1] [= 1] _[ −]_ [0 = 1][. When] _[ n]_ [ = 2][, it]
becomes �01 _[x]_ [2] _[dx]_ [, and this integral is also directly solvable:] �01 _[x]_ [2] _[dx]_ [ =] 31 _[x]_ [3] _[|]_ 0 [1] [=] 13 [(1][3][)] _[ −]_ 3 [1] [(0][3][) =] 31 [. When] _[ n]_ [ = 3][, it]
`助手的答案在解答过程中,虽然没有直接给出最终的公式,但是通过列举` _n_ = 1 _,_ 2 _,_ 3 _,_ 4 _,_ 5 `的情况,然后通过观察和归纳,得出了正确的公式。`
```
这种方法虽然比参考答案的直接求解方法更繁琐,但是它展示了一种通过观察和归纳得出结论的思考过程,这对于理解和学习积分的求解方法是
```
`有帮助的。然而,助手的答案在最后的公式中有一个小错误,应该是` �01 _[x][n][dx]_ [ = 1] _[/]_ [(] _[n]_ [ + 1)] `[,][而][不][是]` �01 _[x][n][dx]_ [ = (1] _[/n]_ [)] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1] `[。][因][此][,]`
`虽然助手的答案在思考过程和方法上是有价值的,但是在准确性上有所欠缺。评级:` **[[7]]**
The assistant’s response, in the process of solving, did not directly present the final formula. However, by enumerating cases where
_n_ = 1 _,_ 2 _,_ 3 _,_ 4 _,_ 5 and then through observation and induction, the correct formula was derived. Although this method is more cumbersome
than the direct solution approach of the reference answer, it demonstrates a process of reasoning through observation and induction, which
is beneficial for understanding and learning the methods of integral calculus. However, there is a minor error in the final formula presented
by the assistant. The correct expression should be �01 _[x][n][dx]_ [ =] _n_ +11 [, not] �01 _[x][n][dx]_ [ =] _n_ [1] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1][. Therefore, while the assistant’s]
response is valuable in terms of its thought process and methodology, it lacks in accuracy. Rating: [7]."
becomes �01 _[x]_ [2] _[dx]_ [, and this integral is also directly solvable:] �01 _[x]_ [2] _[dx]_ [ =] 31 _[x]_ [3] _[|]_ 0 [1] [=] 13 [(1][3][)] _[ −]_ 3 [1] [(0][3][) =] 31 [. When] _[ n]_ [ = 3][, it]
becomes �01 _[x]_ [3] _[dx]_ [, which can be solved in the same way:] �01 _[x]_ [3] _[dx]_ [ =] [1] 4 _[x]_ [4] _[|]_ 0 [1] [=] 4 [1] [(1][4][)] _[ −]_ 4 [1] [(0][4][) =] [1] 4 [. For] _[ n]_ [ = 4][, it becomes]
- 1 - 1 1 1 1 - 1
0 _[x]_ [4] _[dx]_ [, solvable as:] 0 _[x]_ [4] _[dx]_ [ =] 5 _[x]_ [5] _[|]_ 0 [1] [=] 5 [(1][5][)] _[ −]_ 5 [1] [(0][5][) =] 5 [. When] _[ n]_ [ = 5][, it becomes] 0 _[x]_ [5] _[dx]_ [, which is solved as:]
[1] 4 _[x]_ [4] _[|]_ 0 [1] [=] 4 [1]
4 [1] [(1][4][)] _[ −]_ 4 [1]
4 [1] [(0][4][) =] [1] 4
- 1 - 1 1 1 1 - 1
0 _[x]_ [4] _[dx]_ [, solvable as:] 0 _[x]_ [4] _[dx]_ [ =] 5 _[x]_ [5] _[|]_ 0 [1] [=] 5 [(1][5][)] _[ −]_ 5 [1] [(0][5][) =] 5 [. When] _[ n]_ [ = 5][, it becomes] 0 _[x]_ [5] _[dx]_ [, which is solved as:]
�01 _[x]_ [5] _[dx]_ [ =] [1] 6 _[x]_ [6] _[|]_ 0 [1] [=] [1] 6 [(1][6][)] _[ −]_ 6 [1] [(0][6][) =] [1] 6 [.]
From these calculations, we can hypothesize that for _n_ = _k_, the expression can be represented as: �01 _[x][k][dx]_ [ =] _k_ 1 _[x]_ [(] _[k]_ [+1)] _[|]_ 0 [1][. We]
now prove this hypothesis using mathematical induction: When _n_ = 1, the left side equals 1, and the right side equals 1, so the
hypothesis holds. Assume the hypothesis is true for _n_ = _k_, i.e., �01 _[x][k][dx]_ [ =] _k_ 1 _[x]_ [(] _[k]_ [+1)] _[|]_ 0 [1][. Then for] _[ n]_ [ =] _[ k]_ [ + 1][, we have]
- 1 1
0 _[x]_ [(] _[k]_ [+1)] _[dx]_ [ =] _k_ +1 _[x]_ [(] _[k]_ [+2)] _[|]_ 0 [1][. By the inductive assumption, the hypothesis is true for] _[ n]_ [ =] _[ k]_ [; therefore, it also holds for] _[ n]_ [ =] _[ k]_ [ + 1][.]
[1] 6 _[x]_ [6] _[|]_ 0 [1] [=] [1] 6
[1] 6 [(1][6][)] _[ −]_ 6 [1]
6 [1] [(0][6][) =] [1] 6
Hence, for any positive integer _n_, the expression can be represented as: �01 _[x][n][dx]_ [ =] _n_ [1] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1][."]
General
Score **7**
Multi-dimensional
Rule-calibrated
Judge(Ours)
_**Fair Due to**_
_**Multi-dimensions**_
**AI** `助手的答案在事实正确性上存在问题。虽然它试图通过计算几个特定的` _n_ `值来找到通解,并使用数学归纳法来证明其猜测,但是它的通解公式`
`是错误的。正确的公式应该是` �01 _[x][n][dx]_ [ = 1] _[/]_ [(] _[n]_ [ + 1)] `[,][而][不][是]` **[AI]** `[助][手][给][出][的]` �01 _[x][n][dx]_ [ = (1] _[/n]_ [)] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1] `[。][因][此][,][事][实][正][确][性][的][得][分][是]` [2] `[。]`
`在满足用户需求这一维度上,` **AI** `助手的答案也存在问题。用户的问题是求解积分` �01 _[x][n][dx]_ `[,][而]` **[AI]** `[助][手][的][答][案][虽][然][试][图][解][决][这][个][问][题][,][但][是][由][于]`
`其通解公式错误,所以并没有正确地解决用户的问题。因此,满足用户需求的得分是` **2** `。`
`在逻辑连贯性这一维度上,` **AI** `助手的答案表现得相对较好。它首先计算了几个特定的` _n_ `值,然后基于这些计算结果提出了一个猜测,最后使用数`
`学归纳法来证明这个猜测。虽然其猜测是错误的,但是其推理过程是连贯的。因此,逻辑连贯性的得分是` **6** `。`
`在完备性这一维度上,` **AI** `助手的答案存在问题。虽然它试图通过计算几个特定的` _n_ `值来找到通解,并使用数学归纳法来证明其猜测,但是由于其`
`通解公式错误,所以其答案并不完备。因此,完备性的得分是` **2** `。`
`综合以上各个维度的评价,` **AI** `助手的答案的综合得分是` **3** `。` **{’** `事实正确性` **’: 2, ’** `满足用户需求` **’: 2, ’** `逻辑连贯性` **’: 6, ’** `完备性` **’: 2, ’** `综合得分` **’: 3}**
The AI assistant’s response has issues in terms of factual accuracy. While it attempted to find a general solution by calculating several
specific values of _n_ and used mathematical induction to prove its conjecture, the general solution formula it provided is incorrect. The
correct formula should be �01 _[x][n][dx]_ [ =] _n_ +11 [, not the] �01 _[x][n][dx]_ [ =] _n_ [1] _[x]_ [(] _[n]_ [+1)] _[|]_ 0 [1] [as given by the AI assistant. Therefore, the score for]
factual accuracy is 2.
In terms of meeting user needs, the AI assistant’s response also has issues. The user’s query was to solve the integral �01 _[x][n][dx]_ [, and]
although the AI assistant’s response attempted to address this, it failed to correctly solve the user’s problem due to the incorrect general
solution formula. Hence, the score for meeting user needs is 2.
Regarding logical coherence, the AI assistant’s response performed relatively well. It first calculated several specific values of _n_, then
based on these calculations, it proposed a conjecture, and finally used mathematical induction to prove this conjecture. Although its
conjecture was incorrect, the reasoning process was coherent. Therefore, the score for logical coherence is 6.
In terms of completeness, the AI assistant’s response has shortcomings. Although it attempted to find a general solution by calculating
specific values of _n_ and used mathematical induction to prove its conjecture, the incorrect general solution formula means that its answer
is not complete. Thus, the score for completeness is 2.
Considering the evaluations across these dimensions, the comprehensive score for the AI assistant’s response is 3. ’Factual Accuracy’: 2,
’Meeting User Needs’: 2, ’Logical Coherence’: 6, ’Completeness’: 2, ’Overall Score’: 3."
Ours
Score **3**
Table 11: An comparison example between General Judge and Our Multi-dimensional Rule-calibrated Judge. In this
case, General Judge suffered from verbosity bias, giving a high score to a wrong but long answer. On the contrary,
our Judge balanced different factors and gave a fair rating to the wrong answer.
|